Sample records for order reliability method

  1. Universal first-order reliability concept applied to semistatic structures

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1994-01-01

    A reliability design concept was developed for semistatic structures which combines the prevailing deterministic method with the first-order reliability method. The proposed method surmounts deterministic deficiencies, providing uniformly reliable structures and improved safety audits. It supports risk analyses and reliability selection criteria. The method provides a reliability design factor, derived from the reliability criterion, which is analogous to the current safety factor for sizing structures and verifying reliability response. The universal first-order reliability method should also be applicable to the semistatic structures of air and surface vehicles.
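
    For context, the first-order machinery this line of work rests on can be stated compactly (standard notation, not taken from the abstract): for normally distributed strength $S$ and stress $s$, the safety index and reliability are

    $$
    \beta = \frac{\mu_S - \mu_s}{\sqrt{\sigma_S^2 + \sigma_s^2}},
    \qquad
    R = \Phi(\beta),
    \qquad
    P_f = \Phi(-\beta),
    $$

    where $\Phi$ is the standard normal CDF. The reliability design factor mentioned above plays the same role in sizing that the conventional safety factor plays in deterministic design.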

  2. Universal first-order reliability concept applied to semistatic structures

    NASA Astrophysics Data System (ADS)

    Verderaime, V.

    1994-07-01

    A reliability design concept was developed for semistatic structures which combines the prevailing deterministic method with the first-order reliability method. The proposed method surmounts deterministic deficiencies, providing uniformly reliable structures and improved safety audits. It supports risk analyses and reliability selection criteria. The method provides a reliability design factor, derived from the reliability criterion, which is analogous to the current safety factor for sizing structures and verifying reliability response. The universal first-order reliability method should also be applicable to the semistatic structures of air and surface vehicles.

  3. Illustrated structural application of universal first-order reliability method

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1994-01-01

    The general application of the proposed first-order reliability method was achieved through the universal normalization of engineering probability distribution data. The method superimposes prevailing deterministic techniques and practices on the first-order reliability method to surmount deficiencies of the deterministic method and provide the benefits of reliability techniques and predictions. A reliability design factor is derived from the reliability criterion to satisfy a specified reliability and is analogous to the deterministic safety factor. Its application is numerically illustrated on several practical structural design and verification cases with interesting results and insights. Two concepts of reliability selection criteria are suggested. Though the method was developed to support affordable structures for access to space, it should also be applicable to most high-performance air and surface transportation systems.

  4. An accurate and efficient reliability-based design optimization using the second order reliability method and improved stability transformation method

    NASA Astrophysics Data System (ADS)

    Meng, Zeng; Yang, Dixiong; Zhou, Huanlin; Yu, Bo

    2018-05-01

    The first order reliability method has been extensively adopted for reliability-based design optimization (RBDO), but it shows inaccuracy in calculating the failure probability with highly nonlinear performance functions. Thus, the second order reliability method is required to evaluate the reliability accurately. However, its application to RBDO is quite challenging owing to the expensive computational cost incurred by the repeated reliability evaluation and Hessian calculation of probabilistic constraints. In this article, a new improved stability transformation method is proposed to search for the most probable point efficiently, and the Hessian matrix is calculated by the symmetric rank-one update. The computational capability of the proposed method is illustrated and compared to existing RBDO approaches through three mathematical and two engineering examples. The comparison results indicate that the proposed method is very efficient and accurate, providing an alternative tool for RBDO of engineering structures.
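
    A sketch of the symmetric rank-one (SR1) update named above, in Python; the safeguard and variable names are generic numerical-optimization conventions, not details taken from the paper:

    ```python
    import numpy as np

    def sr1_update(H, s, y, tol=1e-8):
        """Symmetric rank-one update of a Hessian approximation H.

        s: step between successive iterates, x_new - x_old.
        y: change in gradients, grad_new - grad_old.
        The update is skipped when the denominator is near zero, the
        usual SR1 safeguard against numerical blow-up.
        """
        r = y - H @ s
        denom = r @ s
        if abs(denom) > tol * np.linalg.norm(r) * np.linalg.norm(s):
            H = H + np.outer(r, r) / denom
        return H
    ```

    Reusing gradients already computed during the MPP search this way supplies the curvature SORM needs without extra evaluations of the expensive performance function.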

  5. A single-loop optimization method for reliability analysis with second order uncertainty

    NASA Astrophysics Data System (ADS)

    Xie, Shaojun; Pan, Baisong; Du, Xiaoping

    2015-08-01

    Reliability analysis may involve random variables and interval variables. In addition, some of the random variables may have interval distribution parameters owing to limited information. This kind of uncertainty is called second order uncertainty. This article develops an efficient reliability method for problems involving the three aforementioned types of uncertain input variables. The analysis produces the maximum and minimum reliability and is computationally demanding because two loops are needed: a reliability analysis loop with respect to random variables and an interval analysis loop for extreme responses with respect to interval variables. The first order reliability method and nonlinear optimization are used for the two loops, respectively. For computational efficiency, the two loops are combined into a single loop by treating the Karush-Kuhn-Tucker (KKT) optimal conditions of the interval analysis as constraints. Three examples are presented to demonstrate the proposed method.
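
    One way to write down the collapsed problem described above, as a sketch under standard box-constraint KKT reasoning rather than the authors' exact formulation: for interval variables $y_i \in [y_i^L, y_i^U]$, the inner extreme-response search is replaced by its optimality conditions,

    $$
    \min_{\mathbf{u},\,\mathbf{y}} \ \lVert \mathbf{u} \rVert
    \quad \text{s.t.} \quad
    g(\mathbf{u}, \mathbf{y}) = 0,
    \qquad
    \frac{\partial g}{\partial y_i}\,(y_i - y_i^L)(y_i^U - y_i) = 0,
    \qquad
    y_i^L \le y_i \le y_i^U,
    $$

    so each interval variable sits either at a bound or at a stationary point of the limit state, and the nested reliability/interval loops become a single constrained minimization.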

  6. Probability techniques for reliability analysis of composite materials

    NASA Technical Reports Server (NTRS)

    Wetherhold, Robert C.; Ucci, Anthony M.

    1994-01-01

    Traditional design approaches for composite materials have employed deterministic criteria for failure analysis. New approaches are required to predict the reliability of composite structures since strengths and stresses may be random variables. This report will examine and compare methods used to evaluate the reliability of composite laminae. The two types of methods that will be evaluated are fast probability integration (FPI) methods and Monte Carlo methods. In these methods, reliability is formulated as the probability that an explicit function of random variables is less than a given constant. Using failure criteria developed for composite materials, a function of design variables can be generated which defines a 'failure surface' in probability space. A number of methods are available to evaluate the integration over the probability space bounded by this surface; this integration delivers the required reliability. The methods evaluated are: first-order, second-moment FPI methods; second-order, second-moment FPI methods; simple Monte Carlo; and an advanced Monte Carlo technique which utilizes importance sampling. The methods are compared for accuracy, efficiency, and for the conservatism of the reliability estimation. The methodology involved in determining the sensitivity of the reliability estimate to the design variables (strength distributions) and importance factors is also presented.
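
    To make the comparison concrete, here is a minimal Python illustration of crude Monte Carlo against importance sampling for a failure probability; the limit state below (strength minus stress, both normal) is a made-up example, not the report's laminate criterion:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Hypothetical limit state: failure when g = R - S < 0,
    # with strength R ~ N(6, 1) and stress S ~ N(3, 1).
    g = lambda r, s: r - s

    # Crude Monte Carlo.
    r = rng.normal(6.0, 1.0, n)
    s = rng.normal(3.0, 1.0, n)
    pf_mc = np.mean(g(r, s) < 0.0)

    # Importance sampling: draw strength from a density shifted toward
    # failure (mean 4.0) and correct with the likelihood ratio N(6,1)/N(4,1).
    r_is = rng.normal(4.0, 1.0, n)
    w = np.exp(-0.5 * ((r_is - 6.0) ** 2 - (r_is - 4.0) ** 2))
    pf_is = np.mean(w * (g(r_is, s) < 0.0))

    print(pf_mc, pf_is)  # both estimate P(R < S) = Phi(-3/sqrt(2)) ~ 0.017
    ```

    With rare failures, the importance-sampling estimate typically reaches a given accuracy with far fewer samples, which is the efficiency question the report weighs against the FPI methods.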

  7. Structural Reliability Analysis and Optimization: Use of Approximations

    NASA Technical Reports Server (NTRS)

    Grandhi, Ramana V.; Wang, Liping

    1999-01-01

    This report is intended for the demonstration of function approximation concepts and their applicability in reliability analysis and design. In particular, approximations in the calculation of the safety index, failure probability and structural optimization (modification of design variables) are developed. With this scope in mind, extensive details on probability theory are avoided. Definitions relevant to the stated objectives have been taken from standard textbooks. The idea of function approximation is to minimize the repetitive use of computationally intensive calculations by replacing them with simpler closed-form equations, which could be nonlinear. Typically, the approximations provide good accuracy around the points where they are constructed, and they need to be periodically updated to extend their utility. Several approximations arise in calculating the failure probability of a limit state function. The first, and most commonly discussed, is how the limit state is approximated at the design point. Most of the time this is a first-order Taylor series expansion, also known as the First Order Reliability Method (FORM), or a second-order Taylor series expansion (paraboloid), also known as the Second Order Reliability Method (SORM). From the computational procedure point of view, this step comes after the design point identification; however, the order of approximation for the probability of failure calculation is discussed first, and it is denoted by either FORM or SORM. The other approximation of interest is how the design point, or the most probable failure point (MPP), is identified. For iteratively finding this point, again the limit state is approximated. The accuracy and efficiency of the approximations make the search process quite practical for analysis-intensive approaches such as finite element methods; therefore, the crux of this research is to develop excellent approximations for MPP identification, as well as different approximations, including the higher-order reliability methods (HORM), for representing the failure surface. This report is divided into several parts to emphasize different segments of structural reliability analysis and design. Broadly, it consists of mathematical foundations, methods and applications. Chapter 1 discusses the fundamental definitions of probability theory, which are mostly available in standard textbooks. Probability density function descriptions relevant to this work are addressed. In Chapter 2, the concept and utility of function approximation are discussed for general application in engineering analysis. Various forms of function representation and the latest developments in nonlinear adaptive approximations are presented with comparison studies. Research work accomplished in reliability analysis is presented in Chapter 3. First, the definitions of the safety index and the most probable point of failure are introduced. Efficient ways of computing the safety index with fewer iterations are emphasized. In Chapter 4, the probability of failure prediction is presented using first-order, second-order and higher-order methods. System reliability methods are discussed in Chapter 5. Chapter 6 presents optimization techniques for the modification and redistribution of structural sizes to improve structural reliability. The report also contains several appendices on probability parameters.
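
    As a concrete instance of the MPP search discussed above, a minimal Python version of the classic Hasofer-Lind/Rackwitz-Fiessler (HL-RF) iteration in standard normal space; the limit state used here is a made-up test function, not one from the report:

    ```python
    import numpy as np

    def hlrf(g, grad_g, u0, tol=1e-8, max_iter=200):
        """First-order MPP search in standard normal space (HL-RF).

        Repeats u <- [(grad . u - g(u)) / |grad|^2] * grad until the
        design point settles; beta is its distance from the origin.
        """
        u = np.asarray(u0, dtype=float)
        for _ in range(max_iter):
            grad = grad_g(u)
            u_new = (grad @ u - g(u)) / (grad @ grad) * grad
            if np.linalg.norm(u_new - u) < tol:
                u = u_new
                break
            u = u_new
        return u, np.linalg.norm(u)

    # Hypothetical limit state g(u) = 0.1*u1^2 - u2 + 3 (failure when g < 0).
    g = lambda u: 0.1 * u[0] ** 2 - u[1] + 3.0
    grad_g = lambda u: np.array([0.2 * u[0], -1.0])
    u_star, beta = hlrf(g, grad_g, np.array([1.0, 1.0]))
    print(u_star, beta)  # converges to (0, 3), so FORM gives P_f = Phi(-3)
    ```

    Each iteration linearizes the limit state at the current point, which is exactly the local-approximation idea the report develops further with higher-order failure surfaces.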

  8. On Some Methods in Safety Evaluation in Geotechnics

    NASA Astrophysics Data System (ADS)

    Puła, Wojciech; Zaskórski, Łukasz

    2015-06-01

    The paper demonstrates how reliability methods can be utilised to evaluate safety in geotechnics. Special attention is paid to so-called reliability-based design, which can play a useful and complementary role to Eurocode 7. In the first part, a brief review of first- and second-order reliability methods is given. Next, two examples of reliability-based design are demonstrated. The first is focussed on bearing capacity calculation and is dedicated to comparison with EC7 requirements. The second analyses a rigid pile subjected to lateral load and is oriented towards the working stress design method. In the second part, applications of random fields to safety evaluations in geotechnics are addressed. After a short review of the theory, a random finite element algorithm for the reliability-based design of a shallow strip foundation is given. Finally, two illustrative examples for cohesive and cohesionless soils are demonstrated.

  9. First Order Reliability Application and Verification Methods for Semistatic Structures

    NASA Technical Reports Server (NTRS)

    Verderaime, Vincent

    1994-01-01

    Escalating risks of aerostructures stimulated by increasing size, complexity, and cost should no longer be ignored by conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments, its stress audits are shown to be arbitrary and incomplete, and it compromises the performance of high-strength materials. A reliability method is proposed which combines first-order reliability principles with deterministic design variables and conventional test techniques to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety index expression. The application is reduced to solving for a factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and the pace of semistatic structural design.

  10. Optimization Based Efficiencies in First Order Reliability Analysis

    NASA Technical Reports Server (NTRS)

    Peck, Jeffrey A.; Mahadevan, Sankaran

    2003-01-01

    This paper develops a method for updating the gradient vector of the limit state function in reliability analysis using Broyden's rank-one updating technique. In problems that use a commercial code as a black box, the gradient calculations are usually done using a finite difference approach, which becomes very expensive for large system models. The proposed method replaces the finite difference gradient calculations in a standard first order reliability method (FORM) with Broyden's quasi-Newton technique. The resulting algorithm of Broyden updates within a FORM framework (BFORM) is used to run several example problems, and the results are compared to standard FORM results. It is found that BFORM typically requires fewer function evaluations than FORM to converge to the same answer.
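
    The rank-one gradient update at the heart of BFORM is short enough to show directly; this Python sketch illustrates Broyden's secant update for a scalar limit state, with generic names rather than the paper's implementation:

    ```python
    import numpy as np

    def broyden_gradient_update(grad, dx, dg):
        """Rank-one (secant) update of the gradient of a scalar limit state.

        dx: step in the input space, x_new - x_old.
        dg: observed change in the limit state, g(x_new) - g(x_old).
        The update enforces the secant condition grad_new . dx = dg.
        """
        return grad + (dg - grad @ dx) / (dx @ dx) * dx
    ```

    Each FORM iteration then costs a single evaluation of the expensive model (for `dg`) instead of one finite-difference evaluation per random variable, which is where the reported savings come from.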

  11. Bayesian methods in reliability

    NASA Astrophysics Data System (ADS)

    Sander, P.; Badoux, R.

    1991-11-01

    The present proceedings from a course on Bayesian methods in reliability encompasses Bayesian statistical methods and their computational implementation, models for analyzing censored data from nonrepairable systems, the traits of repairable systems and growth models, the use of expert judgment, and a review of the problem of forecasting software reliability. Specific issues addressed include the use of Bayesian methods to estimate the leak rate of a gas pipeline, approximate analyses under great prior uncertainty, reliability estimation techniques, and a nonhomogeneous Poisson process. Also addressed are the calibration sets and seed variables of expert judgment systems for risk assessment, experimental illustrations of the use of expert judgment for reliability testing, and analyses of the predictive quality of software-reliability growth models such as the Weibull order statistics.

  12. Mechanical system reliability for long life space systems

    NASA Technical Reports Server (NTRS)

    Kowal, Michael T.

    1994-01-01

    The creation of a compendium of mechanical limit states was undertaken in order to provide a reference base for the application of first-order reliability methods to mechanical systems in the context of the development of a system level design methodology. The compendium was conceived as a reference source specific to the problem of developing the noted design methodology, and not an exhaustive or exclusive compilation of mechanical limit states. The compendium is not intended to be a handbook of mechanical limit states for general use. The compendium provides a diverse set of limit-state relationships for use in demonstrating the application of probabilistic reliability methods to mechanical systems. The compendium is to be used in the reliability analysis of moderately complex mechanical systems.

  13. Issues in benchmarking human reliability analysis methods: a literature review.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lois, Erasmia; Forester, John Alan; Tran, Tuan Q.

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessment (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  14. Issues in Benchmarking Human Reliability Analysis Methods: A Literature Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronald L. Boring; Stacey M. L. Hendrickson; John A. Forester

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessments (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study comparing and evaluating HRA methods in assessing operator performance in simulator experiments is currently underway. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  15. Construction of Response Surface with Higher Order Continuity and Its Application to Reliability Engineering

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, T.; Romero, V. J.

    2002-01-01

    The usefulness of piecewise polynomials with C1 and C2 derivative continuity for response surface construction is examined. A Moving Least Squares (MLS) method is developed and compared with four other interpolation methods, including kriging. First, the selected methods are applied and compared with one another on a two-design-variable problem with a known theoretical response function. Next, the methods are tested on a four-design-variable problem from a reliability-based design application. In general, the piecewise polynomial methods with higher-order derivative continuity produce less error in the response prediction. The MLS method was found to be superior for response surface construction among the methods evaluated.
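
    A minimal sketch of a moving least squares fit of the kind compared above (1-D, local quadratic basis, Gaussian weight); the basis, weight function, and bandwidth are generic choices, not the paper's:

    ```python
    import numpy as np

    def mls_predict(x, xs, ys, h=0.25):
        """Moving least squares prediction at a point x.

        Fits a local quadratic by weighted least squares, with Gaussian
        weights centred at x, and evaluates the local fit at x itself.
        """
        sw = np.sqrt(np.exp(-(((xs - x) / h) ** 2)))  # sqrt of weights
        A = np.vstack([np.ones_like(xs), xs - x, (xs - x) ** 2]).T
        coef, *_ = np.linalg.lstsq(A * sw[:, None], ys * sw, rcond=None)
        return coef[0]  # value of the local fit at xs - x = 0

    xs = np.linspace(0.0, 1.0, 15)       # hypothetical design points
    ys = np.sin(2.0 * np.pi * xs)        # hypothetical responses
    print(mls_predict(0.37, xs, ys))     # ~ sin(2*pi*0.37) ~ 0.73
    ```

    Because the Gaussian weights vary smoothly with the evaluation point, the assembled surface inherits the higher-order continuity that the paper identifies as beneficial for reliability computations.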

  16. A study on reliability of power customer in distribution network

    NASA Astrophysics Data System (ADS)

    Liu, Liyuan; Ouyang, Sen; Chen, Danling; Ma, Shaohua; Wang, Xin

    2017-05-01

    The existing power supply reliability index system is oriented to the power system without considering the actual electricity availability on the customer side. In addition, it is unable to reflect outages or customer equipment shutdowns caused by instantaneous interruptions and power quality problems. This paper thus makes a systematic study of the reliability of power customers. By comparison with power supply reliability, the reliability of power customers is defined and its evaluation requirements are extracted. An index system, consisting of seven customer indexes and two contrast indexes, is designed to describe the reliability of power customers in terms of continuity and availability. In order to comprehensively and quantitatively evaluate the reliability of power customers in distribution networks, a reliability evaluation method is proposed based on an improved entropy method and the punishment weighting principle. Practical application has proved that the reliability index system and evaluation method for power customers are reasonable and effective.
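
    The entropy-weighting step underlying such evaluations can be sketched briefly; this Python fragment shows the standard entropy method for objective index weights (the paper's improvement and its punishment weighting are not reproduced here), with a made-up index matrix:

    ```python
    import numpy as np

    def entropy_weights(X):
        """Objective weights for the columns of an index matrix X.

        X has one row per evaluated object and one column per index,
        with positive entries oriented so that larger values are better.
        """
        P = X / X.sum(axis=0)                    # column-wise proportions
        n = X.shape[0]
        with np.errstate(divide="ignore", invalid="ignore"):
            plogp = np.where(P > 0, P * np.log(P), 0.0)
        e = -plogp.sum(axis=0) / np.log(n)       # entropy of each index
        d = 1.0 - e                              # divergence degree
        return d / d.sum()                       # normalized weights

    X = np.array([[0.92, 3.1, 0.88],
                  [0.85, 2.4, 0.95],
                  [0.97, 4.0, 0.91]])            # hypothetical index data
    print(entropy_weights(X))
    ```

    Indexes whose values vary more across the evaluated objects carry more information (lower entropy) and therefore receive larger weights.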

  17. First-order reliability application and verification methods for semistatic structures

    NASA Astrophysics Data System (ADS)

    Verderaime, V.

    1994-11-01

    Escalating risks of aerostructures stimulated by increasing size, complexity, and cost should no longer be ignored in conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments; stress audits are shown to be arbitrary and incomplete, and the concept compromises the performance of high-strength materials. A reliability method is proposed that combines first-order reliability principles with deterministic design variables and conventional test techniques to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety-index expression. The application is reduced to solving for a design factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this design factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and the development of semistatic structural designs.

  18. Interrater Reliability in Large-Scale Assessments--Can Teachers Score National Tests Reliably without External Controls?

    ERIC Educational Resources Information Center

    Pantzare, Anna Lind

    2015-01-01

    In most large-scale assessment systems a set of rather expensive external quality controls are implemented in order to guarantee the quality of interrater reliability. This study empirically examines if teachers' ratings of national tests in mathematics can be reliable without using monitoring, training, or other methods of external quality…

  19. Economic valuation by the method of paired comparison, with emphasis on evaluation of the transitivity axiom

    Treesearch

    George L. Peterson; Thomas C. Brown

    1998-01-01

    The paired comparison (PC) method is used to investigate reliability, transitivity, and decision time for binary choices among goods and sums of money. The PC method reveals inconsistent choices and yields an individual preference order over the set of items being compared. The data reported support the transitivity assumption and demonstrate high reliability for...

  20. Sarma-based key-group method for rock slope reliability analyses

    NASA Astrophysics Data System (ADS)

    Yarahmadi Bafghi, A. R.; Verdel, T.

    2005-08-01

    The methods used in conducting static stability analyses have remained pertinent to this day for reasons of both simplicity and speed of execution. The most well-known of these methods for purposes of stability analysis of fractured rock masses is the key-block method (KBM). This paper proposes an extension to the KBM, called the key-group method (KGM), which combines not only individual key-blocks but also groups of collapsible blocks into an iterative and progressive analysis of the stability of discontinuous rock slopes. To take intra-group forces into account, the Sarma method has been implemented within the KGM in order to generate a Sarma-based KGM, abbreviated SKGM. We will discuss herein the hypothesis behind this new method, details regarding its implementation, and validation through comparison with results obtained from the distinct element method. Furthermore, as an alternative to deterministic methods, reliability analyses or probabilistic analyses have been proposed to take account of the uncertainty in analytical parameters and models. The FOSM and ASM probabilistic methods could be implemented within the KGM and SKGM framework in order to take account of the uncertainty due to physical and mechanical data (density, cohesion and angle of friction). We will then show how such reliability analyses can be introduced into SKGM to give rise to the probabilistic SKGM (PSKGM) and how it can be used for rock slope reliability analyses.

  1. Structural Optimization for Reliability Using Nonlinear Goal Programming

    NASA Technical Reports Server (NTRS)

    El-Sayed, Mohamed E.

    1999-01-01

    This report details the development of a reliability-based multi-objective design tool for solving structural optimization problems. Based on two different optimization techniques, namely sequential unconstrained minimization and nonlinear goal programming, the developed design method has the capability to take into account the effects of variability on the proposed design through a user-specified reliability design criterion. In its sequential unconstrained minimization mode, the developed design tool uses a composite objective function, in conjunction with weight-ordered design objectives, in order to take into account conflicting and multiple design criteria. Multiple design criteria of interest include structural weight, load-induced stress and deflection, and mechanical reliability. The nonlinear goal programming mode, on the other hand, provides a design method that eliminates the difficulty of having to define an objective function and constraints, while at the same time having the capability of handling rank-ordered design objectives or goals. For simulation purposes, the design of a pressure vessel cover plate was undertaken as a test bed for the newly developed design tool. The formulation of this structural optimization problem in sequential unconstrained minimization and goal programming form is presented. The resulting optimization problem was solved using: (i) the linear extended interior penalty function method; and (ii) Powell's conjugate directions method. Both single- and multi-objective numerical test cases are included, demonstrating the design tool's capabilities as applied to this design problem.

  2. Structural reliability calculation method based on the dual neural network and direct integration method.

    PubMed

    Li, Haibin; He, Yun; Nie, Xiaobo

    2018-01-01

    Structural reliability analysis under uncertainty has received wide attention from engineers and scholars because it reflects the structural characteristics and the actual bearing conditions. The direct integration method, which starts from the definition of reliability theory, is easy to understand, but mathematical difficulties remain in the calculation of the multiple integrals. Therefore, a dual neural network method is proposed for calculating multiple integrals in this paper. The dual neural network consists of two neural networks: neural network A is used to learn the integrand function, and neural network B is used to simulate the original (antiderivative) function. According to the derivative relationship between the network output and the network input, neural network B is derived from neural network A. On this basis, a normalized performance function is employed in the proposed method to overcome the difficulty of multiple integration and to improve the accuracy of reliability calculations. Comparisons between the proposed method and the Monte Carlo simulation method, the Hasofer-Lind method, and the mean value first-order second moment method demonstrate that the proposed method is an efficient and accurate method for structural reliability problems.
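
    A minimal PyTorch sketch of the dual-network idea: train a network B so that its input derivative matches the integrand, then read definite integrals off B directly. Everything here (architecture, integrand, training schedule) is illustrative, and for brevity the learned network A of the paper is replaced by the integrand itself:

    ```python
    import torch

    # Network B plays the role of an antiderivative of the integrand f.
    B = torch.nn.Sequential(
        torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
    )
    f = lambda x: torch.exp(-x ** 2)          # hypothetical integrand

    opt = torch.optim.Adam(B.parameters(), lr=1e-2)
    for _ in range(2000):
        x = torch.rand(256, 1) * 4.0 - 2.0    # sample the interval [-2, 2]
        x.requires_grad_(True)
        dB = torch.autograd.grad(B(x).sum(), x, create_graph=True)[0]
        loss = torch.mean((dB - f(x)) ** 2)   # force B' to match f
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Definite integral from the learned antiderivative: B(b) - B(a).
    a, b = torch.tensor([[-1.0]]), torch.tensor([[1.0]])
    print((B(b) - B(a)).item())               # ~ 1.49 (sqrt(pi) * erf(1))
    ```

    For reliability work, replacing repeated quadrature over the failure region with evaluations of a trained primitive network is what removes the computational bottleneck of direct integration.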

  3. Probabilistic durability assessment of concrete structures in marine environments: Reliability and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Yu, Bo; Ning, Chao-lie; Li, Bing

    2017-03-01

    A probabilistic framework for durability assessment of concrete structures in marine environments was proposed in terms of reliability and sensitivity analysis, which takes into account the uncertainties under the environmental, material, structural and executional conditions. A time-dependent probabilistic model of chloride ingress was established first to consider the variations in various governing parameters, such as the chloride concentration, chloride diffusion coefficient, and age factor. Then the Nataf transformation was adopted to transform the non-normal random variables from the original physical space into the independent standard Normal space. After that the durability limit state function and its gradient vector with respect to the original physical parameters were derived analytically, based on which the first-order reliability method was adopted to analyze the time-dependent reliability and parametric sensitivity of concrete structures in marine environments. The accuracy of the proposed method was verified by comparing with the second-order reliability method and the Monte Carlo simulation. Finally, the influences of environmental conditions, material properties, structural parameters and execution conditions on the time-dependent reliability of concrete structures in marine environments were also investigated. The proposed probabilistic framework can be implemented in the decision-making algorithm for the maintenance and repair of deteriorating concrete structures in marine environments.
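
    For orientation, chloride ingress models in this setting are typically of Fick's-second-law form, with the age factor entering the diffusion coefficient (a generic statement, not necessarily the authors' exact model):

    $$
    C(x, t) = C_s \left[ 1 - \operatorname{erf}\!\left( \frac{x}{2\sqrt{D(t)\,t}} \right) \right],
    \qquad
    D(t) = D_{\mathrm{ref}} \left( \frac{t_{\mathrm{ref}}}{t} \right)^{\! n},
    $$

    where $C_s$ is the surface chloride concentration, $x$ the cover depth, $D$ the chloride diffusion coefficient, and $n$ the age factor; the durability limit state is reached when $C$ at the reinforcement exceeds a critical threshold.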

  4. Approximating genomic reliabilities for national genomic evaluation

    USDA-ARS?s Scientific Manuscript database

    With the introduction of standard methods for approximating effective daughter/data contribution by Interbull in 2001, conventional EDC or reliabilities contributed by daughter phenotypes are directly comparable across countries and used in routine conventional evaluations. In order to make publishe...

  5. A novel evaluation method for building construction project based on integrated information entropy with reliability theory.

    PubMed

    Bai, Xiao-ping; Zhang, Xi-wei

    2013-01-01

    Selecting construction schemes for a building engineering project is a complex multiobjective optimization decision process, in which many indexes need to be selected to find the optimum scheme. Aiming at this problem, this paper selects cost, progress, quality, and safety as the four first-order evaluation indexes, uses a quantitative method for the cost index, uses integrated qualitative and quantitative methodologies for the progress, quality, and safety indexes, and integrates engineering economics, reliability theory, and information entropy theory to present a new evaluation method for building construction projects. Combined with a practical case, this paper also presents the detailed computing process and steps, including selecting all order indexes, establishing the index matrix, computing score values of all order indexes, computing the synthesis score, sorting all selected schemes, and making the analysis and decision. The presented method can offer valuable references for risk computation in building construction projects.

  6. Reliability of Hypernasality Rating: Comparison of 3 Different Methods for Perceptual Assessment.

    PubMed

    Yamashita, Renata Paciello; Borg, Elisabet; Granqvist, Svante; Lohmander, Anette

    2018-01-01

    To compare reliability in auditory-perceptual assessment of hypernasality for 3 different methods and to explore the influence of language background. Comparative methodological study. Participants and Materials: Audio recordings of 5-year-old Swedish-speaking children with repaired cleft lip and palate consisting of 73 stimuli of 9 nonnasal single-word strings in 3 different randomized orders. Four experienced speech-language pathologists (2 native speakers of Brazilian-Portuguese and 2 native speakers of Swedish) participated as listeners. After individual training, each listener performed the hypernasality rating task. Each order of stimuli was analyzed individually using the 2-step, VISOR and Borg centiMax scale methods. Comparison of intra- and inter-rater reliability, and consistency for each method within language of the listener and between listener languages (Swedish and Brazilian-Portuguese). Good to excellent intra-rater reliability was found within each listener for all methods, 2-step: κ = 0.59-0.93; VISOR: intraclass correlation coefficient (ICC) = 0.80-0.99; Borg centiMax (cM) scale: ICC = 0.80-1.00. The highest inter-rater reliability was demonstrated for VISOR (ICC = 0.60-0.90) and Borg cM-scale (ICC = 0.40-0.80). High consistency within each method was found with the highest for the Borg cM scale (ICC = 0.89-0.91). There was a significant difference in the ratings between the Swedish and the Brazilian listeners for all methods. The category-ratio scale Borg cM was considered most reliable in the assessment of hypernasality. Language background of Brazilian-Portuguese listeners influenced the perceptual ratings of hypernasality in Swedish speech samples, despite their experience in perceptual assessment of cleft palate speech disorders.

  7. Analytical Algorithms to Quantify the Uncertainty in Remaining Useful Life Prediction

    NASA Technical Reports Server (NTRS)

    Sankararaman, Shankar; Saxena, Abhinav; Daigle, Matthew; Goebel, Kai

    2013-01-01

    This paper investigates the use of analytical algorithms to quantify the uncertainty in the remaining useful life (RUL) estimate of components used in aerospace applications. The prediction of RUL is affected by several sources of uncertainty and it is important to systematically quantify their combined effect by computing the uncertainty in the RUL prediction in order to aid risk assessment, risk mitigation, and decision-making. While sampling-based algorithms have been conventionally used for quantifying the uncertainty in RUL, analytical algorithms are computationally cheaper and, sometimes, are better suited for online decision-making. While exact analytical algorithms are available only for certain special cases (e.g., linear models with Gaussian variables), effective approximations can be made using the first-order second moment method (FOSM), the first-order reliability method (FORM), and the inverse first-order reliability method (Inverse FORM). These methods can be used not only to calculate the entire probability distribution of RUL but also to obtain probability bounds on RUL. This paper explains these three methods in detail and illustrates them using the state-space model of a lithium-ion battery.
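
    The first-order second moment approximation named above can be stated in two lines for a generic response $g(\mathbf{X})$ with independent inputs:

    $$
    \mu_g \approx g(\boldsymbol{\mu}),
    \qquad
    \sigma_g^2 \approx \sum_i \left( \frac{\partial g}{\partial X_i} \bigg|_{\boldsymbol{\mu}} \right)^{\!2} \sigma_i^2,
    $$

    giving cheap mean and variance estimates of the RUL; FORM and inverse FORM refine this by locating the most probable point in standard normal space rather than linearizing at the mean.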

  8. Comprehensive reliability allocation method for CNC lathes based on cubic transformed functions of failure mode and effects analysis

    NASA Astrophysics Data System (ADS)

    Yang, Zhou; Zhu, Yunpeng; Ren, Hongrui; Zhang, Yimin

    2015-03-01

    Reliability allocation of computerized numerical control (CNC) lathes is very important in industry. Traditional allocation methods only focus on high-failure-rate components rather than moderate-failure-rate components, which is not applicable in some conditions. Aiming at the problem of reliability allocation for CNC lathes, a comprehensive reliability allocation method based on cubic transformed functions of failure mode and effects analysis (FMEA) is presented. Firstly, conventional reliability allocation methods are introduced. Then the limitations of directly combining the comprehensive allocation method with the exponential transformed FMEA method are investigated. Subsequently, a cubic transformed function is established in order to overcome these limitations. Properties of the new transformed function are discussed by considering the failure severity and the failure occurrence. Designers can choose appropriate transform amplitudes according to their requirements. Finally, a CNC lathe and a spindle system are used as examples to verify the new allocation method. Seven criteria are considered to compare the results of the new method with traditional methods. The allocation results indicate that the new method is more flexible than traditional methods. By employing the new cubic transformed function, the method covers a wider range of problems in CNC reliability allocation without losing the advantages of traditional methods.

  9. Microcircuit Reliability Bibliography. Volume 3. 1975 Annual Reference Supplement (Document Numbers 10417-11044)

    DTIC Science & Technology

    1975-04-01

    to 2-16; Category 3: Fabrication Methods and Techniques, 3-01 to 3-21; Category 4: Reliability Studies, 4-01 to 4-15; Category 5: Computerized Analysis ... RAC Microcircuit Thesaurus. The terms are arranged in alphabetical order with sub-term descriptions following each main term. Cross-referencing is ... Reliability aspects of microcircuit manufacture. 4. Reliability Studies: technical reports relating to formal reliability studies and investigations

  10. Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang

    2017-07-01

    In order to rectify the problems that the component reliability model exhibits deviation and that the evaluation result is low because failure propagation is overlooked in the traditional reliability evaluation of machine center components, a new reliability evaluation method based on cascading failure analysis and failure influence degree assessment is proposed. A directed graph model of cascading failure among components is established according to cascading failure mechanism analysis and graph theory. The failure influence degrees of the system components are assessed by the adjacency matrix and its transposition, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center, and shows the following: 1) the reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; 2) the difference between the comprehensive and inherent reliability of a system component is positively correlated with its failure influence degree, which provides a theoretical basis for reliability allocation of machine center systems.
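
    A small Python sketch of the PageRank scoring step mentioned above; the propagation graph is hypothetical, and whether the adjacency matrix or its transpose is ranked follows the paper's procedure rather than this illustration:

    ```python
    import numpy as np

    def pagerank(A, d=0.85, tol=1e-10):
        """Power-iteration PageRank over a directed adjacency matrix A,
        where A[i, j] = 1 if failure of component i propagates to j."""
        n = A.shape[0]
        deg = A.sum(axis=1, keepdims=True)
        # Row-normalize; rows with no outgoing edges become uniform.
        M = np.where(deg > 0, A / np.maximum(deg, 1.0), 1.0 / n)
        r = np.full(n, 1.0 / n)
        while True:
            r_new = (1.0 - d) / n + d * (M.T @ r)
            if np.abs(r_new - r).sum() < tol:
                return r_new
            r = r_new

    A = np.array([[0, 1, 1, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1],
                  [0, 0, 0, 0]], dtype=float)   # hypothetical cascades
    print(pagerank(A))                          # stationary influence scores
    ```

    The stationary scores rank how strongly each component participates in cascades, which is the "failure influence degree" used to separate inherent from propagated unreliability.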

  11. Developing Confidence Limits For Reliability Of Software

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J.

    1991-01-01

    Technique developed for estimating reliability of software by use of Moranda geometric de-eutrophication model. Pivotal method enables straightforward construction of exact bounds with associated degree of statistical confidence about reliability of software. Confidence limits thus derived provide precise means of assessing quality of software. Limits take into account number of bugs found while testing and effects of sampling variation associated with random order of discovering bugs.
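
    As standard references give it (stated here from memory, so treat it as an assumption rather than the article's exact notation), the Moranda geometric de-eutrophication model takes the program's hazard rate between the $(i-1)$th and $i$th discovered faults to be

    $$
    \lambda_i = D\,k^{\,i-1}, \qquad 0 < k < 1,
    $$

    so each repair shrinks the failure rate by the constant factor $k$; the pivotal quantities behind the exact confidence limits are built from the resulting inter-failure-time distribution.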

  12. Reliability Analysis and Reliability-Based Design Optimization of Circular Composite Cylinders Under Axial Compression

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    2001-01-01

    This report describes the preliminary results of an investigation on component reliability analysis and reliability-based design optimization of thin-walled circular composite cylinders with average diameter and average length of 15 inches. Structural reliability is based on axial buckling strength of the cylinder. Both Monte Carlo simulation and First Order Reliability Method are considered for reliability analysis with the latter incorporated into the reliability-based structural optimization problem. To improve the efficiency of reliability sensitivity analysis and design optimization solution, the buckling strength of the cylinder is estimated using a second-order response surface model. The sensitivity of the reliability index with respect to the mean and standard deviation of each random variable is calculated and compared. The reliability index is found to be extremely sensitive to the applied load and elastic modulus of the material in the fiber direction. The cylinder diameter was found to have the third highest impact on the reliability index. Also the uncertainty in the applied load, captured by examining different values for its coefficient of variation, is found to have a large influence on cylinder reliability. The optimization problem for minimum weight is solved subject to a design constraint on element reliability index. The methodology, solution procedure and optimization results are included in this report.

  13. Overview of RICOR's reliability theoretical analysis, accelerated life demonstration test results and verification by field data

    NASA Astrophysics Data System (ADS)

    Vainshtein, Igor; Baruch, Shlomi; Regev, Itai; Segal, Victor; Filis, Avishai; Riabzev, Sergey

    2018-05-01

    The growing demand for EO applications that work around the clock, 24 hours a day and 7 days a week, such as in border surveillance systems, emphasizes the need for a highly reliable cryocooler with increased operational availability and optimized system Integrated Logistic Support (ILS). In order to meet this need, RICOR developed linear and rotary cryocoolers which successfully achieved this goal. Cryocooler MTTF was analyzed by theoretical reliability evaluation methods, demonstrated by normal and accelerated life tests at the cryocooler level, and finally verified by field data analysis derived from cryocoolers operating at the system level. The following paper reviews theoretical reliability analysis methods together with reliability test results derived from standard and accelerated life demonstration tests performed at RICOR's advanced reliability laboratory. As a summary of the work process, reliability verification data will be presented as feedback from fielded systems.

  14. Direct nuclear reaction experiments for stellar nucleosynthesis

    NASA Astrophysics Data System (ADS)

    Cherubini, S.

    2017-09-01

    During the last two decades, indirect methods were proposed and used in many experiments in order to measure nuclear cross sections between charged particles at stellar energies. These are among the lowest to be measured in nuclear physics. One of these methods, the Trojan Horse method, is based on the quasi-free reaction mechanism and has proved to be particularly flexible and reliable. It has allowed the measurement of the cross sections of various reactions of astrophysical interest using stable beams. The use and reliability of indirect methods become even more important when reactions induced by Radioactive Ion Beams are considered, given the much lower intensity generally available for these beams. The first Trojan Horse measurement of a process involving the use of a Radioactive Ion Beam dealt with the ^{18}F(p,α)^{15}O process under nova conditions. To obtain information on this process, in particular its cross section at nova energies, the Trojan Horse method was applied to the ^{18}F(d,α^{15}O)n three-body reaction. In order to establish the reliability of the Trojan Horse approach, the Treiman-Yang criterion is an important test, and it will be addressed briefly in this paper.

  15. An Investment Level Decision Method to Secure Long-term Reliability

    NASA Astrophysics Data System (ADS)

    Bamba, Satoshi; Yabe, Kuniaki; Seki, Tomomichi; Shibaya, Tetsuji

    The slowdown in power demand growth and in facility replacement causes aging and lower reliability in power facilities, and the aging will be followed by a rapid increase in repair and replacement when many facilities reach the end of their lifetime in the future. This paper describes a method to estimate future repair and replacement costs by applying a life-cycle cost model and renewal theory to historical data. It also describes a method to decide the optimum investment plan, which replaces facilities in the order of cost-effectiveness by setting a replacement priority formula, and the minimum investment level needed to maintain reliability. Estimation examples applied to substation facilities show that a reasonable and leveled future cash-out can maintain reliability by lowering the percentage of replacements caused by fatal failures.

  16. Mission Reliability Estimation for Repairable Robot Teams

    NASA Technical Reports Server (NTRS)

    Trebi-Ollennu, Ashitey; Dolan, John; Stancliff, Stephen

    2010-01-01

    A mission reliability estimation method has been designed to translate mission requirements into choices of robot modules in order to configure a multi-robot team to have high reliability at minimal cost. In order to build cost-effective robot teams for long-term missions, one must be able to compare alternative design paradigms in a principled way by comparing the reliability of different robot models and robot team configurations. Core modules have been created including: a probabilistic module with reliability-cost characteristics, a method for combining the characteristics of multiple modules to determine an overall reliability-cost characteristic, and a method for the generation of legitimate module combinations based on mission specifications and the selection of the best of the resulting combinations from a cost-reliability standpoint. The developed methodology can be used to predict the probability of a mission being completed, given information about the components used to build the robots, as well as information about the mission tasks. In the research for this innovation, sample robot missions were examined and compared to the performance of robot teams with different numbers of robots and different numbers of spare components. Data that a mission designer would need was factored in, such as whether it would be better to have a spare robot versus an equivalent number of spare parts, or if mission cost can be reduced while maintaining reliability using spares. This analytical model was applied to an example robot mission, examining the cost-reliability tradeoffs among different team configurations. Particularly scrutinized were teams using either redundancy (spare robots) or repairability (spare components). Using conservative estimates of the cost-reliability relationship, results show that it is possible to significantly reduce the cost of a robotic mission by using cheaper, lower-reliability components and providing spares. This suggests that the current design paradigm of building a minimal number of highly robust robots may not be the best way to design robots for extended missions.

  17. A diameter-sensitive flow entropy method for reliability consideration in water distribution system design

    NASA Astrophysics Data System (ADS)

    Liu, Haixing; Savić, Dragan; Kapelan, Zoran; Zhao, Ming; Yuan, Yixing; Zhao, Hongbin

    2014-07-01

    Flow entropy is a measure of the uniformity of pipe flows in water distribution systems. By maximizing flow entropy one can identify reliable layouts or connectivity in networks. In order to overcome the disadvantage of the common definition of flow entropy, which does not consider the impact of pipe diameter on reliability, an extended definition, termed diameter-sensitive flow entropy, is proposed. This new methodology is then assessed against other reliability methods, including Monte Carlo simulation, a pipe failure probability model, and a surrogate measure (the resilience index) integrated with water demand and pipe failure uncertainty. The reliability assessment is based on a sample of WDS designs derived from an optimization process for each of two benchmark networks. Correlation analysis is used to evaluate quantitatively the relationship between entropy and reliability. For a fair comparison, both the simple flow entropy and the new method are analyzed in the same way. The results demonstrate that the diameter-sensitive flow entropy shows a consistently much stronger correlation with the three reliability measures than simple flow entropy. Therefore, the new flow entropy method can be taken as a better surrogate measure for reliability and could potentially be integrated into the optimal design problem of WDSs. Sensitivity analysis results show that the velocity parameters used in the new flow entropy have no significant impact on the relationship between diameter-sensitive flow entropy and reliability.
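
    A simplified statement of network flow entropy for pipe flows $q_j$ (the full definition aggregates source and node-level entropies, and the diameter weighting is the paper's contribution, not reproduced here):

    $$
    S = -\sum_j \frac{q_j}{Q}\,\ln\frac{q_j}{Q}, \qquad Q = \sum_j q_j,
    $$

    so designs that spread flow more uniformly across pipes score higher and leave more alternative paths in service when a pipe fails, which is the intuition behind entropy as a reliability surrogate.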

  18. Sample size planning for composite reliability coefficients: accuracy in parameter estimation via narrow confidence intervals.

    PubMed

    Terry, Leann; Kelley, Ken

    2012-11-01

    Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software.

  19. Pneumothorax size measurements on digital chest radiographs: Intra- and inter- rater reliability.

    PubMed

    Thelle, Andreas; Gjerdevik, Miriam; Grydeland, Thomas; Skorge, Trude D; Wentzel-Larsen, Tore; Bakke, Per S

    2015-10-01

    Detailed and reliable methods may be important for discussions on the importance of pneumothorax size in clinical decision-making. Rhea's method is widely used to estimate pneumothorax size in percent from three measurement points on chest X-rays (CXRs). Choi's addendum is used for anteroposterior projections. The aim of this study was to examine the intrarater and interrater reliability of the Rhea and Choi method using digital CXRs on ward-based PACS monitors. Three physicians examined a retrospective series of 80 digital CXRs showing pneumothorax using the Rhea and Choi method, and repeated the assessment in a random order two weeks later. We used the analysis of variance technique by Eliasziw et al. to assess the intrarater and interrater reliability in altogether 480 estimations of pneumothorax size. Estimated pneumothorax sizes ranged between 5% and 100%. The intrarater reliability coefficient was 0.98 (95% one-sided lower-limit confidence interval 0.96), and the interrater reliability coefficient was 0.95 (95% one-sided lower-limit confidence interval 0.93). This study has shown that the Rhea and Choi method for calculating pneumothorax size has high intrarater and interrater reliability. These results are valid across gender, side of pneumothorax, and whether the patient is diagnosed with primary or secondary pneumothorax.

  20. Control optimization, stabilization and computer algorithms for aircraft applications

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Research related to reliable aircraft design is summarized. Topics discussed include systems reliability optimization, failure detection algorithms, analysis of nonlinear filters, design of compensators incorporating time delays, digital compensator design, estimation for systems with echoes, low-order compensator design, descent-phase controller for 4-D navigation, infinite dimensional mathematical programming problems and optimal control problems with constraints, robust compensator design, numerical methods for the Lyapunov equations, and perturbation methods in linear filtering and control.

  1. Numerical-analytic implementation of the higher-order canonical Van Vleck perturbation theory for the interpretation of medium-sized molecule vibrational spectra.

    PubMed

    Krasnoshchekov, Sergey V; Isayeva, Elena V; Stepanov, Nikolay F

    2012-04-12

    Anharmonic vibrational states of semirigid polyatomic molecules are often studied using second-order vibrational perturbation theory (VPT2). For efficient higher-order analysis, an approach based on canonical Van Vleck perturbation theory (CVPT), the Watson Hamiltonian, and operators of creation and annihilation of vibrational quanta is employed. This method allows analysis of the convergence of the perturbation theory and solves a number of theoretical problems of VPT2; e.g., it yields the anharmonic constants y(ijk) and z(ijkl), and allows the reliable evaluation of vibrational IR and Raman anharmonic intensities in the presence of resonances. Darling-Dennison and higher-order resonance coupling coefficients can be reliably evaluated as well. The method is illustrated on classic molecules: water and formaldehyde. A number of theoretical conclusions result, including the necessity of using a sextic force field at fourth order (CVPT4) and the nearly vanishing CVPT4 contributions for bending and wagging modes. The coefficients of perturbative Dunham-type Hamiltonians at high orders of CVPT are found to conform to the rules of equality at different orders, as proven analytically earlier for diatomic molecules. The method can serve as a good substitute for the more traditional VPT2.
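
    The ladder-operator machinery referred to above is the standard dimensionless form used for each normal mode:

    $$
    q = \frac{a + a^{\dagger}}{\sqrt{2}}, \qquad
    p = \frac{a - a^{\dagger}}{i\sqrt{2}}, \qquad
    a\,\lvert n\rangle = \sqrt{n}\,\lvert n-1\rangle, \qquad
    a^{\dagger}\lvert n\rangle = \sqrt{n+1}\,\lvert n+1\rangle,
    $$

    so polynomial force-field terms become polynomials in $a$ and $a^{\dagger}$, and the canonical transformations of CVPT reduce to commutator algebra carried out order by order.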

  2. A study of fault prediction and reliability assessment in the SEL environment

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Patnaik, Debabrata

    1986-01-01

    An empirical study on the estimation and prediction of faults, the prediction of fault detection and correction effort, and reliability assessment in the Software Engineering Laboratory (SEL) environment is presented. Fault estimation using empirical relationships and fault prediction using a curve-fitting method are investigated. Relationships between debugging efforts (fault detection and correction effort) in different test phases are provided, in order to make an early estimate of future debugging effort. The study concludes with a fault analysis, the application of a reliability model, and the analysis of a normalized metric for reliability assessment and reliability monitoring during software development.

  3. Development of a nanosatellite de-orbiting system by reliability based design optimization

    NASA Astrophysics Data System (ADS)

    Nikbay, Melike; Acar, Pınar; Aslan, Alim Rüstem

    2015-12-01

    This paper presents design approaches to develop a reliable and efficient de-orbiting system for the 3USAT nanosatellite to provide a beneficial orbital decay process at the end of a mission. A de-orbiting system is initially designed by employing the aerodynamic drag augmentation principle where the structural constraints of the overall satellite system and the aerodynamic forces are taken into account. Next, an alternative de-orbiting system is designed with new considerations and further optimized using deterministic and reliability based design techniques. For the multi-objective design, the objectives are chosen to maximize the aerodynamic drag force through the maximization of the Kapton surface area while minimizing the de-orbiting system mass. The constraints are related in a deterministic manner to the required deployment force, the height of the solar panel hole and the deployment angle. The length and the number of layers of the deployable Kapton structure are used as optimization variables. In the second stage of this study, uncertainties related to both manufacturing and operating conditions of the deployable structure in space environment are considered. These uncertainties are then incorporated into the design process by using different probabilistic approaches such as Monte Carlo Simulation, the First-Order Reliability Method and the Second-Order Reliability Method. The reliability based design optimization seeks optimal solutions using the former design objectives and constraints with the inclusion of a reliability index. Finally, the de-orbiting system design alternatives generated by different approaches are investigated and the reliability based optimum design is found to yield the best solution since it significantly improves both system reliability and performance requirements.

  4. Uncertainties in obtaining high reliability from stress-strength models

    NASA Technical Reports Server (NTRS)

    Neal, Donald M.; Matthews, William T.; Vangel, Mark G.

    1992-01-01

    There has been recent interest in determining high statistical reliability in risk assessment of aircraft components. The potential consequences of incorrectly assuming a particular statistical distribution for the stress or strength data used in obtaining high reliability values are identified. The reliability is computed as the probability of the strength being greater than the stress over the range of stress values; this method is often referred to as the stress-strength model. A sensitivity analysis comparing reliability results was performed in order to evaluate the effects of assuming specific statistical distributions. Both known population distributions, and those that differed slightly from the known, were considered. Results showed substantial differences in reliability estimates even for almost nondetectable differences in the assumed distributions. These differences represent a potential problem in using the stress-strength model for high reliability computations, since in practice it is impossible to ever know the exact (population) distribution. An alternative reliability computation procedure is examined, involving determination of a lower bound on the reliability values using extreme value distributions. This procedure reduces the possibility of obtaining nonconservative reliability estimates. Results indicated the method can provide conservative bounds when computing high reliability.
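
    For illustration only (not from the record above): if stress and strength are both modeled as independent normals, the stress-strength reliability has the closed form R = Φ((μ_strength − μ_stress) / √(σ_strength² + σ_stress²)), which a Monte Carlo estimate can cross-check. All numbers below are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def stress_strength_reliability(mu_s, sd_s, mu_l, sd_l, n=1_000_000, seed=0):
    """Monte Carlo estimate of R = P(strength > stress), independent normals."""
    rng = np.random.default_rng(seed)
    strength = rng.normal(mu_s, sd_s, n)
    stress = rng.normal(mu_l, sd_l, n)
    return (strength > stress).mean()

# hypothetical units: strength ~ N(60, 3^2), stress ~ N(40, 4^2)
mu_s, sd_s, mu_l, sd_l = 60.0, 3.0, 40.0, 4.0
exact = norm.cdf((mu_s - mu_l) / np.hypot(sd_s, sd_l))   # closed form
print(exact, stress_strength_reliability(mu_s, sd_s, mu_l, sd_l))
```

    Re-running this sketch with a slightly heavier-tailed stress distribution (e.g., a Student's t) visibly shifts the estimate when R is close to 1, which is exactly the sensitivity the record warns about.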

  5. The Evaluation Method of the Lightning Strike on Transmission Lines Aiming at Power Grid Reliability

    NASA Astrophysics Data System (ADS)

    Wen, Jianfeng; Wu, Jianwei; Huang, Liandong; Geng, Yinan; Yu, zhanqing

    2018-01-01

    Lightning protection of power systems has traditionally focused on reducing the flashover rate, distinguishing transmission lines only by voltage level without considering their functional differences and without analyzing the effect of line faults on grid reliability. As a result, lightning protection designed for general transmission lines is excessive for some lines yet insufficient for key lines. In order to solve this problem, an analysis method for lightning strikes on transmission lines aimed at power grid reliability is given. Full-wave process theory is used to analyze lightning back striking, and the leader propagation model is used to describe the process of shielding failure of transmission lines. An index of power grid reliability is introduced, and the effect of transmission line faults on the reliability of the power system is discussed in detail.

  6. Fourth order exponential time differencing method with local discontinuous Galerkin approximation for coupled nonlinear Schrodinger equations

    DOE PAGES

    Liang, Xiao; Khaliq, Abdul Q. M.; Xing, Yulong

    2015-01-23

    In this paper, we study a local discontinuous Galerkin method combined with fourth-order exponential time differencing Runge-Kutta time discretization and a fourth-order conservative method for solving the nonlinear Schrödinger equations. Based on different choices of numerical fluxes, we propose both energy-conserving and energy-dissipative local discontinuous Galerkin methods, and prove error estimates for the semi-discrete methods applied to the linear Schrödinger equation. The numerical methods are shown to be highly efficient and stable for long-range soliton computations. Finally, extensive numerical examples are provided to illustrate the accuracy, efficiency and reliability of the proposed methods.

  7. Reliability Sensitivity Analysis and Design Optimization of Composite Structures Based on Response Surface Methodology

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    2003-01-01

    This report discusses the development and application of two alternative strategies, in the form of global and sequential local response surface (RS) techniques, for the solution of reliability-based optimization (RBO) problems. The problem of a thin-walled composite circular cylinder under axial buckling instability is used as a demonstrative example. In this case, the global technique uses a single second-order RS model to estimate the axial buckling load over the entire feasible design space (FDS), whereas the local technique uses multiple first-order RS models, each applied to a small subregion of the FDS. Alternative methods for the calculation of unknown coefficients in each RS model are explored prior to the solution of the optimization problem. The example RBO problem is formulated as a function of 23 uncorrelated random variables that include material properties, thickness and orientation angle of each ply, cylinder diameter and length, as well as the applied load. The mean values of the 8 ply thicknesses are treated as independent design variables. While the coefficients of variation of all random variables are held fixed, the standard deviations of ply thicknesses can vary during the optimization process as a result of changes in the design variables. The structural reliability analysis is based on the first-order reliability method, with the reliability index treated as the design constraint. In addition to the probabilistic sensitivity analysis of the reliability index, the results of the RBO problem are presented for different combinations of cylinder length and diameter and laminate ply patterns. The two strategies are found to produce similar results in terms of accuracy, with the sequential local RS technique having considerably better computational efficiency.

  8. Fringe order correction for the absolute phase recovered by two selected spatial frequency fringe projections in fringe projection profilometry.

    PubMed

    Ding, Yi; Peng, Kai; Yu, Miao; Lu, Lei; Zhao, Kun

    2017-08-01

    The performance of two-selected-spatial-frequency phase unwrapping methods is limited by a phase error bound, beyond which errors occur in the fringe order, leading to significant errors in the recovered absolute phase map. In this paper, we propose a method to detect and correct wrong fringe orders. Two constraints are introduced during the fringe order determination of two selected spatial frequency phase unwrapping methods, and a strategy to detect and correct the wrong fringe orders is described. Compared with existing methods, we do not need to estimate a threshold associated with absolute phase values to determine fringe order errors, which makes the method more reliable and avoids the search procedure in detecting and correcting successive fringe order errors. The effectiveness of the proposed method is validated by experimental results.
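
    As background for the fringe-order arithmetic the record refers to, the following is a minimal sketch of classical two-frequency temporal phase unwrapping, not the authors' detection-and-correction strategy; the function name and the assumption that the low-frequency pattern spans a single fringe (so its wrapped phase is already absolute) are ours.

```python
import numpy as np

def unwrap_two_frequency(phi_high, phi_low, f_high, f_low=1.0):
    """Classical two-frequency phase unwrapping (hypothetical helper).

    phi_high: wrapped phase of the dense fringe pattern, in [-pi, pi).
    phi_low:  phase of the sparse pattern; with f_low = 1 (one fringe across
              the field) its wrapped phase is already absolute.
    """
    predicted = (f_high / f_low) * phi_low                   # predicted absolute phase
    order = np.round((predicted - phi_high) / (2 * np.pi))   # integer fringe order
    return phi_high + 2 * np.pi * order                      # recovered absolute phase
```

    A phase error in phi_low larger than the bound the record mentions flips np.round to an adjacent integer, producing exactly the 2π fringe-order jump that the proposed constraints are designed to catch.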

  9. The Effect of Observation Length and Presentation Order on the Reliability and Validity of an Observational Measure of Teaching Quality

    ERIC Educational Resources Information Center

    Mashburn, Andrew J.; Meyer, J. Patrick; Allen, Joseph P.; Pianta, Robert C.

    2014-01-01

    Observational methods are increasingly being used in classrooms to evaluate the quality of teaching. Operational procedures for observing teachers are somewhat arbitrary in existing measures and vary across different instruments. To study the effect of different observation procedures on score reliability and validity, we conducted an experimental…

  10. Acromiohumeral distance measurement in rotator cuff tendinopathy: is there a reliable, clinically applicable method? A systematic review.

    PubMed

    McCreesh, Karen M; Crotty, James M; Lewis, Jeremy S

    2015-03-01

    Narrowing of the subacromial space has been noted as a common feature of rotator cuff (RC) tendinopathy. It has been implicated in the development of symptoms and forms the basis for some surgical and rehabilitation approaches. Various radiological methods have been used to measure the subacromial space, which is represented by a two-dimensional measurement of acromiohumeral distance (AHD). A reliable method of measurement could be used to assess the impact of rehabilitation or surgical interventions for RC tendinopathy; however, there are no published reviews assessing the reliability of AHD measurement. The aim of this review was to systematically assess the evidence for the intrarater and inter-rater reliability of radiological methods of measuring AHD, in order to identify the most reliable method for use in RC tendinopathy. An electronic literature search was carried out and studies describing the reliability of any radiological method of measuring AHD in either healthy or RC tendinopathy groups were included. Eighteen studies met the inclusion criteria and were appraised by two reviewers using the Quality Appraisal for reliability Studies checklist. Eight studies were deemed to be of high methodological quality. Study weaknesses included lack of tester blinding, inadequate description of tester experience, lack of inclusion of symptomatic populations, poor reporting of statistical methods and unclear diagnosis. There was strong evidence for the reliability of ultrasound for measuring AHD, with moderate evidence for MRI and CT measures and conflicting evidence for radiographic methods. Overall, there was lack of research in RC tendinopathy populations, with only six studies including participants with shoulder pain. The results support the reliability of ultrasound and CT or MRI for the measurement of AHD; however, more studies in symptomatic populations are required. The reliability of AHD measurement using radiographs has not been supported by the studies reviewed.

  11. Method of ultrasonic measurement of texture

    DOEpatents

    Thompson, R. Bruce; Smith, John F.; Lee, Seung S.; Li, Yan

    1993-10-12

    A method for measuring texture of metal plates or sheets using non-destructive ultrasonic investigation includes measuring the velocity of ultrasonic energy waves in lower order plate modes in one or more directions, and measuring phase velocity dispersion of higher order modes of the plate or sheet if needed. Texture or preferred grain orientation can be derived from these measurements with improved reliability and accuracy. The method can be utilized in production on moving metal plate or sheet.

  12. Method of ultrasonic measurement of texture

    DOEpatents

    Thompson, R.B.; Smith, J.F.; Lee, S.S.; Taejon Ch'ungmam; Yan Li.

    1993-10-12

    A method for measuring texture of metal plates or sheets using non-destructive ultrasonic investigation includes measuring the velocity of ultrasonic energy waves in lower order plate modes in one or more directions, and measuring phase velocity dispersion of higher order modes of the plate or sheet if needed. Texture or preferred grain orientation can be derived from these measurements with improved reliability and accuracy. The method can be utilized in production on moving metal plate or sheet. 9 figures.

  13. Complementary Reliability-Based Decodings of Binary Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Fossorier, Marc P. C.; Lin, Shu

    1997-01-01

    This correspondence presents a hybrid reliability-based decoding algorithm which combines the reprocessing method based on the most reliable basis and a generalized Chase-type algebraic decoder based on the least reliable positions. It is shown that reprocessing with a simple additional algebraic decoding effort achieves significant coding gain. For long codes, the order of reprocessing required to achieve asymptotic optimum error performance is reduced by approximately 1/3. This significantly reduces the computational complexity, especially for long codes. Also, a more efficient criterion for stopping the decoding process is derived based on the knowledge of the algebraic decoding solution.

  14. Reliability analysis of hydrologic containment of liquefied petroleum gas within unlined rock caverns.

    NASA Astrophysics Data System (ADS)

    Gao, X.; Yan, E. C.; Yeh, T. C. J.; Wang, Y.; Liang, Y.; Hao, Y.

    2017-12-01

    Most underground liquefied petroleum gas (LPG) storage caverns are constructed as unlined rock caverns (URCs), where the variability of hydraulic properties (in particular, hydraulic conductivity) has significant impacts on hydrologic containment performance. However, it is practically impossible to characterize the spatial distribution of these properties in detail at URC sites. This dilemma forces us to cope with uncertainty in our evaluations of gas containment; as a consequence, uncertainty-based analysis is deemed more appropriate than traditional deterministic analysis. The objectives of this paper are 1) to introduce a numerical first-order method to calculate the gas containment reliability within a heterogeneous, two-dimensional unlined rock cavern, and 2) to suggest a strategy for improving the gas containment reliability. In order to achieve these goals, we first introduce the stochastic continuum representation of the saturated hydraulic conductivity (Ks) of fractured rock and analyze the spatial variability of Ks at a field site. We then conduct deterministic simulations to demonstrate the importance of the heterogeneity of Ks in the analysis of the gas tightness performance of URCs. Considering the uncertainty of this heterogeneity in real-world situations, we subsequently develop a numerical first-order method (NFOM) to determine the gas tightness reliability at crucial locations of URCs. Using the NFOM, the effect of the spatial variability of Ks on gas tightness reliability is investigated. Results show that as the variance or spatial structure anisotropy of Ks increases, the gas tightness reliability at most crucial locations decreases. Comparing the results of the NFOM with those of Monte Carlo simulation shows that the accuracy of the NFOM is mainly affected by the magnitude of the variance of Ks. Finally, to improve gas containment reliability at crucial locations at this study site, we suggest that vertical water-curtain holes be installed in the pillar rather than increasing the density of horizontal water-curtain boreholes.

  15. A new validated method for the simultaneous determination of benzocaine, propylparaben and benzyl alcohol in a bioadhesive gel by HPLC.

    PubMed

    Pérez-Lozano, P; García-Montoya, E; Orriols, A; Miñarro, M; Ticó, J R; Suñé-Negre, J M

    2005-10-04

    A new HPLC-RP method has been developed and validated for the simultaneous determination of benzocaine, two preservatives (propylparaben (nipasol) and benzyl alcohol) and degradation products of benzocaine in a semisolid pharmaceutical dosage form (benzocaine gel). The method uses a Nucleosil 120 C18 column and gradient elution. The mobile phase consisted of a mixture of methanol and glacial acetic acid (10%, v/v) at different proportions according to a time-schedule programme, pumped at a flow rate of 2.0 ml min(-1). The DAD detector was set at 258 nm. The validation study was carried out following the ICH guidelines in order to prove that the new analytical method meets the reliability characteristics; these characteristics showed the capacity of the analytical method to maintain, over time, the fundamental criteria for validation: selectivity, linearity, precision, accuracy and sensitivity. The method was applied during quality control of benzocaine gel in order to quantify the drug (benzocaine), the preservatives and degradation products, and proved to be a rapid and reliable quality control method.

  16. Real-Time GNSS-Based Attitude Determination in the Measurement Domain.

    PubMed

    Zhao, Lin; Li, Na; Li, Liang; Zhang, Yi; Cheng, Chun

    2017-02-05

    A multi-antenna-based GNSS receiver is capable of providing a high-precision and drift-free attitude solution. Carrier phase measurements need to be utilized to achieve high-precision attitude. The traditional attitude determination methods in the measurement domain and the position domain resolve the attitude and the ambiguity sequentially, and the redundant measurements from multiple baselines have not been fully utilized to enhance the reliability of attitude determination. A multi-baseline-based attitude determination method in the measurement domain is proposed to estimate the attitude parameters and the ambiguity simultaneously. Meanwhile, the redundancy of the attitude resolution is also increased, so that the reliability of ambiguity resolution and attitude determination can be enhanced. Moreover, in order to further improve the reliability of attitude determination, we propose a partial ambiguity resolution method based on the proposed attitude determination model. Static and kinematic experiments were conducted to verify the performance of the proposed method. Compared with the traditional attitude determination methods, the static experimental results show that the proposed method can improve the accuracy by at least 0.03° and enhance the continuity by up to 18%. The kinematic results show that the proposed method can obtain an optimal balance between accuracy and reliability performance.

  17. A third-order approximation method for three-dimensional wheel-rail contact

    NASA Astrophysics Data System (ADS)

    Negretti, Daniele

    2012-03-01

    Multibody train analysis is used increasingly by railway operators whenever a reliable and time-efficient method to evaluate the contact between wheel and rail is needed; in particular, the wheel-rail contact is one of the most important aspects affecting a reliable and time-efficient vehicle dynamics computation. The focus of the approach proposed here is to carry out such tasks by means of online wheel-rail elastic contact detection. In order to improve efficiency and save time, a mainly analytical approach is used for the definition of wheel and rail surfaces as well as for contact detection, and a final numerical evaluation is used to locate the contact. The final numerical procedure consists of finding the zeros of a nonlinear function of a single variable. The overall method is based on an approximation of the wheel surface, which does not influence the contact location significantly, as shown in the paper.

  18. Confirmatory Factor Analysis of the Malay Version of the Confusion, Hubbub and Order Scale (CHAOS-6) among Myocardial Infarction Survivors in a Malaysian Cardiac Healthcare Facility

    PubMed Central

    Ganasegeran, Kurubaran; Selvaraj, Kamaraj; Rashid, Abdul

    2017-01-01

    Background: The six-item Confusion, Hubbub and Order Scale (CHAOS-6) has been validated as a reliable tool to measure levels of household disorder. We aimed to investigate the goodness of fit and reliability of a new Malay version of the CHAOS-6. Methods: The original English version of the CHAOS-6 underwent forward-backward translation into the Malay language. The finalised Malay version was administered to 105 myocardial infarction survivors in a Malaysian cardiac health facility. We performed confirmatory factor analyses (CFAs) using structural equation modelling. A path diagram and fit statistics were yielded to determine the Malay version's validity, and composite reliability was tested to determine the scale's reliability. Results: All 105 myocardial infarction survivors participated in the study. The CFA yielded a six-item, one-factor model with excellent fit statistics. Composite reliability for the single-factor CHAOS-6 was 0.65, confirming that the scale is reliable for Malay speakers. Conclusion: The Malay version of the CHAOS-6 was reliable and showed the best fit statistics for our study sample. We thus offer a simple, brief, validated, reliable and novel instrument to measure chaos, the Skala Kecelaruan, Keriuhan & Tertib Terubahsuai (CHAOS-6), for the Malaysian population. PMID:28951688

  19. Markov modeling and reliability analysis of urea synthesis system of a fertilizer plant

    NASA Astrophysics Data System (ADS)

    Aggarwal, Anil Kr.; Kumar, Sanjeev; Singh, Vikram; Garg, Tarun Kr.

    2015-12-01

    This paper deals with the Markov modeling and reliability analysis of the urea synthesis system of a fertilizer plant. The system was modeled using a Markov birth-death process under the assumption that the failure and repair rates of each subsystem follow exponential distributions. The first-order Chapman-Kolmogorov differential equations are developed using the mnemonic rule, and these equations are solved with the Runge-Kutta fourth-order method. The long-run availability, reliability and mean time between failures are computed for various choices of failure and repair rates of the subsystems. The findings of the paper are discussed with the plant personnel so that suitable maintenance policies/strategies can be adopted to enhance the performance of the urea synthesis system of the fertilizer plant.
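
    As a minimal illustration of the machinery described (not the plant model itself), the sketch below writes the Chapman-Kolmogorov equations for a single two-state (working/failed) subsystem with assumed exponential failure and repair rates and integrates them with a fourth-order Runge-Kutta scheme; the rates are hypothetical.

```python
import numpy as np

lam, mu = 0.02, 0.5          # hypothetical failure / repair rates (per hour)

def dP(P):
    """Chapman-Kolmogorov equations for a two-state birth-death subsystem."""
    up, down = P
    return np.array([-lam * up + mu * down,
                      lam * up - mu * down])

def rk4_step(P, h):
    k1 = dP(P)
    k2 = dP(P + h / 2 * k1)
    k3 = dP(P + h / 2 * k2)
    k4 = dP(P + h * k3)
    return P + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

P, h = np.array([1.0, 0.0]), 0.1   # start in the working state
for _ in range(10_000):            # integrate to (near) steady state
    P = rk4_step(P, h)
print(P[0], mu / (lam + mu))       # long-run availability vs analytic value
```

    The integrated availability converges to the analytic steady-state value μ/(λ+μ), a useful sanity check before scaling the same scheme up to a full multi-state birth-death model.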

  20. The quality of orthodontic practice websites.

    PubMed

    Parekh, J; Gill, D S

    2014-05-01

    To evaluate orthodontic practice websites for the reliability of the information presented, accessibility, usability for patients, and compliance with General Dental Council (GDC) regulations on ethical advertising. World Wide Web. The term 'orthodontic practice' was entered into three separate search engines. Thirty websites from the UK were selected and graded according to the LIDA tool (a validated method of evaluating healthcare websites) for accessibility, usability and reliability of information on orthodontic treatment. The websites were then evaluated against the GDC's principles for ethical advertising on nine different criteria. On average, each website fulfilled six of the nine points of the GDC's criteria, with inclusion of a complaints policy being the most poorly fulfilled criterion. The mean LIDA score (a combination of usability, reliability and accessibility) was 102/144 (standard deviation 8.38). The websites scored most poorly on reliability (average 43%, SD 11.7), with no single website reporting a clear, reliable method of content production. Average accessibility was 81% and usability 73%. In general, websites did not comply with GDC guidelines on ethical advertising. Furthermore, practitioners should consider reporting their method of information production, particularly when making claims about efficiency and speed of treatment, in order to improve reliability.

  1. Computationally efficient stochastic optimization using multiple realizations

    NASA Astrophysics Data System (ADS)

    Bayer, P.; Bürger, C. M.; Finkel, M.

    2008-02-01

    The presented study is concerned with computationally efficient methods for solving stochastic optimization problems involving multiple equally probable realizations of uncertain parameters. A new and straightforward technique is introduced that is based on dynamically ordering the stack of realizations during the search procedure. The rationale is that a small number of critical realizations govern the output of a reliability-based objective function. Using a problem typical of water supply well-field design, several variants of this "stack ordering" approach are tested. The results are statistically assessed in terms of optimality and nominal reliability. This study demonstrates that simply ordering a given stack of 500 realizations while applying an evolutionary search algorithm can save about half of the model runs without compromising the optimization procedure. More advanced variants of stack ordering can, if properly configured, save more than 97% of the computational effort that would be required if the entire set of realizations were considered. The findings are promising for similar problems in water management and reliability-based design in general, particularly for non-convex problems that require heuristic search techniques.
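
    A minimal sketch of the stack-ordering idea under stated assumptions; simulate_fails and the promotion rule below are our illustration, not the authors' exact variants.

```python
def reliable_enough(design, realizations, simulate_fails, max_failures):
    """Screen a candidate design against an ordered stack of realizations.

    simulate_fails(design, r) is a hypothetical model run returning True when
    the design fails on realization r. Failing realizations are promoted to
    the top of the stack, so the critical few are evaluated first and poor
    designs are rejected after only a handful of model runs.
    """
    failures = 0
    for r in list(realizations):        # iterate over a snapshot of the stack
        if simulate_fails(design, r):
            failures += 1
            realizations.remove(r)
            realizations.insert(0, r)   # promote the critical realization
            if failures > max_failures: # nominal reliability target missed
                return False
    return True
```

    Because the stack order persists across candidate designs, realizations that recently caused failures keep getting checked first, which is where the reported savings in model runs come from.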

  2. Probabilistic Analysis of a Composite Crew Module

    NASA Technical Reports Server (NTRS)

    Mason, Brian H.; Krishnamurthy, Thiagarajan

    2011-01-01

    An approach for conducting reliability-based analysis (RBA) of a Composite Crew Module (CCM) is presented. The goal is to identify and quantify the benefits of probabilistic design methods for the CCM and future space vehicles. The coarse finite element model from a previous NASA Engineering and Safety Center (NESC) project is used as the baseline deterministic analysis model to evaluate the performance of the CCM using a strength-based failure index. The first step in the probabilistic analysis process is the determination of the uncertainty distributions for key parameters in the model. Analytical data from water landing simulations are used to develop an uncertainty distribution, but such data were unavailable for other load cases. The uncertainty distributions for the other load scale factors and the strength allowables are generated based on assumed coefficients of variation. The probability of first-ply failure is estimated using three methods: the first-order reliability method (FORM), Monte Carlo simulation, and conditional sampling. Results for the three methods were consistent. The reliability is shown to be driven by first-ply failure in one region of the CCM under the high-altitude abort load set. The final predicted probability of failure is on the order of 10^-11 due to the conservative nature of the factors of safety on the deterministic loads.

  3. A soft decoding algorithm and hardware implementation for the visual prosthesis based on high order soft demodulation.

    PubMed

    Yang, Yuan; Quan, Nannan; Bu, Jingjing; Li, Xueping; Yu, Ningmei

    2016-09-26

    High order modulation and demodulation technology can resolve the conflict between the frequency requirements of wireless energy transmission and data communication. In order to achieve reliable wireless data communication based on high order modulation technology for a visual prosthesis, this work proposes a Reed-Solomon (RS) error correcting code (ECC) circuit on the basis of differential amplitude and phase shift keying (DAPSK) soft demodulation. First, recognizing that the traditional division-based DAPSK soft demodulation algorithm is complex for hardware implementation, an improved phase soft demodulation algorithm for visual prostheses is put forward to reduce hardware complexity. Based on this new algorithm, an improved RS soft decoding method is proposed, in which the combination of the Chase algorithm and hard decoding algorithms is used to achieve soft decoding. In order to meet the requirements of an implantable visual prosthesis, a method to calculate symbol-level reliability based on the multiplication of bit reliabilities is derived, which reduces the number of test vectors of the Chase algorithm. The proposed algorithms are verified by MATLAB simulation and FPGA experiments. During MATLAB simulation, a biological channel attenuation model is added to the ECC circuit. The data rate is 8 Mbps in both the MATLAB simulation and the FPGA experiments. MATLAB simulation results show that the improved phase soft demodulation algorithm proposed in this paper saves hardware resources without losing bit error rate (BER) performance. Compared with the traditional demodulation circuit, the coding gain of the ECC circuit is improved by about 3 dB at the same BER of [Formula: see text]. The FPGA experimental results show that the system can correct data demodulation errors with the wireless coils 3 cm apart; the greater the distance, the higher the BER. A bit error rate analyzer was then used to measure the BER of the demodulation circuit and the RS ECC circuit at different coil distances, and the results show that the RS ECC circuit has about an order of magnitude lower BER than the demodulation circuit at the same coil distance. Therefore, the RS ECC circuit provides more reliable communication in the system. The improved phase soft demodulation algorithm and soft decoding algorithm proposed in this paper enable data communication that is more reliable than other demodulation systems, and provide a significant reference for further study of visual prosthesis systems.

  4. Effective core potential calculations on small molecules containing transition metal atoms

    NASA Astrophysics Data System (ADS)

    Gropen, O.; Wahlgren, U.; Pettersson, L.

    1982-04-01

    A series of test calculations on diatomic oxides and hydrides of Sc, Ti, Cr, Ni and Zn have been carried out in order to test the reliability of some pseudopotential methods. Several different forms of some pseudopotential operators were used. Only the highest valence orbitals of each atomic symmetry were explicitly included in the calculations. The results indicate that there are problems associated with all the investigated operators particularly for the lighter transition elements. It is suggested that more reliable results may be obtained with pseudopotential methods using smaller cores.

  5. Initial Development and Validation of the BullyHARM: The Bullying, Harassment, and Aggression Receipt Measure.

    PubMed

    Hall, William J

    2016-11-01

    This article describes the development and preliminary validation of the Bullying, Harassment, and Aggression Receipt Measure (BullyHARM). The development of the BullyHARM involved a number of steps and methods, including a literature review, expert review, cognitive testing, readability testing, data collection from a large sample, reliability testing, and confirmatory factor analysis. A sample of 275 middle school students was used to examine the psychometric properties and factor structure of the BullyHARM, which consists of 22 items and 6 subscales: physical bullying, verbal bullying, social/relational bullying, cyber-bullying, property bullying, and sexual bullying. First-order and second-order factor models were evaluated. Results demonstrate that the first-order factor model had superior fit. Results of reliability testing indicate that the BullyHARM scale and subscales have very good internal consistency reliability. Findings indicate that the BullyHARM has good properties regarding content validation and respondent-related validation and is a promising instrument for measuring bullying victimization in school.

  6. Initial Development and Validation of the BullyHARM: The Bullying, Harassment, and Aggression Receipt Measure

    PubMed Central

    Hall, William J.

    2017-01-01

    This article describes the development and preliminary validation of the Bullying, Harassment, and Aggression Receipt Measure (BullyHARM). The development of the BullyHARM involved a number of steps and methods, including a literature review, expert review, cognitive testing, readability testing, data collection from a large sample, reliability testing, and confirmatory factor analysis. A sample of 275 middle school students was used to examine the psychometric properties and factor structure of the BullyHARM, which consists of 22 items and 6 subscales: physical bullying, verbal bullying, social/relational bullying, cyber-bullying, property bullying, and sexual bullying. First-order and second-order factor models were evaluated. Results demonstrate that the first-order factor model had superior fit. Results of reliability testing indicate that the BullyHARM scale and subscales have very good internal consistency reliability. Findings indicate that the BullyHARM has good properties regarding content validation and respondent-related validation and is a promising instrument for measuring bullying victimization in school. PMID:28194041

  7. A novel optimal configuration form redundant MEMS inertial sensors based on the orthogonal rotation method.

    PubMed

    Cheng, Jianhua; Dong, Jinlu; Landry, Rene; Chen, Daidai

    2014-07-29

    In order to improve the accuracy and reliability of micro-electro-mechanical systems (MEMS) navigation systems, an orthogonal rotation method-based nine-gyro redundant MEMS configuration is presented. By analyzing the accuracy and reliability characteristics of an inertial navigation system (INS), criteria for redundant configuration design are introduced. The orthogonal rotation configuration is then formed through two rotations of a set of orthogonal inertial sensors around a space vector. A feasible installation method is given for the practical engineering realization of this proposed configuration. The performance of the novel configuration and six other configurations is comprehensively compared and analyzed. Simulation and experimentation are also conducted, and the results show that the orthogonal rotation configuration has the best reliability, accuracy and fault detection and isolation (FDI) performance when the number of gyros is nine.

  8. An enquiry into the method of paired comparison: reliability, scaling, and Thurstone's Law of Comparative Judgment

    Treesearch

    Thomas C. Brown; George L. Peterson

    2009-01-01

    The method of paired comparisons is used to measure individuals' preference orderings of items presented to them as discrete binary choices. This paper reviews the theory and application of the paired comparison method, describes a new computer program available for eliciting the choices, and presents an analysis of methods for scaling paired choice data to...

  9. Semi-automatic for ultrasonic measurement of texture

    DOEpatents

    Thompson, R. Bruce; Smith, John F.; Lee, Seung S.; Li, Yan

    1990-02-13

    A method for measuring texture of metal plates or sheets using non-destructive ultrasonic investigation includes measuring the velocity of ultrasonic energy waves in lower order plate modes in one or more directions, and measuring phase velocity dispersion of higher order modes of the plate or sheet if needed. Texture or preferred grain orientation can be derived from these measurements with improved reliability and accuracy. The method can be utilized in production on moving metal plate or sheet.

  10. Semi-automatic for ultrasonic measurement of texture

    DOEpatents

    Thompson, R.B.; Smith, J.F.; Lee, S.S.; Li, Y.

    1990-02-13

    A method for measuring texture of metal plates or sheets using non-destructive ultrasonic investigation includes measuring the velocity of ultrasonic energy waves in lower order plate modes in one or more directions, and measuring phase velocity dispersion of higher order modes of the plate or sheet if needed. Texture or preferred grain orientation can be derived from these measurements with improved reliability and accuracy. The method can be utilized in production on moving metal plate or sheet. 9 figs.

  11. Choosing the optimal wind turbine variant using the "ELECTRE" method

    NASA Astrophysics Data System (ADS)

    Ţişcă, I. A.; Anuşca, D.; Dumitrescu, C. D.

    2017-08-01

    This paper presents a method of choosing the "optimal" alternative, both under certainty and under uncertainty, based on relevant analysis criteria. Taking into account that a product can be treated as a system and that the reliability of the system depends on the reliability of its components, the choice of product (the appropriate system decision) can be made using the "ELECTRE" method according to the level of reliability of each product. In this paper, the "ELECTRE" method is used to choose the optimal variant of a wind turbine required to equip a wind farm in western Romania; the problem arises because the wind turbines currently in service exhibit reliability problems. A set of criteria has been proposed to compare two or more products from a range of available products: operating conditions, environmental conditions during operation, and time requirements. Using the hierarchical ELECTRE method, the optimal wind turbine variant and the order of preference of the variants are determined on the basis of the computed concordance coefficients, with the limit values chosen arbitrarily.
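
    For illustration, the core of ELECTRE is the concordance matrix: for each ordered pair of variants, the total weight of the criteria on which the first is at least as good as the second. The sketch below assumes benefit-type criteria and uses hypothetical scores and weights, not the paper's data.

```python
import numpy as np

def concordance_matrix(scores, weights):
    """C[a, b] = total weight of criteria on which variant a scores
    at least as well as variant b (benefit-type criteria assumed)."""
    n = scores.shape[0]
    C = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            if a != b:
                C[a, b] = weights[scores[a] >= scores[b]].sum()
    return C

# hypothetical turbines x criteria (operating, environmental, time requirements)
scores = np.array([[7.0, 6.0, 8.0],
                   [6.0, 8.0, 7.0],
                   [8.0, 5.0, 6.0]])
weights = np.array([0.5, 0.3, 0.2])
print(concordance_matrix(scores, weights))
```

    Variant a outranks variant b when C[a, b] exceeds the chosen concordance threshold (and the discordance test, omitted here, does not veto the relation).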

  12. Real-Time GNSS-Based Attitude Determination in the Measurement Domain

    PubMed Central

    Zhao, Lin; Li, Na; Li, Liang; Zhang, Yi; Cheng, Chun

    2017-01-01

    A multi-antenna-based GNSS receiver is capable of providing a high-precision and drift-free attitude solution. Carrier phase measurements need to be utilized to achieve high-precision attitude. The traditional attitude determination methods in the measurement domain and the position domain resolve the attitude and the ambiguity sequentially, and the redundant measurements from multiple baselines have not been fully utilized to enhance the reliability of attitude determination. A multi-baseline-based attitude determination method in the measurement domain is proposed to estimate the attitude parameters and the ambiguity simultaneously. Meanwhile, the redundancy of the attitude resolution is also increased, so that the reliability of ambiguity resolution and attitude determination can be enhanced. Moreover, in order to further improve the reliability of attitude determination, we propose a partial ambiguity resolution method based on the proposed attitude determination model. Static and kinematic experiments were conducted to verify the performance of the proposed method. Compared with the traditional attitude determination methods, the static experimental results show that the proposed method can improve the accuracy by at least 0.03° and enhance the continuity by up to 18%. The kinematic results show that the proposed method can obtain an optimal balance between accuracy and reliability performance. PMID:28165434

  13. Limit states and reliability-based pipeline design. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zimmerman, T.J.E.; Chen, Q.; Pandey, M.D.

    1997-06-01

    This report provides the results of a study to develop limit states design (LSD) procedures for pipelines. Limit states design, also known as load and resistance factor design (LRFD), provides a unified approach to dealing with all relevant failure modes and combinations of concern. It explicitly accounts for the uncertainties that naturally occur in the determination of the loads which act on a pipeline and in the resistance of the pipe to failure. The load and resistance factors used are based on reliability considerations; however, the designer is not faced with carrying out probabilistic calculations. This work is done during development and periodic updating of the LSD document. This report provides background information concerning limit states and reliability-based design (Section 2), gives the limit states design procedures that were developed (Section 3) and provides results of the reliability analyses that were undertaken in order to partially calibrate the LSD method (Section 4). An appendix contains LSD design examples in order to demonstrate use of the method. Section 3, Limit States Design, has been written in the format of a recommended practice. It has been structured so that, in the future, it can easily be converted to a limit states design code format. Throughout the report, figures and tables are given at the end of each section, with the exception of Section 3, where, to facilitate understanding of the LSD method, they have been included with the text.

  14. Reliability of semiautomated computational methods for estimating tibiofemoral contact stress in the Multicenter Osteoarthritis Study.

    PubMed

    Anderson, Donald D; Segal, Neil A; Kern, Andrew M; Nevitt, Michael C; Torner, James C; Lynch, John A

    2012-01-01

    Recent findings suggest that contact stress is a potent predictor of subsequent symptomatic osteoarthritis development in the knee. However, much larger numbers of knees (likely on the order of hundreds, if not thousands) need to be reliably analyzed to achieve the statistical power necessary to clarify this relationship. This study assessed the reliability of new semiautomated computational methods for estimating contact stress in knees from large population-based cohorts. Ten knees of subjects from the Multicenter Osteoarthritis Study were included. Bone surfaces were manually segmented from sequential 1.0 Tesla magnetic resonance imaging slices by three individuals on two nonconsecutive days. Four individuals then registered the resulting bone surfaces to corresponding bone edges on weight-bearing radiographs, using a semi-automated algorithm. Discrete element analysis methods were used to estimate contact stress distributions for each knee. Segmentation and registration reliabilities (day-to-day and interrater) for peak and mean medial and lateral tibiofemoral contact stress were assessed with Shrout-Fleiss intraclass correlation coefficients (ICCs). The segmentation and registration steps of the modeling approach were found to have excellent day-to-day (ICC 0.93-0.99) and good inter-rater reliability (0.84-0.97). This approach for estimating compartment-specific tibiofemoral contact stress appears to be sufficiently reliable for use in large population-based cohorts.
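
    As a reference for the statistic used, a two-way random-effects, single-rater intraclass correlation (Shrout-Fleiss ICC(2,1)) can be computed from an n-subjects-by-k-raters matrix as sketched below; this is a generic implementation, not the study's code, and the ratings are hypothetical.

```python
import numpy as np

def icc_2_1(x):
    """Shrout-Fleiss ICC(2,1) for an (n subjects x k raters) rating matrix."""
    n, k = x.shape
    grand = x.mean()
    ms_r = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects MS
    ms_c = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # raters MS
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + grand
    ms_e = (resid ** 2).sum() / ((n - 1) * (k - 1))             # residual MS
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# hypothetical: peak contact stress (MPa) for 5 knees rated by 3 raters
ratings = np.array([[9.1, 9.3, 9.0],
                    [7.2, 7.5, 7.1],
                    [8.0, 8.2, 8.1],
                    [6.5, 6.9, 6.4],
                    [7.8, 8.0, 7.7]])
print(icc_2_1(ratings))
```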

  15. Identifying resonance frequency deviations for high order nano-wire ring resonator filters based on a coupling strength variation

    NASA Astrophysics Data System (ADS)

    Park, Sahnggi; Kim, Kap-Joong; Kim, Duk-Jun; Kim, Gyungock

    2009-02-01

    Third order ring resonators are designed, fabricated with E-beam lithography and ICP etching in a CMOS nano-fabrication laboratory, and their resonance frequency deviations are analyzed experimentally. We developed a reliable method to identify, and experimentally reduce, the degree of deviation of each ring resonance frequency before completion of the fabrication process. The identified deviations can be minimized by the approach presented in this paper. This method is expected to provide a significant clue for making high order multi-channel ring resonators.

  16. 78 FR 21929 - Transmission Relay Loadability Reliability Standard; Notice of Compliance Filing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-12

    ... Relay Loadability Reliability Standard; Notice of Compliance Filing Take notice that on February 19... Relay Loadability Reliability Standard, Order No. 733, 130 FERC ¶ 61,221 (2010) (Order No. 733); order..., 136 FERC ¶ 61,185 (2011). \2\ Transmission Relay Loadability Reliability Standard, 138 FERC ¶ 61,197...

  17. A Reliability Estimation in Modeling Watershed Runoff With Uncertainties

    NASA Astrophysics Data System (ADS)

    Melching, Charles S.; Yen, Ben Chie; Wenzel, Harry G., Jr.

    1990-10-01

    The reliability of simulation results produced by watershed runoff models is a function of uncertainties in nature, data, model parameters, and model structure. A framework is presented here for using a reliability analysis method (such as first-order second-moment techniques or Monte Carlo simulation) to evaluate the combined effect of the uncertainties on the reliability of output hydrographs from hydrologic models. For a given event the prediction reliability can be expressed in terms of the probability distribution of the estimated hydrologic variable. The peak discharge probability for a watershed in Illinois using the HEC-1 watershed model is given as an example. The study of the reliability of predictions from watershed models provides useful information on the stochastic nature of output from deterministic models subject to uncertainties and identifies the relative contribution of the various uncertainties to unreliability of model predictions.
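
    A minimal first-order second-moment (FOSM) sketch of the framework's idea: linearize the deterministic model around the input means, propagate the input variances, and read off an exceedance probability via a reliability index. The two-parameter model below is hypothetical, not HEC-1.

```python
import numpy as np
from scipy.stats import norm

def fosm(g, mu, sigma, threshold, eps=1e-6):
    """First-order second-moment estimate of P(g(X) > threshold).

    g: deterministic model (e.g., a peak-discharge predictor); mu, sigma:
    means and standard deviations of the independent uncertain inputs.
    """
    mu = np.asarray(mu, float)
    grad = np.array([(g(mu + eps * e) - g(mu - eps * e)) / (2 * eps)
                     for e in np.eye(len(mu))])        # central differences
    mean_q = g(mu)
    sd_q = np.sqrt(np.sum((grad * np.asarray(sigma)) ** 2))
    beta = (threshold - mean_q) / sd_q                  # reliability index
    return 1.0 - norm.cdf(beta)                         # exceedance probability

# hypothetical two-parameter runoff model
print(fosm(lambda x: 2.0 * x[0] + x[1] ** 2,
           mu=[10.0, 3.0], sigma=[1.0, 0.5], threshold=35.0))
```

    Monte Carlo simulation, the other method the record names, replaces the linearization with repeated model runs and is preferable when g is strongly nonlinear over the input uncertainty range.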

  18. Reliability and cost analysis methods

    NASA Technical Reports Server (NTRS)

    Suich, Ronald C.

    1991-01-01

    In the design phase of a system, how does a design engineer or manager choose between a subsystem with .990 reliability and a more costly subsystem with .995 reliability? When is the increased cost justified? High reliability is not necessarily an end in itself but may be desirable in order to reduce the expected cost due to subsystem failure. However, this may not be the wisest use of funds, since the expected cost due to subsystem failure is not the only cost involved; the subsystem itself may be very costly. We should not consider the cost of the subsystem or the expected cost due to subsystem failure separately, but should minimize their total, i.e., the cost of the subsystem plus the expected cost due to subsystem failure. This final report discusses the Combined Analysis of Reliability, Redundancy, and Cost (CARRAC) methods, which were developed under Grant Number NAG 3-1100 from the NASA Lewis Research Center. CARRAC methods and a CARRAC computer program employ five models which can be used to cover a wide range of problems. The models contain an option which can include repair of failed modules.
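
    The trade-off reduces to a one-line computation; the sketch below uses hypothetical costs (not CARRAC itself) to show how the cheaper, less reliable subsystem can still minimize total expected cost.

```python
def total_cost(subsystem_cost, reliability, failure_cost):
    """Total expected cost = subsystem cost + expected cost of failure."""
    return subsystem_cost + (1.0 - reliability) * failure_cost

# hypothetical: subsystem failure costs $2M in expected mission loss
options = {0.990: 150_000, 0.995: 300_000}   # reliability -> subsystem cost
for r, c in options.items():
    print(r, total_cost(c, r, 2_000_000))
# 0.990 -> $170,000; 0.995 -> $310,000: the cheaper subsystem wins here,
# but a failure cost above $30M would flip the choice.
```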

  19. Characterizing baseline shift with a 4th-order polynomial function for a portable biomedical near-infrared spectroscopy device

    NASA Astrophysics Data System (ADS)

    Zhao, Ke; Ji, Yaoyao; Pan, Boan; Li, Ting

    2018-02-01

    Continuous-wave near-infrared spectroscopy (NIRS) devices have been highlighted for their clinical and health care applications in noninvasive hemodynamic measurements. The baseline shift of the deviation measurement attracts much attention because of its clinical importance, yet currently published correction methods have low reliability or high variability. In this study, we identified a well-suited polynomial fitting function for baseline removal in NIRS. Unlike previous studies on baseline correction for near-infrared spectroscopy evaluation of non-hemodynamic particles, we focused on baseline fitting and the corresponding correction method for NIRS, and found that a polynomial fitting function of 4th order performs better than the 2nd-order function reported in previous research. Through experimental tests of hemodynamic parameters on a solid phantom, we compared the fitting quality of the 4th-order and 2nd-order polynomials by recording and analyzing the R values and the SSE (sum of squares due to error) values. The R values of the 4th-order polynomial fits are all higher than 0.99, significantly higher than the corresponding 2nd-order values, while the SSE values of the 4th order are significantly smaller than those of the 2nd order. Using the highly reliable, low-variability 4th-order polynomial fitting function, we are able to remove the baseline online and obtain more accurate NIRS measurements.
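
    A generic version of the comparison (not the authors' data or code): fit 2nd- and 4th-order polynomials to a synthetic drifting signal with numpy.polyfit and compare R and SSE; the synthetic baseline below is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 60, 600)                        # time (s), hypothetical record
baseline = 0.02*t - 1e-3*t**2 + 2e-5*t**3 - 1e-7*t**4
signal = baseline + 0.01 * rng.standard_normal(t.size)

for order in (2, 4):
    coeffs = np.polyfit(t, signal, order)          # least-squares polynomial fit
    fit = np.polyval(coeffs, t)
    sse = np.sum((signal - fit) ** 2)              # sum of squares due to error
    r = np.corrcoef(signal, fit)[0, 1]             # correlation of fit vs data
    print(order, round(r, 4), round(sse, 4))

# online correction: corrected = signal - np.polyval(np.polyfit(t, signal, 4), t)
```

    The 2nd-order fit underfits the quartic drift, which shows up as the lower R and higher SSE that the record reports.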

  20. Total Longitudinal Moment Calculation and Reliability Analysis of Yacht Structures

    NASA Astrophysics Data System (ADS)

    Zhi, Wenzheng; Lin, Shaofen

    In order to check the reliability of a yacht built in FRP (fiber reinforced plastic) materials, this paper analyzes the vertical forces and the calculation method for the overall longitudinal bending moment on the yacht. In particular, the paper focuses on the impact of speed on the still-water bending moment. Then, considering the mechanical properties of the cap-type stiffeners in composite materials, the ultimate bearing capacity of the yacht is worked out, and finally the reliability of the yacht is calculated using response surface methodology. The results can be used in yacht design and yacht operation.

  1. Skeletal age estimation for forensic purposes: A comparison of GP, TW2 and TW3 methods on an Italian sample.

    PubMed

    Pinchi, Vilma; De Luca, Federica; Ricciardi, Federico; Focardi, Martina; Piredda, Valentina; Mazzeo, Elena; Norelli, Gian-Aristide

    2014-05-01

    Paediatricians, radiologists, anthropologists and medico-legal specialists are often called as experts to provide age estimation (AE) for forensic purposes. The literature recommends performing X-rays of the left hand and wrist (HW-XR) for skeletal age estimation. The method most frequently employed is the Greulich and Pyle (GP) method; in addition, so-called bone-specific techniques are applied, including the Tanner-Whitehouse (TW) method in its latest versions, TW2 and TW3. The aim was to compare skeletal age and chronological age in a large sample of children and adolescents using the GP, TW2 and TW3 methods in order to establish which of these is the most reliable for forensic purposes. The sample consisted of 307 HW-XRs of Italian children or adolescents, 145 females and 162 males, aged between 6 and 20 years. The radiographs were scored according to the GP, TW2RUS and TW3RUS methods by one investigator. The reliability of the results was assessed using the intraclass correlation coefficient. The Wilcoxon signed-rank test and Student's t-test were performed to search for significant differences between skeletal and chronological ages. Boxplots of the differences between estimated and chronological age show that the median differences for the TW3 and GP methods are generally very close to 0. Hypothesis test results were obtained, with respect to sex, both for the entire group and for subjects grouped by age. The results show no significant differences between estimated and chronological age for TW3 and, to a lesser extent, GP, while TW2 proved to be the worst of the three methods. Our results support the conclusion that the TW2 method is not reliable for AE for forensic purposes. The GP and TW3 methods proved to be reliable in males; for females, the best method was TW3. When performing forensic age estimation in subjects around 14 years of age, it may be advisable to combine the TW3 and GP methods.

  2. Mathematical modeling and fuzzy availability analysis for serial processes in the crystallization system of a sugar plant

    NASA Astrophysics Data System (ADS)

    Aggarwal, Anil Kr.; Kumar, Sanjeev; Singh, Vikram

    2017-03-01

    The binary-state (success or failed) assumptions used in conventional reliability analysis are inappropriate for complex industrial systems due to the lack of sufficient probabilistic information. For large complex systems, the uncertainty of each individual parameter enhances the uncertainty of the system reliability. In this paper, the concept of fuzzy reliability is used for reliability analysis of the system, and the effect of the coverage factor and the failure and repair rates of subsystems on fuzzy availability is analyzed for the fault-tolerant crystallization system of a sugar plant. Mathematical modeling of the system is carried out using the mnemonic rule to derive the Chapman-Kolmogorov differential equations. These governing differential equations are solved with the Runge-Kutta fourth-order method.

  3. Soft error evaluation and vulnerability analysis in Xilinx Zynq-7010 system-on chip

    NASA Astrophysics Data System (ADS)

    Du, Xuecheng; He, Chaohui; Liu, Shuhuan; Zhang, Yao; Li, Yonghong; Xiong, Ceng; Tan, Pengkang

    2016-09-01

    Radiation-induced soft errors are an increasingly important threat to the reliability of modern electronic systems. In order to evaluate the system-on-chip's reliability and soft errors, the fault tree analysis method was used in this work. The system fault tree was constructed for the Xilinx Zynq-7010 All Programmable SoC, and the soft error rates of different components in the Zynq-7010 SoC were tested with an americium-241 alpha radiation source. Furthermore, parameters used to evaluate the system's reliability and safety, such as the failure rate, unavailability and mean time to failure (MTTF), were calculated using Isograph Reliability Workbench 11.0. Based on the fault tree analysis of the system-on-chip, the critical blocks and the system reliability were evaluated through qualitative and quantitative analysis.

  4. Generalized quantum kinetic expansion: Higher-order corrections to multichromophoric Förster theory

    NASA Astrophysics Data System (ADS)

    Wu, Jianlan; Gong, Zhihao; Tang, Zhoufei

    2015-08-01

    For a general two-cluster energy transfer network, a new methodology of the generalized quantum kinetic expansion (GQKE) method is developed, which predicts an exact time-convolution equation for the cluster population evolution under the initial condition of the local cluster equilibrium state. The cluster-to-cluster rate kernel is expanded over the inter-cluster couplings. The lowest second-order GQKE rate recovers the multichromophoric Förster theory (MCFT) rate. The higher-order corrections to the MCFT rate are systematically included using the continued fraction resummation form, resulting in the resummed GQKE method. The reliability of the GQKE methodology is verified in two model systems, revealing the relevance of higher-order corrections.

  5. Efficient Unsteady Flow Visualization with High-Order Access Dependencies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jiang; Guo, Hanqi; Yuan, Xiaoru

    We present a novel high-order access dependencies based model for efficient pathline computation in unsteady flow visualization. By taking longer access sequences into account to model more sophisticated data access patterns in particle tracing, our method greatly improves the accuracy and reliability in data access prediction. In our work, high-order access dependencies are calculated by tracing uniformly-seeded pathlines in both forward and backward directions in a preprocessing stage. The effectiveness of our proposed approach is demonstrated through a parallel particle tracing framework with high-order data prefetching. Results show that our method achieves higher data locality and hence improves the efficiency of pathline computation.

  6. Reliability of the Inverse Water Volumetry Method to Measure the Volume of the Upper Limb.

    PubMed

    Beek, Martinus A; te Slaa, Alexander; van der Laan, Lijckle; Mulder, Paul G H; Rutten, Harm J T; Voogd, Adri C; Luiten, Ernest J T; Gobardhan, Paul D

    2015-06-01

    Lymphedema of the upper extremity is a common side effect of lymph node dissection or irradiation of the axilla. Several techniques are being applied in order to examine the presence and severity of lymphedema. Measurement of circumference of the upper extremity is most frequently performed. An alternative is the water-displacement method. The aim of this study was to determine the reliability and the reproducibility of the "Inverse Water Volumetry apparatus" (IWV-apparatus) for the measurement of arm volumes. The IWV-apparatus is based on the water-displacement method. Measurements were performed by three breast cancer nurse practitioners on ten healthy volunteers in three weekly sessions. The intra-class correlation coefficient, defined as the ratio of the subject component to the total variance, equaled 0.99. The reliability index is calculated as 0.14 kg. This indicates that only changes in a patient's arm volume measurement of more than 0.14 kg would represent a true change in arm volume, which is about 6% of the mean arm volume of 2.3 kg. The IWV-apparatus proved to be a reliable and reproducible method to measure arm volume.

  7. A Capacity Forecast Model for Volatile Data in Maintenance Logistics

    NASA Astrophysics Data System (ADS)

    Berkholz, Daniel

    2009-05-01

    Maintenance, repair and overhaul processes (MRO processes) are elaborate and complex. Rising demands on these after sales services require reliable production planning and control methods particularly for maintaining valuable capital goods. Downtimes lead to high costs and an inability to meet delivery due dates results in severe contract penalties. Predicting the required capacities for maintenance orders in advance is often difficult due to unknown part conditions unless the goods are actually inspected. This planning uncertainty results in extensive capital tie-up by rising stock levels within the whole MRO network. The article outlines an approach to planning capacities when maintenance data forecasting is volatile. It focuses on the development of prerequisites for a reliable capacity planning model. This enables a quick response to maintenance orders by employing appropriate measures. The information gained through the model is then systematically applied to forecast both personnel capacities and the demand for spare parts. The improved planning reliability can support MRO service providers in shortening delivery times and reducing stock levels in order to enhance the performance of their maintenance logistics.

  8. FY12 End of Year Report for NEPP DDR2 Reliability

    NASA Technical Reports Server (NTRS)

    Guertin, Steven M.

    2013-01-01

    This document reports the status of the NASA Electronic Parts and Packaging (NEPP) Double Data Rate 2 (DDR2) Reliability effort for FY2012. The task expanded the focus of evaluating reliability effects targeted for device examination. FY11 work highlighted the need to test many more parts and to examine more operating conditions, in order to provide useful recommendations for NASA users of these devices. This year's efforts focused on development of test capabilities, particularly focusing on those that can be used to determine overall lot quality and identify outlier devices, and test methods that can be employed on components for flight use. Flight acceptance of components potentially includes considerable time for up-screening (though this time may not currently be used for much reliability testing). Manufacturers are much more knowledgeable about the relevant reliability mechanisms for each of their devices. We are not in a position to know what the appropriate reliability tests are for any given device, so although reliability testing could be focused for a given device, we are forced to perform a large campaign of reliability tests to identify devices with degraded reliability. With the available up-screening time for NASA parts, it is possible to run many device performance studies. This includes verification of basic datasheet characteristics. Furthermore, it is possible to perform significant pattern sensitivity studies. By doing these studies we can establish higher reliability of flight components. In order to develop these approaches, it is necessary to develop test capability that can identify reliability outliers. To do this we must test many devices to ensure outliers are in the sample, and we must develop characterization capability to measure many different parameters. For FY12 we increased capability for reliability characterization and sample size. We increased sample size this year by moving from loose devices to dual inline memory modules (DIMMs) with an approximate reduction of 20 to 50 times in terms of per device under test (DUT) cost. By increasing sample size we have improved our ability to characterize devices that may be considered reliability outliers. This report provides an update on the effort to improve DDR2 testing capability. Although focused on DDR2, the methods being used can be extended to DDR and DDR3 with relative ease.

  9. Measurement and Reliability of Response Inhibition

    PubMed Central

    Congdon, Eliza; Mumford, Jeanette A.; Cohen, Jessica R.; Galvan, Adriana; Canli, Turhan; Poldrack, Russell A.

    2012-01-01

    Response inhibition plays a critical role in adaptive functioning and can be assessed with the Stop-signal task, which requires participants to suppress prepotent motor responses. Evidence suggests that this ability to inhibit a prepotent motor response (reflected as Stop-signal reaction time (SSRT)) is a quantitative and heritable measure of interindividual variation in brain function. Although attention has been given to the optimal method of SSRT estimation, and initial evidence exists in support of its reliability, there is still variability in how Stop-signal task data are treated across samples. In order to examine this issue, we pooled data across three separate studies and examined the influence of multiple SSRT calculation methods and outlier calling on reliability (using Intra-class correlation). Our results suggest that an approach which uses the average of all available sessions, all trials of each session, and excludes outliers based on predetermined lenient criteria yields reliable SSRT estimates, while not excluding too many participants. Our findings further support the reliability of SSRT, which is commonly used as an index of inhibitory control, and provide support for its continued use as a neurocognitive phenotype. PMID:22363308
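
    As a hedged illustration of the widely used "integration" estimate of SSRT (a sketch with invented data, not the pooled multi-study pipeline described above), SSRT is the nth go reaction time, with n set by the probability of responding given a stop signal, minus the mean stop-signal delay:

    ```python
    import numpy as np

    def ssrt_integration(go_rts, ssds, p_respond_given_signal):
        """Integration-method SSRT: nth go RT minus mean stop-signal delay."""
        go_sorted = np.sort(go_rts)
        idx = int(round(p_respond_given_signal * len(go_sorted))) - 1
        nth_rt = go_sorted[max(idx, 0)]
        return nth_rt - np.mean(ssds)

    rng = np.random.default_rng(0)
    go_rts = rng.normal(450, 80, 300)               # go reaction times, ms (synthetic)
    ssds = rng.choice([150, 200, 250, 300], 120)    # stop-signal delays, ms (synthetic)
    print(f"SSRT ~ {ssrt_integration(go_rts, ssds, 0.5):.0f} ms")
    ```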

  10. Optimizing the Reliability and Performance of Service Composition Applications with Fault Tolerance in Wireless Sensor Networks

    PubMed Central

    Wu, Zhao; Xiong, Naixue; Huang, Yannong; Xu, Degang; Hu, Chunyang

    2015-01-01

    The services composition technology provides flexible methods for building service composition applications (SCAs) in wireless sensor networks (WSNs). The high reliability and high performance of SCAs help services composition technology promote the practical application of WSNs. The optimization methods for reliability and performance used for traditional software systems are mostly based on the instantiations of software components, which are inapplicable and inefficient for the ever-changing SCAs in WSNs. In this paper, we consider SCAs with fault tolerance in WSNs. Based on a Universal Generating Function (UGF), we propose a reliability and performance model of SCAs in WSNs, which generalizes a redundancy optimization problem to a multi-state system. From this model, an efficient optimization algorithm for the reliability and performance of SCAs in WSNs is developed using a Genetic Algorithm (GA) to find the optimal structure of SCAs with fault tolerance in WSNs. In order to examine the feasibility of our algorithm, we have evaluated its performance. Furthermore, the interrelationships between reliability, performance, and cost are investigated. In addition, a distinct approach to determine the most suitable parameters in the suggested algorithm is proposed. PMID:26561818

  11. Predicting laser weld reliability with stochastic reduced-order models. Predicting laser weld reliability

    DOE PAGES

    Emery, John M.; Field, Richard V.; Foulk, James W.; ...

    2015-05-26

    Laser welds are prevalent in complex engineering systems and they frequently govern failure. The weld process often results in partial penetration of the base metals, leaving sharp crack-like features with a high degree of variability in the geometry and material properties of the welded structure. Furthermore, accurate finite element predictions of the structural reliability of components containing laser welds require the analysis of a large number of finite element meshes with very fine spatial resolution, where each mesh has different geometry and/or material properties in the welded region to address variability. We found that traditional modeling approaches could not be efficiently employed. Consequently, a method for constructing a surrogate model, based on stochastic reduced-order models, is proposed to represent the laser welds within the component. Here, the uncertainty in weld microstructure and geometry is captured by calibrating plasticity parameters to experimental observations of necking because, owing to the ductility of the welds, necking – and thus peak load – plays the pivotal role in structural failure. The proposed method is exercised for a simplified verification problem and compared with traditional Monte Carlo simulation, with rather remarkable results.

  12. A comparison of matrix methods for calculating eigenvalues in acoustically lined ducts

    NASA Technical Reports Server (NTRS)

    Watson, W.; Lansing, D. L.

    1976-01-01

    Three approximate methods - finite differences, weighted residuals, and finite elements - were used to solve the eigenvalue problem which arises in finding the acoustic modes and propagation constants in an absorptively lined two-dimensional duct without airflow. The matrix equations derived for each of these methods were solved for the eigenvalues corresponding to various values of wall impedance. Two matrix orders, 20 x 20 and 40 x 40, were used. The cases considered included values of wall admittance for which exact eigenvalues were known and for which several nearly equal roots were present. Ten of the lower order eigenvalues obtained from the three approximate methods were compared with solutions calculated from the exact characteristic equation in order to make an assessment of the relative accuracy and reliability of the three methods. The best results were given by the finite element method using a cubic polynomial. Excellent accuracy was consistently obtained, even for nearly equal eigenvalues, by using a 20 x 20 order matrix.
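
    For orientation, a minimal finite-difference eigenvalue computation of the same flavor is sketched below on a 1-D model problem u'' = -k^2 u with rigid (Dirichlet) walls, whose exact eigenvalues k_n = n*pi are known; the lined-duct problem in the paper adds impedance boundary conditions not reproduced here:

    ```python
    import numpy as np

    N = 40                                      # matrix order, as in the 40 x 40 case
    h = 1.0 / (N + 1)
    # second-order central-difference discretization of d^2/dx^2 on (0, 1)
    A = (np.diag(np.full(N, -2.0)) + np.diag(np.ones(N - 1), 1)
         + np.diag(np.ones(N - 1), -1)) / h**2
    k2_approx = np.sort(-np.linalg.eigvalsh(A))  # approximations to (n*pi)^2
    k2_exact = (np.arange(1, 4) * np.pi) ** 2
    print(np.round(k2_approx[:3], 3), np.round(k2_exact, 3))
    ```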

  13. Sensor Data Fusion with Z-Numbers and Its Application in Fault Diagnosis

    PubMed Central

    Jiang, Wen; Xie, Chunhe; Zhuang, Miaoyan; Shou, Yehang; Tang, Yongchuan

    2016-01-01

    Sensor data fusion technology is widely employed in fault diagnosis. The information in a sensor data fusion system is characterized by not only fuzziness, but also partial reliability. Uncertain information of sensors, including randomness, fuzziness, etc., has been extensively studied recently. However, the reliability of a sensor is often overlooked or cannot be analyzed adequately. A Z-number, Z = (A, B), can represent the fuzziness and the reliability of information simultaneously, where the first component A represents a fuzzy restriction on the values of uncertain variables and the second component B is a measure of the reliability of A. In order to model and process the uncertainties in a sensor data fusion system reasonably, in this paper, a novel method combining the Z-number and Dempster–Shafer (D-S) evidence theory is proposed, where the Z-number is used to model the fuzziness and reliability of the sensor data and the D-S evidence theory is used to fuse the uncertain information of Z-numbers. The main advantages of the proposed method are that it provides a more robust measure of reliability to the sensor data, and the complementary information of multi-sensors reduces the uncertainty of the fault recognition, thus enhancing the reliability of fault detection. PMID:27649193
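
    To make the fusion step concrete, here is a minimal sketch of Dempster's rule of combination for two basic probability assignments over a frame of discernment; the conversion of Z-numbers into these assignments, which is specific to the proposed method, is omitted and the masses are invented:

    ```python
    def dempster_combine(m1, m2):
        """Dempster's rule; m1, m2 map frozensets of hypotheses to masses."""
        combined, conflict = {}, 0.0
        for a, ma in m1.items():
            for b, mb in m2.items():
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + ma * mb
                else:
                    conflict += ma * mb
        k = 1.0 - conflict                       # normalization constant
        return {s: v / k for s, v in combined.items()}

    # two sensors reporting on fault hypotheses {F1, F2} (illustrative masses)
    m1 = {frozenset({"F1"}): 0.7, frozenset({"F1", "F2"}): 0.3}
    m2 = {frozenset({"F1"}): 0.6, frozenset({"F2"}): 0.2, frozenset({"F1", "F2"}): 0.2}
    print(dempster_combine(m1, m2))
    ```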

  14. Reliability-based design optimization of reinforced concrete structures including soil-structure interaction using a discrete gravitational search algorithm and a proposed metamodel

    NASA Astrophysics Data System (ADS)

    Khatibinia, M.; Salajegheh, E.; Salajegheh, J.; Fadaee, M. J.

    2013-10-01

    A new discrete gravitational search algorithm (DGSA) and a metamodelling framework are introduced for reliability-based design optimization (RBDO) of reinforced concrete structures. The RBDO of structures with soil-structure interaction (SSI) effects is investigated in accordance with performance-based design. The proposed DGSA is based on the standard gravitational search algorithm (GSA) to optimize the structural cost under deterministic and probabilistic constraints. The Monte Carlo simulation (MCS) method is considered the most reliable method for estimating probabilities in reliability analysis. In order to reduce the computational time of MCS, the proposed metamodelling framework is employed to predict the responses of the SSI system in the RBDO procedure. The metamodel consists of a weighted least squares support vector machine (WLS-SVM) and a wavelet kernel function, which is called WWLS-SVM. Numerical results demonstrate the efficiency and computational advantages of DGSA and the proposed metamodel for RBDO of reinforced concrete structures.

  15. Structural design of high-performance capacitive accelerometers using parametric optimization with uncertainties

    NASA Astrophysics Data System (ADS)

    Teves, André da Costa; Lima, Cícero Ribeiro de; Passaro, Angelo; Silva, Emílio Carlos Nelli

    2017-03-01

    Electrostatic or capacitive accelerometers are among the highest volume microelectromechanical systems (MEMS) products nowadays. The design of such devices is a complex task, since they depend on many performance requirements, which are often conflicting. Therefore, optimization techniques are often used in the design stage of these MEMS devices. Because of problems with reliability, the technology of MEMS is not yet well established. Thus, in this work, size optimization is combined with the reliability-based design optimization (RBDO) method to improve the performance of accelerometers. To account for uncertainties in the dimensions and material properties of these devices, the first order reliability method is applied to calculate the probabilities involved in the RBDO formulation. Practical examples of bulk-type capacitive accelerometer designs are presented and discussed to evaluate the potential of the implemented RBDO solver.

  16. Accurate Projection Methods for the Incompressible Navier–Stokes Equations

    DOE PAGES

    Brown, David L.; Cortez, Ricardo; Minion, Michael L.

    2001-04-10

    This paper considers the accuracy of projection method approximations to the initial–boundary-value problem for the incompressible Navier–Stokes equations. The issue of how to correctly specify numerical boundary conditions for these methods has been outstanding since the birth of the second-order methodology a decade and a half ago. It has been observed that while the velocity can be reliably computed to second-order accuracy in time and space, the pressure is typically only first-order accurate in the L∞-norm. Here, we identify the source of this problem in the interplay of the global pressure-update formula with the numerical boundary conditions and present an improved projection algorithm which is fully second-order accurate, as demonstrated by a normal mode analysis and numerical experiments. In addition, a numerical method based on a gauge variable formulation of the incompressible Navier–Stokes equations, which provides another option for obtaining fully second-order convergence in both velocity and pressure, is discussed. The connection between the boundary conditions for projection methods and the gauge method is explained in detail.

  17. Complex method to calculate objective assessments of information systems protection to improve expert assessments reliability

    NASA Astrophysics Data System (ADS)

    Abdenov, A. Zh; Trushin, V. A.; Abdenova, G. A.

    2018-01-01

    The paper considers how to populate the relevant SIEM nodes with calculated objective assessments in order to improve the reliability of subjective expert assessments. The proposed methodology is necessary for the most accurate security risk assessment of information systems. This technique is also intended to establish real-time operational information protection in enterprise information systems. Risk calculations are based on objective estimates of the probabilities that adverse events are realized, and on predictions of the magnitude of damage from information security violations. Calculations of objective assessments are necessary to increase the reliability of the proposed expert assessments.

  18. Inter-rater reliability of a modified version of Delitto et al.’s classification-based system for low back pain: a pilot study

    PubMed Central

    Apeldoorn, Adri T.; van Helvoirt, Hans; Ostelo, Raymond W.; Meihuizen, Hanneke; Kamper, Steven J.; van Tulder, Maurits W.; de Vet, Henrica C. W.

    2016-01-01

    Study design Observational inter-rater reliability study. Objectives To examine: (1) the inter-rater reliability of a modified version of Delitto et al.’s classification-based algorithm for patients with low back pain; (2) the influence of different levels of familiarity with the system; and (3) the inter-rater reliability of algorithm decisions in patients who clearly fit into a subgroup (clear classifications) and those who do not (unclear classifications). Methods Patients were examined twice on the same day by two of three participating physical therapists with different levels of familiarity with the system. Patients were classified into one of four classification groups. Raters were blind to the others’ classification decision. In order to quantify the inter-rater reliability, percentages of agreement and Cohen’s Kappa were calculated. Results A total of 36 patients were included (clear classification n = 23; unclear classification n = 13). The overall rate of agreement was 53% and the Kappa value was 0.34 [95% confidence interval (CI): 0.11–0.57], which indicated only fair inter-rater reliability. Inter-rater reliability for patients with a clear classification (agreement 52%, Kappa value 0.29) was not higher than for patients with an unclear classification (agreement 54%, Kappa value 0.33). Familiarity with the system (i.e. trained with written instructions and previous research experience with the algorithm) did not improve the inter-rater reliability. Conclusion Our pilot study challenges the inter-rater reliability of the classification procedure in clinical practice. Therefore, more knowledge is needed about factors that affect the inter-rater reliability, in order to improve the clinical applicability of the classification scheme. PMID:27559279
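
    For readers unfamiliar with the statistic, a minimal illustrative computation of percentage agreement and Cohen's Kappa from two raters' classification decisions (labels and data are invented):

    ```python
    from collections import Counter

    def cohens_kappa(r1, r2):
        """Cohen's kappa for two raters' category assignments."""
        n = len(r1)
        p_o = sum(a == b for a, b in zip(r1, r2)) / n       # observed agreement
        c1, c2 = Counter(r1), Counter(r2)
        p_e = sum(c1[k] * c2[k] for k in c1) / n**2         # chance agreement
        return (p_o - p_e) / (1 - p_e)

    rater1 = ["A", "B", "A", "C", "B", "A"]
    rater2 = ["A", "B", "C", "C", "B", "B"]
    print(f"kappa = {cohens_kappa(rater1, rater2):.2f}")
    ```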

  19. On the Local Convergence of Pattern Search

    NASA Technical Reports Server (NTRS)

    Dolan, Elizabeth D.; Lewis, Robert Michael; Torczon, Virginia; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    We examine the local convergence properties of pattern search methods, complementing the previously established global convergence properties for this class of algorithms. We show that the step-length control parameter which appears in the definition of pattern search algorithms provides a reliable asymptotic measure of first-order stationarity. This gives an analytical justification for a traditional stopping criterion for pattern search methods. Using this measure of first-order stationarity, we analyze the behavior of pattern search in the neighborhood of an isolated local minimizer. We show that a recognizable subsequence converges r-linearly to the minimizer.

  20. The protonation of N2O reexamined - A case study on the reliability of various electron correlation methods for minima and transition states

    NASA Technical Reports Server (NTRS)

    Martin, J. M. L.; Lee, Timothy J.

    1993-01-01

    The protonation of N2O and the intramolecular proton transfer in N2OH(+) are studied using various basis sets and a variety of methods, including second-order many-body perturbation theory (MP2), singles and doubles coupled cluster (CCSD), the augmented coupled cluster (CCSD(T)), and complete active space self-consistent field (CASSCF) methods. For geometries, MP2 leads to serious errors even for HNNO(+); for the transition state, only CCSD(T) produces a reliable geometry due to serious nondynamical correlation effects. The proton affinity at 298.15 K is estimated at 137.6 kcal/mol, in close agreement with recent experimental determinations of 137.3 +/- 1 kcal/mol.

  1. A novel evaluation strategy for fatigue reliability of flexible nanoscale films

    NASA Astrophysics Data System (ADS)

    Zheng, Si-Xue; Luo, Xue-Mei; Wang, Dong; Zhang, Guang-Ping

    2018-03-01

    In order to evaluate the fatigue reliability of nanoscale metal films on flexible substrates, we propose an effective evaluation method that obtains the critical fatigue cracking strain from direct observation of fatigue damage sites using a conventional dynamic bending testing technique. By this method, the fatigue properties and damage behaviors of 930 nm-thick Au films and 600 nm-thick Mo-W multilayers with an individual layer thickness of 100 nm on flexible polyimide substrates were investigated. A Coffin-Manson relationship between the fatigue life and the applied strain range was obtained for the Au films and Mo-W multilayers. The characterization of fatigue damage behaviors verifies the feasibility of this method, which seems easier and more effective compared with other testing methods.
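
    A Coffin-Manson law of the form strain_range = C * Nf**m can be fitted by linear regression in log-log space; the sketch below uses invented data purely to illustrate the fitting step:

    ```python
    import numpy as np

    nf = np.array([1e3, 5e3, 2e4, 1e5])        # cycles to failure (illustrative)
    strain = np.array([2.0, 1.2, 0.8, 0.5])    # applied strain range, % (illustrative)

    # fit log(strain) = m * log(Nf) + log(C)
    m, logC = np.polyfit(np.log(nf), np.log(strain), 1)
    print(f"exponent m = {m:.3f}, coefficient C = {np.exp(logC):.3f}")
    ```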

  2. Generalised Category Attack—Improving Histogram-Based Attack on JPEG LSB Embedding

    NASA Astrophysics Data System (ADS)

    Lee, Kwangsoo; Westfeld, Andreas; Lee, Sangjin

    We present a generalised and improved version of the category attack on LSB steganography in JPEG images with a straddled embedding path. It detects low embedding rates more reliably and is also less disturbed by double-compressed images. The proposed methods are evaluated on several thousand images. The results are compared to both recent blind and specific attacks for JPEG embedding. The proposed attack permits more reliable detection, although it is based on first-order statistics only. Its simple structure makes it very fast.

  3. Reliability Evaluation and Improvement Approach of Chemical Production Man - Machine - Environment System

    NASA Astrophysics Data System (ADS)

    Miao, Yongchun; Kang, Rongxue; Chen, Xuefeng

    2017-12-01

    In recent years, with the gradual extension of reliability research, the study of production system reliability has become a hot topic in various industries. A man-machine-environment system is a complex system composed of human factors, machinery equipment, and environment. The reliability of each individual factor must be analyzed in order to transition gradually to the study of three-factor reliability. Meanwhile, the dynamic relationships among man, machine, and environment should be considered to establish an effective fuzzy evaluation mechanism to truly and effectively analyze the reliability of such systems. In this paper, based on systems engineering, fuzzy theory, reliability theory, human error, environmental impact, and machinery equipment failure theory, the reliabilities of the human factor, machinery equipment, and environment of a chemical production system were studied by the method of fuzzy evaluation. At last, the reliability of the man-machine-environment system was calculated to obtain the weighted result, which indicated that the reliability value of this chemical production system was 86.29. From the given evaluation domain it can be seen that the reliability of the integrated man-machine-environment system is in good standing, and effective measures for further improvement were proposed according to the fuzzy calculation results.

  4. On-board adaptive model for state of charge estimation of lithium-ion batteries based on Kalman filter with proportional integral-based error adjustment

    NASA Astrophysics Data System (ADS)

    Wei, Jingwen; Dong, Guangzhong; Chen, Zonghai

    2017-10-01

    With the rapid development of battery-powered electric vehicles, the lithium-ion battery plays a critical role in the reliability of the vehicle system. In order to provide timely management and protection for battery systems, it is necessary to develop a reliable battery model and accurate battery parameter estimation to describe battery dynamic behaviors. Therefore, this paper focuses on an on-board adaptive model for state-of-charge (SOC) estimation of lithium-ion batteries. Firstly, a first-order equivalent-circuit battery model is employed to describe battery dynamic characteristics. Then, the recursive least squares algorithm and an off-line identification method are used to provide good initial values of model parameters to ensure filter stability and reduce the convergence time. Thirdly, an extended Kalman filter (EKF) is applied to estimate battery SOC and model parameters on-line. Considering that the EKF is essentially a first-order Taylor approximation of the battery model, which contains inevitable model errors, a proportional integral-based error adjustment technique is employed to improve the performance of the EKF method and correct model parameters. Finally, the experimental results on lithium-ion batteries indicate that the proposed EKF with proportional integral-based error adjustment method can provide a robust and accurate battery model and on-line parameter estimation.
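
    A minimal sketch of SOC estimation with a first-order RC (Thevenin) equivalent circuit and an EKF is given below; the OCV curve and circuit parameters are invented, and the paper's recursive-least-squares initialization and PI-based error adjustment are omitted:

    ```python
    import numpy as np

    Q = 2.0 * 3600                      # capacity [A*s] (assumed 2 Ah cell)
    R0, R1, C1 = 0.05, 0.02, 1000.0     # illustrative circuit parameters [ohm, ohm, F]
    dt = 1.0                            # sample time [s]

    def ocv(soc):                       # assumed polynomial open-circuit-voltage curve
        return 3.0 + 1.2 * soc - 0.1 * soc**2

    def ekf_step(x, P, i_k, v_meas, Qn=np.diag([1e-7, 1e-5]), Rn=1e-3):
        """x = [soc, v1]; positive current = discharge."""
        a = np.exp(-dt / (R1 * C1))
        A = np.array([[1.0, 0.0], [0.0, a]])
        # predict: SOC by coulomb counting, polarization voltage by RC relaxation
        x_pred = A @ x + np.array([-dt / Q, R1 * (1 - a)]) * i_k
        P_pred = A @ P @ A.T + Qn
        # linearized measurement: v = ocv(soc) - v1 - R0*i
        dOCV = 1.2 - 0.2 * x_pred[0]
        H = np.array([[dOCV, -1.0]])
        v_pred = ocv(x_pred[0]) - x_pred[1] - R0 * i_k
        S = H @ P_pred @ H.T + Rn
        K = (P_pred @ H.T) / S
        x_new = x_pred + (K * (v_meas - v_pred)).ravel()
        P_new = (np.eye(2) - K @ H) @ P_pred
        return x_new, P_new

    x, P = np.array([0.9, 0.0]), np.eye(2) * 1e-3
    x, P = ekf_step(x, P, i_k=1.0, v_meas=3.95)
    print(f"estimated SOC = {x[0]:.3f}")
    ```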

  5. Computerized implementation of higher-order electron-correlation methods and their linear-scaling divide-and-conquer extensions.

    PubMed

    Nakano, Masahiko; Yoshikawa, Takeshi; Hirata, So; Seino, Junji; Nakai, Hiromi

    2017-11-05

    We have implemented linear-scaling divide-and-conquer (DC)-based higher-order coupled-cluster (CC) and Møller-Plesset perturbation theory (MPPT) methods, as well as their combinations, automatically by means of the tensor contraction engine, which is a computerized symbolic algebra system. The DC-based energy expressions of the standard CC and MPPT methods and of the CC methods augmented with a perturbation correction were proposed for up to high excitation orders [e.g., CCSDTQ, MP4, and CCSD(2)TQ]. The numerical assessment for hydrogen halide chains, polyene chains, and the first coordination sphere (C1) model of photoactive yellow protein has revealed that the DC-based correlation methods provide reliable correlation energies with significantly less computational cost than the conventional implementations. © 2017 Wiley Periodicals, Inc.

  6. Reducing the Risk of Damage to Power Transformers of 110 kV and Above Accompanying Internal Short Circuits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    L’vova, M. M.; L’vov, S. Yu.; Komarov, V. B.

    Methods of increasing the operating reliability of power transformers, autotransformers, and shunt reactors in order to reduce the risk of damage accompanying internal short circuits, equipment fires, and explosions are considered.

  7. Exact solutions to the time-fractional differential equations via local fractional derivatives

    NASA Astrophysics Data System (ADS)

    Guner, Ozkan; Bekir, Ahmet

    2018-01-01

    This article utilizes the local fractional derivative and the exp-function method to construct exact solutions of nonlinear time-fractional differential equations (FDEs). For illustrating the validity of the method, it is applied to the time-fractional Camassa-Holm equation and the time-fractional generalized fifth-order KdV equation. Moreover, exact solutions are obtained for the equations formed by different parameter values related to the time-fractional generalized fifth-order KdV equation. This method is a reliable and efficient mathematical tool for solving FDEs and it can be applied to other nonlinear FDEs.

  8. Quality Evaluation of Raw Moutan Cortex Using the AHP and Gray Correlation-TOPSIS Method

    PubMed Central

    Zhou, Sujuan; Liu, Bo; Meng, Jiang

    2017-01-01

    Background: Raw Moutan cortex (RMC) is an important Chinese herbal medicine. Comprehensive and objective quality evaluation of Chinese herbal medicine has been one of the most important issues in modern herbs development. Objective: To evaluate and compare the quality of RMC using the weighted gray correlation-TOPSIS (Technique for Order Preference by Similarity to an Ideal Solution) method. Materials and Methods: The percentage composition of gallic acid, catechin, oxypaeoniflorin, paeoniflorin, quercetin, benzoylpaeoniflorin, and paeonol in different batches of RMC was determined, and MATLAB programming was then used to construct the gray correlation-TOPSIS assessment model for quality evaluation of RMC. Results: The quality evaluation results of the model evaluation and the objective evaluation were consistent, reliable, and stable. Conclusion: The gray correlation-TOPSIS model can be well applied to the quality evaluation of traditional Chinese medicine with multiple components and has broad prospects in application. SUMMARY The experiment constructs a model to evaluate the quality of RMC using the weighted gray correlation-TOPSIS method. Results show the model is reliable and provides a feasible way of evaluating the quality of traditional Chinese medicine with multiple components. PMID:28839384
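
    A minimal TOPSIS sketch is given below (without the gray-correlation weighting or AHP weight derivation described in the paper): batches are ranked by closeness to the ideal solution over the seven constituent contents, with an invented data matrix and equal weights:

    ```python
    import numpy as np

    def topsis(X, w):
        """X: (alternatives x criteria) matrix, w: criterion weights."""
        R = X / np.linalg.norm(X, axis=0)             # vector-normalize each criterion
        V = R * w                                     # weighted normalized matrix
        ideal, anti = V.max(axis=0), V.min(axis=0)    # all criteria treated as benefits
        d_pos = np.linalg.norm(V - ideal, axis=1)
        d_neg = np.linalg.norm(V - anti, axis=1)
        return d_neg / (d_pos + d_neg)                # closeness coefficient per batch

    X = np.random.default_rng(1).uniform(0.5, 2.0, size=(5, 7))  # 5 batches x 7 components
    w = np.full(7, 1 / 7)                             # equal weights; AHP would refine these
    print(np.argsort(-topsis(X, w)))                  # batches ranked best-first
    ```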

  9. Probabilistic modelling of overflow, surcharge and flooding in urban drainage using the first-order reliability method and parameterization of local rain series.

    PubMed

    Thorndahl, S; Willems, P

    2008-01-01

    Failure of urban drainage systems may occur due to surcharge or flooding at specific manholes in the system, or due to overflows from combined sewer systems to receiving waters. To quantify the probability or return period of failure, standard approaches make use of the simulation of design storms or long historical rainfall series in a hydrodynamic model of the urban drainage system. In this paper, an alternative probabilistic method is investigated: the first-order reliability method (FORM). To apply this method, a long rainfall time series was divided into rainstorms (rain events), and each rainstorm was conceptualized as a synthetic rainfall hyetograph with a Gaussian shape parameterized by rainstorm depth, duration, and peak intensity. Probability distributions were calibrated for these three parameters and used as the basis of the failure probability estimation, together with a hydrodynamic simulation model to determine the failure conditions for each set of parameters. The method takes into account the uncertainties involved in the rainstorm parameterization. Comparison is made between the failure probability results of the FORM method, the standard method using long-term simulations, and alternative methods based on random sampling (Monte Carlo direct sampling and importance sampling). It is concluded that, without crucial influence on the modelling accuracy, FORM is very applicable as an alternative to traditional long-term simulations of urban drainage systems.
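
    A minimal FORM sketch is shown below, using the Hasofer-Lind/Rackwitz-Fiessler iteration to find the design point in standard normal space; the limit-state function, standing in for the hydrodynamic surcharge criterion with three rainstorm parameters, is invented for illustration:

    ```python
    import numpy as np
    from scipy.stats import norm

    def g(u):
        """Assumed limit state in standard normal space; failure when g < 0."""
        depth, duration, peak = 30 + 8 * u[0], 60 + 20 * u[1], 12 + 4 * u[2]
        return 600.0 - depth * peak / np.sqrt(duration)

    def grad(u, h=1e-6):
        return np.array([(g(u + h * e) - g(u - h * e)) / (2 * h) for e in np.eye(3)])

    u = np.zeros(3)
    for _ in range(50):                   # HL-RF fixed-point iteration to the design point
        gu, dgu = g(u), grad(u)
        u_new = (dgu @ u - gu) * dgu / (dgu @ dgu)
        if np.linalg.norm(u_new - u) < 1e-8:
            u = u_new
            break
        u = u_new

    beta = np.linalg.norm(u)              # reliability index
    print(f"beta = {beta:.3f}, Pf ~ {norm.cdf(-beta):.2e}")
    ```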

  10. 78 FR 55249 - Transmission Relay Loadability Reliability Standard; Notice of Compliance Filing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-10

    ...; RM11-16-000] Transmission Relay Loadability Reliability Standard; Notice of Compliance Filing Take.... \\1\\ Transmission Relay Loadability Reliability Standard, Order No. 733, 130 FERC ¶ 61,221 (2010..., Order No. 733-B, 136 FERC ¶ 61,185 (2011). \\2\\ Transmission Relay Loadability Reliability Standard, 138...

  11. Inventing the future of reliability: FERC's recent orders and the consolidation of reliability authority

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skees, J. Daniel

    2010-06-15

    The Energy Policy Act of 2005 established mandatory reliability standard enforcement under a system in which the Federal Energy Regulatory Commission and the Electric Reliability Organization would have their own spheres of responsibility and authority. Recent orders, however, reflect the Commission's frustration with the reliability standard drafting process and suggest that the Electric Reliability Organization's discretion is likely to receive less deference in the future. (author)

  12. Laser notching ceramics for reliable fracture toughness testing

    DOE PAGES

    Barth, Holly D.; Elmer, John W.; Freeman, Dennis C.; ...

    2015-09-19

    A new method for notching ceramics was developed using a picosecond laser for fracture toughness testing of alumina samples. The test geometry incorporated a single-edge V-notch that was cut using picosecond laser micromachining. This method has been used in the past for cutting ceramics, and is known to remove material with little to no thermal effect on the surrounding material matrix. This study showed that laser-assisted machining for fracture toughness testing of ceramics was reliable, quick, and cost effective. In order to assess the laser-notched single-edge-V-notch beam method, fracture toughness results were compared to results from other, more traditional methods, specifically the surface-crack-in-flexure and chevron-notch bend tests. Lastly, the results showed that picosecond laser notching produced precise notches in post-failure measurements, and that the measured fracture toughness results showed improved consistency compared to traditional fracture toughness methods.

  13. Numerical simulation of the cavitation characteristics of a mixed-flow pump

    NASA Astrophysics Data System (ADS)

    Chen, T.; Li, S. R.; Li, W. Z.; Liu, Y. L.; Wu, D. Z.; Wang, L. Q.

    2013-12-01

    As a kind of general equipment for fluid transportation, pumps are widely used in industry, including many applications involving high-pressure, high-temperature, and toxic fluids. The performance of pumps affects the safety and reliability of the whole equipment system. Cavitation in pumps causes loss of performance and erosion of the blades, which can affect the running stability and reliability of the pump system. In this paper, a numerical method for cavitation performance prediction is presented. In order to investigate the accuracy of the method, CFD flow analysis and cavitation performance predictions of a mixed-flow pump were carried out. The numerical results were compared with the test results.

  14. Fatigue Reliability of Gas Turbine Engine Structures

    NASA Technical Reports Server (NTRS)

    Cruse, Thomas A.; Mahadevan, Sankaran; Tryon, Robert G.

    1997-01-01

    The results of an investigation are described for fatigue reliability in engine structures. The description consists of two parts. Part 1 is for method development. Part 2 is a specific case study. In Part 1, the essential concepts and practical approaches to damage tolerance design in the gas turbine industry are summarized. These have evolved over the years in response to flight safety certification requirements. The effect of Non-Destructive Evaluation (NDE) methods on these methods is also reviewed. Assessment methods based on probabilistic fracture mechanics, with regard to both crack initiation and crack growth, are outlined. Limit state modeling techniques from structural reliability theory are shown to be appropriate for application to this problem, for both individual failure mode and system-level assessment. In Part 2, the results of a case study for the high pressure turbine of a turboprop engine are described. The response surface approach is used to construct a fatigue performance function. This performance function is used with the First Order Reliability Method (FORM) to determine the probability of failure and the sensitivity of the fatigue life to the engine parameters for the first stage disk rim of the two stage turbine. A hybrid combination of regression and Monte Carlo simulation is used to incorporate time-dependent random variables. System reliability is used to determine the system probability of failure, and the sensitivity of the system fatigue life to the engine parameters of the high pressure turbine. The variation in the primary hot gas and secondary cooling air, the uncertainty of the complex mission loading, and the scatter in the material data are considered.

  15. Structural reliability methods: Code development status

    NASA Astrophysics Data System (ADS)

    Millwater, Harry R.; Thacker, Ben H.; Wu, Y.-T.; Cruse, T. A.

    1991-05-01

    The Probabilistic Structures Analysis Method (PSAM) program integrates state of the art probabilistic algorithms with structural analysis methods in order to quantify the behavior of Space Shuttle Main Engine structures subject to uncertain loadings, boundary conditions, material parameters, and geometric conditions. An advanced, efficient probabilistic structural analysis software program, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) was developed as a deliverable. NESSUS contains a number of integrated software components to perform probabilistic analysis of complex structures. A nonlinear finite element module NESSUS/FEM is used to model the structure and obtain structural sensitivities. Some of the capabilities of NESSUS/FEM are shown. A Fast Probability Integration module NESSUS/FPI estimates the probability given the structural sensitivities. A driver module, PFEM, couples the FEM and FPI. NESSUS, version 5.0, addresses component reliability, resistance, and risk.

  16. Structural reliability methods: Code development status

    NASA Technical Reports Server (NTRS)

    Millwater, Harry R.; Thacker, Ben H.; Wu, Y.-T.; Cruse, T. A.

    1991-01-01

    The Probabilistic Structures Analysis Method (PSAM) program integrates state of the art probabilistic algorithms with structural analysis methods in order to quantify the behavior of Space Shuttle Main Engine structures subject to uncertain loadings, boundary conditions, material parameters, and geometric conditions. An advanced, efficient probabilistic structural analysis software program, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) was developed as a deliverable. NESSUS contains a number of integrated software components to perform probabilistic analysis of complex structures. A nonlinear finite element module NESSUS/FEM is used to model the structure and obtain structural sensitivities. Some of the capabilities of NESSUS/FEM are shown. A Fast Probability Integration module NESSUS/FPI estimates the probability given the structural sensitivities. A driver module, PFEM, couples the FEM and FPI. NESSUS, version 5.0, addresses component reliability, resistance, and risk.

  17. A Human Reliability Based Usability Evaluation Method for Safety-Critical Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phillippe Palanque; Regina Bernhaupt; Ronald Boring

    2006-04-01

    Recent years have seen an increasing use of sophisticated interaction techniques, including in the field of safety-critical interactive software [8]. The use of such techniques has been required in order to increase the bandwidth between users and systems and thus to help them deal efficiently with increasingly complex systems. These techniques come from research and innovation done in the field of human-computer interaction (HCI). A significant effort is currently being undertaken by the HCI community in order to apply and extend current usability evaluation techniques to these new kinds of interaction techniques. However, very little has been done to improve the reliability of software offering these kinds of interaction techniques. Even testing basic graphical user interfaces remains a challenge that has rarely been addressed in the field of software engineering [9]. However, the unreliability of interactive software can jeopardize usability evaluation by showing unexpected or undesired behaviors. The aim of this SIG is to provide a forum for both researchers and practitioners interested in testing interactive software. Our goal is to define a roadmap of activities to cross-fertilize usability and reliability testing of these kinds of systems to minimize duplicate efforts in both communities.

  18. Methods for slow axis beam quality improvement of high power broad area diode lasers

    NASA Astrophysics Data System (ADS)

    An, Haiyan; Xiong, Yihan; Jiang, Ching-Long J.; Schmidt, Berthold; Treusch, Georg

    2014-03-01

    For high-brightness direct diode laser systems, it is of fundamental importance to improve the slow axis beam quality of the incorporated laser diodes regardless of what beam combining technology is applied. To further advance our products in terms of increased brightness at a high power level, we must optimize the slow axis beam quality despite the far-field blooming at high current levels. The latter is caused predominantly by the built-in index step in combination with the thermal lens effect. Most of the methods for beam quality improvement reported in publications sacrifice device efficiency and reliable output power. In order to improve the beam quality as well as maintain the efficiency and reliable output power, we investigated methods of influencing local heat generation to reduce the thermal gradient across the slow axis direction, optimizing the built-in index step, and discriminating high-order modes. Based on our findings, we have combined different methods in our new device design. Subsequently, the beam parameter product (BPP) of a 10% fill factor bar has improved by approximately 30% at 7 W/emitter without an efficiency penalty. This technology has enabled fiber-coupled high-brightness multi-kilowatt direct diode laser systems. In this paper, we will elaborate on the methods used as well as the results achieved.

  19. The use and reliability of SymNose for quantitative measurement of the nose and lip in unilateral cleft lip and palate patients.

    PubMed

    Mosmuller, David; Tan, Robin; Mulder, Frans; Bachour, Yara; de Vet, Henrica; Don Griot, Peter

    2016-10-01

    It is essential to have a reliable assessment method in order to compare the results of cleft lip and palate surgery. In this study the computer-based program SymNose, a method for quantitative assessment of the nose and lip, is assessed for usability and reliability. The symmetry of the nose and lip was measured twice in 50 six-year-old complete and incomplete unilateral cleft lip and palate patients by four observers. For the frontal view the asymmetry levels of the nose and upper lip were evaluated, and for the basal view the asymmetry levels of the nose and nostrils were evaluated. The mean inter-observer reliability when tracing each image once or twice was 0.70 and 0.75, respectively. Tracing the photographs with two observers and four observers gave a mean inter-observer score of 0.86 and 0.92, respectively. The mean intra-observer reliability varied between 0.80 and 0.84. SymNose is a practical and reliable tool for the retrospective assessment of large caseloads of 2D photographs of cleft patients for research purposes. Moderate to high single inter-observer reliability was found. In future research with SymNose, reliable outcomes can be achieved by using the average outcomes of single tracings by two observers. Copyright © 2016 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  20. Self-Tuning Method for Increased Obstacle Detection Reliability Based on Internet of Things LiDAR Sensor Models

    PubMed Central

    2018-01-01

    On-chip LiDAR sensors for vehicle collision avoidance are a rapidly expanding area of research and development. The assessment of reliable obstacle detection using data collected by LiDAR sensors has become a key issue that the scientific community is actively exploring. The design of a self-tuning methodology and its implementation are presented in this paper, to maximize the reliability of a LiDAR sensor network for obstacle detection in ‘Internet of Things’ (IoT) mobility scenarios. The Webots Automobile 3D simulation tool for emulating sensor interaction in complex driving environments is selected in order to achieve that objective. Furthermore, a model-based framework is defined that employs a point-cloud clustering technique, and an error-based prediction model library that is composed of a multilayer perceptron neural network, and k-nearest neighbors and linear regression models. Finally, a reinforcement learning technique, specifically a Q-learning method, is implemented to determine the number of LiDAR sensors that are required to increase sensor reliability for obstacle localization tasks. In addition, an IoT driving-assistance user scenario, connecting a five-LiDAR-sensor network, is designed and implemented to validate the accuracy of the computational intelligence-based framework. The results demonstrated that the self-tuning method is an appropriate strategy to increase the reliability of the sensor network while minimizing detection thresholds. PMID:29748521

  1. Self-Tuning Method for Increased Obstacle Detection Reliability Based on Internet of Things LiDAR Sensor Models.

    PubMed

    Castaño, Fernando; Beruvides, Gerardo; Villalonga, Alberto; Haber, Rodolfo E

    2018-05-10

    On-chip LiDAR sensors for vehicle collision avoidance are a rapidly expanding area of research and development. The assessment of reliable obstacle detection using data collected by LiDAR sensors has become a key issue that the scientific community is actively exploring. The design of a self-tuning methodology and its implementation are presented in this paper, to maximize the reliability of a LiDAR sensor network for obstacle detection in 'Internet of Things' (IoT) mobility scenarios. The Webots Automobile 3D simulation tool for emulating sensor interaction in complex driving environments is selected in order to achieve that objective. Furthermore, a model-based framework is defined that employs a point-cloud clustering technique, and an error-based prediction model library that is composed of a multilayer perceptron neural network, and k-nearest neighbors and linear regression models. Finally, a reinforcement learning technique, specifically a Q-learning method, is implemented to determine the number of LiDAR sensors that are required to increase sensor reliability for obstacle localization tasks. In addition, an IoT driving-assistance user scenario, connecting a five-LiDAR-sensor network, is designed and implemented to validate the accuracy of the computational intelligence-based framework. The results demonstrated that the self-tuning method is an appropriate strategy to increase the reliability of the sensor network while minimizing detection thresholds.
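
    A generic tabular Q-learning sketch for choosing the number of sensors is shown below; the reward model (detection reliability versus sensor cost) is an invented stand-in for the simulated Webots environment used in the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n_max = 8                                   # states: 1..n_max sensors in the network
    Q = np.zeros((n_max + 1, 2))                # actions: 0 = remove, 1 = add a sensor
    alpha, gamma, eps = 0.1, 0.9, 0.2

    def reward(n):
        """Assumed: detection reliability saturates with n, cost grows linearly."""
        return (1 - 0.5 ** n) - 0.08 * n

    n = 1
    for _ in range(5000):                       # epsilon-greedy Q-learning
        a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[n]))
        n_next = min(n_max, n + 1) if a == 1 else max(1, n - 1)
        Q[n, a] += alpha * (reward(n_next) + gamma * Q[n_next].max() - Q[n, a])
        n = n_next

    n = 1
    for _ in range(n_max):                      # roll out the learned greedy policy
        a = int(np.argmax(Q[n]))
        n = min(n_max, n + 1) if a == 1 else max(1, n - 1)
    print("sensor count under greedy policy:", n)
    ```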

  2. The reliability of the Australasian Triage Scale: a meta-analysis

    PubMed Central

    Ebrahimi, Mohsen; Heydari, Abbas; Mazlom, Reza; Mirhaghi, Amir

    2015-01-01

    BACKGROUND: Although the Australasian Triage Scale (ATS) was developed two decades ago, its reliability has not been well defined; therefore, we present a meta-analysis of the reliability of the ATS in order to reveal to what extent the ATS is reliable. DATA SOURCES: Electronic databases were searched up to March 2014. The included studies were those that reported sample sizes, reliability coefficients, and an adequate description of the ATS reliability assessment. The guidelines for reporting reliability and agreement studies (GRRAS) were used. Two reviewers independently examined abstracts and extracted data. The effect size was obtained by the z-transformation of reliability coefficients. Data were pooled with random-effects models, and meta-regression was done based on the method-of-moments estimator. RESULTS: Six studies were finally included. The pooled coefficient for the ATS was substantial at 0.428 (95% CI 0.340–0.509). The rate of mis-triage was less than fifty percent. The agreement on the adult version is higher than on the pediatric version. CONCLUSION: The ATS has shown an acceptable level of overall reliability in the emergency department, but it needs more development to reach an almost perfect agreement. PMID:26056538
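
    The pooling step can be illustrated with Fisher's z-transform and inverse-variance weights, a fixed-effect simplification of the random-effects model used in the paper; the coefficients and sample sizes below are invented:

    ```python
    import numpy as np

    r = np.array([0.35, 0.45, 0.40, 0.50, 0.38, 0.44])   # per-study reliability coefficients
    n = np.array([120, 80, 200, 150, 90, 60])            # per-study sample sizes

    z = np.arctanh(r)                  # Fisher z-transformation
    w = n - 3                          # inverse variance of z is approximately 1/(n - 3)
    z_bar = np.sum(w * z) / np.sum(w)
    se = 1 / np.sqrt(np.sum(w))
    lo, hi = np.tanh(z_bar - 1.96 * se), np.tanh(z_bar + 1.96 * se)
    print(f"pooled = {np.tanh(z_bar):.3f} (95% CI {lo:.3f}-{hi:.3f})")
    ```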

  3. Smart accelerometer

    NASA Astrophysics Data System (ADS)

    Bozeman, Richard J., Jr.

    1992-02-01

    The invention discloses methods and apparatus for detecting vibrations from machines which indicate an impending malfunction for the purpose of preventing additional damage and allowing for an orderly shutdown or a change in mode of operation. The method and apparatus is especially suited for reliable operation in providing thruster control data concerning unstable vibration in an electrical environment which is typically noisy and in which unrecognized ground loops may exist.

  4. Smart accelerometer

    NASA Astrophysics Data System (ADS)

    Bozeman, Richard J., Jr.

    1994-05-01

    The invention discloses methods and apparatus for detecting vibrations from machines which indicate an impending malfunction for the purpose of preventing additional damage and allowing for an orderly shutdown or a change in mode of operation. The method and apparatus is especially suited for reliable operation in providing thruster control data concerning unstable vibration in an electrical environment which is typically noisy and in which unrecognized ground loops may exist.

  5. Smart accelerometer. [vibration damage detection

    NASA Technical Reports Server (NTRS)

    Bozeman, Richard J., Jr. (Inventor)

    1994-01-01

    The invention discloses methods and apparatus for detecting vibrations from machines which indicate an impending malfunction for the purpose of preventing additional damage and allowing for an orderly shutdown or a change in mode of operation. The method and apparatus is especially suited for reliable operation in providing thruster control data concerning unstable vibration in an electrical environment which is typically noisy and in which unrecognized ground loops may exist.

  6. Cell tracking for cell image analysis

    NASA Astrophysics Data System (ADS)

    Bise, Ryoma; Sato, Yoichi

    2017-04-01

    Cell image analysis is important for research and discovery in biology and medicine. In this paper, we present our cell tracking methods, which are capable of obtaining fine-grained cell behavior metrics. In order to address difficulties under dense culture conditions, where cell detection cannot be done reliably since cells often touch and have blurry intercellular boundaries, we propose two methods: global data association, and jointly solving cell detection and association. We also show the effectiveness of the proposed methods by applying them to biological research.

  7. Non-standard finite difference and Chebyshev collocation methods for solving fractional diffusion equation

    NASA Astrophysics Data System (ADS)

    Agarwal, P.; El-Sayed, A. A.

    2018-06-01

    In this paper, a new numerical technique for solving the fractional-order diffusion equation is introduced. This technique basically depends on the non-standard finite difference (NSFD) method and the Chebyshev collocation method, where the fractional derivatives are described in the Caputo sense. The Chebyshev collocation method together with the NSFD method is used to convert the problem into a system of algebraic equations. These equations are solved numerically using Newton's iteration method. The applicability, reliability, and efficiency of the presented technique are demonstrated through some given numerical examples.

  8. Research of vibration controlling based on programmable logic controller for electrostatic precipitator

    NASA Astrophysics Data System (ADS)

    Zhang, Zisheng; Li, Yanhu; Li, Jiaojiao; Liu, Zhiqiang; Li, Qing

    2013-03-01

    In order to improve the reliability, stability, and automation of the electrostatic precipitator (ESP), the vibration motor circuits for the ESP and the vibration-control ladder diagram program are investigated using a high-performance Schneider PLC and the Twidosoft programming software. Operational results show that after adopting the PLC, the vibration motor can run automatically; compared with a traditional vibration control system based on a single-chip microcomputer, it has higher reliability, better stability, and a higher dust removal rate, with dust emission concentrations <= 50 mg m-3, providing a new method for the vibration control of ESPs.

  9. Reliability Estimation of Aero-engine Based on Mixed Weibull Distribution Model

    NASA Astrophysics Data System (ADS)

    Yuan, Zhongda; Deng, Junxiang; Wang, Dawei

    2018-02-01

    An aero-engine is a complex mechano-electronic system, and in the reliability analysis of such systems the Weibull distribution model plays an irreplaceable role. To date, only the two-parameter and three-parameter Weibull distribution models have been widely used. Due to the diversity of engine failure modes, a single Weibull distribution model incurs large errors. By contrast, a mixed Weibull distribution model can take a variety of engine failure modes into account, so it is a good statistical analysis model. In addition to the concept of a dynamic weight coefficient, a three-parameter correlation-coefficient optimization method is applied to enhance the Weibull distribution model and make the reliability estimation result more accurate, thus greatly improving the precision of the mixed-distribution reliability model. All of this is advantageous for popularizing the Weibull distribution model in engineering applications.
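
    An illustrative two-component mixed Weibull reliability function, R(t) = w1*exp(-(t/eta1)^beta1) + w2*exp(-(t/eta2)^beta2), is sketched below with invented parameters standing in for two engine failure modes:

    ```python
    import numpy as np

    def mixed_weibull_R(t, weights, betas, etas):
        """Reliability of a mixture of Weibull failure modes."""
        t = np.asarray(t, dtype=float)
        return sum(w * np.exp(-(t / eta) ** b)
                   for w, b, eta in zip(weights, betas, etas))

    t = np.linspace(0, 4000, 5)                    # operating hours
    R = mixed_weibull_R(t, weights=[0.3, 0.7],     # mode mixing weights (sum to 1)
                        betas=[0.8, 3.0],          # infant-mortality vs. wear-out shape
                        etas=[1500.0, 2500.0])     # characteristic lives
    print(np.round(R, 3))
    ```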

  10. Total systems design analysis of high performance structures

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1993-01-01

    Designer-controlled parameters were identified at interdiscipline interfaces to optimize structural systems performance and downstream development and operations with reliability and least life-cycle cost. Interface tasks and iterations are tracked through a matrix of performance disciplines integration versus manufacturing, verification, and operations interactions for a total system design analysis. Performance integration tasks include shapes, sizes, environments, and materials. Integrity integrating tasks are reliability and recurring structural costs. Significant interface designer-control parameters were noted as shapes, dimensions, probability range factors, and cost. The structural failure concept is presented, and first-order reliability and deterministic methods, their benefits, and their limitations are discussed. A deterministic reliability technique combining the benefits of both is proposed for static structures, which is also timely and economically verifiable. Though launch vehicle environments were primarily considered, the system design process is applicable to any surface system using its own unique field environments.

  11. New methods for analyzing semantic graph based assessments in science education

    NASA Astrophysics Data System (ADS)

    Vikaros, Lance Steven

    This research investigated how the scoring of semantic graphs (known by many as concept maps) could be improved and automated in order to address issues of inter-rater reliability and scalability. As part of the NSF funded SENSE-IT project to introduce secondary school science students to sensor networks (NSF Grant No. 0833440), semantic graphs illustrating how temperature change affects water ecology were collected from 221 students across 16 schools. The graphing task did not constrain students' use of terms, as is often done with semantic graph based assessment due to coding and scoring concerns. The graphing software used provided real-time feedback to help students learn how to construct graphs, stay on topic and effectively communicate ideas. The collected graphs were scored by human raters using assessment methods expected to boost reliability, which included adaptations of traditional holistic and propositional scoring methods, use of expert raters, topical rubrics, and criterion graphs. High levels of inter-rater reliability were achieved, demonstrating that vocabulary constraints may not be necessary after all. To investigate a new approach to automating the scoring of graphs, thirty-two different graph features characterizing graphs' structure, semantics, configuration and process of construction were then used to predict human raters' scoring of graphs in order to identify feature patterns correlated to raters' evaluations of graphs' topical accuracy and complexity. Results led to the development of a regression model able to predict raters' scoring with 77% accuracy, with 46% accuracy expected when used to score new sets of graphs, as estimated via cross-validation tests. Although such performance is comparable to other graph and essay based scoring systems, cross-context testing of the model and methods used to develop it would be needed before it could be recommended for widespread use. Still, the findings suggest techniques for improving the reliability and scalability of semantic graph based assessments without requiring constraint of how ideas are expressed.
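
    The kind of rater-score prediction described above can be sketched as a regression on numeric graph features with cross-validated accuracy estimation; the feature matrix and scores below are synthetic, not the study's data:

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(7)
    X = rng.random((221, 32))                            # 221 graphs x 32 graph features
    y = X @ rng.random(32) + rng.normal(0, 0.5, 221)     # synthetic rater scores

    model = LinearRegression()
    cv_r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"in-sample R^2: {model.fit(X, y).score(X, y):.2f}, "
          f"cross-validated R^2: {cv_r2.mean():.2f}")
    ```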

  12. A reliable algorithm for optimal control synthesis

    NASA Technical Reports Server (NTRS)

    Vansteenwyk, Brett; Ly, Uy-Loi

    1992-01-01

    In recent years, powerful design tools for linear time-invariant multivariable control systems have been developed based on direct parameter optimization. In this report, an algorithm for reliable optimal control synthesis using parameter optimization is presented. Specifically, a robust numerical algorithm is developed for the evaluation of the H(sup 2)-like cost functional and its gradients with respect to the controller design parameters. The method is specifically designed to handle defective degenerate systems and is based on the well-known Pade series approximation of the matrix exponential. Numerical test problems in control synthesis for simple mechanical systems and for a flexible structure with densely packed modes illustrate positively the reliability of this method when compared to a method based on diagonalization. Several types of cost functions have been considered: a cost function for robust control consisting of a linear combination of quadratic objectives for deterministic and random disturbances, and one representing an upper bound on the quadratic objective for worst case initial conditions. Finally, a framework for multivariable control synthesis has been developed combining the concept of closed-loop transfer recovery with numerical parameter optimization. The procedure enables designers to synthesize not only observer-based controllers but also controllers of arbitrary order and structure. Numerical design solutions rely heavily on the robust algorithm due to the high order of the synthesis model and the presence of near-overlapping modes. The design approach is successfully applied to the design of a high-bandwidth control system for a rotorcraft.

  13. Finger-vein and fingerprint recognition based on a feature-level fusion method

    NASA Astrophysics Data System (ADS)

    Yang, Jinfeng; Hong, Bofeng

    2013-07-01

    Multimodal biometrics based on finger identification is a hot topic in recent years. In this paper, a novel fingerprint-vein based biometric method is proposed to improve the reliability and accuracy of finger recognition systems. First, second-order steerable filters are used to enhance and extract the minutiae features of the fingerprint (FP) and finger-vein (FV). Second, the texture features of the fingerprint and finger-vein are extracted by a bank of Gabor filters. Third, a new triangle-region fusion method is proposed to integrate all the fingerprint and finger-vein features at the feature level. Thus, the fused features contain both the finger texture information and the minutiae triangular geometry structure. Finally, experimental results on the self-constructed finger-vein and fingerprint databases show that the proposed method is reliable and precise for personal identification.
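
    A minimal sketch of the Gabor filter bank step in Python (generic scikit-image usage on random data standing in for a normalized finger image; the bank geometry is an assumption, not the paper's configuration):

      import numpy as np
      from skimage.filters import gabor

      rng = np.random.default_rng(1)
      image = rng.random((64, 64))             # stand-in for a finger-image ROI

      features = []
      for frequency in (0.1, 0.2, 0.4):        # 3 scales x 4 orientations
          for theta in np.linspace(0, np.pi, 4, endpoint=False):
              real, imag = gabor(image, frequency=frequency, theta=theta)
              magnitude = np.hypot(real, imag)
              features += [magnitude.mean(), magnitude.std()]  # pooled texture statistics

      print(len(features), "texture features")  # 24-dimensional descriptor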

  14. Modeling erosion in a southern New Mexico watershed using agwa: Sensitivity to variations of input precision and scale

    USDA-ARS?s Scientific Manuscript database

    Rangeland environments are particularly susceptible to erosion due to extreme rainfall events and low vegetation cover. Landowners and managers need access to reliable erosion evaluation methods in order to protect productivity and hydrologic integrity of their rangelands and make resource allocati...

  15. 78 FR 48422 - Agency Information Collection Activities: Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-08

    ... quantitative data through surveys with working-age (age 18-61) and older American (age 62 and older) consumers in order to develop and refine survey instruments that will enable the CFPB to reliably and... conducting research to identify methods and strategies to educate and counsel seniors, and developing goals...

  16. Application of a swarm-based approach for phase unwrapping

    NASA Astrophysics Data System (ADS)

    da S. Maciel, Lucas; Albertazzi G., Armando, Jr.

    2014-07-01

    An algorithm for phase unwrapping based on swarm intelligence is proposed. The novel approach is based on the emergent behavior of swarms. This behavior is the result of the interactions between independent agents following a simple set of rules and is regarded as fast, flexible and robust. The rules here were designed with two purposes. First, the collective behavior must result in a reliable map of the unwrapped phase. The unwrapping reliability was evaluated by each agent during run-time, based on the quality of the neighboring pixels. Second, the rule set must result in a behavior that focuses on wrapped regions. Stigmergy and communication rules were implemented in order to enable each agent to seek less worked areas of the image. The agents were modeled as Finite-State Machines. Based on the availability of unwrappable pixels, each agent assumed a different state in order to better adapt itself to the surroundings. The implemented rule set was able to fulfill the requirements on reliability and focused unwrapping. The unwrapped phase map was comparable to those from established methods, as the agents were able to reliably evaluate each pixel's quality. Also, the unwrapping behavior, observed in real time, was able to focus on workable areas as the agents communicated in order to find less traveled regions. The results were very positive for such a new approach to the phase unwrapping problem. Finally, the authors see great potential for future developments concerning the flexibility, robustness and processing times of the swarm-based algorithm.

  17. The reliability of workplace-based assessment in postgraduate medical education and training: a national evaluation in general practice in the United Kingdom.

    PubMed

    Murphy, Douglas J; Bruce, David A; Mercer, Stewart W; Eva, Kevin W

    2009-05-01

    To investigate the reliability and feasibility of six potential workplace-based assessment methods in general practice training: criterion audit, multi-source feedback from clinical and non-clinical colleagues, patient feedback (the CARE Measure), referral letters, significant event analysis, and video analysis of consultations. The performance of GP registrars (trainees) was evaluated with each tool to assess the tools' reliability and feasibility, given the raters and number of assessments needed. Participant experience of the process was determined by questionnaire. 171 GP registrars and their trainers, drawn from nine deaneries (representing all four countries in the UK), participated. The ability of each tool to differentiate between doctors (reliability) was assessed using generalisability theory. Decision studies were then conducted to determine the number of observations required to achieve an acceptably high reliability for "high-stakes assessment" using each instrument. Finally, descriptive statistics were used to summarise participants' ratings of their experience using these tools. Multi-source feedback from colleagues and patient feedback on consultations emerged as the two methods most likely to offer a reliable and feasible opinion of workplace performance. Reliability coefficients of 0.8 were attainable with 41 CARE Measure patient questionnaires and six clinical and/or five non-clinical colleagues per doctor when assessed on two occasions. For the other four methods tested, 10 or more assessors were required per doctor in order to achieve a reliable assessment, making the feasibility of their use in high-stakes assessment extremely low. Participant feedback did not raise any major concerns regarding the acceptability, feasibility, or educational impact of the tools. The combination of patient and colleague views of doctors' performance, coupled with reliable competence measures, may offer a suitable evidence base on which to monitor progress and completion of doctors' training in general practice.
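
    The decision-study logic can be illustrated with the single-facet Spearman-Brown relation (a simplification of the multi-facet generalisability analysis actually performed); the Python sketch below projects how many observations are needed to reach a target reliability of 0.8.

      import math

      def reliability_of_mean(r_single, n):
          """Reliability of the mean of n parallel observations."""
          return n * r_single / (1 + (n - 1) * r_single)

      def n_for_target(r_single, r_target=0.8):
          """Observations needed for the mean score to reach r_target."""
          return math.ceil(r_target * (1 - r_single) / (r_single * (1 - r_target)))

      # e.g. if a single patient questionnaire has reliability 0.09, about
      # 41 questionnaires are needed, the same order as reported above
      print(n_for_target(0.09))             # -> 41
      print(reliability_of_mean(0.09, 41))  # -> ~0.80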

  18. Multi-criteria decision making development of ion chromatographic method for determination of inorganic anions in oilfield waters based on artificial neural networks retention model.

    PubMed

    Stefanović, Stefica Cerjan; Bolanča, Tomislav; Luša, Melita; Ukić, Sime; Rogošić, Marko

    2012-02-24

    This paper describes the development of an ad hoc methodology for the determination of inorganic anions in oilfield water, since its composition often differs significantly from the average (in the concentration of components and/or the matrix). Therefore, fast and reliable method development has to be performed in order to ensure the monitoring of the desired properties under new conditions. The method development was based on a computer-assisted multi-criteria decision making strategy. The criteria used were: maximal value of the objective functions used, maximal robustness of the separation method, minimal analysis time, and maximal retention distance between the two nearest components. Artificial neural networks were used for modeling anion retention. The reliability of the developed method was extensively tested by validation of its performance characteristics. Based on the validation results, the developed method shows satisfactory performance characteristics, proving the successful application of the computer-assisted methodology in the described case study. Copyright © 2011 Elsevier B.V. All rights reserved.
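
    A hedged Python sketch of the general workflow (synthetic retention data and arbitrary criterion weights, not the paper's chromatographic system): a small neural network models anion retention as a function of eluent concentration, and candidate conditions are then ranked by weighted criteria.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(7)
      c = rng.uniform(5.0, 30.0, size=(200, 1))              # eluent concentration (mM)
      t = np.column_stack([40.0 / c[:, 0], 55.0 / c[:, 0]])  # mock retention times, 2 anions
      t += rng.normal(0.0, 0.05, t.shape)

      model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=20000,
                           random_state=0).fit(c, t)

      # rank candidate conditions: short analysis time and large retention
      # distance between the two nearest components
      grid = np.linspace(5.0, 30.0, 101).reshape(-1, 1)
      pred = model.predict(grid)
      analysis_time = pred.max(axis=1)
      separation = np.abs(pred[:, 1] - pred[:, 0])
      score = (0.6 * separation / separation.max()
               - 0.4 * analysis_time / analysis_time.max())
      print("best eluent concentration:", grid[np.argmax(score), 0], "mM")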

  19. Second-order variational equations for N-body simulations

    NASA Astrophysics Data System (ADS)

    Rein, Hanno; Tamayo, Daniel

    2016-07-01

    First-order variational equations are widely used in N-body simulations to study how nearby trajectories diverge from one another. These allow for efficient and reliable determinations of chaos indicators such as the Maximal Lyapunov characteristic Exponent (MLE) and the Mean Exponential Growth factor of Nearby Orbits (MEGNO). In this paper we lay out the theoretical framework to extend the idea of variational equations to higher order. We explicitly derive the differential equations that govern the evolution of second-order variations in the N-body problem. Going to second order opens the door to new applications, including optimization algorithms that require the first and second derivatives of the solution, like the classical Newton's method. Typically, these methods have faster convergence rates than derivative-free methods. Derivatives are also required for Riemann manifold Langevin and Hamiltonian Monte Carlo methods which provide significantly shorter correlation times than standard methods. Such improved optimization methods can be applied to anything from radial-velocity/transit-timing-variation fitting to spacecraft trajectory optimization to asteroid deflection. We provide an implementation of first- and second-order variational equations for the publicly available REBOUND integrator package. Our implementation allows the simultaneous integration of any number of first- and second-order variational equations with the high-accuracy IAS15 integrator. We also provide routines to generate consistent and accurate initial conditions without the need for finite differencing.
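
    The construction generalizes beyond the N-body problem. As a toy analogue (a pendulum rather than REBOUND's N-body implementation), the Python sketch below integrates the state together with its first- and second-order variations with respect to the initial angle and checks them against finite differences.

      import numpy as np
      from scipy.integrate import solve_ivp

      def rhs(t, y):
          th, om, dth, dom, d2th, d2om = y
          # f = (omega, -sin th); Jacobian J = [[0, 1], [-cos th, 0]];
          # the second variation picks up the Hessian term sin(th)*dth**2
          return [om, -np.sin(th),
                  dom, -np.cos(th) * dth,
                  d2om, -np.cos(th) * d2th + np.sin(th) * dth**2]

      def flow(th0, T=10.0):
          y0 = [th0, 0.0, 1.0, 0.0, 0.0, 0.0]   # state + 1st + 2nd variations
          return solve_ivp(rhs, (0, T), y0, rtol=1e-11, atol=1e-11).y[:, -1]

      th0, h = 1.0, 1e-3
      thT, _, dthT, _, d2thT, _ = flow(th0)
      fd1 = (flow(th0 + h)[0] - flow(th0 - h)[0]) / (2 * h)
      fd2 = (flow(th0 + h)[0] - 2 * thT + flow(th0 - h)[0]) / h**2
      print("1st variation:", dthT, "vs finite difference:", fd1)
      print("2nd variation:", d2thT, "vs finite difference:", fd2)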

  20. Girsanov's transformation based variance reduced Monte Carlo simulation schemes for reliability estimation in nonlinear stochastic dynamics

    NASA Astrophysics Data System (ADS)

    Kanjilal, Oindrila; Manohar, C. S.

    2017-07-01

    The study considers the problem of simulation based time variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and, the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations.
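
    A static, discrete analogue of the variance-reduction idea can be sketched in a few lines of Python: the sampling density is shifted to the FORM design point and the failure indicator is reweighted by the density ratio (an illustrative toy with a linear limit state, not the authors' dynamic Girsanov schemes).

      import numpy as np

      rng = np.random.default_rng(2)
      beta = 4.0
      g = lambda u: beta - u[:, 0]             # limit state; exact Pf = Phi(-beta)
      u_star = np.array([beta, 0.0])           # FORM design point

      n = 10_000
      v = rng.normal(size=(n, 2)) + u_star     # sample around the design point
      w = np.exp(-v @ u_star + 0.5 * u_star @ u_star)  # phi(v) / phi(v - u*)
      pf_is = np.mean((g(v) <= 0) * w)

      pf_mc = np.mean(g(rng.normal(size=(n, 2))) <= 0)  # crude MC, same budget
      print(f"IS: {pf_is:.3e} (exact 3.167e-05); crude MC: {pf_mc:.1e}")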

  1. Courses of Action to Optimize Heavy Bearings Cages

    NASA Astrophysics Data System (ADS)

    Szekely, V. G.

    2016-11-01

    The global expansion in the industrial, economic and technological context determines the need to develop products, technologies, processes and methods which ensure increased performance, lower manufacturing costs and synchronization of the main costs with the elementary values which correspond to utilization. The development trend of the heavy bearing industry and the wide use of bearings determine the necessity of choosing the most appropriate material for a given application in order to meet the cumulative requirements of durability, reliability, strength, etc. Evaluation of commonly known or new materials represents a fundamental criterion in order to choose the materials based on cost, machinability and the technological process. In order to ensure the most effective basis for the decision regarding the heavy bearing cage, in the first stage the functions of the product are established, and in a further step a comparative analysis of the materials is made in order to establish the best materials which satisfy the product functions. The decision for selecting the most appropriate material is based largely on the overlapping of the material costs and the manufacturing process during which the half-finished material becomes a finished product. The study is oriented towards a creative approach, especially towards innovation and reengineering, by using specific techniques and methods applied in inventics. The main target is to find new efficient and reliable constructive and/or technological solutions which are consistent with the concept of sustainable development.

  2. Design of piezoelectric transducer layer with electromagnetic shielding and high connection reliability

    NASA Astrophysics Data System (ADS)

    Qiu, Lei; Yuan, Shenfang; Shi, Xiaoling; Huang, Tianxiang

    2012-07-01

    Piezoelectric transducer (PZT) and Lamb wave based structural health monitoring (SHM) methods have been widely studied for on-line SHM of high-performance structures. To monitor large-scale structures, a dense PZT array is required. In order to improve the placement efficiency and reduce the wiring burden of the PZT array, the concept of the piezoelectric transducer layer (PSL) was proposed. The PSL consists of PZTs, a flexible interlayer with printed wires, and a signal input/output interface. For on-line SHM on real aircraft structures, there are two main issues: electromagnetic interference and the connection reliability of the PSL. To address these issues, an electromagnetic shielding design method for the PSL to reduce spatial electromagnetic noise and crosstalk is proposed, and a connection reliability design method based on a combined welding-cementation process is proposed to enhance the connection reliability between the PZTs and the flexible interlayer. Two experiments on electromagnetic interference suppression are performed to validate the shielding design of the PSL. The experimental results show that the amplitudes of the spatial electromagnetic noise and crosstalk output from the shielded PSL developed in this paper are 15 dB and 25 dB lower, respectively, than those of the ordinary PSL. Two other experiments, on temperature durability (-55 °C to 80 °C) and strength durability (160-1600 με, one million load cycles), are applied to the PSL to validate the connection reliability. The low repeatability errors (less than 3% and less than 5%, respectively) indicate that the developed PSL is of high connection reliability and long fatigue life.

  3. Short assessment of the Big Five: robust across survey methods except telephone interviewing.

    PubMed

    Lang, Frieder R; John, Dennis; Lüdtke, Oliver; Schupp, Jürgen; Wagner, Gert G

    2011-06-01

    We examined measurement invariance and age-related robustness of a short 15-item Big Five Inventory (BFI-S) of personality dimensions, which is well suited for applications in large-scale multidisciplinary surveys. The BFI-S was assessed in three different interviewing conditions: computer-assisted or paper-assisted face-to-face interviewing, computer-assisted telephone interviewing, and a self-administered questionnaire. Randomized probability samples from a large-scale German panel survey and a related probability telephone study were used in order to test method effects on self-report measures of personality characteristics across early, middle, and late adulthood. Exploratory structural equation modeling was used in order to test for measurement invariance of the five-factor model of personality trait domains across different assessment methods. For the short inventory, findings suggest strong robustness of self-report measures of personality dimensions among young and middle-aged adults. In old age, telephone interviewing was associated with greater distortions in reliable personality assessment. It is concluded that the greater mental workload of telephone interviewing limits the reliability of self-report personality assessment. Face-to-face surveys and self-administered questionnaire completion are clearly better suited than phone surveys when personality traits in age-heterogeneous samples are assessed.

  4. Metrological approach to quantitative analysis of clinical samples by LA-ICP-MS: A critical review of recent studies.

    PubMed

    Sajnóg, Adam; Hanć, Anetta; Barałkiewicz, Danuta

    2018-05-15

    Analysis of clinical specimens by imaging techniques makes it possible to determine the content and distribution of trace elements on the surface of the examined sample. In order to obtain reliable results, the developed procedure should be based not only on a properly prepared sample and properly performed calibration; it is also necessary to carry out all phases of the procedure in accordance with the principles of chemical metrology, whose main pillars are the use of validated analytical methods, establishing the traceability of the measurement results and the estimation of the uncertainty. This review paper discusses aspects related to sampling, preparation and analysis of clinical samples by laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS), with emphasis on metrological aspects, i.e. selected validation parameters of the analytical method, the traceability of the measurement result and the uncertainty of the result. This work promotes the introduction of metrology principles for chemical measurement, with emphasis on LA-ICP-MS, a comparative method that requires a rigorous approach to the development of the analytical procedure in order to acquire reliable quantitative results. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. Reliable femoral frame construction based on MRI dedicated to muscles position follow-up.

    PubMed

    Dubois, G; Bonneau, D; Lafage, V; Rouch, P; Skalli, W

    2015-10-01

    In vivo follow-up of muscle shape variation represents a challenge when evaluating muscle development due to disease or treatment. Recent developments in muscle reconstruction techniques indicate MRI as a clinical tool for the follow-up of the thigh muscles. The comparison of 3D muscle shapes from two different sequences is not easy because there is no common frame. This study proposes an innovative method for the reconstruction of a reliable femoral frame based on the femoral head and both condyle centers. In order to robustify the definition of the condylar spheres, an original method was developed which combines the estimation of the diameters of both condyles from the lateral antero-posterior distance with the estimation of the sphere centers from an optimization process. The influence of the spacing between MR slices and of origin positions was studied. For all axes, the proposed method presented an angular error lower than 1° with a slice spacing of 10 mm, and the optimal position of the origin was identified at 56% of the distance between the femoral head center and the barycenter of both condyles. The high reliability of this method provides a robust frame for clinical follow-up based on MRI.
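
    The geometric core of the condyle-center estimation, fitting a sphere to surface points, reduces to linear least squares. A Python sketch on synthetic noisy points (illustrative only; the paper combines this with an anatomical diameter estimate):

      import numpy as np

      def fit_sphere(points):
          """|p - c|^2 = r^2 linearized as 2 p.c + (r^2 - |c|^2) = |p|^2."""
          A = np.hstack([2 * points, np.ones((len(points), 1))])
          b = (points ** 2).sum(axis=1)
          x, *_ = np.linalg.lstsq(A, b, rcond=None)
          center, k = x[:3], x[3]
          return center, np.sqrt(k + center @ center)

      rng = np.random.default_rng(3)
      d = rng.normal(size=(500, 3))
      d /= np.linalg.norm(d, axis=1, keepdims=True)
      pts = (np.array([10.0, -5.0, 30.0]) + 24.0 * d
             + rng.normal(scale=0.2, size=(500, 3)))
      print(fit_sphere(pts))   # center ~ (10, -5, 30), radius ~ 24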

  6. Reliability and Validity of Modified Service Quality Instrument (SERVQUAL) in Patients’ Motivation to Adhere to Insulin Therapy

    PubMed Central

    Jakupovic, Vedran; Solakovic, Suajb; Celebic, Nedim; Kulovic, Dzenan

    2018-01-01

    Introduction: Diabetes is a progressive condition which requires various ways of treatment. Adequate therapy prescribed at the right time helps patients postpone the development of complications. Adherence to complicated therapy is a challenge for both patients and HCPs and is a subject of research in many disciplines. Improvement in communication between HCPs and patients is very important for patients' adherence to therapy. Aim: The aim of this research was to explore the validity and reliability of a modified SERVQUAL instrument in an attempt to explore ways of motivating diabetic patients to accept prescribed insulin therapy. Material and Methods: We used a modified SERVQUAL questionnaire as the instrument in the research. It was necessary to check the validity and reliability of the new modified instrument. Results: Results show that the modified SERVQUAL instrument has excellent reliability (α=0.908), so we can say that it precisely measures Expectations, Perceptions and Motivation in patients. Factor analysis (EFA method) with Varimax rotation extracted 4 factors which together explain 52.902% of the variance of the results on this subscale. A bifactorial solution could be seen on the Scree-plot diagram (break at the second factor). Conclusion: The results of this research show that the modified SERVQUAL instrument, which was created in order to measure expectations and perceptions of patients, is valid and reliable. Reliability and validity are also proven for the additional dimension which was created originally for this research - motivation to accept insulin therapy. PMID:29670478
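
    For reference, the reliability figure reported above (Cronbach's alpha) can be computed directly from an items-by-respondents matrix; the Python sketch below uses synthetic responses, not the SERVQUAL data.

      import numpy as np

      def cronbach_alpha(scores):
          """scores: (n_respondents, n_items) matrix of item responses."""
          k = scores.shape[1]
          item_vars = scores.var(axis=0, ddof=1).sum()
          total_var = scores.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_vars / total_var)

      rng = np.random.default_rng(4)
      latent = rng.normal(size=(200, 1))                 # one underlying trait
      items = latent + 0.5 * rng.normal(size=(200, 10))  # 10 correlated items
      print(f"alpha = {cronbach_alpha(items):.3f}")      # high, as in the study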

  7. Subject-level reliability analysis of fast fMRI with application to epilepsy.

    PubMed

    Hao, Yongfu; Khoo, Hui Ming; von Ellenrieder, Nicolas; Gotman, Jean

    2017-07-01

    Recent studies have applied the new magnetic resonance encephalography (MREG) sequence to the study of interictal epileptic discharges (IEDs) in the electroencephalogram (EEG) of epileptic patients. However, there are no criteria to quantitatively evaluate different processing methods, to properly use the new sequence. We evaluated different processing steps of this new sequence under the common generalized linear model (GLM) framework by assessing the reliability of results. A bootstrap sampling technique was first used to generate multiple replicated data sets; a GLM with different processing steps was then applied to obtain activation maps, and the reliability of these maps was assessed. We applied our analysis in an event-related GLM related to IEDs. A higher reliability was achieved by using a GLM with head motion confound regressor with 24 components rather than the usual 6, with an autoregressive model of order 5 and with a canonical hemodynamic response function (HRF) rather than variable latency or patient-specific HRFs. Comparison of activation with IED field also favored the canonical HRF, consistent with the reliability analysis. The reliability analysis helps to optimize the processing methods for this fast fMRI sequence, in a context in which we do not know the ground truth of activation areas. Magn Reson Med 78:370-382, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  8. Polynomial mixture method of solving ordinary differential equations

    NASA Astrophysics Data System (ADS)

    Shahrir, Mohammad Shazri; Nallasamy, Kumaresan; Ratnavelu, Kuru; Kamali, M. Z. M.

    2017-11-01

    In this paper, a numerical solution of the fuzzy quadratic Riccati differential equation is estimated using a proposed new approach that provides a mixture of polynomials in which the right mixture is generated iteratively. This mixture provides a generalized formalism of traditional Neural Networks (NN). Previous works have shown reliable results using the 4th-order Runge-Kutta method (RK4) to solve the first-order nonlinear ordinary differential equation (ODE) that underlies the Riccati differential equation. The present research shows improved results relative to the RK4 method. The Polynomial Mixture Method (PMM) shows promising results, with the advantages of continuous estimation and improved accuracy over Mabood et al., RK4, Multi-Agent NN and the Neuro Method (NM).
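
    The RK4 baseline referred to above is easy to reproduce on a crisp (non-fuzzy) quadratic Riccati equation with a known solution, y' = 1 + y^2, y(0) = 0, whose exact solution is y = tan t; the fuzzy formulation and the polynomial mixture itself are not reproduced here.

      import numpy as np

      def rk4(f, y0, t0, t1, n):
          h = (t1 - t0) / n
          t, y = t0, y0
          for _ in range(n):
              k1 = f(t, y)
              k2 = f(t + h / 2, y + h / 2 * k1)
              k3 = f(t + h / 2, y + h / 2 * k2)
              k4 = f(t + h, y + h * k3)
              y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
              t += h
          return y

      f = lambda t, y: 1 + y * y                 # quadratic Riccati right-hand side
      print(rk4(f, 0.0, 0.0, 1.0, 100), "vs exact", np.tan(1.0))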

  9. Spectroscopic and quantum chemical study of the structure of a new paramagnetic dimeric palladium(II,III) complex with creatine

    NASA Astrophysics Data System (ADS)

    Mitewa, Mariana; Enchev, Venelin; Bakalova, Tatyana

    2002-05-01

    The structure and coordination mode of the newly synthesized dimeric paramagnetic Pd(II,III) complex are studied using magneto-chemical, EPR and IR spectroscopic methods. In order to perform a reliable assignment of the IR bands, the structure and IR spectrum of free creatine were calculated using an ab initio method. For the calculation of the configuration of its deprotonated and doubly deprotonated forms, the semiempirical AM1 method was used.

  10. Tau Oligomers as Pathogenic Seeds: Preparation and Propagation In Vitro and In Vivo.

    PubMed

    Gerson, Julia E; Sengupta, Urmi; Kayed, Rakez

    2017-01-01

    Tau oligomers have been shown to be the main toxic tau species in a number of neurodegenerative disorders. In order to study tau oligomers both in vitro and in vivo, we have established methods for the reliable preparation, isolation, and detection of tau oligomers. Methods for the seeding of tau oligomers, isolation of tau oligomers from tissue, and detection of tau oligomers using tau oligomer-specific antibodies by biochemical and immunohistochemical methods are detailed below.

  11. Modulation stability and dispersive optical soliton solutions of higher order nonlinear Schrödinger equation and its applications in mono-mode optical fibers

    NASA Astrophysics Data System (ADS)

    Arshad, Muhammad; Seadawy, Aly R.; Lu, Dianchen

    2018-01-01

    In mono-mode optical fibers, the higher order nonlinear Schrödinger equation (NLSE) describes the propagation of extremely short light pulses. We constructed optical soliton and solitary wave solutions of the higher order NLSE in mono-mode optical fibers by employing the modified extended mapping method, which has important applications in mathematics and physics. Furthermore, the formation conditions on the parameters under which optical bright and dark solitons can exist in this medium are also given. The motion of the obtained solutions is also shown graphically, which helps in understanding the physical phenomena of this model. The modulation instability analysis is utilized to discuss the stability of the model, which verifies that all the obtained solutions are exact and stable. Many other such types of models arising in applied sciences can also be solved by this reliable, powerful and effective method. The method can also be applied to other sorts of higher order nonlinear problems in contemporary areas of research.

  12. 18 CFR 376.204 - Delegation of Commission authority during emergency conditions.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... the Office of Energy Projects; (v) Director of the Office of Electric Reliability; (vi) Director of... Energy Projects, in order of seniority; (x) Deputy Directors, Office of Electric Reliability, in order of... Energy Projects; Assistant Directors and Division heads, Office of Electric Reliability; Deputy Associate...

  13. New algorithms for solving third- and fifth-order two point boundary value problems based on nonsymmetric generalized Jacobi Petrov–Galerkin method

    PubMed Central

    Doha, E.H.; Abd-Elhameed, W.M.; Youssri, Y.H.

    2014-01-01

    Two families of certain nonsymmetric generalized Jacobi polynomials with negative integer indexes are employed for solving third- and fifth-order two point boundary value problems governed by homogeneous and nonhomogeneous boundary conditions using a dual Petrov–Galerkin method. The idea behind our method is to use trial functions satisfying the underlying boundary conditions of the differential equations and test functions satisfying the dual boundary conditions. The resulting linear systems from the application of our method are specially structured and they can be efficiently inverted. The use of generalized Jacobi polynomials simplifies the theoretical and numerical analysis of the method and also leads to accurate and efficient numerical algorithms. The presented numerical results indicate that the proposed numerical algorithms are reliable and very efficient. PMID:26425358

  14. Moment Method and Pixel-by-Pixel Method: Complementary Mode Identification I. Testing FG Vir-like pulsation modes

    NASA Astrophysics Data System (ADS)

    Zima, W.; Kolenberg, K.; Briquet, M.; Breger, M.

    2004-06-01

    We have carried out a Hare-and-Hound test to determine the reliability of the Moment Method (Briquet & Aerts 2003) and the Pixel-by-Pixel Method (Mantegazza 2000) for the identification of pulsation modes in Delta Scuti stars. For this purpose we calculated synthetic line profiles, exhibiting six pulsation modes of low degree and with input parameters initially unknown to us. The aim was to test and increase the quality of the mode identification by applying both methods independently and by using a combined technique. Our results show that, whereas the azimuthal order m and its sign can be fixed by both methods, the degree l is not determined unambiguously. Both identification methods show a better reliability if multiple modes are fitted simultaneously. In particular, the inclination angle is better determined. We have to emphasize that the outcome of this test is only meaningful for stars having pulsational velocities below 0.2 vsini. This is the first part of a series of articles, in which we will test these spectroscopic identification methods.

  15. Haemophilus haemolyticus Isolates Causing Clinical Disease

    PubMed Central

    Wang, Xin; Briere, Elizabeth C.; Katz, Lee S.; Cohn, Amanda C.; Clark, Thomas A.; Messonnier, Nancy E.; Mayer, Leonard W.

    2012-01-01

    We report seven cases of Haemophilus haemolyticus invasive disease detected in the United States, which were previously misidentified as nontypeable Haemophilus influenzae. All cases had different symptoms and presentations. Our study suggests that a testing scheme that includes reliable PCR assays and standard microbiological methods should be used in order to improve H. haemolyticus identification. PMID:22573587

  16. Haemophilus haemolyticus isolates causing clinical disease.

    PubMed

    Anderson, Raydel; Wang, Xin; Briere, Elizabeth C; Katz, Lee S; Cohn, Amanda C; Clark, Thomas A; Messonnier, Nancy E; Mayer, Leonard W

    2012-07-01

    We report seven cases of Haemophilus haemolyticus invasive disease detected in the United States, which were previously misidentified as nontypeable Haemophilus influenzae. All cases had different symptoms and presentations. Our study suggests that a testing scheme that includes reliable PCR assays and standard microbiological methods should be used in order to improve H. haemolyticus identification.

  17. Computation of Steady and Unsteady Laminar Flames: Theory

    NASA Technical Reports Server (NTRS)

    Hagstrom, Thomas; Radhakrishnan, Krishnan; Zhou, Ruhai

    1999-01-01

    In this paper we describe the numerical analysis underlying our efforts to develop an accurate and reliable code for simulating flame propagation using complex physical and chemical models. We discuss our spatial and temporal discretization schemes, which in our current implementations range in order from two to six. In space we use staggered meshes to define discrete divergence and gradient operators, allowing us to approximate complex diffusion operators while maintaining ellipticity. Our temporal discretization is based on the use of preconditioning to produce a highly efficient linearly implicit method with good stability properties. High order for time accurate simulations is obtained through the use of extrapolation or deferred correction procedures. We also discuss our techniques for computing stationary flames. The primary issue here is the automatic generation of initial approximations for the application of Newton's method. We use a novel time-stepping procedure, which allows the dynamic updating of the flame speed and forces the flame front towards a specified location. Numerical experiments are presented, primarily for the stationary flame problem. These illustrate the reliability of our techniques, and the dependence of the results on various code parameters.
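
    The order-raising role of extrapolation can be seen on a toy problem: the Python sketch below applies implicit Euler (first order) to y' = -y and recovers second order by Richardson extrapolation, an illustration of the principle rather than the flame code itself.

      import numpy as np

      def implicit_euler(lam, y0, T, n):
          h, y = T / n, y0
          for _ in range(n):
              y = y / (1 - h * lam)   # the linear implicit step, solved exactly
          return y

      lam, y0, T = -1.0, 1.0, 1.0
      exact = y0 * np.exp(lam * T)
      for n in (20, 40, 80):
          y_h = implicit_euler(lam, y0, T, n)
          y_x = 2 * implicit_euler(lam, y0, T, 2 * n) - y_h  # Richardson extrapolation
          print(n, abs(y_h - exact), abs(y_x - exact))
      # the base error halves as n doubles (order 1); the extrapolated error quarters (order 2)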

  18. Process characteristics and design methods for a 300 deg quad OP amp

    NASA Technical Reports Server (NTRS)

    Beasom, J. D.; Patterson, R. B., III

    1981-01-01

    The results of process characterization, circuit design, and reliability studies for the development of a quad OP amplifier intended for use up to 300 C are presented. A dielectrically isolated complementary vertical bipolar process was chosen to fabricate the amplifier in order to eliminate isolation leakage and the possibility of latch up. Characterization of NPN and PNP junctions showed them to be suitable for use up to 300 C. Interconnect reliability was predicted to be greater than four years mean time between failure. Parasitic MOS formation was eliminated by isolation of each device.

  19. Experience in estimating neutron poison worths

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiang, R.T.; Congdon, S.P.

    1989-01-01

    Gadolinia, ¹³⁵Xe, ¹⁴⁹Sm, control rod, and soluble boron are five neutron poisons that may appear in light water reactor assemblies. Reliable neutron poison worth estimation is useful for evaluating core operating strategies, fuel cycle economics, and reactor safety design. Based on physical presence, neutron poisons can be divided into two categories: local poisons and global poisons. Gadolinia and control rod are local poisons, and ¹³⁵Xe, ¹⁴⁹Sm, and soluble boron are global poisons. The first-order perturbation method is commonly used to estimate nuclide worths in fuel assemblies. It is well known, however, that the first-order perturbation method was developed for small perturbations, such as the perturbation due to weak absorbers, and that neutron poisons are not weak absorbers. The authors have developed an improved method to replace the first-order perturbation method, which yields very poor results, for estimating local poison worths. It has also been shown that the first-order perturbation method seems adequate to estimate worths for global poisons because of flux compensation.
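
    The failure mode described above can be demonstrated on a toy operator: the Python sketch below compares the first-order eigenvalue perturbation estimate with the exact shift as a localized "poison" grows from weak to strong (an illustration of the limitation, not the authors' improved method).

      import numpy as np

      A = np.array([[2.0, 0.5],
                    [1.0, 3.0]])                # toy two-group operator
      w, V = np.linalg.eig(A)
      i = np.argmax(w)
      u = V[:, i]                               # fundamental mode
      wL, VL = np.linalg.eig(A.T)
      v = VL[:, np.argmax(wL)]                  # adjoint (left) eigenvector

      dA = np.array([[-1.0, 0.0],
                     [ 0.0, 0.0]])              # localized absorber in group 1

      for s in (0.01, 0.1, 1.0):
          first_order = v @ (s * dA) @ u / (v @ u)
          exact = np.linalg.eigvals(A + s * dA).real.max() - w[i]
          print(f"s={s:5}: first-order {first_order:+.4f}, exact {exact:+.4f}")
      # the linear estimate is accurate for s << 1 but overshoots for a strong absorber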

  20. The impact of symptom stability on time frame and recall reliability in CFS.

    PubMed

    Evans, Meredyth; Jason, Leonard A

    This study is an investigation of the potential impact of perceived symptom stability on the recall reliability of symptom severity and frequency as reported by individuals with chronic fatigue syndrome (CFS). Symptoms were recalled using three different recall timeframes (the past week, the past month, and the past six months) and at two assessment points (with one week between assessments). Participants were 51 adults (45 women and 6 men), between the ages of 29 and 66, with a current diagnosis of CFS. Multilevel model (MLM) analyses were used to determine the optimal recall timeframe (in terms of test-retest reliability) for reporting symptoms perceived as variable and as stable over time. Headaches were recalled more reliably when they were reported as stable over time. Furthermore, the optimal timeframe in terms of test-retest reliability for stable symptoms was highly uniform, such that all Fukuda 1 CFS symptoms were more reliably recalled at the six-month timeframe. In contrast, the optimal timeframe for CFS symptoms perceived as variable differed across symptoms. Symptom stability and recall timeframe are important to consider in order to improve the accuracy and reliability of the current methods for diagnosing this illness.

  1. Reliability prediction of large fuel cell stack based on structure stress analysis

    NASA Astrophysics Data System (ADS)

    Liu, L. F.; Liu, B.; Wu, C. W.

    2017-09-01

    The aim of this paper is to improve the reliability of a proton exchange membrane fuel cell (PEMFC) stack by designing the clamping force and the thickness difference between the membrane electrode assembly (MEA) and the gasket. The stack reliability is directly determined by the component reliability, which is affected by the material properties and the contact stress. The component contact stress is a random variable because it is usually affected by many uncertain factors in the production and clamping process. We have investigated the influence of the coefficients of variation of the parameters on the probability distribution of the contact stress using the equivalent stiffness model and the first-order second-moment method. The optimal contact stress that keeps the component at the highest reliability level is obtained by the stress-strength interference model. To obtain the optimal contact stress between the contact components, the optimal thickness of the component and the stack clamping force are designed accordingly. Finally, a detailed description is given of how to design the MEA and gasket dimensions to obtain the highest stack reliability. This work can provide valuable guidance in the design of the stack structure for a highly reliable fuel cell stack.
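
    With normal stress and strength, the stress-strength interference reliability has a closed form through the safety margin, which is also the quantity the first-order second-moment method works with; the Python sketch below uses illustrative numbers, not the paper's fuel-cell data.

      from scipy.stats import norm

      def interference_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
          """R = P(strength > stress) for independent normal variables."""
          beta = (mu_strength - mu_stress) / (sd_strength**2 + sd_stress**2) ** 0.5
          return norm.cdf(beta)

      # larger stress scatter (coefficient of variation) lowers reliability,
      # which is why controlling clamping-force variability matters
      for cov in (0.05, 0.10, 0.20):
          print(cov, interference_reliability(1.5, 0.10, 1.0, cov * 1.0))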

  2. Automated reliability assessment for spectroscopic redshift measurements

    NASA Astrophysics Data System (ADS)

    Jamal, S.; Le Brun, V.; Le Fèvre, O.; Vibert, D.; Schmitt, A.; Surace, C.; Copin, Y.; Garilli, B.; Moresco, M.; Pozzetti, L.

    2018-03-01

    Context: Future large-scale surveys, such as the ESA Euclid mission, will produce a large set of galaxy redshifts (≥10⁶) that will require fully automated data-processing pipelines to analyze the data, extract crucial information and ensure that all requirements are met. A fundamental element in these pipelines is to associate with each galaxy redshift measurement a quality, or reliability, estimate. Aims: In this work, we introduce a new approach to automate the spectroscopic redshift reliability assessment based on machine learning (ML) and characteristics of the redshift probability density function. Methods: We propose to rephrase the spectroscopic redshift estimation into a Bayesian framework, in order to incorporate all sources of information and uncertainties related to the redshift estimation process and produce a redshift posterior probability density function (PDF). To automate the assessment of a reliability flag, we exploit key features in the redshift posterior PDF and machine learning algorithms. Results: As a working example, public data from the VIMOS VLT Deep Survey is exploited to present and test this new methodology. We first tried to reproduce the existing reliability flags using supervised classification in order to describe different types of redshift PDFs, but due to the subjective definition of these flags (classification accuracy 58%), we soon opted for a new homogeneous partitioning of the data into distinct clusters via unsupervised classification. After assessing the accuracy of the new clusters via resubstitution and test predictions (classification accuracy 98%), we projected unlabeled data from preliminary mock simulations for the Euclid space mission into this mapping to predict their redshift reliability labels. Conclusions: Through the development of a methodology in which a system can build its own experience to assess the quality of a parameter, we are able to set a preliminary basis of an automated reliability assessment for spectroscopic redshift measurements. This newly-defined method is very promising for next-generation large spectroscopic surveys from the ground and in space, such as Euclid and WFIRST. A table of the reclassified VVDS redshifts and reliability is available only at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/611/A53
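
    The unsupervised step can be sketched generically in Python: descriptors of each redshift PDF are clustered into candidate reliability classes (the three features and two latent quality groups below are hypothetical, not the zPDF descriptors used in the paper).

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(9)
      # hypothetical per-measurement descriptors: PDF dispersion, peak-to-
      # second-peak ratio, peak width (two synthetic quality populations)
      good = rng.normal([1.0, 8.0, 0.002], [0.3, 2.0, 0.001], size=(300, 3))
      poor = rng.normal([3.0, 1.5, 0.020], [0.5, 0.5, 0.010], size=(300, 3))
      X = StandardScaler().fit_transform(np.vstack([good, poor]))

      labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
      print(np.bincount(labels))   # partition sizes -> candidate reliability classes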

  3. Integrated Life-Cycle Framework for Maintenance, Monitoring and Reliability of Naval Ship Structures

    DTIC Science & Technology

    2012-08-15

    number of times, a fast and accurate method for analyzing the ship hull is required. In order to obtain this required computational speed and accuracy...Naval Engineers Fleet Maintenance & Modernization Symposium (FMMS 2011) [8] and the Eleventh International Conference on Fast Sea Transportation ( FAST ...probabilistic strength of the ship hull. First, a novel deterministic method for the fast and accurate calculation of the strength of the ship hull is

  4. The National Shipbuilding Research Program. Environmental Studies and Testing (Phase V)

    DTIC Science & Technology

    2000-11-20

    development of an analytical procedure for toxic organic compounds, including TBT ( tributyltin ), whose turnaround time would be in the order of minutes...Cost of the Subtask was $20,000. Subtask #33 - Turnaround Analytical Method for TBT This Subtask performed a preliminary investigation leading to the...34Quick TBT Analytical Method" that will yield reliable results in 15 minutes, a veritable breakthrough in sampling technology. The Subtask was managed by

  5. Small Independent Action Force (SIAF), Vegetation Classification Study

    DTIC Science & Technology

    1976-03-01

    CONTENTS I. INTRODUCTION 8 II. BACKGROUND and PURPOSE 10 III. METHOD 16 A. EXPERIMENTAL DESIGN 16 B. SUBJECTS 17 C. APPARATUS 17 D. STIMULUS...reliability of subjects will be obtained. 15 III. METHOD A. EXPERIMENTAL DESIGN. The experiment involved a continuous stream of stimuli. Phase 1 stimuli...the attribute to be scaled. The subject must designate one of the pair as greater. No equality judgments are permitted. In order to obtain data from

  6. Automated breast tissue density assessment using high order regional texture descriptors in mammography

    NASA Astrophysics Data System (ADS)

    Law, Yan Nei; Lieng, Monica Keiko; Li, Jingmei; Khoo, David Aik-Aun

    2014-03-01

    Breast cancer is the most common cancer and second leading cause of cancer death among women in the US. The relative survival rate is lower among women with a more advanced stage at diagnosis. Early detection through screening is vital. Mammography is the most widely used and only proven screening method for reliably and effectively detecting abnormal breast tissues. In particular, mammographic density is one of the strongest breast cancer risk factors, after age and gender, and can be used to assess the future risk of disease before individuals become symptomatic. A reliable method for automatic density assessment would be beneficial and could assist radiologists in the evaluation of mammograms. To address this problem, we propose a density classification method which uses statistical features from different parts of the breast. Our method is composed of three parts: breast region identification, feature extraction and building ensemble classifiers for density assessment. It explores the potential of the features extracted from second and higher order statistical information for mammographic density classification. We further investigate the registration of bilateral pairs and time-series of mammograms. The experimental results on 322 mammograms demonstrate that (1) a classifier using features from dense regions has higher discriminative power than a classifier using only features from the whole breast region; (2) these high-order features can be effectively combined to boost the classification accuracy; (3) a classifier using these statistical features from dense regions achieves 75% accuracy, which is a significant improvement from 70% accuracy obtained by the existing approaches.
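
    The second-order statistics referred to above are typically grey-level co-occurrence features; a hedged Python sketch with scikit-image on random data (standing in for a segmented dense region; the distances, angles and properties chosen are assumptions):

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops

      rng = np.random.default_rng(5)
      region = rng.integers(0, 64, size=(128, 128), dtype=np.uint8)  # stand-in ROI

      glcm = graycomatrix(region, distances=[1, 2], angles=[0, np.pi / 2],
                          levels=64, symmetric=True, normed=True)
      props = ("contrast", "homogeneity", "energy", "correlation")
      features = np.concatenate([graycoprops(glcm, p).ravel() for p in props])
      print(features.shape)   # 4 properties x 2 distances x 2 angles = 16 features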

  7. The modal surface interpolation method for damage localization

    NASA Astrophysics Data System (ADS)

    Pina Limongelli, Maria

    2017-05-01

    The Interpolation Method (IM) has been previously proposed and successfully applied for damage localization in plate-like structures. The method is based on the detection of localized reductions of smoothness in the Operational Deformed Shapes (ODSs) of the structure. The IM can be applied to any type of structure provided the ODSs are estimated accurately in the original and in the damaged configurations. If the latter circumstance fails to occur, for example when the structure is subjected to unknown inputs or the structural responses are strongly corrupted by noise, both false and missing alarms occur when the IM is applied to localize a concentrated damage. In order to overcome these drawbacks, a modification of the method is investigated herein. An ODS is the deformed shape of a structure subjected to a harmonic excitation: at resonances the ODSs are dominated by the relevant mode shapes. The effect of noise at resonance is usually lower than at other frequency values, hence the relevant ODSs are estimated with higher reliability. Several methods have been proposed to reliably estimate modal shapes in the case of unknown input. These two circumstances can be exploited to improve the reliability of the IM. In order to reduce or eliminate the drawbacks related to the estimation of the ODSs in the case of noisy signals, this paper investigates a modified version of the method based on a damage feature calculated from the interpolation error relevant only to the modal shapes, rather than to all the operational shapes in the significant frequency range. The comparison between the results of the IM in its current version (with the interpolation error calculated by summing the contributions of all the operational shapes) and in the newly proposed version (with the estimation of the interpolation error limited to the modal shapes) is reported herein.
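
    The underlying damage feature can be illustrated with a leave-one-out spline residual in Python (a synthetic beam mode with an artificial local kink, not the paper's data or its exact interpolation scheme):

      import numpy as np
      from scipy.interpolate import CubicSpline

      x = np.linspace(0.0, 1.0, 41)
      mode = np.sin(np.pi * x)
      mode += 0.02 * np.maximum(0, 1 - np.abs(x - 0.3) / 0.05)  # local kink ("damage")

      errors = np.zeros_like(x)
      for i in range(1, len(x) - 1):            # interpolate each point from the others
          keep = np.arange(len(x)) != i
          errors[i] = abs(CubicSpline(x[keep], mode[keep])(x[i]) - mode[i])

      print("feature peaks at x =", x[np.argmax(errors)])   # ~0.30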

  8. BDS/GPS Dual Systems Positioning Based on the Modified SR-UKF Algorithm

    PubMed Central

    Kong, JaeHyok; Mao, Xuchu; Li, Shaoyuan

    2016-01-01

    The Global Navigation Satellite System can provide all-day three-dimensional position and speed information. Currently, using a single navigation system alone cannot satisfy the requirements for reliability and integrity. In order to improve the reliability and stability of satellite navigation, a positioning method combining the BDS and GPS navigation systems is presented, and the measurement model and the state model are described. Furthermore, the modified square-root Unscented Kalman Filter (SR-UKF) algorithm is employed under BDS and GPS conditions, and analyses of single-system and multi-system positioning have been carried out. The experimental results are compared with traditional estimation results, which shows that the proposed method can perform highly precise positioning. Especially when the number of satellites is not adequate, the proposed method combines the BDS and GPS systems to achieve a higher positioning precision. PMID:27153068

  9. The comparison of predictive scheduling algorithms for different sizes of job shop scheduling problems

    NASA Astrophysics Data System (ADS)

    Paprocka, I.; Kempa, W. M.; Grabowik, C.; Kalinowski, K.; Krenczyk, D.

    2016-08-01

    In this paper, a survey of predictive and reactive scheduling methods is carried out in order to evaluate how the ability to predict reliability characteristics influences robustness criteria. The most important reliability characteristics are Mean Time To Failure and Mean Time To Repair. The survey analysis is done for a job shop scheduling problem. The paper answers the question: which method generates robust schedules in the case of a bottleneck failure occurring before, at the beginning of, or after planned maintenance actions? The efficiency of predictive schedules is evaluated using the criteria of makespan, total tardiness, flow time and idle time. The efficiency of reactive schedules is evaluated using a solution robustness criterion and a quality robustness criterion. This paper is a continuation of the research conducted in [1], where the survey of predictive and reactive scheduling methods was done only for small-size scheduling problems.

  10. How Well Can We Detect Lineage-Specific Diversification-Rate Shifts? A Simulation Study of Sequential AIC Methods.

    PubMed

    May, Michael R; Moore, Brian R

    2016-11-01

    Evolutionary biologists have long been fascinated by the extreme differences in species numbers across branches of the Tree of Life. This has motivated the development of statistical methods for detecting shifts in the rate of lineage diversification across the branches of phylogenic trees. One of the most frequently used methods, MEDUSA, explores a set of diversification-rate models, where each model assigns branches of the phylogeny to a set of diversification-rate categories. Each model is first fit to the data, and the Akaike information criterion (AIC) is then used to identify the optimal diversification model. Surprisingly, the statistical behavior of this popular method is uncharacterized, which is a concern in light of: (1) the poor performance of the AIC as a means of choosing among models in other phylogenetic contexts; (2) the ad hoc algorithm used to visit diversification models, and; (3) errors that we reveal in the likelihood function used to fit diversification models to the phylogenetic data. Here, we perform an extensive simulation study demonstrating that MEDUSA (1) has a high false-discovery rate (on average, spurious diversification-rate shifts are identified ≈30% of the time), and (2) provides biased estimates of diversification-rate parameters. Understanding the statistical behavior of MEDUSA is critical both to empirical researchers-in order to clarify whether these methods can make reliable inferences from empirical datasets-and to theoretical biologists-in order to clarify the specific problems that need to be solved in order to develop more reliable approaches for detecting shifts in the rate of lineage diversification. [Akaike information criterion; extinction; lineage-specific diversification rates; phylogenetic model selection; speciation.]. © The Author(s) 2016. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
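
    The false-discovery mechanism has a simple statistical analogue that can be simulated in Python: when an AIC comparison follows an ad hoc search over candidate partitions, single-rate data are frequently "detected" as two-rate (a loose analogue of the MEDUSA setting, not a reimplementation of it).

      import numpy as np

      rng = np.random.default_rng(8)

      def aic_one_rate(x):
          n = len(x)
          logL = n * np.log(1 / x.mean()) - n   # exponential MLE log-likelihood
          return 2 * 1 - 2 * logL

      def aic_best_split(x):
          x = np.sort(x)
          best = -np.inf
          for i in range(5, len(x) - 5):        # ad hoc search over partitions
              a, b = x[:i], x[i:]
              logL = (len(a) * np.log(1 / a.mean()) - len(a)
                      + len(b) * np.log(1 / b.mean()) - len(b))
              best = max(best, logL)
          return 2 * 2 - 2 * best               # AIC ignores the search itself

      false_pos = 0
      for _ in range(200):
          x = rng.exponential(size=100)         # data truly from a single rate
          if aic_best_split(x) < aic_one_rate(x):
              false_pos += 1
      print("false-discovery rate:", false_pos / 200)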

  11. How Well Can We Detect Lineage-Specific Diversification-Rate Shifts? A Simulation Study of Sequential AIC Methods

    PubMed Central

    May, Michael R.; Moore, Brian R.

    2016-01-01

    Evolutionary biologists have long been fascinated by the extreme differences in species numbers across branches of the Tree of Life. This has motivated the development of statistical methods for detecting shifts in the rate of lineage diversification across the branches of phylogenic trees. One of the most frequently used methods, MEDUSA, explores a set of diversification-rate models, where each model assigns branches of the phylogeny to a set of diversification-rate categories. Each model is first fit to the data, and the Akaike information criterion (AIC) is then used to identify the optimal diversification model. Surprisingly, the statistical behavior of this popular method is uncharacterized, which is a concern in light of: (1) the poor performance of the AIC as a means of choosing among models in other phylogenetic contexts; (2) the ad hoc algorithm used to visit diversification models, and; (3) errors that we reveal in the likelihood function used to fit diversification models to the phylogenetic data. Here, we perform an extensive simulation study demonstrating that MEDUSA (1) has a high false-discovery rate (on average, spurious diversification-rate shifts are identified ≈30% of the time), and (2) provides biased estimates of diversification-rate parameters. Understanding the statistical behavior of MEDUSA is critical both to empirical researchers—in order to clarify whether these methods can make reliable inferences from empirical datasets—and to theoretical biologists—in order to clarify the specific problems that need to be solved in order to develop more reliable approaches for detecting shifts in the rate of lineage diversification. [Akaike information criterion; extinction; lineage-specific diversification rates; phylogenetic model selection; speciation.] PMID:27037081

  12. Orbital-optimized third-order Møller-Plesset perturbation theory and its spin-component and spin-opposite scaled variants: Application to symmetry breaking problems

    NASA Astrophysics Data System (ADS)

    Bozkaya, Uǧur

    2011-12-01

    In this research, orbital-optimized third-order Møller-Plesset perturbation theory (OMP3) and its spin-component and spin-opposite scaled variants (SCS-OMP3 and SOS-OMP3) are introduced. Using a Lagrangian-based approach, an efficient, quadratically convergent algorithm for variational optimization of the molecular orbitals (MOs) for third-order Møller-Plesset perturbation theory (MP3) is presented. Explicit equations for response density matrices, the MO gradient, and Hessian are reported in spin-orbital form. The OMP3, SCS-OMP3, and SOS-OMP3 approaches are compared with the second-order Møller-Plesset perturbation theory (MP2), MP3, coupled-cluster doubles (CCD), optimized-doubles (OD), and coupled-cluster singles and doubles (CCSD) methods. All these methods are applied to O4+, O3, and seven diatomic molecules. Results demonstrate that the OMP3 and its variants provide significantly better vibrational frequencies than MP3, CCSD, and OD for the molecules where the symmetry-breaking problems are observed. For O4+, the OMP3 prediction, 1343 cm-1, for the ω6 (b3u) mode, where symmetry-breaking appears, is even better than presumably more reliable methods such as Brueckner doubles (BD), 1194 cm-1, and OD, 1193 cm-1, methods (the experimental value is 1320 cm-1). For O3, the predictions of SCS-OMP3 (1143 cm-1) and SOS-OMP3 (1165 cm-1) are remarkably better than the more robust OD method (1282 cm-1); the experimental value is 1089 cm-1. For the seven diatomics, again the SCS-OMP3 and SOS-OMP3 methods provide the lowest average errors, |Δωe| = 44 and |Δωe| = 35 cm-1, respectively, while for OD, |Δωe| = 161 cm-1 and CCSD |Δωe| = 106 cm-1. Hence, the OMP3 and especially its spin-scaled variants perform much better than the MP3, CCSD, and more robust OD approaches for the considered test cases. Therefore, considering both the computational cost and the reliability, SCS-OMP3 and SOS-OMP3 appear to be the best methods for the symmetry-breaking cases, based on present application results. The OMP3 method offers certain advantages: it provides reliable vibrational frequencies in case of symmetry-breaking problems, especially with spin-scaling tricks, its analytic gradients are easier to compute since there is no need to solve the coupled-perturbed equations for the orbital response, and the computation of one-electron properties is easier because there is no response contribution to the particle density matrices. The OMP3 has further advantages over standard MP3, making it promising for excited state properties via linear response theory.

  13. Preliminary design and analysis of an advanced rotorcraft transmission

    NASA Technical Reports Server (NTRS)

    Henry, Z. S.

    1990-01-01

    Future rotorcraft transmissions of the 1990s and beyond the year 2000 require the incorporation of key emerging material and component technologies using advanced and innovative design practices in order to meet the requirements for a reduced weight-to-power ratio, a decreased noise level, and a substantially increased reliability. The specific goals for future rotorcraft transmissions when compared with current state-of-the-art transmissions are a 25 percent weight reduction, a 10-dB reduction in the transmitted noise level, and a system reliability of 5000 hours mean-time-between-removal for the transmission. This paper presents the results of the design studies conducted to meet the stated goals for an advanced rotorcraft transmission. These design studies include system configuration, planetary gear train selection, and reliability prediction methods.

  14. Getting It Right Matters: Climate Spectra and Their Estimation

    NASA Astrophysics Data System (ADS)

    Privalsky, Victor; Yushkov, Vladislav

    2018-06-01

    In many recent publications, climate spectra estimated with different methods from observed, GCM-simulated, and reconstructed time series contain many peaks at time scales from a few years to many decades and even centuries. However, respective spectral estimates obtained with the autoregressive (AR) and multitapering (MTM) methods showed that spectra of climate time series are smooth and contain no evidence of periodic or quasi-periodic behavior. Four order selection criteria for the autoregressive models were studied and proven sufficiently reliable for 25 time series of climate observations at individual locations or spatially averaged at local-to-global scales. As time series of climate observations are short, an alternative reliable nonparametric approach is Thomson's MTM. These results agree with both the earlier climate spectral analyses and the Markovian stochastic model of climate.

  15. A Least-Squares-Based Weak Galerkin Finite Element Method for Second Order Elliptic Equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mu, Lin; Wang, Junping; Ye, Xiu

    Here, in this article, we introduce a least-squares-based weak Galerkin finite element method for the second order elliptic equation. This new method is shown to provide very accurate numerical approximations for both the primal and the flux variables. In contrast to other existing least-squares finite element methods, this new method allows us to use discontinuous approximating functions on finite element partitions consisting of arbitrary polygon/polyhedron shapes. We also develop a Schur complement algorithm for the resulting discretization problem by eliminating all the unknowns that represent the solution information in the interior of each element. Optimal order error estimates for both the primal and the flux variables are established. An extensive set of numerical experiments are conducted to demonstrate the robustness, reliability, flexibility, and accuracy of the least-squares-based weak Galerkin finite element method. Finally, the numerical examples cover a wide range of applied problems, including singularly perturbed reaction-diffusion equations and the flow of fluid in porous media with strong anisotropy and heterogeneity.

  16. A Least-Squares-Based Weak Galerkin Finite Element Method for Second Order Elliptic Equations

    DOE PAGES

    Mu, Lin; Wang, Junping; Ye, Xiu

    2017-08-17

    Here, in this article, we introduce a least-squares-based weak Galerkin finite element method for the second order elliptic equation. This new method is shown to provide very accurate numerical approximations for both the primal and the flux variables. In contrast to other existing least-squares finite element methods, this new method allows us to use discontinuous approximating functions on finite element partitions consisting of arbitrary polygon/polyhedron shapes. We also develop a Schur complement algorithm for the resulting discretization problem by eliminating all the unknowns that represent the solution information in the interior of each element. Optimal order error estimates for both the primal and the flux variables are established. An extensive set of numerical experiments are conducted to demonstrate the robustness, reliability, flexibility, and accuracy of the least-squares-based weak Galerkin finite element method. Finally, the numerical examples cover a wide range of applied problems, including singularly perturbed reaction-diffusion equations and the flow of fluid in porous media with strong anisotropy and heterogeneity.

  17. Development of a diagnostic test set to assess agreement in breast pathology: practical application of the Guidelines for Reporting Reliability and Agreement Studies (GRRAS).

    PubMed

    Oster, Natalia V; Carney, Patricia A; Allison, Kimberly H; Weaver, Donald L; Reisch, Lisa M; Longton, Gary; Onega, Tracy; Pepe, Margaret; Geller, Berta M; Nelson, Heidi D; Ross, Tyler R; Tosteson, Anna N A; Elmore, Joann G

    2013-02-05

    Diagnostic test sets are a valuable research tool that contributes importantly to the validity and reliability of studies that assess agreement in breast pathology. In order to fully understand the strengths and weaknesses of any agreement and reliability study, however, the methods should be fully reported. In this paper we provide a step-by-step description of the methods used to create four complex test sets for a study of diagnostic agreement among pathologists interpreting breast biopsy specimens. We use the newly developed Guidelines for Reporting Reliability and Agreement Studies (GRRAS) as a basis to report these methods. Breast tissue biopsies were selected from the National Cancer Institute-funded Breast Cancer Surveillance Consortium sites. We used random sampling stratified according to the woman's age (40-49 vs. ≥50), parenchymal breast density (low vs. high), and the interpretation of the original pathologist. A 3-member panel of expert breast pathologists first independently interpreted each case using five primary diagnostic categories (non-proliferative changes, proliferative changes without atypia, atypical ductal hyperplasia, ductal carcinoma in situ, and invasive carcinoma). When the experts did not unanimously agree on a case diagnosis, a modified Delphi method was used to determine the reference standard consensus diagnosis. The final test cases were stratified and randomly assigned into one of four unique test sets. We found the GRRAS recommendations to be very useful in reporting diagnostic test set development and recommend inclusion of two additional criteria: (1) characterizing the study population and (2) describing the methods for reference diagnosis, when applicable.

  18. KRAS mutation testing in colorectal cancer: comparison of the results obtained using 3 different methods for the analysis of codons G12 and G13.

    PubMed

    Bihl, Michel P; Hoeller, Sylvia; Andreozzi, Maria Carla; Foerster, Anja; Rufle, Alexander; Tornillo, Luigi; Terracciano, Luigi

    2012-03-01

    Targeting the epidermal growth factor receptor (EGFR) is a new therapeutic option for patients with metastatic colorectal or lung carcinoma. However, the efficiency of the therapy highly depends on the KRAS mutation status of the given tumour. Therefore, reliable and accurate KRAS mutation testing is crucial. Here we investigated 100 colorectal carcinoma samples with known KRAS mutation status (62 mutated cases and 38 wild-type cases) in a comparative manner with three different KRAS mutation testing techniques (Pyrosequencing, Dideoxysequencing and INFINITI) in order to test their reliability and sensitivity. For the large majority of samples (96/100, 96%), the KRAS mutation status obtained by all three methods was the same. Only two cases with clear discrepancies were observed. One case was reported as wild type by the INFINITI method, while the two other methods detected a G13C mutation. In the second case, the mutation could be detected by the Pyrosequencing and INFINITI methods (15% and 15%), while no signal for the mutation could be observed with the Dideoxysequencing method. Two additional unclear results were due, respectively, to the detection of a G12V mutation with the INFINITI method, which was below the cut-off when repeated and which was not detectable by the other two methods, and to very weak signals in a G12V-mutated case with the Dideoxy- and Pyrosequencing methods compared with the INFINITI method. In summary, all three methods are reliable and robust for detecting KRAS mutations. INFINITI, however, seems to be slightly more sensitive than Dideoxy- and Pyrosequencing.

  19. First-principles studies of the order-disorder phase transition in FeCo using the Wang-Landau Monte-Carlo method

    NASA Astrophysics Data System (ADS)

    Pei, Zongrui; Eisenbach, Markus; Stocks, G. Malcolm

    Simulating order-disorder phase transitions in magnetic materials requires the accurate treatment of both the atomic and magnetic interactions, which span a vast configuration space. Using FeCo as a prototype system, we demonstrate that this can be addressed by combining the Locally Self-consistent Multiple Scattering (LSMS) method with the Wang-Landau (WL) Monte-Carlo algorithm. Fe-Co based materials are interesting magnetic materials but a reliable phase diagram of the binary Fe-Co system is still difficult to obtain. Using the combined WL-LSMS method we clarify the existence of the disordered A2 phase and predict the Curie temperature between it and the ordered B2 phase. The WL-LSMS method is readily applicable to the study of second-order phase transitions in other binary and multi-component alloys, thereby providing a means to the direct simulation of order-disorder phase transitions in complex alloys without need of intervening classical model Hamiltonians. We also demonstrate the capability of our method to guide the design of new magnetic materials. This research was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Science and Engineering Division and it used Oak Ridge Leadership Computing Facility resources at Oak Ridge National Laboratory.
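
    The flat-histogram core of the Wang-Landau algorithm can be sketched on a toy problem. The snippet below estimates the density of states of a small 1D Ising ring, standing in for the first-principles WL-LSMS energies of the paper; the lattice size, sweep counts, and flatness test are arbitrary demo choices.

    ```python
    import numpy as np

    # Generic Wang-Landau flat-histogram demo on a 1D Ising ring.
    rng = np.random.default_rng(1)
    N = 12
    spins = rng.choice([-1, 1], size=N)

    def energy(s):
        return int(-np.sum(s * np.roll(s, 1)))   # periodic nearest-neighbour Ising

    ln_g = {}        # log density of states, keyed by energy
    hist = {}        # visit histogram
    ln_f = 1.0       # modification factor ln(f)

    E = energy(spins)
    while ln_f > 1e-3:
        for _ in range(20000):
            i = int(rng.integers(N))
            dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % N])
            E_new = E + dE
            # Accept with min(1, g(E)/g(E_new)) to flatten the histogram.
            if np.log(rng.random()) < ln_g.get(E, 0.0) - ln_g.get(E_new, 0.0):
                spins[i] *= -1
                E = E_new
            ln_g[E] = ln_g.get(E, 0.0) + ln_f
            hist[E] = hist.get(E, 0) + 1
        h = np.array(list(hist.values()))
        if h.min() > 0.8 * h.mean():              # crude flatness criterion
            hist = {k: 0 for k in hist}
            ln_f /= 2.0                           # refine the update factor

    print("visited energies:", sorted(ln_g))
    ```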

  20. Accuracy and reliability of pulp/tooth area ratio in upper canines by peri-apical X-rays.

    PubMed

    Azevedo, A C; Michel-Crosato, E; Biazevic, M G H; Galić, I; Merelli, V; De Luca, S; Cameriere, R

    2014-11-01

    Due to the real need for careful staff training in age assessment, in order to improve capacity, consistency and competence, new research on the reliability and repeatability of methods frequently used in age assessment is required. The aim of this study was twofold: first, to test the accuracy of this method for age estimation; second, to obtain data on the reliability of this technique. A sample of 81 peri-apical radiographs of upper canines (44 men and 37 women), aged between 19 and 74 years, was used; the teeth were taken from the osteological collection of Sassari (Sardinia, Italy). Three blinded observers used the technique in order to perform the age estimation. The mean real age of the 81 observations was 37.21 (CI95% 34.37; 40.05), and estimated ages ranged from 36.65 to 38.99 (CI95%-Ex1 35.42; 41.28; CI95%-Ex2 33.89; 39.41; CI95%-Ex3 35.92; 42.06). The mean absolute differences between the three observers were 3.43, 4.24 and 4.45, respectively, for Ex1×Ex2, Ex1×Ex3 and Ex2×Ex3. The mean absolute differences between real and estimated ages were 2.55 (CI95% 1.90; 3.20), 2.22 (CI95% 1.65; 2.78) and 4.39 (CI95% 3.80; 5.75), respectively, for Ex1, Ex2 and Ex3. No differences were observed among measurements. This technique can be reproduced and repeated after proper training, since high reliability and accuracy were found. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  1. The definition and evaluation of the skills required to obtain a patient's history of illness: the use of videotape recordings

    PubMed Central

    Anderson, J.; Dowling, M. A. C.; Day, J. L.; Pettingale, K. W.

    1970-01-01

    Videotape recording apparatus was used to make records of case histories obtained from patients by students and doctors. These records were studied in order to identify the skills required to obtain a patient's history of illness. Each skill was defined. A questionnaire was developed in order to assess these skills and three independent observers watched the records of eighteen students and completed a questionnaire for each. The results of this were analysed for reliability and reproducibility between examiners. Moderate reliability and reproducibility were demonstrated. The questionnaire appeared to be a valid method of assessment and was capable of providing significant discrimination between students for each skill. A components analysis suggested that the marks for each skill depend on an overall impression obtained by each examiner and this overall impression is influenced by different skills for each examiner. PMID:5488220

  2. 76 FR 42534 - Mandatory Reliability Standards for Interconnection Reliability Operating Limits; System...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-19

    ... Reliability Operating Limits; System Restoration Reliability Standards AGENCY: Federal Energy Regulatory... data necessary to analyze and monitor Interconnection Reliability Operating Limits (IROL) within its... Interconnection Reliability Operating Limits, Order No. 748, 134 FERC ] 61,213 (2011). \\2\\ The term ``Wide-Area...

  3. Reliability of Hull Girder Ultimate Strength of Steel Ships

    NASA Astrophysics Data System (ADS)

    Da-wei, Gao; Gui-jie, Shi

    2018-03-01

    Hull girder ultimate strength is an evaluation index reflecting the true safety margin, or structural redundancy, of container ships. Especially after the hull girder fracture of the MOL COMFORT, an 8,000 TEU class large container ship, on June 17, 2013, the safety of larger container ships has received much more attention. In this paper, different methods of calculating hull girder ultimate strength are first discussed and compared. The bending ultimate strength can be analyzed by the nonlinear finite element method (NFEM) and by an incremental-iterative method, and the shear ultimate strength can be analyzed by NFEM and by simple equations. Then, the probability distributions of hull girder wave loads and still-water loads of container ships are summarized. Finally, the reliability of the hull girder ultimate strength under bending moments and shear forces is analyzed for three container ships using a first-order method. The conclusions can be applied to guide ship design and safety evaluation.
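
    For reference, a first-order calculation of the kind named above reduces, for a normal strength R and load S, to the familiar reliability index and failure probability. The sketch below uses invented statistics, not data from the ships analyzed in the paper.

    ```python
    import numpy as np
    from scipy.stats import norm

    # First-order reliability sketch for the margin g = R - S with
    # independent normal R (ultimate strength) and S (load effect).
    mu_R, sig_R = 5.2e6, 0.4e6   # kNm, hypothetical ultimate bending strength
    mu_S, sig_S = 3.1e6, 0.5e6   # kNm, hypothetical extreme load effect

    beta = (mu_R - mu_S) / np.hypot(sig_R, sig_S)   # reliability (safety) index
    pf = norm.cdf(-beta)                            # first-order failure probability

    print(f"beta = {beta:.2f}, Pf = {pf:.2e}")
    ```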

  4. Application of the Reference Method Isotope Dilution Gas Chromatography Mass Spectrometry (ID/GC/MS) to Establish Metrological Traceability for Calibration and Control of Blood Glucose Test Systems

    PubMed Central

    Andreis, Elisabeth; Küllmer, Kai

    2014-01-01

    Self-monitoring of blood glucose (BG) by means of handheld BG systems is a cornerstone in diabetes therapy. The aim of this article is to describe a procedure with proven traceability for calibration and evaluation of BG systems to guarantee reliable BG measurements. Isotope dilution gas chromatography mass spectrometry (ID/GC/MS) is a method that fulfills all requirements to be used in a higher-order reference measurement procedure. However, this method is not applicable for routine measurements because of the time-consuming sample preparation. A hexokinase method with perchloric acid (PCA) sample pretreatment is used in a measurement procedure for such purposes. This method is directly linked to the ID/GC/MS method by calibration with a glucose solution that has an ID/GC/MS-determined target value. BG systems are calibrated with whole blood samples. The glucose levels in such samples are analyzed by this ID/GC/MS-linked hexokinase method to establish traceability to higher-order reference material. For method comparison, the glucose concentrations in 577 whole blood samples were measured using the PCA-hexokinase method and the ID/GC/MS method; this resulted in a mean deviation of 0.1%. The mean deviation between BG levels measured in >500 valid whole blood samples with BG systems and the ID/GC/MS was 1.1%. BG systems allow a reliable glucose measurement if a true reference measurement procedure, with a noninterrupted traceability chain using ID/GC/MS linked hexokinase method for calibration of BG systems, is implemented. Systems should be calibrated by means of a traceable and defined measurement procedure to avoid bias. PMID:24876614

  5. Reliability of cervical vertebral maturation staging.

    PubMed

    Rainey, Billie-Jean; Burnside, Girvan; Harrison, Jayne E

    2016-07-01

    Growth and its prediction are important for the success of many orthodontic treatments. The aim of this study was to determine the reliability of the cervical vertebral maturation (CVM) method for the assessment of mandibular growth. A group of 20 orthodontic clinicians, inexperienced in CVM staging, was trained to use the improved version of the CVM method for the assessment of mandibular growth with a teaching program. They independently assessed 72 consecutive lateral cephalograms, taken at Liverpool University Dental Hospital, on 2 occasions. The cephalograms were presented in 2 different random orders and interspersed with 11 additional images for standardization. The intraobserver and interobserver agreement values were evaluated using the weighted kappa statistic. The intraobserver and interobserver agreement values were substantial (weighted kappa, 0.6-0.8). The overall intraobserver agreement was 0.70 (SE, 0.01), with average agreement of 89%. The interobserver agreement values were 0.68 (SE, 0.03) for phase 1 and 0.66 (SE, 0.03) for phase 2, with average interobserver agreement of 88%. The intraobserver and interobserver agreement values of classifying the vertebral stages with the CVM method were substantial. These findings demonstrate that this method of CVM classification is reproducible and reliable. Copyright © 2016 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
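
    The agreement statistic used in this study can be reproduced with standard tooling; the sketch below computes a linearly weighted kappa between two raters assigning ordinal CVM stages, with invented ratings.

    ```python
    from sklearn.metrics import cohen_kappa_score

    # Invented ordinal CVM stage assignments (stages 1-6) from two raters.
    rater1 = [1, 2, 2, 3, 4, 4, 5, 6, 3, 2, 5, 6]
    rater2 = [1, 2, 3, 3, 4, 5, 5, 6, 3, 2, 4, 6]

    # Weighted kappa penalizes disagreements by their ordinal distance.
    kappa = cohen_kappa_score(rater1, rater2, weights="linear")
    print(f"linearly weighted kappa = {kappa:.2f}")
    ```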

  6. Information fusion based techniques for HEVC

    NASA Astrophysics Data System (ADS)

    Fernández, D. G.; Del Barrio, A. A.; Botella, Guillermo; Meyer-Baese, Uwe; Meyer-Baese, Anke; Grecos, Christos

    2017-05-01

    To address the conflicting requirements of a multi-parameter H.265/HEVC encoder system, this paper analyzes a set of optimizations intended to improve the trade-off between quality, performance, and power consumption for different applications requiring reliability and accuracy. The method is based on Pareto optimization and has been tested at different resolutions on real-time encoders.

  7. Nanofabrication

    DOEpatents

    Tuominen, Mark; Bal, Mustafa; Russell, Thomas P.; Ursache, Andrei

    2007-03-13

    Pathways to rapid and reliable fabrication of three-dimensional nanostructures are provided. Simple methods are described for the production of well-ordered, multilevel nanostructures. This is accomplished by patterning block copolymer templates with selective exposure to a radiation source. The resulting multi-scale lithographic template can be treated with post-fabrication steps to produce multilevel, three-dimensional, integrated nanoscale media, devices, and systems.

  8. Identifying Exemplary Science Teachers through Students' Perceptions of Their Learning Environment

    ERIC Educational Resources Information Center

    Waldrip, Bruce G.; Fisher, Darrell L.; Dorman, Jeffrey

    2009-01-01

    The purpose of this study was to examine students' psychosocial perceptions of their science classroom learning environment in order to identify exemplary teachers. This mixed-method study used the valid and reliable What Is Happening In this Class? (WIHIC) questionnaire with over 3,000 middle school students in 150 classrooms in Australia.…

  9. Materials thermal and thermoradiative properties/characterization technology

    NASA Technical Reports Server (NTRS)

    Dewitt, D. P.; Ho, C. Y.

    1989-01-01

    Reliable properties data on well characterized materials are necessary for design of experiments and interpretation of experimental results. The activities of CINDAS to provide data bases and predict properties are discussed. An understanding of emissivity behavior is important in order to select appropriate methods for non-contact temperature determination. Related technical issues are identified and recommendations are offered.

  10. Very Large Scale Optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)

    2002-01-01

    The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.
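
    An exterior penalty method of the general kind described above can be sketched in a few lines: a sequence of unconstrained subproblems with an increasing penalty parameter drives the iterate toward the constrained optimum. The toy objective and constraint are illustrative; this is not the BIGDOT implementation.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy problem: minimize f subject to g(x) <= 0 (feasible half-plane).
    f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
    g = lambda x: x[0] + x[1] - 2.0

    def penalized(x, r):
        """Exterior quadratic penalty: active only when the constraint is violated."""
        return f(x) + r * max(0.0, g(x)) ** 2

    x = np.zeros(2)
    for r in (1.0, 10.0, 100.0, 1000.0):    # increasing penalty parameter
        x = minimize(penalized, x, args=(r,), method="BFGS").x
    print("solution:", x.round(3), "constraint value:", round(g(x), 4))
    ```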

  11. A systematic review of publications assessing reliability and validity of the Behavioral Risk Factor Surveillance System (BRFSS), 2004–2011

    PubMed Central

    2013-01-01

    Background In recent years response rates on telephone surveys have been declining. Rates for the behavioral risk factor surveillance system (BRFSS) have also declined, prompting the use of new methods of weighting and the inclusion of cell phone sampling frames. A number of scholars and researchers have conducted studies of the reliability and validity of the BRFSS estimates in the context of these changes. As the BRFSS makes changes in its methods of sampling and weighting, a review of reliability and validity studies of the BRFSS is needed. Methods In order to assess the reliability and validity of prevalence estimates taken from the BRFSS, scholarship published from 2004–2011 dealing with tests of reliability and validity of BRFSS measures was compiled and presented by topics of health risk behavior. Assessments of the quality of each publication were undertaken using a categorical rubric. Higher rankings were achieved by authors who conducted reliability tests using repeated test/retest measures, or who conducted tests using multiple samples. A similar rubric was used to rank validity assessments. Validity tests which compared the BRFSS to physical measures were ranked higher than those comparing the BRFSS to other self-reported data. Literature which undertook more sophisticated statistical comparisons was also ranked higher. Results Overall findings indicated that BRFSS prevalence rates were comparable to other national surveys which rely on self-reports, although specific differences are noted for some categories of response. BRFSS prevalence rates were less similar to surveys which utilize physical measures in addition to self-reported data. There is very little research on reliability and validity for some health topics, but a great deal of information supporting the validity of the BRFSS data for others. Conclusions Limitations of the examination of the BRFSS were due to question differences among surveys used as comparisons, as well as mode of data collection differences. As the BRFSS moves to incorporating cell phone data and changing weighting methods, a review of reliability and validity research indicated that past BRFSS landline only data were reliable and valid as measured against other surveys. New analyses and comparisons of BRFSS data which include the new methodologies and cell phone data will be needed to ascertain the impact of these changes on estimates in the future. PMID:23522349

  12. Direct versus indirect many-body methods for calculating vertical electron affinities: applications to F-, OH-, NH2-, CN-, Cl-, SH- and PH2-

    NASA Astrophysics Data System (ADS)

    Ortiz, J. V.

    1987-05-01

    Electron propagator theory (EPT) is applied to the calculation of vertical ionization energies of the anions F-, Cl-, OH-, SH-, NH2-, PH2- and CN-. Third-order and outer valence approximation (OVA) quasiparticle calculations are compared with ΔMBPT(4) (MBPT, many-body perturbation theory) results using the same basis sets. Agreement with experiment is satisfactory for the EPT calculations except for F- and OH-, while the ΔMBPT treatments fail for CN-. EPT(OVA) estimates are reliable when the discrepancy between second- and third-order results is small. Computational aspects are discussed, showing the relative merits of direct and indirect methods for evaluating electron binding energies.

  13. Electronic Versus Paper-Based Assessment of Health-Related Quality of Life Specific to HIV Disease: Reliability Study of the PROQOL-HIV Questionnaire

    PubMed Central

    Lalanne, Christophe; Goujard, Cécile; Herrmann, Susan; Cheung-Lung, Christian; Brosseau, Jean-Paul; Schwartz, Yannick; Chassany, Olivier

    2014-01-01

    Background Electronic patient-reported outcomes (PRO) provide quick and usually reliable assessments of patients’ health-related quality of life (HRQL). Objective An electronic version of the Patient-Reported Outcomes Quality of Life-human immunodeficiency virus (PROQOL-HIV) questionnaire was developed, and its face validity and reliability were assessed using standard psychometric methods. Methods A sample of 80 French outpatients (66% male, 52/79; mean age 46.7 years, SD 10.9) were recruited. Paper-based and electronic questionnaires were completed in a randomized crossover design (2-7 day interval). Biomedical data were collected. Questionnaire version and order effects were tested on full-scale scores in a 2-way ANOVA with patients as random effects. Test-retest reliability was evaluated using Pearson and intraclass correlation coefficients (ICC, with 95% confidence interval) for each dimension. Usability testing was carried out from patients’ survey reports, specifically, general satisfaction, ease of completion, quality and clarity of user interface, and motivation to participate in follow-up PROQOL-HIV electronic assessments. Results Questionnaire version and administration order effects (N=59 complete cases) were not significant at the 5% level, and no interaction was found between these 2 factors (P=.94). Reliability indexes were acceptable, with Pearson correlations greater than .7 and ICCs ranging from .708 to .939; scores were not statistically different between the two versions. A total of 63 (79%) complete patients’ survey reports were available, and 55% of patients (30/55) reported being satisfied and interested in electronic assessment of their HRQL in clinical follow-up. Individual ratings of PROQOL-HIV user interface (85%-100% of positive responses) confirmed user interface clarity and usability. Conclusions The electronic PROQOL-HIV introduces minor modifications to the original paper-based version, following International Society for Pharmacoeconomics and Outcomes Research (ISPOR) ePRO Task Force guidelines, and shows good reliability and face validity. Patients can complete the computerized PROQOL-HIV questionnaire and the scores from the paper or electronic versions share comparable accuracy and interpretation. PMID:24769643

  14. Evaluation of potential emission spectra for the reliable classification of fluorescently coded materials

    NASA Astrophysics Data System (ADS)

    Brunner, Siegfried; Kargel, Christian

    2011-06-01

    The conservation and efficient use of natural and especially strategic resources like oil and water have become global issues, which increasingly initiate environmental and political activities for comprehensive recycling programs. To effectively reutilize oil-based materials necessary in many industrial fields (e.g. chemical and pharmaceutical industry, automotive, packaging), appropriate methods for a fast and highly reliable automated material identification are required. One non-contacting, color- and shape-independent new technique that eliminates the shortcomings of existing methods is to label materials like plastics with certain combinations of fluorescent markers ("optical codes", "optical fingerprints") incorporated during manufacture. Since time-resolved measurements are complex (and expensive), fluorescent markers must be designed that possess unique spectral signatures. The number of identifiable materials increases with the number of fluorescent markers that can be reliably distinguished within the limited wavelength band available. In this article we shall investigate the reliable detection and classification of fluorescent markers with specific fluorescence emission spectra. These simulated spectra are modeled based on realistic fluorescence spectra acquired from material samples using a modern VNIR spectral imaging system. In order to maximize the number of materials that can be reliably identified, we evaluate the performance of 8 classification algorithms based on different spectral similarity measures. The results help guide the design of appropriate fluorescent markers, optical sensors and the overall measurement system.
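
    One common spectral similarity measure of the sort evaluated above is the spectral angle; the sketch below classifies a noisy synthetic emission spectrum against two invented marker references.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    wl = np.linspace(400.0, 800.0, 200)                 # wavelength grid (nm)
    gauss = lambda mu, s: np.exp(-0.5 * ((wl - mu) / s) ** 2)

    # Invented reference emission spectra for two fluorescent markers.
    references = {"marker_A": gauss(520.0, 25.0), "marker_B": gauss(610.0, 30.0)}

    def spectral_angle(a, b):
        """Angle between spectra; a small angle means high similarity."""
        cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    measured = 0.9 * gauss(523.0, 26.0) + 0.05 * rng.random(wl.size)  # noisy marker A
    best = min(references, key=lambda k: spectral_angle(measured, references[k]))
    print("classified as:", best)
    ```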

  15. The Reliability and Validity of the Computerized Double Inclinometer in Measuring Lumbar Mobility

    PubMed Central

    MacDermid, Joy Christine; Arumugam, Vanitha; Vincent, Joshua Israel; Carroll, Krista L

    2014-01-01

    Study Design: Repeated measures reliability/validity study. Objectives: To determine the concurrent validity, test-retest, inter-rater and intra-rater reliability of lumbar flexion and extension measurements using the Tracker M.E. computerized dual inclinometer (CDI) in comparison to the modified-modified Schober (MMS). Summary of Background: Numerous studies have evaluated the reliability and validity of the various methods of measuring spinal motion, but the results are inconsistent. Differences in equipment and techniques make it difficult to correlate results. Methods: Twenty subjects with back pain and twenty without back pain were selected through convenience sampling. Two examiners measured sagittal plane lumbar range of motion for each subject. Two separate tests with the CDI and one test with the MMS were conducted. Each test consisted of three trials. Instrument and examiner order was randomly assigned. Intra-class correlation coefficients (ICC [2,2]) and Pearson correlation coefficients (r) were used to calculate reliability and concurrent validity, respectively. Results: Intra-trial reliability was high to very high for both the CDI (ICCs 0.85 - 0.96) and the MMS (ICCs 0.84 - 0.98). However, reliability was poor to moderate when the CDI unit had to be repositioned, either by the same rater (ICCs 0.16 - 0.59) or by a different rater (ICCs 0.45 - 0.52). Inter-rater reliability for the MMS was moderate to high (ICCs 0.75 - 0.82), better than the moderate agreement obtained for the CDI (ICCs 0.45 - 0.52). Correlations between the CDI and MMS were poor for flexion (0.32; p<0.05) and poor to moderate (-0.42 - -0.51; p<0.05) for extension measurements. Conclusion: When using the CDI, an average of subsequent tests is required to obtain moderate reliability. The MMS was more reliable than the CDI. The MMS and the CDI measure lumbar movement on different metrics that are not highly related to each other. PMID:25352928

  16. A Second-Order Confirmatory Factor Analysis of the Moral Distress Scale-Revised for Nurses.

    PubMed

    Sharif Nia, Hamid; Shafipour, Vida; Allen, Kelly-Ann; Heidari, Mohammad Reza; Yazdani-Charati, Jamshid; Zareiyan, Armin

    2017-01-01

    Moral distress is a growing problem for healthcare professionals that may lead to dissatisfaction, resignation, or occupational burnout if left unattended, and nurses experience different levels of this phenomenon. This study aims to investigate the factor structure of the Persian version of the Moral Distress Scale-Revised in intensive care and general nurses. This methodological research was conducted with 771 nurses from eight hospitals in the Mazandaran Province of Iran in 2017. Participants completed the Moral Distress Scale-Revised; the data were collected, and the factor structure was assessed using construct, convergent, and divergent validity methods. The reliability of the scale was assessed using internal consistency (Cronbach's alpha, theta, and McDonald's omega coefficients) and construct reliability. Ethical considerations: This study was approved by the Ethics Committee of Mazandaran University of Medical Sciences. The exploratory factor analysis (N = 380) showed that the Moral Distress Scale-Revised has five factors: lack of professional competence at work, ignoring ethical issues and patient conditions, futile care, carrying out the physician's orders without question and unsafe care, and providing care under personal and organizational pressures, which together explained 56.62% of the overall variance. The confirmatory factor analysis (N = 391) supported the five-factor solution and the second-order latent factor model. The first-order model did not show favorable convergent and divergent validity. Ultimately, the Moral Distress Scale-Revised was found to have favorable internal consistency and construct reliability. The Moral Distress Scale-Revised was found to be a multidimensional construct. The data obtained confirmed the hypothesis of the factor structure model with a latent second-order variable. Since the convergent and divergent validity of the scale were not confirmed in this study, further assessment is necessary in future studies.

  17. Certified dual-corrected radiation patterns of phased antenna arrays by offline–online order reduction of finite-element models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sommer, A., E-mail: a.sommer@lte.uni-saarland.de; Farle, O., E-mail: o.farle@lte.uni-saarland.de; Dyczij-Edlinger, R., E-mail: edlinger@lte.uni-saarland.de

    2015-10-15

    This paper presents a fast numerical method for computing certified far-field patterns of phased antenna arrays over broad frequency bands as well as wide ranges of steering and look angles. The proposed scheme combines finite-element analysis, dual-corrected model-order reduction, and empirical interpolation. To assure the reliability of the results, improved a posteriori error bounds for the radiated power and directive gain are derived. Both the reduced-order model and the error-bounds algorithm feature offline–online decomposition. A real-world example is provided to demonstrate the efficiency and accuracy of the suggested approach.

  18. Order Selection for General Expression of Nonlinear Autoregressive Model Based on Multivariate Stepwise Regression

    NASA Astrophysics Data System (ADS)

    Shi, Jinfei; Zhu, Songqing; Chen, Ruwen

    2017-12-01

    An order selection method based on multiple stepwise regression is proposed for the General Expression of the Nonlinear Autoregressive (GNAR) model, which converts the model-order problem into variable selection for a multiple linear regression equation. The partial autocorrelation function is adopted to define the linear terms in the GNAR model. The result is set as the initial model, and the nonlinear terms are then introduced gradually. Statistics are chosen to measure the improvement that both the newly introduced and the originally included variables bring to the model characteristics, and these statistics determine which model variables to retain or eliminate. The optimal model is then obtained through measurement of the data-fitting effect or through significance testing. Simulation and classic time-series data experiments show that the proposed method is simple, reliable, and applicable to practical engineering.
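
    The selection loop described above can be illustrated generically: starting from a linear term, candidate nonlinear regressors are added only while an information criterion clearly improves. The static regression, candidate set, and AIC threshold below are simplified stand-ins for the GNAR terms and statistics of the paper.

    ```python
    import numpy as np

    def aic(y, X):
        """AIC of an ordinary least-squares fit with design matrix X."""
        coef, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
        n = len(y)
        rss = float(np.sum((y - X @ coef) ** 2))
        return n * np.log(rss / n) + 2.0 * X.shape[1]

    rng = np.random.default_rng(2)
    x = rng.standard_normal(300)
    y = 0.6 * x + 0.4 * x ** 2 + 0.1 * rng.standard_normal(300)   # toy data

    candidates = {"x": x, "x^2": x ** 2, "x^3": x ** 3, "sin(x)": np.sin(x)}
    selected, X = ["x"], x.reshape(-1, 1)      # linear term enters first
    current = aic(y, X)
    improved = True
    while improved:
        improved = False
        for name, col in candidates.items():
            if name in selected:
                continue
            trial = np.column_stack([X, col])
            if aic(y, trial) < current - 2.0:  # require a clear AIC gain
                selected.append(name)
                X, current = trial, aic(y, trial)
                improved = True
    print("retained terms:", selected)
    ```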

  19. A hyperspectral image optimizing method based on sub-pixel MTF analysis

    NASA Astrophysics Data System (ADS)

    Wang, Yun; Li, Kai; Wang, Jinqiang; Zhu, Yajie

    2015-04-01

    Hyperspectral imaging collects tens or hundreds of images in contiguous bands across the electromagnetic spectrum so that details at different wavelengths can be represented. A popular hyperspectral imaging method uses a tunable optical band-pass filter placed in front of the focal plane to acquire images at different wavelengths. In order to alleviate the influence of chromatic aberration in some segments of a hyperspectral series, this paper presents a hyperspectral image optimization method that uses the sub-pixel MTF to evaluate image blur. The method extracts the edge feature in a target window by means of the line spread function (LSF) to calculate a reliable position for the edge; the evaluation grid in each line is then interpolated from the real pixel values according to its position relative to the optimal edge, and the sub-pixel MTF is used to analyze the image in the frequency domain, which increases the resolution of the MTF calculation. The sub-pixel MTF evaluation is reliable, since no image rotation or pixel-value estimation is needed and no artificial information is introduced. Theoretical analysis shows that the proposed method is reliable and efficient when evaluating common images with edges of small tilt angle in real scenes. It also provides a direction for subsequent hyperspectral image blur evaluation and for real-time focal-plane adjustment in related imaging systems.
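
    The edge-based MTF chain underlying such a method can be sketched as follows: an oversampled edge spread function (ESF) is differentiated into a line spread function (LSF), whose normalized Fourier magnitude gives the MTF. The synthetic Gaussian-blurred edge below stands in for the interpolated sub-pixel grid described above.

    ```python
    import numpy as np
    from scipy.special import erf

    x = np.linspace(-8.0, 8.0, 513)                    # oversampled positions (px)
    esf = 0.5 * (1.0 + erf(x / (np.sqrt(2) * 1.2)))    # edge blurred by sigma = 1.2 px

    lsf = np.gradient(esf, x)                          # ESF -> LSF
    mtf = np.abs(np.fft.rfft(lsf))                     # LSF -> MTF
    mtf /= mtf[0]                                      # normalize to unity at DC
    freq = np.fft.rfftfreq(lsf.size, d=x[1] - x[0])    # cycles / pixel

    # MTF50: first frequency where contrast falls below 50 %
    mtf50 = freq[np.argmax(mtf < 0.5)]
    print(f"MTF50 ~ {mtf50:.3f} cycles/pixel")
    ```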

  20. Fine reservoir structure modeling based upon 3D visualized stratigraphic correlation between horizontal wells: methodology and its application

    NASA Astrophysics Data System (ADS)

    Chenghua, Ou; Chaochun, Li; Siyuan, Huang; Sheng, James J.; Yuan, Xu

    2017-12-01

    As the platform-based horizontal well production mode has been widely applied in the petroleum industry, building a reliable fine reservoir structure model by using horizontal well stratigraphic correlation has become very important. Horizontal wells usually extend between the upper and bottom boundaries of the target formation, with limited penetration points. Using these limited penetration points to conduct well deviation correction means the formation depth information obtained is not accurate, which makes it hard to build a fine structure model. In order to solve this problem, a method of fine reservoir structure modeling, based on 3D visualized stratigraphic correlation among horizontal wells, is proposed. This method increases the accuracy of the estimated depths of the penetration points and can also effectively predict the top and bottom interfaces in the horizontal penetrating section. Moreover, it greatly increases both the number of depth data points available and the accuracy of these data, which achieves the goal of building a reliable fine reservoir structure model from the stratigraphic correlation among horizontal wells. Using this method, four 3D fine structure layer models have been successfully built for a shale gas field developed with the platform-based horizontal well production mode. The shale gas field is located in the east of the Sichuan Basin, China; the successful application of the method has proven its feasibility and reliability.

  1. Reliability Evaluation for Clustered WSNs under Malware Propagation

    PubMed Central

    Shen, Shigen; Huang, Longjun; Liu, Jianhua; Champion, Adam C.; Yu, Shui; Cao, Qiying

    2016-01-01

    We consider a clustered wireless sensor network (WSN) under epidemic-malware propagation conditions and solve the problem of how to evaluate its reliability so as to ensure efficient, continuous, and dependable transmission of sensed data from sensor nodes to the sink. Facing the contradiction between malware intention and continuous-time Markov chain (CTMC) randomness, we introduce a strategic game that can predict malware infection in order to model a successful infection as a CTMC state transition. Next, we devise a novel measure to compute the Mean Time to Failure (MTTF) of a sensor node, which represents the reliability of a sensor node continuously performing tasks such as sensing, transmitting, and fusing data. Since clustered WSNs can be regarded as parallel-serial-parallel systems, the reliability of a clustered WSN can be evaluated via classical reliability theory. Numerical results show the influence of parameters such as the true positive rate and the false positive rate on a sensor node’s MTTF. Furthermore, we validate the method of reliability evaluation for a clustered WSN according to the number of sensor nodes in a cluster, the number of clusters in a route, and the number of routes in the WSN. PMID:27294934
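
    The parallel-series-parallel evaluation mentioned above has a compact closed form once node reliabilities are known; the sketch below assumes exponential node lifetimes (R = exp(-t/MTTF)) and invented counts for nodes, clusters, and routes.

    ```python
    import numpy as np

    def node_R(t, mttf):
        """Node reliability over mission time t, assuming exponential lifetimes."""
        return np.exp(-t / mttf)

    t = 1000.0                        # mission time (hours), hypothetical
    R_node = node_R(t, mttf=4000.0)   # MTTF from the CTMC analysis (invented here)

    n_nodes_per_cluster = 10          # parallel sensing nodes in a cluster
    n_clusters_per_route = 4          # clusters in series along a route
    n_routes = 3                      # parallel routes to the sink

    R_cluster = 1 - (1 - R_node) ** n_nodes_per_cluster   # parallel
    R_route = R_cluster ** n_clusters_per_route           # series
    R_wsn = 1 - (1 - R_route) ** n_routes                 # parallel
    print(f"WSN reliability at t = {t:.0f} h: {R_wsn:.4f}")
    ```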

  2. Reliability Evaluation for Clustered WSNs under Malware Propagation.

    PubMed

    Shen, Shigen; Huang, Longjun; Liu, Jianhua; Champion, Adam C; Yu, Shui; Cao, Qiying

    2016-06-10

    We consider a clustered wireless sensor network (WSN) under epidemic-malware propagation conditions and solve the problem of how to evaluate its reliability so as to ensure efficient, continuous, and dependable transmission of sensed data from sensor nodes to the sink. Facing the contradiction between malware intention and continuous-time Markov chain (CTMC) randomness, we introduce a strategic game that can predict malware infection in order to model a successful infection as a CTMC state transition. Next, we devise a novel measure to compute the Mean Time to Failure (MTTF) of a sensor node, which represents the reliability of a sensor node continuously performing tasks such as sensing, transmitting, and fusing data. Since clustered WSNs can be regarded as parallel-serial-parallel systems, the reliability of a clustered WSN can be evaluated via classical reliability theory. Numerical results show the influence of parameters such as the true positive rate and the false positive rate on a sensor node's MTTF. Furthermore, we validate the method of reliability evaluation for a clustered WSN according to the number of sensor nodes in a cluster, the number of clusters in a route, and the number of routes in the WSN.

  3. [The choice of color in fixed prosthetics: what steps should be followed for a reliable outcome?].

    PubMed

    Vanheusden, Alain; Mainjot, Amélie

    2004-01-01

    The creation of a perfectly matched esthetic fixed restoration is undeniably one of the most difficult challenges in modern dentistry. The final outcome depends on several essential steps: the use of an appropriate light source, the accurate analysis and correct evaluation of the patient's tooth parameters (morphology, colour, surface texture, ...), the clear and precise transmission of these data to the laboratory, and their sound interpretation by a dental technician who has fully mastered esthetic prosthetic techniques. The purpose of this paper is to give the practitioner a reproducible clinical method for achieving a reliable dental colorimetric analysis.

  4. Bayesian inference based on dual generalized order statistics from the exponentiated Weibull model

    NASA Astrophysics Data System (ADS)

    Al Sobhi, Mashail M.

    2015-02-01

    Bayesian estimation for the two parameters and the reliability function of the exponentiated Weibull model are obtained based on dual generalized order statistics (DGOS). Also, Bayesian prediction bounds for future DGOS from exponentiated Weibull model are obtained. The symmetric and asymmetric loss functions are considered for Bayesian computations. The Markov chain Monte Carlo (MCMC) methods are used for computing the Bayes estimates and prediction bounds. The results have been specialized to the lower record values. Comparisons are made between Bayesian and maximum likelihood estimators via Monte Carlo simulation.
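
    A minimal MCMC sketch in the spirit of the computations above: random-walk Metropolis over the two exponentiated-Weibull shape parameters (scale fixed at one), with flat priors on the log-parameters. The likelihood is the standard exponentiated-Weibull density; the data, priors, and tuning are demo assumptions, not the paper's DGOS-based posterior.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def loglik(data, alpha, theta):
        """Exponentiated-Weibull log-likelihood (scale fixed at 1)."""
        z = data ** theta
        return np.sum(np.log(alpha) + np.log(theta) + (theta - 1.0) * np.log(data)
                      - z + (alpha - 1.0) * np.log1p(-np.exp(-z)))

    data = rng.weibull(1.5, size=100)     # stand-in sample (alpha = 1, theta = 1.5)
    state = np.zeros(2)                   # [log alpha, log theta]
    ll = loglik(data, *np.exp(state))
    samples = []
    for _ in range(20000):
        prop = state + 0.15 * rng.standard_normal(2)   # random-walk proposal
        ll_prop = loglik(data, *np.exp(prop))
        if np.log(rng.random()) < ll_prop - ll:        # Metropolis acceptance
            state, ll = prop, ll_prop
        samples.append(np.exp(state))

    post = np.array(samples[5000:])                    # discard burn-in
    print("posterior means (alpha, theta):", post.mean(axis=0).round(2))
    ```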

  5. Validation of the Spanish Short Self-Regulation Questionnaire (SSSRQ) through Rasch Analysis.

    PubMed

    Garzón Umerenkova, Angélica; de la Fuente Arias, Jesús; Martínez-Vicente, José Manuel; Zapata Sevillano, Lucía; Pichardo, Mari Carmen; García-Berbén, Ana Belén

    2017-01-01

    Background: The aim of the study was to psychometrically characterize the Spanish Short Self-Regulation Questionnaire (SSSRQ) through Rasch analysis. Materials and Methods: 831 Spanish university students (262 men), between 17 and 39 years of age and ranging from the first to the fifth year of studies, completed the SSSRQ questionnaire. Confirmatory factor analysis (CFA) was carried out in order to establish structural adequacy. Afterward, by means of the Rasch model, a study of each subscale was conducted to test for dimensionality, fit of the sample questions, functionality of the response categories, reliability, and estimation of Differential Item Functioning by gender and course. Results: The four subscales comply with the unidimensionality criteria, the questions are in line with the model, the response categories operate properly, and the reliability of the sample is acceptable. Nonetheless, the test could benefit from the inclusion of additional items of both high and low difficulty in order to increase construct validity, discrimination, and reliability for the respondents. Several items with differences by gender and course were also identified. Discussion: The results evidence the need for and adequacy of this complementary psychometric analysis strategy, in relation to the CFA, to enhance the instrument.

  6. Validation of the Spanish Short Self-Regulation Questionnaire (SSSRQ) through Rasch Analysis

    PubMed Central

    Garzón Umerenkova, Angélica; de la Fuente Arias, Jesús; Martínez-Vicente, José Manuel; Zapata Sevillano, Lucía; Pichardo, Mari Carmen; García-Berbén, Ana Belén

    2017-01-01

    Background: The aim of the study was to psychometrically characterize the Spanish Short Self-Regulation Questionnaire (SSSRQ) through Rasch analysis. Materials and Methods: 831 Spanish university students (262 men), between 17 and 39 years of age and ranging from the first to the fifth year of studies, completed the SSSRQ questionnaire. Confirmatory factor analysis (CFA) was carried out in order to establish structural adequacy. Afterward, by means of the Rasch model, a study of each subscale was conducted to test for dimensionality, fit of the sample questions, functionality of the response categories, reliability, and estimation of Differential Item Functioning by gender and course. Results: The four subscales comply with the unidimensionality criteria, the questions are in line with the model, the response categories operate properly, and the reliability of the sample is acceptable. Nonetheless, the test could benefit from the inclusion of additional items of both high and low difficulty in order to increase construct validity, discrimination, and reliability for the respondents. Several items with differences by gender and course were also identified. Discussion: The results evidence the need for and adequacy of this complementary psychometric analysis strategy, in relation to the CFA, to enhance the instrument. PMID:28298898

  7. Neurology objective structured clinical examination reliability using generalizability theory

    PubMed Central

    Park, Yoon Soo; Lukas, Rimas V.; Brorson, James R.

    2015-01-01

    Objectives: This study examines factors affecting reliability, or consistency of assessment scores, from an objective structured clinical examination (OSCE) in neurology through generalizability theory (G theory). Methods: Data include assessments from a multistation OSCE taken by 194 medical students at the completion of a neurology clerkship. Facets evaluated in this study include cases, domains, and items. Domains refer to areas of skill (or constructs) that the OSCE measures. G theory is used to estimate variance components associated with each facet, derive reliability, and project the number of cases required to obtain a reliable (consistent, precise) score. Results: Reliability using G theory is moderate (Φ coefficient = 0.61, G coefficient = 0.64). Performance is similar across cases but differs by the particular domain, such that the majority of variance is attributed to the domain. Projections in reliability estimates reveal that students need to participate in 3 OSCE cases in order to increase reliability beyond the 0.70 threshold. Conclusions: This novel use of G theory in evaluating an OSCE in neurology provides meaningful measurement characteristics of the assessment. Differing from prior work in other medical specialties, the cases students were randomly assigned did not influence their OSCE score; rather, scores varied in expected fashion by domain assessed. PMID:26432851
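
    The decision-study projection reported above follows from the phi-coefficient formula once variance components are estimated; the sketch below uses invented components chosen to mirror the finding that about three cases push phi past 0.70.

    ```python
    # Variance components (invented, chosen to mirror the reported pattern).
    var_person = 0.45     # universe-score (true ability) variance
    var_case = 0.10       # case difficulty variance
    var_residual = 0.45   # person-by-case interaction and error

    for n_cases in (1, 2, 3, 5, 8):
        abs_error = (var_case + var_residual) / n_cases   # absolute error variance
        phi = var_person / (var_person + abs_error)       # dependability (phi)
        print(f"{n_cases} cases: phi = {phi:.2f}")
    ```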

  8. Reliable fusion of control and sensing in intelligent machines. Thesis

    NASA Technical Reports Server (NTRS)

    Mcinroy, John E.

    1991-01-01

    Although robotics research has produced a wealth of sophisticated control and sensing algorithms, very little research has been aimed at reliably combining these control and sensing strategies so that a specific task can be executed. To improve the reliability of robotic systems, analytic techniques are developed for calculating the probability that a particular combination of control and sensing algorithms will satisfy the required specifications. The probability can then be used to assess the reliability of the design. An entropy formulation is first used to quickly eliminate designs not capable of meeting the specifications. Next, a framework for analyzing reliability based on the first order second moment methods of structural engineering is proposed. To ensure performance over an interval of time, lower bounds on the reliability of meeting a set of quadratic specifications with a Gaussian discrete time invariant control system are derived. A case study analyzing visual positioning in a robotic system is considered. The reliability of meeting timing and positioning specifications in the presence of camera pixel truncation, forward and inverse kinematic errors, and Gaussian joint measurement noise is determined. This information is used to select a visual sensing strategy, a kinematic algorithm, and a discrete compensator capable of accomplishing the desired task. Simulation results using PUMA 560 kinematic and dynamic characteristics are presented.

  9. Safety, reliability, maintainability and quality provisions for the Space Shuttle program

    NASA Technical Reports Server (NTRS)

    1990-01-01

    This publication establishes common safety, reliability, maintainability and quality provisions for the Space Shuttle Program. NASA Centers shall use this publication both as the basis for negotiating safety, reliability, maintainability and quality requirements with Shuttle Program contractors and as the guideline for conduct of program safety, reliability, maintainability and quality activities at the Centers. Centers shall assure that applicable provisions of the publication are imposed in lower tier contracts. Centers shall give due regard to other Space Shuttle Program planning in order to provide an integrated total Space Shuttle Program activity. In the implementation of safety, reliability, maintainability and quality activities, consideration shall be given to hardware complexity, supplier experience, state of hardware development, unit cost, and hardware use. The approach and methods for contractor implementation shall be described in the contractors' safety, reliability, maintainability and quality plans. This publication incorporates provisions of NASA documents: NHB 1700.1 'NASA Safety Manual, Vol. 1'; NHB 5300.4(IA), 'Reliability Program Provisions for Aeronautical and Space System Contractors'; and NHB 5300.4(1B), 'Quality Program Provisions for Aeronautical and Space System Contractors'. It has been tailored from the above documents based on experience in other programs. It is intended that this publication be reviewed and revised, as appropriate, to reflect new experience and to assure continuing viability.

  10. Cyber Physical Systems for User Reliability Measurements in a Sharing Economy Environment

    PubMed Central

    Seo, Aria; Kim, Yeichang

    2017-01-01

    As the sharing economic market grows, the number of users is also increasing but many problems arise in terms of reliability between providers and users in the processing of services. The existing methods provide shared economic systems that judge the reliability of the provider from the viewpoint of the user. In this paper, we have developed a system for establishing mutual trust between providers and users in a shared economic environment to solve existing problems. In order to implement a system that can measure and control users’ situation in a shared economic environment, we analyzed the necessary factors in a cyber physical system (CPS). In addition, a user measurement system based on a CPS structure in a sharing economic environment is implemented through analysis of the factors to consider when constructing a CPS. PMID:28805709

  11. Cyber Physical Systems for User Reliability Measurements in a Sharing Economy Environment.

    PubMed

    Seo, Aria; Jeong, Junho; Kim, Yeichang

    2017-08-13

    As the sharing economic market grows, the number of users is also increasing but many problems arise in terms of reliability between providers and users in the processing of services. The existing methods provide shared economic systems that judge the reliability of the provider from the viewpoint of the user. In this paper, we have developed a system for establishing mutual trust between providers and users in a shared economic environment to solve existing problems. In order to implement a system that can measure and control users' situation in a shared economic environment, we analyzed the necessary factors in a cyber physical system (CPS). In addition, a user measurement system based on a CPS structure in a sharing economic environment is implemented through analysis of the factors to consider when constructing a CPS.

  12. Verification of Ceramic Structures

    NASA Astrophysics Data System (ADS)

    Behar-Lafenetre, Stephanie; Cornillon, Laurence; Rancurel, Michael; De Graaf, Dennis; Hartmann, Peter; Coe, Graham; Laine, Benoit

    2012-07-01

    In the framework of the “Mechanical Design and Verification Methodologies for Ceramic Structures” contract [1] awarded by ESA, Thales Alenia Space has investigated the literature and practices in affiliated industries to propose a methodological guideline for the verification of ceramic spacecraft and instrument structures. It has been written so as to be applicable to most types of ceramic or glass-ceramic materials - typically Cesic®, HBCesic®, Silicon Nitride, Silicon Carbide and ZERODUR®. The proposed guideline describes the activities to be performed at material level in order to cover all the specific aspects of ceramics (Weibull distribution, brittle behaviour, sub-critical crack growth). Elementary tests and their post-processing methods are described, and recommendations for optimization of the test plan are given in order to obtain a consistent database. The application of this method is shown on an example in a dedicated article [7]. Then the verification activities to be performed at system level are described. This includes classical verification activities based on the relevant standard (ECSS Verification [4]), plus specific analytical, testing and inspection features. The analysis methodology takes into account the specific behaviour of ceramic materials, especially the statistical distribution of failures (Weibull) and the method to transfer it from elementary data to a full-scale structure. The demonstration of the efficiency of this method is described in a dedicated article [8]. The verification is completed by classical full-scale testing activities. Indications about proof testing, cases of use and implementation are given, and specific inspection and protection measures are described. These additional activities are necessary to ensure the required reliability. The aim of the guideline is to describe how to reach the same reliability level as for structures made of more classical materials (metals, composites).
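
    The Weibull statistics at the heart of such verification can be illustrated with the classical volume-scaled failure probability; the modulus, characteristic strength, and volumes below are hypothetical values, not data from the guideline.

    ```python
    import numpy as np

    m = 10.0            # Weibull modulus (hypothetical ceramic-like value)
    sigma0 = 120.0      # MPa, characteristic strength at the reference volume
    V, V0 = 5.0, 1.0    # component volume vs. test-specimen reference volume

    def p_fail(sigma):
        """Weibull failure probability under uniform stress, volume-scaled."""
        return 1.0 - np.exp(-(V / V0) * (sigma / sigma0) ** m)

    for s in (60.0, 80.0, 100.0):
        print(f"sigma = {s:.0f} MPa -> Pf = {p_fail(s):.3e}")
    ```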

  13. Data mining-based coefficient of influence factors optimization of test paper reliability

    NASA Astrophysics Data System (ADS)

    Xu, Peiyao; Jiang, Huiping; Wei, Jieyao

    2018-05-01

    Testing is a significant part of the teaching process: it demonstrates the final outcome of school teaching through teachers' teaching level and students' scores. Test paper analysis is a complex operation characterized by non-linear relations among the length of the paper, the time duration, and the degree of difficulty. It is therefore difficult, with general methods, to optimize the coefficients of the influence factors under different conditions in order to obtain test papers with clearly higher reliability [1]. With data mining techniques such as Support Vector Regression (SVR) and Genetic Algorithms (GA), we can model the test paper analysis and optimize the coefficients of the impact factors for higher reliability. The test results show that the combination of SVR and GA achieves an effective improvement in reliability. The optimized coefficients of the influence factors are practical in actual applications, and the whole optimization procedure can provide a model basis for test paper analysis.
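
    A compact sketch of the SVR-plus-GA pipeline described above: a support vector regression surrogate maps influence-factor coefficients to reliability, and a toy genetic loop searches the surrogate for a better setting. The synthetic training data and the simplified GA operators are demo assumptions, not the paper's implementation.

    ```python
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(4)
    X = rng.uniform(0, 1, size=(200, 3))                    # length, duration, difficulty
    y = 0.9 - (X[:, 0] - 0.6) ** 2 - (X[:, 2] - 0.5) ** 2   # synthetic reliability
    model = SVR(kernel="rbf", C=10.0).fit(X, y)             # surrogate of the analysis

    pop = rng.uniform(0, 1, size=(40, 3))                   # initial GA population
    for _ in range(50):
        fitness = model.predict(pop)
        parents = pop[np.argsort(fitness)[-20:]]            # keep the fitter half
        children = (parents[rng.integers(20, size=20)]
                    + parents[rng.integers(20, size=20)]) / 2.0   # blend crossover
        children += 0.05 * rng.standard_normal(children.shape)    # Gaussian mutation
        pop = np.clip(np.vstack([parents, children]), 0, 1)

    best = pop[np.argmax(model.predict(pop))]
    print("suggested factor coefficients:", np.round(best, 3))
    ```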

  14. State recovery and lockstep execution restart in a system with multiprocessor pairing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gara, Alan; Gschwind, Michael K; Salapura, Valentina

    System, method and computer program product for a multiprocessing system to offer selective pairing of processor cores for increased processing reliability. A selective pairing facility is provided that selectively connects, i.e., pairs, multiple microprocessor or processor cores to provide one highly reliable thread (or thread group). Each pair of microprocessor or processor cores that provides one highly reliable thread connects with system components such as a memory "nest" (or memory hierarchy), an optional system controller, an optional interrupt controller, optional I/O or peripheral devices, etc. The memory nest is attached to the selective pairing facility via a switch or a bus. Each selectively paired processor core includes a transactional execution facility, wherein the system is configured to enable processor rollback to a previous state and to reinitialize lockstep execution in order to recover from an incorrect execution when one has been detected by the selective pairing facility.

  15. Method of excess fractions with application to absolute distance metrology: wavelength selection and the effects of common error sources.

    PubMed

    Falaggis, Konstantinos; Towers, David P; Towers, Catherine E

    2012-09-20

    Multiwavelength interferometry (MWI) is a well established technique in the field of optical metrology. Previously, we have reported a theoretical analysis of the method of excess fractions that describes the mutual dependence of unambiguous measurement range, reliability, and the measurement wavelengths. In this paper, wavelength selection strategies are introduced that build on that theoretical description and maximize the reliability of the calculated fringe order for a given measurement range, number of wavelengths, and level of phase noise. Practical implementation issues for an MWI interferometer are analyzed theoretically. It is shown that dispersion compensation is best implemented by use of reference measurements around absolute zero in the interferometer. Furthermore, the effects of wavelength uncertainty allow the ultimate performance of an MWI interferometer to be estimated.
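
    The excess-fractions principle analyzed above can be sketched by scanning the integer fringe order of one wavelength and scoring the wrapped residuals at the others; the wavelengths, range, and noise level below are illustrative choices, not the paper's optimized selections.

    ```python
    import numpy as np

    lams = np.array([633.0, 642.0, 655.0, 690.0]) * 1e-9   # wavelengths (m)
    L_true = 1.2345e-3                                     # unknown distance (m)
    rng = np.random.default_rng(5)
    frac = (L_true / lams) % 1.0 + rng.normal(0.0, 0.003, lams.size)  # noisy fractions

    L_max = 2.0e-3                                         # unambiguous range
    best_L, best_err = 0.0, np.inf
    for m0 in range(int(L_max / lams[0]) + 1):
        L = (m0 + frac[0]) * lams[0]                       # candidate distance
        # Wrapped residuals between predicted and measured excess fractions.
        r = (L / lams[1:] - frac[1:] + 0.5) % 1.0 - 0.5
        err = float(np.sum(r ** 2))
        if err < best_err:
            best_L, best_err = L, err

    print(f"estimated L = {best_L * 1e3:.6f} mm (true {L_true * 1e3:.6f} mm)")
    ```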

  16. Ballistic Puncture Self-Healing Polymeric Materials

    NASA Technical Reports Server (NTRS)

    Gordon, Keith L.; Siochi, Emilie J.; Yost, William T.; Bogert, Phil B.; Howell, Patricia A.; Cramer, K. Elliott; Burke, Eric R.

    2017-01-01

    Space exploration launch costs on the order of $10,000 per pound provide an incentive to seek ways to reduce structural mass while maintaining structural function to assure safety and reliability. Damage-tolerant structural systems provide a route to avoiding weight penalty while enhancing vehicle safety and reliability. Self-healing polymers capable of spontaneous puncture repair show promise to mitigate potentially catastrophic damage from events such as micrometeoroid penetration. Effective self-repair requires these materials to quickly heal following projectile penetration while retaining some structural function during the healing processes. Although there are materials known to possess this capability, they are typically not considered for structural applications. Current efforts use inexpensive experimental methods to inflict damage, after which analytical procedures are identified to verify that function is restored. Two candidate self-healing polymer materials for structural engineering systems are used to test these experimental methods.

  17. Metamodel-based inverse method for parameter identification: elastic-plastic damage model

    NASA Astrophysics Data System (ADS)

    Huang, Changwu; El Hami, Abdelkhalak; Radi, Bouchaïb

    2017-04-01

    This article proposes a metamodel-based inverse method for material parameter identification and applies it to elastic-plastic damage model parameter identification. An elastic-plastic damage model is presented and implemented in numerical simulation. The metamodel-based inverse method is proposed in order to overcome the computational-cost disadvantage of inverse methods. In this method, a Kriging metamodel is constructed from a design of experiments in order to model the relationship between the material parameters and the objective function values of the inverse problem, and the optimization procedure is then executed on the metamodel. Application of the presented material model and the proposed parameter identification method to the standard A 2017-T4 tensile test shows that the presented elastic-plastic damage model adequately describes the material's mechanical behaviour, and that the proposed metamodel-based inverse method not only enhances the efficiency of parameter identification but also gives reliable results.
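
    The core loop of such a metamodel-based inverse method can be sketched in a few lines, under stated assumptions: a scalar least-squares objective, a cheap stand-in for the finite element damage simulation, and scikit-learn's GaussianProcessRegressor playing the role of the Kriging metamodel. All parameter names and values are illustrative.

    import numpy as np
    from scipy.optimize import minimize
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(1)
    strain = np.linspace(0, 0.1, 50)

    def simulate(params):
        """Stand-in for the expensive elastic-plastic damage simulation."""
        E, sigma_y = params
        return np.minimum(E * strain, sigma_y)    # toy bilinear stress-strain response

    exp_curve = simulate([200.0, 5.0]) + rng.normal(0, 0.05, strain.size)  # 'experiment'

    def objective(params):
        """Least-squares mismatch between simulation and experiment."""
        return np.sum((simulate(params) - exp_curve) ** 2)

    # Design of experiments: sample the parameter space, run the 'simulator' there.
    design = rng.uniform([100.0, 1.0], [300.0, 10.0], size=(40, 2))
    values = np.array([objective(p) for p in design])

    # Kriging metamodel of the objective (Gaussian process regression).
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=[50.0, 2.0]),
                                  normalize_y=True).fit(design, values)

    # Optimize on the cheap metamodel instead of the expensive simulator.
    res = minimize(lambda p: gp.predict(p.reshape(1, -1))[0],
                   x0=np.array([150.0, 4.0]), bounds=[(100, 300), (1, 10)])
    print("identified parameters [E, sigma_y]:", res.x)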

  18. Integrating structure-based and ligand-based approaches for computational drug design.

    PubMed

    Wilson, Gregory L; Lill, Markus A

    2011-04-01

    Methods utilized in computer-aided drug design can be classified into two major categories: structure based and ligand based, using information on the structure of the protein or on the biological and physicochemical properties of bound ligands, respectively. In recent years there has been a trend towards integrating these two methods in order to enhance the reliability and efficiency of computer-aided drug-design approaches by combining information from both the ligand and the protein. This trend resulted in a variety of methods that include: pseudoreceptor methods, pharmacophore methods, fingerprint methods and approaches integrating docking with similarity-based methods. In this article, we will describe the concepts behind each method and selected applications.

  19. Practical Framework: Implementing OEE Method in Manufacturing Process Environment

    NASA Astrophysics Data System (ADS)

    Maideen, N. C.; Sahudin, S.; Mohd Yahya, N. H.; Norliawati, A. O.

    2016-02-01

    A manufacturing process environment requires reliable machinery in order to satisfy market demand. Ideally, a reliable machine is expected to operate and produce a quality product at its maximum designed capability. However, for various reasons, machines are usually unable to achieve the desired performance. Since performance affects the productivity of the system, a measurement technique should be applied. Overall Equipment Effectiveness (OEE) is a good method to measure the performance of a machine, and the reliable result it produces can then be used to propose suitable corrective actions. Many published papers discuss the purpose and benefits of OEE, covering the what and why factors; however, the how factor, namely the implementation of OEE in a manufacturing process environment, has not yet been revealed. Thus, this paper presents a practical framework for implementing OEE, and a case study is discussed to explain each proposed step in detail. The proposed framework is beneficial to engineers, especially beginners, who want to start measuring machine performance and later improve it.
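
    For reference, OEE is conventionally computed as the product of availability, performance, and quality. The sketch below uses the textbook definitions with invented shift figures; the paper's own case-study numbers are not reproduced here.

    # OEE = Availability x Performance x Quality (textbook definitions;
    # the shift numbers below are illustrative, not from the paper's case study).
    planned_time_min = 480          # planned production time for the shift
    downtime_min = 60               # breakdowns + setup
    ideal_cycle_time_min = 0.5      # design cycle time per part
    total_parts = 700
    good_parts = 665

    run_time = planned_time_min - downtime_min
    availability = run_time / planned_time_min
    performance = (ideal_cycle_time_min * total_parts) / run_time
    quality = good_parts / total_parts

    oee = availability * performance * quality
    print(f"A={availability:.2%} P={performance:.2%} Q={quality:.2%} OEE={oee:.2%}")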

  20. LD-SPatt: large deviations statistics for patterns on Markov chains.

    PubMed

    Nuel, G

    2004-01-01

    Statistics on Markov chains are widely used for the study of patterns in biological sequences. Statistics on these models can be computed through several approaches; approaches based on the central limit theorem (CLT), which produce Gaussian approximations, are among the most popular. Unfortunately, when looking for a pattern of interest, these methods have to deal with tail-distribution events for which the CLT approximation is especially poor. In this paper, we propose a new approach based on large deviations theory to assess pattern statistics. We first recall theoretical results for empirical mean (level 1) as well as empirical distribution (level 2) large deviations on Markov chains. Then, we present applications of these results, focusing on numerical issues. LD-SPatt is the name of the GPL software implementing these algorithms. We compare this approach to several existing ones in terms of complexity and reliability, and show that the large deviations are more reliable than the Gaussian approximations in absolute values as well as in terms of ranking, and are at least as reliable as compound Poisson approximations. We finally discuss some further possible improvements and applications of this new method.

  1. A Comparative Analysis of Taguchi Methodology and Shainin System DoE in the Optimization of Injection Molding Process Parameters

    NASA Astrophysics Data System (ADS)

    Khavekar, Rajendra; Vasudevan, Hari, Dr.; Modi, Bhavik

    2017-08-01

    Two well-known Design of Experiments (DoE) methodologies, Taguchi Methods (TM) and the Shainin System (SS), are compared and analyzed in this study through their implementation in a plastic injection molding unit. Experiments were performed using TM and SS at a company manufacturing perfume bottle caps (made of acrylic material) to find the root cause of defects and to optimize the process parameters for minimum rejection. The experiments reduced the rejection rate from approximately 40% during trial runs to 8.57%, which is quite low, representing successful implementation of these DoE methods. The comparison showed that both methodologies identified the same set of variables as critical for defect reduction, but with a change in their order of significance. Also, Taguchi Methods require a larger number of experiments and consume more time compared to the Shainin System; the Shainin System is less complicated and easy to implement, whereas Taguchi Methods are statistically more reliable for the optimization of process parameters. Finally, the experimentation implied that DoE methods are robust and reliable in implementation as organizations attempt to improve quality through optimization.

  2. Methods for assessing the quality of data in public health information systems: a critical review.

    PubMed

    Chen, Hong; Yu, Ping; Hailey, David; Wang, Ning

    2014-01-01

    The quality of data in public health information systems can be ensured by effective data quality assessment. In order to conduct effective data quality assessment, measurable data attributes have to be precisely defined, and reliable and valid measurement methods have to be used to measure each attribute. We conducted a systematic review of data quality assessment methods for public health using major databases and well-known institutional websites. Thirty-five studies were eligible for inclusion in the review. A total of 49 attributes of data quality were identified from the literature. Completeness, accuracy and timeliness were the three most frequently assessed attributes of data quality. Most studies directly examined data values, complemented by exploring either data users' perceptions or documentation quality. However, current data quality assessment methods have limitations: a lack of consensus on the attributes measured; inconsistent definitions of the data quality attributes; a lack of mixed methods for assessing data quality; and inadequate attention to reliability and validity. Removal of these limitations is an opportunity for further improvement.

  3. 78 FR 18333 - North American Electric Reliability Corporation; Notice of Filing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-26

    ... Electric Reliability Corporation; Notice of Filing Take notice that on March 11, 2013, the North American Electric Reliability Corporation and SERC Reliability Corporation submitted a compliance filing in accordance with the Federal Energy Regulatory Commission's Order (FERC or Commission) in Regional Reliability...

  4. Modeling Self-Heating Effects in Nanoscale Devices

    NASA Astrophysics Data System (ADS)

    Raleva, K.; Shaik, A. R.; Vasileska, D.; Goodnick, S. M.

    2017-08-01

    Accurate thermal modeling and design of microelectronic devices and thin film structures at the micro- and nanoscales pose a challenge to electrical engineers who are less familiar with the basic concepts and ideas of sub-continuum heat transport. This book aims to bridge that gap. Efficient heat removal methods are necessary to increase device performance and device reliability. The authors provide readers with a combination of nanoscale experimental techniques and accurate modeling methods that must be employed in order to determine a device's temperature profile.

  5. Quantification of the Relationship between Surrogate Fuel Structure and Performance

    DTIC Science & Technology

    2012-07-31

    order to account for known deficiencies [18]. The frequencies are then used to calculate the zero point energy (ZPE). In the G3 theory HF/6-31G* was used...for the ZPE and the new procedure is likely to be more reliable. Also in contrast to previous G series composite methods, the Hartree–Fock energy...The total energy is obtained by adding the previously calculated ZPE. Durant and Rohlfing [38] reported that B3LYP density functional methods provide

  6. Observer-based monitoring of heat exchangers.

    PubMed

    Astorga-Zaragoza, Carlos-Manuel; Alvarado-Martínez, Víctor-Manuel; Zavala-Río, Arturo; Méndez-Ocaña, Rafael-Maxim; Guerrero-Ramírez, Gerardo-Vicente

    2008-01-01

    The goal of this work is to provide a method for monitoring performance degradation in counter-flow double-pipe heat exchangers. The overall heat transfer coefficient is estimated by an adaptive observer and monitored in order to infer when the heat exchanger needs preventive or corrective maintenance. A simplified mathematical model is used to synthesize the adaptive observer and a more complex model is used for simulation. The reliability of the proposed method was demonstrated via numerical simulations and laboratory experiments with a bench-scale pilot plant.

  7. Calibrations of the LHD Thomson scattering system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamada, I., E-mail: yamadai@nifs.ac.jp; Funaba, H.; Yasuhara, R.

    2016-11-15

    The Thomson scattering diagnostic systems are widely used for the measurements of absolute local electron temperatures and densities of fusion plasmas. In order to obtain accurate and reliable temperature and density data, careful calibrations of the system are required. We have tried several calibration methods since the second LHD experiment campaign in 1998. We summarize the current status of the calibration methods for the electron temperature and density measurements by the LHD Thomson scattering diagnostic system. Future plans are briefly discussed.

  8. Calibrations of the LHD Thomson scattering system.

    PubMed

    Yamada, I; Funaba, H; Yasuhara, R; Hayashi, H; Kenmochi, N; Minami, T; Yoshikawa, M; Ohta, K; Lee, J H; Lee, S H

    2016-11-01

    The Thomson scattering diagnostic systems are widely used for the measurements of absolute local electron temperatures and densities of fusion plasmas. In order to obtain accurate and reliable temperature and density data, careful calibrations of the system are required. We have tried several calibration methods since the second LHD experiment campaign in 1998. We summarize the current status of the calibration methods for the electron temperature and density measurements by the LHD Thomson scattering diagnostic system. Future plans are briefly discussed.

  9. Standardization for Ki-67 Assessment in Moderately Differentiated Breast Cancer. A Retrospective Analysis of the SAKK 28/12 Study

    PubMed Central

    Varga, Zsuzsanna; Cassoly, Estelle; Li, Qiyu; Oehlschlegel, Christian; Tapia, Coya; Lehr, Hans Anton; Klingbiel, Dirk; Thürlimann, Beat; Ruhstaller, Thomas

    2015-01-01

    Background: Proliferative activity (Ki-67 labelling index) in breast cancer increasingly serves as an additional tool in the decision for or against adjuvant chemotherapy in midrange hormone-receptor-positive breast cancer. The Ki-67 index has previously been shown to suffer from high inter-observer variability, especially in midrange (G2) breast carcinomas. In this study we conducted a systematic approach using different Ki-67 assessments on large tissue sections in order to identify the method with the highest reliability and the lowest variability. Materials and Methods: Five breast pathologists retrospectively analyzed the proliferative activity of 50 G2 invasive breast carcinomas by assessing Ki-67 immunohistochemistry on large tissue sections. Ki-67 assessments were done on light microscopy and on digital images following these methods: 1) assessing five regions, 2) assessing only darkly stained nuclei, and 3) considering only condensed proliferative areas (‘hotspots’). An individual review (the first described assessment, from 2008) was also performed. The assessments on light microscopy were done by estimation. All measurements were performed three times. Inter-observer and intra-observer reliabilities were calculated using the approach proposed by Eliasziw et al. Clinical cutoffs (14% and 20%) were tested using Fleiss’ kappa. Results: There was good intra-observer reliability in 5 of 7 methods (ICC: 0.76–0.89). The two highest inter-observer reliabilities were fair to moderate (ICC: 0.71 and 0.74), obtained with two methods (region analysis and individual review) on light microscopy. Fleiss’ kappa values at the 14% cut-off were highest (moderate; kappa 0.58) using the original recommendation on the light microscope. Fleiss’ kappa values at the 20% cut-off were highest (kappa 0.48 each) for hotspot analysis on light microscopy and digital analysis. No methodology using digital analysis was superior to the methods on the light microscope. Conclusion: Our results show that all light-microscopy methods for Ki-67 assessment in large tissue sections yielded good intra-observer reliability. Region analysis and individual review (the original recommendation) on light microscopy yielded the highest inter-observer reliability. These results show a slight improvement over previously published data on poor reproducibility and thus might offer a practical, pragmatic way to routinely assess the Ki-67 index in G2 breast carcinomas. PMID:25885288

  10. Ordering of Z-numbers

    NASA Astrophysics Data System (ADS)

    Mohamad, Daud; Shaharani, Saidatull Akma; Kamis, Nor Hanimah

    2017-08-01

    The concept of the Z-number, introduced by Zadeh in 2010, has captured the attention of many due to its numerous applications in the area of Computing with Words (CWW). A Z-number is an ordered pair of fuzzy numbers, (A, R), where A essentially plays the role of a fuzzy restriction on a real-valued uncertain variable and R is a measure of reliability of the first component. Besides its theoretical development, Z-numbers have been successfully applied to decision making problems under uncertain environments. In any decision making evaluation using Z-numbers, ideally the final outcome of the calculation should also be a Z-number. A question then arises: how do we order Z-numbers so that the preference of the alternatives can be ranked appropriately? In this paper, we propose a method of ordering Z-numbers via the transformation of the Z-numbers to fuzzy numbers. The Z-numbers are then ranked using a fuzzy number ranking method. The proposed method is tested on several combinations of Z-numbers to investigate its effectiveness. The effect of different values of A and R on the ordering of Z-numbers is analyzed and discussed.
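
    As a toy illustration of the convert-then-rank idea (one common scheme from the Z-number literature, not necessarily the exact transformation used in this paper), the restriction A can be weighted by the centroid of the reliability R and the resulting fuzzy numbers ranked by centroid. Triangular membership functions are assumed.

    # Toy sketch: order Z-numbers by converting them to fuzzy numbers (assumed
    # scheme: scale the restriction A by the centroid of R, then rank by
    # centroid; illustrates the conversion idea, not the paper's exact method).
    def tri_centroid(a, b, c):
        """Centroid of a triangular fuzzy number (a, b, c)."""
        return (a + b + c) / 3.0

    def z_rank_key(z):
        A, R = z                        # A, R: triangular fuzzy numbers
        alpha = tri_centroid(*R)        # crisp weight of the reliability
        # Scale A by sqrt(alpha), a weighting used in the Z-number literature.
        return tri_centroid(*(x * alpha ** 0.5 for x in A))

    zs = {
        "z1": ((0.3, 0.5, 0.7), (0.7, 0.8, 0.9)),   # moderate value, high reliability
        "z2": ((0.4, 0.6, 0.8), (0.2, 0.3, 0.4)),   # higher value, low reliability
    }
    print(sorted(zs, key=lambda k: z_rank_key(zs[k]), reverse=True))  # ['z1', 'z2']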

  11. Designing optimal universal pulses using second-order, large-scale, non-linear optimization

    NASA Astrophysics Data System (ADS)

    Anand, Christopher Kumar; Bain, Alex D.; Curtis, Andrew Thomas; Nie, Zhenghua

    2012-06-01

    Recently, RF pulse design using first-order and quasi-second-order methods has been actively investigated. We present a full second-order design method capable of incorporating relaxation and inhomogeneity in B0 and B1. Our model is formulated as a generic optimization problem, making it easy to incorporate diverse pulse sequence features. To tame the computational cost, we present a method of calculating second derivatives in at most a constant multiple of the first-derivative calculation time; this is further accelerated by using symbolic solutions of the Bloch equations. We illustrate the relative merits and performance of quasi-Newton and full second-order optimization with a series of examples, showing that even a pulse already optimized using other methods can be visibly improved. To be useful in CPMG experiments, a universal refocusing pulse should be independent of the delay time and insensitive to the relaxation time and RF inhomogeneity. We design such a pulse and show that, using it, we can obtain reliable R2 measurements for offsets within ±γB1. Finally, we compare our optimal refocusing pulse with other published refocusing pulses by performing CPMG experiments.

  12. Two new termite (Isoptera: Rhinotermitidae) feeding indexes for woods of varying palatability

    Treesearch

    Chris J. Peterson; P.D. Gerard

    2009-01-01

    In order for bait matrices, treated wood and resistant wood species to be properly evaluated in the laboratory for termite resistance or palatability, reliable tests that can distinguish between food choices must be developed; otherwise, inferior products may enter the marketplace. In the current study, a bioassay method is proposed that allows the calculation of two...

  13. Examining the Psychometric Properties of Acceptance Scale for Kindergarten-Revised (ASK-R) in Turkish

    ERIC Educational Resources Information Center

    Ersan, Deniz Tekin; Ata, Seda; Kaya, Sinem

    2017-01-01

    The aim of this study is to investigate the applicability of ASK-R among Turkish pre-school children and determine the psychometric properties (validity and reliability) of this scale. This study was conducted with 167 students from different pre-schools in Mugla. Within the aim of the study, three different methods were set down in order to…

  14. Application of principal component analysis (PCA) and improved joint probability distributions to the inverse first-order reliability method (I-FORM) for predicting extreme sea states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eckert-Gallup, Aubrey C.; Sallaberry, Cédric J.; Dallman, Ann R.

    Environmental contours describing extreme sea states are generated as the input for numerical or physical model simulations as part of the standard current practice for designing marine structures to survive extreme sea states. These environmental contours are characterized by combinations of significant wave height (Hs) and either energy period (Te) or peak period (Tp) values calculated for a given recurrence interval using a set of data based on hindcast simulations or buoy observations over a sufficient period of record. The use of the inverse first-order reliability method (I-FORM) is a standard design practice for generating environmental contours. This paper develops enhanced methodologies for data analysis prior to the application of the I-FORM, including the use of principal component analysis (PCA) to create an uncorrelated representation of the variables under consideration, as well as new distribution and parameter fitting techniques. As a result, these modifications better represent the measured data and, therefore, should contribute to the development of more realistic representations of environmental contours of extreme sea states for determining design loads for marine structures.

  15. Application of principal component analysis (PCA) and improved joint probability distributions to the inverse first-order reliability method (I-FORM) for predicting extreme sea states

    DOE PAGES

    Eckert-Gallup, Aubrey C.; Sallaberry, Cédric J.; Dallman, Ann R.; ...

    2016-01-06

    Environmental contours describing extreme sea states are generated as the input for numerical or physical model simulations as part of the standard current practice for designing marine structures to survive extreme sea states. These environmental contours are characterized by combinations of significant wave height (Hs) and either energy period (Te) or peak period (Tp) values calculated for a given recurrence interval using a set of data based on hindcast simulations or buoy observations over a sufficient period of record. The use of the inverse first-order reliability method (I-FORM) is a standard design practice for generating environmental contours. This paper develops enhanced methodologies for data analysis prior to the application of the I-FORM, including the use of principal component analysis (PCA) to create an uncorrelated representation of the variables under consideration, as well as new distribution and parameter fitting techniques. As a result, these modifications better represent the measured data and, therefore, should contribute to the development of more realistic representations of environmental contours of extreme sea states for determining design loads for marine structures.
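
    The I-FORM step itself is compact: a circle of radius β in standard-normal space is mapped through the fitted joint distribution into physical (Hs, Te) space. The sketch below assumes a Weibull marginal for Hs and an Hs-conditional lognormal for Te as stand-ins for the fitted models, and omits the paper's PCA decorrelation step for brevity.

    # Minimal I-FORM contour sketch (assumed distributions: a Weibull marginal
    # for Hs and an Hs-conditional lognormal for Te stand in for the fitted
    # joint model; the paper's PCA decorrelation step is omitted for brevity).
    import numpy as np
    from scipy import stats

    T_return_yr = 50.0
    sea_state_hr = 3.0
    n_states = T_return_yr * 365.25 * 24 / sea_state_hr   # sea states per return period
    beta = stats.norm.ppf(1.0 - 1.0 / n_states)           # reliability index

    theta = np.linspace(0, 2 * np.pi, 360)
    u1, u2 = beta * np.cos(theta), beta * np.sin(theta)   # circle in u-space

    hs = stats.weibull_min.ppf(stats.norm.cdf(u1), c=1.5, scale=2.0)   # marginal Hs
    mu_te = 0.7 + 0.3 * np.log(hs + 1.0)                  # assumed conditional median
    te = stats.lognorm.ppf(stats.norm.cdf(u2), s=0.2, scale=np.exp(mu_te))

    contour = np.column_stack([hs, te])                   # (Hs, Te) environmental contour
    print("max Hs on contour: %.2f m" % hs.max())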

  16. Micro-Raman spectroscopic identification of bacterial cells of the genus Staphylococcus and dependence on their cultivation conditions.

    PubMed

    Harz, M; Rösch, P; Peschke, K-D; Ronneberger, O; Burkhardt, H; Popp, J

    2005-11-01

    Microbial contamination is not only a medical problem, but also plays a large role in pharmaceutical clean room production and food processing technology. Therefore, many techniques have been developed to achieve differentiation and identification of microorganisms. Among these methods, vibrational spectroscopic techniques (IR, Raman and SERS) are useful tools because of their rapidity and sensitivity. Recently we have shown that micro-Raman spectroscopy in combination with a support vector machine is an extremely capable approach for fast, reliable, non-destructive online identification of single bacteria belonging to different genera. In this contribution, in order to simulate different environmental conditions and evaluate our method on a reliable dataset, we analyzed different Staphylococcus strains grown under varying cultivation conditions. First, micro-Raman spectra of the bulk material and of single bacterial cells grown under the same conditions were recorded and used separately for a distinct chemotaxonomic classification of the strains. Furthermore, Raman spectra were recorded from single bacterial cells cultured under various conditions to study the influence of cultivation on the discrimination ability. This dataset was analyzed both with a hierarchical cluster analysis (HCA) and a support vector machine (SVM).

  17. A low-cost efficient multiplex PCR for prenatal sex determination in bovine fetus using free fetal DNA in maternal plasma.

    PubMed

    Davoudi, Arash; Seighalani, Ramin; Aleyasin, Seyed Ahmad; Tarang, Alireza; Salehi, Abdolreza Salehi; Tahmoressi, Farideh

    2012-04-01

    In order to establish a reliable non-invasive method for sex determination of a bovine fetus in a routine setting, the possibility of identifying specific sequences on the fetal X and Y chromosomes in maternal plasma was evaluated using conventional multiplex polymerase chain reaction (PCR) analysis. The aim of this study was to provide a rapid and reliable method for sexing bovine fetuses. In this experimental study, peripheral blood samples were taken from 38 pregnant heifers at 8 to 38 weeks of gestation. DNA template was extracted by the phenol-chloroform method from 350 µl of maternal plasma. Two primer pairs, for the bovine amelogenin gene (bAML) and for BC1.2, were used to amplify fragments from the X and Y chromosomes. A multiplex PCR reaction was optimized for amplification of 467 bp and 341 bp fragments from the X and Y bAML gene and a 190 bp fragment from BC1.2 on the Y chromosome. The 467 bp fragment was observed in all 38 samples. Both the 341 and 190 bp fragments were detected only in the 24 plasma samples from male calves. The sensitivity and specificity of the test were 100%, with no false negative or false positive results. The results showed that the phenol-chloroform method is a simple and suitable method for isolating fetal DNA from maternal plasma. The multiplex PCR method is an accessible, non-invasive approach that is cost efficient and reliable for sexing bovine fetuses.

  18. Effective quadrature formula in solving linear integro-differential equations of order two

    NASA Astrophysics Data System (ADS)

    Eshkuvatov, Z. K.; Kammuji, M.; Long, N. M. A. Nik; Yunus, Arif A. M.

    2017-08-01

    In this note, we approximately solve the general form of second-order Fredholm-Volterra integro-differential equations (IDEs) with boundary conditions and show that the proposed method is effective and reliable. Initially, the IDE is reduced to an integral equation of the third kind by using standard integration techniques and the identity between multiple and single integrals; truncated Legendre series are then used to estimate the unknown function. For the kernel integrals, we apply the Gauss-Legendre quadrature formula, and the collocation points are chosen as the roots of the Legendre polynomials. Finally, the integral equation of the third kind is reduced to a system of algebraic equations, and Gaussian elimination is applied to obtain approximate solutions. Numerical examples and comparisons with other methods reveal that the proposed method is very effective and dominates others in many cases. The general theory of existence of the solution is also discussed.
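
    The quadrature step underlying such schemes is standard; a minimal sketch of Gauss-Legendre integration on a generic smooth integrand (not the paper's specific kernel) is:

    # Gauss-Legendre quadrature as used for the kernel integrals (generic smooth
    # integrand for illustration, not the paper's specific IDE kernel).
    import numpy as np

    def gauss_legendre(f, a, b, n):
        """Approximate the integral of f over [a, b] with n Legendre nodes."""
        x, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
        t = 0.5 * (b - a) * x + 0.5 * (b + a)       # map nodes to [a, b]
        return 0.5 * (b - a) * np.sum(w * f(t))

    # Example: integral of exp(x) on [0, 1] is e - 1.
    approx = gauss_legendre(np.exp, 0.0, 1.0, 8)
    print(approx, np.e - 1)                          # agrees to near machine precision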

  19. Quantitative metal magnetic memory reliability modeling for welded joints

    NASA Astrophysics Data System (ADS)

    Xing, Haiyan; Dang, Yongbin; Wang, Ben; Leng, Jiancheng

    2016-03-01

    Metal magnetic memory (MMM) testing has been widely used to inspect welded joints. However, load levels, environmental magnetic fields, and measurement noise make MMM data dispersive and bring difficulty to quantitative evaluation. In order to promote the development of quantitative MMM reliability assessment, a new MMM model is presented for welded joints. Steel Q235 welded specimens were tested along longitudinal and horizontal lines with a TSC-2M-8 instrument in tensile fatigue experiments. X-ray testing was carried out synchronously to verify the MMM results. It is found that MMM testing can detect hidden cracks earlier than X-ray testing. Moreover, the MMM gradient vector sum Kvs is sensitive to the damage degree, especially at the early and hidden damage stages. Considering the dispersion of MMM data, the statistical law of Kvs is investigated, showing that Kvs obeys a Gaussian distribution. Kvs is therefore a suitable MMM parameter for establishing a reliability model of welded joints. Finally, an original quantitative MMM reliability model is presented based on improved stress-strength interference theory. It is shown that the reliability degree R gradually decreases with decreasing residual life ratio T, and that the maximal error between the predicted reliability degree R1 and the verification reliability degree R2 is 9.15%. The presented method provides a novel tool for reliability testing and evaluation of welded joints in practical engineering.
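
    Under the Gaussian behaviour verified for Kvs, a stress-strength interference reliability reduces to a normal-tail formula, R = Φ((μ_S − μ_s)/√(σ_S² + σ_s²)). A small sketch with invented moments (not the paper's data):

    # Stress-strength interference under a Gaussian assumption (the moment
    # values below are invented for illustration, not taken from the paper).
    from math import sqrt
    from statistics import NormalDist

    mu_strength, sd_strength = 120.0, 12.0   # 'strength' variable, e.g. critical Kvs
    mu_stress, sd_stress = 80.0, 15.0        # 'stress' variable, e.g. measured Kvs

    # R = P(strength > stress) = Phi((mu_S - mu_s) / sqrt(sd_S^2 + sd_s^2))
    z = (mu_strength - mu_stress) / sqrt(sd_strength**2 + sd_stress**2)
    R = NormalDist().cdf(z)
    print(f"reliability degree R = {R:.4f}")   # ~0.981 for these moments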

  20. Characterizing wind power resource reliability in southern Africa

    DOE PAGES

    Fant, Charles; Gunturu, Bhaskar; Schlosser, Adam

    2015-08-29

    Producing electricity from wind is attractive because it provides a clean, low-maintenance power supply. However, wind resource is intermittent on various timescales, occasionally introducing large and sudden changes in power supply. A better understanding of this variability can greatly benefit power grid planning. In the following study, wind resource is characterized using metrics that highlight these intermittency issues, thereby identifying areas of high and low wind power reliability in southern Africa and Kenya at different time-scales. After developing a wind speed profile, these metrics are applied at various heights in order to assess the added benefit of raising the wind turbine hub. Furthermore, since the interconnection of wind farms can aid in reducing the overall intermittency, the value of interconnecting nearby sites is mapped using two distinct methods. Of the countries in this region, the Republic of South Africa has shown the most interest in wind power investment. For this reason, we focus parts of the study on wind reliability in that country. The study finds that, although mean Wind Power Density is high in South Africa compared to its neighboring countries, wind power resource tends to be less reliable than in other parts of southern Africa, namely central Tanzania. We also find that South Africa's potential varies over different timescales, with higher reliability in summer than winter, and higher reliability during the day than at night. The study concludes by introducing two methods and measures to characterize the value of interconnection, including the use of principal component analysis to identify areas with a common signal.

  1. Characterizing wind power resource reliability in southern Africa

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fant, Charles; Gunturu, Bhaskar; Schlosser, Adam

    Producing electricity from wind is attractive because it provides a clean, low-maintenance power supply. However, wind resource is intermittent on various timescales, occasionally introducing large and sudden changes in power supply. A better understanding of this variability can greatly benefit power grid planning. In the following study, wind resource is characterized using metrics that highlight these intermittency issues, thereby identifying areas of high and low wind power reliability in southern Africa and Kenya at different time-scales. After developing a wind speed profile, these metrics are applied at various heights in order to assess the added benefit of raising the wind turbine hub. Furthermore, since the interconnection of wind farms can aid in reducing the overall intermittency, the value of interconnecting nearby sites is mapped using two distinct methods. Of the countries in this region, the Republic of South Africa has shown the most interest in wind power investment. For this reason, we focus parts of the study on wind reliability in that country. The study finds that, although mean Wind Power Density is high in South Africa compared to its neighboring countries, wind power resource tends to be less reliable than in other parts of southern Africa, namely central Tanzania. We also find that South Africa's potential varies over different timescales, with higher reliability in summer than winter, and higher reliability during the day than at night. The study concludes by introducing two methods and measures to characterize the value of interconnection, including the use of principal component analysis to identify areas with a common signal.

  2. A Novel Method to Handle the Effect of Uneven Sampling Effort in Biodiversity Databases

    PubMed Central

    Pardo, Iker; Pata, María P.; Gómez, Daniel; García, María B.

    2013-01-01

    How reliable are results on the spatial distribution of biodiversity based on databases? Many studies have evidenced the uncertainty related to this kind of analysis due to sampling effort bias, and the need for its quantification. Although a number of methods are available for this, little is known about their statistical limitations and discrimination capability, which could seriously constrain their use. We assess for the first time the discrimination capacity of two widely used methods and a newly proposed one (FIDEGAM), all based on species accumulation curves, under different scenarios of sampling exhaustiveness, using Receiver Operating Characteristic (ROC) analyses. Additionally, we examine to what extent the output of each method represents the sampling completeness in a simulated scenario where the true species richness is known. Finally, we apply FIDEGAM to a real situation and explore the spatial patterns of plant diversity in a National Park. FIDEGAM showed an excellent capability to discriminate between well and poorly sampled areas regardless of sampling exhaustiveness, whereas the other methods failed. Accordingly, FIDEGAM values were strongly correlated with the true percentage of species detected in the simulated scenario, whereas sampling completeness estimated with the other methods showed no relationship, due to null discrimination capability. Quantifying sampling effort is necessary to account for the uncertainty in biodiversity analyses; however, not all proposed methods are equally reliable. Our comparative analysis demonstrated that FIDEGAM was the most accurate discriminator method in all scenarios of sampling exhaustiveness, and therefore it can be efficiently applied to most databases in order to enhance the reliability of biodiversity analyses. PMID:23326357

  3. A novel method to handle the effect of uneven sampling effort in biodiversity databases.

    PubMed

    Pardo, Iker; Pata, María P; Gómez, Daniel; García, María B

    2013-01-01

    How reliable are results on the spatial distribution of biodiversity based on databases? Many studies have evidenced the uncertainty related to this kind of analysis due to sampling effort bias, and the need for its quantification. Although a number of methods are available for this, little is known about their statistical limitations and discrimination capability, which could seriously constrain their use. We assess for the first time the discrimination capacity of two widely used methods and a newly proposed one (FIDEGAM), all based on species accumulation curves, under different scenarios of sampling exhaustiveness, using Receiver Operating Characteristic (ROC) analyses. Additionally, we examine to what extent the output of each method represents the sampling completeness in a simulated scenario where the true species richness is known. Finally, we apply FIDEGAM to a real situation and explore the spatial patterns of plant diversity in a National Park. FIDEGAM showed an excellent capability to discriminate between well and poorly sampled areas regardless of sampling exhaustiveness, whereas the other methods failed. Accordingly, FIDEGAM values were strongly correlated with the true percentage of species detected in the simulated scenario, whereas sampling completeness estimated with the other methods showed no relationship, due to null discrimination capability. Quantifying sampling effort is necessary to account for the uncertainty in biodiversity analyses; however, not all proposed methods are equally reliable. Our comparative analysis demonstrated that FIDEGAM was the most accurate discriminator method in all scenarios of sampling exhaustiveness, and therefore it can be efficiently applied to most databases in order to enhance the reliability of biodiversity analyses.

  4. A fast and reliable readout method for quantitative analysis of surface-enhanced Raman scattering nanoprobes on chip surface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Hyejin; Jeong, Sinyoung; Ko, Eunbyeol

    2015-05-15

    Surface-enhanced Raman scattering techniques have been widely used for bioanalysis due to their high sensitivity and multiplex capacity. However, the point-scanning method using a micro-Raman system, which is the most common method in the literature, has the disadvantage of extremely long measurement times for on-chip immunoassays adopting a large chip area of approximately 1-mm scale and a confocal beam spot of ca. 1-μm size. Alternative methods such as a sampled spot scan with high confocality and a large-area scan with enlarged field of view and low confocality have been utilized to minimize the measurement time in practice. In this study, we analyzed the two methods with respect to signal-to-noise ratio and sampling-led signal fluctuations to obtain insights into a fast and reliable readout strategy. On this basis, we propose a methodology for fast and reliable quantitative measurement of the whole chip area. As a proof-of-concept experiment, the proposed method adopts a raster scan covering the full area of a 100 μm × 100 μm region while accumulating signals in the CCD detector for a single spectrum per frame. A single 10 s scan over the 100 μm × 100 μm area yielded much higher sensitivity than sampled spot scanning measurements, and none of the signal fluctuations attributed to a sampled spot scan. This readout method can serve as one of the key technologies that will bring quantitative multiplexed detection and analysis into practice.

  5. Quantitation of permethylated N-glycans through multiple-reaction monitoring (MRM) LC-MS/MS.

    PubMed

    Zhou, Shiyue; Hu, Yunli; DeSantos-Garcia, Janie L; Mechref, Yehia

    2015-04-01

    The important biological roles of glycans and their implications in disease development and progression have created a demand for the development of sensitive quantitative glycomics methods. Quantitation of glycans existing at low abundance is still analytically challenging. In this study, an N-linked glycan quantitation method using multiple-reaction monitoring (MRM) on a triple quadrupole instrument was developed. The optimum normalized collision energy (CE) was determined to be 30% for N-glycans that are both sialylated and fucosylated, 35% for N-glycans that are either fucosylated or sialylated, and 35% for mannose and complex-type N-glycans. Additionally, the use of three transitions was shown to facilitate reliable quantitation. A total of 88 N-glycan compositions in human blood serum were quantified using this MRM approach. Reliable detection and quantitation of these glycans was achieved when the equivalent of 0.005 μL of blood serum was analyzed. Accordingly, N-glycans can be reliably quantified in pooled human blood serum down to the equivalent of a hundredth of a microliter, spanning a dynamic concentration range of three orders of magnitude. MRM was also effectively utilized to quantitatively compare the expression of N-glycans derived from brain-targeting breast carcinoma cells (MDA-MB-231BR) and metastatic breast cancer cells (MDA-MB-231). Thus, the described MRM method for permethylated N-glycans enables rapid and reliable identification and quantitation of glycans derived from glycoproteins purified or present in complex biological samples.

  6. Quantitation of Permethylated N-Glycans through Multiple-Reaction Monitoring (MRM) LC-MS/MS

    NASA Astrophysics Data System (ADS)

    Zhou, Shiyue; Hu, Yunli; DeSantos-Garcia, Janie L.; Mechref, Yehia

    2015-04-01

    The important biological roles of glycans and their implications in disease development and progression have created a demand for the development of sensitive quantitative glycomics methods. Quantitation of glycans existing at low abundance is still analytically challenging. In this study, an N-linked glycan quantitation method using multiple-reaction monitoring (MRM) on a triple quadrupole instrument was developed. The optimum normalized collision energy (CE) was determined to be 30% for N-glycans that are both sialylated and fucosylated, 35% for N-glycans that are either fucosylated or sialylated, and 35% for mannose and complex-type N-glycans. Additionally, the use of three transitions was shown to facilitate reliable quantitation. A total of 88 N-glycan compositions in human blood serum were quantified using this MRM approach. Reliable detection and quantitation of these glycans was achieved when the equivalent of 0.005 μL of blood serum was analyzed. Accordingly, N-glycans can be reliably quantified in pooled human blood serum down to the equivalent of a hundredth of a microliter, spanning a dynamic concentration range of three orders of magnitude. MRM was also effectively utilized to quantitatively compare the expression of N-glycans derived from brain-targeting breast carcinoma cells (MDA-MB-231BR) and metastatic breast cancer cells (MDA-MB-231). Thus, the described MRM method for permethylated N-glycans enables rapid and reliable identification and quantitation of glycans derived from glycoproteins purified or present in complex biological samples.

  7. Translated Versions of Voice Handicap Index (VHI)-30 across Languages: A Systematic Review

    PubMed Central

    SEIFPANAHI, Sadegh; JALAIE, Shohreh; NIKOO, Mohammad Reza; SOBHANI-RAD, Davood

    2015-01-01

    Background: In this systematic review, the aim is to investigate different VHI-30 versions across languages with regard to their validity, reliability and translation process. Methods: Articles were extracted systematically from several of the prime databases, including Cochrane, Google Scholar, MEDLINE (via the PubMed gateway), ScienceDirect and Web of Science, and from their reference lists, using the keyword "Voice Handicap Index" with limits only on title and time of publication (from 1997 to 2014). Other exclusions (e.g., non-English papers and other versions of the VHI) were applied manually after studying the papers. To appraise the methodology of the papers, three authors used the 12-item diagnostic test checklist from the "Critical Appraisal Skills Programme" (CASP) website. After all screenings were applied, the papers that met the eligibility criteria of reporting the translation, validity and reliability processes were included in this review. Results: Twelve non-duplicate articles, covering different languages, remained. All of them reported the translation method, validity and reliability, which are presented in detail in this review. Conclusion: The preferred translation method in the gathered papers was mainly Brislin's classic back-translation model (1970); although the procedure was often not performed completely, it was more prominent than other translation procedures. High test-retest reliability, high internal consistency and moderate construct validity across languages in all three VHI-30 domains confirm the applicability of translated VHI-30 versions across languages. PMID:26056664

  8. Orbital-optimized third-order Møller-Plesset perturbation theory and its spin-component and spin-opposite scaled variants: application to symmetry breaking problems.

    PubMed

    Bozkaya, Uğur

    2011-12-14

    In this research, orbital-optimized third-order Møller-Plesset perturbation theory (OMP3) and its spin-component and spin-opposite scaled variants (SCS-OMP3 and SOS-OMP3) are introduced. Using a Lagrangian-based approach, an efficient, quadratically convergent algorithm for variational optimization of the molecular orbitals (MOs) for third-order Møller-Plesset perturbation theory (MP3) is presented. Explicit equations for response density matrices, the MO gradient, and Hessian are reported in spin-orbital form. The OMP3, SCS-OMP3, and SOS-OMP3 approaches are compared with the second-order Møller-Plesset perturbation theory (MP2), MP3, coupled-cluster doubles (CCD), optimized-doubles (OD), and coupled-cluster singles and doubles (CCSD) methods. All these methods are applied to O(4)(+), O(3), and seven diatomic molecules. Results demonstrate that OMP3 and its variants provide significantly better vibrational frequencies than MP3, CCSD, and OD for the molecules where symmetry-breaking problems are observed. For O(4)(+), the OMP3 prediction of 1343 cm(-1) for the ω(6) (b(3u)) mode, where symmetry breaking appears, is even better than those of presumably more reliable methods such as Brueckner doubles (BD), 1194 cm(-1), and OD, 1193 cm(-1) (the experimental value is 1320 cm(-1)). For O(3), the predictions of SCS-OMP3 (1143 cm(-1)) and SOS-OMP3 (1165 cm(-1)) are remarkably better than that of the more robust OD method (1282 cm(-1)); the experimental value is 1089 cm(-1). For the seven diatomics, again the SCS-OMP3 and SOS-OMP3 methods provide the lowest average errors, ∣Δω(e)∣ = 44 and ∣Δω(e)∣ = 35 cm(-1), respectively, while for OD, ∣Δω(e)∣ = 161 cm(-1), and for CCSD, ∣Δω(e)∣ = 106 cm(-1). Hence, OMP3 and especially its spin-scaled variants perform much better than the MP3, CCSD, and more robust OD approaches for the considered test cases. Therefore, considering both computational cost and reliability, SCS-OMP3 and SOS-OMP3 appear to be the best methods for symmetry-breaking cases, based on the present application results. The OMP3 method offers certain advantages: it provides reliable vibrational frequencies in cases of symmetry breaking, especially with spin-scaling tricks; its analytic gradients are easier to compute since there is no need to solve the coupled-perturbed equations for the orbital response; and the computation of one-electron properties is easier because there is no response contribution to the particle density matrices. OMP3 has further advantages over standard MP3, making it promising for excited state properties via linear response theory. © 2011 American Institute of Physics

  9. Finite and spectral cell method for wave propagation in heterogeneous materials

    NASA Astrophysics Data System (ADS)

    Joulaian, Meysam; Duczek, Sascha; Gabbert, Ulrich; Düster, Alexander

    2014-09-01

    In the current paper we present a fast, reliable technique for simulating wave propagation in complex structures made of heterogeneous materials. The proposed approach, the spectral cell method, is a combination of the finite cell method and the spectral element method that significantly lowers preprocessing and computational expenditure. The spectral cell method takes advantage of explicit time-integration schemes coupled with a diagonal mass matrix to reduce the time spent solving the equation system. By employing a fictitious domain approach, this method also helps to eliminate some of the difficulties associated with mesh generation. Besides introducing a proper, specific mass lumping technique, we also study the performance of the low-order and high-order versions of this approach on several numerical examples. Our results show that the high-order version of the spectral cell method requires less memory storage and less CPU time than the other possible versions when combined with explicit time-integration algorithms. Moreover, as the implementation of the proposed method in available finite element programs is straightforward, these properties turn the method into a viable tool for practical applications such as structural health monitoring [1-3], quantitative ultrasound applications [4], and the active control of vibrations and noise [5, 6].

  10. 78 FR 44557 - Revision to Transmission Vegetation Management Reliability Standard; Notice of Compliance Filing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-24

    ... Transmission Vegetation Management Reliability Standard; Notice of Compliance Filing Take notice that on July 12, 2013, the North American Electric Reliability Corporation (NERC), pursuant to Order No. 777 \\1... Reliability Standard FAC-003-2 to its Web site. \\1\\ Revisions to Reliability Standard for Transmission...

  11. Approach for discrimination and quantification of electroactive species: kinetics difference revealed by higher harmonics of Fourier transformed sinusoidal voltammetry.

    PubMed

    Fang, Yishan; Huang, Xinjian; Wang, Lishi

    2015-01-06

    Discrimination and quantification of electroactive species are traditionally realized via a potential difference, which is mainly determined by thermodynamics. However, the resolution of this approach is limited to tens of millivolts. In this paper, we describe an application of Fourier transformed sinusoidal voltammetry (FT-SV) that provides a new approach for the discrimination and quantitative evaluation of electroactive species, especially thermodynamically similar ones. Numerical simulation indicates that electron transfer kinetics differences between electroactive species can be revealed by the phase angle of the higher order harmonics of FT-SV, and that the difference can be amplified order by order. Thus, even a very subtle kinetics difference can be amplified until it becomes distinguishable at a certain order of harmonics. This method was verified with structurally similar ferrocene derivatives chosen as model systems. Although these molecules have very close redox potentials (<10 mV), discrimination and selective detection were achieved using harmonics as high as the thirteenth. The results demonstrate the feasibility and reliability of the method. They also imply that the combination of the traditional thermodynamic method and this kinetics-based method can form a two-dimensionally resolved detection method, with the potential to extend the resolution of voltammetric techniques to a new level.
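
    The harmonic readout at the heart of FT-SV can be sketched as ordinary signal processing: drive with a sine, Fourier-transform the response, and read amplitude and phase at each harmonic. The nonlinear "cell" below is a toy stand-in for the electrochemical response, not a model of the paper's system.

    # Sketch of the harmonic analysis behind FT-SV: FFT the response to a
    # sinusoidal drive and read phase at the k-th harmonic (toy cell model).
    import numpy as np

    fs, f0, T = 10000.0, 10.0, 2.0                 # sampling rate, drive freq, duration
    t = np.arange(0, T, 1 / fs)
    drive = 0.1 * np.sin(2 * np.pi * f0 * t)       # sinusoidal potential perturbation

    # Toy nonlinear response: an odd nonlinearity generates higher harmonics
    # whose phases carry the kinetics information exploited by the method.
    response = np.tanh(25 * drive) + 0.001 * np.random.default_rng(0).normal(size=t.size)

    spec = np.fft.rfft(response)
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    for k in (1, 3, 5, 7):                          # odd harmonics of the drive
        idx = np.argmin(np.abs(freqs - k * f0))
        amp, phase = np.abs(spec[idx]), np.angle(spec[idx])
        print(f"harmonic {k}: |A|={amp:.1f}, phase={phase:+.3f} rad")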

  12. Method of implementing digital phase-locked loops

    NASA Technical Reports Server (NTRS)

    Stephens, Scott A. (Inventor); Thomas, Jess Brooks, Jr. (Inventor)

    1993-01-01

    In a new formulation for digital phase-locked loops, loop-filter constants are determined from loop roots that can each be selectively placed in the s-plane on the basis of a new set of parameters, each with simple and direct physical meaning in terms of loop noise bandwidth, root-specific decay rate, or root-specific damping. Loops of first to fourth order are treated in the continuous-update approximation (BLT yields 0) and in a discrete-update formulation with arbitrary BLT. Deficiencies of the continuous-update approximation in large-BLT applications are avoided in the new discrete-update formulation. A new method for direct, transient-free acquisition with third- and fourth-order loops can improve the versatility and reliability of acquisition with such loops.
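
    A minimal discrete-update second-order loop in this spirit: choose a loop noise bandwidth and damping, derive proportional and integral gains from the textbook second-order relations (valid for small BLT), and iterate the update. This is an illustrative sketch, not the paper's exact parameterization.

    # Discrete-update second-order PLL sketch: gains from noise bandwidth Bl and
    # damping zeta via textbook relations (small-BlT regime assumed; this
    # illustrates the idea, not the paper's discrete-update formulation).
    import numpy as np

    T = 1e-3                      # update interval [s]
    Bl, zeta = 10.0, 0.707        # loop noise bandwidth [Hz], damping
    wn = Bl / (zeta / 2 + 1 / (8 * zeta))        # from Bl = wn*(zeta + 1/(4*zeta))/2
    g1, g2 = 2 * zeta * wn * T, (wn * T) ** 2    # proportional / integral gains

    f_in = 5.0                    # input carries a 5 Hz frequency offset
    phi_in = 2 * np.pi * f_in * np.arange(2000) * T

    phi_nco, acc = 0.0, 0.0
    err = 0.0
    for phi in phi_in:
        err = np.angle(np.exp(1j * (phi - phi_nco)))   # wrapped phase error
        acc += g2 * err                                # integral branch (tracks frequency)
        phi_nco += g1 * err + acc                      # NCO phase update

    print("final phase error: %.2e rad" % err)         # loop pulls in the offset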

  13. The effect of the stability threshold on time to stabilization and its reliability following a single leg drop jump landing.

    PubMed

    Fransz, Duncan P; Huurnink, Arnold; de Boode, Vosse A; Kingma, Idsart; van Dieën, Jaap H

    2016-02-08

    We aimed to provide insight into how threshold selection affects time to stabilization (TTS) and its reliability, to support the selection of methods to determine TTS. Eighty-two elite youth soccer players performed six single leg drop jump landings. The TTS was calculated based on four processed signals: the raw ground reaction force (GRF) signal (RAW), a moving root mean square window (RMS), a sequential average (SA), or an unbounded third order polynomial fit (TOP). For each trial and processing method, a wide range of thresholds was applied. Per threshold, the reliability of the TTS was assessed through intra-class correlation coefficients (ICCs) for the vertical (V), anteroposterior (AP) and mediolateral (ML) directions of force. Low thresholds resulted in a sharp increase of TTS values and of the percentage of trials in which the TTS exceeded the trial duration. The TTS and ICC were essentially similar for RAW and RMS in all directions; ICCs were mostly 'insufficient' (<0.4) to 'fair' (0.4-0.6) for the entire range of thresholds. The SA signals resulted in the most stable ICC values across thresholds, being 'substantial' (>0.8) for V, and 'moderate' (0.6-0.8) for AP and ML. The ICCs for TOP were 'substantial' for V, 'moderate' for AP, and 'fair' for ML. The present findings did not reveal an optimal threshold to assess TTS in elite youth soccer players following a single leg drop jump landing. Irrespective of threshold selection, the SA and TOP methods yielded sufficiently reliable TTS values, while for RAW and RMS the reliability was insufficient to differentiate between players. Copyright © 2016 Elsevier Ltd. All rights reserved.
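
    A sketch of how a TTS value falls out of one of the tested processing chains (sequential average plus a threshold band around the steady-state value); the GRF trace and the ±5% band are synthetic assumptions for illustration, and conventions differ between papers.

    # Time-to-stabilization via the sequential-average processing (synthetic
    # GRF trace; band width and steady-state convention are assumptions).
    import numpy as np

    fs = 1000.0                                   # sampling rate [Hz]
    t = np.arange(0, 3, 1 / fs)
    rng = np.random.default_rng(2)
    # Synthetic vertical GRF (body weights): damped oscillation onto a steady value.
    grf = 1.0 + 1.5 * np.exp(-3 * t) * np.sin(20 * t) + 0.02 * rng.normal(size=t.size)

    seq_avg = np.cumsum(grf) / np.arange(1, t.size + 1)     # sequential average
    steady = grf[int(2.5 * fs):].mean()                     # steady-state estimate

    outside = np.where(np.abs(seq_avg - steady) > 0.05)[0]  # samples outside the band
    if outside.size == 0:
        tts = 0.0                                 # stable from the start
    elif outside[-1] == t.size - 1:
        tts = np.nan                              # never stabilizes within the trial
    else:
        tts = t[outside[-1] + 1]                  # first instant it stays in the band
    print("TTS = %.3f s" % tts)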

  14. Model for the prediction of subsurface strata movement due to underground mining

    NASA Astrophysics Data System (ADS)

    Cheng, Jianwei; Liu, Fangyuan; Li, Siyuan

    2017-12-01

    The problem of ground control stability due to large underground mining operations is often associated with large movements and deformations of strata. It is a complicated problem and can induce severe safety or environmental hazards either at the surface or within the strata. Hence, knowing the subsurface strata movement characteristics and making subsidence predictions in advance are desirable for mining engineers to estimate any damage likely to affect the ground surface or subsurface strata. Based on previous research findings, this paper broadly applies a surface subsidence prediction model based on the influence function method to subsurface strata in order to predict subsurface stratum movement. A step-wise prediction model is proposed to investigate the movement of underground strata. The model involves a dynamic iterative calculation process to derive the movements and deformations for each stratum layer; modifications to the influence function method are also made for more precise calculations. The critical subsidence parameters, incorporating stratum mechanical properties and the spatial relationship to the mining level, are thoroughly considered with the purpose of improving the reliability of input parameters. Such research efforts can be very helpful to mining engineers in understanding the moving behavior of all strata above underground excavations, and assist in making damage mitigation plans. In order to check the reliability of the model, two methods are carried out and cross-validated. One uses borehole TV monitor recordings to identify the progress of subsurface stratum bedding and caving in a coal mine; the other is physical modelling of the subsidence in underground strata. The results of these two methods are compared with the theoretical results calculated by the proposed mathematical model. The results agree well with each other, validating the acceptable accuracy and reliability of the proposed prediction model.
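
    The superposition at the core of influence-function subsidence models is easy to sketch in one dimension, using a Knothe-type Gaussian influence function; the subsidence factor, seam thickness, and radius of influence below are illustrative values, not the paper's calibration.

    # 1D influence-function superposition for subsidence prediction (Knothe-type
    # Gaussian influence function; all parameter values are illustrative).
    import numpy as np

    a, m = 0.8, 2.0                 # subsidence factor, extracted seam thickness [m]
    r = 100.0                       # radius of major influence [m]
    s_max = a * m                   # maximum possible subsidence

    panel = np.linspace(-150.0, 150.0, 601)        # extracted panel, discretized [m]
    dxi = panel[1] - panel[0]
    x = np.linspace(-400.0, 400.0, 801)            # surface points [m]

    # Knothe influence function: f(d) = (1/r) * exp(-pi * d^2 / r^2), unit integral.
    def influence(d):
        return np.exp(-np.pi * d**2 / r**2) / r

    # Superpose the influence of every extracted element at each surface point.
    subsidence = np.array([s_max * np.sum(influence(xi - panel)) * dxi for xi in x])
    print("max predicted subsidence: %.2f m" % subsidence.max())   # ~s_max at center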

  15. Development and psychometric evaluation of the Premarital Sexual Behavior Assessment Scale for Young Women (PSAS-YW): an exploratory mixed method study

    PubMed Central

    2014-01-01

    Background: Premarital sexual behaviors are an important issue for women's health. The present study was designed to develop and examine the psychometric properties of a scale to identify young women who are at greater risk of premarital sexual behavior. Method: This was an exploratory mixed method investigation conducted in two phases. In the first phase, qualitative methods (focus group discussions and individual interviews) were applied to generate items and develop the questionnaire. In the second phase, the psychometric properties (validity and reliability) of the questionnaire were assessed. Results: In the first phase, an item pool containing 53 statements related to premarital sexual behavior was generated. In the second phase, item reduction was applied and the final version of the questionnaire, containing 26 items, was developed. The psychometric properties of this final version were assessed, and the results showed that the instrument has a good structure and reliability. The results from exploratory factor analysis indicated a 5-factor solution for the instrument that jointly accounted for 57.4% of the variance observed. The Cronbach's alpha coefficient for the instrument was 0.87. Conclusion: This study provided a valid and reliable scale to identify premarital sexual behavior in young women. Assessment of premarital sexual behavior might help to improve women's sexual abstinence. PMID:24924696

  16. Specific algorithm method of scoring the Clock Drawing Test applied in cognitively normal elderly

    PubMed Central

    Mendes-Santos, Liana Chaves; Mograbi, Daniel; Spenciere, Bárbara; Charchat-Fichman, Helenice

    2015-01-01

    The Clock Drawing Test (CDT) is an inexpensive, fast and easily administered measure of cognitive function, especially in the elderly. This instrument is a popular clinical tool widely used in screening for cognitive disorders and dementia. The CDT can be applied in different ways, and scoring procedures also vary. Objective The aims of this study were to analyze the performance of the elderly on the CDT and to evaluate the inter-rater reliability of the CDT scored using a specific algorithm adapted from Sunderland et al. (1989). Methods We analyzed the CDT of 100 cognitively normal elderly adults aged 60 years or older. The CDT ("free-drawn") and Mini-Mental State Examination (MMSE) were administered to all participants. Six independent examiners scored the CDT of 30 participants to evaluate inter-rater reliability. Results and Conclusion A score of 5 on the proposed algorithm ("Numbers in reverse order or concentrated"), equivalent to 5 points on the original Sunderland scale, was the most frequent (53.5%). The CDT specific algorithm method used had high inter-rater reliability (p<0.01), and mean scores ranged from 5.06 to 5.96. The high frequency of an overall score of 5 points may suggest the need to create more nuanced evaluation criteria, which are sensitive to differences in levels of impairment in visuoconstructive and executive abilities during aging. PMID:29213954

  17. Defect recognition in CFRP components using various NDT methods within a smart manufacturing process

    NASA Astrophysics Data System (ADS)

    Schumacher, David; Meyendorf, Norbert; Hakim, Issa; Ewert, Uwe

    2018-04-01

    The manufacturing process of carbon fiber reinforced polymer (CFRP) components is playing an increasingly significant role given the growing amount of CFRP used in industry today. Monitoring the manufacturing process, and hence the reliability of the manufactured products, is one of the major challenges to be faced in the near future. Common defects arising during the manufacturing process include porosity and voids, which may lead to delaminations during operation and under load. Finding irregularities and classifying them as possible defects at an early stage of the manufacturing process is of high importance for the safety and reliability of the finished products, and has significant economic impact as well. In this study we compare various NDT methods that were applied to similar CFRP laminate samples in order to detect and characterize regions of defective volume. Besides ultrasound, thermography and eddy current, different X-ray methods such as radiography, laminography and computed tomography are used to investigate the samples. These methods are compared with the intention of evaluating their capability to reliably detect and characterize defective volume. Beyond the detection and evaluation of defects, we also investigate possibilities to combine various NDT methods within a smart manufacturing process in which the decision of which method to apply is inherent in the process. Is it possible to design an in-line or at-line testing process which can recognize defects reliably while reducing testing time and costs? This study aims to highlight opportunities for designing a smart NDT process synchronized to production, based on the concepts of smart production (Industry 4.0). A set of defective CFRP laminate samples and different NDT methods were used to demonstrate how effectively defects are recognized and how communication between interconnected NDT sensors and the manufacturing process could be organized.

  18. Contamination-Free Manufacturing: Tool Component Qualification, Verification and Correlation with Wafers

    NASA Astrophysics Data System (ADS)

    Tan, Samantha H.; Chen, Ning; Liu, Shi; Wang, Kefei

    2003-09-01

    As part of the semiconductor industry "contamination-free manufacturing" effort, significant emphasis has been placed on reducing potential sources of contamination from process equipment and process equipment components. Process tools contain process chambers and components that are exposed to the process environment or process chemistry and in some cases are in direct contact with production wafers. Any contamination from these sources must be controlled or eliminated in order to maintain high process yields, device performance, and device reliability. This paper discusses new nondestructive analytical methods for quantitative measurement of the cleanliness of metal, quartz, polysilicon and ceramic components that are used in process equipment tools. The goal of these new procedures is to measure the effectiveness of cleaning procedures and to verify whether a tool component part is sufficiently clean for installation and subsequent routine use in the manufacturing line. These procedures provide a reliable "qualification method" for tool component certification and also provide a routine quality control method for reliable operation of cleaning facilities. Cost advantages to wafer manufacturing include higher yields due to improved process cleanliness and elimination of yield loss and downtime resulting from the installation of "bad" components in process tools. We also discuss a representative example in which wafer contamination was traced to a specific process tool component.

  19. Optical detection of metastatic cancer cells using a scanned laser pico-projection system

    NASA Astrophysics Data System (ADS)

    Huang, Chih-Ling; Chiu, Wen-Tai; Lo, Yu-Lung; Chuang, Chin-Ho; Chen, Yu-Bin; Chang, Shu-Jing; Ke, Tung-Ting; Cheng, Hung-Chi; Wu, Hua-Lin

    2015-03-01

    Metastasis is responsible for 90% of all cancer-related deaths in humans. As a result, reliable techniques for detecting metastatic cells are urgently required. Although various techniques have been proposed for metastasis detection, they are generally capable of detecting metastatic cells only once migration has already occurred. Accordingly, the present study proposes an optical method for physical characterization of metastatic cancer cells using a scanned laser pico-projection system (SLPP). The validity of the proposed method is demonstrated using five pairs of cancer cell lines and two pairs of non-cancer cell lines treated by IPTG induction in order to mimic normal cells with an overexpression of an oncogene. The results show that for all of the considered cell lines, the SLPP speckle contrast of the high-metastatic cells is significantly higher than that of the low-metastatic cells. As a result, the speckle contrast measurement provides a reliable means of distinguishing quantitatively between low- and high-metastatic cells of the same origin. Compared to existing metastasis detection methods, the proposed SLPP approach has many advantages, including a higher throughput, a lower cost, a larger sample size and a more reliable diagnostic performance. As a result, it provides a highly promising solution for physical characterization of metastatic cancer cells in vitro.
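
    The discriminating statistic itself is compact: speckle contrast is the ratio of the standard deviation to the mean of the recorded intensity. A generic sketch of that definition (not the authors' acquisition pipeline):

```python
import numpy as np

def speckle_contrast(intensity):
    """Speckle contrast C = sigma_I / <I> of a laser speckle image; a higher C
    for high-metastatic cells is the discriminating cue reported above."""
    I = np.asarray(intensity, dtype=float)
    return I.std() / I.mean()
```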

  20. Scheduling and Pricing for Expected Ramp Capability in Real-Time Power Markets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ela, Erik; O'Malley, Mark

    2016-05-01

    Higher variable renewable generation penetrations are occurring throughout the world on different power systems. These resources increase the variability and uncertainty on the system which must be accommodated by an increase in the flexibility of the system resources in order to maintain reliability. Many scheduling strategies have been discussed and introduced to ensure that this flexibility is available at multiple timescales. To meet variability, that is, the expected changes in system conditions, two recent strategies have been introduced: time-coupled multi-period market clearing models and the incorporation of ramp capability constraints. To appropriately evaluate these methods, it is important to assess both efficiency and reliability. But it is also important to assess the incentive structure to ensure that resources asked to perform in different ways have the proper incentives to follow these directions, which is a step often ignored in simulation studies. We find that there are advantages and disadvantages to both approaches. We also find that look-ahead horizon length in multi-period market models can impact incentives. This paper proposes scheduling and pricing methods that ensure expected ramps are met reliably, efficiently, and with associated prices based on true marginal costs that incentivize resources to do as directed by the market. Case studies show improvements of the new method.
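
    The core tension the paper studies, between ramp capability and prices, can be seen in a toy two-period dispatch: a ramp limit on the cheap unit forces the expensive unit online, and the effect shows up in the power-balance duals. All numbers below are invented for illustration; this is not the paper's market model.

```python
import numpy as np
from scipy.optimize import linprog

# Toy 2-unit, 2-period dispatch. Variables: [g1_t1, g2_t1, g1_t2, g2_t2].
c = [20.0, 50.0, 20.0, 50.0]              # $/MWh offers (invented)
A_eq = [[1, 1, 0, 0], [0, 0, 1, 1]]       # power balance in each period
b_eq = [60.0, 160.0]                      # demand ramps 100 MW between periods
A_ub = [[-1, 0, 1, 0], [1, 0, -1, 0]]     # |g1_t2 - g1_t1| <= 40 MW ramp limit
b_ub = [40.0, 40.0]
bounds = [(0, 120), (0, 120)] * 2         # unit capacities

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
print(res.x)                  # [60, 0, 100, 60]: the ramp limit binds in t2
print(res.eqlin.marginals)    # balance duals: the period-1 price reflects the
                              # value of positioning unit 1 for the coming ramp
```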

  1. Transcultural adaptation and psychometric properties of Spanish version of Pregnancy Physical Activity Questionnaire: the PregnActive project.

    PubMed

    Oviedo-Caro, Miguel Ángel; Bueno-Antequera, Javier; Munguía-Izquierdo, Diego

    2018-03-19

    To transculturally adapt the Spanish version of Pregnancy Physical Activity Questionnaire (PPAQ) analyzing its psychometric properties. The PPAQ was transculturally adapted into Spanish. Test-retest reliability was evaluated in a subsample of 109 pregnant women. The validity was evaluated in a sample of 208 pregnant women who answered the questionnaire and wore the multi-sensor monitor for 7 valid days. The reliability (intraclass correlation coefficient), concordance (concordance correlation coefficient), correlation (Pearson correlation coefficient), agreement (Bland-Altman plots) and relative activity levels (Jonckheere-Terpstra test) between both administrations and methods were examined. Intraclass correlation coefficients between both administrations were good for all categories except transportation. A low but significant correlation was found for total activity (light and above) whereas no correlation was found for other intensities between both methods. Relative activity levels analysis showed a significant linear trend for increased total activity between both methods. Spanish version of PPAQ is a brief and easily interpretable questionnaire with good reliability and ability to rank individuals, and poor validity compared with multi-sensor monitor. The use of PPAQ provides information of pregnancy-specific activities in order to establish physical activity levels of pregnant women and adapt health promotion interventions. Copyright © 2018 SESPAS. Publicado por Elsevier España, S.L.U. All rights reserved.

  2. FW/CADIS-O: An Angle-Informed Hybrid Method for Neutron Transport

    NASA Astrophysics Data System (ADS)

    Munk, Madicken

    The development of methods for deep-penetration radiation transport is of continued importance for radiation shielding, nonproliferation, nuclear threat reduction, and medical applications. As these applications become more ubiquitous, the need for transport methods that can accurately and reliably model the systems' behavior will persist. For these types of systems, hybrid methods are often the best choice to obtain a reliable answer in a short amount of time. Hybrid methods leverage the speed and uniform uncertainty distribution of a deterministic solution to bias Monte Carlo transport and reduce the variance in the solution. At present, the Consistent Adjoint-Driven Importance Sampling (CADIS) and Forward-Weighted CADIS (FW-CADIS) hybrid methods are the gold standard by which to model systems that have deeply-penetrating radiation. They use an adjoint scalar flux to generate variance reduction parameters for Monte Carlo. However, in problems where there exists strong anisotropy in the flux, CADIS and FW-CADIS are not as effective at reducing the problem variance as they are in isotropic problems. This dissertation covers the theoretical background, implementation, and characterization of a set of angle-informed hybrid methods that can be applied to strongly anisotropic deep-penetration radiation transport problems. These methods use a forward-weighted adjoint angular flux to generate variance reduction parameters for Monte Carlo. As a result, they leverage both adjoint and contributon theory for variance reduction. They have been named CADIS-O and FW-CADIS-O. To characterize CADIS-O, several characterization problems with flux anisotropies were devised. These problems contain different physical mechanisms by which flux anisotropy is induced. Additionally, a series of novel anisotropy metrics by which to quantify flux anisotropy are used to characterize the methods beyond standard Figure of Merit (FOM) and relative error metrics. As a result, a more thorough investigation into the effects of anisotropy and the degree of anisotropy on Monte Carlo convergence is possible. The results from the characterization of CADIS-O show that it performs best in strongly anisotropic problems that have preferential particle flowpaths, but only if the flowpaths are not composed of air. Further, the characterization of the method's sensitivity to deterministic angular discretization showed that CADIS-O is less sensitive to discretization than CADIS for both quadrature order and PN order. However, more variation in the results was observed in response to changing quadrature order than PN order. Further, as a result of the forward-normalization in the O-methods, ray effect mitigation was observed in many of the characterization problems. The characterization of the CADIS-O method in this dissertation serves to outline a path forward for further hybrid methods development. In particular, the responses of the O-method to changes in quadrature order and PN order, and its ray effect mitigation, are strong indicators that the method is more resilient than its predecessors to strong anisotropies in the flux. With further method characterization, the full potential of the O-methods can be realized. The method can then be applied to geometrically complex, materially diverse problems and help to advance system modelling in deep-penetration radiation transport problems with strong anisotropies in the flux.
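
    For reference, the scalar CADIS construction that the O-methods extend can be written in a few lines: weight-window centers are set inversely proportional to the adjoint (importance) flux, with a consistently biased source. The arrays below are placeholders rather than output of a real deterministic solve.

```python
import numpy as np

# CADIS-style variance reduction sketch on a toy 1-D problem.
phi_adj = np.array([1e-6, 1e-4, 1e-2, 1.0])   # assumed adjoint scalar flux per cell
source = np.array([1.0, 0.0, 0.0, 0.0])       # forward source distribution

R = np.sum(source * phi_adj)        # estimated detector response
w_center = R / phi_adj              # weight-window centers ~ R / importance
biased_source = source * phi_adj / R  # consistently biased source (CADIS)

print(w_center)        # particles are split toward the detector, rouletted away
print(biased_source)
```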

  3. Reliability Testing of NASA Piezocomposite Actuators

    NASA Technical Reports Server (NTRS)

    Wilkie, W.; High, J.; Bockman, J.

    2002-01-01

    NASA Langley Research Center has developed a low-cost piezocomposite actuator which has application for controlling vibrations in large inflatable smart space structures, space telescopes, and high performance aircraft. Tests show the NASA piezocomposite device is capable of producing large, directional, in-plane strains on the order of 2000 parts-per-million peak-to-peak, with no reduction in free-strain performance to 100 million electrical cycles. This paper describes methods, measurements, and preliminary results from our reliability evaluation of the device under externally applied mechanical loads and at various operational temperatures. Tests performed to date show no net reductions in actuation amplitude while the device was moderately loaded through 10 million electrical cycles. Tests were performed at both room temperature and at the maximum operational temperature of the epoxy resin system used in manufacture of the device. Initial indications are that actuator reliability is excellent, with no actuator failures or large net reduction in actuator performance.

  4. Performance issues for iterative solvers in device simulation

    NASA Technical Reports Server (NTRS)

    Fan, Qing; Forsyth, P. A.; Mcmacken, J. R. F.; Tang, Wei-Pai

    1994-01-01

    Due to memory limitations, iterative methods have become the method of choice for large scale semiconductor device simulation. However, it is well known that these methods still suffer from reliability problems. The linear systems which appear in numerical simulation of semiconductor devices are notoriously ill-conditioned. In order to produce robust algorithms for practical problems, careful attention must be given to many implementation issues. This paper concentrates on strategies for developing robust preconditioners. In addition, effective data structures and convergence check issues are also discussed. These algorithms are compared with a standard direct sparse matrix solver on a variety of problems.

  5. Power factor regulation for household usage

    NASA Astrophysics Data System (ADS)

    Daud, Nik Ghazali Nik; Hashim, Fakroul Ridzuan; Tarmizi, Muhammad Haziq Ahmad

    2018-02-01

    Power factor regulation technology has recently drawn attention from consumers and power generation companies, so that consumers can use electricity efficiently. Controlling the power factor for efficient usage can reduce the power that must be generated to meet demand, hence reducing the greenhouse effect. This paper presents a design method for a power factor controller for household usage. There are several methods to improve the power factor; the controller presented here does so using capacitors. Total harmonic distortion has also become a major problem for the reliability of electrical appliances, and techniques to control it are discussed.

  6. Reliability testing of a portfolio assessment tool for postgraduate family medicine training in South Africa

    PubMed Central

    Mash, Bob; Derese, Anselme

    2013-01-01

    Abstract Background Competency-based education and the validity and reliability of workplace-based assessment of postgraduate trainees have received increasing attention worldwide. Family medicine was recognised as a speciality in South Africa six years ago and a satisfactory portfolio of learning is a prerequisite to sit the national exit exam. A massive scaling up of the number of family physicians is needed in order to meet the health needs of the country. Aim The aim of this study was to develop a reliable, robust and feasible portfolio assessment tool (PAT) for South Africa. Methods Six raters each rated nine portfolios from the Stellenbosch University programme, using the PAT, to test for inter-rater reliability. This rating was repeated three months later to determine test–retest reliability. Following initial analysis and feedback the PAT was modified and the inter-rater reliability again assessed on nine new portfolios. An acceptable intra-class correlation was considered to be > 0.80. Results The total score was found to be reliable, with a coefficient of 0.92. For test–retest reliability, the difference in mean total score was 1.7%, which was not statistically significant. Amongst the subsections, only assessment of the educational meetings and the logbook showed reliability coefficients > 0.80. Conclusion This was the first attempt to develop a reliable, robust and feasible national portfolio assessment tool to assess postgraduate family medicine training in the South African context. The tool was reliable for the total score, but the low reliability of several sections in the PAT helped us to develop 12 recommendations regarding the use of the portfolio, the design of the PAT and the training of raters.

  7. Modified Inverse First Order Reliability Method (I-FORM) for Predicting Extreme Sea States.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eckert-Gallup, Aubrey Celia; Sallaberry, Cedric Jean-Marie; Dallman, Ann Renee

    Environmental contours describing extreme sea states are generated as the input for numerical or physical model simulations as part of the standard current practice for designing marine structures to survive extreme sea states. Such environmental contours are characterized by combinations of significant wave height (Hs) and energy period (Te) values calculated for a given recurrence interval using a set of data based on hindcast simulations or buoy observations over a sufficient period of record. The use of the inverse first-order reliability method (IFORM) is standard design practice for generating environmental contours. In this paper, the traditional application of the IFORM to generating environmental contours representing extreme sea states is described in detail and its merits and drawbacks are assessed. The application of additional methods for analyzing sea state data, including the use of principal component analysis (PCA) to create an uncorrelated representation of the data under consideration, is proposed. A reexamination of the components of the IFORM application to the problem at hand, including the use of new distribution fitting techniques, is shown to contribute to the development of more accurate and reasonable representations of extreme sea states for use in survivability analysis for marine structures. Keywords: Inverse FORM, Principal Component Analysis, Environmental Contours, Extreme Sea State Characterization, Wave Energy Converters
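
    In outline, the traditional IFORM construction maps a circle of radius beta in standard-normal space through the fitted joint model of (Hs, Te). The sketch below assumes an illustrative Weibull marginal for Hs and a lognormal conditional for Te; real contours use distributions fitted to hindcast or buoy data, which is precisely the step the paper reexamines.

```python
import numpy as np
from scipy.stats import norm, weibull_min, lognorm

# Radius of the IFORM circle for a 100-year contour built from 1-hour sea states.
p_exceed = 1.0 / (100 * 365.25 * 24)
beta = norm.ppf(1.0 - p_exceed)

theta = np.linspace(0.0, 2.0 * np.pi, 361)
u1, u2 = beta * np.cos(theta), beta * np.sin(theta)

# Assumed joint model (illustrative stand-ins for a fitted model):
# Hs ~ Weibull; Te | Hs ~ lognormal with an Hs-dependent median.
hs = weibull_min.ppf(norm.cdf(u1), c=1.5, scale=2.8)
te = lognorm.ppf(norm.cdf(u2), s=0.12,
                 scale=np.exp(1.0 + 0.55 * np.log(hs + 1.0)))
# (hs, te) now traces the environmental contour used in survivability checks.
```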

  8. Development of Advanced Verification and Validation Procedures and Tools for the Certification of Learning Systems in Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Jacklin, Stephen; Schumann, Johann; Gupta, Pramod; Richard, Michael; Guenther, Kurt; Soares, Fola

    2005-01-01

    Adaptive control technologies that incorporate learning algorithms have been proposed to enable automatic flight control and vehicle recovery, autonomous flight, and to maintain vehicle performance in the face of unknown, changing, or poorly defined operating environments. In order for adaptive control systems to be used in safety-critical aerospace applications, they must be proven to be highly safe and reliable. Rigorous methods for adaptive software verification and validation must be developed to ensure that control system software failures will not occur. Of central importance in this regard is the need to establish reliable methods that guarantee convergent learning, rapid convergence (learning) rate, and algorithm stability. This paper presents the major problems of adaptive control systems that use learning to improve performance. The paper then presents the major procedures and tools presently developed or currently being developed to enable the verification, validation, and ultimate certification of these adaptive control systems. These technologies include the application of automated program analysis methods, techniques to improve the learning process, analytical methods to verify stability, methods to automatically synthesize code, simulation and test methods, and tools to provide on-line software assurance.

  9. Mallard age and sex determination from wings

    USGS Publications Warehouse

    Carney, S.M.; Geis, A.D.

    1960-01-01

    This paper describes characters on the wing plumage of the mallard that indicate age and sex. A key outlines a logical order in which to check age and sex characters on wings. This method was tested and found to be more than 95 percent reliable, although it was found that considerable practice and training with known-age specimens was required to achieve this level of accuracy....The implications of this technique and the sampling procedure it permits are discussed. Wing collections could provide information on production, and, if coupled with a banding program could permit seasonal population estimates to be calculated. In addition, representative samples of wings would provide data to check the reliability of several other waterfowl surveys.

  10. Automation is an Effective Way to Improve Quality of Verification (Calibration) of Measuring Instruments

    NASA Astrophysics Data System (ADS)

    Golobokov, M.; Danilevich, S.

    2018-04-01

    In order to assess calibration reliability and automate such assessment, procedures for data collection and a simulation study of a thermal imager calibration procedure have been elaborated. The existing calibration techniques do not always provide high reliability. A new method for analyzing existing calibration techniques and developing new, efficient ones has been suggested and tested. A type of software has been studied that automatically generates instrument calibration reports, monitors their proper configuration, processes measurement results and assesses instrument validity. The use of such software reduces the man-hours spent finalizing calibration data by a factor of 2 to 5 and eliminates a whole set of typical operator errors.

  11. Face Liveness Detection Using Defocus

    PubMed Central

    Kim, Sooyeon; Ban, Yuseok; Lee, Sangyoun

    2015-01-01

    In order to develop security systems for identity authentication, face recognition (FR) technology has been applied. One of the main problems of applying FR technology is that the systems are especially vulnerable to attacks with spoofing faces (e.g., 2D pictures). To defend against these attacks and to enhance the reliability of FR systems, many anti-spoofing approaches have been developed recently. In this paper, we propose a method for face liveness detection using the effect of defocus. From two images sequentially taken at different focuses, three features, focus, power histogram and gradient location and orientation histogram (GLOH), are extracted. Afterwards, we detect forged faces through the feature-level fusion approach. For reliable performance verification, we develop two databases with a handheld digital camera and a webcam. The proposed method achieves a 3.29% half total error rate (HTER) at a given depth of field (DoF) and can be extended to camera-equipped devices, like smartphones. PMID:25594594
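
    The defocus cue can be conveyed with a simple sharpness measure: a flat printed photo changes sharpness little between two focus settings, while a real 3-D face changes more. Variance of the Laplacian is used below as a stand-in focus measure; the paper's actual features (focus, power histogram, GLOH) and feature-level fusion are richer than this sketch.

```python
import numpy as np
from scipy.ndimage import laplace

def focus_measure(img):
    """Variance of the Laplacian as a stand-in sharpness measure."""
    return laplace(img.astype(float)).var()

def liveness_score(shot_near, shot_far):
    """Relative sharpness change between two focus settings; a larger change
    suggests real depth (assumed decision cue, threshold to be tuned)."""
    f1, f2 = focus_measure(shot_near), focus_measure(shot_far)
    return abs(f1 - f2) / max(f1, f2)
```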

  12. DRS: Derivational Reasoning System

    NASA Technical Reports Server (NTRS)

    Bose, Bhaskar

    1995-01-01

    The high reliability requirements for airborne systems require fault-tolerant architectures to address failures in the presence of physical faults, and the elimination of design flaws during the specification and validation phases of the design cycle. Although much progress has been made in developing methods to address physical faults, design flaws remain a serious problem. Formal methods provide a mathematical basis for removing design flaws from digital systems. DRS (Derivational Reasoning System) is a formal design tool based on advanced research in mathematical modeling and formal synthesis. The system implements a basic design algebra for synthesizing digital circuit descriptions from high level functional specifications. DRS incorporates an executable specification language, a set of correctness preserving transformations, a verification interface, and a logic synthesis interface, making it a powerful tool for realizing hardware from abstract specifications. DRS integrates recent advances in transformational reasoning, automated theorem proving and high-level CAD synthesis systems in order to provide enhanced reliability in designs with reduced time and cost.

  13. Reliability of High-Voltage Tantalum Capacitors, Parts 3 and 4

    NASA Technical Reports Server (NTRS)

    Teverovsky, Alexander

    2010-01-01

    The Weibull grading test is a powerful technique that allows selection and reliability rating of solid tantalum capacitors for military and space applications. However, inaccuracies in the existing method and inadequate acceleration factors can result in significant errors, up to three orders of magnitude, in the calculated failure rate of capacitors. This paper analyzes deficiencies of the existing technique and recommends a more accurate method of calculation. A physical model presenting failures of tantalum capacitors as time-dependent dielectric breakdown is used to determine voltage and temperature acceleration factors and to select adequate Weibull grading test conditions. This model is verified by highly accelerated life testing (HALT) at different temperature and voltage conditions for three types of solid chip tantalum capacitors. It is shown that parameters of the model and acceleration factors can be calculated using a general log-linear relationship for the characteristic life with two stress levels.
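
    The closing sentence can be made concrete: under a general log-linear life-stress model, ln(eta) = a0 + a1/T + a2*ln(V), the acceleration factor between two stress conditions follows from the coefficient differences. The coefficients below are hypothetical, not the paper's fitted values.

```python
import numpy as np

# General log-linear life-stress model with two stresses (temperature, voltage).
a1, a2 = 8000.0, -12.0   # hypothetical fitted coefficients

def acceleration_factor(T_use, V_use, T_test, V_test):
    """eta(use) / eta(test); temperatures in kelvin, voltages as fractions
    of rated voltage. a0 cancels in the ratio."""
    return np.exp(a1 * (1.0 / T_use - 1.0 / T_test)) * (V_use / V_test) ** a2

# 85 C at rated voltage vs a HALT condition of 125 C at 1.5x rated voltage:
print(acceleration_factor(358.15, 1.0, 398.15, 1.5))   # ~1.2e3 for these inputs
```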

  14. 78 FR 22773 - Revisions to Reliability Standard for Transmission Vegetation Management; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-17

    ...; Order No. 777] Revisions to Reliability Standard for Transmission Vegetation Management; Correction... other requirements the North American Electric Reliability Corporation (NERC) needs to submit when modifying certain Reliability Standards. DATES: Effective on May 28, 2013. FOR FURTHER INFORMATION CONTACT...

  15. Retest imaging of [11C]NOP-1A binding to nociceptin/orphanin FQ peptide (NOP) receptors in the brain of healthy humans.

    PubMed

    Lohith, Talakad G; Zoghbi, Sami S; Morse, Cheryl L; Araneta, Maria D Ferraris; Barth, Vanessa N; Goebl, Nancy A; Tauscher, Johannes T; Pike, Victor W; Innis, Robert B; Fujita, Masahiro

    2014-02-15

    [(11)C]NOP-1A is a novel high-affinity PET ligand for imaging nociceptin/orphanin FQ peptide (NOP) receptors. Here, we report reproducibility and reliability measures of binding parameter estimates for [(11)C]NOP-1A binding in the brain of healthy humans. After intravenous injection of [(11)C]NOP-1A, PET scans were conducted twice on eleven healthy volunteers on the same (10/11 subjects) or different (1/11 subjects) days. Subjects underwent serial sampling of radial arterial blood to measure parent radioligand concentrations. Distribution volume (VT; a measure of receptor density) was determined by compartmental (one- and two-tissue) modeling in large regions and by simpler regression methods (graphical Logan and bilinear MA1) in both large regions and voxel data. Retest variability and the intraclass correlation coefficient (ICC) of VT were determined as measures of reproducibility and reliability, respectively. Regional [(11)C]NOP-1A uptake in the brain was high, with a peak radioactivity concentration of 4-7 SUV (standardized uptake value) and a rank order of putamen>cingulate cortex>cerebellum. Brain time-activity curves were fitted well in 10 of 11 subjects by an unconstrained two-tissue compartment model. The retest variability of VT was moderately good across brain regions except the cerebellum, and was similar across different modeling methods, averaging 12% for large regions and 14% for voxel-based methods. The retest reliability of VT was also moderately good in most brain regions, except the thalamus and cerebellum, and was similar across different modeling methods, averaging 0.46 for large regions and 0.48 for voxels having gray matter probability >20%. The lowest retest variability and highest retest reliability of VT were achieved by compartmental modeling for large regions, and by the parametric Logan method for voxel-based methods. The moderately good reproducibility and reliability of VT make [(11)C]NOP-1A a useful PET ligand for comparing NOP receptor binding between different subject groups or under different conditions in the same subject. Copyright © 2013. Published by Elsevier Inc.
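
    The two headline statistics can be reproduced per region from paired VT estimates. The sketch below uses the usual PET test-retest definitions (absolute variability and a one-way ICC for two measurements); these may differ in detail from the authors' exact formulas.

```python
import numpy as np

def retest_stats(vt_test, vt_retest):
    """Absolute retest variability (%) and one-way ICC for paired estimates."""
    t = np.asarray(vt_test, dtype=float)
    r = np.asarray(vt_retest, dtype=float)
    var_pct = np.mean(2.0 * np.abs(t - r) / (t + r)) * 100
    ms_between = 2.0 * ((t + r) / 2.0).var(ddof=1)   # between-subject mean square
    ms_within = np.mean((t - r) ** 2 / 2.0)          # within-subject mean square
    icc = (ms_between - ms_within) / (ms_between + ms_within)
    return var_pct, icc

print(retest_stats([12.1, 9.8, 14.3, 11.0], [11.4, 10.6, 13.1, 12.2]))
```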

  16. Factors Influencing the Reliability of the Glasgow Coma Scale: A Systematic Review.

    PubMed

    Reith, Florence Cm; Synnot, Anneliese; van den Brande, Ruben; Gruen, Russell L; Maas, Andrew Ir

    2017-06-01

    The Glasgow Coma Scale (GCS) characterizes patients with diminished consciousness. In a recent systematic review, we found overall adequate reliability across different clinical settings, but reliability estimates varied considerably between studies, and methodological quality of studies was overall poor. Identifying and understanding factors that can affect its reliability is important, in order to promote high standards for clinical use of the GCS. The aim of this systematic review was to identify factors that influence reliability and to provide an evidence base for promoting consistent and reliable application of the GCS. A comprehensive literature search was undertaken in MEDLINE, EMBASE, and CINAHL from 1974 to July 2016. Studies assessing the reliability of the GCS in adults or describing any factor that influences reliability were included. Two reviewers independently screened citations, selected full texts, and undertook data extraction and critical appraisal. Methodological quality of studies was evaluated with the consensus-based standards for the selection of health measurement instruments checklist. Data were synthesized narratively and presented in tables. Forty-one studies were included for analysis. Factors identified that may influence reliability are education and training, the level of consciousness, and type of stimuli used. Conflicting results were found for experience of the observer, the pathology causing the reduced consciousness, and intubation/sedation. No clear influence was found for the professional background of observers. Reliability of the GCS is influenced by multiple factors and as such is context dependent. This review points to the potential for improvement from training and education and standardization of assessment methods, for which recommendations are presented. Copyright © 2017 by the Congress of Neurological Surgeons.

  17. Nanocylinder arrays

    DOEpatents

    Tuominen, Mark; Schotter, Joerg; Thurn-Albrecht, Thomas; Russell, Thomas P.

    2007-03-13

    Pathways to rapid and reliable fabrication of nanocylinder arrays are provided. Simple methods are described for the production of well-ordered arrays of nanopores, nanowires, and other materials. This is accomplished by orienting copolymer films and removing a component from the film to produce nanopores, that in turn, can be filled with materials to produce the arrays. The resulting arrays can be used to produce nanoscale media, devices, and systems.

  18. Nanocylinder arrays

    DOEpatents

    Tuominen, Mark [Shutesbury, MA; Schotter, Joerg [Bielefeld, DE; Thurn-Albrecht, Thomas [Freiburg, DE; Russell, Thomas P [Amherst, MA

    2009-08-11

    Pathways to rapid and reliable fabrication of nanocylinder arrays are provided. Simple methods are described for the production of well-ordered arrays of nanopores, nanowires, and other materials. This is accomplished by orienting copolymer films and removing a component from the film to produce nanopores, that in turn, can be filled with materials to produce the arrays. The resulting arrays can be used to produce nanoscale media, devices, and systems.

  19. Curriculum Evaluation in Online Education: The Case of Teacher Candidates Preparing Online for Public Personnel Selection Examination

    ERIC Educational Resources Information Center

    Karacaoglu, Ömer Cem

    2018-01-01

    The aim of the present study is to evaluate the efficiency of an online curriculum based on the views of lecturers and students enrolled in the program. The study is mainly based on survey method. In order to collect qualitative data, interviews forms developed by the researcher were used. The reliability and validity of the interview forms were…

  20. Ethion exposure and biological monitoring in vegetable farmers.

    PubMed

    Kongtip, Pornpimol; Changfuang, Sirinnapa; Yoosook, Witaya; Chantanakul, Suttinun; Sujirarat, Dusit

    2011-03-01

    To modify the method of analysis of urinary diethyl phosphate (DEP) in order to determine the relationship between atmospheric ethion concentration and urinary DEP concentration. A cross-sectional study was conducted by collecting atmospheric ethion in the breathing zone of 28 farmers following the NIOSH 5600 method. Urine samples were also collected to analyze urinary DEP concentrations by a modified method using a Gas Chromatography-Flame Photometric Detector (FPD). The average atmospheric ethion concentration in the breathing zone of farmers was 0.036 +/- 0.018 mg/m3. The average urinary DEP in pre-shift and post-shift was 0.030 +/- 0.06 and 0.851 +/- 1.80 mg/g of creatinine respectively. The average DEP during work shifts was 0.53 +/- 0.27, and ranged from 0.12 to 1.16 mg/g of creatinine. A high correlation coefficient (r = 0.645) was found between atmospheric ethion concentrations and urinary DEP concentrations at p < 0.001. The modified method provided a reliable result and the urinary DEP during work shifts was found to be a reliable biomarker of ethion exposure.

  1. A Wavelet-based Fast Discrimination of Transformer Magnetizing Inrush Current

    NASA Astrophysics Data System (ADS)

    Kitayama, Masashi

    Recently, customers who need electricity of higher quality have been installing co-generation facilities. They can avoid voltage sags and other distribution-system disturbances by supplying electricity to important loads from their own generators. As another example, FRIENDS, a highly reliable distribution system using semiconductor switches and storage devices based on power electronics technology, has been proposed. These examples illustrate that the demand for high reliability in distribution systems is increasing. In order to realize such systems, fast relaying algorithms are indispensable. The author proposes a new method of detecting magnetizing inrush current using the discrete wavelet transform (DWT). The DWT provides the means to detect discontinuities in the current waveform. Inrush current occurs when the transformer core becomes saturated. The proposed method detects spikes in the DWT components arising from the discontinuity of the current waveform at both the beginning and the end of the inrush current. Wavelet thresholding, a wavelet-based statistical modeling technique, was applied to detect the DWT component spikes. The proposed method is verified using experimental data from a single-phase transformer and is shown to be effective.
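
    A minimal sketch of the detection step, using the PyWavelets package: decompose the sampled current, estimate the noise scale robustly, and flag finest-scale detail coefficients that exceed a threshold, in the spirit of wavelet thresholding. The wavelet choice, decomposition level and multiplier k are assumptions, not the author's settings.

```python
import numpy as np
import pywt

def inrush_spike_indices(current, wavelet="db4", level=3, k=5.0):
    """Flag discontinuities in a sampled current waveform from spikes in the
    finest-scale DWT detail coefficients."""
    coeffs = pywt.wavedec(current, wavelet, level=level)
    d1 = coeffs[-1]                                # finest-scale detail coefficients
    sigma = np.median(np.abs(d1)) / 0.6745         # robust noise estimate (MAD)
    return np.flatnonzero(np.abs(d1) > k * sigma)  # spike coefficient indices
```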

  2. GPUs, a New Tool of Acceleration in CFD: Efficiency and Reliability on Smoothed Particle Hydrodynamics Methods

    PubMed Central

    Crespo, Alejandro C.; Dominguez, Jose M.; Barreiro, Anxo; Gómez-Gesteira, Moncho; Rogers, Benedict D.

    2011-01-01

    Smoothed Particle Hydrodynamics (SPH) is a numerical method commonly used in Computational Fluid Dynamics (CFD) to simulate complex free-surface flows. Simulations with this mesh-free particle method far exceed the capacity of a single processor. In this paper, as part of a dual-functioning code for either central processing units (CPUs) or Graphics Processor Units (GPUs), a parallelisation using GPUs is presented. The GPU parallelisation technique uses the Compute Unified Device Architecture (CUDA) of nVidia devices. Simulations with more than one million particles on a single GPU card exhibit speedups of up to two orders of magnitude over using a single-core CPU. It is demonstrated that the code achieves different speedups with different CUDA-enabled GPUs. The numerical behaviour of the SPH code is validated with a standard benchmark test case of dam break flow impacting on an obstacle where good agreement with the experimental results is observed. Both the achieved speed-ups and the quantitative agreement with experiments suggest that CUDA-based GPU programming can be used in SPH methods with efficiency and reliability. PMID:21695185

  3. Reliability and validity of the Korean version of the community balance and mobility scale in patients with hemiplegia after stroke

    PubMed Central

    Lee, Kyoung-bo; Lee, Paul; Yoo, Sang-won; Kim, Young-dong

    2016-01-01

    [Purpose] The aim of this study was to translate and adapt the Community Balance and Mobility Scale (CB&M) into Korean (K-CB&M) and to verify the reliability and validity of scores obtained with Korean patients. [Subjects and Methods] A total of 16 subjects were recruited from St. Vincent’s Hospital in South Korea. At each testing session, subjects completed the K-CB&M, Berg balance scale (BBS), timed up and go test (TUG), and functional reaching test. All tests were administered by a physical therapist, and subjects completed the tests in an identical standardized order during all testing sessions. [Results] The inter- and intra-rater reliability coefficients were high for most subscores, while moderate inter-rater reliability was observed for the items “walking and looking” and “walk, look, and carry”, and moderate intra-rater reliability was observed for “forward to backward walking”. There was a positive correlation between the K-CB&M and BBS and a negative correlation between the K-CB&M and TUG in the convergent validity assessments. [Conclusion] The reliability and validity of the K-CB&M was high, suggesting that clinical practitioners treating Korean patients with hemiplegia can use this material for assessing static and dynamic balance. PMID:27630420

  4. Cross-cultural adaptation of the Portuguese version of the Patient-Generated Subjective Global Assessment.

    PubMed

    Duarte Bonini Campos, J A; Dias do Prado, C

    2012-01-01

    The cross-cultural adaptation of the Patient-Generated Subjective Global Assessment is important so it can be used with confidence in the Portuguese language. The objectives were to perform a cross-cultural adaptation of the Portuguese version of the Patient-Generated Subjective Global Assessment and to estimate its intrarater reliability. This is a validation study. Face validity was classified by 17 health professionals and 10 Portuguese language specialists. Idiomatic, semantic, cultural and conceptual equivalences were analyzed. The questionnaire was completed by 20 patients of the Amaral Carvalho Hospital (Jaú, São Paulo, Brazil) in order to verify the Comprehension Index of each item. In addition, 27 committee members classified each item as "essential", "useful, but not essential" or "not necessary", in order to calculate the Content Validity Ratio. Afterwards, this version of the questionnaire was applied twice to 62 patients of the hospital cited above. The intrarater reliability of the nutritional status analyzed by the Patient-Generated Subjective Global Assessment was estimated by Kappa statistics. The Portuguese version of the Patient-Generated Subjective Global Assessment presented 10 incomprehensible expressions. The items "a year ago weight" and "dry mouth symptom" presented the lowest Content Validity Ratio. Substantial intrarater reliability (k = 0.78, p = 0.001) was observed. The cross-cultural adaptation of the Portuguese version of the Patient-Generated Subjective Global Assessment became simple and understandable for Brazilian patients. Thus, this version of the Patient-Generated Subjective Global Assessment was considered a valid and reliable method.

  5. Reconciling Streamflow Uncertainty Estimation and River Bed Morphology Dynamics. Insights from a Probabilistic Assessment of Streamflow Uncertainties Using a Reliability Diagram

    NASA Astrophysics Data System (ADS)

    Morlot, T.; Mathevet, T.; Perret, C.; Favre Pugin, A. C.

    2014-12-01

    Streamflow uncertainty estimation has recently received large attention in the literature. A dynamic rating curve assessment method has been introduced (Morlot et al., 2014). This dynamic method makes it possible to compute a rating curve for each gauging and a continuous streamflow time-series, while calculating streamflow uncertainties. Streamflow uncertainty takes into account many sources of uncertainty (water level, rating curve interpolation and extrapolation, gauging aging, etc.) and produces an estimated distribution of streamflow for each day. In order to characterize streamflow uncertainty, a probabilistic framework has been applied to a large sample of hydrometric stations of the Division Technique Générale (DTG) of Électricité de France (EDF) hydrometric network (>250 stations) in France. A reliability diagram (Wilks, 1995) has been constructed for some stations, based on the streamflow distribution estimated for a given day and compared to a real streamflow observation estimated via a gauging. To build a reliability diagram, we computed the probability of an observed streamflow (gauging), given the streamflow distribution. The reliability diagram then allows checking that the distribution of probabilities of non-exceedance of the gaugings follows a uniform law (i.e., quantiles should be equiprobable). Given the shape of the reliability diagram, the probabilistic calibration is characterized (underdispersion, overdispersion, bias) (Thyer et al., 2009). In this paper, we present case studies where reliability diagrams have different statistical properties for different periods. Compared to our knowledge of the river bed morphology dynamics of these hydrometric stations, we show how the reliability diagram gives us invaluable information on river bed movements, like continuous digging or backfilling of the hydraulic control due to erosion or sedimentation processes. Hence, careful analysis of reliability diagrams allows reconciling statistics with long-term river bed morphology processes. This knowledge improves our real-time management of hydrometric stations, given a better characterization of erosion/sedimentation processes and of the stability of each station's hydraulic control.
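
    The construction described here amounts to a probability-integral-transform check: evaluate each gauging's probability of non-exceedance under that day's estimated streamflow distribution, then compare the empirical distribution of those probabilities with the uniform diagonal. A generic sketch, with per-day sample ensembles standing in for the estimated distributions:

```python
import numpy as np

def pit_values(gaugings, daily_ensembles):
    """Probability of non-exceedance of each gauging under that day's
    estimated streamflow distribution (here: an ensemble of samples)."""
    return np.array([np.mean(ens <= q) for q, ens in zip(gaugings, daily_ensembles)])

def reliability_curve(pit, n_points=11):
    """Empirical CDF of the PIT values; a well-calibrated uncertainty model
    follows the 1:1 diagonal (uniform law). Departures indicate bias or
    over/underdispersion."""
    qs = np.linspace(0.0, 1.0, n_points)
    return qs, np.array([np.mean(pit <= q) for q in qs])
```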

  6. Integrating Machine Learning into a Crowdsourced Model for Earthquake-Induced Damage Assessment

    NASA Technical Reports Server (NTRS)

    Rebbapragada, Umaa; Oommen, Thomas

    2011-01-01

    On January 12th, 2010, a catastrophic magnitude-7.0 earthquake devastated the country of Haiti. In the aftermath of an earthquake, it is important to rapidly assess damaged areas in order to mobilize the appropriate resources. The Haiti damage assessment effort introduced a promising model that uses crowdsourcing to map damaged areas in freely available remotely-sensed data. This paper proposes the application of machine learning methods to improve this model. Specifically, we apply work on learning from multiple, imperfect experts to the assessment of volunteer reliability, and propose the use of image segmentation to automate the detection of damaged areas. We wrap both tasks in an active learning framework in order to shift volunteer effort from mapping a full catalog of images to the generation of high-quality training data. We hypothesize that the integration of machine learning into this model improves its reliability, maintains the speed of damage assessment, and allows the model to scale to higher data volumes.
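
    One ingredient of such a framework, uncertainty (margin) sampling to route the most ambiguous image tiles to volunteers, can be sketched generically; this is an illustration of active learning, not the authors' system.

```python
import numpy as np

def margin_sample(proba, k):
    """Pick the k items the classifier is least sure about: smallest gap
    between the top-two class probabilities. proba: (n_items, n_classes)."""
    part = np.partition(proba, -2, axis=1)   # top-two probabilities per row
    margin = part[:, -1] - part[:, -2]
    return np.argsort(margin)[:k]            # indices to send to volunteers
```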

  7. Evaluation of patients with painful total hip arthroplasty using combined single photon emission tomography and conventional computerized tomography (SPECT/CT) - a comparison of semi-quantitative versus 3D volumetric quantitative measurements.

    PubMed

    Barthassat, Emilienne; Afifi, Faik; Konala, Praveen; Rasch, Helmut; Hirschmann, Michael T

    2017-05-08

    It was the primary purpose of our study to evaluate the inter- and intra-observer reliability of a standardized SPECT/CT algorithm for evaluating patients with painful primary total hip arthroplasty (THA). The secondary purpose was a comparison of semi-quantitative and 3D volumetric quantification method for assessment of bone tracer uptake (BTU) in those patients. A novel SPECT/CT localization scheme consisting of 14 femoral and 4 acetabular regions on standardized axial and coronal slices was introduced and evaluated in terms of inter- and intra-observer reliability in 37 consecutive patients with hip pain after THA. BTU for each anatomical region was assessed semi-quantitatively using a color-coded Likert type scale (0-10) and volumetrically quantified using a validated software. Two observers interpreted the SPECT/CT findings in all patients two times with six weeks interval between interpretations in random order. Semi-quantitative and quantitative measurements were compared in terms of reliability. In addition, the values were correlated using Pearson`s correlation. A factorial cluster analysis of BTU was performed to identify clinically relevant regions, which should be grouped and analysed together. The localization scheme showed high inter- and intra-observer reliabilities for all femoral and acetabular regions independent of the measurement method used (semiquantitative versus 3D volumetric quantitative measurements). A high to moderate correlation between both measurement methods was shown for the distal femur, the proximal femur and the acetabular cup. The factorial cluster analysis showed that the anatomical regions might be summarized into three distinct anatomical regions. These were the proximal femur, the distal femur and the acetabular cup region. The SPECT/CT algorithm for assessment of patients with pain after THA is highly reliable independent from the measurement method used. Three clinically relevant anatomical regions (proximal femoral, distal femoral, acetabular) were identified.

  8. Latency causes and reduction in optical metro networks

    NASA Astrophysics Data System (ADS)

    Bobrovs, Vjaceslavs; Spolitis, Sandis; Ivanovs, Girts

    2013-12-01

    The dramatic growth of transmitted information in fiber optical networks is leading to concern about network latency for high-speed, reliable services like financial transactions, telemedicine, virtual and augmented reality, surveillance, and other applications. To ensure effective latency engineering, the delay and its variability need to be accurately monitored and measured in order to control them. This paper briefly describes the causes of latency in fiber optical metro networks. Several available latency reduction techniques and solutions are also discussed, namely the usage of different chromatic dispersion compensation methods, low-latency amplifiers, optical fibers, and other network elements.
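
    For scale, the irreducible part of that latency is propagation delay, t = L*n_g/c; with a group index near 1.468 at 1550 nm (an approximate, typical value for standard single-mode fiber) this is about 4.9 µs per kilometre:

```python
C_KM_PER_S = 299792.458   # speed of light in vacuum, km/s
N_GROUP = 1.468           # approx. group index of standard SMF at 1550 nm

delay_us_per_km = N_GROUP / C_KM_PER_S * 1e6
print(f"{delay_us_per_km:.2f} us/km")             # ~4.90 us per km of fibre
print(f"{100 * delay_us_per_km / 1000:.2f} ms")   # one-way delay of a 100 km path,
                                                  # before DCF or equipment delays
```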

  9. Iris indexing based on local intensity order pattern

    NASA Astrophysics Data System (ADS)

    Emerich, Simina; Malutan, Raul; Crisan, Septimiu; Lefkovits, Laszlo

    2017-03-01

    In recent years, iris biometric systems have increased in popularity and have proven capable of handling large-scale databases. The main advantages of these systems are accuracy and reliability. Proper classification of iris patterns is expected to reduce the matching time in huge databases. This paper presents an iris indexing technique based on the Local Intensity Order Pattern. The performance of the present approach is evaluated on the UPOL database and compared with other recent systems designed for iris indexing. The results illustrate the potential of the proposed method for large-scale iris identification.
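
    The ordering idea behind such descriptors can be conveyed with a deliberately simplified sketch: rank each pixel's four neighbours by intensity, map the resulting permutation to one of 4! = 24 bins, and histogram over the region. The published LIOP descriptor uses rotation-invariant circular sampling and weighting, so this toy version only illustrates the principle.

```python
import numpy as np
from itertools import permutations

# Map each of the 24 possible 4-neighbour intensity orderings to a bin index.
PERMS = {p: i for i, p in enumerate(permutations(range(4)))}

def order_pattern_histogram(img):
    """Histogram of neighbour-intensity orderings over a grayscale region;
    such histograms can be compared to shortlist candidates in an index."""
    h = np.zeros(len(PERMS))
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            nb = (img[y - 1, x], img[y + 1, x], img[y, x - 1], img[y, x + 1])
            h[PERMS[tuple(np.argsort(nb))]] += 1
    return h / h.sum()
```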

  10. Non-symmetric forms of non-linear vibrations of flexible cylindrical panels and plates under longitudinal load and additive white noise

    NASA Astrophysics Data System (ADS)

    Krysko, V. A.; Awrejcewicz, J.; Krylova, E. Yu; Papkova, I. V.; Krysko, A. V.

    2018-06-01

    Parametric non-linear vibrations of flexible cylindrical panels subjected to additive white noise are studied. The governing Marguerre equations are investigated using the finite difference method (FDM) of the second-order accuracy and the Runge-Kutta method. The considered mechanical structural member is treated as a system of many/infinite number of degrees of freedom (DoF). The dependence of chaotic vibrations on the number of DoFs is investigated. Reliability of results is guaranteed by comparing the results obtained using two qualitatively different methods to reduce the problem of PDEs (partial differential equations) to ODEs (ordinary differential equations), i.e. the Faedo-Galerkin method in higher approximations and the 4th and 6th order FDM. The Cauchy problem obtained by the FDM is eventually solved using the 4th-order Runge-Kutta methods. The numerical experiment yielded, for a certain set of parameters, the non-symmetric vibration modes/forms with and without white noise. In particular, it has been illustrated and discussed that action of white noise on chaotic vibrations implies quasi-periodicity, whereas the previously non-symmetric vibration modes are closer to symmetric ones.
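
    Anchoring the last step of that chain, a generic classical 4th-order Runge-Kutta step for the reduced ODE system y' = f(t, y) looks like this (a textbook sketch, not the authors' code):

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical 4th-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Example: damped linear oscillator as a stand-in for the reduced system.
f = lambda t, y: np.array([y[1], -0.1 * y[1] - y[0]])
y = np.array([1.0, 0.0])
for i in range(1000):
    y = rk4_step(f, i * 0.01, y, 0.01)
print(y)
```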

  11. General motor function assessment scale--reliability of a Norwegian version.

    PubMed

    Langhammer, Birgitta; Lindmark, Birgitta

    2014-01-01

    The General Motor Function assessment scale (GMF) measures activity-related dependence, pain and insecurity among older people in frail health. The aim of the present study was to translate the GMF into a Norwegian version (N-GMF) and establish its reliability and clinical feasibility. The GMF was translated using a forward and backward process, and a convenience sample of 30 frail elderly people was tested with it. The intra-rater reliability tests were performed by three physiotherapists, and the inter-rater reliability test was done by the same three plus nine independent colleagues. The statistical analyses were performed with a pairwise analysis for intra- and inter-rater reliability, using Cronbach's α, Percentage Agreement (PA), Svensson's rank transformable method and Cohen's κ. The Cronbach's α coefficients for the different subscales of the N-GMF were 0.68 for Dependency, 0.73 for Pain and 0.75 for Insecurity. Intra-rater reliability: The variation in the PA for the total score was 40-70% in Dependence, 30-40% in Pain and 30-60% in Insecurity. The Relative Rank Variant (RV) indicated a modest individual bias and an augmented rank-order agreement coefficient r(a) of 0.96, 0.96 and 0.99, respectively. The variation in the κ statistics was 0.27-0.62 for Dependence, 0.17-0.35 for Pain and 0.13-0.47 for Insecurity. Inter-rater reliability: The PA between different testers in Dependence, Pain and Insecurity was 74%, 89% and 74%, respectively. The augmented rank-order agreement coefficients were: for Dependence, r(a) = 0.97; for Pain, r(a) = 0.99; and for Insecurity, r(a) = 0.99. The N-GMF is a fairly reliable instrument for use with frail elderly people, with intra-rater and inter-rater reliability moderate in Dependence and slight to fair in Pain and Insecurity. The clinical usefulness was stressed in regard to its main focus, the frail elderly, and to communication within a multidisciplinary team. Implications for Rehabilitation The Norwegian General Motor Function Assessment Scale (N-GMF) is a reliable instrument. The N-GMF is an instrument for screening and assessment of activity-related dependence, pain and insecurity in frail older people. The N-GMF may be used as a tool of communication in a multidisciplinary team.
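
    The pairwise agreement statistics used in this kind of analysis are easy to reproduce; a generic sketch of percentage agreement and Cohen's kappa for two raters (scikit-learn supplies the chance correction), not the study's own scripts:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def pairwise_agreement(rater_a, rater_b):
    """Percentage agreement and Cohen's kappa between two raters' scores."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    pa = 100.0 * np.mean(a == b)          # raw agreement, %
    return pa, cohen_kappa_score(a, b)    # chance-corrected agreement

pa, kappa = pairwise_agreement([2, 3, 1, 0, 2, 3], [2, 3, 2, 0, 2, 1])
print(pa, kappa)
```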

  12. An Evaluation Method of Equipment Reliability Configuration Management

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Feng, Weijia; Zhang, Wei; Li, Yuan

    2018-01-01

    At present, many equipment development companies are aware of the great significance of reliability in equipment development. However, due to the lack of an effective management evaluation method, it is very difficult for an equipment development company to manage its own reliability work. An evaluation method for equipment reliability configuration management determines the reliability management capability of the company: reliability is not only designed in, but must also be managed into being. This paper evaluates reliability management capability using a reliability configuration capability maturity model (RCM-CMM) evaluation method.

  13. An improved TV caption image binarization method

    NASA Astrophysics Data System (ADS)

    Jiang, Mengdi; Cheng, Jianghua; Chen, Minghui; Ku, Xishu

    2018-04-01

    TV video caption image binarization has an important influence on semantic video retrieval. An improved binarization method for caption images is proposed in this paper. In order to overcome the ghosting and broken-stroke problems of the traditional Niblack method, the method considers both the global information and the local information of the images. First, traditional Otsu and Niblack thresholds are used for initial binarization. Second, we introduce the difference between the maximum and minimum values in a local window as a third threshold to generate two images. Finally, with a logical AND operation of the two images, good results are obtained. Experimental results show that the proposed method is reliable and effective.
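
    A rough approximation of the proposed combination might look like this: binarize globally with Otsu, locally with Niblack, gate the local mask with a max-min contrast threshold, and AND the results. Window size, k, and the contrast threshold dmin are assumptions, and the paper's exact gating may differ in detail.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter
from skimage.filters import threshold_otsu, threshold_niblack

def caption_binarize(img, window=25, k=-0.2, dmin=30):
    """Otsu (global) AND contrast-gated Niblack (local) binarization.
    img: 2-D grayscale array, bright caption text assumed."""
    mask_global = img > threshold_otsu(img)
    mask_local = img > threshold_niblack(img, window_size=window, k=k)
    contrast = maximum_filter(img, window) - minimum_filter(img, window)
    return mask_global & mask_local & (contrast > dmin)
```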

  14. Oxygen utilization of the human left ventricle - An indirect method for its evaluation and clinical considerations

    NASA Technical Reports Server (NTRS)

    Ghista, D. N.; Sandler, H.

    1974-01-01

    An analytical method is presented for determining the oxygen consumption rate of the intact, working human left ventricle (as opposed to an empty but beating one). Use is made of experimental recordings of the chamber pressure and the associated dimensions of the LV. LV dimensions are determined by cineangiocardiography, and the chamber pressure is obtained by means of fluid-filled catheters during retrograde or transeptal catheterization. An analytical method incorporating these data is then employed for the evaluation of the LV coronary oxygen consumption in five subjects. Oxygen consumption for these subjects was also obtained by the conventional clinical method in order to evaluate the reliability of the proposed method.

  15. An automatic and accurate method of full heart segmentation from CT image based on linear gradient model

    NASA Astrophysics Data System (ADS)

    Yang, Zili

    2017-07-01

    Heart segmentation is an important auxiliary method in the diagnosis of many heart diseases, such as coronary heart disease and atrial fibrillation, and in the planning of tumor radiotherapy. Most existing methods for full heart segmentation treat the heart as a whole and cannot accurately extract the bottom of the heart. In this paper, we propose a new method based on a linear gradient model to segment the whole heart from CT images automatically and accurately. Twelve cases were used to test the method; accurate segmentation results were achieved and confirmed by clinical experts. The results can provide reliable clinical support.

  16. A trough for improved SFG spectroscopy of lipid monolayers.

    PubMed

    Franz, Johannes; van Zadel, Marc-Jan; Weidner, Tobias

    2017-05-01

    Lipid monolayers are indispensable model systems for biological membranes. The main advantage over bilayer model systems is that the surface pressure within the layer can be directly and reliably controlled. The sensitive interplay between surface pressure and temperature determines the molecular order within a model membrane and consequently determines the membrane phase behavior. The lipid phase is of crucial importance for a range of membrane functions such as protein interactions and membrane permeability. A very reliable method to probe the structure of lipid monolayers is sum frequency generation (SFG) vibrational spectroscopy. Not only is SFG extremely surface sensitive but it can also directly access critical parameters such as lipid order and orientation, and it can provide valuable information about protein interactions along with interfacial hydration. However, recent studies have shown that temperature gradients caused by high power laser beams perturb the lipid layers and potentially obscure the spectroscopic results. Here we demonstrate how the local heating problem can be effectively reduced by spatially distributing the laser pulses on the sample surface using a translating Langmuir trough for SFG experiments at lipid monolayers. The efficiency of the trough is illustrated by the detection of enhanced molecular order due to reduced heat load.

  17. A trough for improved SFG spectroscopy of lipid monolayers

    NASA Astrophysics Data System (ADS)

    Franz, Johannes; van Zadel, Marc-Jan; Weidner, Tobias

    2017-05-01

    Lipid monolayers are indispensable model systems for biological membranes. The main advantage over bilayer model systems is that the surface pressure within the layer can be directly and reliably controlled. The sensitive interplay between surface pressure and temperature determines the molecular order within a model membrane and consequently determines the membrane phase behavior. The lipid phase is of crucial importance for a range of membrane functions such as protein interactions and membrane permeability. A very reliable method to probe the structure of lipid monolayers is sum frequency generation (SFG) vibrational spectroscopy. Not only is SFG extremely surface sensitive but it can also directly access critical parameters such as lipid order and orientation, and it can provide valuable information about protein interactions along with interfacial hydration. However, recent studies have shown that temperature gradients caused by high power laser beams perturb the lipid layers and potentially obscure the spectroscopic results. Here we demonstrate how the local heating problem can be effectively reduced by spatially distributing the laser pulses on the sample surface using a translating Langmuir trough for SFG experiments at lipid monolayers. The efficiency of the trough is illustrated by the detection of enhanced molecular order due to reduced heat load.

  18. The construct validity and reliability of the Turkish version of Spreitzer's psychological empowerment scale

    PubMed Central

    2010-01-01

    Background Today, many organizations have adopted some kind of empowerment initiative for at least part of their workforce. Over the last two decades, two complementary perspectives on empowerment at work have emerged: structural and psychological empowerment. Psychological empowerment is a motivational construct manifested in four cognitions: meaning, competence, self-determination and impact. The aim of this article is to examine the construct validity and reliability of the Turkish translation of Spreitzer's psychological empowerment scale in a culturally diverse environment. Methods The scale contains four dimensions over 12 statements. Data were gathered from 260 nurses and 161 physicians. The dimensionality of the scale was evaluated by exploratory factor analyses. To investigate the multidimensional nature of the empowerment construct and the validity of the scale, first- and second-order confirmatory factor analyses were conducted. Furthermore, Cronbach alpha coefficients were assessed to investigate reliability. Results Exploratory factor analyses revealed four factors in both solutions. The first- and second-order factor analyses indicated an acceptable fit between the data and the theoretical model for nurses and physicians. Cronbach alpha coefficients varied between 0.81 and 0.94 for both groups, which may be considered satisfactory. Conclusions The analyses indicated that the psychometric properties of the Turkish version of the scale can be considered satisfactory. PMID:20214770

  19. Residual stress measurement in a metal microdevice by micro Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Song, Chang; Du, Liqun; Qi, Leijie; Li, Yu; Li, Xiaojun; Li, Yuanqi

    2017-10-01

    Large residual stresses induced during the electroforming process cannot be ignored when fabricating reliable metal microdevices. Accurate measurement is the basis for studying residual stress. Because of the micron-scale topological feature sizes of metal microdevices, the residual stress in them can hardly be measured by common methods. In this manuscript, a methodology is proposed to measure the residual stress in a metal microdevice using micro Raman spectroscopy (MRS). To estimate the residual stress in the metal material, micron-sized β-SiC particles were mixed into the electroforming solution for codeposition. First, an expression relating the Raman shifts to the induced biaxial stress for β-SiC was derived based on the theory of phonon deformation potentials and Hooke's law. Corresponding micro-electroforming experiments were performed, and the residual stress in the Ni-SiC composite layer was measured by both X-ray diffraction (XRD) and MRS. The validity of the MRS measurements was then verified by comparison with the residual stress measured by XRD, and the reliability of the MRS method was further validated by a statistical Student's t-test. The MRS measurements were found to have no systematic error in comparison with the XRD measurements, which confirms that the residual stresses measured by the MRS method are reliable. Furthermore, the MRS method was used to measure the residual stress in a micro inertial switch, confirming it as a convincing experimental tool for estimating the residual stress in metal microdevices with micron-order topological feature sizes.

  20. 77 FR 65873 - North American Electric Reliability Corporation; Notice of Filing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-31

    ... Electric Reliability Corporation; Notice of Filing Take notice that on October 12, 2012, the North American Electric Reliability Corporation (NERC) submitted a compliance filing in accordance with the Federal Energy Regulatory Commission's Order (FERC or Commission) in North American Electric Reliability Corporation, 140...

  1. 77 FR 32629 - North American Electric Reliability Corporation; Notice of Filing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-01

    ... Electric Reliability Corporation; Notice of Filing Take notice that on May 14, 2012, the North American Electric Reliability Corporation (NERC) submitted a compliance filing in accordance with the Federal Energy Regulatory Commission's Order (FERC or Commission) in North American Electric Reliability Corporation, 138...

  2. 78 FR 18333 - North American Electric Reliability Corporation; Notice of Filing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-26

    ... Electric Reliability Corporation; Notice of Filing Take notice that on March 15, 2013, the North American Electric Reliability Corporation (NERC) submitted a compliance filing and report in accordance with the Federal Energy Regulatory Commission's Order (FERC or Commission) in North American Electric Reliability...

  3. 78 FR 9687 - North American Electric Reliability Corporation; Notice of Filing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-11

    ... Electric Reliability Corporation; Notice of Filing Take notice that on February 1, 2013, the North American Electric Reliability Corporation (NERC) submitted a compliance filing in accordance with the Federal Energy Regulatory Commission's Order (FERC or Commission) in North American Electric Reliability Corporation, 141...

  4. 18 CFR 39.6 - Conflict of a Reliability Standard with a Commission Order.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS UNDER THE FEDERAL POWER ACT RULES CONCERNING CERTIFICATION OF THE ELECTRIC RELIABILITY ORGANIZATION; AND PROCEDURES FOR THE ESTABLISHMENT, APPROVAL, AND ENFORCEMENT OF ELECTRIC RELIABILITY STANDARDS § 39.6 Conflict of a Reliability Standard with...

  5. Chemotaxonomic identification of single bacteria by micro-Raman spectroscopy: application to clean-room-relevant biological contaminations.

    PubMed

    Rösch, Petra; Harz, Michaela; Schmitt, Michael; Peschke, Klaus-Dieter; Ronneberger, Olaf; Burkhardt, Hans; Motzkus, Hans-Walter; Lankers, Markus; Hofer, Stefan; Thiele, Hans; Popp, Jürgen

    2005-03-01

    Microorganisms, such as bacteria, which might be present as contamination inside an industrial food or pharmaceutical clean room process need to be identified on short time scales in order to minimize possible health hazards as well as production downtimes causing financial deficits. Here we describe the first results of single-particle micro-Raman measurements in combination with a classification method, the so-called support vector machine technique, allowing for a fast, reliable, and nondestructive online identification method for single bacteria.

  6. Chemotaxonomic Identification of Single Bacteria by Micro-Raman Spectroscopy: Application to Clean-Room-Relevant Biological Contaminations

    PubMed Central

    Rösch, Petra; Harz, Michaela; Schmitt, Michael; Peschke, Klaus-Dieter; Ronneberger, Olaf; Burkhardt, Hans; Motzkus, Hans-Walter; Lankers, Markus; Hofer, Stefan; Thiele, Hans; Popp, Jürgen

    2005-01-01

    Microorganisms, such as bacteria, which might be present as contamination inside an industrial food or pharmaceutical clean room process need to be identified on short time scales in order to minimize possible health hazards as well as production downtimes causing financial deficits. Here we describe the first results of single-particle micro-Raman measurements in combination with a classification method, the so-called support vector machine technique, allowing for a fast, reliable, and nondestructive online identification method for single bacteria. PMID:15746368

  7. Adaptive Sampling using Support Vector Machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D. Mandelli; C. Smith

    2012-11-01

    Reliability/safety analysis of stochastic dynamic systems (e.g., nuclear power plants, airplanes, chemical plants) is currently performed through a combination of Event-Trees and Fault-Trees. However, these conventional methods suffer from certain drawbacks: • Timing of events is not explicitly modeled • Ordering of events is preset by the analyst • The modeling of complex accident scenarios is driven by expert judgment For these reasons, there is currently increasing interest in the development of dynamic PRA methodologies, since they can be used to address the deficiencies of conventional methods listed above.

  8. A new computational method for reacting hypersonic flows

    NASA Astrophysics Data System (ADS)

    Niculescu, M. L.; Cojocaru, M. G.; Pricop, M. V.; Fadgyas, M. C.; Pepelea, D.; Stoican, M. G.

    2017-07-01

    Hypersonic gas dynamics computations are challenging due to the difficulty of obtaining reliable and robust chemistry models to couple with the Navier-Stokes equations. From the numerical point of view, it is very difficult to integrate the Navier-Stokes equations together with the chemistry model equations because these partial differential equations have very different characteristic time scales. For these reasons, almost all known finite volume methods fail to solve this second-order partial differential system. Unfortunately, the heating of Earth reentry vehicles such as space shuttles and capsules is closely linked to endothermic chemical reactions. A better prediction of the wall heat flux leads to a smaller safety coefficient for the thermal shield of a space reentry vehicle; therefore, the size of the thermal shield decreases and the payload increases. For these reasons, the present paper proposes a new computational method based on chemical equilibrium, which gives accurate predictions of hypersonic heating in order to support Earth reentry capsule design.

  9. Reliability of Chinese medicine diagnostic variables in the examination of patients with osteoarthritis of the knee.

    PubMed

    Hua, Bin; Abbas, Estelle; Hayes, Alan; Ryan, Peter; Nelson, Lisa; O'Brien, Kylie

    2012-11-01

    Chinese medicine (CM) has its own diagnostic indicators that are used as evidence of change in a patient's condition. The majority of studies investigating efficacy of Chinese herbal medicine (CHM) have utilized biomedical diagnostic endpoints. For CM clinical diagnostic variables to be incorporated into clinical trial designs, there would need to be evidence that these diagnostic variables are reliable. Previous studies have indicated that the reliability of CM syndrome diagnosis is variable. Little information is known about where the variability stems from--the basic data collection level or the synthesis of diagnostic data, or both. No previous studies have investigated systematically the reliability of all four diagnostic methods used in the CM diagnostic process (Inquiry, Inspection, Auscultation/Olfaction, and Palpation). The objective of this study was to assess the inter-rater reliability of data collected using the four diagnostic methods of CM in Australian patients with knee osteoarthritis (OA), in order to investigate if CM variables could be used with confidence as diagnostic endpoints in a clinical trial investigating the efficacy of a CHM in treating OA. An inter-rater reliability study was conducted as a substudy of a clinical trial investigating the treatment of knee OA with Chinese herbal medicine. Two (2) experienced CM practitioners conducted a CM examination separately, within 2 hours of each other, in 40 participants. A CM assessment form was utilized to record the diagnostic data. Cohen's κ coefficient was used as a measure of the level of agreement between 2 practitioners. There was a relatively good level of agreement for Inquiry and Auscultation variables, and, in general, a low level of agreement for (visual) Inspection and Palpation variables. There was variation in the level of agreement between 2 practitioners on clinical information collected using the Four Diagnostic Methods of a CM examination. Some aspects of CM diagnosis appear to be reliable, while others are not. Based on these results, it was inappropriate to use CM diagnostic variables as diagnostic endpoints in the main study, which was an investigation of efficacy of CHM treatment of knee OA.

  10. Performance of Blind Source Separation Algorithms for FMRI Analysis using a Group ICA Method

    PubMed Central

    Correa, Nicolle; Adali, Tülay; Calhoun, Vince D.

    2007-01-01

    Independent component analysis (ICA) is a popular blind source separation (BSS) technique that has proven to be promising for the analysis of functional magnetic resonance imaging (fMRI) data. A number of ICA approaches have been used for fMRI data analysis, and even more ICA algorithms exist; however, the impact of using different algorithms on the results is largely unexplored. In this paper, we study the performance of four major classes of algorithms for spatial ICA, namely information maximization, maximization of non-gaussianity, joint diagonalization of cross-cumulant matrices, and second-order correlation based methods when they are applied to fMRI data from subjects performing a visuo-motor task. We use a group ICA method to study the variability among different ICA algorithms and propose several analysis techniques to evaluate their performance. We compare how different ICA algorithms estimate activations in expected neuronal areas. The results demonstrate that the ICA algorithms using higher-order statistical information prove to be quite consistent for fMRI data analysis. Infomax, FastICA, and JADE all yield reliable results; each having their strengths in specific areas. EVD, an algorithm using second-order statistics, does not perform reliably for fMRI data. Additionally, for the iterative ICA algorithms, it is important to investigate the variability of the estimates from different runs. We test the consistency of the iterative algorithms, Infomax and FastICA, by running the algorithm a number of times with different initializations and note that they yield consistent results over these multiple runs. Our results greatly improve our confidence in the consistency of ICA for fMRI data analysis. PMID:17540281
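
    The run-to-run consistency check described for the iterative algorithms can be reproduced with any off-the-shelf ICA implementation. A minimal sketch using scikit-learn's FastICA on synthetic mixtures follows; the toy sources and the matching-by-correlation step are illustrative stand-ins, not the study's fMRI pipeline:

      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(0)
      # Synthetic stand-in for fMRI data: three sources mixed into five signals
      S = np.c_[np.sin(np.linspace(0, 8, 1000)),
                np.sign(np.sin(np.linspace(0, 12, 1000))),
                rng.laplace(size=1000)]
      X = S @ rng.normal(size=(3, 5))

      # Run FastICA several times with different random initializations
      runs = [FastICA(n_components=3, random_state=seed).fit_transform(X)
              for seed in range(5)]

      # Consistency check: component order and sign are arbitrary in ICA, so
      # match components across runs by their best absolute correlation
      ref = runs[0]
      for r in runs[1:]:
          corr = np.abs(np.corrcoef(ref.T, r.T)[:3, 3:])
          print(corr.max(axis=1))   # values near 1 indicate consistent estimates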

  11. Performance of blind source separation algorithms for fMRI analysis using a group ICA method.

    PubMed

    Correa, Nicolle; Adali, Tülay; Calhoun, Vince D

    2007-06-01

    Independent component analysis (ICA) is a popular blind source separation technique that has proven to be promising for the analysis of functional magnetic resonance imaging (fMRI) data. A number of ICA approaches have been used for fMRI data analysis, and even more ICA algorithms exist; however, the impact of using different algorithms on the results is largely unexplored. In this paper, we study the performance of four major classes of algorithms for spatial ICA, namely, information maximization, maximization of non-Gaussianity, joint diagonalization of cross-cumulant matrices and second-order correlation-based methods, when they are applied to fMRI data from subjects performing a visuo-motor task. We use a group ICA method to study variability among different ICA algorithms, and we propose several analysis techniques to evaluate their performance. We compare how different ICA algorithms estimate activations in expected neuronal areas. The results demonstrate that the ICA algorithms using higher-order statistical information prove to be quite consistent for fMRI data analysis. Infomax, FastICA and joint approximate diagonalization of eigenmatrices (JADE) all yield reliable results, with each having its strengths in specific areas. Eigenvalue decomposition (EVD), an algorithm using second-order statistics, does not perform reliably for fMRI data. Additionally, for iterative ICA algorithms, it is important to investigate the variability of estimates from different runs. We test the consistency of the iterative algorithms Infomax and FastICA by running the algorithm a number of times with different initializations, and we note that they yield consistent results over these multiple runs. Our results greatly improve our confidence in the consistency of ICA for fMRI data analysis.

  12. Optimal Wind Power Uncertainty Intervals for Electricity Market Operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Ying; Zhou, Zhi; Botterud, Audun

    It is important to select an appropriate uncertainty level of the wind power forecast for power system scheduling and electricity market operation. Traditional methods hedge against a predefined level of wind power uncertainty, such as a specific confidence interval or uncertainty set, which leaves open the question of how best to select the appropriate uncertainty level. To bridge this gap, this paper proposes a model to optimize the forecast uncertainty intervals of wind power for power system scheduling problems, with the aim of achieving the best trade-off between economics and reliability. We then reformulate and linearize the model into a mixed-integer linear program (MILP) without strong assumptions on the shape of the probability distribution. In order to investigate the impacts on cost, reliability, and prices in an electricity market, we apply the proposed model to a two-settlement electricity market based on a six-bus test system and on a power system representing the U.S. state of Illinois. The results show that the proposed method can not only help to balance the economics and reliability of power system scheduling, but also help to stabilize energy prices in electricity market operation.

  13. [Comprehensive weighted recognition method for hydrological abrupt change: With the runoff series of Jiajiu hydrological station in Lancang River as an example].

    PubMed

    Gu, Hai Ting; Xie, Ping; Sang, Yan Fang; Wu, Zi Yi

    2018-04-01

    Abrupt change is an important manifestation of hydrological processes undergoing dramatic variation in the context of global climate change, and its accurate recognition has great significance for understanding changes in hydrological processes and for practical hydrology and water resources work. Traditional methods are unreliable near both ends of a sample series, and the results of different methods are often inconsistent. To solve this problem, we proposed a comprehensive weighted recognition method for hydrological abrupt change, based on weights derived from a comparison of 12 commonly used change-point tests. The reliability of the method was verified by Monte Carlo statistical tests. The results showed that the efficiency of the 12 methods was influenced by factors including the coefficient of variation (Cv), the deviation coefficient (Cs) before the change point, the mean-value difference coefficient, the Cv difference coefficient and the Cs difference coefficient, but had no significant relationship with the mean value of the sequence. Based on the performance of each method in the statistical tests, each test was assigned a weight. The sliding rank-sum test and the sliding run test had the highest weights, whereas the RS test had the lowest. By this means, the change point with the largest comprehensive weight can be selected as the final result when the results of the different methods are inconsistent. The method was used to analyze the maximum-flow series (1-day, 3-day, 5-day, 7-day and 1-month) of Jiajiu station in the lower reaches of the Lancang River. The results showed that each series had an obvious jump in 2004, which agrees with the physical causes of hydrological process change and with water conservancy construction. The rationality and reliability of the proposed method were thus verified.
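
    The combination rule amounts to weighted voting across change-point detectors. A schematic sketch follows; the two toy detectors, their weights and the tolerance for pooling nearby candidates are placeholders, not the paper's twelve tests or its Monte-Carlo-derived weights:

      import numpy as np
      from collections import defaultdict

      def weighted_change_point(series, detectors, weights, tol=2):
          """Pool the candidate change points returned by several detectors
          (merging candidates within `tol` samples of each other) and return
          the pooled location carrying the largest total weight."""
          votes = defaultdict(float)
          for detect, w in zip(detectors, weights):
              cp = detect(series)
              for key in list(votes):          # merge with a nearby candidate
                  if abs(key - cp) <= tol:
                      votes[key] += w
                      break
              else:
                  votes[cp] += w
          return max(votes, key=votes.get)

      # Two toy detectors: largest mean shift and largest CUSUM excursion
      def mean_shift(x):
          return max(range(2, len(x) - 2),
                     key=lambda i: abs(x[:i].mean() - x[i:].mean()))

      def cusum(x):
          return int(np.argmax(np.abs(np.cumsum(x - x.mean()))))

      x = np.r_[np.random.default_rng(1).normal(0, 1, 60),
                np.random.default_rng(2).normal(2, 1, 40)]
      print(weighted_change_point(x, [mean_shift, cusum], [0.6, 0.4]))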

  14. 77 FR 53884 - Automatic Underfrequency Load Shedding and Load Shedding Plans Reliability Standards; Notice of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-04

    ... Underfrequency Load Shedding and Load Shedding Plans Reliability Standards; Notice of Compliance Filing Take notice that on August 9, 2012, North American Electric Reliability Corporation submitted a compliance... Load Shedding Plans Reliability Standards, 139 FERC ] 61,098, (Order No. 763) (2012). Any person...

  15. 76 FR 28014 - Commission Information Collection Activities (FERC-600); Comment Request; Extension

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-13

    ... complaint, may order the Electric Reliability Organization to submit to the Commission a proposed reliability standard or a modification to a reliability standard that addresses a specific matter if the Commission considers such a new or modified reliability standard appropriate to carry out this section...

  16. Quantitation of Permethylated N-Glycans through Multiple-Reaction Monitoring (MRM) LC-MS/MS

    PubMed Central

    Zhou, Shiyue; Hu, Yunli; DeSantos-Garcia, Janie L.; Mechref, Yehia

    2015-01-01

    The important biological roles of glycans and their implications in disease development and progression have created a demand for sensitive quantitative glycomics methods. Quantitation of glycans present at low abundance is still analytically challenging. In this study, an N-linked glycan quantitation method using multiple reaction monitoring (MRM) on a triple quadrupole instrument was developed. The optimum normalized collision energy (CE) for N-glycan structures that are both sialylated and fucosylated was determined to be 30%, while it was found to be 35% for structures that are either fucosylated or sialylated. The optimum CE for mannose and complex-type N-glycan structures was determined to be 35%. Additionally, the use of three transitions was shown to facilitate reliable quantitation. A total of 88 N-glycan structures in human blood serum were quantified using this MRM approach. Reliable detection and quantitation of these structures was achieved when the equivalent of 0.005 μL of blood serum was analyzed. Accordingly, N-glycans can be reliably quantified in pooled human blood serum down to the hundredth-of-a-microliter level, spanning a dynamic concentration range of three orders of magnitude. MRM was also effectively utilized to quantitatively compare the expression of N-glycans derived from brain-targeting breast carcinoma cells (MDA-MB-231BR) and metastatic breast cancer cells (MDA-MB-231). Thus, the described MRM method for permethylated N-glycan structures enables rapid and reliable identification and quantitation of glycans derived from glycoproteins purified from, or present in, complex biological samples. PMID:25698222

  17. Reliability of solid-state lighting electrical drivers subjected to WHTOL accelerated aging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lall, Pradeep; Sakalauku, Peter; Davis, Lynn

    An investigation of a solid-state lighting (SSL) luminaire, focused on its electronic driver, which was exposed to a standard wet hot temperature operating life (WHTOL) test of 85% RH and 85°C in order to assess the reliability of prolonged exposure to a harsh environment, has been conducted. SSL luminaires are being introduced as headlamps in some of today's luxury automobiles and may also fulfill a variety of important outdoor applications such as overhead street lamps, traffic signals and landscape lighting. SSL luminaires in these environments are almost certain to encounter excessive moisture from humidity and high temperatures for persistent periods of time. The lack of accelerated test methods for LEDs to assess long-term reliability prior to introduction into the marketplace, the need for SSL physics-based PHM modeling indicators for the assessment and prediction of LED life, as well as the U.S. Department of Energy's R&D roadmap to replace today's lighting with SSL luminaires, make it important to increase the understanding of the reliability of SSL devices, specifically in harsh-environment applications. In this work, a set of SSL electrical drivers was investigated to determine the failure mechanisms that occur during prolonged harsh-environment applications. Each driver contains four aluminum electrolytic capacitors (AECs) of three different types, considered the weakest components inside the SSL electrical driver. The reliability of the electrical driver was assessed by monitoring the change in capacitance and the change in equivalent series resistance for each AEC, as well as by monitoring the luminous flux of the SSL luminaire, i.e., the output of the electrical driver. The luminous flux of a pristine SSL electrical driver was also monitored in order to detect minute changes in the electrical driver's output and to aid the investigation of the SSL luminaire's reliability. The failure mechanisms of the electrical drivers have been determined and are presented in this paper.

  18. Instruments to Explore Blended Learning: Modifying a Method to Analyse Online Communication for the Analysis of Face-to-Face Communication

    ERIC Educational Resources Information Center

    de Leng, Bas A.; Dolmans, Diana H. J. M.; Donkers, H. H. L. M.; Muijtjens, Arno M. M.; van der Vleuten, Cees P. M.

    2010-01-01

    In the complex practice of today's blended learning, educators need to be able to evaluate both online and face-to-face communication in order to get the full picture of what is going on in blended learning scenarios. The aim of this study was to investigate the reliability and feasibility of a practical instrument for analysing face-to-face…

  19. Nonlinear Rayleigh wave inversion based on the shuffled frog-leaping algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Cheng-Yu; Wang, Yan-Yan; Wu, Dun-Shi; Qin, Xiao-Jun

    2017-12-01

    At present, near-surface shear wave velocities are mainly calculated through Rayleigh wave dispersion-curve inversion in engineering surface investigations, but the required calculations pose a highly nonlinear global optimization problem. In order to alleviate the risk of falling into a local optimum, this paper introduces a new global optimization method, the shuffled frog-leaping algorithm (SFLA), into the Rayleigh wave dispersion-curve inversion process. SFLA is a swarm-intelligence-based algorithm that simulates a group of frogs searching for food. It uses few parameters, achieves rapid convergence, and is capable of effective global searching. In order to test the reliability and computational performance of SFLA, noise-free and noisy synthetic datasets were inverted. We conducted a comparative analysis with other established algorithms using the noise-free dataset, and then tested the ability of SFLA to cope with data noise. Finally, we inverted a real-world example to examine the applicability of SFLA. Results from both synthetic and field data demonstrate the effectiveness of SFLA in the interpretation of Rayleigh wave dispersion curves. We found that SFLA is superior to the established methods in terms of both reliability and computational efficiency, so it offers great potential for improving our ability to solve geophysical inversion problems.
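
    For readers unfamiliar with SFLA, here is a compact sketch of its core loop (fitness sorting, round-robin memeplex partitioning, and worst-frog updates) on a generic objective; the step rules and parameters follow common descriptions of the algorithm and are not necessarily the exact variant used in this paper:

      import numpy as np

      def sfla(f, dim, n_frogs=30, n_memeplexes=5, iters=200, bound=5.0, seed=0):
          """Minimal shuffled frog-leaping algorithm minimizing f on a box."""
          rng = np.random.default_rng(seed)
          frogs = rng.uniform(-bound, bound, (n_frogs, dim))
          for _ in range(iters):
              # Sort by fitness and deal frogs into memeplexes round-robin
              frogs = frogs[np.argsort([f(p) for p in frogs])]
              for m in range(n_memeplexes):
                  idx = np.arange(m, n_frogs, n_memeplexes)
                  best, worst = idx[0], idx[-1]
                  # Leap the worst frog toward the memeplex best ...
                  trial = frogs[worst] + rng.uniform() * (frogs[best] - frogs[worst])
                  trial = np.clip(trial, -bound, bound)
                  if f(trial) < f(frogs[worst]):
                      frogs[worst] = trial
                      continue
                  # ... failing that, toward the global best, else randomize
                  trial = frogs[worst] + rng.uniform() * (frogs[0] - frogs[worst])
                  trial = np.clip(trial, -bound, bound)
                  frogs[worst] = trial if f(trial) < f(frogs[worst]) \
                      else rng.uniform(-bound, bound, dim)
          return frogs[np.argmin([f(p) for p in frogs])]

      print(sfla(lambda p: np.sum(p**2), dim=4))   # should approach the origin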

  20. Making literature reviews more reliable through application of lessons from systematic reviews.

    PubMed

    Haddaway, N R; Woodcock, P; Macura, B; Collins, A

    2015-12-01

    Review articles can provide valuable summaries of the ever-increasing volume of primary research in conservation biology. Where findings may influence important resource-allocation decisions in policy or practice, there is a need for a high degree of reliability when reviewing evidence. However, traditional literature reviews are susceptible to a number of biases during the identification, selection, and synthesis of included studies (e.g., publication bias, selection bias, and vote counting). Systematic reviews, pioneered in medicine and translated into conservation in 2006, address these issues through a strict methodology that aims to maximize transparency, objectivity, and repeatability. Systematic reviews will always be the gold standard for reliable synthesis of evidence. However, traditional literature reviews remain popular and will continue to be valuable where systematic reviews are not feasible. Where traditional reviews are used, lessons can be taken from systematic reviews and applied to traditional reviews in order to increase their reliability. Certain key aspects of systematic review methods that can be used in a context-specific manner in traditional reviews include focusing on mitigating bias; increasing transparency, consistency, and objectivity; and critically appraising the evidence and avoiding vote counting. In situations where conducting a full systematic review is not feasible, the proposed approach to reviewing evidence in a more systematic way can substantially improve the reliability of review findings, providing a time- and resource-efficient means of maximizing the value of traditional reviews. These methods are aimed particularly at those conducting literature reviews where systematic review is not feasible, for example, for graduate students, single reviewers, or small organizations. © 2015 Society for Conservation Biology.

  1. Fabrication of nano-scale Cu bond pads with seal design in 3D integration applications.

    PubMed

    Chen, K N; Tsang, C K; Wu, W W; Lee, S H; Lu, J Q

    2011-04-01

    A method to fabricate nano-scale Cu bond pads for improving bonding quality in 3D integration applications is reported. The effect of Cu bonding quality on inter-level via structural reliability for 3D integration applications is investigated. We developed a Cu nano-scale-height bond pad structure and a fabrication process for improved bonding quality, recessing the oxide using a combination of SiO2 CMP and dilute-HF wet etching. In addition, in order to achieve improved wafer-level bonding, we introduced a seal design concept that prevents corrosion and provides extra mechanical support. Demonstrations of these concepts and processes establish the feasibility of reliable nano-scale 3D integration applications.

  2. Utility of the Rosenberg self-esteem scale.

    PubMed

    Davis, Clare; Kellett, Stephen; Beail, Nigel

    2009-05-01

    The Rosenberg Self-Esteem Scale (RSES) continues to be used to purportedly measure self-esteem of people with intellectual disabilities, despite the lack of sound evidence concerning its validity and reliability when employed with this population. The psychometric foundations of the RSES were analyzed here with a sample of 219 participants with intellectual disabilities. The factor analytic methods employed revealed two factors (Self-Worth and Self-Criticism) and more specific problems with RSES Items 5 and 8. Overall, this scale showed only moderate temporal and moderate internal reliability and poor aspects of criterion validity. Results are discussed with reference to either developing a new measure of self-esteem or redesigning and simplifying the RSES in order to increase its initial face validity in intellectual disability samples.

  3. Generalized Higher Order Orthogonal Iteration for Tensor Learning and Decomposition.

    PubMed

    Liu, Yuanyuan; Shang, Fanhua; Fan, Wei; Cheng, James; Cheng, Hong

    2016-12-01

    Low-rank tensor completion (LRTC) has successfully been applied to a wide range of real-world problems. Despite these broad, successful applications, existing LRTC methods may become very slow or even inapplicable for large-scale problems. To address this issue, a novel core tensor trace-norm minimization (CTNM) method is proposed for simultaneous tensor learning and decomposition, with much lower computational complexity. In our solution, first, the equivalence relation between the trace norm of a low-rank tensor and that of its core tensor is derived. Second, the trace norm of the core tensor is used to replace that of the whole tensor, which leads to two much smaller-scale matrix trace-norm minimization (TNM) problems. Finally, an efficient alternating direction augmented Lagrangian method is developed to solve the problems. Our CTNM formulation needs only O((R^N + NRI) log(√(I^N))) observations to reliably recover an N-th-order I×I×…×I tensor of n-rank (r, r, …, r), compared with the O(rI^(N-1)) observations required by tensor TNM methods (I > R ≥ r). Extensive experimental results show that CTNM is usually more accurate than these methods, and is orders of magnitude faster.

  4. Advances in high-resolution mass spectrometry based on metabolomics studies for food--a review.

    PubMed

    Rubert, Josep; Zachariasova, Milena; Hajslova, Jana

    2015-01-01

    Food authenticity has become a necessity for global food policies, since food placed on the market must be authentic. Authentication has always been a challenge, since in the past minor components, also called markers, were mainly monitored by chromatographic methods in order to authenticate food. Nowadays, however, advanced analytical methods allow food fingerprints to be acquired. At the same time, they have been combined with chemometrics, which uses statistical methods to verify food and to extract maximum information from chemical data. These sophisticated methods, based on different separation techniques or used stand-alone, have recently been coupled to high-resolution mass spectrometry (HRMS) in order to verify the authenticity of food. The new generation of HRMS detectors has seen significant advances in resolving power, sensitivity, robustness, extended dynamic range, easier mass calibration and tandem-mass capabilities, making HRMS more attractive and useful to the food metabolomics community and therefore a reliable tool for food authenticity. The purpose of this review is to summarise and describe the most recent metabolomics approaches in the area of food metabolomics, and to discuss the strengths and drawbacks of HRMS analytical platforms combined with chemometrics.

  5. New method of extracting information of arterial oxygen saturation based on ∑|Δ|

    NASA Astrophysics Data System (ADS)

    Dai, Wenting; Lin, Ling; Li, Gang

    2017-04-01

    Noninvasive detection of oxygen saturation with near-infrared spectroscopy has been widely used in clinics. In order to further enhance its detection precision and reliability, this paper proposes a time-domain absolute-difference summation (∑|Δ|) method based on the dynamic spectrum. In this method, the ratio of the absolute differences between two differential sampling points at the same moment on the logarithmic photoplethysmography signals of red and infrared light is obtained in turn, yielding a ratio sequence that is then screened with a statistical method. Finally, the summation of the screened ratio sequence is used as the oxygen saturation coefficient Q. We collected 120 reference samples of SpO2 and compared the results of two methods, ∑|Δ| and peak-peak. The average root-mean-square errors of the two methods were 3.02% and 6.80%, respectively, in 20 randomly selected cases. In addition, the average variance of Q for 10 samples obtained by the new method was reduced to 22.77% of that obtained by the peak-peak method. Compared with the commercial product, the new method gives more accurate results. Theoretical and experimental analysis indicates that the ∑|Δ| method can enhance the precision and reliability of real-time oxygen saturation detection.
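
    A schematic rendering of the ∑|Δ| computation as described in the abstract, with the statistical screening reduced to a simple z-score cut; the screening rule and signal preprocessing are assumed stand-ins, not the paper's exact procedure:

      import numpy as np

      def oxygen_coefficient_q(ppg_red, ppg_ir, z_cut=2.0):
          """Sigma-|delta| coefficient Q: per-sample ratios of the absolute
          differences of the log PPG signals (red over infrared), screened
          by an assumed z-score rule and then summed, per the abstract."""
          dr = np.abs(np.diff(np.log(np.asarray(ppg_red, dtype=float))))
          di = np.abs(np.diff(np.log(np.asarray(ppg_ir, dtype=float))))
          keep = di > 0                              # avoid division by zero
          ratios = dr[keep] / di[keep]
          z = (ratios - ratios.mean()) / ratios.std()
          return ratios[np.abs(z) < z_cut].sum()     # Q, mapped to SpO2 by calibration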

  6. New method of extracting information of arterial oxygen saturation based on ∑|Δ|

    NASA Astrophysics Data System (ADS)

    Wenting, Dai; Ling, Lin; Gang, Li

    2017-04-01

    Noninvasive detection of oxygen saturation with near-infrared spectroscopy has been widely used in clinics. In order to further enhance its detection precision and reliability, this paper proposes a time-domain absolute-difference summation (∑|Δ|) method based on the dynamic spectrum. In this method, the ratio of the absolute differences between two differential sampling points at the same moment on the logarithmic photoplethysmography signals of red and infrared light is obtained in turn, yielding a ratio sequence that is then screened with a statistical method. Finally, the summation of the screened ratio sequence is used as the oxygen saturation coefficient Q. We collected 120 reference samples of SpO2 and compared the results of two methods, ∑|Δ| and peak-peak. The average root-mean-square errors of the two methods were 3.02% and 6.80%, respectively, in 20 randomly selected cases. In addition, the average variance of Q for 10 samples obtained by the new method was reduced to 22.77% of that obtained by the peak-peak method. Compared with the commercial product, the new method gives more accurate results. Theoretical and experimental analysis indicates that the ∑|Δ| method can enhance the precision and reliability of real-time oxygen saturation detection.

  7. Towards a Probabilistic Preliminary Design Criterion for Buckling Critical Composite Shells

    NASA Technical Reports Server (NTRS)

    Arbocz, Johann; Hilburger, Mark W.

    2003-01-01

    A probability-based analysis method for predicting buckling loads of compression-loaded laminated-composite shells is presented, and its potential as a basis for a new shell-stability design criterion is demonstrated and discussed. In particular, a database containing information about specimen geometry, material properties, and measured initial geometric imperfections for a selected group of laminated-composite cylindrical shells is used to calculate new buckling-load "knockdown factors". These knockdown factors are shown to be substantially improved, and hence much less conservative, than the corresponding deterministic knockdown factors that are presently used by industry. The probability integral associated with the analysis is evaluated by using two methods: the exact Monte Carlo method and an approximate First-Order Second-Moment method. A comparison of the results from these two methods indicates that the First-Order Second-Moment method yields results that are conservative for the shells considered. Furthermore, the results show that the improved, reliability-based knockdown factor presented always yields a safe estimate of the buckling load for the shells examined.
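
    The two ways of evaluating the probability integral can be illustrated on a toy limit state. The sketch below contrasts a first-order second-moment estimate with a crude Monte Carlo reference; the resistance-minus-load limit state and its parameters are invented for illustration, and for this linear Gaussian case the two estimates coincide up to sampling error (differences appear for nonlinear limit states such as shell buckling):

      import numpy as np
      from scipy.stats import norm

      # Toy limit state: failure when g = R - S < 0, with independent normals
      mu_r, sd_r = 10.0, 1.5          # resistance (invented values)
      mu_s, sd_s = 6.0, 2.0           # load

      # First-order second-moment: beta = mean(g) / std(g), Pf = Phi(-beta)
      beta = (mu_r - mu_s) / np.hypot(sd_r, sd_s)
      pf_fosm = norm.cdf(-beta)

      # Monte Carlo reference
      rng = np.random.default_rng(0)
      n = 1_000_000
      g = rng.normal(mu_r, sd_r, n) - rng.normal(mu_s, sd_s, n)
      pf_mc = np.mean(g < 0)

      print(f"beta={beta:.3f}  Pf(FOSM)={pf_fosm:.2e}  Pf(MC)={pf_mc:.2e}")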

  8. Low-Storage, Explicit Runge-Kutta Schemes for the Compressible Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Kennedy, Chistopher A.; Carpenter, Mark H.; Lewis, R. Michael

    1999-01-01

    The derivation of low-storage explicit Runge-Kutta (ERK) schemes has been performed in the context of integrating the compressible Navier-Stokes equations via direct numerical simulation. Optimization of ERK methods is done across a broad range of properties, such as accuracy efficiency, linear and nonlinear stability, error-control reliability, step-change stability, and dissipation/dispersion accuracy, subject to varying degrees of memory economization. Following van der Houwen and Wray, 16 ERK pairs are presented, using from two to five registers of memory per equation per grid point and having accuracies from third to fifth order. The methods have been assessed using the differential-equation testing code DETEST and the 1D wave equation. Two of the methods have been applied to the DNS of a compressible jet as well as to methane-air and hydrogen-air flames. The derived 3(2) and 4(3) pairs are competitive with existing full-storage methods. Although a substantial efficiency penalty accompanies the use of two- and three-register fifth-order methods, the best contemporary full-storage methods can be nearly matched while still saving two to three registers of memory.
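
    The low-storage idea can be seen in Williamson's classical 2N-storage third-order scheme, which keeps only the solution vector and one accumulator register per equation per grid point. The coefficients below are the standard Williamson RK3 values, shown for illustration; they are not the 3(2) or 4(3) pairs derived in the paper:

      import numpy as np

      # Williamson's classical 2N-storage RK3 coefficients and stage times
      A = (0.0, -5.0 / 9.0, -153.0 / 128.0)
      B = (1.0 / 3.0, 15.0 / 16.0, 8.0 / 15.0)
      C = (0.0, 1.0 / 3.0, 3.0 / 4.0)

      def rk3_2n_step(f, u, t, dt):
          """One step storing only the solution u and one accumulator du."""
          du = np.zeros_like(u)
          for a, b, c in zip(A, B, C):
              du = a * du + dt * f(t + c * dt, u)   # overwrite the register
              u = u + b * du
          return u

      # Usage: integrate u' = -u from u(0) = 1 and compare with exp(-1)
      u, t, dt = np.array([1.0]), 0.0, 0.01
      for _ in range(100):
          u = rk3_2n_step(lambda s, y: -y, u, t, dt)
          t += dt
      print(u[0], np.exp(-1.0))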

  9. Interrater Reliability Estimators Commonly Used in Scoring Language Assessments: A Monte Carlo Investigation of Estimator Accuracy

    ERIC Educational Resources Information Center

    Morgan, Grant B.; Zhu, Min; Johnson, Robert L.; Hodge, Kari J.

    2014-01-01

    Common estimators of interrater reliability include Pearson product-moment correlation coefficients, Spearman rank-order correlations, and the generalizability coefficient. The purpose of this study was to examine the accuracy of estimators of interrater reliability when varying the true reliability, number of scale categories, and number of…

  10. 76 FR 45237 - Commission Information Collection Activities (FERC-600); Comment Request; Submitted for OMB Review

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-28

    ... own motion or upon complaint, may order the Electric Reliability Organization to submit to the Commission a proposed reliability standard or a modification to a reliability standard that addresses a specific matter if the Commission considers such a new or modified reliability standard appropriate to...

  11. Test-retest reliability and comparability of paper and computer questionnaires for the Finnish version of the Tampa Scale of Kinesiophobia.

    PubMed

    Koho, P; Aho, S; Kautiainen, H; Pohjolainen, T; Hurri, H

    2014-12-01

    To estimate the internal consistency, test-retest reliability and comparability of paper and computer versions of the Finnish version of the Tampa Scale of Kinesiophobia (TSK-FIN) among patients with chronic pain. In addition, patients' personal experiences of completing both versions of the TSK-FIN and preferences between these two methods of data collection were studied. Test-retest reliability study. Paper and computer versions of the TSK-FIN were completed twice on two consecutive days. The sample comprised 94 consecutive patients with chronic musculoskeletal pain participating in a pain management or individual rehabilitation programme. The group rehabilitation design consisted of physical and functional exercises, evaluation of the social situation, psychological assessment of pain-related stress factors, and personal pain management training in order to regain overall function and mitigate the inconvenience of pain and fear-avoidance behaviour. The mean TSK-FIN score was 37.1 [standard deviation (SD) 8.1] for the computer version and 35.3 (SD 7.9) for the paper version. The mean difference between the two versions was 1.9 (95% confidence interval 0.8 to 2.9). Test-retest reliability was 0.89 for the paper version and 0.88 for the computer version. Internal consistency was considered to be good for both versions. The intraclass correlation coefficient for comparability was 0.77 (95% confidence interval 0.66 to 0.85), indicating substantial reliability between the two methods. Both versions of the TSK-FIN demonstrated substantial intertest reliability, good test-retest reliability, good internal consistency and acceptable limits of agreement, suggesting their suitability for clinical use. However, subjects tended to score higher when using the computer version. As such, in an ideal situation, data should be collected in a similar manner throughout the course of rehabilitation or clinical research. Copyright © 2014 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.

  12. Development and psychometric evaluation of the Premarital Sexual Behavior Assessment Scale for Young Women (PSAS-YW): an exploratory mixed method study.

    PubMed

    Rahmani, Azam; Merghati-Khoei, Effat; Moghadam-Banaem, Lida; Hajizadeh, Ebrahim; Hamdieh, Mostafa; Montazeri, Ali

    2014-06-13

    Premarital sexual behaviors are an important issue for women's health. The present study was designed to develop and examine the psychometric properties of a scale to identify young women who are at greater risk of premarital sexual behavior. This was an exploratory mixed-method investigation conducted in two phases. In the first phase, qualitative methods (focus group discussions and individual interviews) were applied to generate items and develop the questionnaire. In the second phase, the psychometric properties (validity and reliability) of the questionnaire were assessed. The first phase generated an item pool containing 53 statements related to premarital sexual behavior. In the second phase, item reduction was applied and the final version of the questionnaire, containing 26 items, was developed. The psychometric properties of this final version were assessed, and the results showed that the instrument has good structure and reliability. The results from exploratory factor analysis indicated a 5-factor solution for the instrument that jointly accounted for 57.4% of the observed variance. The Cronbach's alpha coefficient for the instrument was 0.87. This study provided a valid and reliable scale to identify premarital sexual behavior in young women. Assessment of premarital sexual behavior might help to improve women's sexual abstinence.

  13. Girsanov's transformation based variance reduced Monte Carlo simulation schemes for reliability estimation in nonlinear stochastic dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kanjilal, Oindrila, E-mail: oindrila@civil.iisc.ernet.in; Manohar, C.S., E-mail: manohar@civil.iisc.ernet.in

    The study considers the problem of simulation-based time-variant reliability analysis of nonlinear, randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first-order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of the calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single-degree-of-freedom (dof) nonlinear oscillators and a multi-degree-of-freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations. - Highlights: • The distance-minimizing control forces minimize a bound on the sampling variance. • Girsanov controls are established via the solution of a two-point boundary value problem. • Girsanov controls are obtained via a Volterra series representation of the transfer functions.
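
    The idea of centering the sampling measure near the design point, as in the FORM, has a simple static analogue that conveys the variance-reduction mechanism. The sketch below estimates a failure probability in standard normal space by shifting the sampling density to the most probable failure point; the limit-state function is invented, and the full Girsanov construction for randomly excited dynamical systems in the paper is considerably richer:

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      n = 20_000

      def g(x):
          """Invented limit state in standard normal space; failure when g < 0."""
          return 4.0 - x[..., 0] - 0.5 * x[..., 1] ** 2

      # FORM-style design point: the failure-surface point closest to the origin
      res = minimize(lambda x: x @ x, x0=np.array([3.0, 1.0]),
                     constraints={"type": "eq", "fun": g})
      x_star = res.x

      # Importance sampling from a standard normal shifted to the design point;
      # the likelihood ratio for a mean shift is exp(-z.x* + |x*|^2 / 2)
      z = rng.normal(size=(n, 2)) + x_star
      w = np.exp(-z @ x_star + 0.5 * x_star @ x_star)
      pf_is = np.mean((g(z) < 0) * w)

      # Crude Monte Carlo comparison (needs far more samples for equal accuracy)
      pf_mc = np.mean(g(rng.normal(size=(n, 2))) < 0)
      print(pf_is, pf_mc)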

  14. Higher and lowest order mixed finite element approximation of subsurface flow problems with solutions of low regularity

    NASA Astrophysics Data System (ADS)

    Bause, Markus

    2008-02-01

    In this work we study mixed finite element approximations of Richards' equation for simulating variably saturated subsurface flow and simultaneous reactive solute transport. Whereas higher order schemes have proved their ability to reliably approximate reactive solute transport (cf., e.g. [Bause M, Knabner P. Numerical simulation of contaminant biodegradation by higher order methods and adaptive time stepping. Comput Visual Sci 7;2004:61-78]), the Raviart-Thomas mixed finite element method (RT0) with a first order accurate flux approximation is popular for computing the underlying water flow field (cf. [Bause M, Knabner P. Computation of variably saturated subsurface flow by adaptive mixed hybrid finite element methods. Adv Water Resour 27;2004:565-581, Farthing MW, Kees CE, Miller CT. Mixed finite element methods and higher order temporal approximations for variably saturated groundwater flow. Adv Water Resour 26;2003:373-394, Starke G. Least-squares mixed finite element solution of variably saturated subsurface flow problems. SIAM J Sci Comput 21;2000:1869-1885, Younes A, Mosé R, Ackerer P, Chavent G. A new formulation of the mixed finite element method for solving elliptic and parabolic PDE with triangular elements. J Comp Phys 149;1999:148-167, Woodward CS, Dawson CN. Analysis of expanded mixed finite element methods for a nonlinear parabolic equation modeling flow into variably saturated porous media. SIAM J Numer Anal 37;2000:701-724]). This combination might be non-optimal. Higher order techniques could increase the accuracy of the flow field calculation and thereby improve the prediction of the solute transport. Here, we analyse the application of the Brezzi-Douglas-Marini element (BDM1) with a second order accurate flux approximation to elliptic, parabolic and degenerate problems whose solutions lack the regularity that is assumed in optimal order error analyses. For the flow field calculation, a superiority of the BDM1 approach to the RT0 one is observed, which however is less significant for the accompanying solute transport.

  15. Reliability and Validity of Modified Service Quality Instrument (SERVQUAL) in Patients' Motivation to Adhere to Insulin Therapy.

    PubMed

    Jakupovic, Vedran; Solakovic, Suajb; Celebic, Nedim; Kulovic, Dzenan

    2018-03-01

    Diabetes is a progressive condition which requires various modes of treatment. Adequate therapy prescribed at the right time helps the patient postpone the development of complications. Adherence to complicated therapy is a challenge for both patients and HCPs and is a subject of research in many disciplines. Improving communication between HCPs and patients is very important for patients' adherence to therapy. The aim of this research was to explore the validity and reliability of a modified SERVQUAL instrument in an attempt to explore ways of motivating diabetic patients to accept prescribed insulin therapy. We used a modified SERVQUAL questionnaire as the research instrument, and it was necessary to check the validity and reliability of this new modified instrument. The results show that the modified SERVQUAL instrument has excellent reliability (α=0.908), so it can be said to measure Expectations, Perceptions and Motivation in patients precisely. Factor analysis (EFA) with Varimax rotation extracted 4 factors which together explain 52.902% of the variance of the results on this subscale. A bifactorial solution could be seen on the scree-plot diagram (break at the second factor). The results of this research show that the modified SERVQUAL instrument, created to measure the expectations and perceptions of patients, is valid and reliable. Reliability and validity were indeed proven for the additional dimension created originally for this research: motivation to accept insulin therapy.

  16. Applicability and Limitations of Reliability Allocation Methods

    NASA Technical Reports Server (NTRS)

    Cruz, Jose A.

    2016-01-01

    The reliability allocation process may be described as the process of assigning reliability requirements to individual components within a system to attain the specified system reliability. For large systems, the allocation process is often performed at different stages of system design, often beginning at the conceptual stage. As the system design develops and more information about components and the operating environment becomes available, different allocation methods can be considered. Reliability allocation methods are usually divided into two categories: weighting factors and optimal reliability allocation. When properly applied, these methods can produce reasonable approximations. Reliability allocation techniques have limitations and implied assumptions that need to be understood by system engineers; applying them without understanding those limitations and assumptions can produce unrealistic results. This report addresses weighting-factor and optimal reliability allocation techniques and identifies the applicability and limitations of each.
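
    One widely used weighting-factor scheme apportions the required system failure rate among components in proportion to their predicted failure rates (the ARINC method for series systems of exponential components). A minimal sketch with invented numbers follows:

      import numpy as np

      def arinc_allocation(lambda_pred, r_system, mission_time):
          """Allocate a series system's required failure rate to exponential
          components in proportion to their predicted failure rates."""
          lam = np.asarray(lambda_pred, dtype=float)
          weights = lam / lam.sum()                      # w_i = lambda_i / sum
          lam_sys = -np.log(r_system) / mission_time     # required system rate
          lam_alloc = weights * lam_sys                  # allocated rates
          return lam_alloc, np.exp(-lam_alloc * mission_time)

      # Invented example: three components with predicted rates per hour and a
      # system reliability target of 0.95 over a 1000-hour mission
      lam_alloc, r_alloc = arinc_allocation([2e-5, 5e-5, 3e-5], 0.95, 1000.0)
      print(r_alloc, r_alloc.prod())      # the product recovers the 0.95 target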

  17. High-performance and high-reliability SOT-6 packaged diplexer based on advanced IPD fabrication techniques

    NASA Astrophysics Data System (ADS)

    Qiang, Tian; Wang, Cong; Kim, Nam-Young

    2017-08-01

    A diplexer offering the advantages of compact size, high performance, and high reliability is proposed on the basis of advanced integrated passive device (IPD) fabrication techniques. The proposed diplexer is developed by combining a third-order low-pass filter (LPF) and a third-order high-pass filter (HPF), which are designed on the basis of the elliptic-function prototype low-pass filter. Primary components, such as inductors and capacitors, are designed and fabricated with high Q-factors and appropriate values, and they are subsequently used to construct a compact diplexer having a chip area of 900 μm × 1100 μm (0.009 λ0 × 0.011 λ0, where λ0 is the guided wavelength). In addition, a small-outline transistor (SOT-6) packaging method is adopted, and reliability tests (including temperature, humidity, vibration, and pressure) are conducted to guarantee long-term stability and commercial success. The packaged measurement results indicate excellent RF performance, with insertion losses of 1.39 dB and 0.75 dB at the operating bands of 0.9 GHz and 1.8 GHz, respectively. The return loss is lower than 10 dB from 0.5 GHz to 4.0 GHz, while the isolation is higher than 15 dB from 0.5 GHz to 3.0 GHz. Thus, it can be concluded that the proposed SOT-6 packaged diplexer is a promising candidate for GSM/CDMA applications. The complete flow of diplexer design, RF performance optimization, fabrication, packaging, RF response measurement, and reliability testing is explained and analyzed in this work.

  18. A neuro approach to solve fuzzy Riccati differential equations

    NASA Astrophysics Data System (ADS)

    Shahrir, Mohammad Shazri; Kumaresan, N.; Kamali, M. Z. M.; Ratnavelu, Kurunathan

    2015-10-01

    There are many applications of optimal control theory, especially in the area of control systems in engineering. In this paper, the fuzzy quadratic Riccati differential equation is estimated using neural networks (NN). Previous works have shown reliable results using the 4th-order Runge-Kutta method (RK4). The solution can be achieved by solving the 1st-order non-linear ordinary differential equation (ODE) that commonly arises from the Riccati differential equation. This research has shown improved results relative to the RK4 method. The NN approach shows promising results, with the advantages of continuous estimation and improved accuracy over RK4.
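
    For reference, here is a minimal sketch of the RK4 baseline against which such NN solvers are compared, applied to a scalar (crisp) Riccati test problem with a known closed-form solution. The specific equation, interval, and step count are illustrative only, not the paper's fuzzy formulation.

    import numpy as np

    def rk4(f, y0, t):
        """Classical 4th-order Runge-Kutta integration of y' = f(t, y)."""
        y = np.empty_like(t)
        y[0] = y0
        for i in range(len(t) - 1):
            h = t[i + 1] - t[i]
            k1 = f(t[i], y[i])
            k2 = f(t[i] + h / 2, y[i] + h * k1 / 2)
            k3 = f(t[i] + h / 2, y[i] + h * k2 / 2)
            k4 = f(t[i] + h, y[i] + h * k3)
            y[i + 1] = y[i] + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        return y

    # Scalar Riccati test problem y' = 1 - y^2, y(0) = 0, exact solution tanh(t).
    t = np.linspace(0.0, 2.0, 41)
    y = rk4(lambda t_, y_: 1.0 - y_**2, 0.0, t)
    print(np.max(np.abs(y - np.tanh(t))))  # global error, O(h^4)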

  19. Caffeine's Influence on Nicotine's Effects in Nonsmokers

    PubMed Central

    Blank, Melissa D.; Kleykamp, Bethea A.; Jennings, Janine M.; Eissenberg, Thomas

    2011-01-01

    Objective To determine if nicotine's effects are influenced by caffeine in nonsmoking, moderate-caffeine consuming individuals (N=20). Methods The first 3 sessions included one of 3 randomly ordered, double-blind caffeine doses (0, 75, or 150 mg, oral [po]) and 2 single-blind nicotine gum doses (2 and 4 mg) in ascending order. The fourth session (single blind) repeated the 0 mg caffeine condition. Results Nicotine increased heart rate and subjective ratings indicative of aversive effects, and decreased reaction times. These effects were independent of caffeine dose and reliable across sessions. Conclusions In nonsmokers, nicotine effects are not influenced by moderate caffeine doses. PMID:17555378

  20. A neuro approach to solve fuzzy Riccati differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shahrir, Mohammad Shazri, E-mail: mshazri@gmail.com; Telekom Malaysia, R&D TM Innovation Centre, LingkaranTeknokrat Timur, 63000 Cyberjaya, Selangor; Kumaresan, N., E-mail: drnk2008@gmail.com

    There are many applications of optimal control theory, especially in the area of control systems in engineering. In this paper, the fuzzy quadratic Riccati differential equation is estimated using neural networks (NN). Previous works have shown reliable results using the 4th-order Runge-Kutta method (RK4). The solution can be achieved by solving the 1st-order non-linear ordinary differential equation (ODE) that commonly arises from the Riccati differential equation. This research has shown improved results relative to the RK4 method. The NN approach shows promising results, with the advantages of continuous estimation and improved accuracy over RK4.

  1. Method and system of Jones-matrix mapping of blood plasma films with "fuzzy" analysis in differentiation of breast pathology changes

    NASA Astrophysics Data System (ADS)

    Zabolotna, Natalia I.; Radchenko, Kostiantyn O.; Karas, Oleksandr V.

    2018-01-01

    A method for diagnosing breast fibroadenoma was proposed, using statistical analysis (determination and analysis of statistical moments of the 1st-4th orders) of the polarization images of the imaginary Jones-matrix elements of optically thin (attenuation coefficient τ ≤ 0.1) blood plasma films, with further intelligent differentiation based on "fuzzy" logic and discriminant analysis. The accuracy of the intelligent differentiation of blood plasma samples into the "norm" and breast "fibroadenoma" classes was 82.7% with the linear discriminant analysis method and 95.3% with the "fuzzy" logic method. The obtained results confirm the potentially high reliability of differentiation by "fuzzy" analysis.

  2. Galaxy And Mass Assembly (GAMA): AUTOZ spectral redshift measurements, confidence and errors

    NASA Astrophysics Data System (ADS)

    Baldry, I. K.; Alpaslan, M.; Bauer, A. E.; Bland-Hawthorn, J.; Brough, S.; Cluver, M. E.; Croom, S. M.; Davies, L. J. M.; Driver, S. P.; Gunawardhana, M. L. P.; Holwerda, B. W.; Hopkins, A. M.; Kelvin, L. S.; Liske, J.; López-Sánchez, Á. R.; Loveday, J.; Norberg, P.; Peacock, J.; Robotham, A. S. G.; Taylor, E. N.

    2014-07-01

    The Galaxy And Mass Assembly (GAMA) survey has obtained spectra of over 230 000 targets using the Anglo-Australian Telescope. To homogenize the redshift measurements and improve the reliability, a fully automatic redshift code was developed (AUTOZ). The measurements were made using a cross-correlation method for both the absorption- and the emission-line spectra. Large deviations in the high-pass-filtered spectra are partially clipped in order to be robust against uncorrected artefacts and to reduce the weight given to single-line matches. A single figure of merit (FOM) was developed that puts all template matches on to a similar confidence scale. The redshift confidence as a function of the FOM was fitted with a tanh function using a maximum likelihood method applied to repeat observations of targets. The method could be adapted to provide robust automatic redshifts for other large galaxy redshift surveys. For the GAMA survey, there was a substantial improvement in the reliability of assigned redshifts and in the lowering of redshift uncertainties with a median velocity uncertainty of 33 km s-1.
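
    The core of such a cross-correlation measurement can be sketched as follows: on a uniform log-wavelength grid, a pixel shift corresponds to a multiplicative redshift factor, and the best-matching shift against a template maximizes the correlation. This toy version (circular shifts, a single template, no clipping or FOM calibration) illustrates only the principle, not the AUTOZ implementation.

    import numpy as np

    def crosscorr_redshift(loglam, flux, template, max_shift=300):
        """Toy cross-correlation redshift on a uniform log10-wavelength grid.

        A shift of k pixels on a grid with spacing dloglam corresponds to
        1 + z = 10**(k * dloglam). Spectra are mean-subtracted and normalized
        so the peak correlation acts as a crude figure of merit.
        """
        f = (flux - flux.mean()) / flux.std()
        g = (template - template.mean()) / template.std()
        shifts = np.arange(-max_shift, max_shift + 1)
        cc = [np.dot(np.roll(g, k), f) / len(f) for k in shifts]
        k_best = shifts[int(np.argmax(cc))]
        dloglam = loglam[1] - loglam[0]
        return 10.0 ** (k_best * dloglam) - 1.0, max(cc)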

  3. MEASURING SPORT-SPECIFIC PHYSICAL ABILITIES IN MALE GYMNASTS: THE MEN'S GYMNASTICS FUNCTIONAL MEASUREMENT TOOL

    PubMed Central

    Kenyon, Lisa K.; Elliott, James M; Cheng, M. Samuel

    2016-01-01

    Purpose/Background Despite the availability of various field-tests for many competitive sports, a reliable and valid test specifically developed for use in men's gymnastics has not yet been developed. The Men's Gymnastics Functional Measurement Tool (MGFMT) was designed to assess sport-specific physical abilities in male competitive gymnasts. The purpose of this study was to develop the MGFMT by establishing a scoring system for individual test items and to initiate the process of establishing test-retest reliability and construct validity. Methods A total of 83 competitive male gymnasts ages 7-18 underwent testing using the MGFMT. Thirty of these subjects underwent re-testing one week later in order to assess test-retest reliability. Construct validity was assessed using a simple regression analysis between total MGFMT scores and the gymnasts’ USA-Gymnastics competitive level to calculate the coefficient of determination (r2). Test-retest reliability was analyzed using Model 1 Intraclass correlation coefficients (ICC). Statistical significance was set at the p<0.05 level. Results The relationship between total MGFMT scores and subjects’ current USA-Gymnastics competitive level was found to be good (r2 = 0.63). Reliability testing of the MGFMT composite test score showed excellent test-retest reliability over a one-week period (ICC = 0.97). Test-retest reliability of the individual component tests ranged from good to excellent (ICC = 0.75-0.97). Conclusions The results of this study provide initial support for the construct validity and test-retest reliability of the MGFMT. Level of Evidence Level 3 PMID:27999723
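
    For context, test-retest figures of the kind quoted here are computed from a subjects-by-sessions score matrix with a one-way random-effects intraclass correlation. The sketch below implements the standard ANOVA formulas for ICC(1,1); the score matrix is made up for illustration and is not the study's data.

    import numpy as np

    def icc_one_way(scores):
        """ICC(1,1) from a subjects x sessions array (one-way random effects)."""
        scores = np.asarray(scores, dtype=float)
        n, k = scores.shape
        grand = scores.mean()
        row_means = scores.mean(axis=1)
        # Between-subjects and within-subjects mean squares from one-way ANOVA.
        bms = k * np.sum((row_means - grand) ** 2) / (n - 1)
        wms = np.sum((scores - row_means[:, None]) ** 2) / (n * (k - 1))
        return (bms - wms) / (bms + (k - 1) * wms)

    # Example: total scores for 5 gymnasts tested twice, one week apart.
    data = [[52, 54], [61, 60], [45, 47], [70, 69], [58, 59]]
    print(round(icc_one_way(data), 3))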

  4. 77 FR 4031 - Combined Notice of Filings #1

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-26

    ... notice that the Commission received the following electric reliability filings: Docket Numbers: RD11-3-000. Applicants: North American Electric Reliability Corporation. Description: Compliance Filing of the North American Electric Reliability Corporation in Response to November 17, 2011 Order Approving...

  5. Modulation stability and optical soliton solutions of nonlinear Schrödinger equation with higher order dispersion and nonlinear terms and its applications

    NASA Astrophysics Data System (ADS)

    Arshad, Muhammad; Seadawy, Aly R.; Lu, Dianchen

    2017-12-01

    In optical fibers, the higher-order non-linear Schrödinger equation (NLSE) with cubic-quintic nonlinearity describes the propagation of extremely short pulses. We constructed bright and dark solitons, solitary wave and periodic solitary wave solutions of the generalized higher-order NLSE in a cubic-quintic non-Kerr medium by applying the proposed modified extended mapping method. These obtained solutions have key applications in physics and mathematics. Moreover, we have also presented the formation conditions on the solitary wave parameters under which dark and bright solitons can exist in this medium. We also show graphically the movement of the constructed solitary wave and soliton solutions, which helps to illustrate the physical phenomena of this model. The stability of the model in the normal and anomalous dispersion regimes is discussed using modulation instability analysis, which confirms that all constructed solutions are exact and stable. Many other such models arising in the applied sciences can also be solved by this reliable, powerful and effective method.

  6. Test-Retest Reliability of “High-Order” Functional Connectivity in Young Healthy Adults

    PubMed Central

    Zhang, Han; Chen, Xiaobo; Zhang, Yu; Shen, Dinggang

    2017-01-01

    Functional connectivity (FC) has become a leading method for resting-state functional magnetic resonance imaging (rs-fMRI) analysis. However, the majority of previous studies utilized pairwise, temporal synchronization-based FC. Recently, high-order FC (HOFC) methods were proposed with the idea of computing "correlation of correlations" to capture high-level, more complex associations among brain regions. There are two types of HOFC. The first type is topographical profile similarity-based HOFC (tHOFC) and its variant, associated HOFC (aHOFC), for capturing different levels of HOFC. Instead of measuring the similarity of the original rs-fMRI signals as the traditional FC (low-order FC, or LOFC) does, tHOFC measures the similarity of LOFC profiles (i.e., a set of LOFC values between a region and all other regions) between each pair of brain regions. The second type is dynamics-based HOFC (dHOFC), which defines the quadruple relationship among every four brain regions by first calculating two pairwise dynamic LOFC "time series" and then measuring their temporal synchronization (i.e., the temporal correlation of the LOFC fluctuations, not the BOLD fluctuations). Applications have shown the superiority of HOFC over LOFC in both disease biomarker detection and individualized diagnosis. However, no study has been carried out to assess the test-retest reliability of different HOFC metrics. In this paper, we systematically evaluate the reliability of the two types of HOFC methods using test-retest rs-fMRI data from 25 (12 females, age 24.48 ± 2.55 years) young healthy adults with seven repeated scans (interval = 3–8 days). We found that all HOFC metrics have satisfactory reliability, specifically (1) fair-to-good for tHOFC and aHOFC, and (2) fair-to-moderate for dHOFC with relatively strong connectivity strength. We further give an in-depth analysis of the biological meanings of each HOFC metric and highlight their differences from LOFC in terms of cross-level information exchanges, within-/between-network connectivity, and modulatory connectivity. In addition, how the dynamic analysis parameter (i.e., sliding window length) affects dHOFC reliability is also investigated. Our study reveals unique functional associations characterized by the HOFC metrics. Guidance and recommendations for future applications and clinical research using HOFC are provided. This study has made a further step toward unveiling the more complex human brain connectome. PMID:28824362
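
    The "correlation of correlations" idea behind tHOFC reduces to two nested correlation computations. The minimal numpy sketch below (random data, no preprocessing, and without excluding the seed pair from each profile as a careful implementation would) illustrates the construction only, not the authors' pipeline.

    import numpy as np

    def lofc_and_thofc(ts):
        """Compute low-order FC and a simplified topographical-profile HOFC.

        ts: array of shape (timepoints, regions) of rs-fMRI signals.
        LOFC is the usual region-by-region Pearson correlation matrix; tHOFC
        is the correlation between the LOFC profiles (rows) of region pairs.
        """
        lofc = np.corrcoef(ts.T)    # regions x regions
        thofc = np.corrcoef(lofc)   # correlation of LOFC profiles
        return lofc, thofc

    rng = np.random.default_rng(0)
    ts = rng.standard_normal((200, 10))   # 200 volumes, 10 regions
    lofc, thofc = lofc_and_thofc(ts)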

  7. The reliability of the Extra Load Index as a measure of relative load carriage economy.

    PubMed

    Hudson, Sean; Cooke, Carlton; Lloyd, Ray

    2017-09-01

    The aim of this study was to measure the reliability of the extra load index (ELI) as a method for assessing relative load carriage economy. Seventeen volunteers (12 males, 5 females) performed walking trials at 3 km·h-1, 6 km·h-1 and a self-selected speed. Trial conditions were repeated 7 days later to assess test-retest reliability. Trials involved four 4-minute periods of walking, each separated by 5 min of rest. The initial stage was performed unloaded, followed in a randomised order by a second unloaded period and by walking with backpacks of 7 and 20 kg. Results show that ELI values did not differ significantly between trials for any of the speeds (p = 0.46) or with either of the additional loads (p = 0.297). The systematic bias, limits of agreement and coefficients of variation were small in all trial conditions. We conclude that the ELI appears to be a reliable measure of relative load carriage economy. Practitioner Summary: This paper demonstrates that the ELI is a reliable measure of load carriage economy at a range of walking speeds with both a light and a heavy load. The ELI therefore represents a useful tool for comparing the relative economy associated with different load carriage systems.

  8. Reliability and validity of the closed kinetic chain upper extremity stability test.

    PubMed

    Lee, Dong-Rour; Kim, Laurentius Jongsoon

    2015-04-01

    [Purpose] The purpose of this study was to examine the reliability and validity of the Closed Kinetic Chain Upper Extremity Stability (CKCUES) test. [Subjects and Methods] A sample of 40 subjects (20 males, 20 females) with and without pain in the upper limbs was recruited. The subjects were tested twice, three days apart, to assess the reliability of the CKCUES test. The CKCUES test was performed four times, and the average was calculated using the data of the last 3 tests. In order to test the validity of the CKCUES test, the peak torque of internal/external shoulder rotation was measured using an isokinetic dynamometer, maximum grip strength was measured using a hand dynamometer, and their Pearson correlation coefficients with the average values of the CKCUES test were calculated. [Results] The reliability of the CKCUES test was very high (ICC=0.97). The correlations between the CKCUES test and maximum grip strength (r=0.78-0.79) and the peak torque of internal/external shoulder rotation (r=0.87-0.94) were high, indicating its validity. [Conclusion] The reliability and validity of the CKCUES test were high. The CKCUES test is expected to be used for clinical assessment of upper limb stability at low cost.

  9. A radio-aware routing algorithm for reliable directed diffusion in lossy wireless sensor networks.

    PubMed

    Kim, Yong-Pyo; Jung, Euihyun; Park, Yong-Jin

    2009-01-01

    In Wireless Sensor Networks (WSNs), transmission errors occur frequently due to node failure, battery discharge, contention, or interference by objects. Although Directed Diffusion has been considered a prominent data-centric routing algorithm, it has some weaknesses under unexpected network errors. In order to address these problems, we propose a radio-aware routing algorithm to improve the reliability of Directed Diffusion in lossy WSNs. The proposed algorithm is aware of the network status based on radio information from the MAC and PHY layers using a cross-layer design. The cross-layer design can be used to obtain detailed information about the current status of the wireless network, such as the link quality or transmission errors of communication links. The radio information, indicating varying network conditions and link quality, is used to determine an alternative route that provides reliable data transmission in lossy WSNs. According to the simulation results, the radio-aware reliable routing algorithm showed better performance in both grid and random topologies with various error rates. The proposed solution suggests the possibility of providing a reliable transmission method for QoS requests in lossy WSNs based on radio-awareness. Energy and mobility issues will be addressed in future work.

  10. A Primer for DoD Reliability, Maintainability, Safety, and Logistics Standards, 1992

    DTIC Science & Technology

    1991-10-01

    equipment, identified in the order of application (i.e., assembly, unit and system screens). Screening Strength (SS) - the probability that a screen... assess equipment suitability for its intended operational environment. c. Verify contractual compliance. Each test method is divided into two... [the remainder of this excerpt is unrecoverable OCR residue from a qualification-flow diagram]

  11. Numerical investigation of optimal layout of rockbolts for ground structures

    NASA Astrophysics Data System (ADS)

    Kato, Junji; Ishi, Keiichiro; Terada, Kenjiro; Kyoya, Takashi

    Because reliable ground data are difficult to obtain, the layout of rockbolts has traditionally been determined in a classical way assuming an isotropic rock stress condition. The present study assumes an anisotropic stress condition and optimizes the layout of rockbolts in order to maximize the stiffness of unstable ground around tunnels and slopes by applying multiphase layout optimization. It was verified that this method has the potential to improve the stiffness of unstable ground.

  12. Simultaneous measurement of chlorophyll and astaxanthin in Haematococcus pluvialis cells by first-order derivative ultraviolet-visible spectrophotometry.

    PubMed

    Lababpour, Abdolmajid; Lee, Choul-Gyun

    2006-02-01

    A first-order derivative spectrophotometric method has been developed for the simultaneous measurement of chlorophyll and astaxanthin concentrations in Haematococcus pluvialis cells. Acetone was selected for the extraction of pigments because of its good sensitivity and low toxicity compared with the other organic solvents tested; the tested solvents included acetone, methanol, hexane, chloroform, n-propanol, and acetonitrile. The first-order derivative spectrophotometric method was used to eliminate the effects of the overlapping of the chlorophyll and astaxanthin peaks. The linear ranges in the 1D (first-derivative) evaluation were from 0.50 to 20.0 microg x ml(-1) for chlorophyll and from 1.00 to 12.0 microg x ml(-1) for astaxanthin. The limits of detection of the analytical procedure were found to be 0.35 microg x ml(-1) for chlorophyll and 0.25 microg x ml(-1) for astaxanthin. The relative standard deviations for the determination of 7.0 microg x ml(-1) chlorophyll and 5.0 microg x ml(-1) astaxanthin were 1.2% and 1.1%, respectively. The procedure was found to be simple, rapid, and reliable. The method was successfully applied to the determination of chlorophyll and astaxanthin concentrations in H. pluvialis cells, and good agreement was achieved between the results obtained by the proposed method and by the HPLC method.
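
    The zero-crossing idea that underlies such first-derivative measurements can be sketched as follows; the Gaussian bands, wavelengths, and amplitudes below are synthetic and illustrate only the principle, not the paper's calibration.

    import numpy as np

    # Two overlapping Gaussian absorbance bands (synthetic illustration).
    wl = np.linspace(400.0, 700.0, 601)
    band = lambda c, mu, s: c * np.exp(-0.5 * ((wl - mu) / s) ** 2)
    mixture = band(1.0, 500.0, 30.0) + band(0.6, 540.0, 35.0)

    # First-derivative spectrum dA/dlambda via central differences.
    d1 = np.gradient(mixture, wl)

    # Zero-crossing principle: at 540 nm (the peak of band 2) the derivative
    # of band 2 vanishes, so the reading there tracks band 1 alone and can be
    # calibrated against band-1 standards.
    print(d1[np.argmin(np.abs(wl - 540.0))])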

  13. Reliability-Based Stability Analysis of Rock Slopes Using Numerical Analysis and Response Surface Method

    NASA Astrophysics Data System (ADS)

    Dadashzadeh, N.; Duzgun, H. S. B.; Yesiloglu-Gultekin, N.

    2017-08-01

    While advanced numerical techniques in slope stability analysis are successfully used in deterministic studies, they have so far found limited use in probabilistic analyses due to their high computational cost. The first-order reliability method (FORM) is one of the most efficient probabilistic techniques for performing probabilistic stability analysis by considering the associated uncertainties in the analysis parameters. However, it is not possible to use FORM directly in numerical slope stability evaluations, as it requires the definition of a limit state performance function. In this study, an integrated methodology for probabilistic numerical modeling of rock slope stability is proposed. The methodology is based on the response surface method, where FORM is used with an explicit performance function developed from the results of numerical simulations. The implementation of the proposed methodology is demonstrated on a large potential rock wedge at the Sumela Monastery, Turkey. The ability of the developed performance function to represent the true limit state surface is evaluated by monitoring the slope behavior. The calculated probability of failure is compared with the Monte Carlo simulation (MCS) method. The proposed methodology is found to be 72% more efficient than MCS, at the cost of a 24% error in accuracy.
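
    A minimal sketch of the FORM machinery used inside such a methodology: the Hasofer-Lind-Rackwitz-Fiessler (HLRF) iteration locates the most probable failure point in standard normal space, and the reliability index gives the failure probability estimate. The limit state below is a linear toy example with a known answer (β = 3/√2), not the study's wedge performance function; the names are illustrative.

    import numpy as np
    from scipy.stats import norm

    def form_hlrf(grad_g, g, u0, tol=1e-6, max_iter=100):
        """HLRF iteration in standard normal space; failure when g(u) < 0.

        Returns the reliability index beta and the FORM failure probability.
        """
        u = np.asarray(u0, dtype=float)
        for _ in range(max_iter):
            grad = grad_g(u)
            # Standard HLRF update projected along the gradient direction.
            u_new = ((grad @ u - g(u)) / (grad @ grad)) * grad
            if np.linalg.norm(u_new - u) < tol:
                u = u_new
                break
            u = u_new
        beta = np.linalg.norm(u)
        return beta, norm.cdf(-beta)

    # Linear toy limit state g(u) = 3 - u1 - u2, exact beta = 3/sqrt(2).
    g = lambda u: 3.0 - u[0] - u[1]
    grad_g = lambda u: np.array([-1.0, -1.0])
    print(form_hlrf(grad_g, g, [0.0, 0.0]))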

  14. Radiation Hard Bandpass Filters for Mid- to Far-IR Planetary Instruments

    NASA Technical Reports Server (NTRS)

    Brown, Ari D.; Aslam, Shahid; Chervenack, James A.; Huang, Wei-Chung; Merrell, Willie C.; Quijada, Manuel; Steptoe-Jackson, Rosalind; Wollack, Edward J.

    2012-01-01

    We present a novel method to fabricate compact metal mesh bandpass filters for use in mid- to far-infrared planetary instruments operating in the 20-600 micron wavelength spectral regime. Our target applications include thermal mapping instruments on ESA's JUICE as well as on a de-scoped JEO. These filters are novel because they are compact, customizable, free-standing copper mesh resonant bandpass filters with micromachined silicon support frames. The filters are well suited for thermal mapping missions to the outer planets and their moons because the filter material is radiation hard. Furthermore, the silicon support frame allows for effective hybridization with sensors made on silicon substrates. Using a Fourier transform spectrometer, we have demonstrated high transmittance within the passband as well as good out-of-band rejection [1]. In addition, we have developed a unique method of filter stacking in order to increase the bandwidth and sharpen the roll-off of the filters. This method allows one to reliably control the spacing between filters to within 2 microns. Furthermore, our method allows for reliable control over the relative position and orientation between the shared faces of the filters.

  15. Using an Ingestible Telemetric Temperature Pill to Assess Gastrointestinal Temperature During Exercise

    PubMed Central

    Bongers, Coen C.W.G.; Hopman, Maria T.E.; Eijsvogels, Thijs M.H.

    2015-01-01

    Exercise results in an increase in core body temperature (Tc), which may reduce exercise performance and can eventually lead to the development of heat-related disorders. Therefore, accurate measurement of Tc during exercise is of great importance, especially in athletes who have to perform in challenging ambient conditions. In the current literature a number of methods have been described to measure Tc (esophageal, external tympanic membrane, mouth or rectum). However, these methods are suboptimal for measuring Tc during exercise, since they are invasive, have a slow response, or are influenced by environmental conditions. Studies have described the use of an ingestible telemetric temperature pill as a reliable and valid method to assess gastrointestinal temperature (Tgi), which is a representative measurement of Tc. Therefore, the goal of this study was to provide a detailed description of the measurement of Tgi using an ingestible telemetric temperature pill. This study addresses important methodological factors that must be taken into account for an accurate measurement; following these instructions carefully ensures that the ingestible telemetric temperature pill is a reliable method to assess Tgi at rest and during exercise. PMID:26485169

  16. A novel Gaussian process regression model for state-of-health estimation of lithium-ion battery using charging curve

    NASA Astrophysics Data System (ADS)

    Yang, Duo; Zhang, Xu; Pan, Rui; Wang, Yujie; Chen, Zonghai

    2018-04-01

    The state-of-health (SOH) estimation is always a crucial issue for lithium-ion batteries. In order to provide an accurate and reliable SOH estimation, a novel Gaussian process regression (GPR) model based on the charging curve is proposed in this paper. Different from other research in which SOH is commonly estimated from cycle life, in this work four specific parameters extracted from charging curves are used as inputs of the GPR model instead of cycle numbers. These parameters can reflect the battery aging phenomenon from different angles. The grey relational analysis method is applied to analyze the relational grade between the selected features and SOH. In addition, some adjustments are made in the proposed GPR model: the covariance function design and the similarity measurement of input variables are modified so as to improve the SOH estimation accuracy and adapt to the case of multidimensional input. Several aging data sets from the NASA data repository are used to demonstrate the estimation performance of the proposed method. Results show that the proposed method has high SOH estimation accuracy. In addition, a battery with a dynamic discharging profile is used to verify the robustness and reliability of the method.
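
    To illustrate the general shape of such a model (not the authors' modified covariance design), here is a minimal scikit-learn sketch mapping hypothetical charging-curve features to SOH; the feature values and SOH labels are synthetic.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

    # Hypothetical training set: rows are per-cycle feature vectors extracted
    # from the charging curve (e.g. duration of a voltage interval, slope of
    # a curve segment, ...); y is the measured SOH in percent.
    rng = np.random.default_rng(1)
    X = rng.uniform(0.0, 1.0, size=(60, 4))
    y = 100.0 - 25.0 * X[:, 0] + 2.0 * rng.standard_normal(60)

    kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(4)) + WhiteKernel(1.0)
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

    x_new = rng.uniform(0.0, 1.0, size=(1, 4))
    soh_mean, soh_std = gpr.predict(x_new, return_std=True)  # mean + uncertainty
    print(float(soh_mean[0]), float(soh_std[0]))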

  17. Second-Order Conditioning of Human Causal Learning

    ERIC Educational Resources Information Center

    Jara, Elvia; Vila, Javier; Maldonado, Antonio

    2006-01-01

    This article provides the first demonstration of a reliable second-order conditioning (SOC) effect in human causal learning tasks. It demonstrates the human ability to infer relationships between a cause and an effect that were never paired together during training. Experiments 1a and 1b showed a clear and reliable SOC effect, while Experiments 2a…

  18. Extended FDD-WT method based on correcting the errors due to non-synchronous sensing of sensors

    NASA Astrophysics Data System (ADS)

    Tarinejad, Reza; Damadipour, Majid

    2016-05-01

    In this research, a combinational non-parametric method called frequency domain decomposition-wavelet transform (FDD-WT), recently presented by the authors, is extended to correct the errors resulting from non-synchronous sensing of the sensors, in order to broaden the applicability of the algorithm to different kinds of structures, especially huge structures. The analysis process is therefore based on time-frequency domain decomposition and is performed with emphasis on correcting the time delays between sensors. Time delay estimation (TDE) methods were investigated for their efficiency and accuracy with noisy environmental records, and the Phase Transform-β (PHAT-β) technique was selected as an appropriate method to modify the operation of the traditional FDD-WT in order to achieve exact results. In this paper, a theoretical example (a 3DOF system) is provided to illustrate the effects of non-synchronous sensing of the sensors on the modal parameters; moreover, the Pacoima dam subjected to the 13 January 2001 earthquake excitation was selected as a case study. The modal parameters of the dam obtained from the extended FDD-WT method were compared with the output of the classical signal processing method referred to as the 4-Spectral method, as well as with other literature relating to the dynamic characteristics of the Pacoima dam. The comparison indicates that the values are correct and reliable.
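
    The PHAT-β weighting mentioned above is a standard generalized cross-correlation technique for time delay estimation. Below is a minimal sketch of one common formulation; the synthetic test signal, sampling rate, and β value are illustrative only, not the paper's records.

    import numpy as np

    def gcc_phat(x, y, fs, beta=1.0):
        # Generalized cross-correlation with PHAT-beta weighting: the cross-
        # power spectrum is whitened by |X* Y|**beta before the inverse FFT,
        # which sharpens the correlation peak for broadband, noisy records.
        n = len(x) + len(y)
        X = np.fft.rfft(x, n=n)
        Y = np.fft.rfft(y, n=n)
        R = np.conj(X) * Y
        R /= np.maximum(np.abs(R), 1e-12) ** beta
        cc = np.fft.irfft(R, n=n)
        max_lag = n // 2
        cc = np.concatenate((cc[-max_lag:], cc[:max_lag + 1]))
        # Returned value is the delay of y relative to x, in seconds.
        return (np.argmax(np.abs(cc)) - max_lag) / fs

    # Synthetic check: the second record lags the first by 7 samples (0.07 s).
    fs = 100.0
    t = np.arange(0.0, 4.0, 1.0 / fs)
    s = np.sin(2 * np.pi * 5.0 * t)
    s += 0.1 * np.random.default_rng(2).standard_normal(len(t))
    print(gcc_phat(s, np.roll(s, 7), fs))  # ~0.07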

  19. Developing safety performance functions incorporating reliability-based risk measures.

    PubMed

    Ibrahim, Shewkar El-Bassiouni; Sayed, Tarek

    2011-11-01

    Current geometric design guides provide deterministic standards where the safety margin of the design output is generally unknown and there is little knowledge of the safety implications of deviating from these standards. Several studies have advocated probabilistic geometric design where reliability analysis can be used to account for the uncertainty in the design parameters and to provide a risk measure of the implication of deviation from design standards. However, there is currently no link between measures of design reliability and the quantification of safety using collision frequency. The analysis presented in this paper attempts to bridge this gap by incorporating a reliability-based quantitative risk measure such as the probability of non-compliance (P(nc)) in safety performance functions (SPFs). Establishing this link will allow admitting reliability-based design into traditional benefit-cost analysis and should lead to a wider application of the reliability technique in road design. The present application is concerned with the design of horizontal curves, where the limit state function is defined in terms of the available (supply) and stopping (demand) sight distances. A comprehensive collision and geometric design database of two-lane rural highways is used to investigate the effect of the probability of non-compliance on safety. The reliability analysis was carried out using the First Order Reliability Method (FORM). Two Negative Binomial (NB) SPFs were developed to compare models with and without the reliability-based risk measures. It was found that models incorporating the P(nc) provided a better fit to the data set than the traditional (without risk) NB SPFs for total, injury and fatality (I+F) and property damage only (PDO) collisions. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. Enhanced parent selection algorithms in mintroute protocol

    NASA Astrophysics Data System (ADS)

    Kim, Ki-Il

    2012-11-01

    A low-rate, short-range wireless radio communication on a small device often hampers high reliability in wireless sensor networks. However, more and more applications demand high reliability. To meet this requirement, various approaches have been proposed at each layer. Among these, MintRoute is a well-known network-layer approach that introduces a new link-quality metric for path selection towards the sink. By choosing the link with the highest measured value, a packet has a higher possibility of being transmitted over the link without error. However, several operational issues remain. In this paper, we propose improvements to the MintRoute protocol through revised algorithms. These include a parent selection considering distance and level from the sink node, and a fast recovery method against failures. Simulations and analysis are performed in order to validate the reduced end-to-end delay and the fast recovery from failures, and thus the enhanced reliability of communication.

  1. Controlling surface property of K2SiF6:Mn4+ for improvement of light-emitting diode reliability

    NASA Astrophysics Data System (ADS)

    Kim, Juseong; Jang, Inseok; Song, Gwang Yeom; Kim, Wan-Ho; Jeon, Sie-Wook; Kim, Jae-Pil

    2018-05-01

    The surface property of moisture-sensitive K2SiF6:Mn4+ (KSF), a red-emitting phosphor, was controlled through dry-type surface modification in order to improve the photo-performance and reliability of light-emitting diodes (LEDs). The phosphor surface was modified with silane coupling agents having different carbon chain lengths by a plasma-assisted method. Comparing the as-prepared and modified KSF, water resistance and photo-emission efficiency were enhanced due to the formation of a hydrophobic shell and the elimination of surface quenching sites. Moreover, the dispersibility of the phosphor increased with increasing carbon chain length of the silane because the interfacial affinity between phosphor and encapsulant was improved. After fabricating an LED device with the modified phosphor, enhanced photo-performance and long-term reliability were successfully achieved. This is attributed to the suppression of moisture-induced degradation of the phosphor efficiency and the improved heat dissipation in the LED package obtained through the surface modification.

  2. Droplet Digital PCR for Minimal Residual Disease Detection in Mature Lymphoproliferative Disorders.

    PubMed

    Drandi, Daniela; Ferrero, Simone; Ladetto, Marco

    2018-01-01

    Minimal residual disease (MRD) detection has powerful prognostic relevance for response evaluation and prediction of relapse in hematological malignancies. Real-time quantitative PCR (qPCR) has become the established and standardized method for MRD assessment in lymphoid disorders. However, qPCR is a relative quantification approach, since it requires a reference standard curve. Droplet digital PCR (ddPCR) allows a reliable absolute tumor burden quantification, removing the need to prepare a tumor-specific standard curve for each experiment. We have recently shown that ddPCR has good concordance with qPCR and could be a feasible and reliable tool for MRD monitoring in mature lymphoproliferative disorders. In this chapter we describe the experimental workflow, from the detection of the clonal molecular marker to MRD monitoring by ddPCR, in patients affected by multiple myeloma, mantle cell lymphoma and follicular lymphoma. Standardization programs among different laboratories are still needed in order to ensure the reliability and reproducibility of ddPCR-based MRD results.

  3. Parallelized reliability estimation of reconfigurable computer networks

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Das, Subhendu; Palumbo, Dan

    1990-01-01

    A parallelized system, ASSURE, for computing the reliability of embedded avionics flight control systems which are able to reconfigure themselves in the event of failure is described. ASSURE accepts a grammar that describes a reliability semi-Markov state-space. From this it creates a parallel program that simultaneously generates and analyzes the state-space, placing upper and lower bounds on the probability of system failure. ASSURE is implemented on a 32-node Intel iPSC/860, and has achieved high processor efficiencies on real problems. Through a combination of improved algorithms, exploitation of parallelism, and use of an advanced microprocessor architecture, ASSURE has reduced the execution time on substantial problems by a factor of one thousand over previous workstation implementations. Furthermore, ASSURE's parallel execution rate on the iPSC/860 is an order of magnitude faster than its serial execution rate on a Cray-2 supercomputer. While dynamic load balancing is necessary for ASSURE's good performance, it is needed only infrequently; the particular method of load balancing used does not substantially affect performance.

  4. Comparison of Methodologies of Activation Barrier Measurements for Reactions with Deactivation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Zhenhua; Yan, Binhang; Zhang, Li

    In this work, methodologies of activation barrier measurements for reactions with deactivation were theoretically analyzed. Reforming of ethane with CO2 was introduced as an example of a reaction with deactivation to experimentally evaluate these methodologies. Both the theoretical and experimental results showed that, due to catalyst deactivation, the conventional method would inevitably lead to a much lower activation barrier compared to the intrinsic value, even when heat and mass transport limitations were excluded. An optimal method was identified in order to provide a reliable and efficient activation barrier measurement for reactions with deactivation.

  5. In-memory integration of existing software components for parallel adaptive unstructured mesh workflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Cameron W.; Granzow, Brian; Diamond, Gerrett

    Unstructured mesh methods, like finite elements and finite volumes, support the effective analysis of complex physical behaviors modeled by partial differential equations over general three-dimensional domains. The most reliable and efficient methods apply adaptive procedures with a-posteriori error estimators that indicate where and how the mesh is to be modified. Although adaptive meshes can have two to three orders of magnitude fewer elements than a more uniform mesh for the same level of accuracy, there are many complex simulations where the required meshes are so large that they can only be solved on massively parallel systems.

  6. In-memory integration of existing software components for parallel adaptive unstructured mesh workflows

    DOE PAGES

    Smith, Cameron W.; Granzow, Brian; Diamond, Gerrett; ...

    2017-01-01

    Unstructured mesh methods, like finite elements and finite volumes, support the effective analysis of complex physical behaviors modeled by partial differential equations over general three-dimensional domains. The most reliable and efficient methods apply adaptive procedures with a-posteriori error estimators that indicate where and how the mesh is to be modified. Although adaptive meshes can have two to three orders of magnitude fewer elements than a more uniform mesh for the same level of accuracy, there are many complex simulations where the required meshes are so large that they can only be solved on massively parallel systems.

  7. Comparison of Methodologies of Activation Barrier Measurements for Reactions with Deactivation

    DOE PAGES

    Xie, Zhenhua; Yan, Binhang; Zhang, Li; ...

    2017-01-25

    In this work, methodologies of activation barrier measurements for reactions with deactivation were theoretically analyzed. Reforming of ethane with CO2 was introduced as an example of a reaction with deactivation to experimentally evaluate these methodologies. Both the theoretical and experimental results showed that, due to catalyst deactivation, the conventional method would inevitably lead to a much lower activation barrier compared to the intrinsic value, even when heat and mass transport limitations were excluded. An optimal method was identified in order to provide a reliable and efficient activation barrier measurement for reactions with deactivation.

  8. Probabilistic Multi-Scale, Multi-Level, Multi-Disciplinary Analysis and Optimization of Engine Structures

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Abumeri, Galib H.

    2000-01-01

    Aircraft engines are assemblies of dynamically interacting components. Engine updates to keep present aircraft flying safely, and engines for new aircraft, are progressively required to operate under more demanding technological and environmental requirements. Designs that effectively meet those requirements are necessarily collections of multi-scale, multi-level, multi-disciplinary analysis and optimization methods, and probabilistic methods are necessary to quantify the respective uncertainties. These types of methods are the only ones that can formally evaluate advanced composite designs which satisfy those progressively demanding requirements while assuring minimum cost, maximum reliability and maximum durability. Recent research activities at NASA Glenn Research Center have focused on developing multi-scale, multi-level, multidisciplinary analysis and optimization methods. Multi-scale refers to formal methods which describe complex material behavior, metal or composite; multi-level refers to the integration of participating disciplines to describe a structural response at the scale of interest; multidisciplinary refers to an open-ended framework for the various existing and yet-to-be-developed discipline constructs required to formally predict/describe a structural response in engine operating environments. For example, these include, but are not limited to: multi-factor models for material behavior, multi-scale composite mechanics, general-purpose structural analysis, progressive structural fracture for evaluating durability and integrity, noise and acoustic fatigue, emission requirements, hot fluid mechanics, heat transfer and probabilistic simulations. Many of these, as well as others, are encompassed in an integrated computer code identified as the Engine Structures Technology Benefits Estimator (EST/BEST) or Multi-faceted/Engine Structures Optimization (MP/ESTOP). The discipline modules integrated in MP/ESTOP include: engine cycle (thermodynamics), engine weights, internal fluid mechanics, cost, mission, coupled structural/thermal analysis, various composite property simulators, and probabilistic methods to evaluate uncertainty effects (scatter ranges) in all the design parameters. The objective of this paper is to briefly describe a multi-faceted design analysis and optimization capability for coupled multi-discipline engine structures optimization. Results are presented for engine- and aircraft-type metrics to illustrate the versatility of that capability, and for reliability, noise and fatigue to illustrate its inclusiveness. For example, replacing metal rotors with composites reduces engine weight by 20 percent, reduces noise by 15 percent, and improves reliability by an order of magnitude. Composite designs exist that increase fatigue life by at least two orders of magnitude compared to state-of-the-art metals.

  9. General Aviation Aircraft Reliability Study

    NASA Technical Reports Server (NTRS)

    Pettit, Duane; Turnbull, Andrew; Roelant, Henk A. (Technical Monitor)

    2001-01-01

    This reliability study was performed in order to provide the aviation community with an estimate of Complex General Aviation (GA) Aircraft System reliability. To successfully improve the safety and reliability for the next generation of GA aircraft, a study of current GA aircraft attributes was prudent. This was accomplished by benchmarking the reliability of operational Complex GA Aircraft Systems. Specifically, Complex GA Aircraft System reliability was estimated using data obtained from the logbooks of a random sample of the Complex GA Aircraft population.

  10. Application of the kinetic and isotherm models for better understanding of the behaviors of silver nanoparticles adsorption onto different adsorbents.

    PubMed

    Syafiuddin, Achmad; Salmiati, Salmiati; Jonbi, Jonbi; Fulazzaky, Mohamad Ali

    2018-07-15

    This is the first investigation of the reliability and validity of thirty kinetic and isotherm models for describing the adsorption behavior of silver nanoparticles (AgNPs) onto different adsorbents. The purpose of this study is therefore to assess the most reliable models for the adsorption of AgNPs onto a given adsorbent. Fifteen kinetic models and fifteen isotherm models were used to test secondary AgNPs adsorption data collected from various data sources. The rankings of the arithmetic means were estimated based on six statistical analysis methods, using the dedicated MATLAB Optimization Toolbox with a least-squares curve fitting function. The fractal-like mixed 1,2-order model for describing the adsorption kinetics, and the Fritz-Schlunder and Baudu models for describing the adsorption isotherms, can be recommended as the most reliable models for AgNPs adsorption onto natural and synthetic adsorbent materials. The application of the thirty models to the adsorption of AgNPs clarifies the usefulness of both groups of kinetic and isotherm equations in rank order of their levels of accuracy, which contributes significantly to the understandability and usability of the proper models and extends knowledge beyond the existing literature. Copyright © 2018 Elsevier Ltd. All rights reserved.
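
    The ranking workflow described here, fitting each candidate model by least squares and scoring the fit, can be sketched in a few lines. The example below fits hypothetical uptake data with pseudo-first-order and pseudo-second-order kinetics (two of the simpler candidates, not the fractal-like mixed 1,2-order model itself); the data values are invented for illustration.

    import numpy as np
    from scipy.optimize import curve_fit

    # Pseudo-first-order and pseudo-second-order adsorption kinetics.
    pfo = lambda t, qe, k1: qe * (1.0 - np.exp(-k1 * t))
    pso = lambda t, qe, k2: (qe**2 * k2 * t) / (1.0 + qe * k2 * t)

    # Hypothetical uptake data: time (min) vs adsorbed amount q_t (mg/g).
    t = np.array([5.0, 10.0, 20.0, 40.0, 60.0, 90.0, 120.0])
    q = np.array([12.1, 19.8, 28.9, 36.2, 39.0, 40.8, 41.3])

    for name, model, p0 in [("PFO", pfo, [40.0, 0.05]),
                            ("PSO", pso, [48.0, 0.0015])]:
        p, _ = curve_fit(model, t, q, p0=p0)
        sse = np.sum((q - model(t, *p)) ** 2)   # least-squares criterion
        print(name, p, sse)                     # lower SSE ranks higher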

  11. A high-order vertex-based central ENO finite-volume scheme for three-dimensional compressible flows

    DOE PAGES

    Charest, Marc R.J.; Canfield, Thomas R.; Morgan, Nathaniel R.; ...

    2015-03-11

    High-order discretization methods offer the potential to reduce the computational cost associated with modeling compressible flows. However, it is difficult to obtain accurate high-order discretizations of conservation laws that do not produce spurious oscillations near discontinuities, especially on multi-dimensional unstructured meshes. A novel, high-order, central essentially non-oscillatory (CENO) finite-volume method that does not have these difficulties is proposed for tetrahedral meshes. The proposed unstructured method is vertex-based, which differs from existing cell-based CENO formulations, and uses a hybrid reconstruction procedure that switches between two different solution representations. It applies a high-order k-exact reconstruction in smooth regions and a limited linear reconstruction when discontinuities are encountered. Both reconstructions use a single, central stencil for all variables, making the application of CENO to arbitrary unstructured meshes relatively straightforward. The new approach was applied to the conservation equations governing compressible flows and assessed in terms of accuracy and computational cost. For all problems considered, which included various function reconstructions and idealized flows, CENO demonstrated excellent reliability and robustness. Up to fifth-order accuracy was achieved in smooth regions and essentially non-oscillatory solutions were obtained near discontinuities. The high-order schemes were also more computationally efficient for high-accuracy solutions, i.e., they took less wall time than the lower-order schemes to achieve a desired level of error. In one particular case, it took a factor of 24 less wall time to obtain a given level of error with the fourth-order CENO scheme than to obtain the same error with the second-order scheme.

  12. A stochastic hybrid systems based framework for modeling dependent failure processes

    PubMed Central

    Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying

    2017-01-01

    In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system's discrete states are triggered by random shocks. The modeling is then based on Stochastic Hybrid Systems (SHS), whose state space comprises a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations is derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and the Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from the literature, and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework is able to yield an accurate estimation of reliability at lower computational cost than traditional Monte Carlo-based methods. PMID:28231313
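
    The final estimation step described here, going from moments to reliability, can be sketched compactly. The functions below implement the textbook FOSM index and the Markov-inequality lower bound under stated assumptions (approximate normality of the safety margin for FOSM; a non-negative damage variable for Markov); the numbers are hypothetical.

    import numpy as np
    from scipy.stats import norm

    def fosm_reliability(mean_g, var_g):
        """FOSM reliability estimate from the first two moments of the
        safety margin g (failure when g < 0)."""
        beta = mean_g / np.sqrt(var_g)      # reliability index
        return 1.0 - norm.cdf(-beta)        # R = P(g >= 0) under normality

    def markov_lower_bound(mean_x, threshold):
        """Markov inequality: for X >= 0, P(X >= a) <= E[X]/a, giving a
        distribution-free lower bound on reliability when failure means
        the damage X exceeds the threshold a."""
        return 1.0 - mean_x / threshold

    print(fosm_reliability(mean_g=3.0, var_g=2.0))        # beta = 3/sqrt(2)
    print(markov_lower_bound(mean_x=0.2, threshold=1.0))  # reliability >= 0.8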

  13. A stochastic hybrid systems based framework for modeling dependent failure processes.

    PubMed

    Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying

    2017-01-01

    In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system's discrete states are triggered by random shocks. The modeling is then based on Stochastic Hybrid Systems (SHS), whose state space comprises a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations is derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and the Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from the literature, and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework is able to yield an accurate estimation of reliability at lower computational cost than traditional Monte Carlo-based methods.

  14. An efficient and reliable predictive method for fluidized bed simulation

    DOE PAGES

    Lu, Liqiang; Benyahia, Sofiane; Li, Tingwen

    2017-06-13

    In past decades, the continuum approach was the only practical technique to simulate large-scale fluidized bed reactors, because discrete approaches suffer from the cost of tracking huge numbers of particles and their collisions. This study significantly improved the computation speed of discrete particle methods in two steps: first, the time-driven hard-sphere (TDHS) algorithm with a larger time-step is proposed, allowing a speedup of 20-60 times; second, the number of tracked particles is reduced by adopting the coarse-graining technique, gaining an additional 2-3 orders of magnitude speedup of the simulations. A new velocity correction term was introduced and validated in TDHS to solve the over-packing issue in dense granular flow. The TDHS was then coupled with the coarse-graining technique to simulate a pilot-scale riser. The simulation results compared well with experiment data and proved that this new approach can be used for efficient and reliable simulations of large-scale fluidized bed systems.

  15. An efficient and reliable predictive method for fluidized bed simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Liqiang; Benyahia, Sofiane; Li, Tingwen

    2017-06-29

    In past decades, the continuum approach was the only practical technique to simulate large-scale fluidized bed reactors, because discrete approaches suffer from the cost of tracking huge numbers of particles and their collisions. This study significantly improved the computation speed of discrete particle methods in two steps: first, the time-driven hard-sphere (TDHS) algorithm with a larger time-step is proposed, allowing a speedup of 20-60 times; second, the number of tracked particles is reduced by adopting the coarse-graining technique, gaining an additional 2-3 orders of magnitude speedup of the simulations. A new velocity correction term was introduced and validated in TDHS to solve the over-packing issue in dense granular flow. The TDHS was then coupled with the coarse-graining technique to simulate a pilot-scale riser. The simulation results compared well with experiment data and proved that this new approach can be used for efficient and reliable simulations of large-scale fluidized bed systems.

  16. Advancing methods for reliably assessing motivational interviewing fidelity using the Motivational Interviewing Skills Code

    PubMed Central

    Lord, Sarah Peregrine; Can, Doğan; Yi, Michael; Marin, Rebeca; Dunn, Christopher W.; Imel, Zac E.; Georgiou, Panayiotis; Narayanan, Shrikanth; Steyvers, Mark; Atkins, David C.

    2014-01-01

    The current paper presents novel methods for collecting MISC data and accurately assessing reliability of behavior codes at the level of the utterance. The MISC 2.1 was used to rate MI interviews from five randomized trials targeting alcohol and drug use. Sessions were coded at the utterance-level. Utterance-based coding reliability was estimated using three methods and compared to traditional reliability estimates of session tallies. Session-level reliability was generally higher compared to reliability using utterance-based codes, suggesting that typical methods for MISC reliability may be biased. These novel methods in MI fidelity data collection and reliability assessment provided rich data for therapist feedback and further analyses. Beyond implications for fidelity coding, utterance-level coding schemes may elucidate important elements in the counselor-client interaction that could inform theories of change and the practice of MI. PMID:25242192

  17. Advancing methods for reliably assessing motivational interviewing fidelity using the motivational interviewing skills code.

    PubMed

    Lord, Sarah Peregrine; Can, Doğan; Yi, Michael; Marin, Rebeca; Dunn, Christopher W; Imel, Zac E; Georgiou, Panayiotis; Narayanan, Shrikanth; Steyvers, Mark; Atkins, David C

    2015-02-01

    The current paper presents novel methods for collecting MISC data and accurately assessing reliability of behavior codes at the level of the utterance. The MISC 2.1 was used to rate MI interviews from five randomized trials targeting alcohol and drug use. Sessions were coded at the utterance-level. Utterance-based coding reliability was estimated using three methods and compared to traditional reliability estimates of session tallies. Session-level reliability was generally higher compared to reliability using utterance-based codes, suggesting that typical methods for MISC reliability may be biased. These novel methods in MI fidelity data collection and reliability assessment provided rich data for therapist feedback and further analyses. Beyond implications for fidelity coding, utterance-level coding schemes may elucidate important elements in the counselor-client interaction that could inform theories of change and the practice of MI. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. Stress and reliability analyses of multilayered composite cylinder under thermal and mechanical loads

    NASA Astrophysics Data System (ADS)

    Wang, Xiaohua

    The coupling resulting from the mutual influence of material thermal and mechanical parameters is examined in the thermal stress analysis of a multilayered isotropic composite cylinder subjected to sudden axisymmetric external and internal temperatures. The method of complex frequency response functions together with the Fourier transform technique is utilized. Because the coupling parameters for some composite materials, such as carbon-carbon, are very small, the effect of coupling is neglected in the orthotropic thermal stress analysis. The stress distributions in multilayered orthotropic cylinders subjected to sudden axisymmetric temperature loading combined with dynamic pressure, as well as asymmetric temperature loading, are also obtained. The method of Fourier series together with the Laplace transform is utilized in solving the heat conduction equation and in the thermal stress analysis. For brittle materials like carbon-carbon composites, the strength variability is represented by two- or three-parameter Weibull distributions, and the "weakest link" principle is applied to the carbon-carbon composite cylinders. The complex frequency response analysis is performed on a multilayered orthotropic cylinder under asymmetrical thermal load. Both deterministic and random thermal stress and reliability analyses can be based on the results of this frequency response analysis. The stress and displacement distributions and reliability of rocket motors under static or dynamic line loads are analyzed by an elasticity approach. Rocket motors are modeled as long hollow multilayered cylinders with an air core, a thick isotropic propellant inner layer and a thin orthotropic kevlar-epoxy case. The case is treated as a single orthotropic layer or as a ten-layered orthotropic structure. Five material properties and the load are treated as random variables with normal distributions when the reliability of the rocket motor is analyzed by the first-order, second-moment method (FOSM).

  19. Inter- and intraobserver reliability of the clock face representation as used to describe the femoral intercondylar notch.

    PubMed

    Azzam, Michael G; Lenarz, Christopher J; Farrow, Lutul D; Israel, Heidi A; Kieffer, David A; Kaar, Scott G

    2011-08-01

    To validate the use of the clock face reference as a reliable means of communicating femoral intercondylar notch position. A single red mark was made on each of ten identical left Sawbones femurs in the intercondylar notch at variable locations. Ten surgeons who routinely perform ACL reconstructions were presented the femurs in random order and asked to state the position of the mark to the nearest 30-min interval. Responses were recorded, and the exercise was repeated 3 weeks later. The same 10 surgeons were presented with 30 arthroscopic photographs of the intercondylar notch, taken at 90° of knee flexion, with a probe pointing at various locations (10 knees; 3 photographs/knee) along the lateral aspect of the notch. The results were then analyzed with an ICC, Cronbach's alpha test, and descriptive statistics. For the Sawbones, the ICC was 0.996, while the individual physicians' Cronbach's alpha values ranged from 0.954 to 0.999, indicating very high interobserver and intraobserver reliability. The mean range of responses among the 10 surgeons was 1.6 h, SD 0.6. For the photographs, the ICC was also high at 0.997, with a mean range of 1.1 h, SD 0.4, among surgeons. The clock face method is commonly utilized both for placement of the femoral tunnel during ACL reconstruction and for describing the location of the ACL femoral tunnel between communicating surgeons. Despite a high statistical interobserver correlation, there is a significant range among different surgeons' responses. The present study therefore questions the reliability of the clock face method as a stand-alone tool for use between surgeons. Other methods that also utilize anatomic landmarks may be more accurate for describing intercondylar notch anatomy. Level of evidence: III.

  20. Uncertainties in (E)UV model atmosphere fluxes

    NASA Astrophysics Data System (ADS)

    Rauch, T.

    2008-04-01

    Context: During the comparison of synthetic spectra calculated with two NLTE model atmosphere codes, namely TMAP and TLUSTY, we encounter systematic differences in the EUV fluxes due to the treatment of level dissolution by pressure ionization. Aims: In the case of Sirius B, we demonstrate an uncertainty in modeling the EUV flux reliably in order to challenge theoreticians to improve the theory of level dissolution. Methods: We calculated synthetic spectra for hot, compact stars using state-of-the-art NLTE model-atmosphere techniques. Results: Systematic differences may occur due to a code-specific cutoff frequency of the H I Lyman bound-free opacity. This is the case for TMAP and TLUSTY. Both codes predict the same flux level at wavelengths lower than about 1500 Å for stars with effective temperatures (T_eff) below about 30 000 K only if the same cutoff frequency is chosen. Conclusions: The theory of level dissolution in high-density plasmas, which is available for hydrogen only, should be generalized to all species. In particular, the cutoff frequencies for the bound-free opacities should be defined in order to make predictions of UV fluxes more reliable.

  1. Verification of transport equations in a general purpose commercial CFD code.

    NASA Astrophysics Data System (ADS)

    Melot, Matthieu; Nennemann, Bernd; Deschênes, Claire

    2016-11-01

    In this paper, the Verification and Validation methodology is presented. This method aims to increase the reliability of, and the trust that can be placed in, complex CFD simulations. The first step of this methodology, code verification, is presented in greater detail. The CFD transport equations in steady-state, transient, and Arbitrary Lagrangian-Eulerian (ALE, used for transient moving mesh) formulations in Ansys CFX are verified. It is shown that the expected spatial and temporal orders of convergence are achieved for the steady-state and transient formulations. Unfortunately, this is not completely the case for the ALE formulation. As for many other commercial and in-house CFD codes, the temporal convergence of the velocity is limited to first order where second order would have been expected.
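
    The code-verification step described here boils down to checking that the discretization error shrinks at the expected rate under refinement. A hedged sketch with made-up errors, not data from the paper:

    ```python
    # Observed order of convergence from errors on uniformly refined grids:
    # e ~ C h^p  =>  p = ln(e1/e2) / ln(h1/h2) for consecutive grid pairs.
    import numpy as np

    h = np.array([0.1, 0.05, 0.025, 0.0125])           # grid spacings
    err = np.array([2.1e-3, 5.4e-4, 1.37e-4, 3.4e-5])  # errors vs exact solution

    p = np.log(err[:-1] / err[1:]) / np.log(h[:-1] / h[1:])
    print("observed orders:", np.round(p, 2))  # ~2 for a second-order scheme
    ```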

  2. Fourth order Douglas implicit scheme for solving three dimension reaction diffusion equation with non-linear source term

    NASA Astrophysics Data System (ADS)

    Hasnain, Shahid; Saqib, Muhammad; Mashat, Daoud Suleiman

    2017-07-01

    This research paper presents a numerical approximation to the non-linear three-dimensional reaction-diffusion equation with a non-linear source term from population genetics. Since various initial and boundary value problems exist in three-dimensional reaction-diffusion phenomena, which are studied numerically by different numerical methods, here we use finite difference schemes (Alternating Direction Implicit and Fourth Order Douglas Implicit) to approximate the solution. Accuracy is studied in terms of the L2, L∞ and relative error norms on randomly selected grids along time levels for comparison with analytical results. The test example demonstrates the accuracy, efficiency and versatility of the proposed schemes. Numerical results showed that the Fourth Order Douglas Implicit scheme is very efficient and reliable for solving the 3-D non-linear reaction-diffusion equation.
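
    For reference, the three error norms named here are short formulas; the sketch below evaluates them on invented numerical and analytical values.

    ```python
    # Discrete error norms used to compare a numerical solution against an
    # analytical one at selected grid points (values are made up).
    import numpy as np

    u_num = np.array([0.981, 0.642, 0.334, 0.127])  # numerical solution
    u_exa = np.array([0.980, 0.640, 0.336, 0.125])  # analytical solution

    e = u_num - u_exa
    L2 = np.sqrt(np.mean(e**2))                       # discrete L2 norm
    Linf = np.max(np.abs(e))                          # L-infinity norm
    rel = np.linalg.norm(e) / np.linalg.norm(u_exa)   # relative error norm

    print(f"L2 = {L2:.2e}, Linf = {Linf:.2e}, relative = {rel:.2e}")
    ```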

  3. Extreme learning machine for reduced order modeling of turbulent geophysical flows.

    PubMed

    San, Omer; Maulik, Romit

    2018-04-01

    We investigate the application of artificial neural networks to stabilize proper orthogonal decomposition-based reduced order models for quasistationary geophysical turbulent flows. An extreme learning machine concept is introduced for computing an eddy-viscosity closure dynamically to incorporate the effects of the truncated modes. We consider a four-gyre wind-driven ocean circulation problem as our prototype setting to assess the performance of the proposed data-driven approach. Our framework provides a significant reduction in computational time and effectively retains the dynamics of the full-order model during the forward simulation period beyond the training data set. Furthermore, we show that the method is robust for larger choices of time steps and can be used as an efficient and reliable tool for long time integration of general circulation models.
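
    The extreme learning machine itself is simple enough to sketch: a hidden layer with fixed random weights and a least-squares solve for the output weights. The toy regression below illustrates the generic concept only; it is not the authors' eddy-viscosity closure model.

    ```python
    # Minimal extreme learning machine (ELM) regressor.
    import numpy as np

    rng = np.random.default_rng(0)

    def elm_fit(X, y, n_hidden=50):
        """Random input weights/biases; output weights by least squares."""
        W = rng.normal(size=(X.shape[1], n_hidden))   # fixed random weights
        b = rng.normal(size=n_hidden)                 # fixed random biases
        H = np.tanh(X @ W + b)                        # hidden-layer features
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights
        return W, b, beta

    def elm_predict(X, W, b, beta):
        return np.tanh(X @ W + b) @ beta

    # toy regression: learn y = sin(x)
    X = np.linspace(0, 2 * np.pi, 200)[:, None]
    y = np.sin(X).ravel()
    W, b, beta = elm_fit(X, y)
    print("max abs error:", np.abs(elm_predict(X, W, b, beta) - y).max())
    ```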

  4. Extreme learning machine for reduced order modeling of turbulent geophysical flows

    NASA Astrophysics Data System (ADS)

    San, Omer; Maulik, Romit

    2018-04-01

    We investigate the application of artificial neural networks to stabilize proper orthogonal decomposition-based reduced order models for quasistationary geophysical turbulent flows. An extreme learning machine concept is introduced for computing an eddy-viscosity closure dynamically to incorporate the effects of the truncated modes. We consider a four-gyre wind-driven ocean circulation problem as our prototype setting to assess the performance of the proposed data-driven approach. Our framework provides a significant reduction in computational time and effectively retains the dynamics of the full-order model during the forward simulation period beyond the training data set. Furthermore, we show that the method is robust for larger choices of time steps and can be used as an efficient and reliable tool for long time integration of general circulation models.

  5. Model and controller reduction of large-scale structures based on projection methods

    NASA Astrophysics Data System (ADS)

    Gildin, Eduardo

    The design of low-order controllers for high-order plants is a challenging problem theoretically as well as from a computational point of view. Frequently, robust controller design techniques result in high-order controllers. It is then interesting to achieve reduced-order models and controllers while maintaining robustness properties. Controllers designed for large structures based on models obtained by finite element techniques yield large state-space dimensions. In this case, problems related to storage, accuracy and computational speed may arise. Thus, model reduction methods capable of addressing controller reduction problems are of primary importance to allow the practical applicability of advanced controller design methods for high-order systems. A challenging large-scale control problem that has emerged recently is the protection of civil structures, such as high-rise buildings and long-span bridges, from dynamic loadings such as earthquakes, high wind, heavy traffic, and deliberate attacks. Even though significant effort has been spent on the application of control theory to the design of civil structures in order to increase their safety and reliability, several challenging issues remain open problems for real-time implementation. This dissertation addresses the development of methodologies for controller reduction for real-time implementation in seismic protection of civil structures using projection methods. Three classes of schemes are analyzed for model and controller reduction: modal truncation, singular value decomposition methods and Krylov-based methods. A family of benchmark problems for structural control is used as a framework for a comparative study of model and controller reduction techniques. It is shown that classical model and controller reduction techniques, such as balanced truncation, modal truncation and moment matching by Krylov techniques, yield reduced-order controllers that do not guarantee stability of the closed-loop system, that is, of the reduced-order controller implemented with the full-order plant. A controller reduction approach that guarantees closed-loop stability is proposed. It is based on the concept of dissipativity (or positivity) of linear dynamical systems. Utilizing passivity-preserving model reduction together with dissipative-LQG controllers, effective low-order optimal controllers are obtained. Results are shown through simulations.
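
    As a hedged illustration of one of the reduction schemes named above, the sketch below implements square-root balanced truncation for a small stable state-space model using SciPy. The system matrices are invented, and the abstract's caveat still applies: truncation alone does not guarantee closed-loop stability when the reduced controller is wrapped around the full plant.

    ```python
    # Square-root balanced truncation of a stable LTI system (A Hurwitz,
    # (A,B) controllable, (A,C) observable); illustrative toy system.
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

    def balanced_truncation(A, B, C, r):
        """Reduce (A, B, C) to order r; also return Hankel singular values."""
        # Gramians: A P + P A^T + B B^T = 0 and A^T Q + Q A + C^T C = 0
        P = solve_continuous_lyapunov(A, -B @ B.T)
        Q = solve_continuous_lyapunov(A.T, -C.T @ C)
        Lp = cholesky(P, lower=True)
        Lq = cholesky(Q, lower=True)
        U, s, Vt = svd(Lq.T @ Lp)            # s = Hankel singular values
        Sri = np.diag(s[:r] ** -0.5)
        T = Lp @ Vt[:r].T @ Sri              # truncated balancing transform
        Ti = Sri @ U[:, :r].T @ Lq.T
        return Ti @ A @ T, Ti @ B, C @ T, s

    # toy stable 4th-order system, reduced to order 2
    A = np.diag([-1.0, -2.0, -5.0, -10.0])
    B = np.array([[1.0], [0.8], [0.6], [0.4]])
    C = np.array([[1.0, -0.5, 0.25, 1.0]])
    Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)
    print("Hankel singular values:", np.round(hsv, 4))
    ```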

  6. 77 FR 59745 - Delegation of Authority Regarding Electric Reliability Organization's Budget, Delegation...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-01

    ...; Order No. 766] Delegation of Authority Regarding Electric Reliability Organization's Budget, Delegation... Electric Reliability Organization (ERO) filings. In particular, this Final Rule transfers delegated... delegation agreements, and ERO policies and procedures. DATES: This rule is effective October 1, 2012. FOR...

  7. Reliability of high-power QCW arrays

    NASA Astrophysics Data System (ADS)

    Feeler, Ryan; Junghans, Jeremy; Remley, Jennifer; Schnurbusch, Don; Stephens, Ed

    2010-02-01

    Northrop Grumman Cutting Edge Optronics has developed a family of arrays for high-power QCW operation. These arrays are built using CTE-matched heat sinks and hard solder in order to maximize the reliability of the devices. A summary of a recent life test is presented in order to quantify the reliability of QCW arrays and associated laser gain modules. A statistical analysis of the raw lifetime data is presented in order to quantify the data in a way that is useful for laser system designers. The life tests demonstrate the high level of reliability of these arrays in a number of operating regimes. For single-bar arrays, an MTTF of 19.8 billion shots is predicted. For four-bar samples, an MTTF of 14.6 billion shots is predicted. In addition, data representing a large pump source is analyzed and shown to have an expected lifetime of 13.5 billion shots. This corresponds to an expected operational lifetime of greater than ten thousand hours at repetition rates less than 370 Hz.
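
    The final figure can be checked by direct unit conversion using the numbers quoted in the abstract:

    ```python
    # Converting a shot-count lifetime into operating hours at a given
    # repetition rate.
    shots = 13.5e9        # expected lifetime of the pump source, in shots
    rep_rate_hz = 370.0   # repetition rate

    hours = shots / rep_rate_hz / 3600.0
    print(f"{hours:,.0f} h")  # ~10,100 h, i.e. >10,000 h at rates <= 370 Hz
    ```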

  8. Making statistical inferences about software reliability

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1988-01-01

    Failure times of software undergoing random debugging can be modelled as order statistics of independent but nonidentically distributed exponential random variables. Using this model, inferences can be made about current reliability and, if debugging continues, future reliability. This model also shows the difficulty inherent in statistical verification of very highly reliable software such as that used by digital avionics in commercial aircraft.
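
    A toy sketch of this model: each latent fault has its own exponential detection time, and the observed failure history is the sorted set of these times. The rates below are invented for illustration.

    ```python
    # Failure times as order statistics of independent, non-identically
    # distributed exponential lifetimes (one exponential per latent fault).
    import numpy as np

    rng = np.random.default_rng(1)
    rates = np.array([2.0, 1.0, 0.5, 0.25, 0.1])  # per-fault detection rates

    lifetimes = rng.exponential(1.0 / rates)      # one draw per fault
    failure_times = np.sort(lifetimes)            # observed debugging history
    print(np.round(failure_times, 3))
    ```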

  9. Structural reliability assessment capability in NESSUS

    NASA Technical Reports Server (NTRS)

    Millwater, H.; Wu, Y.-T.

    1992-01-01

    The principal capabilities of NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), an advanced computer code developed for probabilistic structural response analysis, are reviewed, and its structural reliability assessed. The code combines flexible structural modeling tools with advanced probabilistic algorithms in order to compute probabilistic structural response and resistance, component reliability and risk, and system reliability and risk. An illustrative numerical example is presented.

  10. Structural reliability assessment capability in NESSUS

    NASA Astrophysics Data System (ADS)

    Millwater, H.; Wu, Y.-T.

    1992-07-01

    The principal capabilities of NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), an advanced computer code developed for probabilistic structural response analysis, are reviewed, and its structural reliability assessed. The code combines flexible structural modeling tools with advanced probabilistic algorithms in order to compute probabilistic structural response and resistance, component reliability and risk, and system reliability and risk. An illustrative numerical example is presented.

  11. Higher-order kinetic expansion of quantum dissipative dynamics: mapping quantum networks to kinetic networks.

    PubMed

    Wu, Jianlan; Cao, Jianshu

    2013-07-28

    We apply a new formalism to derive the higher-order quantum kinetic expansion (QKE) for studying dissipative dynamics in a general quantum network coupled with an arbitrary thermal bath. The dynamics of the system population is described by a time-convoluted kinetic equation, where the time-nonlocal rate kernel is systematically expanded in orders of the off-diagonal elements of the system Hamiltonian. At second order, the rate kernel recovers the expression of the noninteracting-blip approximation method. The higher-order corrections in the rate kernel account for the effects of multi-site quantum coherence and bath relaxation. For a quantum harmonic bath, the rate kernels of different orders are analytically derived. As demonstrated by four examples, the higher-order QKE can reliably predict quantum dissipative dynamics, comparing well with the hierarchical equation approach. More importantly, the higher-order rate kernels can distinguish and quantify distinct nontrivial quantum coherent effects, such as long-range energy transfer from quantum tunneling and quantum interference arising from the phase accumulation of interactions.

  12. Fault Tree Analysis for an Inspection Robot in a Nuclear Power Plant

    NASA Astrophysics Data System (ADS)

    Ferguson, Thomas A.; Lu, Lixuan

    2017-09-01

    The life extension of current nuclear reactors has led to an increasing demand for inspection and maintenance of critical reactor components that are too expensive to replace. To reduce the exposure dosage to workers, robotics has become an attractive alternative as a preventative safety tool in nuclear power plants. It is crucial to understand the reliability of these robots in order to increase the veracity of and confidence in their results. This study applies Fault Tree (FT) analysis to a coolant outlet pipe snake-arm inspection robot in a nuclear power plant. Fault trees were constructed for a qualitative analysis to determine the reliability of the robot. Insight into the applicability of fault tree methods for inspection robotics in the nuclear industry is gained through this investigation.
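
    For independent basic events, fault-tree gates reduce to simple probability algebra. The sketch below evaluates a hypothetical two-level tree; the events and probabilities are invented and are not taken from the robot study.

    ```python
    # Minimal fault-tree evaluation with independent basic events:
    # AND gates multiply probabilities; OR gates combine as 1 - prod(1 - p).
    import numpy as np

    def p_and(*ps):
        return float(np.prod(ps))

    def p_or(*ps):
        return 1.0 - float(np.prod([1.0 - p for p in ps]))

    # hypothetical basic-event probabilities for an inspection robot
    p_motor, p_joint, p_camera, p_comms = 1e-3, 5e-4, 2e-3, 1e-3

    # top event "inspection fails": drive fails (motor OR joint),
    # or sensing fails (camera AND comms both fail)
    p_drive = p_or(p_motor, p_joint)
    p_sense = p_and(p_camera, p_comms)
    p_top = p_or(p_drive, p_sense)
    print(f"P(top event) = {p_top:.2e}")
    ```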

  13. A SINS/SRS/GNS Autonomous Integrated Navigation System Based on Spectral Redshift Velocity Measurements.

    PubMed

    Wei, Wenhui; Gao, Zhaohui; Gao, Shesheng; Jia, Ke

    2018-04-09

    In order to meet the requirements of autonomy and reliability for the navigation system, and building on a method that measures velocity from the spectral redshift of natural celestial bodies, a new scheme consisting of a Strapdown Inertial Navigation System (SINS)/Spectral Redshift (SRS)/Geomagnetic Navigation System (GNS) is designed for autonomous integrated navigation. The principle of this SINS/SRS/GNS autonomous integrated navigation system is explored, and the corresponding mathematical model is established. Furthermore, a robust adaptive central difference particle filtering algorithm is proposed for this autonomous integrated navigation system. Simulation experiments were conducted, and the results show that the designed SINS/SRS/GNS autonomous integrated navigation system possesses good autonomy, strong robustness and high reliability, thus providing a new solution for autonomous navigation technology.

  14. A mature Bosch CO2 reduction technology. [for long-duration space missions

    NASA Technical Reports Server (NTRS)

    King, C. D.; Holmes, R. F.

    1976-01-01

    The reduction of CO2 is one of the steps in closing the oxygen loop for long-duration manned space missions. Several units utilizing the Bosch process, which catalytically reduces CO2 with hydrogen, have been built and operated during the past decade. Each contributed substantial information affecting subsequent designs. Early challenges were primarily concerned with carbon control, materials durability, and reliability of reaction initiation. These were followed by concern about power consumption, expendable weight, volume, and process rate control. Suitable materials and techniques for carbon containment and process reliability have been demonstrated. Power requirements have been reduced by almost an order of magnitude. Methods for significant reductions in expendable weight and volume have been developed. The technology is at a state of maturity directly applicable to designs for space missions.

  15. Novel Strength Test Battery to Permit Evidence-Based Paralympic Classification

    PubMed Central

    Beckman, Emma M.; Newcombe, Peter; Vanlandewijck, Yves; Connick, Mark J.; Tweedy, Sean M.

    2014-01-01

    Abstract Ordinal-scale strength assessment methods currently used in Paralympic athletics classification prevent the development of evidence-based classification systems. This study evaluated a battery of 7 ratio-scale isometric tests with the aim of facilitating the development of evidence-based methods of classification. The study aimed to report sex-specific normal performance ranges, evaluate test-retest reliability, and evaluate the relationship between the measures and body mass. Body mass and strength measures were obtained from 118 participants (63 males and 55 females), age 23.2 ± 3.7 years (mean ± SD). Seventeen participants completed the battery twice to evaluate test-retest reliability. The body mass-strength relationship was evaluated using Pearson correlations and allometric exponents. Conventional patterns of force production were observed. Reliability was acceptable (mean intraclass correlation = 0.85). Eight measures had moderate significant correlations with body size (r = 0.30-0.61). Allometric exponents were higher in males than in females (mean 0.99 vs 0.30). Results indicate that this comprehensive and parsimonious battery is an important methodological advance because it has psychometric properties critical for the development of evidence-based classification. Measures were interrelated with body size, indicating further research is required to determine whether raw measures require normalization in order to be validly applied in classification. PMID:25068950
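
    The allometric exponents mentioned here are commonly estimated by linear regression on log-transformed data. A hedged sketch on synthetic values, not the study's data:

    ```python
    # Estimate an allometric exponent b in strength = a * mass^b via
    # log-log regression, then normalize scores by mass**b.
    import numpy as np

    rng = np.random.default_rng(2)
    mass = rng.uniform(55, 95, size=60)                      # body mass, kg
    strength = 3.0 * mass**0.67 * rng.lognormal(0, 0.1, 60)  # synthetic forces

    b, log_a = np.polyfit(np.log(mass), np.log(strength), 1)  # slope = b
    normalized = strength / mass**b                           # allometric scaling
    print(f"estimated exponent b = {b:.2f}")
    ```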

  16. Statistical evaluation of forecasts

    NASA Astrophysics Data System (ADS)

    Mader, Malenka; Mader, Wolfgang; Gluckman, Bruce J.; Timmer, Jens; Schelter, Björn

    2014-08-01

    Reliable forecasts of extreme but rare events, such as earthquakes, financial crashes, and epileptic seizures, would render interventions and precautions possible. Therefore, forecasting methods have been developed which intend to raise an alarm if an extreme event is about to occur. In order to statistically validate the performance of a prediction system, it must be compared to the performance of a random predictor, which raises alarms independent of the events. Such a random predictor can be obtained by bootstrapping or analytically. We propose an analytic statistical framework which, in contrast to conventional methods, allows for validating independently the sensitivity and specificity of a forecasting method. Moreover, our method accounts for the periods during which an event has to remain absent or occur after a respective forecast.

  17. A surrogate accelerated multicanonical Monte Carlo method for uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Wu, Keyi; Li, Jinglai

    2016-09-01

    In this work we consider a class of uncertainty quantification problems where the system performance or reliability is characterized by a scalar parameter y. The performance parameter y is random due to the presence of various sources of uncertainty in the system, and our goal is to estimate the probability density function (PDF) of y. We propose to use the multicanonical Monte Carlo (MMC) method, a special type of adaptive importance sampling algorithms, to compute the PDF of interest. Moreover, we develop an adaptive algorithm to construct local Gaussian process surrogates to further accelerate the MMC iterations. With numerical examples we demonstrate that the proposed method can achieve several orders of magnitudes of speedup over the standard Monte Carlo methods.

  18. 75 FR 35689 - System Personnel Training Reliability Standards

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-23

    ... using realistic simulations.\\14\\ \\13\\ Id. P 1331. \\14\\ Reliability Standard PER-002-0. 9. In Order No... development process to: (1) Include formal training requirements for reliability coordinators similar to those... simulation technology such as a simulator, virtual technology, or other technology in their emergency...

  19. 75 FR 30008 - Commission Information Collection Activities; Comment Request; Extension

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-28

    ... by requiring public utilities, working through the North American Electric Reliability Corporation.... The Commission required a NERC/NAESB \\2\\ team to draft and review Order No. 890 reliability standards... NERC, were to revise reliability standards to require the exchange of data and coordination among...

  20. Evaluation on Cost Overrun Risks of Long-distance Water Diversion Project Based on SPA-IAHP Method

    NASA Astrophysics Data System (ADS)

    Yuanyue, Yang; Huimin, Li

    2018-02-01

    Large investment, long routes, and many change orders are among the main causes of cost overruns in long-distance water diversion projects. Building on existing research, this paper constructs a full-process cost-overrun risk evaluation index system for water diversion projects, applies the SPA-IAHP method to set up a cost-overrun risk evaluation model, and calculates and ranks the weights of all risk evaluation indexes. Finally, the cost-overrun risks are comprehensively evaluated by calculating the linkage measure, and a comprehensive risk level is obtained. The SPA-IAHP method can evaluate risks accurately and with high reliability. As case calculation and verification show, it can provide valid cost-overrun decision-making information to construction companies.

  1. Research on pre-processing of QR Code

    NASA Astrophysics Data System (ADS)

    Sun, Haixing; Xia, Haojie; Dong, Ning

    2013-10-01

    QR codes can encode many kinds of information thanks to their advantages: large storage capacity, high reliability, omnidirectional ultra-high-speed reading, small printing size, highly efficient representation of Chinese characters, etc. In order to obtain a clearer binarized image from a complex background and improve the recognition rate of QR codes, this paper investigates pre-processing methods for QR codes (Quick Response Codes) and presents algorithms and results of image pre-processing for QR code recognition. The conventional method is improved by modifying Sauvola's adaptive text-recognition method. Additionally, a QR code extraction step that adapts to different image sizes and a flexible image correction approach are introduced, improving the efficiency and accuracy of QR code image processing.
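
    The thresholding family the paper modifies is Sauvola's adaptive binarization; scikit-image ships a standard implementation, shown here on a stock document image. This is the unmodified textbook method, not the paper's improved variant.

    ```python
    # Sauvola adaptive binarization on a sample document image.
    from skimage import data
    from skimage.filters import threshold_sauvola

    image = data.page()                      # built-in sample text image
    thresh = threshold_sauvola(image, window_size=25, k=0.2)
    binary = image > thresh                  # True = background, False = ink
    print(binary.shape, binary.dtype)
    ```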

  2. A class of traveling wave solutions for space-time fractional biological population model in mathematical physics

    NASA Astrophysics Data System (ADS)

    Akram, Ghazala; Batool, Fiza

    2017-10-01

    The (G'/G)-expansion method is utilized for a reliable treatment of the space-time fractional biological population model. The method has been applied in the sense of Jumarie's modified Riemann-Liouville derivative. Three classes of exact traveling wave solutions of the associated equation (hyperbolic, trigonometric and rational) are characterized with some free parameters. A generalized fractional complex transform is applied to convert the fractional equations to ordinary differential equations, which subsequently yielded a number of exact solutions. It should be mentioned that the (G'/G)-expansion method is very effective and convenient for solving nonlinear partial differential equations of fractional order whose balancing number is a negative integer.

  3. The 1972 Westover Air Force Base Food Preference Survey and Reliability Study

    DTIC Science & Technology

    1974-02-01

    Most-preferred foods on both scales included milk, orange juice, eggs to order, grilled steak, chocolate milk, and mashed potatoes; least-preferred foods on both scales included butterscotch sauce, parsnips, kidney bean salad, raisin pie, and Knickerbocker soup (taken from the top/bottom 25 lists of both scales). There are several additional methods of evaluating... the factors are not highly correlated (26); they appear to be measuring different things. For example, grilled steak and milk are two very popular...

  4. A Designers’ Guide to Reliable Distributed Systems: Design and Analysis Methods. An Example Design. Volume 1

    DTIC Science & Technology

    1988-08-01

    exchanged between the cells, thus requiring the existence of fast, high-capacity, high-availability communication channels. The same arguments indicate... mininet: loss of a cell; intermittent communications failure in the maxinet; partitioning of the maxinet or the mininet; query decomposition... take place. A new sequencer is selected by the timeout mechanism described above. This process Pj must set its priority to 0 in order to ensure fast...

  5. Heterophoria and fixation disparity: a review.

    PubMed

    Kommerell, G; Gerling, J; Ball, M; de Paz, H; Bach, M

    2000-06-01

    Heterophoria does not provide a reliable clue for ordering prisms in an asthenopic patient. The same reservation applies to associated phoria, as determined by prism correction of fixation disparity. Subjective tests for fixation disparity, even those with a fusionable fixation target, do not correctly indicate the vergence position of the eyes under natural viewing conditions. Attempts to measure fixation disparity on the basis of stereo disparity, using the "Measuring and Correction Methods of H.-J. Haase", have failed.

  6. An N-targeting real-time PCR strategy for the accurate detection of spring viremia of carp virus.

    PubMed

    Shao, Ling; Xiao, Yu; He, Zhengkan; Gao, Longying

    2016-03-01

    Spring viremia of carp virus (SVCV) is a highly pathogenic agent of several economically important Cyprinidae fish species. Currently, there are no effective vaccines or drugs for this virus, and prevention of the disease mostly relies on prompt diagnosis. Previously, nested RT-PCR and RT-qPCR detection methods based on the glycoprotein gene G have been developed. However, the high genetic diversity of the G gene seriously limits the reliability of those methods. Compared with the G gene, phylogenetic analyses indicate that the nucleoprotein gene N is more conserved. Furthermore, studies in other members of the Rhabdoviridae family reveal that their gene transcription level follows the order N>P>M>G>L, indicating that an N-gene-based RT-PCR should have higher sensitivity. Therefore, two pairs of primers and two corresponding probes targeting the conserved regions of the N gene were designed. RT-qPCR assays demonstrated all primers and probes could detect phylogenetically distant isolates specifically and efficiently. Moreover, in artificially infected fish, the detected copy numbers of the N gene were much higher than those of the G gene in all tissues, and both the N and G gene copy numbers were highest in the kidney and spleen. Testing in 1100 farm-raised fish also showed that the N-targeting strategy was more reliable than the G-targeting methods. The method developed in this study provides a reliable tool for the rapid diagnosis of SVCV. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. The implementation leadership scale (ILS): development of a brief measure of unit level implementation leadership

    PubMed Central

    2014-01-01

    Background In healthcare and allied healthcare settings, leadership that supports effective implementation of evidence-based practices (EBPs) is a critical concern. However, there are no empirically validated measures to assess implementation leadership. This paper describes the development, factor structure, and initial reliability and convergent and discriminant validity of a very brief measure of implementation leadership: the Implementation Leadership Scale (ILS). Methods Participants were 459 mental health clinicians working in 93 different outpatient mental health programs in Southern California, USA. Initial item development was supported as part of two United States National Institutes of Health (NIH) studies focused on developing implementation leadership training and implementation measure development. Clinician work group/team-level data were randomly assigned to be utilized for an exploratory factor analysis (n = 229; k = 46 teams) or for a confirmatory factor analysis (n = 230; k = 47 teams). The confirmatory factor analysis controlled for the multilevel, nested data structure. Reliability and validity analyses were then conducted with the full sample. Results The exploratory factor analysis resulted in a 12-item scale with four subscales representing proactive leadership, knowledgeable leadership, supportive leadership, and perseverant leadership. Confirmatory factor analysis supported an a priori higher order factor structure with subscales contributing to a single higher order implementation leadership factor. The scale demonstrated excellent internal consistency reliability as well as convergent and discriminant validity. Conclusions The ILS is a brief and efficient measure of unit-level leadership for EBP implementation. The availability of the ILS will allow researchers to assess strategic leadership for implementation in order to advance understanding of leadership as a predictor of organizational context for implementation. The ILS also holds promise as a tool for leader and organizational development to improve EBP implementation. PMID:24731295

  8. An Automatic Orthonormalization Method for Solving Stiff Boundary-Value Problems

    NASA Astrophysics Data System (ADS)

    Davey, A.

    1983-08-01

    A new initial-value method is described, based on a remark by Drury, for solving stiff linear differential two-point eigenvalue and boundary-value problems. The method is extremely reliable, it is especially suitable for high-order differential systems, and it is capable of accommodating realms of stiffness which other methods cannot reach. The key idea behind the method is to decompose the stiff differential operator into two non-stiff operators, one of which is nonlinear. The nonlinear one is specially chosen so that it advances an orthonormal frame; indeed, the method is essentially a kind of automatic orthonormalization. The second operator is auxiliary, but it is needed to determine the required function. The usefulness of the method is demonstrated by calculating some eigenfunctions for an Orr-Sommerfeld problem with a Reynolds number as large as 10^6.

  9. Road extraction from aerial images using a region competition algorithm.

    PubMed

    Amo, Miriam; Martínez, Fernando; Torre, Margarita

    2006-05-01

    In this paper, we present a user-guided method based on the region competition algorithm to extract roads, and we also provide some clues concerning the placement of the points required by the algorithm. The initial points are analyzed in order to find out whether it is necessary to add more initial points, a process based on image information. Not only is the algorithm able to obtain the road centerline, but it also recovers the road sides. An initial simple model is deformed by using region growing techniques to obtain a rough road approximation. This model is then refined by region competition. This approach delivers the simplest output vector information, fully recovering the road details as they appear in the image, without performing any kind of symbolization. Therefore, we refine a general road model by using a reliable method to detect transitions between regions. This method is proposed in order to obtain information for feeding large-scale Geographic Information Systems.

  10. Cu-Au Alloys Using Monte Carlo Simulations and the BFS Method for Alloys

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo; Good, Brian; Ferrante, John

    1996-01-01

    Semi-empirical methods have shown considerable promise in aiding the calculation of many properties of materials. Materials used in engineering applications have defects that occur for various reasons, including processing. In this work we present the first application of the BFS method for alloys to describe some aspects of microstructure due to processing for the Cu-Au system (CuAu, CuAu3, and Cu3Au). We use finite-temperature Monte Carlo calculations in order to show the influence of 'heat treatment' on the low-temperature phase of the alloy. Although relatively simple, the system has enough features that it can be used as a first test of the reliability of the technique. The main questions to be answered in this work relate to the existence of low-temperature ordered structures for specific concentrations, for example, the ability to distinguish between rather similar phases for equiatomic alloys (CuAu I and CuAu II, the latter characterized by an antiphase boundary separating two identical phases).
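
    A minimal Metropolis sketch with composition-conserving swap moves on a square lattice and a generic pair-interaction energy (standing in for the BFS energetics, which are not reproduced here) illustrates how finite-temperature 'heat treatment' drives ordering in an equiatomic alloy:

    ```python
    # Generic Metropolis Monte Carlo of ordering in a 50/50 binary alloy on
    # a periodic 2D lattice; not the BFS method used in the paper.
    import numpy as np

    rng = np.random.default_rng(3)
    L, J, kT = 20, 0.05, 0.01  # lattice size, pair energy (eV), k_B*T (eV)
    # +1 and -1 mark the two species; bond energy J*s_i*s_j with J > 0
    # favours unlike neighbours (an ordering tendency).
    s = rng.permutation(np.tile([1, -1], L * L // 2)).reshape(L, L)

    def energy():
        # count each nearest-neighbour bond once (right and down neighbours)
        return J * np.sum(s * (np.roll(s, -1, 0) + np.roll(s, -1, 1)))

    for _ in range(20000):                      # Metropolis with swap moves
        i, j, k, l = rng.integers(L, size=4)
        if s[i, j] == s[k, l]:
            continue
        e0 = energy()
        s[i, j], s[k, l] = s[k, l], s[i, j]     # trial swap (conserves composition)
        dE = energy() - e0
        if dE > 0 and rng.random() >= np.exp(-dE / kT):
            s[i, j], s[k, l] = s[k, l], s[i, j] # reject: swap back

    unlike = 0.5 * (np.mean(s != np.roll(s, -1, 0))
                    + np.mean(s != np.roll(s, -1, 1)))
    print(f"fraction of unlike NN bonds: {unlike:.2f}")  # -> 1 as it orders
    ```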

  11. Taguchi Approach to Design Optimization for Quality and Cost: An Overview

    NASA Technical Reports Server (NTRS)

    Unal, Resit; Dean, Edwin B.

    1990-01-01

    Calibrations against the existing cost of doing business in space indicate that establishing a human presence on the Moon and Mars under the Space Exploration Initiative (SEI) will require resources felt by many to be more than the national budget can afford. In order for SEI to succeed, we must actually design and build space systems at lower cost this time, even with tremendous increases in quality and performance requirements, such as extremely high reliability. This implies that both government and industry must change the way they do business. Therefore, new philosophy and technology must be employed to design and produce reliable, high-quality space systems at low cost. In recognizing the need to reduce cost and improve quality and productivity, the Department of Defense (DoD) and the National Aeronautics and Space Administration (NASA) have initiated Total Quality Management (TQM). TQM is a revolutionary management strategy in quality assurance and cost reduction. TQM requires complete management commitment, employee involvement, and use of statistical tools. The quality engineering methods of Dr. Taguchi, employing design of experiments (DOE), are among the most important statistical tools of TQM for designing high-quality systems at reduced cost. Taguchi methods provide an efficient and systematic way to optimize designs for performance, quality, and cost. They have been used successfully in Japan and the United States to design reliable, high-quality products at low cost in such areas as automobiles and consumer electronics. However, these methods are just beginning to see application in the aerospace industry. The purpose of this paper is to present an overview of the Taguchi methods for improving quality and reducing cost, and to describe the current state of applications and their role in identifying cost-sensitive design parameters.
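
    A hedged sketch of the core Taguchi calculation: signal-to-noise ratios over an orthogonal array, then factor-level averages to identify influential parameters. The array responses below are invented.

    ```python
    # Taguchi larger-is-better S/N ratios on an L4(2^3) orthogonal array.
    import numpy as np

    # 4 runs, 3 two-level factors (levels coded 0/1)
    L4 = np.array([[0, 0, 0],
                   [0, 1, 1],
                   [1, 0, 1],
                   [1, 1, 0]])

    # two replicate responses per run (hypothetical measurements)
    y = np.array([[12.1, 11.8],
                  [14.0, 13.6],
                  [13.1, 13.4],
                  [15.2, 14.9]])

    sn = -10 * np.log10(np.mean(1.0 / y**2, axis=1))  # larger-is-better S/N

    for f in range(L4.shape[1]):                      # main effect per factor
        lo = sn[L4[:, f] == 0].mean()
        hi = sn[L4[:, f] == 1].mean()
        print(f"factor {f}: level0={lo:.2f} dB, level1={hi:.2f} dB, "
              f"effect={hi - lo:+.2f} dB")
    ```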

  12. Measurement errors when estimating the vertical jump height with flight time using photocell devices: the example of Optojump.

    PubMed

    Attia, A; Dhahbi, W; Chaouachi, A; Padulo, J; Wong, D P; Chamari, K

    2017-03-01

    Common methods to estimate vertical jump height (VJH) are based on measurements of flight time (FT) or vertical reaction force. This study aimed to assess the measurement errors when estimating the VJH from flight time using photocell devices, in comparison with the gold-standard jump height measured by a force plate (FP). The second purpose was to determine the intrinsic reliability of the Optojump photoelectric cells in estimating VJH. To this end, 20 subjects (age: 22.50±1.24 years) performed maximal vertical jumps in three modalities in randomized order: the squat jump (SJ), counter-movement jump (CMJ), and CMJ with arm swing (CMJarm). Each trial was simultaneously recorded by the FP and Optojump devices. High intra-class correlation coefficients (ICCs) for validity (0.98-0.99) and low limits of agreement (less than 1.4 cm) were found, although a systematic difference in jump height was consistently observed between the FT and double-integration-of-force methods (-31% to -27%; p<0.001), with a large effect size (Cohen's d >1.2). Intra-session reliability of Optojump was excellent, with ICCs ranging from 0.98 to 0.99, low coefficients of variation (3.98%), and low standard errors of measurement (0.8 cm). It was concluded that there was a high correlation between the two methods of estimating vertical jump height, but the FT method cannot replace the gold standard, due to the large systematic bias. According to our results, equations for each of the three jump modalities are presented in order to obtain a better estimation of the jump height.
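
    The flight-time method rests on one kinematic identity: if take-off and landing heights match, FT = 2*sqrt(2h/g), so h = g*FT^2/8. A worked check with an illustrative flight time:

    ```python
    # Jump height from flight time, assuming equal take-off and landing
    # heights (the core assumption of photocell-based estimates).
    g = 9.81    # m/s^2
    ft = 0.55   # measured flight time in seconds (illustrative)
    h = g * ft**2 / 8.0
    print(f"estimated jump height: {h * 100:.1f} cm")  # ~37.1 cm
    ```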

  13. Measurement errors when estimating the vertical jump height with flight time using photocell devices: the example of Optojump

    PubMed Central

    Attia, A; Chaouachi, A; Padulo, J; Wong, DP; Chamari, K

    2016-01-01

    Common methods to estimate vertical jump height (VJH) are based on measurements of flight time (FT) or vertical reaction force. This study aimed to assess the measurement errors when estimating the VJH from flight time using photocell devices, in comparison with the gold-standard jump height measured by a force plate (FP). The second purpose was to determine the intrinsic reliability of the Optojump photoelectric cells in estimating VJH. To this end, 20 subjects (age: 22.50±1.24 years) performed maximal vertical jumps in three modalities in randomized order: the squat jump (SJ), counter-movement jump (CMJ), and CMJ with arm swing (CMJarm). Each trial was simultaneously recorded by the FP and Optojump devices. High intra-class correlation coefficients (ICCs) for validity (0.98-0.99) and low limits of agreement (less than 1.4 cm) were found, although a systematic difference in jump height was consistently observed between the FT and double-integration-of-force methods (-31% to -27%; p<0.001), with a large effect size (Cohen's d >1.2). Intra-session reliability of Optojump was excellent, with ICCs ranging from 0.98 to 0.99, low coefficients of variation (3.98%), and low standard errors of measurement (0.8 cm). It was concluded that there was a high correlation between the two methods of estimating vertical jump height, but the FT method cannot replace the gold standard, due to the large systematic bias. According to our results, equations for each of the three jump modalities are presented in order to obtain a better estimation of the jump height. PMID:28416900

  14. The test-retest reliability of the latent construct of executive function depends on whether tasks are represented as formative or reflective indicators.

    PubMed

    Willoughby, Michael T; Kuhn, Laura J; Blair, Clancy B; Samek, Anya; List, John A

    2017-10-01

    This study investigates the test-retest reliability of a battery of executive function (EF) tasks, with a specific interest in testing whether the method used to create a battery-wide score results in differences in the apparent test-retest reliability of children's performance. A total of 188 4-year-olds completed a battery of computerized EF tasks twice across a period of approximately two weeks. Two different approaches were used to create a score indexing children's overall performance on the battery: (1) the mean score of all completed tasks, and (2) a factor score estimate obtained using confirmatory factor analysis (CFA). Pearson and intra-class correlations were used to investigate the test-retest reliability of individual EF tasks, as well as of the overall battery score. Consistent with previous studies, the test-retest reliability of individual tasks was modest (rs ≈ .60). The test-retest reliability of the overall battery scores differed depending on the scoring approach (r_mean = .72; r_factor score = .99). It is concluded that children's performance on individual EF tasks exhibits modest levels of test-retest reliability. This underscores the importance of administering multiple tasks and aggregating performance across these tasks in order to improve precision of measurement. However, the specific strategy that is used has a large impact on the apparent test-retest reliability of the overall score. These results replicate our earlier findings and provide additional cautionary evidence against the routine use of factor-analytic approaches for representing individual performance across a battery of EF tasks.

  15. The Reliability of Individualized Load-Velocity Profiles.

    PubMed

    Banyard, Harry G; Nosaka, K; Vernon, Alex D; Haff, G Gregory

    2017-11-15

    This study examined the reliability of peak velocity (PV), mean propulsive velocity (MPV), and mean velocity (MV) in the development of load-velocity profiles (LVPs) in the full-depth free-weight back squat performed with maximal concentric effort. Eighteen resistance-trained men performed a baseline one-repetition maximum (1RM) back squat trial and three subsequent 1RM trials used for reliability analyses, with a 48-hour interval between trials. 1RM trials comprised lifts from six relative loads: 20, 40, 60, 80, 90, and 100% 1RM. Individualized LVPs for PV, MPV, or MV were derived from loads that were highly reliable based on the following criteria: intra-class correlation coefficient (ICC) >0.70, coefficient of variation (CV) ≤10%, and Cohen's d effect size (ES) <0.60. PV was highly reliable at all six loads. Importantly, MPV and MV were highly reliable at 20, 40, 60, 80 and 90% but not 100% 1RM (MPV: ICC=0.66, CV=18.0%, ES=0.10, standard error of the estimate [SEM]=0.04 m·s⁻¹; MV: ICC=0.55, CV=19.4%, ES=0.08, SEM=0.04 m·s⁻¹). When considering the reliable ranges, almost perfect correlations were observed for LVPs derived from PV at 20-100% (r=0.91-0.93), MPV at 20-90% (r=0.92-0.94) and MV at 20-90% (r=0.94-0.95). Furthermore, the LVPs were not significantly different (p>0.05) between trials, movement velocities, or between linear regression and second-order polynomial fits. PV at 20-100%, MPV at 20-90%, and MV at 20-90% are reliable and can be utilized to develop LVPs using linear regression. Conceptually, LVPs can be used to monitor changes in movement velocity and employed as a method for adjusting sessional training loads according to daily readiness.
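
    Deriving an individualized LVP by linear regression is a short calculation; the sketch below fits velocity against relative load with invented values, not the study's data.

    ```python
    # Individualized load-velocity profile: linear regression of mean
    # velocity on relative load, then prediction at a target load.
    import numpy as np

    load = np.array([20, 40, 60, 80, 90])          # % 1RM
    mv = np.array([1.30, 1.06, 0.82, 0.55, 0.42])  # mean velocity, m/s (made up)

    slope, intercept = np.polyfit(load, mv, 1)     # v = slope*load + intercept
    v_100 = slope * 100 + intercept
    print(f"v = {slope:.4f}*load + {intercept:.3f}; "
          f"predicted v at 100% 1RM = {v_100:.2f} m/s")
    ```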

  16. 76 FR 23801 - North American Electric Reliability Corporation; Order Approving Reliability Standard

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-28

    ... primary control functionality. Nonetheless, until data from drills, exercises and tests can support a... control center becomes inoperable and to conduct reviews and tests, at least annually, to ensure viability... Preparedness and Operations (EOP) Reliability Standard EOP- 008-1 (Loss of Control Center Functionality). The...

  17. 75 FR 51027 - Commission Information Collection Activities (FERC-917 and FERC-918) 1

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-18

    ... public utilities, working through the North American Electric Reliability Corporation (NERC), to develop... required a NERC/NAESB \\3\\ team to draft and review Order No. 890 reliability standards and business... revise reliability standards to require the exchange of data and coordination among transmission...

  18. 77 FR 6550 - Commission Information Collection Activities (FERC-725A); Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-08

    ... submitting the information collection FERC-725A, Mandatory Reliability Standards for the Bulk- Power System... Reliability Standard, in an order published in the Federal Register (76 FR 72197, 11/22/2011). FERC received...) 273-0873. SUPPLEMENTARY INFORMATION: Title: FERC-725A, Mandatory Reliability Standards for the Bulk...

  19. ASSESSING AND COMBINING RELIABILITY OF PROTEIN INTERACTION SOURCES

    PubMed Central

    LEACH, SONIA; GABOW, AARON; HUNTER, LAWRENCE; GOLDBERG, DEBRA S.

    2008-01-01

    Integrating diverse sources of interaction information to create protein networks requires strategies sensitive to differences in accuracy and coverage of each source. Previous integration approaches calculate reliabilities of protein interaction information sources based on congruity to a designated ‘gold standard.’ In this paper, we provide a comparison of the two most popular existing approaches and propose a novel alternative for assessing reliabilities which does not require a gold standard. We identify a new method for combining the resultant reliabilities and compare it against an existing method. Further, we propose an extrinsic approach to evaluation of reliability estimates, considering their influence on the downstream tasks of inferring protein function and learning regulatory networks from expression data. Results using this evaluation method show 1) our method for reliability estimation is an attractive alternative to those requiring a gold standard and 2) the new method for combining reliabilities is less sensitive to noise in reliability assignments than the similar existing technique. PMID:17990508

  20. Development and preliminary validation of a questionnaire to measure satisfaction with home care in Greece: an exploratory factor analysis of polychoric correlations

    PubMed Central

    2010-01-01

    Background The primary aim of this study was to develop and psychometrically test a Greek-language instrument for measuring satisfaction with home care. The first empirical evidence about the level of satisfaction with these services in Greece is also provided. Methods The questionnaire resulted from a literature search, on-site observation and cognitive interviews. It was applied in 2006 to a sample of 201 enrollees of five home care programs in the city of Thessaloniki and contains 31 items that measure satisfaction with individual service attributes and are expressed on a 5-point Likert scale. The latter has usually been treated in practice as an interval scale, although it is in principle ordinal. We thus treated the variable as ordinal, but also employed the traditional approach in order to compare the findings. Our analysis was therefore based on ordinal measures such as the polychoric correlation, Kendall's Tau b coefficient and ordinal Cronbach's alpha. Exploratory factor analysis was followed by an assessment of internal consistency reliability, test-retest reliability, construct validity and sensitivity. Results Analyses with ordinal and interval-scale measures produced essentially very similar results and identified four multi-item scales. Three of these were found to be reliable and valid: socioeconomic change, staff skills and attitudes, and service appropriateness. A fourth dimension, service planning, had lower internal consistency reliability and yet very satisfactory test-retest reliability, construct validity and floor and ceiling effects. The global satisfaction scale created was also quite reliable. Overall, participants were satisfied, yet not very satisfied, with home care services. More room for improvement seems to exist for the socio-economic and planning aspects of care and less for staff skills and attitudes and the appropriateness of provided services. Conclusions The methods developed seem to be a promising tool for the measurement of home care satisfaction in Greece. PMID:20602759

  1. Reliability of a survey tool for measuring consumer nutrition environment in urban food stores.

    PubMed

    Hosler, Akiko S; Dharssi, Aliza

    2011-01-01

    Despite the increase in the volume and importance of food environment research, there is a general lack of reliable measurement tools. This study presents the development and reliability assessment of a tool for measuring consumer nutrition environment in urban food stores. Cross-sectional design. A racially diverse downtown portion (6 ZIP code areas) of Albany, New York. A sample of 39 food stores was visited by our research team in 2009 to 2010. These stores were randomly selected from 123 eligible food stores identified through multiple government lists and ground-truthing. The Food Retail Outlet Survey Tool was developed to assess the presence of selected food and nonfood items, placement, milk prices, physical characteristics of the store, policy implementation, and advertisements on outside windows. For in-store items, agreement of observations between experienced and lightly trained surveyors was assessed. For window advertisement assessments, inter-method agreement (on-site sketch vs digital photo) and inter-rater agreement (both on-site) among lightly trained surveyors were evaluated. Percent agreement, kappa, and prevalence-adjusted bias-adjusted kappa were calculated for in-store observations. Intraclass correlation coefficients were calculated for window observations. Twenty-seven of the 47 in-store items had 100% agreement. The prevalence-adjusted bias-adjusted kappa indicated excellent agreement (≥0.90) on all items, except aisle width (0.74) and dark-green/orange colored fresh vegetables (0.85). The store type (nonconvenience store), the order of visits (first half), and the time to complete the survey (>10 minutes) were associated with lower reliability in these 2 items. Both the inter-method and inter-rater agreements for window advertisements were uniformly high (intraclass correlation coefficients ranged from 0.94 to 1.00), indicating high reliability. The Food Retail Outlet Survey Tool is a reliable tool for quickly measuring consumer nutrition environment. It can be effectively used by an individual who attended a 30-minute group briefing and practiced with 3 to 4 stores.
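
    The agreement statistics named here are short formulas. A hedged sketch with invented binary ratings, for two raters on one item:

    ```python
    # Percent agreement, Cohen's kappa, and prevalence-adjusted
    # bias-adjusted kappa (PABAK) for a binary item.
    import numpy as np

    r1 = np.array([1, 1, 0, 1, 1, 0, 1, 1, 1, 0])  # rater 1 (present/absent)
    r2 = np.array([1, 1, 0, 1, 0, 0, 1, 1, 1, 0])  # rater 2

    po = np.mean(r1 == r2)                 # observed agreement
    p1, p2 = r1.mean(), r2.mean()
    pe = p1 * p2 + (1 - p1) * (1 - p2)     # chance agreement
    kappa = (po - pe) / (1 - pe)
    pabak = 2 * po - 1                     # kappa with chance fixed at 0.5
    print(f"po={po:.2f}, kappa={kappa:.2f}, PABAK={pabak:.2f}")
    ```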

  2. Geometric classification of scalp hair for valid drug testing, 6 more reliable than 8 hair curl groups

    PubMed Central

    Mkentane, K.; Gumedze, F.; Ngoepe, M.; Davids, L. M.; Khumalo, N. P.

    2017-01-01

    Introduction Curly hair is reported to contain higher lipid content than straight hair, which may influence the incorporation of lipid-soluble drugs. The use of race to describe hair curl variation (Asian, Caucasian and African) is unscientific yet common in the medical literature (including reports of drug levels in hair). This study investigated the reliability of a geometric classification of hair based on 3 measurements: the curve diameter, curl index and number of waves. Materials and methods After ethical approval and informed consent, proximal virgin (6 cm) hair sampled from the vertex of the scalp in 48 healthy volunteers was evaluated. Three raters each scored hairs from the 48 volunteers on two occasions for the 8- and 6-group classifications. One rater applied the 6-group classification to 80 additional volunteers in order to further confirm the reliability of this system. The Kappa statistic was used to assess intra- and inter-rater agreement. Results Each rater classified 480 hairs on each occasion. No rater classified all of any volunteer's 10 hairs into the same group; the most frequently occurring group was used for analysis. The inter-rater agreement was poor for the 8-group classification (k = 0.418) but improved for the 6-group classification (k = 0.671). The intra-rater agreement also improved (k = 0.444 to 0.648 versus 0.599 to 0.836) for the 6-group classification; that for the single evaluator who rated all volunteers was good (k = 0.754). Conclusions Although small, this is the first study to test the reliability of a geometric classification. The 6-group method is more reliable. However, a digital classification system is likely to reduce operator error. A reliable objective classification of human hair curl is long overdue, particularly with the increasing use of hair as a testing substrate for treatment compliance in medicine. PMID:28570555

  3. Mechanical System Reliability and Cost Integration Using a Sequential Linear Approximation Method

    NASA Technical Reports Server (NTRS)

    Kowal, Michael T.

    1997-01-01

    The development of new products is dependent on product designs that incorporate high levels of reliability along with a design that meets predetermined levels of system cost. Additional constraints on the product include explicit and implicit performance requirements. Existing reliability and cost prediction methods result in no direct linkage between the variables affecting these two dominant product attributes. A methodology to integrate reliability and cost estimates using a sequential linear approximation method is proposed. The sequential linear approximation method utilizes probability-of-failure sensitivities determined from probabilistic reliability methods as well as manufacturing cost sensitivities. The application of the sequential linear approximation method to a mechanical system is demonstrated.

  4. Exploring the boundary between aromatic and olefinic character: Bad news for second-order perturbation theory and density functional schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sulzbach, H.M.; Schaefer, H.F. III; Klopper, W.

    1996-04-10

    The question whether [10]annulene prefers olefinic structures with alternating single and double bonds or aromatic structures like all other small to medium sized uncharged (4n + 2){pi} electron homologs (e.g. benzene, [14]annulene) has been controversial for more than 20 years. Our new results suggest that only high-order correlated methods will be able to correctly predict the [10]annulene potential energy surface. The UNO-CAS results and the strong oscillation of the MP series show that nondynamical electron correlation is important. Consequently, reliable results can only be expected at the highest correlated levels, like the CCSD(T) method, which predicts the olefinic twist structure to be lower in energy by 3-7 kcal/mol. This prediction that the twist structure is lower in energy is supported by (a) the MP2-R12 method, which shows that large basis sets favor the olefinic structure relative to the aromatic one, and (b) the fact that both structures are about equally affected by nondynamical electron correlation. We conclude that [10]annulene is a system which cannot be described adequately by either second-order Moller-Plesset perturbation theory or density functional methods. 13 refs., 3 tabs.

  5. Robust fractional order sliding mode control of doubly-fed induction generator (DFIG)-based wind turbines.

    PubMed

    Ebrahimkhani, Sadegh

    2016-07-01

    Wind power plants have nonlinear dynamics and contain many uncertainties such as unknown nonlinear disturbances and parameter uncertainties. Thus, it is a difficult task to design a robust, reliable controller for this system. This paper proposes a novel robust fractional-order sliding mode (FOSM) controller for maximum power point tracking (MPPT) control of a doubly-fed induction generator (DFIG)-based wind energy conversion system. In order to enhance the robustness of the control system, uncertainties and disturbances are estimated using a fractional-order uncertainty estimator. In the proposed method a continuous control strategy is developed to achieve chattering-free fractional-order sliding-mode control, and no knowledge of the uncertainties and disturbances or their bounds is assumed. The boundedness and convergence properties of the closed-loop signals are proven using Lyapunov's stability theory. Simulations in the presence of various uncertainties were carried out to evaluate the effectiveness and robustness of the proposed control scheme. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  6. Nonparametric autocovariance estimation from censored time series by Gaussian imputation.

    PubMed

    Park, Jung Wook; Genton, Marc G; Ghosh, Sujit K

    2009-02-01

    One of the most frequently used methods to model the autocovariance function of a second-order stationary time series is to use the parametric framework of autoregressive and moving average models developed by Box and Jenkins. However, such parametric models, though very flexible, may not always be adequate to model autocovariance functions with sharp changes. Furthermore, if the data do not follow the parametric model and are censored at a certain value, the estimation results may not be reliable. We develop a Gaussian imputation method to estimate an autocovariance structure via nonparametric estimation of the autocovariance function in order to address both censoring and incorrect model specification. We demonstrate the effectiveness of the technique in terms of bias and efficiency with simulations under various rates of censoring and underlying models. We describe its application to a time series of silicon concentrations in the Arctic.
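
    For reference, the uncensored sample autocovariance that the imputation method estimates nonparametrically is a short formula; a sketch on a synthetic series (not the paper's censored-data procedure):

    ```python
    # Biased sample autocovariance of a second-order stationary series:
    # gamma_hat(h) = (1/n) * sum_t (x_t - xbar)(x_{t+h} - xbar).
    import numpy as np

    def sample_acvf(x, max_lag):
        x = np.asarray(x, dtype=float)
        n, xbar = len(x), x.mean()
        return np.array([np.sum((x[:n - h] - xbar) * (x[h:] - xbar)) / n
                         for h in range(max_lag + 1)])

    rng = np.random.default_rng(4)
    e = rng.normal(size=500)
    x = e + 0.6 * np.r_[0.0, e[:-1]]  # MA(1)-like series
    print(np.round(sample_acvf(x, 5), 3))
    ```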

  7. Position and speed control of brushless DC motors using sensorless techniques and application trends.

    PubMed

    Gamazo-Real, José Carlos; Vázquez-Sánchez, Ernesto; Gómez-Gil, Jaime

    2010-01-01

    This paper provides a technical review of position and speed sensorless methods for controlling Brushless Direct Current (BLDC) motor drives, including background on sensor-based analysis, its limitations, and recent advances. The performance and reliability of BLDC motor drivers have improved as conventional control and sensing techniques have been supplemented by sensorless technology. Sensorless advances are then reviewed and recent developments in this area are introduced, with their inherent advantages and drawbacks, including an analysis of practical implementation issues and applications. The study includes a deep overview of state-of-the-art back-EMF sensing methods, including Terminal Voltage Sensing, Third Harmonic Voltage Integration, Terminal Current Sensing, Back-EMF Integration and PWM strategies. The most relevant techniques based on estimation and models are also briefly analysed, such as the Sliding-mode Observer, Extended Kalman Filter, Model Reference Adaptive System, Adaptive observers (Full-order and Pseudoreduced-order) and Artificial Neural Networks.

  8. A weight-of-evidence approach to assess chemicals: case study on the assessment of persistence of 4,6-substituted phenolic benzotriazoles in the environment.

    PubMed

    Brandt, Marc; Becker, Eva; Jöhncke, Ulrich; Sättler, Daniel; Schulte, Christoph

    2016-01-01

    One important purpose of the European REACH Regulation (EC No. 1907/2006) is to promote the use of alternative methods for assessing the hazards of substances in order to avoid animal testing. Experience with environmental hazard assessment under REACH shows that efficient alternative methods are needed to assess chemicals when standard test data are missing. One such assessment method is the weight-of-evidence (WoE) approach. In this study, the WoE approach was used to assess the persistence of certain phenolic benzotriazoles, a group that also includes substances of very high concern (SVHC). For phenolic benzotriazoles, assessment of environmental persistence is challenging because standard information, i.e. simulation tests on biodegradation, is not available. Thus, the WoE approach was used: the overall information from many sources was considered, and the individual uncertainties of each source were analysed separately. In a second step, all information was aggregated to give an overall picture of persistence and assess the degradability of the phenolic benzotriazoles under consideration, even though the reliability of individual sources was incomplete. Overall, the evidence that phenolic benzotriazoles are very persistent in the environment is unambiguous. This was demonstrated by a WoE approach that considers the prerequisites of REACH by combining several limited information sources; the combination enabled a clear overall assessment which can reliably be used for SVHC identification. Finally, it is recommended that WoE approaches be included as an important tool in future environmental risk assessments.

  9. Pre-processing ambient noise cross-correlations with equalizing the covariance matrix eigenspectrum

    NASA Astrophysics Data System (ADS)

    Seydoux, Léonard; de Rosny, Julien; Shapiro, Nikolai M.

    2017-09-01

    Passive imaging techniques based on ambient seismic noise require a nearly isotropic distribution of the noise sources in order to ensure reliable traveltime measurements between seismic stations. However, real ambient seismic noise often only partially fulfils this condition. It is generated in preferential areas (in the deep ocean or near continental shores), and some highly coherent pulse-like signals may be present in the data, such as those generated by earthquakes. Several pre-processing techniques have been developed to attenuate the directional and deterministic behaviour of real ambient noise. Most of them are applied to individual seismograms before the cross-correlations are computed; the most widely used are spectral whitening and temporal smoothing of the individual seismic traces. We here propose an additional pre-processing step, to be used together with the classical ones, that is based on a spatial analysis of the seismic wavefield. We compute the cross-spectra between all available station pairs in the spectral domain, leading to the data covariance matrix, and apply a one-bit normalization to the covariance matrix eigenspectrum before extracting the cross-correlations in the time domain. The efficiency of the method is shown with several numerical tests. We apply the method to data collected by the USArray when the M8.8 Maule earthquake occurred on 2010 February 27. The method shows a clear improvement over classical equalization in attenuating the highly energetic and coherent waves incoming from the earthquake, and allows reliable traveltime measurements even in the presence of the earthquake.
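
    The core operation, replacing the eigenvalues of the frequency-domain covariance matrix with a flat ("one-bit") spectrum while keeping its eigenvectors, can be sketched in a few lines of Python. Window averaging, frequency smoothing and the exact truncation rule of the published method are omitted, and `tol` is an illustrative threshold.

    ```python
    import numpy as np

    def equalized_covariance(spectra, tol=1e-10):
        """Eigenspectrum equalization at a single frequency. `spectra` holds
        the Fourier coefficients of N stations over M subwindows (N x M).
        Eigenvalues above `tol` are replaced by one, flattening the energy
        of dominant coherent arrivals while keeping the eigenvector
        (spatial coherence) structure from which cross-correlations are
        later extracted."""
        N, M = spectra.shape
        C = spectra @ spectra.conj().T / M        # covariance matrix estimate
        w, V = np.linalg.eigh(C)                  # Hermitian eigendecomposition
        w_eq = (w > tol * w.max()).astype(float)  # "one-bit" eigenvalue spectrum
        return (V * w_eq) @ V.conj().T            # equalized covariance matrix
    ```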

  10. Comprehensive Deployment Method for Technical Characteristics Base on Multi-failure Modes Correlation Analysis

    NASA Astrophysics Data System (ADS)

    Zheng, W.; Gao, J. M.; Wang, R. X.; Chen, K.; Jiang, Y.

    2017-12-01

    This paper puts forward a new method of technical characteristics deployment based on Reliability Function Deployment (RFD), developed by analysing the advantages and shortcomings of related research on mechanical reliability design. The matrix decomposition structure of RFD is used to describe the correlative relations between failure mechanisms, soft failures and hard failures. By considering the correlation of multiple failure modes, the reliability loss of one failure mode with respect to the whole part is defined, and a calculation and analysis model for reliability loss is presented. According to the reliability loss, the reliability index value of the whole part is allocated to each failure mode. On the basis of this deployment of the reliability index value, the inverse reliability method is employed to acquire the values of the technical characteristics. The feasibility and validity of the proposed method are illustrated by a development case of a machining centre's transmission system.

  11. Reliability of digital reactor protection system based on extenics.

    PubMed

    Zhao, Jing; He, Ya-Nan; Gu, Peng-Fei; Chen, Wei-Hua; Gao, Feng

    2016-01-01

    After the Fukushima nuclear accident, the safety of nuclear power plants (NPPs) has become a widespread concern. The reliability of the reactor protection system (RPS) is directly related to the safety of NPPs; however, it is difficult to evaluate the reliability of a digital RPS accurately. Methods based on probability estimation carry uncertainties and can neither reflect the reliability status of the RPS dynamically nor support maintenance and troubleshooting. In this paper, a reliability quantitative analysis method based on extenics is proposed for the (safety-critical) digital RPS, by which the relationship between the reliability and response time of the RPS is constructed. As an example, the reliability of the RPS for a CPR1000 NPP is modeled and analyzed by the proposed method. The results show that the proposed method can estimate the RPS reliability effectively and provide support for maintenance and troubleshooting of digital RPSs.

  12. Electronic versus paper-based assessment of health-related quality of life specific to HIV disease: reliability study of the PROQOL-HIV questionnaire.

    PubMed

    Duracinsky, Martin; Lalanne, Christophe; Goujard, Cécile; Herrmann, Susan; Cheung-Lung, Christian; Brosseau, Jean-Paul; Schwartz, Yannick; Chassany, Olivier

    2014-04-25

    Electronic patient-reported outcomes (PRO) provide quick and usually reliable assessments of patients' health-related quality of life (HRQL). An electronic version of the Patient-Reported Outcomes Quality of Life-human immunodeficiency virus (PROQOL-HIV) questionnaire was developed, and its face validity and reliability were assessed using standard psychometric methods. A sample of 80 French outpatients (66% male, 52/79; mean age 46.7 years, SD 10.9) was recruited. Paper-based and electronic questionnaires were completed in a randomized crossover design (2-7 day interval). Biomedical data were collected. Questionnaire version and order effects were tested on full-scale scores in a 2-way ANOVA with patients as random effects. Test-retest reliability was evaluated using Pearson and intraclass correlation coefficients (ICC, with 95% confidence interval) for each dimension. Usability testing was carried out from patients' survey reports, specifically, general satisfaction, ease of completion, quality and clarity of the user interface, and motivation to participate in follow-up PROQOL-HIV electronic assessments. Questionnaire version and administration order effects (N=59 complete cases) were not significant at the 5% level, and no interaction was found between these 2 factors (P=.94). Reliability indexes were acceptable, with Pearson correlations greater than .7 and ICCs ranging from .708 to .939; scores were not statistically different between the two versions. A total of 63 (79%) complete patients' survey reports were available, and 55% of patients (30/55) reported being satisfied and interested in electronic assessment of their HRQL in clinical follow-up. Individual ratings of the PROQOL-HIV user interface (85%-100% positive responses) confirmed its clarity and usability. The electronic PROQOL-HIV introduces minor modifications to the original paper-based version, following International Society for Pharmacoeconomics and Outcomes Research (ISPOR) ePRO Task Force guidelines, and shows good reliability and face validity. Patients can complete the computerized PROQOL-HIV questionnaire, and the scores from the paper and electronic versions share comparable accuracy and interpretation.
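
    Test-retest figures such as the ICCs quoted above are commonly computed with the two-way random-effects, absolute-agreement formula ICC(2,1). The following Python sketch shows that standard formula; it is illustrative and not the authors' own analysis code.

    ```python
    import numpy as np

    def icc_2_1(scores):
        """ICC(2,1): two-way random effects, absolute agreement, single
        measurement (Shrout & Fleiss), for an n_subjects x k_occasions
        score matrix, e.g. paper vs electronic administrations."""
        n, k = scores.shape
        grand = scores.mean()
        ms_rows = k * np.sum((scores.mean(axis=1) - grand) ** 2) / (n - 1)
        ms_cols = n * np.sum((scores.mean(axis=0) - grand) ** 2) / (k - 1)
        resid = (scores - scores.mean(axis=1, keepdims=True)
                 - scores.mean(axis=0, keepdims=True) + grand)
        ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))
        return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                     + k * (ms_cols - ms_err) / n)
    ```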

  13. Assessing communication quality of consultations in primary care: initial reliability of the Global Consultation Rating Scale, based on the Calgary-Cambridge Guide to the Medical Interview

    PubMed Central

    Burt, Jenni; Abel, Gary; Elmore, Natasha; Campbell, John; Roland, Martin; Benson, John; Silverman, Jonathan

    2014-01-01

    Objectives To investigate initial reliability of the Global Consultation Rating Scale (GCRS: an instrument to assess the effectiveness of communication across an entire doctor–patient consultation, based on the Calgary-Cambridge guide to the medical interview), in simulated patient consultations. Design Multiple ratings of simulated general practitioner (GP)–patient consultations by trained GP evaluators. Setting UK primary care. Participants 21 GPs and six trained GP evaluators. Outcome measures GCRS score. Methods 6 GP raters used GCRS to rate randomly assigned video recordings of GP consultations with simulated patients. Each of the 42 consultations was rated separately by four raters. We considered whether a fixed difference between scores had the same meaning at all levels of performance. We then examined the reliability of GCRS using mixed linear regression models. We augmented our regression model to also examine whether there were systematic biases between the scores given by different raters and to look for possible order effects. Results Assessing the communication quality of individual consultations, GCRS achieved a reliability of 0.73 (95% CI 0.44 to 0.79) for two raters, 0.80 (0.54 to 0.85) for three and 0.85 (0.61 to 0.88) for four. We found an average difference of 1.65 (on a 0–10 scale) in the scores given by the least and most generous raters: adjusting for this evaluator bias increased reliability to 0.78 (0.53 to 0.83) for two raters; 0.85 (0.63 to 0.88) for three and 0.88 (0.69 to 0.91) for four. There were considerable order effects, with later consultations (after 15–20 ratings) receiving, on average, scores more than one point higher on a 0–10 scale. Conclusions GCRS shows good reliability with three raters assessing each consultation. We are currently developing the scale further by assessing a large sample of real-world consultations. PMID:24604483

  14. Cyclic Solvent Vapor Annealing for Rapid, Robust Vertical Orientation of Features in BCP Thin Films

    NASA Astrophysics Data System (ADS)

    Paradiso, Sean; Delaney, Kris; Fredrickson, Glenn

    2015-03-01

    Methods for reliably controlling block copolymer self-assembly have seen much attention over the past decade as new applications for nanostructured thin films emerge in the fields of nanopatterning and lithography. While solvent-assisted annealing techniques are established as flexible and simple methods for achieving long-range order, solvent annealing alone exhibits a very weak thermodynamic driving force for vertically orienting domains with respect to the free surface. To address the need for oriented features, we have investigated a cyclic solvent vapor annealing (CSVA) approach that combines the mobility benefits of solvent annealing with the selective stress experienced by structures oriented parallel to the free surface as the film is repeatedly swollen with solvent and dried. Using dynamical self-consistent field theory (DSCFT) calculations, we establish the conditions under which the method significantly outperforms both static and cyclic thermal annealing, and we implicate the orientation selection as a consequence of the swelling/deswelling process. Our results suggest that CSVA may prove to be a potent method for the rapid formation of highly ordered, vertically oriented features in block copolymer thin films.

  15. Improving Low-dose Cardiac CT Images based on 3D Sparse Representation

    NASA Astrophysics Data System (ADS)

    Shi, Luyao; Hu, Yining; Chen, Yang; Yin, Xindao; Shu, Huazhong; Luo, Limin; Coatrieux, Jean-Louis

    2016-03-01

    Cardiac computed tomography (CCT) is a reliable and accurate tool for the diagnosis of coronary artery diseases and is also frequently used in surgery guidance. Low-dose scans should be considered in order to alleviate the harm to patients caused by X-ray radiation. However, low-dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. In order to improve cardiac LDCT image quality, a 3D sparse representation-based processing (3D SR) is proposed that exploits the sparsity and regularity of 3D anatomical features in CCT. The proposed method was evaluated in a clinical study of 14 patients, and its performance was compared to 2D sparse representation-based processing (2D SR) and the state-of-the-art noise reduction algorithm BM4D. The visual, quantitative and qualitative assessment results show that the proposed approach leads to effective noise/artifact suppression and detail preservation. Compared to the other two tested methods, the 3D SR method obtains results with image quality closest to the reference standard-dose CT (SDCT) images.

  16. High-Performance, Reliable Multicasting: Foundations for Future Internet Groupware Applications

    NASA Technical Reports Server (NTRS)

    Callahan, John; Montgomery, Todd; Whetten, Brian

    1997-01-01

    Network protocols that provide efficient, reliable, and totally-ordered message delivery to large numbers of users will be needed to support many future Internet applications. The Reliable Multicast Protocol (RMP) is implemented on top of IP multicast to facilitate reliable transfer of data for replicated databases and groupware applications that will emerge on the Internet over the next decade. This paper explores some of the basic questions and applications of reliable multicasting in the context of the development and analysis of RMP.

  17. Spectroscopic and Spectrometric Methods Used for the Screening of Certain Herbal Food Supplements Suspected of Adulteration

    PubMed Central

    Mateescu, Cristina; Popescu, Anca Mihaela; Radu, Gabriel Lucian; Onisei, Tatiana; Raducanu, Adina Elena

    2017-01-01

    Purpose: This study was carried out in order to find a reliable method for the fast detection of adulterated herbal food supplements with sexual enhancement claims. As some herbal products are advertised as "all natural", their "efficiency" is often increased by the addition of active pharmaceutical ingredients such as PDE-5 inhibitors, which can be a real health threat to the consumer. Methods: Adulterants potentially present in 50 herbal food supplements with sexual improvement claims were detected using 2 spectroscopic methods - Raman and Fourier Transform Infrared - known for their reliability, reproducibility, and easy sample preparation. The GC-MS technique was used to confirm the spectra of the potential adulterants. Results: About 22% (11 out of 50 samples) of the herbal food supplements with sexual enhancement claims analyzed by spectroscopic and spectrometric methods proved to be "enriched" with active pharmaceutical compounds such as sildenafil and two of its analogues, tadalafil, and phenolphthalein. The occurrence of phenolphthalein could be the reason for the non-relevant results obtained by the FTIR method in some samples. 91% of the adulterated herbal food supplements originated from China. Conclusion: The results of this screening highlighted the necessity for an accurate analysis of all alleged herbal aphrodisiacs on the Romanian market. This is the first such screening analysis carried out on herbal food supplements with sexual enhancement claims. PMID:28761827

  18. Spectroscopic and Spectrometric Methods Used for the Screening of Certain Herbal Food Supplements Suspected of Adulteration.

    PubMed

    Mateescu, Cristina; Popescu, Anca Mihaela; Radu, Gabriel Lucian; Onisei, Tatiana; Raducanu, Adina Elena

    2017-06-01

    Purpose: This study was carried out in order to find a reliable method for the fast detection of adulterated herbal food supplements with sexual enhancement claims. As some herbal products are advertised as "all natural", their "efficiency" is often increased by the addition of active pharmaceutical ingredients such as PDE-5 inhibitors, which can be a real health threat to the consumer. Methods: Adulterants potentially present in 50 herbal food supplements with sexual improvement claims were detected using 2 spectroscopic methods - Raman and Fourier Transform Infrared - known for their reliability, reproducibility, and easy sample preparation. The GC-MS technique was used to confirm the spectra of the potential adulterants. Results: About 22% (11 out of 50 samples) of the herbal food supplements with sexual enhancement claims analyzed by spectroscopic and spectrometric methods proved to be "enriched" with active pharmaceutical compounds such as sildenafil and two of its analogues, tadalafil, and phenolphthalein. The occurrence of phenolphthalein could be the reason for the non-relevant results obtained by the FTIR method in some samples. 91% of the adulterated herbal food supplements originated from China. Conclusion: The results of this screening highlighted the necessity for an accurate analysis of all alleged herbal aphrodisiacs on the Romanian market. This is the first such screening analysis carried out on herbal food supplements with sexual enhancement claims.

  19. A Study on Micropipetting Detection Technology of Automatic Enzyme Immunoassay Analyzer.

    PubMed

    Shang, Zhiwu; Zhou, Xiangping; Li, Cheng; Tsai, Sang-Bing

    2018-04-10

    In order to improve the accuracy and reliability of micropipetting, a method of micropipette detection and calibration was proposed that combines dynamic pressure monitoring of the pipetting process with quantitative identification of the pipetted volume by image processing. First, a normalized pressure model for the pipetting process was established from the kinematic model of the pipetting operation and corrected experimentally. By monitoring the pipetting pressure and its first derivative in real time, using a segmented double-threshold method as the fault evaluation criterion, and processing the pressure sensor data with Kalman filtering, the accuracy of fault diagnosis is improved. When a fault occurs, an image of the pipette tip is captured by a camera, the boundary of the liquid region is extracted by the background contrast method, and the liquid volume in the tip is obtained from the geometric characteristics of the tip. The pipetting deviation is then fed back to the automatic pipetting module and corrected. Titration tests show that combining the segmented pipetting kinematic model with double-threshold pressure monitoring can effectively judge and classify pipetting faults in real time, and that closed-loop adjustment of the pipetted volume can effectively improve the accuracy and reliability of the pipetting system.
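
    A minimal Python sketch of the two monitoring ingredients, a scalar Kalman filter on the pressure trace and a double-threshold check on the filtered signal and its first derivative, is given below. The noise variances and limits are hypothetical placeholders, and the paper's segmented, model-based criterion is reduced here to fixed bounds.

    ```python
    import numpy as np

    def kalman_smooth(p, q=1e-4, r=1e-2):
        """Scalar random-walk Kalman filter for the pipetting pressure
        trace; process variance q and measurement variance r are
        illustrative values, not the paper's tuned parameters."""
        x, P = float(p[0]), 1.0
        out = np.empty(len(p))
        for i, z in enumerate(p):
            P += q                   # predict step (random-walk state)
            K = P / (P + r)          # Kalman gain
            x += K * (z - x)         # measurement update
            P *= (1.0 - K)
            out[i] = x
        return out

    def detect_fault(p, dt, p_lim, dp_lim):
        """Double-threshold criterion on the filtered pressure and its
        first derivative; True marks samples flagged as faults."""
        ps = kalman_smooth(np.asarray(p, dtype=float))
        dp = np.gradient(ps, dt)
        return (np.abs(ps) > p_lim) | (np.abs(dp) > dp_lim)
    ```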

  20. SPITZER SECONDARY ECLIPSE DEPTHS WITH MULTIPLE INTRAPIXEL SENSITIVITY CORRECTION METHODS OBSERVATIONS OF WASP-13b, WASP-15b, WASP-16b, WASP-62b, AND HAT-P-22b

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kilpatrick, Brian M.; Tucker, Gregory S.; Lewis, Nikole K.

    2017-01-01

    We measure the 4.5 μm thermal emission of five transiting hot Jupiters, WASP-13b, WASP-15b, WASP-16b, WASP-62b, and HAT-P-22b using channel 2 of the Infrared Array Camera (IRAC) on the Spitzer Space Telescope. Significant intrapixel sensitivity variations in Spitzer IRAC data require careful correction in order to achieve precision on the order of several hundred parts per million (ppm) for the measurement of exoplanet secondary eclipses. We determine eclipse depths by first correcting the raw data using three independent data reduction methods. The Pixel Gain Map (PMAP), Nearest Neighbors (NNBR), and Pixel Level Decorrelation (PLD) each correct for the intrapixel sensitivity effect in Spitzer photometric time-series observations. The results from each methodology are compared against each other to establish if they reach a statistically equivalent result in every case and to evaluate their ability to minimize uncertainty in the measurement. We find that all three methods produce reliable results. For every planet examined here NNBR and PLD produce results that are in statistical agreement. However, the PMAP method appears to produce results in slight disagreement in cases where the stellar centroid is not kept consistently on the most well characterized area of the detector. We evaluate the ability of each method to reduce the scatter in the residuals as well as in the correlated noise in the corrected data. The NNBR and PLD methods consistently minimize both white and red noise levels and should be considered reliable and consistent. The planets in this study span equilibrium temperatures from 1100 to 2000 K and have brightness temperatures that require either high albedo or efficient recirculation. However, it is possible that other processes such as clouds or disequilibrium chemistry may also be responsible for producing these brightness temperatures.

  1. Spitzer Secondary Eclipse Depths with Multiple Intrapixel Sensitivity Correction Methods Observations of WASP-13b, WASP-15b, WASP-16b, WASP-62b, and HAT-P-22b

    NASA Astrophysics Data System (ADS)

    Kilpatrick, Brian M.; Lewis, Nikole K.; Kataria, Tiffany; Deming, Drake; Ingalls, James G.; Krick, Jessica E.; Tucker, Gregory S.

    2017-01-01

    We measure the 4.5 μm thermal emission of five transiting hot Jupiters, WASP-13b, WASP-15b, WASP-16b, WASP-62b, and HAT-P-22b using channel 2 of the Infrared Array Camera (IRAC) on the Spitzer Space Telescope. Significant intrapixel sensitivity variations in Spitzer IRAC data require careful correction in order to achieve precision on the order of several hundred parts per million (ppm) for the measurement of exoplanet secondary eclipses. We determine eclipse depths by first correcting the raw data using three independent data reduction methods. The Pixel Gain Map (PMAP), Nearest Neighbors (NNBR), and Pixel Level Decorrelation (PLD) each correct for the intrapixel sensitivity effect in Spitzer photometric time-series observations. The results from each methodology are compared against each other to establish if they reach a statistically equivalent result in every case and to evaluate their ability to minimize uncertainty in the measurement. We find that all three methods produce reliable results. For every planet examined here NNBR and PLD produce results that are in statistical agreement. However, the PMAP method appears to produce results in slight disagreement in cases where the stellar centroid is not kept consistently on the most well characterized area of the detector. We evaluate the ability of each method to reduce the scatter in the residuals as well as in the correlated noise in the corrected data. The NNBR and PLD methods consistently minimize both white and red noise levels and should be considered reliable and consistent. The planets in this study span equilibrium temperatures from 1100 to 2000 K and have brightness temperatures that require either high albedo or efficient recirculation. However, it is possible that other processes such as clouds or disequilibrium chemistry may also be responsible for producing these brightness temperatures.
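
    Of the three methods, Pixel Level Decorrelation has the simplest core: regress the aperture flux on the normalized per-pixel time series, so that intrapixel systematics (which live in the pixel ratios rather than the astrophysics) can be removed. The Python sketch below shows that regression in isolation; a real analysis fits the eclipse model and a temporal ramp simultaneously with the pixel coefficients.

    ```python
    import numpy as np

    def pld_detrend(flux, pixels):
        """Sketch of PLD detrending. `flux` is the aperture photometry time
        series (n_t,), `pixels` the raw per-pixel time series (n_t x n_pix)
        of the pixels covering the PSF. Normalizing each frame's pixels by
        their sum isolates the intrapixel sensitivity signal."""
        P = pixels / pixels.sum(axis=1, keepdims=True)  # normalized pixels
        A = np.column_stack([P, np.ones(len(flux))])    # design matrix + offset
        coef, *_ = np.linalg.lstsq(A, flux, rcond=None)
        return flux - A @ coef + flux.mean()            # systematics removed
    ```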

  2. Method matters: Understanding diagnostic reliability in DSM-IV and DSM-5.

    PubMed

    Chmielewski, Michael; Clark, Lee Anna; Bagby, R Michael; Watson, David

    2015-08-01

    Diagnostic reliability is essential for the science and practice of psychology, in part because reliability is necessary for validity. Recently, the DSM-5 field trials documented lower diagnostic reliability than past field trials and the general research literature, resulting in substantial criticism of the DSM-5 diagnostic criteria. Rather than indicating specific problems with DSM-5, however, the field trials may have revealed long-standing diagnostic issues that have been hidden due to a reliance on audio/video recordings for estimating reliability. We estimated the reliability of DSM-IV diagnoses using both the standard audio-recording method and the test-retest method used in the DSM-5 field trials, in which different clinicians conduct separate interviews. Psychiatric patients (N = 339) were diagnosed using the SCID-I/P; 218 were diagnosed a second time by an independent interviewer. Diagnostic reliability using the audio-recording method (N = 49) was "good" to "excellent" (M κ = .80) and comparable to the DSM-IV field trials estimates. Reliability using the test-retest method (N = 218) was "poor" to "fair" (M κ = .47) and similar to DSM-5 field-trials' estimates. Despite low test-retest diagnostic reliability, self-reported symptoms were highly stable. Moreover, there was no association between change in self-report and change in diagnostic status. These results demonstrate the influence of method on estimates of diagnostic reliability. (c) 2015 APA, all rights reserved.
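
    The reliability statistic behind these comparisons is Cohen's kappa, observed agreement corrected for chance agreement. A minimal Python version for two raters follows; the toy ratings are invented for illustration.

    ```python
    import numpy as np

    def cohens_kappa(a, b):
        """Cohen's kappa for two diagnostic ratings of the same patients:
        (observed agreement - chance agreement) / (1 - chance agreement)."""
        a, b = np.asarray(a), np.asarray(b)
        labels = np.union1d(a, b)
        po = np.mean(a == b)                                       # observed
        pe = sum((a == l).mean() * (b == l).mean() for l in labels)  # chance
        return (po - pe) / (1.0 - pe)

    # Example: two clinicians' presence/absence calls for one diagnosis
    rater1 = [1, 0, 1, 1, 0, 0, 1, 0]
    rater2 = [1, 0, 0, 1, 0, 1, 1, 0]
    print(round(cohens_kappa(rater1, rater2), 2))  # 0.5
    ```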

  3. Confirmatory Factor Analysis of the Malay Version of the Confusion, Hubbub and Order Scale (CHAOS-6) among Myocardial Infarction Survivors in a Malaysian Cardiac Healthcare Facility.

    PubMed

    Ganasegeran, Kurubaran; Selvaraj, Kamaraj; Rashid, Abdul

    2017-08-01

    The six-item Confusion, Hubbub and Order Scale (CHAOS-6) has been validated as a reliable tool to measure levels of household disorder. We aimed to investigate the goodness of fit and reliability of a new Malay version of the CHAOS-6. The original English version of the CHAOS-6 underwent forward-backward translation into the Malay language. The finalised Malay version was administered to 105 myocardial infarction survivors in a Malaysian cardiac health facility. We performed confirmatory factor analyses (CFAs) using structural equation modelling. A path diagram and fit statistics were produced to determine the Malay version's validity, and composite reliability was tested to determine the scale's reliability. All 105 myocardial infarction survivors participated in the study. The CFA yielded a six-item, one-factor model with excellent fit statistics. Composite reliability for the single-factor CHAOS-6 was 0.65, confirming that the scale is reliable for Malay speakers. The Malay version of the CHAOS-6 was thus reliable and showed the best fit statistics for our study sample. We therefore offer a simple, brief, validated, reliable and novel instrument to measure chaos, the Skala Kecelaruan, Keriuhan & Tertib Terubahsuai (CHAOS-6), for the Malaysian population.

  4. Optimized production planning model for a multi-plant cultivation system under uncertainty

    NASA Astrophysics Data System (ADS)

    Ke, Shunkui; Guo, Doudou; Niu, Qingliang; Huang, Danfeng

    2015-02-01

    An inexact multi-constraint programming model under uncertainty was developed by incorporating a production plan algorithm into the crop production optimization framework under the multi-plant collaborative cultivation system. In the production plan, orders from the customers are assigned to a suitable plant under the constraints of plant capabilities and uncertainty parameters to maximize profit and achieve customer satisfaction. The developed model and solution method were applied to a case study of a multi-plant collaborative cultivation system to verify its applicability. As determined in the case analysis involving different orders from customers, the period of plant production planning and the interval between orders can significantly affect system benefits. Through the analysis of uncertain parameters, reliable and practical decisions can be generated using the suggested model of a multi-plant collaborative cultivation system.

  5. A hybrid approach to near-optimal launch vehicle guidance

    NASA Technical Reports Server (NTRS)

    Leung, Martin S. K.; Calise, Anthony J.

    1992-01-01

    This paper evaluates a proposed hybrid analytical/numerical approach to launch-vehicle guidance for ascent to orbit injection. The feedback-guidance approach is based on a piecewise nearly analytic zero-order solution evaluated using a collocation method. The zero-order solution is then improved through a regular perturbation analysis, wherein the neglected dynamics are corrected in the first-order term. For real-time implementation, the guidance approach requires solving a set of small dimension nonlinear algebraic equations and performing quadrature. Assessment of performance and reliability are carried out through closed-loop simulation for a vertically launched 2-stage heavy-lift capacity vehicle to a low earth orbit. The solutions are compared with optimal solutions generated from a multiple shooting code. In the example the guidance approach delivers over 99.9 percent of optimal performance and terminal constraint accuracy.

  6. Projection-Based Reduced Order Modeling for Spacecraft Thermal Analysis

    NASA Technical Reports Server (NTRS)

    Qian, Jing; Wang, Yi; Song, Hongjun; Pant, Kapil; Peabody, Hume; Ku, Jentung; Butler, Charles D.

    2015-01-01

    This paper presents a mathematically rigorous, subspace projection-based reduced order modeling (ROM) methodology and an integrated framework to automatically generate reduced order models for spacecraft thermal analysis. Two key steps in the reduced order modeling procedure are described: (1) the acquisition of a full-scale spacecraft model in the ordinary differential equation (ODE) and differential algebraic equation (DAE) form to resolve its dynamic thermal behavior; and (2) the ROM to markedly reduce the dimension of the full-scale model. Specifically, proper orthogonal decomposition (POD) in conjunction with discrete empirical interpolation method (DEIM) and trajectory piece-wise linear (TPWL) methods are developed to address the strong nonlinear thermal effects due to coupled conductive and radiative heat transfer in the spacecraft environment. Case studies using NASA-relevant satellite models are undertaken to verify the capability and to assess the computational performance of the ROM technique in terms of speed-up and error relative to the full-scale model. ROM exhibits excellent agreement in spatiotemporal thermal profiles (<0.5% relative error in pertinent time scales) along with salient computational acceleration (up to two orders of magnitude speed-up) over the full-scale analysis. These findings establish the feasibility of ROM to perform rational and computationally affordable thermal analysis, develop reliable thermal control strategies for spacecraft, and greatly reduce the development cycle times and costs.
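
    The POD step can be illustrated compactly: collect solution snapshots, take their SVD, and keep the leading left singular vectors as the reduced basis. The Python sketch below shows only this step; the DEIM/TPWL treatment of the nonlinear radiative terms described above is a separate procedure.

    ```python
    import numpy as np

    def pod_basis(snapshots, energy=0.999):
        """Proper orthogonal decomposition of a snapshot matrix
        (n_dof x n_snapshots). Modes are kept until the requested fraction
        of the 'energy' (cumulative squared singular values) is captured;
        the threshold here is an illustrative choice."""
        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        frac = np.cumsum(s ** 2) / np.sum(s ** 2)
        r = int(np.searchsorted(frac, energy)) + 1
        return U[:, :r]              # reduced basis Phi, with x ~ Phi @ x_r
    ```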

  7. 78 FR 71448 - Regional Reliability Standard BAL-002-WECC-2-Contingency Reserve

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-29

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission 18 CFR Part 40 [Docket No. RM13-13-000; Order No. 789] Regional Reliability Standard BAL-002-WECC-2--Contingency Reserve AGENCY: Federal Energy..., and Tony Clark. Regional Reliability Standard BAL-002-WECC-2--Contingency Reserve Docket No. RM13-13...

  8. Estimating Ordinal Reliability for Likert-Type and Ordinal Item Response Data: A Conceptual, Empirical, and Practical Guide

    ERIC Educational Resources Information Center

    Gadermann, Anne M.; Guhn, Martin; Zumbo, Bruno D.

    2012-01-01

    This paper provides a conceptual, empirical, and practical guide for estimating ordinal reliability coefficients for ordinal item response data (also referred to as Likert, Likert-type, ordered categorical, or rating scale item responses). Conventionally, reliability coefficients, such as Cronbach's alpha, are calculated using a Pearson…
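
    The conventional coefficient the paper starts from, Cronbach's alpha, is easy to state in code. The sketch below computes the standard (Pearson-based) alpha; the ordinal alpha the authors recommend additionally requires estimating a polychoric correlation matrix, which is not reproduced here.

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """Conventional Cronbach's alpha for an (n_respondents x k_items)
        response matrix: alpha = k/(k-1) * (1 - sum(item variances) /
        variance of total scores)."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1.0 - item_vars / total_var)
    ```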

  9. 78 FR 58295 - Commission Information Collection Activities (FERC-725A); Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-23

    ... submitting the information collection FERC-725A, Mandatory Reliability Standards for the Bulk Power System... collection analysis associated with its approval of Reliability Standard EOP-004-2, in an order published in... solicitation and is making this notation in its submission to OMB. \\1\\ North American Electric Reliability Corp...

  10. On the feasibility of sub-100 nm rad emittance measurement in plasma accelerators using permanent magnetic quadrupoles

    NASA Astrophysics Data System (ADS)

    Li, F.; Wu, Y. P.; Nie, Z.; Guo, B.; Zhang, X. H.; Huang, S.; Zhang, J.; Cheng, Z.; Ma, Y.; Fang, Y.; Zhang, C. J.; Wan, Y.; Xu, X. L.; Hua, J. F.; Pai, C. H.; Lu, W.; Gu, Y. Q.

    2018-01-01

    Low emittance (sub-100 nm rad) measurement of electron beams in plasma accelerators has been a challenging issue for some time. Among various measurement schemes, measurements based on a single-shot quad-scan using permanent magnetic quadrupoles (PMQs) have recently been reported, with emittance as low as ∼200 nm rad (Weingartner 2012 Phys. Rev. Spec. Top. Accel. Beams 15 111302). However, the accuracy and reliability of this method have not been systematically analyzed. Such analysis is critical for evaluating the potential of sub-100 nm rad emittance measurement using any scheme. In this paper, we analyze the effects of various nonideal physical factors on the accuracy and reliability of the PMQ method. These factors include aberration induced by high-order fields, PMQ misalignment, and angular fluctuation of incoming beams. Our conclusions are as follows: (i) the aberrations caused by high-order fields of PMQs are relatively weak for low emittance measurement as long as the PMQs are properly constructed; a series of PMQs were manufactured and measured at Tsinghua University, and numerical simulations showed their high-order field effects to be negligible. (ii) The largest measurement error of emittance is caused by the angular misalignment between PMQs. For low emittance measurement of ∼100 MeV beams, an angular alignment accuracy of 0.1° is necessary; this requirement can be eased for beams with higher energies. (iii) The transverse position misalignment of PMQs and the angular fluctuation of incoming beams only cause a translational and rotational shift of the measured signals, respectively, and therefore have no effect on the measured value of emittance. (iv) The spatial resolution and efficiency of the detection system need to be properly designed to guarantee the accuracy of sub-100 nm rad emittance measurement.

  11. Boundary Avoidance Tracking for Instigating Pilot Induced Oscillations

    NASA Technical Reports Server (NTRS)

    Craun, Robert W.; Acosta, Diana M.; Beard, Steven D.; Hardy, Gordon H.; Leonard, Michael W.; Weinstein, Michael

    2013-01-01

    In order to advance research in the area of pilot induced oscillations, a reliable method to create PIOs in a simulated environment is necessary. Using a boundary avoidance tracking task, researchers performing an evaluation of control systems were able to create PIO events in 42% of cases using a nominal aircraft, and 91% of cases using an aircraft with reduced actuator rate limits. The simulator evaluation took place in the NASA Ames Vertical Motion Simulator, a high-fidelity motion-based simulation facility.

  12. Recent advances in phase shifted time averaging and stroboscopic interferometry

    NASA Astrophysics Data System (ADS)

    Styk, Adam; Józwik, Michał

    2016-08-01

    Classical Time Averaging and Stroboscopic Interferometry are widely used for MEMS/MOEMS dynamic behavior investigations. Unfortunately, both methods require extensive measurement and data-processing strategies in order to evaluate the maximum vibration amplitude of an object at a given load. In this paper, modified data-processing strategies for both techniques are introduced. These modifications allow fast and reliable calculation of the sought value without additional complication of the measurement systems. Both approaches are discussed and experimentally verified.

  13. Double layer drainage performance of porous asphalt pavement

    NASA Astrophysics Data System (ADS)

    Ji, Yangyang; Xie, Jianguang; Liu, Mingxi

    2018-06-01

    In order to improve the design reliability of double-layer porous asphalt pavement, the 3D seepage finite element method was used to study the drainage capacity of double-layer PAC pavements with different geometric parameters. The study revealed the effects of drainage length, slope, permeability coefficient and structural design on drainage capacity. This research provides a reference for the design of double-layer porous asphalt pavement in areas of different rainfall intensity, and guidance for related engineering design.

  14. Homotopy perturbation method: a versatile tool to evaluate linear and nonlinear fuzzy Volterra integral equations of the second kind.

    PubMed

    Narayanamoorthy, S; Sathiyapriya, S P

    2016-01-01

    In this article, we focus on linear and nonlinear fuzzy Volterra integral equations of the second kind and propose a numerical scheme using the homotopy perturbation method (HPM) to obtain fuzzy approximate solutions to them. To facilitate the benefits of this proposal, an algorithmic form of the HPM is also designed. In order to illustrate the potential of the approach, two test problems are offered; the obtained numerical results are compared with the existing exact solutions and depicted as plots to reveal the method's precision and reliability.

  15. Text mining by Tsallis entropy

    NASA Astrophysics Data System (ADS)

    Jamaati, Maryam; Mehri, Ali

    2018-01-01

    Long-range correlations between the elements of natural languages enable them to convey very complex information. The complex structure of human language, as a manifestation of natural languages, motivates us to apply nonextensive statistical mechanics to text mining. Tsallis entropy appropriately ranks terms' relevance to a document's subject by taking advantage of their spatial correlation length. We apply this statistical concept as a new powerful word-ranking metric in order to extract the keywords of a single document. We carry out an experimental evaluation, which shows the capability of the presented method for keyword extraction. We find that Tsallis entropy has reliable word-ranking performance, at the level of the best previous ranking methods.
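
    A toy version of the idea can be written in a few lines of Python: score each word by the Tsallis entropy of its occurrence distribution over document segments, so that spatially clustered terms rank highest. The values of q and the number of segments are illustrative, and the published metric also weights overall frequency.

    ```python
    import numpy as np

    def tsallis_keyword_ranking(words, q=1.5, n_bins=20):
        """Rank words by the Tsallis entropy S_q = (1 - sum_i p_i^q)/(q - 1)
        of their occurrence distribution over n_bins equal document
        segments. Low entropy = strongly clustered occurrences, i.e. a
        keyword candidate. Rare words should be filtered out beforehand."""
        n = len(words)
        positions = {}
        for i, w in enumerate(words):
            positions.setdefault(w, []).append(i)
        scores = {}
        for w, pos in positions.items():
            bins = np.minimum(np.array(pos) * n_bins // n, n_bins - 1)
            p = np.bincount(bins, minlength=n_bins) / len(pos)
            p = p[p > 0]
            scores[w] = (1.0 - np.sum(p ** q)) / (q - 1.0)
        return sorted(scores, key=scores.get)  # most clustered words first
    ```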

  16. Frequency control of wind turbine in power system

    NASA Astrophysics Data System (ADS)

    Xu, Huawei

    2018-06-01

    In order to improve the overall frequency stability of the power system, automatic generation control and secondary frequency regulation were applied. Automatic generation control was introduced into power generation planning, and a doubly-fed wind generator power regulation model suitable for secondary frequency regulation was established. The results showed that this method satisfies the basic requirements of frequency regulation control for power systems with large-scale wind power access and improves the stability and reliability of power system operation. The proposed frequency control method and strategy are relatively simple, the effect is significant, and the system frequency quickly reaches a steady state, so the approach is worth applying and promoting.

  17. Tool and Fixture Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graham, Mark W.

    2015-07-28

    In a manufacturing process, a need is identified and a product is created to fill this need. While design and engineering of the final product is important, the tools and fixtures that aid in the creation of the final product are just as important, if not more so. Power supplies assembled at the TA-55 PF-5 have been designed by an excellent engineering team. The task in PF-5 now is to ensure that all steps of the assembly and manufacturing process can be completed safely, reliably, and in a quality repeatable manner. One of these process steps involves soldering fine wires to an electrical connector. During the process development phase, the method of soldering included placing the power supply in a vice in order to manipulate it into a position conducive to soldering. This method is unacceptable from a reliability, repeatability, and ergonomic standpoint. To combat these issues, a fixture was designed to replace the current method. To do so, a twelve step engineering design process was used to create the fixture that would provide a solution to a multitude of problems, and increase the safety and efficiency of production.

  18. Retention-error patterns in complex alphanumeric serial-recall tasks.

    PubMed

    Mathy, Fabien; Varré, Jean-Stéphane

    2013-01-01

    We propose a new method based on an algorithm usually dedicated to DNA sequence alignment in order to both reliably score short-term memory performance on immediate serial-recall tasks and analyse retention-error patterns. There can be considerable confusion on how performance on immediate serial list recall tasks is scored, especially when the to-be-remembered items are sampled with replacement. We discuss the utility of sequence-alignment algorithms to compare the stimuli to the participants' responses. The idea is that deletion, substitution, translocation, and insertion errors, which are typical in DNA, are also typical putative errors in short-term memory (respectively omission, confusion, permutation, and intrusion errors). We analyse four data sets in which alphanumeric lists included a few (or many) repetitions. After examining the method on two simple data sets, we show that sequence alignment offers 1) a compelling method for measuring capacity in terms of chunks when many regularities are introduced in the material (third data set) and 2) a reliable estimator of individual differences in short-term memory capacity. This study illustrates the difficulty of arriving at a good measure of short-term memory performance, and also attempts to characterise the primary factors underpinning remembering and forgetting.
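
    The scoring idea can be made concrete with a standard Needleman-Wunsch global alignment between the presented and recalled lists, where gaps play the role of omissions/intrusions and mismatches the role of confusions. The Python sketch below uses illustrative scoring weights, not the study's calibrated ones.

    ```python
    def align_score(stimulus, response, gap=-1, match=1, mismatch=-1):
        """Needleman-Wunsch global alignment score between the presented
        list and the recalled list, the DNA-style alignment idea applied
        to serial recall."""
        m, n = len(stimulus), len(response)
        # DP table: best alignment score of each pair of prefixes
        D = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            D[i][0] = i * gap
        for j in range(1, n + 1):
            D[0][j] = j * gap
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                diag = D[i - 1][j - 1] + (match if stimulus[i - 1] == response[j - 1]
                                          else mismatch)
                D[i][j] = max(diag, D[i - 1][j] + gap, D[i][j - 1] + gap)
        return D[m][n]

    print(align_score("A7B2C9", "A7C9"))  # two omissions: 4 matches + 2 gaps = 2
    ```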

  19. What is on your mind? Using the perceptual cycle model and critical decision method to understand the decision-making process in the cockpit.

    PubMed

    Plant, Katherine L; Stanton, Neville A

    2013-01-01

    Aeronautical decision-making is complex as there is not always a clear coupling between the decision made and decision outcome. As such, there is a call for process-orientated decision research in order to understand why a decision made sense at the time it was made. Schema theory explains how we interact with the world using stored mental representations and forms an integral part of the perceptual cycle model (PCM); proposed here as a way to understand the decision-making process. This paper qualitatively analyses data from the critical decision method (CDM) based on the principles of the PCM. It is demonstrated that the approach can be used to understand a decision-making process and highlights how influential schemata can be at informing decision-making. The reliability of this approach is established, the general applicability is discussed and directions for future work are considered. This paper introduces the PCM, and the associated schema theory, as a framework to structure and explain data collected from the CDM. The reliability of both the method and coding scheme is addressed.

  20. Efficient purification and concentration of viruses from a large body of high turbidity seawater.

    PubMed

    Sun, Guowei; Xiao, Jinzhou; Wang, Hongming; Gong, Chaowen; Pan, Yingjie; Yan, Shuling; Wang, Yongjie

    2014-01-01

    Marine viruses are the most abundant entities in the ocean and play crucial roles in the marine ecological system. However, understanding viral diversity on a large scale depends on efficient and reliable viral purification and concentration techniques. Here, we report the development of an efficient method to purify and concentrate viruses from a large body of high-turbidity seawater. The developed method is characterized by high viral recovery efficiency, a high concentration factor, high viral particle densities and high throughput, and is reliable for viral concentration from high-turbidity seawater. Recovered viral particles were used directly for subsequent analysis by epifluorescence microscopy, transmission electron microscopy and metagenomic sequencing. Three points are essential for this method:•The sampled seawater (>150 L) was initially divided into two parts, a water fraction and a settled matter fraction, after natural sedimentation.•Both the viruses in the water fraction, concentrated by tangential flow filtration (TFF), and the viruses isolated from the settled matter fraction were considered to constitute the whole viral community in high-turbidity seawater.•The viral concentrates were re-concentrated using a centrifugal filter device in order to obtain a high density of viral particles.

  1. Monte Carlo simulations on marker grouping and ordering.

    PubMed

    Wu, J; Jenkins, J; Zhu, J; McCarty, J; Watson, C

    2003-08-01

    Four global algorithms, maximum likelihood (ML), sum of adjacent LOD scores (SALOD), sum of adjacent recombination fractions (SARF) and product of adjacent recombination fractions (PARF), and one approximation algorithm, seriation (SER), were compared for marker-ordering efficiency on correctly given linkage groups based on doubled haploid (DH) populations. The Monte Carlo simulation results indicated that the marker-ordering powers of the five methods were almost identical. Correlation coefficients between grouping power and ordering power were greater than 0.99, indicating that all these methods for marker ordering were reliable; the main problem for linkage analysis was therefore how to improve the grouping power. Since the SER approach provided the advantage of speed without losing ordering power, it was used for detailed simulations. For greater generality, multiple linkage groups were employed, and population size, linkage cutoff criterion, marker spacing pattern (even or uneven), and marker spacing distance (close or loose) were considered for obtaining acceptable grouping powers. Simulation results indicated that the grouping power was related to population size, marker spacing distance, and cutoff criterion. Generally, a large population size provided higher grouping power than a small population size, and closely linked markers provided higher grouping power than loosely linked markers. The cutoff criterion range for achieving acceptable grouping power and ordering power differed among cases; however, combining all situations in this study, a cutoff criterion ranging from 50 cM to 60 cM is recommended for achieving acceptable grouping and ordering power in different cases.
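
    For concreteness, the SARF criterion that two of the compared algorithms optimize can be written directly: sum the recombination fractions between adjacent markers in a candidate order and prefer the smallest sum. The brute-force search below is only workable for a handful of markers and stands in for the global and seriation searches actually used; the rf matrix is a toy example.

    ```python
    import numpy as np
    from itertools import permutations

    def sarf(order, rf):
        """Sum of adjacent recombination fractions for a candidate marker
        order, given a symmetric pairwise rf matrix; smaller is better."""
        return sum(rf[order[i], order[i + 1]] for i in range(len(order) - 1))

    def best_order_brute_force(rf):
        """Exhaustive SARF minimization, feasible only for a few markers;
        shown purely to make the criterion concrete."""
        k = rf.shape[0]
        return min(permutations(range(k)), key=lambda o: sarf(o, rf))

    # Toy rf matrix for 4 markers whose true order is 0-1-2-3
    rf = np.array([[0.00, 0.10, 0.18, 0.30],
                   [0.10, 0.00, 0.09, 0.22],
                   [0.18, 0.09, 0.00, 0.12],
                   [0.30, 0.22, 0.12, 0.00]])
    print(best_order_brute_force(rf))  # (0, 1, 2, 3) or its reverse
    ```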

  2. Processing time tolerance-based ACO algorithm for solving job-shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Luo, Yabo; Waden, Yongo P.

    2017-06-01

    Ordinarily, the Job Shop Scheduling Problem (JSSP) is known to be NP-hard, with uncertainty and complexity that cannot be handled by linear methods. Current studies on the JSSP therefore concentrate mainly on improving heuristics for its optimization. However, many obstacles to efficient optimization remain, namely low efficiency and poor reliability, which can easily trap the optimization process in local optima. To address this problem, this paper studies an Ant Colony Optimization (ACO) algorithm combined with constraint-handling tactics. The problem is subdivided into three parts: (1) analysis of processing time tolerance-based constraint features in the JSSP, performed with a constraint-satisfaction model; (2) satisfaction of the constraints through consistency technology and a constraint-spreading algorithm to improve the performance of the ACO algorithm, from which the JSSP model based on the improved ACO algorithm is constructed; and (3) demonstration of the effectiveness of the proposed method, in terms of reliability and efficiency, through comparative experiments on benchmark problems. The results obtained by the proposed method are better, and the technique can be applied to optimizing the JSSP.

  3. Transformation From a Conventional Clinical Microbiology Laboratory to Full Automation.

    PubMed

    Moreno-Camacho, José L; Calva-Espinosa, Diana Y; Leal-Leyva, Yoseli Y; Elizalde-Olivas, Dolores C; Campos-Romero, Abraham; Alcántar-Fernández, Jonathan

    2017-12-22

    To validate the performance, reproducibility, and reliability of BD automated instruments in order to establish a fully automated clinical microbiology laboratory, we used control strains and clinical samples to assess the accuracy, reproducibility, and reliability of the BD Kiestra WCA, BD Phoenix, and BD Bruker MALDI-Biotyper instruments and compared them with previously established conventional methods. The following processes were evaluated: sample inoculation and spreading, colony counts, sorting of cultures, antibiotic susceptibility testing, and microbial identification. The BD Kiestra recovered single colonies in less time than conventional methods (e.g., E. coli, 7 h vs 10 h, respectively), and agreement between the two methodologies was excellent for colony counts (κ=0.824) and sorting of cultures (κ=0.821). Antibiotic susceptibility tests performed with the BD Phoenix and disk diffusion showed 96.3% agreement between the two methods. Finally, we compared microbial identification by the BD Phoenix and Bruker MALDI-Biotyper and observed perfect agreement (κ=1) and identification at the species level for control strains. Together, these instruments allow us to process clinical urine samples in 36 h (effective time). The BD automated technologies have improved performance compared with conventional methods and are suitable for implementation in very busy microbiology laboratories. © American Society for Clinical Pathology 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  4. Multi-objective Decision Based Available Transfer Capability in Deregulated Power System Using Heuristic Approaches

    NASA Astrophysics Data System (ADS)

    Pasam, Gopi Krishna; Manohar, T. Gowri

    2016-09-01

    Determination of available transfer capability (ATC) requires experience, intuition and exact judgment in order to meet several significant aspects of the deregulated environment. On this basis, this paper proposes two heuristic approaches to compute ATC. The first proposed heuristic algorithm integrates five methods, namely continuation repeated power flow, repeated optimal power flow, radial basis function neural network, back propagation neural network and adaptive neuro-fuzzy inference system, to obtain ATC. The second proposed heuristic model is used to obtain multiple ATC values, from which a specific ATC value is selected based on a number of social, economic and deregulated-environment constraints and on specific applications such as optimization, on-line monitoring and ATC forecasting; this is known as multi-objective decision based optimal ATC. The validity of the results obtained through these proposed methods is scrupulously verified on various buses of the IEEE 24-bus reliable test system. The results and conclusions presented in this paper are useful for the planning, operation and maintenance of reliable power in any power system, and for its on-line monitoring in a deregulated environment. In this way, the proposed heuristic methods offer an effective integrated approach to assessing multi-objective ATC.

  5. Data driven CAN node reliability assessment for manufacturing system

    NASA Astrophysics Data System (ADS)

    Zhang, Leiming; Yuan, Yong; Lei, Yong

    2017-01-01

    The reliability of the Controller Area Network (CAN) is critical to the performance and safety of the system. However, direct bus-off time assessment tools are lacking in practice, owing to the inaccessibility of node information and the complexity of node interactions upon errors. In order to measure the mean time to bus-off (MTTB) of all the nodes, a novel data-driven node bus-off time assessment method for CAN networks is proposed that directly uses network error information. First, the corresponding network error event sequence for each node is constructed using multiple-layer network error information. Then, a generalized zero-inflated Poisson process (GZIP) model is established for each node based on the error event sequence. Finally, a stochastic model is constructed to predict the MTTB of the node. Accelerated case studies with different error injection rates were conducted on a laboratory network to demonstrate the proposed method, with the network errors generated by a computer-controlled error injection system. Experimental results show that the MTTB of nodes predicted by the proposed method agrees well with observations in the case studies. The proposed data-driven node time-to-bus-off assessment method for CAN networks can successfully predict the MTTB of nodes by directly using network error event data.
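
    As a sketch of the modelling step, the basic (non-generalized) zero-inflated Poisson likelihood for per-window error counts can be fitted by direct optimization, as below. The count data and starting values are toy placeholders; the paper's GZIP process and its MTTB prediction add further structure not shown here.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import gammaln

    def zip_nll(params, counts):
        """Negative log-likelihood of a plain zero-inflated Poisson: a
        window yields zero errors with probability pi, otherwise counts
        follow Poisson(lam)."""
        pi = 1.0 / (1.0 + np.exp(-params[0]))   # logit-constrained weight
        lam = np.exp(params[1])                 # log-constrained rate
        counts = np.asarray(counts)
        zero = counts == 0
        ll0 = np.log(pi + (1.0 - pi) * np.exp(-lam)) * zero.sum()
        pos = counts[~zero]
        llp = np.sum(np.log(1.0 - pi) - lam + pos * np.log(lam)
                     - gammaln(pos + 1))
        return -(ll0 + llp)

    # Toy error counts per observation window on one CAN node
    counts = [0, 0, 1, 0, 3, 0, 0, 2, 0, 1]
    fit = minimize(zip_nll, x0=np.zeros(2), args=(counts,), method="Nelder-Mead")
    pi_hat, lam_hat = 1 / (1 + np.exp(-fit.x[0])), np.exp(fit.x[1])
    ```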

  6. RELIABILITY AND VALIDITY OF A MODIFIED ISOMETRIC DYNAMOMETER IN THE ASSESSMENT OF MUSCULAR PERFORMANCE IN INDIVIDUALS WITH ANTERIOR CRUCIATE LIGAMENT RECONSTRUCTION

    PubMed Central

    de Vasconcelos, Rodrigo Antunes; Bevilaqua-Grossi, Débora; Shimano, Antonio Carlos; Paccola, Cleber Jansen; Salvini, Tânia Fátima; Prado, Christiane Lanatovits; Junior, Wilson A. Mello

    2015-01-01

    Objectives: The aim of this study was to evaluate the reliability and validity of a modified isometric dynamometer (MID) in measuring performance deficits of the knee extensor and flexor muscles in normal individuals and in those with ACL reconstructions. Methods: Sixty male subjects were invited to participate in the study and were divided into three groups of 20 subjects each: a control group (GC), a group of individuals with ACL reconstruction with patellar tendon graft (GTP), and a group of individuals with ACL reconstruction with hamstrings graft (GTF). All individuals performed isometric tests on the MID; the muscular strength deficits collected were subsequently compared with tests performed on the Biodex System 3 operating in isometric mode and in isokinetic mode at speeds of 60°/s and 180°/s. Intraclass correlation coefficients (ICCs) were calculated to assess MID reliability, while specificity, sensitivity and Kappa consistency coefficients were calculated to assess the MID's validity in detecting muscular deficits; intra- and intergroup comparisons across the four strength tests were performed using ANOVA. Results: The modified isometric dynamometer (MID) showed excellent reliability and good validity in the assessment of the performance of the knee extensor and flexor muscle groups. In the comparison between groups, the GTP showed significantly greater deficits than the GTF and GC groups. Conclusion: Isometric dynamometers connected to mechanotherapy equipment could be an alternative option for collecting data on performance deficits of the extensor and flexor muscle groups of the knee in subjects with ACL reconstruction. PMID:27004175

  7. Reliability of Wind Speed Data from Satellite Altimeter to Support Wind Turbine Energy

    NASA Astrophysics Data System (ADS)

    Uti, M. N.; Din, A. H. M.; Omar, A. H.

    2017-10-01

    Satellite altimetry has proven to be one of the important tools for providing good-quality information in oceanographic studies. Nowadays, many countries have begun implementing wind energy as one of their renewable energy sources for electric power generation. Many wind speed studies have been conducted in Malaysia using conventional methods and scientific techniques, such as anemometers and volunteer observing ships (VOS), to obtain wind speed data to support the development of renewable energy. However, these conventional methods have limitations, such as limited spatial and temporal coverage and a lack of continuity in data sharing by VOS members. Thus, the aim of this research is to determine the reliability of wind speed data from multi-mission satellite altimetry to support wind energy potential in Malaysian seas. The wind speed data are derived from nine satellite altimeters covering the years 1993 to 2016. To validate the reliability of the altimeter wind speeds, they are compared with wind speed data from ground-truth buoys located off Sabah and Sarawak. The validation is carried out in terms of correlation, root mean square error (RMSE), and satellite track analysis. Both techniques show good agreement, with correlation coefficients of 0.7976 and 0.6148 for the points located in the Sabah and Sarawak seas, respectively. It can be concluded that wind speed data from multi-mission satellite altimetry are sufficiently reliable to support renewable energy assessment.
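
    The validation statistics named above are straightforward to reproduce; a minimal sketch with hypothetical co-located wind speeds:

        import numpy as np

        def validate(altimeter, buoy):
            # Pearson correlation and RMSE between co-located wind speeds.
            a, b = np.asarray(altimeter, float), np.asarray(buoy, float)
            r = np.corrcoef(a, b)[0, 1]
            rmse = np.sqrt(np.mean((a - b) ** 2))
            return r, rmse

        # Hypothetical co-located wind speeds (m/s).
        alt = [5.2, 7.8, 6.1, 9.4, 4.3, 8.0]
        buoy = [5.0, 7.5, 6.6, 9.1, 4.8, 7.6]
        r, rmse = validate(alt, buoy)
        print(f"r = {r:.4f}, RMSE = {rmse:.2f} m/s")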

  8. MODFLOW 2000 Head Uncertainty, a First-Order Second Moment Method

    USGS Publications Warehouse

    Glasgow, H.S.; Fortney, M.D.; Lee, J.; Graettinger, A.J.; Reeves, H.W.

    2003-01-01

    A computationally efficient method to estimate the variance and covariance in piezometric head results computed through MODFLOW 2000 using a first-order second moment (FOSM) approach is presented. This methodology employs a first-order Taylor series expansion to combine model sensitivity with uncertainty in geologic data. MODFLOW 2000 is used to calculate both the ground water head and the sensitivity of head to changes in input data. From a limited number of samples, geologic data are extrapolated and their associated uncertainties are computed through a conditional probability calculation. Combining the spatially related sensitivity and input uncertainty produces the variance-covariance matrix, the diagonal of which is used to yield the standard deviation in MODFLOW 2000 head. The variance in piezometric head can be used for calibrating the model, estimating confidence intervals, directing exploration, and evaluating the reliability of a design. A case study illustrates the approach, where aquifer transmissivity is the spatially related uncertain geologic input data. The FOSM methodology is shown to be applicable for calculating output uncertainty for (1) spatially related input and output data, and (2) multiple input parameters (transmissivity and recharge).
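
    The core FOSM propagation step is compact: with J the matrix of head sensitivities to the uncertain inputs and C the input covariance from the conditional probability calculation, the first-order head covariance is J C J^T, whose diagonal yields the head variance. A minimal sketch with hypothetical numbers (not MODFLOW output):

        import numpy as np

        # Sensitivities dh/dp of heads at 3 nodes to 2 uncertain inputs
        # (e.g. transmissivity zones); values are hypothetical.
        J = np.array([[0.8, 0.1],
                      [0.5, 0.4],
                      [0.2, 0.7]])

        # Input covariance from the geostatistical analysis (hypothetical).
        C = np.array([[0.04, 0.01],
                      [0.01, 0.09]])

        # First-order second moment propagation: Cov(h) ~ J C J^T.
        cov_h = J @ C @ J.T
        std_h = np.sqrt(np.diag(cov_h))   # standard deviation of head
        print(std_h)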

  9. Strategy for reliable strain measurement in InAs/GaAs materials from high-resolution Z-contrast STEM images

    NASA Astrophysics Data System (ADS)

    Vatanparast, Maryam; Vullum, Per Erik; Nord, Magnus; Zuo, Jian-Min; Reenaas, Turid W.; Holmestad, Randi

    2017-09-01

    Geometric phase analysis (GPA), a fast and simple Fourier-space method for strain analysis, can give useful information on accumulated strain and defect propagation in multiple layers of semiconductors, including quantum dot materials. In this work, GPA has been applied to high-resolution Z-contrast scanning transmission electron microscopy (STEM) images. Strain maps determined from different g vectors of these images are compared to each other in order to assess the accuracy of the GPA technique. The SmartAlign tool has been used to improve the STEM image quality and obtain more reliable results. Strain maps from template matching, a real-space approach, are compared with strain maps from GPA, and the results suggest that real-space analysis is a better approach than GPA for aberration-corrected STEM images.
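
    For intuition, the GPA principle can be sketched in one dimension: mask a single reflection g in Fourier space, extract the geometric phase of the filtered signal, and differentiate it to obtain strain (eps = -(1/(2*pi*g)) dphi/dx for displacement along g). This is a simplified synthetic illustration, not the authors' 2-D pipeline:

        import numpy as np

        # Synthetic 1-D "lattice": reference period a0, 2% strain in the
        # right half of the field of view.
        N, a0 = 1024, 16.0
        x = np.arange(N, dtype=float)
        local_strain = np.where(x > N // 2, 0.02, 0.0)
        u = np.cumsum(local_strain)               # displacement field u(x)
        signal = np.cos(2 * np.pi * (x - u) / a0)

        # Select the g = 1/a0 reflection with a soft Gaussian mask.
        f = np.fft.fftfreq(N)
        g = 1.0 / a0
        mask = np.exp(-0.5 * ((f - g) / (0.2 * g)) ** 2)
        filtered = np.fft.ifft(np.fft.fft(signal) * mask)

        # Geometric phase: remove the 2*pi*g*x carrier, then convert the
        # phase gradient to strain along g.
        phase = np.unwrap(np.angle(filtered)) - 2 * np.pi * g * x
        eps = -np.gradient(phase) / (2 * np.pi * g)
        # Interior means recover ~0.00 (left) and ~0.02 (right).
        print(round(eps[200:400].mean(), 4), round(eps[700:900].mean(), 4))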

  10. A SINS/SRS/GNS Autonomous Integrated Navigation System Based on Spectral Redshift Velocity Measurements

    PubMed Central

    Wei, Wenhui; Gao, Zhaohui; Gao, Shesheng; Jia, Ke

    2018-01-01

    To meet the autonomy and reliability requirements of navigation systems, and drawing on a method that measures velocity from the spectral redshift of natural celestial bodies, a new scheme consisting of a Strapdown Inertial Navigation System (SINS), Spectral Redshift (SRS) velocity measurement and a Geomagnetic Navigation System (GNS) is designed for autonomous integrated navigation. The principle of this SINS/SRS/GNS autonomous integrated navigation system is explored, and the corresponding mathematical model is established. Furthermore, a robust adaptive central difference particle filtering algorithm is proposed for this integrated system. Simulation experiments show that the designed SINS/SRS/GNS autonomous integrated navigation system possesses good autonomy, strong robustness and high reliability, providing a new solution for autonomous navigation technology. PMID:29642549

  11. Using qualitative maps to direct reactive robots

    NASA Technical Reports Server (NTRS)

    Bertin, Randolph; Pendleton, Tom

    1992-01-01

    The principal advantage of mobile robots is that they can go to specific locations to perform useful tasks rather than have the tasks brought to them. It is therefore important that the robot be able to reach desired locations efficiently and reliably. A mobile robot whose environment extends significantly beyond its sensory horizon must maintain a representation of the environment, a map, in order to meet these efficiency and reliability requirements. We believe that qualitative mapping methods provide useful and robust representation schemes and that such maps may be used to direct the actions of a reactively controlled robot. In this paper we describe our experience in employing qualitative maps to direct, through the selection of desired control strategies, a reactive-behavior-based robot. This mapping capability represents the development of one aspect of a successful deliberative/reactive hybrid control architecture.

  12. Exact Critical Exponents for the Antiferromagnetic Quantum Critical Metal in Two Dimensions

    NASA Astrophysics Data System (ADS)

    Schlief, Andres; Lunts, Peter; Lee, Sung-Sik

    2017-04-01

    Unconventional metallic states which do not support well-defined single-particle excitations can arise near quantum phase transitions as strong quantum fluctuations of incipient order parameters prevent electrons from forming coherent quasiparticles. Although antiferromagnetic phase transitions occur commonly in correlated metals, understanding the nature of the strange metal realized at the critical point in layered systems has been hampered by a lack of reliable theoretical methods that take into account strong quantum fluctuations. We present a nonperturbative solution to the low-energy theory for the antiferromagnetic quantum critical metal in two spatial dimensions. Being a strongly coupled theory, it can still be solved reliably in the low-energy limit as quantum fluctuations are organized by a new control parameter that emerges dynamically. We predict the exact critical exponents that govern the universal scaling of physical observables at low temperatures.

  13. A Decentralized Compositional Framework for Dependable Decision Process in Self-Managed Cyber Physical Systems

    PubMed Central

    Hou, Kun-Mean; Zhang, Zhan

    2017-01-01

    Cyber Physical Systems (CPSs) need to interact with a changing environment under various interferences. To provide continuous, high-quality services, a self-managed CPS should automatically reconstruct itself to adapt to these changes and recover from failures. Such dynamic adaptation behavior introduces systemic challenges for CPS design, advice evaluation and decision process arrangement. In this paper, a formal compositional framework is proposed to systematically improve the dependability of the decision process. To guarantee consistent observation of event orders for causal reasoning, this work first proposes a relative-time-based method to improve the composability and compositionality of the timing property of events. Based on the relative-time solution, a formal reference framework is introduced for self-managed CPSs, which includes a compositional FSM-based actor model (the subsystems of the CPS), actor-based advice and runtime decomposable decisions. To simplify self-management, a self-similar recursive actor interface is proposed for decision (actor) composition. We provide constraints and seven patterns for the composition of reliability and process-time requirements. Further, two decentralized decision process strategies are proposed based on our framework, and their reliability is compared with that of the static strategy and the centralized processing strategy. The simulation results show that the one-order feedback strategy has high reliability, scalability and stability against decision complexity and random failures. This paper also shows a way to simplify the evaluation of dynamic systems by improving the composability and compositionality of the subsystems. PMID:29120357

  14. A Decentralized Compositional Framework for Dependable Decision Process in Self-Managed Cyber Physical Systems.

    PubMed

    Zhou, Peng; Zuo, Decheng; Hou, Kun-Mean; Zhang, Zhan

    2017-11-09

    Cyber Physical Systems (CPSs) need to interact with a changing environment under various interferences. To provide continuous, high-quality services, a self-managed CPS should automatically reconstruct itself to adapt to these changes and recover from failures. Such dynamic adaptation behavior introduces systemic challenges for CPS design, advice evaluation and decision process arrangement. In this paper, a formal compositional framework is proposed to systematically improve the dependability of the decision process. To guarantee consistent observation of event orders for causal reasoning, this work first proposes a relative-time-based method to improve the composability and compositionality of the timing property of events. Based on the relative-time solution, a formal reference framework is introduced for self-managed CPSs, which includes a compositional FSM-based actor model (the subsystems of the CPS), actor-based advice and runtime decomposable decisions. To simplify self-management, a self-similar recursive actor interface is proposed for decision (actor) composition. We provide constraints and seven patterns for the composition of reliability and process-time requirements. Further, two decentralized decision process strategies are proposed based on our framework, and their reliability is compared with that of the static strategy and the centralized processing strategy. The simulation results show that the one-order feedback strategy has high reliability, scalability and stability against decision complexity and random failures. This paper also shows a way to simplify the evaluation of dynamic systems by improving the composability and compositionality of the subsystems.

  15. Overall Key Performance Indicator to Optimizing Operation of High-Pressure Homogenizers for a Reliable Quantification of Intracellular Components in Pichia pastoris.

    PubMed

    Garcia-Ortega, Xavier; Reyes, Cecilia; Montesinos, José Luis; Valero, Francisco

    2015-01-01

    The most commonly used cell disruption procedures may lack reproducibility, which introduces significant errors into the quantification of intracellular components. In this work, an approach consisting of the definition of an overall key performance indicator (KPI) was implemented for a lab-scale high-pressure homogenizer (HPH) in order to determine the disruption settings that allow the reliable quantification of a wide range of intracellular components. This innovative KPI was based on the combination of three independent reporting indicators: decrease of absorbance, release of total protein, and release of alkaline phosphatase activity. The yeast Pichia pastoris growing on methanol was selected as the model microorganism because its thickened cell wall requires more severe methods and operating conditions than Escherichia coli or Saccharomyces cerevisiae. From the outcome of the reporting indicators, the cell disruption efficiency achieved using HPH was about fourfold higher than with other standard lab-scale cell disruption methodologies, such as bead milling and cell permeabilization. The approach was also applied to a pilot-plant-scale HPH, validating the methodology in a scale-up of the disruption process. This straightforward approach to evaluating the efficacy of a disruption procedure or piece of equipment can easily be applied to optimize the most common disruption processes, in order to achieve not only reliable quantification but also recovery of intracellular components from cell factories of interest.

  16. Overall Key Performance Indicator to Optimizing Operation of High-Pressure Homogenizers for a Reliable Quantification of Intracellular Components in Pichia pastoris

    PubMed Central

    Garcia-Ortega, Xavier; Reyes, Cecilia; Montesinos, José Luis; Valero, Francisco

    2015-01-01

    The most commonly used cell disruption procedures may lack reproducibility, which introduces significant errors into the quantification of intracellular components. In this work, an approach consisting of the definition of an overall key performance indicator (KPI) was implemented for a lab-scale high-pressure homogenizer (HPH) in order to determine the disruption settings that allow the reliable quantification of a wide range of intracellular components. This innovative KPI was based on the combination of three independent reporting indicators: decrease of absorbance, release of total protein, and release of alkaline phosphatase activity. The yeast Pichia pastoris growing on methanol was selected as the model microorganism because its thickened cell wall requires more severe methods and operating conditions than Escherichia coli or Saccharomyces cerevisiae. From the outcome of the reporting indicators, the cell disruption efficiency achieved using HPH was about fourfold higher than with other standard lab-scale cell disruption methodologies, such as bead milling and cell permeabilization. The approach was also applied to a pilot-plant-scale HPH, validating the methodology in a scale-up of the disruption process. This straightforward approach to evaluating the efficacy of a disruption procedure or piece of equipment can easily be applied to optimize the most common disruption processes, in order to achieve not only reliable quantification but also recovery of intracellular components from cell factories of interest. PMID:26284241
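
    The abstracts above do not give the rule used to combine the three reporting indicators; one plausible sketch, with hypothetical data and an assumed normalise-and-average combination, is:

        import numpy as np

        def overall_kpi(indicators, weights=None):
            # Each indicator column is normalised to [0, 1] by its maximum
            # observed value across the tested settings, then averaged
            # (optionally weighted).  Rows: settings; columns: indicators.
            x = np.asarray(indicators, dtype=float)
            norm = x / x.max(axis=0)
            w = np.ones(x.shape[1]) if weights is None else np.asarray(weights)
            return norm @ (w / w.sum())

        # Hypothetical data: rows = number of HPH passes (1, 2, 4, 6);
        # columns = absorbance decrease, protein release, AP activity.
        data = [[0.40, 0.35, 0.30],
                [0.65, 0.60, 0.55],
                [0.90, 0.88, 0.86],
                [0.95, 0.97, 0.99]]
        print(overall_kpi(data).round(2))  # pick the setting where the KPI plateaus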

  17. Research on the phase adjustment method for dispersion interferometer on HL-2A tokamak

    NASA Astrophysics Data System (ADS)

    Tongyu, WU; Wei, ZHANG; Haoxi, WANG; Yan, ZHOU; Zejie, YIN

    2018-06-01

    A synchronous demodulation system is proposed and deployed for the CO2 dispersion interferometer on HL-2A, which aims at high plasma density measurements and real-time feedback control. To ensure that the demodulator and the interferometer signal are synchronous in phase, a phase adjustment (PA) method has been developed for the demodulation system. The method takes advantage of the parallel and pipelined processing capabilities of the field-programmable gate array to carry out high-performance, low-latency PA. Experimental results show that the PA method is crucial to the synchronous demodulation system and can reliably follow fast changes in the electron density. The system can measure the line-integrated density with a high precision of 2.0 × 10¹⁸ m⁻².

  18. Efficient method of image edge detection based on FSVM

    NASA Astrophysics Data System (ADS)

    Cai, Aiping; Xiong, Xiaomei

    2013-07-01

    For efficient object-contour edge detection in digital images, this paper studies traditional methods and SVM-based algorithms. The analysis shows that the Canny edge detection algorithm produces pseudo-edges and has poor noise robustness. To provide a reliable edge extraction method, a new detection algorithm based on the fuzzy support vector machine (FSVM) is proposed. It consists of several steps: first, the training samples are classified and different membership functions are assigned to different samples; then, a new training set is formed by increasing the penalty on misclassified sub-samples, and the new FSVM classification model is trained and tested on it; finally, the edges of the object image are extracted using the model. Experimental results show that good edge detection images are obtained, and noise-addition experiments show that the method has good noise robustness.
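
    FSVM is not a standard library algorithm, but its key ingredient, per-sample fuzzy membership weights that down-weight noisy samples, can be approximated with scikit-learn's sample_weight argument. Everything in this sketch (features, labels, membership rule) is hypothetical:

        import numpy as np
        from sklearn.svm import SVC

        # Hypothetical training set: 5x5 patch gradients flattened into
        # features, labels 1 = edge pixel, 0 = non-edge (toy rule).
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 25))
        y = (X[:, 12] > 0.5).astype(int)

        # Fuzzy memberships: down-weight samples far from their class
        # centre, reducing the influence of noisy/mislabelled patches.
        centres = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
        d = np.linalg.norm(X - centres[y], axis=1)
        membership = 1.0 - d / (d.max() + 1e-9)

        clf = SVC(kernel="rbf", C=10.0)
        clf.fit(X, y, sample_weight=membership)  # membership as fuzzy weight
        print(clf.score(X, y))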

  19. Study on evaluation of construction reliability for engineering project based on fuzzy language operator

    NASA Astrophysics Data System (ADS)

    Shi, Yu-Fang; Ma, Yi-Yi; Song, Ping-Ping

    2018-03-01

    System reliability theory has been a research hotspot in management science and systems engineering in recent years, and construction reliability is useful for the quantitative evaluation of project management quality. Construction reliability is defined according to reliability theory and the target system of engineering project management. Based on fuzzy mathematics and linguistic operators, the value space of construction reliability is divided into seven fuzzy subsets; correspondingly, seven membership functions and fuzzy evaluation intervals are obtained through the application of linguistic operators, which provides the method and parameters for the evaluation of construction reliability. The method is shown to be scientific and reasonable for construction conditions and is a useful step in research on the theory and methods of engineering project system reliability.

  20. Reliability and acceptability of six station multiple mini-interviews: past-behavioural versus situational questions in postgraduate medical admission.

    PubMed

    Yamada, Toru; Sato, Juichi; Yoshimura, Hiroshi; Okubo, Tomoya; Hiraoka, Eiji; Shiga, Takashi; Kubota, Tadao; Fujitani, Shigeki; Machi, Junji; Ban, Nobutaro

    2017-03-16

    The multiple mini-interview (MMI) is increasingly used for postgraduate medical admissions and in undergraduate settings. MMIs use mostly Situational Questions (SQs) rather than Past-Behavioural Questions (PBQs). A previous study of MMIs in this setting, where PBQs and SQs were asked in the same order, reported that the reliability of PBQs was non-inferior to that of SQs and that SQs were more acceptable to candidates. The order in which the questions are asked may affect the reliability and acceptability of an MMI. This study investigated the reliability of an MMI using both PBQs and SQs while minimising question-order bias; the acceptability of PBQs and SQs was also assessed. Forty candidates applying for postgraduate medical admission for 2016-2017 were included, and 24 examiners took part. The MMI consisted of six stations with one examiner per station; a PBQ and an SQ were asked at every station, and the order of the questions was alternated between stations. Reliability was analysed for scores obtained for PBQs and SQs separately, and for both questions together. A post-MMI survey was used to assess the acceptability of PBQs and SQs. The generalisability (G) coefficients for PBQs only, SQs only, and both questions were 0.87, 0.96, and 0.80, respectively. Decision studies suggested that a four-station MMI would also be sufficiently reliable (G-coefficients 0.82 and 0.94 for PBQs and SQs, respectively). In total, 83% of participants were satisfied with the MMI. In terms of face validity, PBQs were more acceptable than SQs for candidates (p = 0.01) but equally acceptable for examiners (88% vs. 83% positive responses for PBQs vs. SQs; p = 0.377). Candidates preferred PBQs to SQs when asked to choose one, though the difference was not significant (p = 0.081); examiners showed a clear preference for PBQs (p = 0.007). The reliability and acceptability of the six-station MMI were good among the 40 postgraduate candidates, and modelling suggested that four stations would also be reliable. SQs were more reliable than PBQs, while candidates found PBQs more acceptable than SQs and examiners preferred PBQs when they had to choose between the two. Our findings suggest that it is better to ask both PBQs and SQs during an MMI to maximise acceptability.

  1. Reliability apportionment approach for spacecraft solar array using fuzzy reasoning Petri net and fuzzy comprehensive evaluation

    NASA Astrophysics Data System (ADS)

    Wu, Jianing; Yan, Shaoze; Xie, Liyang; Gao, Peng

    2012-07-01

    The reliability apportionment of a spacecraft solar array is of significant importance to spacecraft designers in the early stage of design. However, it is difficult to apply existing methods to the reliability apportionment problem because of insufficient data and the uncertainty of the relations among the components of the mechanical system. This paper proposes a new method that combines fuzzy comprehensive evaluation with a fuzzy reasoning Petri net (FRPN) to accomplish the reliability apportionment of the solar array. The proposed method extends previous fuzzy methods and focuses on the characteristics of the subsystems and the intrinsic associations among the components. The analysis results show that the synchronization mechanism is apportioned the highest reliability value, and the solar panels and hinges the lowest, before design and manufacturing. The developed method is of practical significance for the reliability apportionment of solar arrays when design information has not yet been clearly identified, particularly in the early stage of design.

  2. Efficient computation of photonic crystal waveguide modes with dispersive material.

    PubMed

    Schmidt, Kersten; Kappeler, Roman

    2010-03-29

    The optimization of PhC waveguides is a key issue in successfully designing PhC devices. Since this design task is computationally expensive, efficient methods are in demand. The available codes for computing photonic bands are also applied to PhC waveguides; they are reliable but not very efficient, which is even more pronounced for dispersive materials. We present a method based on higher-order finite elements with curved cells, which allows the band structure to be solved while directly taking into account the dispersiveness of the materials. This is accomplished by reformulating the wave equations as a linear eigenproblem in the complex wave-vector k. For this method, we demonstrate high efficiency for the computation of guided PhC waveguide modes by a convergence analysis.

  3. Convex Accelerated Maximum Entropy Reconstruction

    PubMed Central

    Worley, Bradley

    2016-01-01

    Maximum entropy (MaxEnt) spectral reconstruction methods provide a powerful framework for spectral estimation of nonuniformly sampled datasets. Many methods exist within this framework, usually defined based on the magnitude of a Lagrange multiplier in the MaxEnt objective function. An algorithm is presented here that utilizes accelerated first-order convex optimization techniques to rapidly and reliably reconstruct nonuniformly sampled NMR datasets using the principle of maximum entropy. This algorithm – called CAMERA for Convex Accelerated Maximum Entropy Reconstruction Algorithm – is a new approach to spectral reconstruction that exhibits fast, tunable convergence in both constant-aim and constant-lambda modes. A high-performance, open source NMR data processing tool is described that implements CAMERA, and brief comparisons to existing reconstruction methods are made on several example spectra. PMID:26894476

  4. Reliability-based trajectory optimization using nonintrusive polynomial chaos for Mars entry mission

    NASA Astrophysics Data System (ADS)

    Huang, Yuechen; Li, Haiyang

    2018-06-01

    This paper presents a reliability-based sequential optimization (RBSO) method to address the trajectory optimization problem with parametric uncertainties in entry dynamics for a Mars entry mission. First, the deterministic entry trajectory optimization model is reviewed, and the reliability-based optimization model is then formulated. In addition, a modified sequential optimization method, in which the nonintrusive polynomial chaos expansion (PCE) method and the most probable point (MPP) searching method are employed, is proposed to solve the reliability-based optimization problem efficiently. The nonintrusive PCE method enables the transformation between the stochastic optimization (SO) and the deterministic optimization (DO) and approximates the trajectory solution efficiently. The MPP method, which assesses the reliability of constraint satisfaction only up to the necessary level, is employed to further improve computational efficiency. The cycle of SO, reliability assessment and constraint update is repeated in the RBSO until the reliability requirements for constraint satisfaction are met. Finally, the RBSO is compared with traditional DO and with traditional sequential optimization based on Monte Carlo (MC) simulation in a specific Mars entry mission to demonstrate the effectiveness and efficiency of the proposed method.

  5. An Excel‐based implementation of the spectral method of action potential alternans analysis

    PubMed Central

    Pearman, Charles M.

    2014-01-01

    Action potential (AP) alternans has been well established as a mechanism of arrhythmogenesis and sudden cardiac death. Proper interpretation of AP alternans requires a robust method of alternans quantification. Traditional methods of alternans analysis neglect higher order periodicities that may have greater pro‐arrhythmic potential than classical 2:1 alternans. The spectral method of alternans analysis, already widely used in the related study of microvolt T‐wave alternans, has also been used to study AP alternans. Software to meet the specific needs of AP alternans analysis is not currently available in the public domain. An AP analysis tool is implemented here, written in Visual Basic for Applications and using Microsoft Excel as a shell. This performs a sophisticated analysis of alternans behavior allowing reliable distinction of alternans from random fluctuations, quantification of alternans magnitude, and identification of which phases of the AP are most affected. In addition, the spectral method has been adapted to allow detection and quantification of higher order regular oscillations. Analysis of action potential morphology is also performed. A simple user interface enables easy import, analysis, and export of collated results. PMID:25501439
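
    The spectral method itself is compact: take the power spectrum of a beat-indexed series (e.g. AP duration per beat) and compare the bin at 0.5 cycles/beat against a reference noise band, yielding the k-score. A minimal Python sketch of this standard calculation (the published tool is Excel/VBA and is not reproduced here):

        import numpy as np

        def alternans_kscore(series, noise_band=(0.33, 0.48)):
            # Alternans appears at 0.5 cycles/beat in the power spectrum;
            # the k-score compares that bin with a reference noise band.
            x = np.asarray(series, float)
            x = x - x.mean()
            power = np.abs(np.fft.rfft(x)) ** 2 / len(x)
            freq = np.fft.rfftfreq(len(x))        # cycles per beat
            alt = power[np.argmin(np.abs(freq - 0.5))]
            noise = power[(freq >= noise_band[0]) & (freq <= noise_band[1])]
            return (alt - noise.mean()) / (noise.std() + 1e-12)

        # 64 beats with 2:1 alternation of AP duration plus noise.
        rng = np.random.default_rng(1)
        apd = 200 + 5 * (-1) ** np.arange(64) + rng.normal(0, 1, 64)
        print(f"k-score = {alternans_kscore(apd):.1f}")  # k > 3: significant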

  6. The Revised Neurobehavioral Severity Scale (NSS-R) for Rodents.

    PubMed

    Yarnell, Angela M; Barry, Erin S; Mountney, Andrea; Shear, Deborah; Tortella, Frank; Grunberg, Neil E

    2016-04-08

    Motor and sensory deficits are common following traumatic brain injury (TBI). Although rodent models provide valuable insight into the biological and functional outcomes of TBI, the success of translational research is critically dependent upon proper selection of sensitive, reliable, and reproducible assessments. The published literature includes various observational scales designed to evaluate post-injury functionality; however, the heterogeneity in TBI location, severity, and symptomology can complicate behavioral assessments. Choosing behavioral outcomes that can be reliably and objectively quantified in an efficient manner is becoming increasingly important. The Revised Neurobehavioral Severity Scale (NSS-R) is a continuous series of specific, sensitive, and standardized observational tests that evaluate balance, motor coordination, and sensorimotor reflexes in rodents. The tasks follow a specific order designed to minimize interference: balance, landing, tail raise, dragging, righting reflex, ear reflex, eye reflex, sound reflex, tail pinch, and hindpaw pinch. The NSS-R has proven to be a reliable method for differentiating brain-injured rodents from non-brain-injured rodents across many brain injury models.

  7. Assessing transfer property and reliability of urban bus network based on complex network theory

    NASA Astrophysics Data System (ADS)

    Zhang, Hui; Zhuge, Cheng-Xiang; Zhao, Xiang; Song, Wen-Bo

    Transfer reliability has an important impact on the urban bus network. The proportion of trips requiring zero or one transfer is a key indicator of the connectivity of a bus network. However, it is hard to calculate the number of transfers between nodes because of the complicated network structure. In this paper, the topological structures of the urban bus network in Jinan are constructed under space L and space P. A method to calculate transfer times between stations from the reachable matrix under space P is proposed; the results show that it efficiently computes the number of transfers between nodes in large networks. To test transfer reliability, a node failure process is built according to degree, clustering coefficient and betweenness centrality under space L and space P. The results show that a deliberate attack by betweenness centrality under space P is more effective than the other five attack modes. This research provides a powerful tool for finding hub stations in bus networks and can help traffic managers guarantee the normal operation of urban bus systems.
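
    The reachable-matrix idea can be sketched directly: in space P two stations are adjacent iff at least one route serves both, so a path of length k corresponds to k - 1 transfers, and successive powers of the adjacency matrix give minimum transfer counts. A toy illustration (network and numbers hypothetical):

        import numpy as np

        def transfer_times(adj_p, max_transfers=5):
            # adj_p: adjacency matrix in space P.  A path of length k
            # in space P needs k - 1 transfers.
            a = np.asarray(adj_p, dtype=int)
            n = len(a)
            reach = a > 0
            transfers = np.full((n, n), -1)     # -1: not reachable yet
            transfers[reach] = 0                # same route: no transfer
            np.fill_diagonal(transfers, 0)
            power = a.copy()
            for k in range(1, max_transfers + 1):
                power = (power @ a > 0).astype(int)  # reachable in k+1 hops
                newly = (power > 0) & ~reach
                transfers[newly] = k
                reach |= newly
            return transfers

        # Toy network: stations 0-1-2 on route A, 2-3 on route B, 3-4 on C.
        adj = np.zeros((5, 5), dtype=int)
        for u, v in [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]:
            adj[u, v] = adj[v, u] = 1
        print(transfer_times(adj)[0, 4])   # 0 -> 4 needs 2 transfers

    Each matrix multiplication extends reachability by one leg, so the loop terminates after the maximum number of transfers considered rather than enumerating paths explicitly.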

  8. SNPhylo: a pipeline to construct a phylogenetic tree from huge SNP data.

    PubMed

    Lee, Tae-Ho; Guo, Hui; Wang, Xiyin; Kim, Changsoo; Paterson, Andrew H

    2014-02-26

    Phylogenetic trees are widely used for genetic and evolutionary studies of various organisms. Advanced sequencing technology has dramatically enriched the data available for constructing phylogenetic trees based on single nucleotide polymorphisms (SNPs). However, massive SNP data are difficult to analyze reliably, and no ready-to-use pipeline has been available to generate phylogenetic trees from these data. We developed a new pipeline, SNPhylo, to construct phylogenetic trees based on large SNP datasets. The pipeline enables users to construct a phylogenetic tree from three representative SNP data file formats. In addition, to increase the reliability of a tree, the pipeline includes steps such as removing low-quality data and accounting for linkage disequilibrium. A maximum likelihood method for the inference of phylogeny is also adopted in the generation of a tree. Using SNPhylo, users can easily produce a reliable phylogenetic tree from a large SNP data file; the pipeline thus helps researchers focus on interpreting the results of analyses of voluminous data sets rather than on the manipulations necessary to accomplish the analysis.

  9. Analysis of the iteratively regularized Gauss-Newton method under a heuristic rule

    NASA Astrophysics Data System (ADS)

    Jin, Qinian; Wang, Wei

    2018-03-01

    The iteratively regularized Gauss-Newton method is one of the most prominent regularization methods for solving nonlinear ill-posed inverse problems when the data is corrupted by noise. In order to produce a useful approximate solution, this iterative method should be terminated properly. The existing a priori and a posteriori stopping rules require accurate information on the noise level, which may not be available or reliable in practical applications. In this paper we propose a heuristic selection rule for this regularization method, which requires no information on the noise level. By imposing certain conditions on the noise, we derive a posteriori error estimates on the approximate solutions under various source conditions. Furthermore, we establish a convergence result without using any source condition. Numerical results are presented to illustrate the performance of our heuristic selection rule.
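
    The iteration itself is standard: x_{k+1} = x_k + (J^T J + a_k I)^{-1} (J^T (y - F(x_k)) + a_k (x_0 - x_k)), with a geometrically decaying regularization a_k. The sketch below runs a fixed number of steps on a toy problem; the paper's heuristic stopping rule is not reproduced:

        import numpy as np

        def irgnm(F, jac, y, x0, alpha0=1.0, q=0.5, iters=12):
            # Iteratively regularized Gauss-Newton with a_k = alpha0 * q**k.
            x = x0.astype(float)
            for k in range(iters):
                a = alpha0 * q ** k
                J = jac(x)
                lhs = J.T @ J + a * np.eye(len(x))
                rhs = J.T @ (y - F(x)) + a * (x0 - x)
                x = x + np.linalg.solve(lhs, rhs)
            return x

        # Toy mildly nonlinear problem with noisy data (hypothetical).
        F = lambda x: np.array([x[0] + 0.1 * x[1] ** 2,
                                x[1] + 0.1 * x[0] ** 2])
        jac = lambda x: np.array([[1.0, 0.2 * x[1]],
                                  [0.2 * x[0], 1.0]])
        x_true = np.array([1.0, 2.0])
        rng = np.random.default_rng(0)
        y = F(x_true) + rng.normal(0.0, 1e-3, 2)
        print(irgnm(F, jac, y, x0=np.zeros(2)))   # close to (1, 2)

    In practice the stopping index, not the number of iterations per se, carries the regularization; the paper's contribution is choosing that index without knowing the noise level.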

  10. Reliability as Argument

    ERIC Educational Resources Information Center

    Parkes, Jay

    2007-01-01

    Reliability consists of both important social and scientific values and methods for evidencing those values, though in practice methods are often conflated with the values. With the two distinctly understood, a reliability argument can be made that articulates the particular reliability values most relevant to the particular measurement situation…

  11. Fast Reliability Assessing Method for Distribution Network with Distributed Renewable Energy Generation

    NASA Astrophysics Data System (ADS)

    Chen, Fan; Huang, Shaoxiong; Ding, Jinjin; Gao, Bo; Xie, Yuguang; Wang, Xiaoming

    2018-01-01

    This paper proposes a fast reliability assessment method for distribution grids with distributed renewable energy generation. First, the Weibull and Beta distributions are used to describe the probability distribution characteristics of wind speed and solar irradiance, respectively, and models of the wind farm, solar park and local load are built for reliability assessment. Then, based on probability discretization from power system production cost simulation and linearized power flow, an optimal power flow problem that minimizes the cost of conventional generation is solved, so that the reliability of the distribution grid can be assessed quickly and accurately. The Loss of Load Probability (LOLP) and Expected Energy Not Supplied (EENS) are selected as the reliability indices; a simulation of the IEEE RBTS BUS6 system in MATLAB indicates that the proposed method calculates these indices much faster than the Monte Carlo method while preserving accuracy.
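
    The sampling side of such an assessment is easy to illustrate: draw wind speed from a Weibull distribution and irradiance from a scaled Beta distribution, convert both to power, and accumulate LOLP and EENS. A Monte Carlo toy with all parameters hypothetical (the paper's faster discretized method is not reproduced):

        import numpy as np

        rng = np.random.default_rng(42)
        N = 100_000                          # sampled hourly system states

        # Wind speed ~ Weibull(shape 2, scale 8 m/s); irradiance ~ Beta
        # scaled to 1000 W/m^2 (hypothetical parameters).
        v = rng.weibull(2.0, N) * 8.0
        g = rng.beta(2.5, 2.0, N) * 1000.0

        # Simple 2 MW turbine curve: cut-in 3, rated 12, cut-out 25 m/s.
        p_wind = np.where((v >= 3) & (v < 12), 2.0 * (v - 3) / 9.0,
                          np.where((v >= 12) & (v < 25), 2.0, 0.0))
        p_pv = 1.5 * g / 1000.0              # 1.5 MW at 1000 W/m^2

        load = 3.5                           # MW, constant for simplicity
        conventional = 1.0                   # MW of dispatchable capacity
        shortfall = np.maximum(load - (p_wind + p_pv + conventional), 0.0)

        lolp = np.mean(shortfall > 0)        # loss of load probability
        eens = shortfall.mean() * 8760.0     # MWh/yr expected not supplied
        print(f"LOLP = {lolp:.3f}, EENS = {eens:.0f} MWh/yr")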

  12. 76 FR 23222 - Electric Reliability Organization Interpretation of Transmission Operations Reliability

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-26

    ... applications or print-to-PDF format, and not in a scanned format, at http://www.ferc.gov/docs-filing/efiling....3d 1342 (DC Cir. 2009). \\5\\ Mandatory Reliability Standards for the Bulk-Power System, Order No. 693... applications or print-to-PDF format and not in a scanned format. Commenters filing electronically do not need...

  13. 75 FR 52528 - Mandatory Reliability Standards for the Bulk-Power System; Notice of Technical Conference

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-26

    ... Frequency Response in the Wholesale Electric Grid on Thursday, September 23, from 10 a.m. to approximately 4... Order No. 693 the Commission approved Reliability Standard BAL-003-0 as mandatory and enforceable and directed the ERO to develop a modification to BAL-003-0 through the Reliability Standards development...

  14. Verification of recursive probabilistic integration (RPI) method for fatigue life management using non-destructive inspections

    NASA Astrophysics Data System (ADS)

    Chen, Tzikang J.; Shiao, Michael

    2016-04-01

    This paper verifies a generic and efficient assessment concept for probabilistic fatigue life management. The concept is developed based on an integration of damage tolerance methodology, simulation methods [1, 2], and a probabilistic algorithm, RPI (recursive probability integration) [3-9], considering maintenance for damage tolerance and risk-based fatigue life management. RPI is an efficient semi-analytical probabilistic method for risk assessment subject to various uncertainties, such as variability in material properties including crack growth rate, initial flaw size, repair quality, random-process modeling of flight loads for failure analysis, and inspection reliability represented by the probability of detection (POD). In addition, unlike traditional Monte Carlo simulation (MCS), which requires a rerun whenever the maintenance plan is changed, RPI can repeatedly use a small set of baseline random crack-growth histories, excluding maintenance-related parameters, from a single MCS for various maintenance plans. In order to fully assess the RPI method, a verification procedure was performed. In this study, MC simulations on the order of several hundred billion trials were conducted for various flight conditions, material properties, inspection schedules, PODs and repair/replacement strategies. Since MC simulation is time-consuming, the simulations were run in parallel on DoD High Performance Computing (HPC) systems using a specialized random number generator for parallel computing. The study has shown that the RPI method is several orders of magnitude more efficient than traditional Monte Carlo simulation.
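
    The reuse idea behind RPI can be illustrated: generate the baseline crack-growth histories once, then evaluate any inspection plan by weighting each history by its probability of escaping detection under an assumed POD curve, instead of rerunning the whole simulation. A heavily simplified sketch, with all models and numbers hypothetical:

        import numpy as np

        rng = np.random.default_rng(7)
        n_hist, n_years, a_crit = 20000, 20, 25.0   # critical size (mm)

        # Baseline random crack-growth histories, generated once with no
        # maintenance: exponential growth from a random initial flaw.
        a0 = rng.lognormal(np.log(0.5), 0.4, n_hist)
        rate = rng.lognormal(np.log(0.18), 0.25, n_hist)
        t = np.arange(1, n_years + 1)
        a = a0[:, None] * np.exp(rate[:, None] * t[None, :])

        def pod(size, a50=4.0, beta=1.5):
            # Hypothetical log-logistic probability-of-detection curve.
            return 1.0 / (1.0 + (a50 / np.maximum(size, 1e-9)) ** beta)

        def failure_prob(inspection_years):
            # Weight each reused history by its probability of escaping
            # every inspection before reaching the critical crack size.
            escape = np.ones(n_hist)
            for yr in inspection_years:
                intact = a[:, yr - 1] < a_crit
                escape *= np.where(intact, 1.0 - pod(a[:, yr - 1]), 1.0)
            return np.mean((a[:, -1] >= a_crit) * escape)

        # Evaluate several maintenance plans from the same histories.
        for plan in ([], [10], [5, 10, 15]):
            print(plan, round(failure_prob(plan), 4))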

  15. Reliability of new software in measuring cervical multifidus diameters and shoulder muscle strength in a synchronized way; an ultrasonographic study

    PubMed Central

    Rahnama, Leila; Rezasoltani, Asghar; Khalkhali-Zavieh, Minoo; Rahnama, Behnam; Noori-Kochi, Farhang

    2015-01-01

    OBJECTIVES: This study was conducted to evaluate the inter-session reliability of new software for measuring the diameters of the cervical multifidus muscle (CMM), both at rest and during isometric contractions of the shoulder abductors, in subjects with neck pain and in healthy individuals. METHOD: The reliability of measuring the diameters of the CMM with the Sonosynch software was evaluated in 24 participants, including 12 subjects with chronic neck pain and 12 healthy individuals. The anterior-posterior diameter (APD) and the lateral diameter (LD) of the CMM were measured in a resting state and again during isometric contraction of the shoulder abductors. Measurements were taken on separate occasions 3 to 7 days apart in order to determine inter-session reliability. The intraclass correlation coefficient (ICC) was used to evaluate relative reliability, while the standard error of measurement (SEM) and the smallest detectable difference (SDD) were used to evaluate absolute reliability. RESULTS: The Sonosynch software was shown to be highly reliable in measuring the diameters of the CMM both in healthy subjects and in those with neck pain. The 95% CIs of the ICC for APD ranged from 0.84 to 0.94 in subjects with neck pain and from 0.86 to 0.94 in healthy subjects. For LD, they ranged from 0.64 to 0.95 in subjects with neck pain and from 0.82 to 0.92 in healthy subjects. CONCLUSIONS: Ultrasonographic measurement of the diameters of the CMM using Sonosynch has proved to be reliable, especially for APD, in healthy subjects as well as subjects with neck pain. PMID:26443975

  16. Translation, cross-cultural adaptation to Brazilian-Portuguese and reliability analysis of the instrument Rapid Entire Body Assessment-REBA

    PubMed Central

    Lamarão, Andressa M.; Costa, Lucíola C. M.; Comper, Maria L. C.; Padula, Rosimeire S.

    2014-01-01

    Background: Observational instruments, such as the Rapid Entire Body Assessment (REBA), quickly assess biomechanical risks present in the workplace. However, in order to use such an instrument, it is necessary to conduct its translation/cross-cultural adaptation and test its measurement properties. Objectives: To perform the translation and cross-cultural adaptation of the REBA instrument into Brazilian Portuguese and to test its reliability. Method: The translation and cross-cultural adaptation followed established guidelines involving translation, synthesis of translations, back-translation, committee review and testing of the pre-final version. Intra- and inter-rater reliability was assessed with the linear weighted kappa coefficient, reported with 95% confidence intervals, and percent agreement was obtained from 2×2 cross-tabulations. Results: The translation and adaptation procedures were adequate, and the necessary adjustments were made to the instrument. Intra- and inter-rater reliability showed values of 0.104 to 0.504, ranging from very poor to moderate. Percent agreement ranged from 5.66% to 69.81%; it was closest to 100% for the item 'upper arm' (69.81%) for rater 1 and for the items 'legs' and 'upper arm' (62.26%) for rater 2. Conclusions: The REBA instrument was translated and cross-culturally adapted, yielding a Brazilian-Portuguese version. However, the reliability values obtained are unacceptable according to standard guidelines, indicating that reliability must be re-evaluated; caution should therefore be taken when interpreting the biomechanical risks measured by this instrument. PMID:25003273
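
    The linear weighted kappa used above has a short closed form: kappa_w = 1 - sum(w*O)/sum(w*E), where O and E are the observed and chance-expected agreement matrices and the disagreement weights are w_ij = |i - j|/(k - 1). A minimal sketch with hypothetical ratings:

        import numpy as np

        def weighted_kappa(r1, r2, n_cats):
            # Linear weighted kappa for two raters on an ordinal scale.
            obs = np.zeros((n_cats, n_cats))
            for i, j in zip(r1, r2):
                obs[i, j] += 1
            obs /= obs.sum()
            exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))
            idx = np.arange(n_cats)
            w = np.abs(idx[:, None] - idx[None, :]) / (n_cats - 1)
            return 1.0 - (w * obs).sum() / (w * exp).sum()

        # Hypothetical ordinal scores (0-4) from two raters on 10 workers.
        rater1 = [0, 1, 2, 2, 3, 1, 4, 2, 3, 1]
        rater2 = [0, 2, 2, 1, 3, 1, 3, 2, 4, 1]
        print(round(weighted_kappa(rater1, rater2, n_cats=5), 3))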

  17. Reliability of a quantitative clinical posture assessment tool among persons with idiopathic scoliosis.

    PubMed

    Fortin, Carole; Feldman, Debbie Ehrmann; Cheriet, Farida; Gravel, Denis; Gauthier, Frédérique; Labelle, Hubert

    2012-03-01

    To determine the overall, test-retest and inter-rater reliability of posture indices among persons with idiopathic scoliosis, a reliability study using two raters and two test sessions was conducted at a tertiary care paediatric centre. Seventy participants aged between 10 and 20 years with different types of idiopathic scoliosis (Cobb angle 15 to 60°) were recruited from the scoliosis clinic. Based on the XY co-ordinates of natural reference points (e.g., the eyes) as well as markers placed on several anatomical landmarks, 32 angular and linear posture indices were calculated from digital photographs taken in the standing position using a specially developed software program. Generalisability theory served to estimate the reliability and standard error of measurement (SEM) for the overall, test-retest and inter-rater designs. Bland and Altman's method was also used to document agreement between sessions and raters. In the random design, dependability coefficients demonstrated a moderate level of reliability for six posture indices (ϕ = 0.51 to 0.72) and a good level of reliability for 26 of the 32 posture indices (ϕ ≥ 0.79). Error attributable to marker placement was negligible for most indices. Limits of agreement and SEM values were larger for shoulder protraction, trunk list, Q angle, cervical lordosis and scoliosis angles. The most reproducible indices were the waist angles and knee valgus and varus. Posture can thus be assessed in a global fashion from photographs in persons with idiopathic scoliosis. Despite the good reliability of marker placement, further studies are needed to minimise measurement errors in order to provide a suitable tool for monitoring change in posture over time.

  18. Reliability and limits of agreement of circumferential, water displacement, and optoelectronic volumetry in the measurement of upper limb lymphedema.

    PubMed

    Deltombe, T; Jamart, J; Recloux, S; Legrand, C; Vandenbroeck, N; Theys, S; Hanson, P

    2007-03-01

    We conducted a comparison study to determine the intrarater and interrater reliability and the limits of agreement of upper limb lymphedema volume estimated by circumferential measurements (using the frustum sign method and the disk model method), by water displacement volumetry, and by infrared optoelectronic volumetry. Thirty women with lymphedema following axillary lymph node dissection for breast cancer were enrolled. In each patient, the volumes of the upper limbs were estimated by three physical therapists using circumference measurements, water displacement volumetry and optoelectronic volumetry; one of the therapists performed each measurement twice. Intraclass correlation coefficients (ICCs), relative differences, and limits of agreement were determined. Intrarater and interrater reliability ICCs ranged from 0.94 to 1. Intrarater relative differences were 1.9% for the disk model method, 3.2% for the frustum sign method, 2.9% for water displacement volumetry, and 1.5% for optoelectronic volumetry. Intrarater reliability was always better than interrater reliability, except for the optoelectronic method. Intrarater and interrater limits of agreement were calculated for each technique. The disk model method and optoelectronic volumetry had better reliability than the frustum sign method and water displacement volumetry, which is usually considered the gold standard. In terms of low cost, simplicity, and reliability, we recommend the disk model method as the method of choice in clinical practice. Since intrarater reliability was always better than interrater reliability (except for optoelectronic volumetry), patients should ideally always be evaluated by the same therapist. Additionally, the limits of agreement must be taken into account when determining a patient's response to treatment.
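
    The limits of agreement follow Bland and Altman: bias = mean of the paired differences, limits = bias +/- 1.96 times the SD of the differences. A minimal sketch with hypothetical volumes:

        import numpy as np

        def limits_of_agreement(m1, m2):
            # Bland-Altman 95% limits of agreement between two methods.
            d = np.asarray(m1, float) - np.asarray(m2, float)
            bias = d.mean()
            sd = d.std(ddof=1)
            return bias, bias - 1.96 * sd, bias + 1.96 * sd

        # Hypothetical arm volumes (ml): disk model vs water displacement.
        disk = [2840, 3120, 2510, 2950, 3300, 2780]
        water = [2805, 3160, 2545, 2910, 3345, 2750]
        bias, lo, hi = limits_of_agreement(disk, water)
        print(f"bias = {bias:.0f} ml, LoA = [{lo:.0f}, {hi:.0f}] ml")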

  19. Interpreting the cross-sectional flow field in a river bank based on a genetic-algorithm two-dimensional heat-transport method (GA-VS2DH)

    NASA Astrophysics Data System (ADS)

    Su, Xiaoru; Shu, Longcang; Chen, Xunhong; Lu, Chengpeng; Wen, Zhonghui

    2016-12-01

    Interactions between surface water and groundwater are of great significance for evaluating water resources and protecting ecosystem health. Heat is widely used as a tracer to determine this interactive exchange, with high precision, low cost and great convenience. Flow in a river-bank cross-section occurs in both vertical and lateral directions. In order to depict the flow paths and their spatial distribution in bank areas, a genetic-algorithm (GA) two-dimensional (2-D) heat-transport nested-loop method for variably saturated sediments, GA-VS2DH, was developed based on Microsoft Visual Basic 6.0. VS2DH was applied to model a 2-D bank-water flow field, and a GA was used to calibrate the model automatically by minimizing the difference between observed and simulated temperatures in bank areas. A hypothetical model was developed to assess the reliability of GA-VS2DH for inverse modeling in a river-bank system, and benchmark tests were conducted to characterize its capability. The results indicated that the seepage velocities and parameters estimated with GA-VS2DH were acceptable and reliable. GA-VS2DH was then applied to two field sites in China with different sedimentary materials to verify the reliability of the method, showing that it can be applied to interpret the cross-sectional 2-D water flow field. The estimates of horizontal hydraulic conductivity at the Dawen River and Qinhuai River sites are 1.317 and 0.015 m/day, which correspond to the sand and clay sediments at the two sites, respectively.
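
    The GA calibration loop can be sketched generically: propose parameter sets, score them by the misfit between observed and simulated temperatures, and evolve the population. The forward model below is only a stand-in for VS2DH, and every parameter is hypothetical:

        import numpy as np

        rng = np.random.default_rng(3)

        def forward_model(params, times):
            # Stand-in for the VS2DH forward run: temperature at one
            # observation point given (K, n); purely hypothetical.
            K, n = params
            return 15.0 + 5.0 * np.exp(-n * times) * np.tanh(K * times)

        times = np.linspace(0.1, 5.0, 25)
        observed = forward_model((1.3, 0.35), times) + rng.normal(0, 0.05, 25)

        def fitness(p):
            return -np.mean((forward_model(p, times) - observed) ** 2)

        # Minimal real-coded GA: tournament selection, blend crossover,
        # Gaussian mutation, bounds clipping.
        bounds = np.array([[0.01, 5.0], [0.05, 0.6]])   # K (m/day), n (-)
        pop = rng.uniform(bounds[:, 0], bounds[:, 1], (40, 2))
        for gen in range(60):
            f = np.array([fitness(p) for p in pop])
            pairs = rng.integers(0, 40, (40, 2))
            winners = np.where(f[pairs[:, 0]] > f[pairs[:, 1]],
                               pairs[:, 0], pairs[:, 1])
            parents = pop[winners]
            alpha = rng.uniform(0, 1, (40, 1))
            pop = alpha * parents + (1 - alpha) * parents[::-1]
            pop += rng.normal(0, 0.02, pop.shape)
            pop = np.clip(pop, bounds[:, 0], bounds[:, 1])
        best = pop[np.argmax([fitness(p) for p in pop])]
        print(best)   # should move toward the synthetic truth (1.3, 0.35)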

  20. Application of positron emission tomography to determine cerebral glucose utilization in conscious infant monkeys.

    PubMed

    Moore, A H; Cherry, S R; Pollack, D B; Hovda, D A; Phelps, M E

    1999-05-01

    Cerebral glucose metabolism has been used as a marker of cerebral maturation and neuroplasticity. In studies addressing these issues in young non-human primates, investigators have used positron emission tomography (PET) and [18F]2-fluoro-2-deoxy-D-glucose (FDG) to calculate local cerebral metabolic rates of glucose (lCMRGlc). Unfortunately, these values were influenced by anesthesia. In order to avoid this confounding factor, we have established a method that permits reliable measurements in young conscious vervet monkeys using FDG-PET. Immature animals remained in a conscious, resting state during the initial 42 min of FDG uptake as they were allowed to cling to their anesthetized mothers. After FDG uptake, animals were anesthetized and placed in the PET scanner with data acquisition beginning at 60 min post-FDG injection. FDG image sets consisted of 30 planes separated by 1.69 mm, parameters sufficient to image the entire monkey brain. Our method of region-of-interest (ROI) analysis was assessed within and between raters and demonstrated high reliability (P < 0.001). To illustrate that our method was sensitive to developmental changes in cerebral glucose metabolism, quantitative studies of young conscious monkeys revealed that infant monkeys 6-8 months of age exhibited significantly higher lCMRGlc values (P < 0.05) in all regions examined, except sensorimotor cortex and thalamus, compared to monkeys younger than 4 months of age. This method provided high resolution images and lCMRGlc values that were reliable within age group. These results support the application of FDG-PET to investigate questions related to cerebral glucose metabolism in young conscious non-human primates.
