Universal first-order reliability concept applied to semistatic structures
NASA Technical Reports Server (NTRS)
Verderaime, V.
1994-01-01
A reliability design concept was developed for semistatic structures that combines the prevailing deterministic method with the first-order reliability method. The proposed method surmounts deterministic deficiencies in providing uniformly reliable structures and improved safety audits. It supports risk analyses and a reliability selection criterion. The method provides a reliability design factor, derived from the reliability criterion, which is analogous to the current safety factor for sizing structures and verifying reliability response. The universal first-order reliability method should also be applicable to the semistatic structures of air and surface vehicles.
Universal first-order reliability concept applied to semistatic structures
NASA Astrophysics Data System (ADS)
Verderaime, V.
1994-07-01
A reliability design concept was developed for semistatic structures that combines the prevailing deterministic method with the first-order reliability method. The proposed method surmounts deterministic deficiencies in providing uniformly reliable structures and improved safety audits. It supports risk analyses and a reliability selection criterion. The method provides a reliability design factor, derived from the reliability criterion, which is analogous to the current safety factor for sizing structures and verifying reliability response. The universal first-order reliability method should also be applicable to the semistatic structures of air and surface vehicles.
Illustrated structural application of universal first-order reliability method
NASA Technical Reports Server (NTRS)
Verderaime, V.
1994-01-01
The general application of the proposed first-order reliability method was achieved through the universal normalization of engineering probability distribution data. The method superimposes prevailing deterministic techniques and practices on the first-order reliability method to surmount deficiencies of the deterministic method and provide the benefits of reliability techniques and predictions. A reliability design factor is derived from the reliability criterion to satisfy a specified reliability and is analogous to the deterministic safety factor. Its application is numerically illustrated on several practical structural design and verification cases with interesting results and insights. Two concepts of reliability selection criteria are suggested. Though the method was developed to support affordable structures for access to space, it should also be applicable to most high-performance air and surface transportation systems.
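To make the design-factor idea concrete, the sketch below solves the classical safety-index expression for the ratio of mean strength to mean stress that meets a target reliability, assuming independent, normally distributed stress and strength with known coefficients of variation (an illustrative assumption; the papers' own normalization of distribution data is more general).

```python
from math import sqrt
from scipy.stats import norm

def reliability_design_factor(target_reliability, v_strength, v_stress):
    """Ratio of mean strength to mean stress needed to reach a target
    reliability, assuming independent normal stress and strength.
    v_* are coefficients of variation; requires beta * v_strength < 1."""
    beta = norm.ppf(target_reliability)          # safety index
    a = 1.0 - beta**2 * v_strength**2
    disc = 1.0 - a * (1.0 - beta**2 * v_stress**2)
    return (1.0 + sqrt(disc)) / a                # root with k > 1

if __name__ == "__main__":
    # Illustrative numbers only (not from the papers).
    k = reliability_design_factor(0.999, v_strength=0.05, v_stress=0.10)
    print(f"required design factor k = {k:.3f}")
    # Sanity check: back out the safety index implied by k.
    beta = (k - 1.0) / sqrt(k**2 * 0.05**2 + 0.10**2)
    print(f"implied safety index     = {beta:.3f}")  # ~ norm.ppf(0.999) = 3.090
```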
On Some Methods in Safety Evaluation in Geotechnics
NASA Astrophysics Data System (ADS)
Puła, Wojciech; Zaskórski, Łukasz
2015-06-01
The paper demonstrates how reliability methods can be utilised to evaluate safety in geotechnics. Special attention is paid to so-called reliability-based design, which can play a useful and complementary role to Eurocode 7. In the first part, a brief review of first- and second-order reliability methods is given. Next, two examples of reliability-based design are demonstrated. The first one is focussed on bearing capacity calculation and is dedicated to comparison with EC7 requirements. The second one analyses a rigid pile subjected to lateral load and is oriented towards the working stress design method. In the second part, applications of random fields to safety evaluations in geotechnics are addressed. After a short review of the theory, a Random Finite Element algorithm for the reliability-based design of a shallow strip foundation is given. Finally, two illustrative examples for cohesive and cohesionless soils are demonstrated.
NASA Astrophysics Data System (ADS)
Meng, Zeng; Yang, Dixiong; Zhou, Huanlin; Yu, Bo
2018-05-01
The first-order reliability method has been extensively adopted for reliability-based design optimization (RBDO), but it shows inaccuracy in calculating the failure probability with highly nonlinear performance functions. Thus, the second-order reliability method is required to evaluate the reliability accurately. However, its application to RBDO is quite challenging owing to the expensive computational cost incurred by the repeated reliability evaluations and Hessian calculations of the probabilistic constraints. In this article, a new improved stability transformation method is proposed to search for the most probable point efficiently, and the Hessian matrix is calculated by the symmetric rank-one update. The computational capability of the proposed method is illustrated and compared to existing RBDO approaches through three mathematical and two engineering examples. The comparison results indicate that the proposed method is very efficient and accurate, providing an alternative tool for RBDO of engineering structures.
Structural Reliability Analysis and Optimization: Use of Approximations
NASA Technical Reports Server (NTRS)
Grandhi, Ramana V.; Wang, Liping
1999-01-01
This report is intended for the demonstration of function approximation concepts and their applicability in reliability analysis and design. In particular, approximations in the calculation of the safety index, the failure probability, and structural optimization (modification of design variables) are developed. With this scope in mind, extensive details on probability theory are avoided. Definitions relevant to the stated objectives have been taken from standard textbooks. The idea of function approximation is to minimize the repetitive use of computationally intensive calculations by replacing them with simpler closed-form equations, which could be nonlinear. Typically, the approximations provide good accuracy around the points where they are constructed, and they need to be periodically updated to extend their utility. Two kinds of approximation arise in calculating the failure probability of a limit state function. The first, which is most commonly discussed, is how the limit state is approximated at the design point. Most of the time this could be a first-order Taylor series expansion, also known as the First Order Reliability Method (FORM), or a second-order Taylor series expansion (paraboloid), also known as the Second Order Reliability Method (SORM). From the computational procedure point of view, this step comes after the design point identification; however, the order of approximation for the probability of failure calculation is discussed first, and it is denoted by either FORM or SORM. The other approximation of interest is how the design point, or most probable failure point (MPP), is identified. For iteratively finding this point, again the limit state is approximated. The accuracy and efficiency of the approximations make the search process quite practical for analysis-intensive approaches such as finite element methods; therefore, the crux of this research is to develop excellent approximations for MPP identification and also different approximations, including the higher-order reliability methods (HORM), for representing the failure surface. This report is divided into several parts to emphasize different segments of structural reliability analysis and design. Broadly, it consists of mathematical foundations, methods and applications. Chapter 1 discusses the fundamental definitions of probability theory, which are mostly available in standard textbooks. Probability density function descriptions relevant to this work are addressed. In Chapter 2, the concept and utility of function approximation are discussed for general application in engineering analysis. Various forms of function representation and the latest developments in nonlinear adaptive approximations are presented with comparison studies. Research work accomplished in reliability analysis is presented in Chapter 3. First, the definitions of the safety index and the most probable point of failure are introduced. Efficient ways of computing the safety index with fewer iterations are emphasized. In Chapter 4, the prediction of the probability of failure is presented using first-order, second-order and higher-order methods. System reliability methods are discussed in Chapter 5. Chapter 6 presents optimization techniques for the modification and redistribution of structural sizes to improve structural reliability. The report also contains several appendices on probability parameters.
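As a minimal illustration of the MPP search discussed above, the sketch below implements the standard Hasofer-Lind/Rackwitz-Fiessler (HL-RF) iteration for a limit state already expressed in standard normal space; the limit state is a common textbook benchmark and the code is not taken from the report.

```python
import numpy as np
from scipy.stats import norm

def form_hlrf(g, u0, tol=1e-6, max_iter=50, h=1e-6):
    """Hasofer-Lind/Rackwitz-Fiessler iteration for the most probable
    point (MPP) of a limit state g(u) = 0 in standard normal space."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        gu = g(u)
        # forward-difference gradient of the limit state
        grad = np.array([(g(u + h * e) - gu) / h for e in np.eye(len(u))])
        u_new = (grad @ u - gu) * grad / (grad @ grad)
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return u, np.linalg.norm(u)          # MPP and safety index beta

if __name__ == "__main__":
    # Toy limit state in standard normal space (illustrative only).
    g = lambda u: 0.1 * (u[0] - u[1])**2 - (u[0] + u[1]) / np.sqrt(2) + 2.5
    u_star, beta = form_hlrf(g, u0=[0.0, 0.0])
    print(f"beta = {beta:.4f}, Pf (FORM) = {norm.cdf(-beta):.3e}")
```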
A single-loop optimization method for reliability analysis with second order uncertainty
NASA Astrophysics Data System (ADS)
Xie, Shaojun; Pan, Baisong; Du, Xiaoping
2015-08-01
Reliability analysis may involve random variables and interval variables. In addition, some of the random variables may have interval distribution parameters owing to limited information. This kind of uncertainty is called second order uncertainty. This article develops an efficient reliability method for problems involving the three aforementioned types of uncertain input variables. The analysis produces the maximum and minimum reliability and is computationally demanding because two loops are needed: a reliability analysis loop with respect to random variables and an interval analysis loop for extreme responses with respect to interval variables. The first order reliability method and nonlinear optimization are used for the two loops, respectively. For computational efficiency, the two loops are combined into a single loop by treating the Karush-Kuhn-Tucker (KKT) optimality conditions of the interval analysis as constraints. Three examples are presented to demonstrate the proposed method.
Optimization Based Efficiencies in First Order Reliability Analysis
NASA Technical Reports Server (NTRS)
Peck, Jeffrey A.; Mahadevan, Sankaran
2003-01-01
This paper develops a method for updating the gradient vector of the limit state function in reliability analysis using Broyden's rank one updating technique. In problems that use commercial code as a black box, the gradient calculations are usually done using a finite difference approach, which becomes very expensive for large system models. The proposed method replaces the finite difference gradient calculations in a standard first order reliability method (FORM) with Broyden's quasi-Newton technique. The resulting algorithm of Broyden updates within a FORM framework (BFORM) is used to run several example problems, and the results are compared to standard FORM results. It is found that BFORM typically requires fewer function evaluations than FORM to converge to the same answer.
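A minimal sketch of the rank-one (secant) gradient update that BFORM substitutes for finite differences is shown below; the quadratic limit state and the starting gradient are invented for illustration and are not from the paper.

```python
import numpy as np

def broyden_gradient_update(grad_old, du, dg):
    """Rank-one (secant) update of the gradient estimate of a scalar
    limit-state function: enforces grad_new @ du == dg."""
    du = np.asarray(du, dtype=float)
    return grad_old + (dg - grad_old @ du) / (du @ du) * du

if __name__ == "__main__":
    # Illustrative quadratic limit state; the exact gradient is known at u0.
    g = lambda u: 3.0 - u[0]**2 - 0.5 * u[1]
    u0, u1 = np.array([1.0, 1.0]), np.array([1.2, 0.9])
    grad0 = np.array([-2.0 * u0[0], -0.5])      # exact gradient at u0
    grad1 = broyden_gradient_update(grad0, u1 - u0, g(u1) - g(u0))
    print(grad1)    # secant approximation to the gradient near u1
```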
First Order Reliability Application and Verification Methods for Semistatic Structures
NASA Technical Reports Server (NTRS)
Verderaime, Vincent
1994-01-01
Escalating risks of aerostructures stimulated by increasing size, complexity, and cost should no longer be ignored by conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments, its stress audits are shown to be arbitrary and incomplete, and it compromises high-strength materials performance. A reliability method is proposed which combines first order reliability principles with deterministic design variables and conventional test techniques to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety index expression. The application is reduced to solving for a factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and with the pace of semistatic structural designs.
First-order reliability application and verification methods for semistatic structures
NASA Astrophysics Data System (ADS)
Verderaime, V.
1994-11-01
Escalating risks of aerostructures stimulated by increasing size, complexity, and cost should no longer be ignored in conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments; stress audits are shown to be arbitrary and incomplete, and the concept compromises the performance of high-strength materials. A reliability method is proposed that combines first-order reliability principles with deterministic design variables and conventional test techniques to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety-index expression. The application is reduced to solving for a design factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this design factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and the development of semistatic structural designs.
Analytical Algorithms to Quantify the Uncertainty in Remaining Useful Life Prediction
NASA Technical Reports Server (NTRS)
Sankararaman, Shankar; Saxena, Abhinav; Daigle, Matthew; Goebel, Kai
2013-01-01
This paper investigates the use of analytical algorithms to quantify the uncertainty in the remaining useful life (RUL) estimate of components used in aerospace applications. The prediction of RUL is affected by several sources of uncertainty, and it is important to systematically quantify their combined effect by computing the uncertainty in the RUL prediction in order to aid risk assessment, risk mitigation, and decision-making. While sampling-based algorithms have been conventionally used for quantifying the uncertainty in RUL, analytical algorithms are computationally cheaper and sometimes better suited for online decision-making. While exact analytical algorithms are available only for certain special cases (e.g., linear models with Gaussian variables), effective approximations can be made using the first-order second-moment method (FOSM), the first-order reliability method (FORM), and the inverse first-order reliability method (inverse FORM). These methods can be used not only to calculate the entire probability distribution of RUL but also to obtain probability bounds on RUL. This paper explains these three methods in detail and illustrates them using the state-space model of a lithium-ion battery.
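The flavor of the first-order (FOSM) approximation for RUL can be shown with a hypothetical linear degradation model; the threshold and parameter values below are placeholders, not the battery state-space model used in the paper.

```python
from math import sqrt
from scipy.stats import norm

# FOSM propagation for the RUL of a hypothetical linear degradation model
# x(t) = x0 + r*t with failure threshold D, so RUL = (D - x0) / r.
D = 1.0                       # failure threshold (assumed)
mu_x0, sd_x0 = 0.20, 0.02     # current damage state (assumed)
mu_r,  sd_r  = 0.010, 0.002   # degradation rate per hour (assumed)

mu_rul = (D - mu_x0) / mu_r
# partial derivatives of RUL evaluated at the mean point
d_x0 = -1.0 / mu_r
d_r  = -(D - mu_x0) / mu_r**2
sd_rul = sqrt((d_x0 * sd_x0)**2 + (d_r * sd_r)**2)

print(f"RUL ~ {mu_rul:.0f} +/- {sd_rul:.0f} hours (FOSM)")
# e.g. probability that RUL exceeds a 60-hour mission (Gaussian assumption)
print(f"P(RUL > 60 h) ~ {1 - norm.cdf(60, mu_rul, sd_rul):.3f}")
```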
Probability techniques for reliability analysis of composite materials
NASA Technical Reports Server (NTRS)
Wetherhold, Robert C.; Ucci, Anthony M.
1994-01-01
Traditional design approaches for composite materials have employed deterministic criteria for failure analysis. New approaches are required to predict the reliability of composite structures since strengths and stresses may be random variables. This report will examine and compare methods used to evaluate the reliability of composite laminae. The two types of methods that will be evaluated are fast probability integration (FPI) methods and Monte Carlo methods. In these methods, reliability is formulated as the probability that an explicit function of random variables is less than a given constant. Using failure criteria developed for composite materials, a function of design variables can be generated which defines a 'failure surface' in probability space. A number of methods are available to evaluate the integration over the probability space bounded by this surface; this integration delivers the required reliability. The methods which will be evaluated are: the first order, second moment FPI methods; second order, second moment FPI methods; the simple Monte Carlo; and an advanced Monte Carlo technique which utilizes importance sampling. The methods are compared for accuracy, efficiency, and for the conservatism of the reliability estimation. The methodology involved in determining the sensitivity of the reliability estimate to the design variables (strength distributions) and importance factors is also presented.
Mechanical system reliability for long life space systems
NASA Technical Reports Server (NTRS)
Kowal, Michael T.
1994-01-01
The creation of a compendium of mechanical limit states was undertaken in order to provide a reference base for the application of first-order reliability methods to mechanical systems in the context of the development of a system level design methodology. The compendium was conceived as a reference source specific to the problem of developing the noted design methodology, and not an exhaustive or exclusive compilation of mechanical limit states. The compendium is not intended to be a handbook of mechanical limit states for general use. The compendium provides a diverse set of limit-state relationships for use in demonstrating the application of probabilistic reliability methods to mechanical systems. The compendium is to be used in the reliability analysis of moderately complex mechanical systems.
NASA Technical Reports Server (NTRS)
Krishnamurthy, T.; Romero, V. J.
2002-01-01
The usefulness of piecewise polynomials with C1 and C2 derivative continuity for response surface construction is examined. A Moving Least Squares (MLS) method is developed and compared with four other interpolation methods, including kriging. First, the selected methods are applied and compared with one another on a two-design-variable problem with a known theoretical response function. Next, the methods are tested on a four-design-variable problem from a reliability-based design application. In general, the piecewise polynomial methods with higher-order derivative continuity produce less error in the response prediction. The MLS method was found to be superior for response surface construction among the methods evaluated.
NASA Astrophysics Data System (ADS)
Yu, Bo; Ning, Chao-lie; Li, Bing
2017-03-01
A probabilistic framework for durability assessment of concrete structures in marine environments was proposed in terms of reliability and sensitivity analysis, which takes into account the uncertainties in environmental, material, structural and execution conditions. A time-dependent probabilistic model of chloride ingress was established first to consider the variations in various governing parameters, such as the chloride concentration, chloride diffusion coefficient, and age factor. Then the Nataf transformation was adopted to transform the non-normal random variables from the original physical space into the independent standard normal space. After that, the durability limit state function and its gradient vector with respect to the original physical parameters were derived analytically, based on which the first-order reliability method was adopted to analyze the time-dependent reliability and parametric sensitivity of concrete structures in marine environments. The accuracy of the proposed method was verified by comparison with the second-order reliability method and Monte Carlo simulation. Finally, the influences of environmental conditions, material properties, structural parameters and execution conditions on the time-dependent reliability of concrete structures in marine environments were also investigated. The proposed probabilistic framework can be implemented in the decision-making algorithm for the maintenance and repair of deteriorating concrete structures in marine environments.
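The ingredients of such an assessment can be sketched with a crude Monte Carlo estimate of the corrosion-initiation probability for a Fickian chloride-ingress model with an age factor; the distributions and parameter values below are invented for illustration, and the paper itself uses FORM with the Nataf transformation rather than sampling.

```python
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(0)
n = 100_000
t = 50.0                                          # years in service

# Illustrative lognormal/normal parameters (not the paper's values).
Cs   = rng.lognormal(np.log(3.0), 0.3, n)         # surface chloride, % binder
Dref = rng.lognormal(np.log(3e-12), 0.2, n)       # diffusivity at t_ref = 1 yr, m^2/s
a    = rng.normal(0.40, 0.08, n)                  # age factor
cov  = rng.normal(0.075, 0.010, n)                # cover depth, m
Ccr  = rng.normal(0.60, 0.10, n)                  # critical chloride content, % binder

sec_per_yr = 365.25 * 24 * 3600.0
D_app = Dref * (1.0 / t) ** a                     # apparent diffusivity at age t
# chloride content at the rebar after t years (error-function solution)
C_cover = Cs * (1.0 - erf(cov / (2.0 * np.sqrt(D_app * t * sec_per_yr))))

pf = np.mean(C_cover >= Ccr)                      # corrosion-initiation probability
print(f"P(initiation within {t:.0f} years) ~ {pf:.3f}")
```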
Probabilistic Analysis of a Composite Crew Module
NASA Technical Reports Server (NTRS)
Mason, Brian H.; Krishnamurthy, Thiagarajan
2011-01-01
An approach for conducting reliability-based analysis (RBA) of a Composite Crew Module (CCM) is presented. The goal is to identify and quantify the benefits of probabilistic design methods for the CCM and future space vehicles. The coarse finite element model from a previous NASA Engineering and Safety Center (NESC) project is used as the baseline deterministic analysis model to evaluate the performance of the CCM using a strength-based failure index. The first step in the probabilistic analysis process is the determination of the uncertainty distributions for key parameters in the model. Analytical data from water landing simulations are used to develop an uncertainty distribution, but such data were unavailable for other load cases. The uncertainty distributions for the other load scale factors and the strength allowables are generated based on assumed coefficients of variation. Probability of first-ply failure is estimated using three methods: the first order reliability method (FORM), Monte Carlo simulation, and conditional sampling. Results for the three methods were consistent. The reliability is shown to be driven by first ply failure in one region of the CCM at the high altitude abort load set. The final predicted probability of failure is on the order of 10^{-11} due to the conservative nature of the factors of safety on the deterministic loads.
NASA Astrophysics Data System (ADS)
Gao, X.; Yan, E. C.; Yeh, T. C. J.; Wang, Y.; Liang, Y.; Hao, Y.
2017-12-01
Most underground liquefied petroleum gas (LPG) storage caverns are constructed as unlined rock caverns (URCs), where the variability of hydraulic properties (in particular, hydraulic conductivity) has significant impacts on hydrologic containment performance. However, it is practically impossible to characterize the spatial distribution of these properties in detail at the site of URCs. This dilemma forces us to cope with uncertainty in our evaluations of gas containment. As a consequence, an uncertainty-based analysis is deemed more appropriate than the traditional deterministic analysis. The objectives of this paper are 1) to introduce a numerical first-order method to calculate the gas containment reliability within heterogeneous, two-dimensional unlined rock caverns, and 2) to suggest a strategy for improving the gas containment reliability. In order to achieve these goals, we first introduced the stochastic continuum representation of the saturated hydraulic conductivity (Ks) of fractured rock and analyzed the spatial variability of Ks at a field site. We then conducted deterministic simulations to demonstrate the importance of the heterogeneity of Ks in the analysis of the gas tightness performance of URCs. Considering the uncertainty of the heterogeneity in real-world situations, we subsequently developed a numerical first-order method (NFOM) to determine the gas tightness reliability at crucial locations of URCs. Using the NFOM, the effect of the spatial variability of Ks on gas tightness reliability was investigated. Results show that as the variance or spatial structure anisotropy of Ks increases, the gas tightness reliability at most of the crucial locations decreases. Meanwhile, we compare the results of the NFOM with those of Monte Carlo simulation and find that the accuracy of the NFOM is mainly affected by the magnitude of the variance of Ks. Finally, to improve the gas containment reliability at crucial locations at this study site, we suggest that vertical water-curtain holes be installed in the pillar rather than increasing the density of horizontal water-curtain boreholes.
Bai, Xiao-ping; Zhang, Xi-wei
2013-01-01
Selecting construction schemes for a building engineering project is a complex multiobjective optimization decision process, in which many indexes need to be selected to find the optimum scheme. Aiming at this problem, this paper selects cost, progress, quality, and safety as the four first-order evaluation indexes, uses a quantitative method for the cost index, uses integrated qualitative and quantitative methodologies for the progress, quality, and safety indexes, and integrates engineering economics, reliability theories, and information entropy theory to present a new evaluation method for building construction projects. Combined with a practical case, this paper also presents detailed computing processes and steps, including selecting all order indexes, establishing the index matrix, computing score values of all order indexes, computing the synthesis score, sorting all selected schemes, and making the analysis and decision. The presented method can offer valuable references for risk computing of building construction projects.
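A small sketch of the information-entropy weighting step that such a synthesis score typically relies on is given below; the index matrix and the assumption that all indexes are already scaled so that larger is better are illustrative, not taken from the paper.

```python
import numpy as np

# Entropy-weight scoring of candidate construction schemes (illustrative
# numbers only). Rows = schemes; columns = the four first-order indexes
# cost, progress, quality, safety, pre-scaled so that larger is better.
X = np.array([[0.80, 0.70, 0.90, 0.85],
              [0.90, 0.60, 0.80, 0.80],
              [0.70, 0.85, 0.75, 0.90]])

P = X / X.sum(axis=0)                           # column-wise proportions
k = 1.0 / np.log(X.shape[0])
entropy = -k * (P * np.log(P)).sum(axis=0)      # information entropy per index
weights = (1 - entropy) / (1 - entropy).sum()   # low entropy -> high weight
scores = X @ weights                            # synthesis score per scheme

best = int(np.argmax(scores))
print("index weights :", np.round(weights, 3))
print("scheme scores :", np.round(scores, 3), "-> best scheme:", best + 1)
```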
Terry, Leann; Kelley, Ken
2012-11-01
Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.
On the Local Convergence of Pattern Search
NASA Technical Reports Server (NTRS)
Dolan, Elizabeth D.; Lewis, Robert Michael; Torczon, Virginia; Bushnell, Dennis M. (Technical Monitor)
2000-01-01
We examine the local convergence properties of pattern search methods, complementing the previously established global convergence properties for this class of algorithms. We show that the step-length control parameter which appears in the definition of pattern search algorithms provides a reliable asymptotic measure of first-order stationarity. This gives an analytical justification for a traditional stopping criterion for pattern search methods. Using this measure of first-order stationarity, we analyze the behavior of pattern search in the neighborhood of an isolated local minimizer. We show that a recognizable subsequence converges r-linearly to the minimizer.
Hall, William J
2016-11-01
This article describes the development and preliminary validation of the Bullying, Harassment, and Aggression Receipt Measure (BullyHARM). The development of the BullyHARM involved a number of steps and methods, including a literature review, expert review, cognitive testing, readability testing, data collection from a large sample, reliability testing, and confirmatory factor analysis. A sample of 275 middle school students was used to examine the psychometric properties and factor structure of the BullyHARM, which consists of 22 items and 6 subscales: physical bullying, verbal bullying, social/relational bullying, cyber-bullying, property bullying, and sexual bullying. First-order and second-order factor models were evaluated. Results demonstrate that the first-order factor model had superior fit. Results of reliability testing indicate that the BullyHARM scale and subscales have very good internal consistency reliability. Findings indicate that the BullyHARM has good properties regarding content validation and respondent-related validation and is a promising instrument for measuring bullying victimization in school.
Hall, William J.
2017-01-01
This article describes the development and preliminary validation of the Bullying, Harassment, and Aggression Receipt Measure (BullyHARM). The development of the BullyHARM involved a number of steps and methods, including a literature review, expert review, cognitive testing, readability testing, data collection from a large sample, reliability testing, and confirmatory factor analysis. A sample of 275 middle school students was used to examine the psychometric properties and factor structure of the BullyHARM, which consists of 22 items and 6 subscales: physical bullying, verbal bullying, social/relational bullying, cyber-bullying, property bullying, and sexual bullying. First-order and second-order factor models were evaluated. Results demonstrate that the first-order factor model had superior fit. Results of reliability testing indicate that the BullyHARM scale and subscales have very good internal consistency reliability. Findings indicate that the BullyHARM has good properties regarding content validation and respondent-related validation and is a promising instrument for measuring bullying victimization in school. PMID:28194041
Li, Haibin; He, Yun; Nie, Xiaobo
2018-01-01
Structural reliability analysis under uncertainty has received wide attention from engineers and scholars because it reflects the structural characteristics and the actual bearing conditions. The direct integration method, which starts from the definition of reliability, is easy to understand, but the calculation of the multiple integrals remains mathematically difficult. Therefore, a dual neural network method is proposed in this paper for calculating the multiple integrals. The dual neural network consists of two neural networks: neural network A is used to learn the integrand function, and neural network B is used to represent the original (antiderivative) function. According to the derivative relationship between the network output and the network input, neural network B is derived from neural network A. On this basis, a normalized performance function is employed in the proposed method to overcome the difficulty of multiple integration and to improve the accuracy of the reliability calculations. Comparisons between the proposed method and the Monte Carlo simulation method, the Hasofer-Lind method, and the mean-value first-order second-moment method demonstrate that the proposed method is an efficient and accurate reliability method for structural reliability problems.
NASA Astrophysics Data System (ADS)
Wei, Jingwen; Dong, Guangzhong; Chen, Zonghai
2017-10-01
With the rapid development of battery-powered electric vehicles, the lithium-ion battery plays a critical role in the reliability of the vehicle system. In order to provide timely management and protection for battery systems, it is necessary to develop a reliable battery model and accurate battery parameter estimation to describe battery dynamic behaviors. Therefore, this paper focuses on an on-board adaptive model for state-of-charge (SOC) estimation of lithium-ion batteries. First, a first-order equivalent circuit battery model is employed to describe the battery's dynamic characteristics. Then, the recursive least squares algorithm and an off-line identification method are used to provide good initial values of the model parameters to ensure filter stability and reduce the convergence time. Next, an extended Kalman filter (EKF) is applied to estimate the battery SOC and model parameters on line. Considering that the EKF is essentially a first-order Taylor approximation of the battery model, which contains inevitable model errors, a proportional-integral-based error adjustment technique is employed to improve the performance of the EKF method and correct the model parameters. Finally, the experimental results on lithium-ion batteries indicate that the proposed EKF with proportional-integral-based error adjustment can provide a robust and accurate battery model and on-line parameter estimation.
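A compact sketch of the building blocks named above, a discretized first-order RC equivalent-circuit model and a basic EKF for SOC, is given below; all cell parameters, the linearized OCV curve, and the noise levels are hypothetical placeholders rather than values from the paper, and the proportional-integral correction is omitted.

```python
import numpy as np

dt, Q = 1.0, 3600.0 * 2.0            # 1 s step, 2 Ah capacity in coulombs (assumed)
R0, R1, C1 = 0.01, 0.015, 2400.0     # ohm, ohm, farad (assumed)
tau = R1 * C1

def ocv(z):                          # linearized OCV-SOC curve (assumption)
    return 3.3 + 0.7 * z

docv_dz = lambda z: 0.7

A = np.array([[1.0, 0.0],
              [0.0, np.exp(-dt / tau)]])
B = np.array([-dt / Q, R1 * (1.0 - np.exp(-dt / tau))])

rng = np.random.default_rng(1)
n = 1800
i = np.full(n, 1.0)                  # constant 1 A discharge

# --- simulate the "true" cell ---
x_true = np.array([0.9, 0.0])        # [SOC, RC-branch voltage]
y_meas, soc_true = np.empty(n), np.empty(n)
for k in range(n):
    soc_true[k] = x_true[0]
    y_meas[k] = ocv(x_true[0]) - x_true[1] - R0 * i[k] + rng.normal(0, 5e-3)
    x_true = A @ x_true + B * i[k]

# --- EKF with a deliberately wrong initial SOC ---
x = np.array([0.5, 0.0])
P = np.diag([0.1, 1e-3])
Qw = np.diag([1e-7, 1e-6])           # process noise covariance
Rv = 5e-3 ** 2                       # measurement noise variance
soc_est = np.empty(n)
for k in range(n):
    # measurement update with the terminal voltage at time k
    C = np.array([docv_dz(x[0]), -1.0])
    y_hat = ocv(x[0]) - x[1] - R0 * i[k]
    S = C @ P @ C + Rv
    K = P @ C / S
    x = x + K * (y_meas[k] - y_hat)
    P = (np.eye(2) - np.outer(K, C)) @ P
    soc_est[k] = x[0]
    # time update to k+1
    x = A @ x + B * i[k]
    P = A @ P @ A.T + Qw

print(f"final true SOC {soc_true[-1]:.3f}, EKF estimate {soc_est[-1]:.3f}")
```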
NASA Technical Reports Server (NTRS)
Rais-Rohani, Masoud
2003-01-01
This report discusses the development and application of two alternative strategies in the form of global and sequential local response surface (RS) techniques for the solution of reliability-based optimization (RBO) problems. The problem of a thin-walled composite circular cylinder under axial buckling instability is used as a demonstrative example. In this case, the global technique uses a single second-order RS model to estimate the axial buckling load over the entire feasible design space (FDS) whereas the local technique uses multiple first-order RS models with each applied to a small subregion of FDS. Alternative methods for the calculation of unknown coefficients in each RS model are explored prior to the solution of the optimization problem. The example RBO problem is formulated as a function of 23 uncorrelated random variables that include material properties, thickness and orientation angle of each ply, cylinder diameter and length, as well as the applied load. The mean values of the 8 ply thicknesses are treated as independent design variables. While the coefficients of variation of all random variables are held fixed, the standard deviations of ply thicknesses can vary during the optimization process as a result of changes in the design variables. The structural reliability analysis is based on the first-order reliability method with reliability index treated as the design constraint. In addition to the probabilistic sensitivity analysis of reliability index, the results of the RBO problem are presented for different combinations of cylinder length and diameter and laminate ply patterns. The two strategies are found to produce similar results in terms of accuracy with the sequential local RS technique having a considerably better computational efficiency.
Experience in estimating neutron poison worths
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiang, R.T.; Congdon, S.P.
1989-01-01
Gadolinia, ^{135}Xe, ^{149}Sm, control rods, and soluble boron are five neutron poisons that may appear in light water reactor assemblies. Reliable neutron poison worth estimation is useful for evaluating core operating strategies, fuel cycle economics, and reactor safety design. Based on physical presence, neutron poisons can be divided into two categories: local poisons and global poisons. Gadolinia and control rods are local poisons, and ^{135}Xe, ^{149}Sm, and soluble boron are global poisons. The first-order perturbation method is commonly used to estimate nuclide worths in fuel assemblies. It is well known, however, that the first-order perturbation method was developed for small perturbations, such as the perturbation due to weak absorbers, and that neutron poisons are not weak absorbers. The authors have developed an improved method to replace the first-order perturbation method, which yields very poor results for estimating local poison worths. It has also been shown that the first-order perturbation method seems adequate to estimate worths for global poisons, owing to flux compensation.
Second-order variational equations for N-body simulations
NASA Astrophysics Data System (ADS)
Rein, Hanno; Tamayo, Daniel
2016-07-01
First-order variational equations are widely used in N-body simulations to study how nearby trajectories diverge from one another. These allow for efficient and reliable determinations of chaos indicators such as the Maximal Lyapunov characteristic Exponent (MLE) and the Mean Exponential Growth factor of Nearby Orbits (MEGNO). In this paper we lay out the theoretical framework to extend the idea of variational equations to higher order. We explicitly derive the differential equations that govern the evolution of second-order variations in the N-body problem. Going to second order opens the door to new applications, including optimization algorithms that require the first and second derivatives of the solution, like the classical Newton's method. Typically, these methods have faster convergence rates than derivative-free methods. Derivatives are also required for Riemann manifold Langevin and Hamiltonian Monte Carlo methods which provide significantly shorter correlation times than standard methods. Such improved optimization methods can be applied to anything from radial-velocity/transit-timing-variation fitting to spacecraft trajectory optimization to asteroid deflection. We provide an implementation of first- and second-order variational equations for the publicly available REBOUND integrator package. Our implementation allows the simultaneous integration of any number of first- and second-order variational equations with the high-accuracy IAS15 integrator. We also provide routines to generate consistent and accurate initial conditions without the need for finite differencing.
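The role of first-order variational equations in estimating the MLE can be sketched on a low-dimensional stand-in for the N-body problem, a driven pendulum, using the usual tangent-vector renormalization; this is an illustration of the idea, not the REBOUND implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Driven pendulum plus its first-order variational (tangent) equations;
# the maximal Lyapunov exponent (MLE) is estimated by periodically
# renormalizing the tangent vector. Parameter values are illustrative.
b, F, w = 0.5, 1.2, 2.0 / 3.0        # damping, forcing amplitude, frequency

def rhs(t, y):
    th, om, dth, dom = y             # state + tangent (variational) vector
    return [om,
            -b * om - np.sin(th) + F * np.cos(w * t),
            dom,
            -b * dom - np.cos(th) * dth]      # linearized about (th, om)

y = np.array([0.2, 0.0, 1.0, 0.0])   # initial state and unit tangent vector
t, dt, log_growth = 0.0, 10.0, 0.0
for _ in range(300):
    sol = solve_ivp(rhs, (t, t + dt), y, rtol=1e-9, atol=1e-9)
    y = sol.y[:, -1]
    t += dt
    norm = np.hypot(y[2], y[3])
    log_growth += np.log(norm)
    y[2:] /= norm                     # renormalize the tangent vector

print(f"MLE estimate ~ {log_growth / t:.3f} (positive => chaotic)")
```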
NASA Astrophysics Data System (ADS)
Yang, Zhou; Zhu, Yunpeng; Ren, Hongrui; Zhang, Yimin
2015-03-01
Reliability allocation of computerized numerical controlled (CNC) lathes is very important in industry. Traditional allocation methods only focus on high-failure-rate components rather than moderate-failure-rate components, which is not applicable in some conditions. Aiming at solving the problem of reliability allocation for CNC lathes, a comprehensive reliability allocation method based on cubic transformed functions of failure modes and effects analysis (FMEA) is presented. First, conventional reliability allocation methods are introduced. Then the limitations of directly combining the comprehensive allocation method with the exponential transformed FMEA method are investigated. Subsequently, a cubic transformed function is established in order to overcome these limitations. Properties of the new transformed functions are discussed by considering the failure severity and the failure occurrence. Designers can choose appropriate transform amplitudes according to their requirements. Finally, a CNC lathe and a spindle system are used as examples to verify the new allocation method. Seven criteria are considered to compare the results of the new method with traditional methods. The allocation results indicate that the new method is more flexible than traditional methods. By employing the new cubic transformed function, the method covers a wider range of problems in CNC reliability allocation without losing the advantages of traditional methods.
Direct nuclear reaction experiments for stellar nucleosynthesis
NASA Astrophysics Data System (ADS)
Cherubini, S.
2017-09-01
During the last two decades, indirect methods were proposed and used in many experiments in order to measure nuclear cross sections between charged particles at stellar energies. These are among the lowest to be measured in nuclear physics. One of these methods, the Trojan Horse method, is based on the quasi-free reaction mechanism and has proved to be particularly flexible and reliable. It allowed for the measurement of the cross sections of various reactions of astrophysical interest using stable beams. The use and reliability of indirect methods become even more important when reactions induced by Radioactive Ion Beams are considered, given the much lower intensity generally available for these beams. The first Trojan Horse measurement of a process involving the use of a Radioactive Ion Beam dealt with the ^{18}F(p,α)^{15}O process in Nova conditions. To obtain information on this process, in particular about its cross section at Nova energies, the Trojan Horse method was applied to the ^{18}F(d,α^{15}O)n three-body reaction. In order to establish the reliability of the Trojan Horse method approach, the Treiman-Yang criterion is an important test, and it will be addressed briefly in this paper.
NASA Technical Reports Server (NTRS)
Rais-Rohani, Masoud
2001-01-01
This report describes the preliminary results of an investigation on component reliability analysis and reliability-based design optimization of thin-walled circular composite cylinders with average diameter and average length of 15 inches. Structural reliability is based on the axial buckling strength of the cylinder. Both Monte Carlo simulation and the First Order Reliability Method are considered for reliability analysis, with the latter incorporated into the reliability-based structural optimization problem. To improve the efficiency of reliability sensitivity analysis and design optimization solution, the buckling strength of the cylinder is estimated using a second-order response surface model. The sensitivity of the reliability index with respect to the mean and standard deviation of each random variable is calculated and compared. The reliability index is found to be extremely sensitive to the applied load and elastic modulus of the material in the fiber direction. The cylinder diameter was found to have the third highest impact on the reliability index. Also the uncertainty in the applied load, captured by examining different values for its coefficient of variation, is found to have a large influence on cylinder reliability. The optimization problem for minimum weight is solved subject to a design constraint on element reliability index. The methodology, solution procedure and optimization results are included in this report.
Markov modeling and reliability analysis of urea synthesis system of a fertilizer plant
NASA Astrophysics Data System (ADS)
Aggarwal, Anil Kr.; Kumar, Sanjeev; Singh, Vikram; Garg, Tarun Kr.
2015-12-01
This paper deals with the Markov modeling and reliability analysis of the urea synthesis system of a fertilizer plant. This system was modeled using a Markov birth-death process with the assumption that the failure and repair rates of each subsystem follow an exponential distribution. The first-order Chapman-Kolmogorov differential equations are developed with the use of a mnemonic rule, and these equations are solved with the Runge-Kutta fourth-order method. The long-run availability, reliability and mean time between failures are computed for various choices of failure and repair rates of subsystems of the system. The findings of the paper are discussed with the plant personnel to adopt and practice suitable maintenance policies/strategies to enhance the performance of the urea synthesis system of the fertilizer plant.
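A minimal sketch of integrating Chapman-Kolmogorov equations with fourth-order Runge-Kutta is shown below for a two-unit birth-death availability model; the failure and repair rates are illustrative, not the plant data used in the paper.

```python
import numpy as np

# Chapman-Kolmogorov equations dP/dt = P @ Q for a two-unit birth-death
# availability model, integrated with classical fourth-order Runge-Kutta.
lam, mu = 0.02, 0.50                  # per-hour failure and repair rates (assumed)
# states: 0 = both units up, 1 = one failed, 2 = both failed (system down)
Q = np.array([[-2 * lam,       2 * lam,   0.0],
              [      mu, -(mu + lam),     lam],
              [     0.0,          mu,     -mu]])

def rk4_step(P, h):
    f = lambda p: p @ Q
    k1 = f(P)
    k2 = f(P + 0.5 * h * k1)
    k3 = f(P + 0.5 * h * k2)
    k4 = f(P + h * k3)
    return P + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

P = np.array([1.0, 0.0, 0.0])         # start with everything working
h, T = 0.1, 1000.0
for _ in range(int(T / h)):
    P = rk4_step(P, h)

availability = P[0] + P[1]            # system up if at least one unit works
print(f"long-run availability ~ {availability:.5f}")
```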
Generalised Category Attack—Improving Histogram-Based Attack on JPEG LSB Embedding
NASA Astrophysics Data System (ADS)
Lee, Kwangsoo; Westfeld, Andreas; Lee, Sangjin
We present a generalised and improved version of the category attack on LSB steganography in JPEG images with a straddled embedding path. It detects low embedding rates more reliably and is also less disturbed by double-compressed images. The proposed methods are evaluated on several thousand images. The results are compared to both recent blind and specific attacks for JPEG embedding. The proposed attack permits a more reliable detection, although it is based on first-order statistics only. Its simple structure makes it very fast.
NASA Astrophysics Data System (ADS)
Kanjilal, Oindrila; Manohar, C. S.
2017-07-01
The study considers the problem of simulation-based time-variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations.
Thorndahl, S; Willems, P
2008-01-01
Failure of urban drainage systems may occur due to surcharge or flooding at specific manholes in the system, or due to overflows from combined sewer systems to receiving waters. To quantify the probability or return period of failure, standard approaches make use of the simulation of design storms or long historical rainfall series in a hydrodynamic model of the urban drainage system. In this paper, an alternative probabilistic method is investigated: the first-order reliability method (FORM). To apply this method, a long rainfall time series was divided into rainstorms (rain events), and each rainstorm was conceptualized as a synthetic rainfall hyetograph with a Gaussian shape parameterized by rainstorm depth, duration and peak intensity. Probability distributions were calibrated for these three parameters and used as the basis of the failure probability estimation, together with a hydrodynamic simulation model to determine the failure conditions for each set of parameters. The method takes into account the uncertainties involved in the rainstorm parameterization. Comparison is made between the failure probability results of the FORM method, the standard method using long-term simulations and alternative methods based on random sampling (Monte Carlo direct sampling and importance sampling). It is concluded that without crucial influence on the modelling accuracy, the FORM is very applicable as an alternative to traditional long-term simulations of urban drainage systems.
A Reliability Estimation in Modeling Watershed Runoff With Uncertainties
NASA Astrophysics Data System (ADS)
Melching, Charles S.; Yen, Ben Chie; Wenzel, Harry G., Jr.
1990-10-01
The reliability of simulation results produced by watershed runoff models is a function of uncertainties in nature, data, model parameters, and model structure. A framework is presented here for using a reliability analysis method (such as first-order second-moment techniques or Monte Carlo simulation) to evaluate the combined effect of the uncertainties on the reliability of output hydrographs from hydrologic models. For a given event the prediction reliability can be expressed in terms of the probability distribution of the estimated hydrologic variable. The peak discharge probability for a watershed in Illinois using the HEC-1 watershed model is given as an example. The study of the reliability of predictions from watershed models provides useful information on the stochastic nature of output from deterministic models subject to uncertainties and identifies the relative contribution of the various uncertainties to unreliability of model predictions.
Development of a nanosatellite de-orbiting system by reliability based design optimization
NASA Astrophysics Data System (ADS)
Nikbay, Melike; Acar, Pınar; Aslan, Alim Rüstem
2015-12-01
This paper presents design approaches to develop a reliable and efficient de-orbiting system for the 3USAT nanosatellite to provide a beneficial orbital decay process at the end of a mission. A de-orbiting system is initially designed by employing the aerodynamic drag augmentation principle where the structural constraints of the overall satellite system and the aerodynamic forces are taken into account. Next, an alternative de-orbiting system is designed with new considerations and further optimized using deterministic and reliability based design techniques. For the multi-objective design, the objectives are chosen to maximize the aerodynamic drag force through the maximization of the Kapton surface area while minimizing the de-orbiting system mass. The constraints are related in a deterministic manner to the required deployment force, the height of the solar panel hole and the deployment angle. The length and the number of layers of the deployable Kapton structure are used as optimization variables. In the second stage of this study, uncertainties related to both manufacturing and operating conditions of the deployable structure in space environment are considered. These uncertainties are then incorporated into the design process by using different probabilistic approaches such as Monte Carlo Simulation, the First-Order Reliability Method and the Second-Order Reliability Method. The reliability based design optimization seeks optimal solutions using the former design objectives and constraints with the inclusion of a reliability index. Finally, the de-orbiting system design alternatives generated by different approaches are investigated and the reliability based optimum design is found to yield the best solution since it significantly improves both system reliability and performance requirements.
Reliability of Hull Girder Ultimate Strength of Steel Ships
NASA Astrophysics Data System (ADS)
Da-wei, Gao; Gui-jie, Shi
2018-03-01
Hull girder ultimate strength is an evaluation index reflecting the true safety margin or structural redundancy of container ships. Especially after the hull girder fracture accident of the MOL COMFORT, an 8,000 TEU class large container ship, on June 17, 2013, much more attention has been paid to the safety of large container ships. In this paper, different methods of calculating hull girder ultimate strength are first discussed and compared. The bending ultimate strength can be analyzed by the nonlinear finite element method (NFEM) and an incremental-iterative method, and the shear ultimate strength can be analyzed by NFEM and simple equations. Then, the probability distributions of hull girder wave loads and still-water loads of container ships are summarized. Finally, the reliability of the hull girder ultimate strength under bending moment and shear forces for three container ships is analyzed using a first-order method. The conclusions can be applied to give guidance for ship design and safety evaluation.
Fatigue Reliability of Gas Turbine Engine Structures
NASA Technical Reports Server (NTRS)
Cruse, Thomas A.; Mahadevan, Sankaran; Tryon, Robert G.
1997-01-01
The results of an investigation are described for fatigue reliability in engine structures. The description consists of two parts. Part 1 is for method development. Part 2 is a specific case study. In Part 1, the essential concepts and practical approaches to damage tolerance design in the gas turbine industry are summarized. These have evolved over the years in response to flight safety certification requirements. The effect of Non-Destructive Evaluation (NDE) methods on these methods is also reviewed. Assessment methods based on probabilistic fracture mechanics, with regard to both crack initiation and crack growth, are outlined. Limit state modeling techniques from structural reliability theory are shown to be appropriate for application to this problem, for both individual failure mode and system-level assessment. In Part 2, the results of a case study for the high pressure turbine of a turboprop engine are described. The response surface approach is used to construct a fatigue performance function. This performance function is used with the First Order Reliability Method (FORM) to determine the probability of failure and the sensitivity of the fatigue life to the engine parameters for the first stage disk rim of the two stage turbine. A hybrid combination of regression and Monte Carlo simulation is used to incorporate time-dependent random variables. System reliability is used to determine the system probability of failure, and the sensitivity of the system fatigue life to the engine parameters of the high pressure turbine. The variation in the primary hot gas and secondary cooling air, the uncertainty of the complex mission loading, and the scatter in the material data are considered.
NASA Astrophysics Data System (ADS)
Teves, André da Costa; Lima, Cícero Ribeiro de; Passaro, Angelo; Silva, Emílio Carlos Nelli
2017-03-01
Electrostatic or capacitive accelerometers are among the highest volume microelectromechanical systems (MEMS) products nowadays. The design of such devices is a complex task, since they depend on many performance requirements, which are often conflicting. Therefore, optimization techniques are often used in the design stage of these MEMS devices. Because of problems with reliability, the technology of MEMS is not yet well established. Thus, in this work, size optimization is combined with the reliability-based design optimization (RBDO) method to improve the performance of accelerometers. To account for uncertainties in the dimensions and material properties of these devices, the first order reliability method is applied to calculate the probabilities involved in the RBDO formulation. Practical examples of bulk-type capacitive accelerometer designs are presented and discussed to evaluate the potential of the implemented RBDO solver.
Integrated Life-Cycle Framework for Maintenance, Monitoring and Reliability of Naval Ship Structures
2012-08-15
... number of times, a fast and accurate method for analyzing the ship hull is required. In order to obtain this required computational speed and accuracy ... Naval Engineers Fleet Maintenance & Modernization Symposium (FMMS 2011) [8] and the Eleventh International Conference on Fast Sea Transportation (FAST) ... probabilistic strength of the ship hull. First, a novel deterministic method for the fast and accurate calculation of the strength of the ship hull is ...
Nakano, Masahiko; Yoshikawa, Takeshi; Hirata, So; Seino, Junji; Nakai, Hiromi
2017-11-05
We have implemented linear-scaling divide-and-conquer (DC)-based higher-order coupled-cluster (CC) and Møller-Plesset perturbation theories (MPPT), as well as their combinations, automatically by means of the tensor contraction engine, which is a computerized symbolic algebra system. The DC-based energy expressions of the standard CC and MPPT methods and of the CC methods augmented with a perturbation correction were proposed for up to high excitation orders [e.g., CCSDTQ, MP4, and CCSD(2)_TQ]. The numerical assessment for hydrogen halide chains, polyene chains, and the first coordination sphere (C1) model of photoactive yellow protein has revealed that the DC-based correlation methods provide reliable correlation energies with significantly less computational cost than that of the conventional implementations. © 2017 Wiley Periodicals, Inc.
Designing optimal universal pulses using second-order, large-scale, non-linear optimization
NASA Astrophysics Data System (ADS)
Anand, Christopher Kumar; Bain, Alex D.; Curtis, Andrew Thomas; Nie, Zhenghua
2012-06-01
Recently, RF pulse design using first-order and quasi-second-order pulses has been actively investigated. We present a full second-order design method capable of incorporating relaxation and inhomogeneity in B0 and B1. Our model is formulated as a generic optimization problem, making it easy to incorporate diverse pulse sequence features. To tame the computational cost, we present a method of calculating second derivatives in at most a constant multiple of the first-derivative calculation time; this is further accelerated by using symbolic solutions of the Bloch equations. We illustrate the relative merits and performance of quasi-Newton and full second-order optimization with a series of examples, showing that even a pulse already optimized using other methods can be visibly improved. To be useful in CPMG experiments, a universal refocusing pulse should be independent of the delay time and insensitive to the relaxation time and RF inhomogeneity. We design such a pulse and show that, using it, we can obtain reliable R2 measurements for offsets within ±γB1. Finally, we compare our optimal refocusing pulse with other published refocusing pulses by performing CPMG experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eckert-Gallup, Aubrey C.; Sallaberry, Cédric J.; Dallman, Ann R.
Environmental contours describing extreme sea states are generated as the input for numerical or physical model simulations as a part of the standard current practice for designing marine structures to survive extreme sea states. These environmental contours are characterized by combinations of significant wave height (Hs) and either energy period (Te) or peak period (Tp) values calculated for a given recurrence interval using a set of data based on hindcast simulations or buoy observations over a sufficient period of record. The use of the inverse first-order reliability method (I-FORM) is a standard design practice for generating environmental contours. This paper develops enhanced methodologies for data analysis prior to the application of the I-FORM, including the use of principal component analysis (PCA) to create an uncorrelated representation of the variables under consideration as well as new distribution and parameter fitting techniques. As a result, these modifications better represent the measured data and, therefore, should contribute to the development of more realistic representations of environmental contours of extreme sea states for determining design loads for marine structures.
Eckert-Gallup, Aubrey C.; Sallaberry, Cédric J.; Dallman, Ann R.; ...
2016-01-06
Environmental contours describing extreme sea states are generated as the input for numerical or physical model simulations as a part of the standard current practice for designing marine structures to survive extreme sea states. These environmental contours are characterized by combinations of significant wave height (Hs) and either energy period (Te) or peak period (Tp) values calculated for a given recurrence interval using a set of data based on hindcast simulations or buoy observations over a sufficient period of record. The use of the inverse first-order reliability method (I-FORM) is a standard design practice for generating environmental contours. This paper develops enhanced methodologies for data analysis prior to the application of the I-FORM, including the use of principal component analysis (PCA) to create an uncorrelated representation of the variables under consideration as well as new distribution and parameter fitting techniques. As a result, these modifications better represent the measured data and, therefore, should contribute to the development of more realistic representations of environmental contours of extreme sea states for determining design loads for marine structures.
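The workflow described above (PCA rotation, marginal and conditional fits, then an I-FORM circle in standard-normal space mapped back to physical variables) can be sketched in a few lines. In the sketch below the synthetic hindcast data, the lognormal marginal for the first principal component, the conditional-normal model for the second, and the 100-year return period are illustrative assumptions, not the fitting choices of the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical hindcast records: significant wave height Hs (m) and energy period Te (s).
hs = stats.lognorm.rvs(s=0.5, scale=2.0, size=5000, random_state=rng)
te = 4.0 + 1.2 * hs + rng.normal(0.0, 0.8, size=hs.size)
data = np.column_stack([hs, te])

# PCA rotation to an (approximately) uncorrelated representation of the two variables.
mean = data.mean(axis=0)
_, _, vt = np.linalg.svd(data - mean, full_matrices=False)
pcs = (data - mean) @ vt.T

# Illustrative fits: lognormal for the first component, conditional normal for the second.
shift = pcs[:, 0].min() - 1e-6
fit1 = stats.lognorm.fit(pcs[:, 0] - shift, floc=0)
lin = stats.linregress(pcs[:, 0], pcs[:, 1])
resid_sd = np.std(pcs[:, 1] - (lin.slope * pcs[:, 0] + lin.intercept))

# I-FORM: a circle of radius beta in standard-normal space, mapped through the fitted
# inverse CDFs, gives the environmental contour for the chosen return period.
n_years, states_per_year = 100, 2922                     # 3-hour sea states per year
beta = stats.norm.ppf(1.0 - 1.0 / (n_years * states_per_year))
theta = np.linspace(0.0, 2.0 * np.pi, 361)
u1, u2 = beta * np.cos(theta), beta * np.sin(theta)

c1 = stats.lognorm.ppf(stats.norm.cdf(u1), *fit1) + shift
c2 = lin.slope * c1 + lin.intercept + u2 * resid_sd
contour_hs_te = np.column_stack([c1, c2]) @ vt + mean    # rotate back to (Hs, Te) space
print("max Hs on the 100-year contour:", round(contour_hs_te[:, 0].max(), 2), "m")
```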
Lababpour, Abdolmajid; Lee, Choul-Gyun
2006-02-01
A first-order derivative spectrophotometric method has been developed for the simultaneous measurement of chlorophyll and astaxanthin concentrations in Haematococcus pluvialis cells. Acetone was selected for the extraction of pigments because of its good sensitivity and low toxicity compared with the other organic solvents tested; the tested solvents included acetone, methanol, hexane, chloroform, n-propanol, and acetonitrile. A first-order derivative spectrophotometric method was used to eliminate the effects of the overlapping of the chlorophyll and astaxanthin peaks. The linear ranges in the 1D evaluation were from 0.50 to 20.0 microg x ml(-1) for chlorophyll and from 1.00 to 12.0 microg x ml(-1) for astaxanthin. The limits of detection of the analytical procedure were found to be 0.35 microg x ml(-1) for chlorophyll and 0.25 microg x ml(-1) for astaxanthin. The relative standard deviations for the determination of 7.0 microg x ml(-1) chlorophyll and 5.0 microg x ml(-1) astaxanthin were 1.2% and 1.1%, respectively. The procedure was found to be simple, rapid, and reliable. This method was successfully applied to the determination of chlorophyll and astaxanthin concentrations in H. pluvialis cells. Good agreement was achieved between the results obtained by the proposed method and the HPLC method.
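A minimal sketch of the first-derivative ("1D") idea is given below: the absorbance spectrum is differentiated and each analyte is read at a wavelength where the other component's derivative passes through zero (the zero-crossing technique). The band positions, band widths, and calibration ranges are invented for the example and are not the measured spectra of the paper.

```python
import numpy as np

wl = np.arange(400.0, 601.0, 1.0)                      # wavelength grid (nm)

def band(center, width, eps):
    """Gaussian absorption band; eps is an absorptivity-like scale (invented)."""
    return eps * np.exp(-0.5 * ((wl - center) / width) ** 2)

def spectrum(chl, asta):
    """Hypothetical overlapping spectra of the two pigments."""
    return chl * band(435.0, 30.0, 0.09) + asta * band(480.0, 40.0, 0.07)

def deriv(absorbance):
    return np.gradient(absorbance, wl)                  # first-derivative (1D) spectrum

# Zero-crossing wavelengths: each analyte is read where the *other* one's derivative
# passes through zero (at that component's band maximum).
i_chl = np.argmin(np.abs(wl - 480.0))    # read chlorophyll here (astaxanthin derivative ~ 0)
i_asta = np.argmin(np.abs(wl - 435.0))   # read astaxanthin here (chlorophyll derivative ~ 0)

# Calibration slopes from single-component standards.
chl_std, asta_std = np.linspace(0.5, 20, 8), np.linspace(1, 12, 8)
s_chl = np.polyfit(chl_std, [deriv(spectrum(c, 0))[i_chl] for c in chl_std], 1)[0]
s_asta = np.polyfit(asta_std, [deriv(spectrum(0, a))[i_asta] for a in asta_std], 1)[0]

# Quantify a mixture from its derivative amplitudes at the two zero-crossing points.
d = deriv(spectrum(7.0, 5.0))
print("chlorophyll ~", round(d[i_chl] / s_chl, 2), "ug/mL")
print("astaxanthin ~", round(d[i_asta] / s_asta, 2), "ug/mL")
```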
Reliable fusion of control and sensing in intelligent machines. Thesis
NASA Technical Reports Server (NTRS)
Mcinroy, John E.
1991-01-01
Although robotics research has produced a wealth of sophisticated control and sensing algorithms, very little research has been aimed at reliably combining these control and sensing strategies so that a specific task can be executed. To improve the reliability of robotic systems, analytic techniques are developed for calculating the probability that a particular combination of control and sensing algorithms will satisfy the required specifications. The probability can then be used to assess the reliability of the design. An entropy formulation is first used to quickly eliminate designs not capable of meeting the specifications. Next, a framework for analyzing reliability based on the first order second moment methods of structural engineering is proposed. To ensure performance over an interval of time, lower bounds on the reliability of meeting a set of quadratic specifications with a Gaussian discrete time invariant control system are derived. A case study analyzing visual positioning in a robotic system is considered. The reliability of meeting timing and positioning specifications in the presence of camera pixel truncation, forward and inverse kinematic errors, and Gaussian joint measurement noise is determined. This information is used to select a visual sensing strategy, a kinematic algorithm, and a discrete compensator capable of accomplishing the desired task. Simulation results using PUMA 560 kinematic and dynamic characteristics are presented.
Accurate Projection Methods for the Incompressible Navier–Stokes Equations
Brown, David L.; Cortez, Ricardo; Minion, Michael L.
2001-04-10
This paper considers the accuracy of projection method approximations to the initial–boundary-value problem for the incompressible Navier–Stokes equations. The issue of how to correctly specify numerical boundary conditions for these methods has been outstanding since the birth of the second-order methodology a decade and a half ago. It has been observed that while the velocity can be reliably computed to second-order accuracy in time and space, the pressure is typically only first-order accurate in the L ∞-norm. Here, we identify the source of this problem in the interplay of the global pressure-update formula with the numerical boundary conditions and present an improved projection algorithm which is fully second-order accurate, as demonstrated by a normal mode analysis and numerical experiments. In addition, a numerical method based on a gauge variable formulation of the incompressible Navier–Stokes equations, which provides another option for obtaining fully second-order convergence in both velocity and pressure, is discussed. The connection between the boundary conditions for projection methods and the gauge method is explained in detail.
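For orientation, a hedged sketch of the kind of pressure-increment projection step and global pressure-update formula discussed above is written out below; the exact operators, time-centering, and boundary treatment vary between specific schemes and may differ from those analyzed in the paper.

```latex
% One common second-order, pressure-increment projection step (sketch only):
\begin{align*}
\frac{\mathbf{u}^{*}-\mathbf{u}^{n}}{\Delta t}
  + \left[(\mathbf{u}\cdot\nabla)\mathbf{u}\right]^{n+1/2}
  &= -\nabla p^{\,n-1/2}
   + \frac{\nu}{2}\,\nabla^{2}\!\left(\mathbf{u}^{*}+\mathbf{u}^{n}\right),\\
\nabla^{2}\phi &= \frac{\nabla\cdot\mathbf{u}^{*}}{\Delta t},\qquad
\mathbf{u}^{n+1}=\mathbf{u}^{*}-\Delta t\,\nabla\phi,\\
p^{\,n+1/2} &= p^{\,n-1/2}+\phi-\frac{\nu\,\Delta t}{2}\,\nabla^{2}\phi .
\end{align*}
```

The final line is the pressure-update formula whose interplay with the numerical boundary conditions determines whether the pressure, and not only the velocity, attains second-order accuracy.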
Total systems design analysis of high performance structures
NASA Technical Reports Server (NTRS)
Verderaime, V.
1993-01-01
Designer-control parameters were identified at interdiscipline interfaces to optimize structural systems performance and downstream development and operations with reliability and least life-cycle cost. Interface tasks and iterations are tracked through a matrix of performance disciplines integration versus manufacturing, verification, and operations interactions for a total system design analysis. Performance integration tasks include shapes, sizes, environments, and materials. Integrity integrating tasks are reliability and recurring structural costs. Significant interface designer-control parameters were noted as shapes, dimensions, probability range factors, and cost. The structural failure concept is presented, and first-order reliability and deterministic methods, benefits, and limitations are discussed. A deterministic reliability technique combining benefits of both is proposed for static structures which is also timely and economically verifiable. Though launch vehicle environments were primarily considered, the system design process is applicable to any surface system using its own unique field environments.
Verification of transport equations in a general purpose commercial CFD code.
NASA Astrophysics Data System (ADS)
Melot, Matthieu; Nennemann, Bernd; Deschênes, Claire
2016-11-01
In this paper, the Verification and Validation methodology is presented. This method aims to increase the reliability and the trust that can be placed in complex CFD simulations. The first step of this methodology, code verification, is presented in greater detail. The CFD transport equations in steady-state, transient, and Arbitrary Lagrangian-Eulerian (ALE, used for transient moving mesh) formulations in Ansys CFX are verified. It is shown that the expected spatial and temporal orders of convergence are achieved for the steady-state and transient formulations. Unfortunately this is not completely the case for the ALE formulation. As for many other commercial and in-house CFD codes, the temporal convergence of the velocity is limited to first order where second order would have been expected.
MODFLOW 2000 Head Uncertainty, a First-Order Second Moment Method
Glasgow, H.S.; Fortney, M.D.; Lee, J.; Graettinger, A.J.; Reeves, H.W.
2003-01-01
A computationally efficient method to estimate the variance and covariance in piezometric head results computed through MODFLOW 2000 using a first-order second moment (FOSM) approach is presented. This methodology employs a first-order Taylor series expansion to combine model sensitivity with uncertainty in geologic data. MODFLOW 2000 is used to calculate both the ground water head and the sensitivity of head to changes in input data. From a limited number of samples, geologic data are extrapolated and their associated uncertainties are computed through a conditional probability calculation. Combining the spatially related sensitivity and input uncertainty produces the variance-covariance matrix, the diagonal of which is used to yield the standard deviation in MODFLOW 2000 head. The variance in piezometric head can be used for calibrating the model, estimating confidence intervals, directing exploration, and evaluating the reliability of a design. A case study illustrates the approach, where aquifer transmissivity is the spatially related uncertain geologic input data. The FOSM methodology is shown to be applicable for calculating output uncertainty for (1) spatially related input and output data, and (2) multiple input parameters (transmissivity and recharge).
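The core FOSM propagation step can be sketched generically: an input covariance matrix is pushed through a first-order Taylor expansion of the model to give the head covariance. In the sketch below the sensitivity matrix is a stand-in for the sensitivities MODFLOW 2000 would compute, and the exponential covariance model and its parameters are assumptions for the example.

```python
import numpy as np

# Assumed log-transmissivity uncertainty at four zones: exponential spatial covariance.
x_zone = np.array([0.0, 100.0, 200.0, 300.0])            # zone centroids (m)
sigma_lnT, corr_len = 0.6, 150.0
dists = np.abs(x_zone[:, None] - x_zone[None, :])
cov_lnT = sigma_lnT**2 * np.exp(-dists / corr_len)        # input covariance C_x

# Sensitivities dh/d(lnT) at three observation points (stand-in for model sensitivities).
J = np.array([[-0.8, -0.3, -0.1, -0.05],
              [-0.4, -0.9, -0.4, -0.10],
              [-0.1, -0.4, -0.9, -0.40]])                 # (n_heads x n_params)

# First-order second moment: C_h = J C_x J^T; head standard deviation is sqrt(diag).
cov_head = J @ cov_lnT @ J.T
std_head = np.sqrt(np.diag(cov_head))
print("head standard deviations (m):", std_head.round(3))
```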
Sensor Data Fusion with Z-Numbers and Its Application in Fault Diagnosis
Jiang, Wen; Xie, Chunhe; Zhuang, Miaoyan; Shou, Yehang; Tang, Yongchuan
2016-01-01
Sensor data fusion technology is widely employed in fault diagnosis. The information in a sensor data fusion system is characterized by not only fuzziness, but also partial reliability. Uncertain information of sensors, including randomness, fuzziness, etc., has been extensively studied recently. However, the reliability of a sensor is often overlooked or cannot be analyzed adequately. A Z-number, Z = (A, B), can represent the fuzziness and the reliability of information simultaneously, where the first component A represents a fuzzy restriction on the values of uncertain variables and the second component B is a measure of the reliability of A. In order to model and process the uncertainties in a sensor data fusion system reasonably, in this paper, a novel method combining the Z-number and Dempster–Shafer (D-S) evidence theory is proposed, where the Z-number is used to model the fuzziness and reliability of the sensor data and the D-S evidence theory is used to fuse the uncertain information of Z-numbers. The main advantages of the proposed method are that it provides a more robust measure of reliability to the sensor data, and the complementary information of multi-sensors reduces the uncertainty of the fault recognition, thus enhancing the reliability of fault detection. PMID:27649193
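The fusion step rests on Dempster's rule of combination; a minimal sketch over a small frame of discernment is shown below. The fault hypotheses, the mass assignments, and the reliability values used for discounting are illustrative; in the paper the reliability would come from the B component of each sensor's Z-number.

```python
from itertools import product

FRAME = frozenset({"F1", "F2", "F3"})   # hypothetical fault hypotheses

def discount(mass, reliability):
    """Shafer discounting: scale masses by the reliability and move the rest to the frame."""
    out = {k: reliability * v for k, v in mass.items()}
    out[FRAME] = out.get(FRAME, 0.0) + (1.0 - reliability)
    return out

def dempster(m1, m2):
    """Dempster's rule: combine two mass functions and renormalize the conflict."""
    combined, conflict = {}, 0.0
    for (a, va), (b, vb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + va * vb
        else:
            conflict += va * vb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two sensors' basic probability assignments, discounted by assumed reliabilities.
m_a = discount({frozenset({"F1"}): 0.7, frozenset({"F1", "F2"}): 0.3}, reliability=0.9)
m_b = discount({frozenset({"F1"}): 0.6, frozenset({"F3"}): 0.4}, reliability=0.7)
print(dempster(m_a, m_b))
```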
Finger-vein and fingerprint recognition based on a feature-level fusion method
NASA Astrophysics Data System (ADS)
Yang, Jinfeng; Hong, Bofeng
2013-07-01
Multimodal biometrics based on finger identification is a hot topic in recent years. In this paper, a novel fingerprint-vein based biometric method is proposed to improve the reliability and accuracy of the finger recognition system. First, second-order steerable filters are used to enhance and extract the minutiae features of the fingerprint (FP) and finger-vein (FV). Second, the texture features of the fingerprint and finger-vein are extracted by a bank of Gabor filters. Third, a new triangle-region fusion method is proposed to integrate all the fingerprint and finger-vein features at the feature level. Thus, the fused features contain both the finger texture information and the minutiae triangular geometry structure. Finally, experimental results on the self-constructed finger-vein and fingerprint databases show that the proposed method is reliable and precise in personal identification.
2014-01-01
Background Premarital sexual behaviors are an important issue for women’s health. The present study was designed to develop and examine the psychometric properties of a scale to identify young women who are at greater risk of premarital sexual behavior. Method This was an exploratory mixed-method investigation conducted in two phases. In the first phase, qualitative methods (focus group discussion and individual interview) were applied to generate items and develop the questionnaire. In the second phase, the psychometric properties (validity and reliability) of the questionnaire were assessed. Results In the first phase an item pool containing 53 statements related to premarital sexual behavior was generated. In the second phase item reduction was applied and the final version of the questionnaire containing 26 items was developed. The psychometric properties of this final version were assessed and the results showed that the instrument has good structure and reliability. The results from exploratory factor analysis indicated a 5-factor solution for the instrument that jointly accounted for 57.4% of the variance observed. The Cronbach’s alpha coefficient for the instrument was found to be 0.87. Conclusion This study provided a valid and reliable scale to identify premarital sexual behavior in young women. Assessment of premarital sexual behavior might help to improve women’s sexual abstinence. PMID:24924696
Towards a Probabilistic Preliminary Design Criterion for Buckling Critical Composite Shells
NASA Technical Reports Server (NTRS)
Arbocz, Johann; Hilburger, Mark W.
2003-01-01
A probability-based analysis method for predicting buckling loads of compression-loaded laminated-composite shells is presented, and its potential as a basis for a new shell-stability design criterion is demonstrated and discussed. In particular, a database containing information about specimen geometry, material properties, and measured initial geometric imperfections for a selected group of laminated-composite cylindrical shells is used to calculate new buckling-load "knockdown factors". These knockdown factors are shown to be substantially improved, and hence much less conservative than the corresponding deterministic knockdown factors that are presently used by industry. The probability integral associated with the analysis is evaluated by using two methods; that is, by using the exact Monte Carlo method and by using an approximate First-Order Second-Moment method. A comparison of the results from these two methods indicates that the First-Order Second-Moment method yields results that are conservative for the shells considered. Furthermore, the results show that the improved, reliability-based knockdown factor presented always yields a safe estimate of the buckling load for the shells examined.
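The two probability evaluations being compared can be illustrated with a toy limit state in which the buckling capacity depends on an imperfection amplitude and a wall thickness. The capacity law, the distributions, and the design load below are invented for the example, and the mean-value FOSM variant shown is only one of several first-order second-moment formulations.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def buckling_load(xi, t):
    """Toy capacity law: perfect-shell load scaled by thickness and an imperfection knockdown.
    Entirely illustrative; not the shell analysis used in the paper."""
    return (t / 1.0e-3) ** 1.5 / (1.0 + 2000.0 * xi)

applied = 0.30                                    # normalized design load
mu = np.array([5.0e-4, 1.0e-3])                   # mean imperfection amplitude (m), wall thickness (m)
sd = np.array([2.5e-4, 2.0e-5])

# "Exact" Monte Carlo estimate of the probability that capacity falls below the applied load.
xi, t = rng.normal(mu[0], sd[0], 200_000), rng.normal(mu[1], sd[1], 200_000)
pf_mc = np.mean(buckling_load(xi, t) < applied)

# Mean-value first-order second-moment estimate of the same probability.
eps = 1e-7
grad = np.array([
    (buckling_load(mu[0] + eps, mu[1]) - buckling_load(mu[0] - eps, mu[1])) / (2 * eps),
    (buckling_load(mu[0], mu[1] + eps) - buckling_load(mu[0], mu[1] - eps)) / (2 * eps),
])
beta = (buckling_load(*mu) - applied) / np.sqrt(np.sum((grad * sd) ** 2))
pf_fosm = stats.norm.cdf(-beta)
print(f"P(buckling below design load): Monte Carlo {pf_mc:.4f}, FOSM {pf_fosm:.4f}")
```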
Development and construct validity of the Classroom Strategies Scale-Observer Form.
Reddy, Linda A; Fabiano, Gregory; Dudek, Christopher M; Hsu, Louis
2013-12-01
Research on progress monitoring has almost exclusively focused on student behavior and not on teacher practices. This article presents the development and validation of a new teacher observational assessment (Classroom Strategies Scale) of classroom instructional and behavioral management practices. The theoretical underpinnings and empirical basis for the instructional and behavioral management scales are presented. The Classroom Strategies Scale (CSS) evidenced overall good reliability estimates including internal consistency, interrater reliability, test-retest reliability, and freedom from item bias on important teacher demographics (age, educational degree, years of teaching experience). Confirmatory factor analyses (CFAs) of CSS data from 317 classrooms were carried out to assess the level of empirical support for (a) a 4 first-order factor theory concerning teachers' instructional practices, and (b) a 4 first-order factor theory concerning teachers' behavior management practice. Several fit indices indicated acceptable fit of the (a) and (b) CFA models to the data, as well as acceptable fit of less parsimonious alternative CFA models that included 1 or 2 second-order factors. Information-theory-based indices generally suggested that the (a) and (b) CFA models fit better than some more parsimonious alternative CFA models that included constraints on relations of first-order factors. Overall, CFA first-order and higher order factor results support the CSS-Observer Total, Composite, and subscales. Suggestions for future measurement development efforts are outlined. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Modified Inverse First Order Reliability Method (I-FORM) for Predicting Extreme Sea States.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eckert-Gallup, Aubrey Celia; Sallaberry, Cedric Jean-Marie; Dallman, Ann Renee
Environmental contours describing extreme sea states are generated as the input for numerical or physical model simulations as a part of the standard current practice for designing marine structures to survive extreme sea states. Such environmental contours are characterized by combinations of significant wave height (Hs) and energy period (Te) values calculated for a given recurrence interval using a set of data based on hindcast simulations or buoy observations over a sufficient period of record. The use of the inverse first-order reliability method (I-FORM) is standard design practice for generating environmental contours. In this paper, the traditional application of the I-FORM to generating environmental contours representing extreme sea states is described in detail and its merits and drawbacks are assessed. The application of additional methods for analyzing sea state data, including the use of principal component analysis (PCA) to create an uncorrelated representation of the data under consideration, is proposed. A reexamination of the components of the I-FORM application to the problem at hand, including the use of new distribution fitting techniques, is shown to contribute to the development of more accurate and reasonable representations of extreme sea states for use in survivability analysis for marine structures. Keywords: Inverse FORM, Principal Component Analysis, Environmental Contours, Extreme Sea State Characterization, Wave Energy Converters
2010-01-01
Background Today, many organizations have adopted some kind of empowerment initiative for at least part of their workforce. Over the last two decades, two complementary perspectives on empowerment at work have emerged: structural and psychological empowerment. Psychological empowerment is a motivational construct manifested in four cognitions: meaning, competence, self-determination and impact. The aim of this article is to examine the construct validity and reliability of the Turkish translation of Spreitzer's psychological empowerment scale in a culturally diverse environment. Methods The scale contains four dimensions over 12 statements. Data were gathered from 260 nurses and 161 physicians. The dimensionality of the scale was evaluated by exploratory factor analyses. To investigate the multidimensional nature of the empowerment construct and the validity of the scale, first- and second-order confirmatory factor analysis was conducted. Furthermore, Cronbach alpha coefficients were assessed to investigate reliability. Results Exploratory factor analyses revealed that four factors in both solutions. The first- and second-order factor analysis indicated an acceptable fit between the data and the theoretical model for nurses and physicians. Cronbach alpha coefficients varied between 0.81-0.94 for both groups, which may be considered satisfactory. Conclusions The analyses indicated that the psychometric properties of the Turkish version of the scale can be considered satisfactory. PMID:20214770
Second-Order Conditioning of Human Causal Learning
ERIC Educational Resources Information Center
Jara, Elvia; Vila, Javier; Maldonado, Antonio
2006-01-01
This article provides the first demonstration of a reliable second-order conditioning (SOC) effect in human causal learning tasks. It demonstrates the human ability to infer relationships between a cause and an effect that were never paired together during training. Experiments 1a and 1b showed a clear and reliable SOC effect, while Experiments 2a…
Cu-Au Alloys Using Monte Carlo Simulations and the BFS Method for Alloys
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo; Good, Brian; Ferrante, John
1996-01-01
Semi-empirical methods have shown considerable promise in aiding the calculation of many properties of materials. Materials used in engineering applications have defects that occur for various reasons, including processing. In this work we present the first application of the BFS method for alloys to describe some aspects of microstructure due to processing for the Cu-Au system (CuAu, CuAu3, and Cu3Au). We use finite-temperature Monte Carlo calculations in order to show the influence of 'heat treatment' on the low-temperature phase of the alloy. Although relatively simple, it has enough features that it could be used as a first test of the reliability of the technique. The main questions to be answered in this work relate to the existence of low-temperature ordered structures for specific concentrations, for example, the ability to distinguish between rather similar phases for equiatomic alloys (CuAu I and CuAu II, the latter characterized by an antiphase boundary separating two identical phases).
Yang, Yuan; Quan, Nannan; Bu, Jingjing; Li, Xueping; Yu, Ningmei
2016-09-26
High-order modulation and demodulation technology can address the frequency requirements of combined wireless energy transmission and data communication. In order to achieve reliable wireless data communication based on high-order modulation technology for visual prostheses, this work proposes a Reed-Solomon (RS) error-correcting code (ECC) circuit built on differential amplitude and phase shift keying (DAPSK) soft demodulation. First, recognizing that the traditional division-based DAPSK soft demodulation algorithm is complex to implement in hardware, an improved phase soft demodulation algorithm that reduces hardware complexity is put forward. Based on this new algorithm, an improved RS soft decoding method is then proposed, in which the Chase algorithm is combined with hard decoding to achieve soft decoding. To meet the requirements of an implantable visual prosthesis, a method for calculating symbol-level reliability from the product of bit reliabilities is derived, which reduces the number of test vectors required by the Chase algorithm. The proposed algorithms are verified by MATLAB simulation and FPGA experiments. In the MATLAB simulation, a biological-channel attenuation model is added to the ECC circuit. The data rate is 8 Mbps in both the MATLAB simulation and the FPGA experiments. MATLAB simulation results show that the improved phase soft demodulation algorithm saves hardware resources without losing bit error rate (BER) performance. Compared with the traditional demodulation circuit, the coding gain of the ECC circuit is improved by about 3 dB at the same BER of [Formula: see text]. The FPGA experiments show that, with the wireless coils 3 cm apart, the system can correct data demodulation errors; the greater the distance, the higher the BER. A bit error rate analyzer was then used to measure the BER of the demodulation circuit and of the RS ECC circuit at different coil distances, and the results show that the RS ECC circuit achieves a BER about an order of magnitude lower than the demodulation circuit at the same coil distance. The RS ECC circuit therefore provides more reliable communication in the system. The improved phase soft demodulation and soft decoding algorithms proposed in this paper enable data communication that is more reliable than other demodulation systems and provide a useful reference for further study of visual prosthesis systems.
Rösch, Petra; Harz, Michaela; Schmitt, Michael; Peschke, Klaus-Dieter; Ronneberger, Olaf; Burkhardt, Hans; Motzkus, Hans-Walter; Lankers, Markus; Hofer, Stefan; Thiele, Hans; Popp, Jürgen
2005-03-01
Microorganisms, such as bacteria, which might be present as contamination inside an industrial food or pharmaceutical clean room process need to be identified on short time scales in order to minimize possible health hazards as well as production downtimes causing financial deficits. Here we describe the first results of single-particle micro-Raman measurements in combination with a classification method, the so-called support vector machine technique, allowing for a fast, reliable, and nondestructive online identification method for single bacteria.
Rösch, Petra; Harz, Michaela; Schmitt, Michael; Peschke, Klaus-Dieter; Ronneberger, Olaf; Burkhardt, Hans; Motzkus, Hans-Walter; Lankers, Markus; Hofer, Stefan; Thiele, Hans; Popp, Jürgen
2005-01-01
Microorganisms, such as bacteria, which might be present as contamination inside an industrial food or pharmaceutical clean room process need to be identified on short time scales in order to minimize possible health hazards as well as production downtimes causing financial deficits. Here we describe the first results of single-particle micro-Raman measurements in combination with a classification method, the so-called support vector machine technique, allowing for a fast, reliable, and nondestructive online identification method for single bacteria. PMID:15746368
Rahmani, Azam; Merghati-Khoei, Effat; Moghadam-Banaem, Lida; Hajizadeh, Ebrahim; Hamdieh, Mostafa; Montazeri, Ali
2014-06-13
Premarital sexual behaviors are an important issue for women's health. The present study was designed to develop and examine the psychometric properties of a scale to identify young women who are at greater risk of premarital sexual behavior. This was an exploratory mixed-method investigation conducted in two phases. In the first phase, qualitative methods (focus group discussion and individual interview) were applied to generate items and develop the questionnaire. In the second phase, the psychometric properties (validity and reliability) of the questionnaire were assessed. In the first phase an item pool containing 53 statements related to premarital sexual behavior was generated. In the second phase item reduction was applied and the final version of the questionnaire containing 26 items was developed. The psychometric properties of this final version were assessed and the results showed that the instrument has good structure and reliability. The results from exploratory factor analysis indicated a 5-factor solution for the instrument that jointly accounted for 57.4% of the variance observed. The Cronbach's alpha coefficient for the instrument was found to be 0.87. This study provided a valid and reliable scale to identify premarital sexual behavior in young women. Assessment of premarital sexual behavior might help to improve women's sexual abstinence.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kanjilal, Oindrila, E-mail: oindrila@civil.iisc.ernet.in; Manohar, C.S., E-mail: manohar@civil.iisc.ernet.in
The study considers the problem of simulation-based time-variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations. - Highlights: • The distance minimizing control forces minimize a bound on the sampling variance. • Establishing Girsanov controls via solution of a two-point boundary value problem. • Girsanov controls via Volterra's series representation for the transfer functions.
NASA Astrophysics Data System (ADS)
Zima, W.; Kolenberg, K.; Briquet, M.; Breger, M.
2004-06-01
We have carried out a Hare-and-Hound test to determine the reliability of the Moment Method (Briquet & Aerts 2003) and the Pixel-by-Pixel Method (Mantegazza 2000) for the identification of pulsation modes in Delta Scuti stars. For this purpose we calculated synthetic line profiles, exhibiting six pulsation modes of low degree and with input parameters initially unknown to us. The aim was to test and increase the quality of the mode identification by applying both methods independently and by using a combined technique. Our results show that, whereas the azimuthal order m and its sign can be fixed by both methods, the degree l is not determined unambiguously. Both identification methods show a better reliability if multiple modes are fitted simultaneously. In particular, the inclination angle is better determined. We have to emphasize that the outcome of this test is only meaningful for stars having pulsational velocities below 0.2 vsini. This is the first part of a series of articles, in which we will test these spectroscopic identification methods.
Method of implementing digital phase-locked loops
NASA Technical Reports Server (NTRS)
Stephens, Scott A. (Inventor); Thomas, Jess Brooks, Jr. (Inventor)
1993-01-01
In a new formulation for digital phase-locked loops, loop-filter constants are determined from loop roots that can each be selectively placed in the s-plane on the basis of a new set of parameters, each with simple and direct physical meaning in terms of loop noise bandwidth, root-specific decay rate, or root-specific damping. Loops of first to fourth order are treated in the continuous-update approximation (BLT → 0) and in a discrete-update formulation with arbitrary BLT. Deficiencies of the continuous-update approximation in large-BLT applications are avoided in the new discrete-update formulation. A new method for direct, transient-free acquisition with third- and fourth-order loops can improve the versatility and reliability of acquisition with such loops.
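For context, a minimal sketch of the conventional parametrization that the root-placement formulation generalizes is shown below: a second-order digital loop whose per-update gains are derived from a specified loop noise bandwidth and damping ratio in the continuous-update (small BLT) approximation. The gain formulas, noise level, and signal are generic textbook-style choices, not the constants or the acquisition method of the paper.

```python
import numpy as np

def second_order_loop_gains(bn, zeta, dt):
    """Per-update gains for a second-order digital phase-locked loop in the
    continuous-update approximation (B_L * T << 1).
    bn: loop noise bandwidth (Hz); zeta: damping ratio; dt: update interval (s)."""
    wn = 2.0 * bn / (zeta + 1.0 / (4.0 * zeta))    # natural frequency implied by B_L and damping
    return 2.0 * zeta * wn * dt, (wn * dt) ** 2    # (proportional gain, integrator gain)

rng = np.random.default_rng(0)
dt, bn, zeta = 0.01, 5.0, 0.707
k1, k2 = second_order_loop_gains(bn, zeta, dt)

true_phase, true_rate = 0.0, 3.0                   # rad, rad/s
est_phase, integrator = 0.0, 0.0
for _ in range(1000):
    true_phase += true_rate * dt
    measured = true_phase + rng.normal(0.0, 0.05)  # noisy phase measurement
    err = measured - est_phase
    integrator += k2 * err                          # accumulates the rate (ramp) component
    est_phase += k1 * err + integrator
print("estimated rate (rad/s):", round(integrator / dt, 2))
```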
An improved TV caption image binarization method
NASA Astrophysics Data System (ADS)
Jiang, Mengdi; Cheng, Jianghua; Chen, Minghui; Ku, Xishu
2018-04-01
TV video caption image binarization has an important influence on semantic video retrieval. An improved binarization method for caption images is proposed in this paper. To overcome the ghosting and broken-stroke problems of the traditional Niblack method, the proposed method considers both the global information and the local information of the image. First, traditional Otsu and Niblack thresholds are used for initial binarization. Second, we introduce the difference between the maximum and minimum values in the local window as a third threshold to generate two images. Finally, the two images are combined with a logical AND operation, and good results were obtained. The experimental results prove that the proposed method is reliable and effective.
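A minimal sketch of this kind of combination is given below: a global Otsu threshold, a local Niblack threshold, and a local max-min contrast threshold joined by a logical AND. The window size, Niblack k, contrast threshold, and the assumption that caption text is brighter than its background are all choices made for the example, and the exact way the paper combines its intermediate images may differ.

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(img):
    """Global Otsu threshold on an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def binarize_caption(img, win=25, k=-0.2, min_contrast=30):
    img_f = img.astype(float)
    mean = ndimage.uniform_filter(img_f, win)
    sq_mean = ndimage.uniform_filter(img_f ** 2, win)
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
    niblack = img_f > mean + k * std                     # local Niblack decision
    otsu = img_f > otsu_threshold(img)                   # global Otsu decision
    local_range = ndimage.maximum_filter(img_f, win) - ndimage.minimum_filter(img_f, win)
    contrast = local_range > min_contrast                # third, max-min contrast threshold
    return niblack & otsu & contrast                     # logical AND of the masks

# Usage on a synthetic bright-text-on-dark-background patch.
rng = np.random.default_rng(0)
img = rng.normal(60, 10, (64, 64)).clip(0, 255).astype(np.uint8)
img[20:30, 10:50] = 220                                  # a "caption stroke"
print("foreground pixels:", int(binarize_caption(img).sum()))
```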
Specimen preparation for NanoSIMS analysis of biological materials
NASA Astrophysics Data System (ADS)
Grovenor, C. R. M.; Smart, K. E.; Kilburn, M. R.; Shore, B.; Dilworth, J. R.; Martin, B.; Hawes, C.; Rickaby, R. E. M.
2006-07-01
In order to achieve reliable and reproducible analysis of biological materials by SIMS, it is critical both that the chosen specimen preparation method does not substantially modify the in vivo chemistry that is the focus of the study and that any chemical information obtained can be calibrated accurately by selection of appropriate standards. In Oxford, we have been working with our new Cameca NanoSIMS50 on two very distinct classes of biological materials: the first, human hair, where the sample preparation problems are relatively undemanding but calibration for trace metal analysis is a critical issue; and the second, marine coccoliths and hyperaccumulator plants, where reliable specimen preparation by rapid freezing and controlled drying to preserve the distribution of diffusible species is the first and most demanding requirement, but worthwhile experiments on tracking key elements can still be undertaken even when it is clear that some redistribution of the most diffusible ions has occurred.
Reliability prediction of large fuel cell stack based on structure stress analysis
NASA Astrophysics Data System (ADS)
Liu, L. F.; Liu, B.; Wu, C. W.
2017-09-01
The aim of this paper is to improve the reliability of a proton exchange membrane fuel cell (PEMFC) stack by designing the clamping force and the thickness difference between the membrane electrode assembly (MEA) and the gasket. The stack reliability is directly determined by the component reliability, which is affected by the material properties and contact stress. The component contact stress is a random variable because it is usually affected by many uncertain factors in the production and clamping processes. We have investigated the influence of the parameter variation coefficients on the probability distribution of contact stress using the equivalent stiffness model and the first-order second moment method. The optimal contact stress that keeps the component at the highest reliability level is obtained by the stress-strength interference model. To obtain the optimal contact stress between the contact components, the thickness of the component and the stack clamping force are optimally designed. Finally, a detailed description is given of how to design the MEA and gasket dimensions to obtain the highest stack reliability.
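Under the common assumption that both the contact stress and the component strength are normally distributed, the stress-strength interference step reduces to evaluating a reliability index. The sketch below uses that normal-normal form; the means, standard deviations, and clamping-force cases are illustrative numbers, not values from the paper, and only the over-stress failure mode is represented.

```python
from math import sqrt
from statistics import NormalDist

def interference_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    """R = P(strength > stress) for independent normal strength and stress."""
    beta = (mu_strength - mu_stress) / sqrt(sd_strength**2 + sd_stress**2)
    return NormalDist().cdf(beta)

# Illustrative MEA contact-stress statistics (MPa) for three clamping forces.
cases = {"8 kN": (1.1, 0.12), "10 kN": (1.4, 0.15), "12 kN": (1.8, 0.20)}
for clamp_force, (mu_s, sd_s) in cases.items():
    rel = interference_reliability(mu_strength=2.2, sd_strength=0.18,
                                   mu_stress=mu_s, sd_stress=sd_s)
    print(f"clamping force {clamp_force}: component reliability ~ {rel:.4f}")
```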
NASA Astrophysics Data System (ADS)
Marro, Massimo; Salizzoni, Pietro; Soulhac, Lionel; Cassiani, Massimo
2018-06-01
We analyze the reliability of the Lagrangian stochastic micromixing method in predicting higher-order statistics of the passive scalar concentration induced by an elevated source (of varying diameter) placed in a turbulent boundary layer. To that purpose we analyze two different modelling approaches by testing their results against the wind-tunnel measurements discussed in Part I (Nironi et al., Boundary-Layer Meteorology, 2015, Vol. 156, 415-446). The first is a probability density function (PDF) micromixing model that simulates the effects of the molecular diffusivity on the concentration fluctuations by taking into account the background particles. The second is a new model, named VPΓ, conceived in order to minimize the computational costs. This is based on the volumetric particle approach providing estimates of the first two concentration moments with no need for the simulation of the background particles. In this second approach, higher-order moments are computed based on the estimates of these two moments and under the assumption that the concentration PDF is a Gamma distribution. The comparisons concern the spatial distribution of the first four moments of the concentration and the evolution of the PDF along the plume centreline. The novelty of this work is twofold: (i) we perform a systematic comparison of the results of micro-mixing Lagrangian models against experiments providing profiles of the first four moments of the concentration within an inhomogeneous and anisotropic turbulent flow, and (ii) we show the reliability of the VPΓ model as an operational tool for the prediction of the PDF of the concentration.
NASA Astrophysics Data System (ADS)
Marro, Massimo; Salizzoni, Pietro; Soulhac, Lionel; Cassiani, Massimo
2018-01-01
We analyze the reliability of the Lagrangian stochastic micromixing method in predicting higher-order statistics of the passive scalar concentration induced by an elevated source (of varying diameter) placed in a turbulent boundary layer. To that purpose we analyze two different modelling approaches by testing their results against the wind-tunnel measurements discussed in Part I (Nironi et al., Boundary-Layer Meteorology, 2015, Vol. 156, 415-446). The first is a probability density function (PDF) micromixing model that simulates the effects of the molecular diffusivity on the concentration fluctuations by taking into account the background particles. The second is a new model, named VPΓ, conceived in order to minimize the computational costs. This is based on the volumetric particle approach providing estimates of the first two concentration moments with no need for the simulation of the background particles. In this second approach, higher-order moments are computed based on the estimates of these two moments and under the assumption that the concentration PDF is a Gamma distribution. The comparisons concern the spatial distribution of the first four moments of the concentration and the evolution of the PDF along the plume centreline. The novelty of this work is twofold: (i) we perform a systematic comparison of the results of micro-mixing Lagrangian models against experiments providing profiles of the first four moments of the concentration within an inhomogeneous and anisotropic turbulent flow, and (ii) we show the reliability of the VPΓ model as an operational tool for the prediction of the PDF of the concentration.
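The VPΓ closure can be made concrete in a few lines: given the first two concentration moments, a Gamma distribution is matched and the higher-order moments follow from its shape and scale parameters. The mean and variance values in the sketch are placeholders, not measurements from the wind-tunnel data.

```python
import numpy as np

def gamma_moments_from_mean_var(mean, var, orders=(3, 4)):
    """Raw moments of a Gamma distribution matched to a given mean and variance.
    For shape k and scale theta, E[C^n] = theta^n * k (k+1) ... (k+n-1)."""
    k = mean**2 / var            # shape parameter
    theta = var / mean           # scale parameter
    moments = {n: theta**n * np.prod([k + j for j in range(n)]) for n in orders}
    return k, theta, moments

# Placeholder plume-centreline values for the mean concentration and its variance.
mean_c, var_c = 0.8, 0.5
k, theta, m = gamma_moments_from_mean_var(mean_c, var_c)
print(f"shape k = {k:.3f}, scale theta = {theta:.3f}")
print("third raw moment:", round(m[3], 4), " fourth raw moment:", round(m[4], 4))
```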
Courses of Action to Optimize Heavy Bearings Cages
NASA Astrophysics Data System (ADS)
Szekely, V. G.
2016-11-01
The global expansion of the industrial, economic, and technological context determines the need to develop products, technologies, processes and methods which ensure increased performance, lower manufacturing costs and synchronization of the main costs relative to the elementary values which correspond to utilization. The development trend of the heavy bearing industry and the wide use of bearings determine the necessity of choosing the most appropriate material for a given application in order to meet the cumulative requirements of durability, reliability, strength, etc. Evaluation of commonly known or new materials represents a fundamental criterion in order to choose the materials based on cost, machinability and the technological process. In order to ensure the most effective basis for the decision regarding the heavy bearing cage, in the first stage the functions of the product are established and in a further step a comparative analysis of the materials is made in order to establish the best materials which satisfy the product functions. The decision for selecting the most appropriate material is based largely on the overlapping of the material costs and the manufacturing process during which the half-finished material becomes a finished product. The study is oriented towards a creative approach, especially towards innovation and reengineering by using specific techniques and methods applied in inventics. The main target is to find new efficient and reliable constructive and/or technological solutions which are consistent with the concept of sustainable development.
Subject-level reliability analysis of fast fMRI with application to epilepsy.
Hao, Yongfu; Khoo, Hui Ming; von Ellenrieder, Nicolas; Gotman, Jean
2017-07-01
Recent studies have applied the new magnetic resonance encephalography (MREG) sequence to the study of interictal epileptic discharges (IEDs) in the electroencephalogram (EEG) of epileptic patients. However, there are no criteria to quantitatively evaluate different processing methods, to properly use the new sequence. We evaluated different processing steps of this new sequence under the common generalized linear model (GLM) framework by assessing the reliability of results. A bootstrap sampling technique was first used to generate multiple replicated data sets; a GLM with different processing steps was then applied to obtain activation maps, and the reliability of these maps was assessed. We applied our analysis in an event-related GLM related to IEDs. A higher reliability was achieved by using a GLM with head motion confound regressor with 24 components rather than the usual 6, with an autoregressive model of order 5 and with a canonical hemodynamic response function (HRF) rather than variable latency or patient-specific HRFs. Comparison of activation with IED field also favored the canonical HRF, consistent with the reliability analysis. The reliability analysis helps to optimize the processing methods for this fast fMRI sequence, in a context in which we do not know the ground truth of activation areas. Magn Reson Med 78:370-382, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Application of a swarm-based approach for phase unwrapping
NASA Astrophysics Data System (ADS)
da S. Maciel, Lucas; Albertazzi G., Armando, Jr.
2014-07-01
An algorithm for phase unwrapping based on swarm intelligence is proposed. The novel approach is based on the emergent behavior of swarms. This behavior is the result of the interactions between independent agents following a simple set of rules and is regarded as fast, flexible and robust. The rules here were designed with two purposes. Firstly, the collective behavior must result in a reliable map of the unwrapped phase. The unwrapping reliability was evaluated by each agent during run-time, based on the quality of the neighboring pixels. In addition, the rule set must result in a behavior that focuses on wrapped regions. Stigmergy and communication rules were implemented in order to enable each agent to seek less worked areas of the image. The agents were modeled as Finite-State Machines. Based on the availability of unwrappable pixels, each agent assumed a different state in order to better adapt itself to the surroundings. The implemented rule set was able to fulfill the requirements on reliability and focused unwrapping. The unwrapped phase map was comparable to those from established methods as the agents were able to reliably evaluate each pixel quality. Also, the unwrapping behavior, being observed in real time, was able to focus on workable areas as the agents communicated in order to find less traveled regions. The results were very positive for such a new approach to the phase unwrapping problem. Finally, the authors see great potential for future developments concerning the flexibility, robustness and processing times of the swarm-based algorithm.
Bayesian methods in reliability
NASA Astrophysics Data System (ADS)
Sander, P.; Badoux, R.
1991-11-01
The present proceedings from a course on Bayesian methods in reliability encompass Bayesian statistical methods and their computational implementation, models for analyzing censored data from nonrepairable systems, the traits of repairable systems and growth models, the use of expert judgment, and a review of the problem of forecasting software reliability. Specific issues addressed include the use of Bayesian methods to estimate the leak rate of a gas pipeline, approximate analyses under great prior uncertainty, reliability estimation techniques, and a nonhomogeneous Poisson process. Also addressed are the calibration sets and seed variables of expert judgment systems for risk assessment, experimental illustrations of the use of expert judgment for reliability testing, and analyses of the predictive quality of software-reliability growth models such as the Weibull order statistics.
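As a small concrete instance of the kind of Bayesian machinery surveyed in the course (not an example taken from it), a conjugate Gamma-Poisson update for a failure or leak rate can be written in a few lines; the prior parameters and the observed counts below are assumed numbers.

```python
from scipy import stats

# Prior belief about a failure (or leak) rate lambda, in events per year: Gamma(alpha, beta).
alpha_prior, beta_prior = 2.0, 4.0        # prior mean 0.5/yr, fairly diffuse (assumed values)

# Observed data: n failures over t years of exposure.
n_failures, exposure_years = 3, 10.0

# Conjugate update for a Poisson process: posterior is Gamma(alpha + n, beta + t).
alpha_post = alpha_prior + n_failures
beta_post = beta_prior + exposure_years

posterior = stats.gamma(a=alpha_post, scale=1.0 / beta_post)
print("posterior mean rate:", round(posterior.mean(), 3), "per year")
print("95% credible interval:", [round(x, 3) for x in posterior.ppf([0.025, 0.975])])
```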
Determination of mercury evasion in a contaminated headwater stream.
Maprani, Antu C; Al, Tom A; Macquarrie, Kerry T; Dalziel, John A; Shaw, Sean A; Yeats, Phillip A
2005-03-15
Evasion from first- and second-order streams in a watershed may be a significant factor in the atmospheric recycling of volatile pollutants such as mercury; however, methods developed for the determination of Hg evasion rates from larger water bodies are not expected to provide satisfactory results in highly turbulent and morphologically complex first- and second-order streams. A new method for determining the Hg evasion rates from these streams, involving laboratory gas-indexing experiments and field tracer tests, was developed in this study to estimate the evasion rate of Hg from Gossan Creek, a first-order stream in the Upsalquitch River watershed in northern New Brunswick, Canada. Gossan Creek receives Hg-contaminated groundwater discharge from a gold mine tailings pile. Laboratory gas-indexing experiments provided the ratio of gas-exchange coefficients for zero-valent Hg to propane (tracer gas) of 0.81 ± 0.16, suggesting that the evasion mechanism in highly turbulent systems can be described by the surface renewal model with an additional component of enhanced gas evasion probably related to the formation of bubbles. Deliberate field tracer tests with propane and chloride tracers were found to be a reliable and practical method for the determination of gas-exchange coefficients for small streams. Estimation of Hg evasion from the first 1 km of Gossan Creek indicates that about 6.4 kg of Hg per year is entering the atmosphere, which is a significant fraction of the regional sources of Hg to the atmosphere.
Issues in benchmarking human reliability analysis methods : a literature review.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lois, Erasmia; Forester, John Alan; Tran, Tuan Q.
There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessment (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.
Issues in Benchmarking Human Reliability Analysis Methods: A Literature Review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronald L. Boring; Stacey M. L. Hendrickson; John A. Forester
There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessments (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study comparing and evaluating HRA methods in assessing operator performance in simulator experiments is currently underway. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.
Caffeine's Influence on Nicotine's Effects in Nonsmokers
Blank, Melissa D.; Kleykamp, Bethea A.; Jennings, Janine M.; Eissenberg, Thomas
2011-01-01
Objective To determine if nicotine's effects are influenced by caffeine in nonsmoking, moderate-caffeine consuming individuals (N=20). Methods The first 3 sessions included one of 3 randomly ordered, double-blind caffeine doses (0, 75, or 150 mg, oral [po]) and 2 single-blind nicotine gum doses (2 and 4 mg) in ascending order. The fourth session (single blind) repeated the 0 mg caffeine condition. Results Nicotine increased heart rate and subjective ratings indicative of aversive effects, and decreased reaction times. These effects were independent of caffeine dose and reliable across sessions. Conclusions In nonsmokers, nicotine effects are not influenced by moderate caffeine doses. PMID:17555378
A Study on Micropipetting Detection Technology of Automatic Enzyme Immunoassay Analyzer.
Shang, Zhiwu; Zhou, Xiangping; Li, Cheng; Tsai, Sang-Bing
2018-04-10
In order to improve the accuracy and reliability of micropipetting, a method of micropipette detection and calibration was proposed that combines dynamic pressure monitoring during the pipetting process with image-based quantitative identification of the pipetted volume. First, a normalized pressure model for the pipetting process was established from the kinematic model of the pipetting operation, and the pressure model was corrected experimentally. The pressure and its first derivative are monitored in real time during pipetting, a segmented double-threshold method is used as the fault-evaluation criterion, and the pressure-sensor data are processed by Kalman filtering, which improves the accuracy of fault diagnosis. When a fault occurs, an image of the pipette tip is collected by the camera, the boundary of the liquid region is extracted by the background-contrast method, and the liquid volume in the tip is obtained from the geometric characteristics of the pipette tip. The pipetting deviation is fed back to the automatic pipetting module and corrected. Titration tests show that the segmented pipetting kinematic model combined with double-threshold pressure monitoring can effectively judge and classify pipetting faults in real time. The closed-loop adjustment of the pipetted volume effectively improves the accuracy and reliability of the pipetting system.
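One building block named above, Kalman filtering of the pressure-sensor signal before the derivative and double-threshold checks, can be sketched in a few lines. The random-walk pressure model, the noise values, and the synthetic aspiration trace are assumptions for the example; a production implementation would likely carry the pressure derivative in the state as well.

```python
import numpy as np

def kalman_smooth_pressure(measurements, q=0.05, r=4.0):
    """Scalar Kalman filter: state = pressure, process noise q, measurement noise r."""
    x, p = measurements[0], 1.0
    filtered = []
    for z in measurements:
        p = p + q                      # predict (random-walk pressure model)
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # update with the new sensor reading
        p = (1.0 - k) * p
        filtered.append(x)
    return np.array(filtered)

# Synthetic aspiration pressure trace (kPa) with sensor noise; a real trace would come
# from the pipetting module's pressure sensor.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
true_p = -8.0 * (1.0 - np.exp(-6.0 * t))             # pressure drop during aspiration
noisy = true_p + rng.normal(0.0, 2.0, t.size)
smooth = kalman_smooth_pressure(noisy)
dpdt = np.gradient(smooth, t)                         # first derivative used by the threshold check
print("final pressure estimate (kPa):", round(smooth[-1], 2))
print("peak |dp/dt| (kPa/s):", round(float(np.max(np.abs(dpdt))), 1))
```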
A study on reliability of power customer in distribution network
NASA Astrophysics Data System (ADS)
Liu, Liyuan; Ouyang, Sen; Chen, Danling; Ma, Shaohua; Wang, Xin
2017-05-01
The existing power supply reliability index system is oriented to the power system side without considering the actual availability of electricity on the customer side. In addition, it is unable to reflect outages or customer equipment shutdowns caused by instantaneous interruptions and power quality problems. This paper therefore makes a systematic study of power customer reliability. By comparison with power supply reliability, the reliability of the power customer is defined and its evaluation requirements are extracted. An index system, consisting of seven customer indices and two contrast indices, is designed to describe the reliability of the power customer in terms of continuity and availability. In order to comprehensively and quantitatively evaluate the reliability of power customers in distribution networks, a reliability evaluation method is proposed based on an improved entropy method and the punishment weighting principle. Practical application has shown that the reliability index system and evaluation method for power customers are reasonable and effective.
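The entropy-weighting step can be sketched generically: indicator values for several customers are normalized, the information entropy of each indicator is computed, and indicators with more dispersion (lower entropy) receive larger weights. The indicator matrix below is invented, and the punishment-weighting refinement of the paper is not reproduced.

```python
import numpy as np

# Rows: customers; columns: reliability indicators (e.g. interruption frequency,
# average outage duration, voltage-sag count) -- values invented for the example.
X = np.array([[1.2, 35.0, 4.0],
              [0.6, 12.0, 9.0],
              [2.1, 60.0, 2.0],
              [0.9, 20.0, 6.0]])

# Min-max normalize each indicator as a "cost" criterion (smaller is better).
norm = (X.max(axis=0) - X) / (X.max(axis=0) - X.min(axis=0) + 1e-12)

# Information entropy of each indicator over the customers.
p = norm / (norm.sum(axis=0) + 1e-12)
n = X.shape[0]
entropy = -np.sum(p * np.log(p + 1e-12), axis=0) / np.log(n)

# Entropy weights: the less uniform an indicator, the more weight it carries.
weights = (1.0 - entropy) / np.sum(1.0 - entropy)
scores = norm @ weights
print("indicator weights:", weights.round(3))
print("customer reliability scores:", scores.round(3))
```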
NASA Astrophysics Data System (ADS)
Pei, Zongrui; Eisenbach, Markus; Stocks, G. Malcolm
Simulating order-disorder phase transitions in magnetic materials requires the accurate treatment of both the atomic and magnetic interactions, which span a vast configuration space. Using FeCo as a prototype system, we demonstrate that this can be addressed by combining the Locally Self-consistent Multiple Scattering (LSMS) method with the Wang-Landau (WL) Monte-Carlo algorithm. Fe-Co based materials are interesting magnetic materials but a reliable phase diagram of the binary Fe-Co system is still difficult to obtain. Using the combined WL-LSMS method we clarify the existence of the disordered A2 phase and predict the Curie temperature between it and the ordered B2 phase. The WL-LSMS method is readily applicable to the study of second-order phase transitions in other binary and multi-component alloys, thereby providing a means to the direct simulation of order-disorder phase transitions in complex alloys without need of intervening classical model Hamiltonians. We also demonstrate the capability of our method to guide the design of new magnetic materials. This research was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Science and Engineering Division and it used Oak Ridge Leadership Computing Facility resources at Oak Ridge National Laboratory.
LD-SPatt: large deviations statistics for patterns on Markov chains.
Nuel, G
2004-01-01
Statistics on Markov chains are widely used for the study of patterns in biological sequences. Statistics on these models can be computed through several approaches. Central limit theorem (CLT) methods producing Gaussian approximations are among the most popular. Unfortunately, in order to find a pattern of interest, these methods have to deal with tail-distribution events where the CLT approximation is especially poor. In this paper, we propose a new approach based on large deviations theory to assess pattern statistics. We first recall theoretical results for empirical mean (level 1) as well as empirical distribution (level 2) large deviations on Markov chains. Then, we present the applications of these results, focusing on numerical issues. LD-SPatt is the name of the GPL software implementing these algorithms. We compare this approach to several existing ones in terms of complexity and reliability and show that the large deviations are more reliable than the Gaussian approximations in absolute values as well as in terms of ranking and are at least as reliable as compound Poisson approximations. We then finally discuss some further possible improvements and applications of this new method.
Validity and Reliability of the Academic Resilience Scale in Turkish High School
ERIC Educational Resources Information Center
Kapikiran, Sahin
2012-01-01
The present study aims to determine the validity and reliability of the academic resilience scale in Turkish high schools. The participants of the study include 378 high school students in total (192 female and 186 male). A set of analyses were conducted in order to determine the validity and reliability of the study. Firstly, both exploratory…
Validation of the Spanish Short Self-Regulation Questionnaire (SSSRQ) through Rasch Analysis.
Garzón Umerenkova, Angélica; de la Fuente Arias, Jesús; Martínez-Vicente, José Manuel; Zapata Sevillano, Lucía; Pichardo, Mari Carmen; García-Berbén, Ana Belén
2017-01-01
Background: The aim of the study was to psychometrically characterize the Spanish Short Self-Regulation Questionnaire (SSSRQ) through Rasch analysis. Materials and Methods: 831 Spanish university students (262 men), between 17 and 39 years of age and ranging from the first to the fifth year of studies, completed the SSSRQ questionnaire. Confirmatory factor analysis (CFA) was carried out in order to establish structural adequacy. Afterward, by means of the Rasch model, a study of each subscale was conducted to test for dimensionality, fit of the sample questions, functionality of the response categories, reliability and estimation of Differential Item Functioning by gender and course. Results: The four subscales comply with the unidimensionality criteria, the questions are in line with the model, the response categories operate properly and the reliability of the sample is acceptable. Nonetheless, the test could benefit from the inclusion of additional items of both high and low difficulty in order to increase construct validity, discrimination and reliability for the respondents. Several items with differences by gender and course were also identified. Discussion: The results evidence the need for and adequacy of this complementary psychometric analysis strategy, in relation to the CFA, to enhance the instrument.
Validation of the Spanish Short Self-Regulation Questionnaire (SSSRQ) through Rasch Analysis
Garzón Umerenkova, Angélica; de la Fuente Arias, Jesús; Martínez-Vicente, José Manuel; Zapata Sevillano, Lucía; Pichardo, Mari Carmen; García-Berbén, Ana Belén
2017-01-01
Background: The aim of the study was to psychometrically characterize the Spanish Short Self-Regulation Questionnaire (SSSRQ) through Rasch analysis. Materials and Methods: 831 Spanish university students (262 men), between 17 and 39 years of age and ranging from the first to the fifth year of studies, completed the SSSRQ questionnaire. Confirmatory factor analysis (CFA) was carried out in order to establish structural adequacy. Afterward, by means of the Rasch model, a study of each subscale was conducted to test for dimensionality, fit of the sample questions, functionality of the response categories, reliability and estimation of Differential Item Functioning by gender and course. Results: The four subscales comply with the unidimensionality criteria, the questions are in line with the model, the response categories operate properly and the reliability of the sample is acceptable. Nonetheless, the test could benefit from the inclusion of additional items of both high and low difficulty in order to increase construct validity, discrimination and reliability for the respondents. Several items with differences by gender and course were also identified. Discussion: The results evidence the need for and adequacy of this complementary psychometric analysis strategy, in relation to the CFA, to enhance the instrument. PMID:28298898
NASA Astrophysics Data System (ADS)
Mohamad, Daud; Shaharani, Saidatull Akma; Kamis, Nor Hanimah
2017-08-01
The concept of Z-number, introduced by Zadeh in 2010, has captured the attention of many due to its numerous applications in the area of Computing with Words (CWW). A Z-number is an ordered pair of fuzzy numbers, (A, R), where A plays the role of a fuzzy restriction on a real-valued uncertain variable and R is a measure of the reliability of the first component. Besides its theoretical development, Z-numbers have been successfully applied to decision making problems under uncertain environments. In any decision making evaluation using Z-numbers, ideally the final outcome of the calculation should also be a Z-number. A question then arises: how do we order Z-numbers so that the preference of the alternatives can be ranked appropriately? In this paper, we propose a method of ordering Z-numbers via the transformation of the Z-numbers to fuzzy numbers. The Z-numbers are then ranked using a fuzzy number ranking method. The proposed method is tested on several combinations of Z-numbers to investigate its effectiveness. The effect of different values of A and R on the ordering of Z-numbers is analyzed and discussed.
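As an illustration of the kind of transformation the abstract describes, the following Python sketch converts a Z-number to an ordinary fuzzy number and ranks alternatives by centroid. The conversion rule (defuzzify R into a weight alpha and scale A by its square root), the triangular representation and the sample values are assumptions for illustration; the paper's exact transformation and ranking method may differ.

```python
import numpy as np

def centroid_triangular(a, b, c):
    """Centroid (defuzzified value) of a triangular fuzzy number (a, b, c)."""
    return (a + b + c) / 3.0

def z_to_fuzzy(A, R):
    """Convert a Z-number Z = (A, R) to an ordinary fuzzy number.

    Common two-step conversion (an assumption here): defuzzify the reliability R
    into a weight alpha, then scale the support of A by sqrt(alpha).
    """
    alpha = centroid_triangular(*R)                     # crisp reliability weight
    return tuple(np.sqrt(alpha) * x for x in A)

def rank_z_numbers(z_numbers):
    """Return alternative indices ranked best-first by centroid of the converted fuzzy numbers."""
    scores = [centroid_triangular(*z_to_fuzzy(A, R)) for A, R in z_numbers]
    return sorted(range(len(z_numbers)), key=lambda i: scores[i], reverse=True)

# Two hypothetical alternatives: restriction A and reliability R on [0, 1]
zs = [((0.4, 0.5, 0.6), (0.6, 0.7, 0.8)),
      ((0.3, 0.5, 0.7), (0.8, 0.9, 1.0))]
print(rank_z_numbers(zs))   # indices of alternatives, best first
```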
Harz, M; Rösch, P; Peschke, K-D; Ronneberger, O; Burkhardt, H; Popp, J
2005-11-01
Microbial contamination is not only a medical problem, but also plays a large role in pharmaceutical clean room production and food processing technology. Therefore, many techniques have been developed to achieve differentiation and identification of microorganisms. Among these methods, vibrational spectroscopic techniques (IR, Raman and SERS) are useful tools because of their rapidity and sensitivity. Recently we have shown that micro-Raman spectroscopy in combination with a support vector machine is an extremely capable approach for fast, reliable and non-destructive online identification of single bacteria belonging to different genera. To simulate different environmental conditions, in this contribution we analyzed different Staphylococcus strains grown under varying cultivation conditions in order to evaluate our method on a reliable dataset. First, micro-Raman spectra of the bulk material and of single bacterial cells grown under the same conditions were recorded and used separately for a distinct chemotaxonomic classification of the strains. Furthermore, Raman spectra were recorded from single bacterial cells that were cultured under various conditions to study the influence of cultivation on the discrimination ability. This dataset was analyzed both with a hierarchical cluster analysis (HCA) and a support vector machine (SVM).
Modeling the Monthly Water Balance of a First Order Coastal Forested Watershed
S. V. Harder; Devendra M. Amatya; T. J. Callahan; Carl C. Trettin
2006-01-01
A study has been conducted to evaluate a spreadsheet-based conceptual Thornthwaite monthly water balance model and the process-based DRAINMOD model for their reliability in predicting monthly water budgets of a poorly drained, first-order forested watershed at the Santee Experimental Forest located along the Lower Coastal Plain of South Carolina. Measured precipitation...
ERIC Educational Resources Information Center
Pantzare, Anna Lind
2015-01-01
In most large-scale assessment systems, a set of rather expensive external quality controls is implemented in order to guarantee interrater reliability. This study empirically examines whether teachers' ratings of national tests in mathematics can be reliable without using monitoring, training, or other methods of external quality…
Oster, Natalia V; Carney, Patricia A; Allison, Kimberly H; Weaver, Donald L; Reisch, Lisa M; Longton, Gary; Onega, Tracy; Pepe, Margaret; Geller, Berta M; Nelson, Heidi D; Ross, Tyler R; Tosteson, Anna N A; Elmore, Joann G
2013-02-05
Diagnostic test sets are a valuable research tool that contributes importantly to the validity and reliability of studies that assess agreement in breast pathology. In order to fully understand the strengths and weaknesses of any agreement and reliability study, however, the methods should be fully reported. In this paper we provide a step-by-step description of the methods used to create four complex test sets for a study of diagnostic agreement among pathologists interpreting breast biopsy specimens. We use the newly developed Guidelines for Reporting Reliability and Agreement Studies (GRRAS) as a basis to report these methods. Breast tissue biopsies were selected from the National Cancer Institute-funded Breast Cancer Surveillance Consortium sites. We used a random sampling stratified according to woman's age (40-49 vs. ≥50), parenchymal breast density (low vs. high) and interpretation of the original pathologist. A 3-member panel of expert breast pathologists first independently interpreted each case using five primary diagnostic categories (non-proliferative changes, proliferative changes without atypia, atypical ductal hyperplasia, ductal carcinoma in situ, and invasive carcinoma). When the experts did not unanimously agree on a case diagnosis a modified Delphi method was used to determine the reference standard consensus diagnosis. The final test cases were stratified and randomly assigned into one of four unique test sets. We found GRRAS recommendations to be very useful in reporting diagnostic test set development and recommend inclusion of two additional criteria: 1) characterizing the study population and 2) describing the methods for reference diagnosis, when applicable.
May, Michael R; Moore, Brian R
2016-11-01
Evolutionary biologists have long been fascinated by the extreme differences in species numbers across branches of the Tree of Life. This has motivated the development of statistical methods for detecting shifts in the rate of lineage diversification across the branches of phylogenetic trees. One of the most frequently used methods, MEDUSA, explores a set of diversification-rate models, where each model assigns branches of the phylogeny to a set of diversification-rate categories. Each model is first fit to the data, and the Akaike information criterion (AIC) is then used to identify the optimal diversification model. Surprisingly, the statistical behavior of this popular method is uncharacterized, which is a concern in light of: (1) the poor performance of the AIC as a means of choosing among models in other phylogenetic contexts; (2) the ad hoc algorithm used to visit diversification models; and (3) errors that we reveal in the likelihood function used to fit diversification models to the phylogenetic data. Here, we perform an extensive simulation study demonstrating that MEDUSA (1) has a high false-discovery rate (on average, spurious diversification-rate shifts are identified ≈30% of the time), and (2) provides biased estimates of diversification-rate parameters. Understanding the statistical behavior of MEDUSA is critical both to empirical researchers - in order to clarify whether these methods can make reliable inferences from empirical datasets - and to theoretical biologists - in order to clarify the specific problems that need to be solved in order to develop more reliable approaches for detecting shifts in the rate of lineage diversification. [Akaike information criterion; extinction; lineage-specific diversification rates; phylogenetic model selection; speciation.] © The Author(s) 2016. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
May, Michael R.; Moore, Brian R.
2016-01-01
Evolutionary biologists have long been fascinated by the extreme differences in species numbers across branches of the Tree of Life. This has motivated the development of statistical methods for detecting shifts in the rate of lineage diversification across the branches of phylogenetic trees. One of the most frequently used methods, MEDUSA, explores a set of diversification-rate models, where each model assigns branches of the phylogeny to a set of diversification-rate categories. Each model is first fit to the data, and the Akaike information criterion (AIC) is then used to identify the optimal diversification model. Surprisingly, the statistical behavior of this popular method is uncharacterized, which is a concern in light of: (1) the poor performance of the AIC as a means of choosing among models in other phylogenetic contexts; (2) the ad hoc algorithm used to visit diversification models; and (3) errors that we reveal in the likelihood function used to fit diversification models to the phylogenetic data. Here, we perform an extensive simulation study demonstrating that MEDUSA (1) has a high false-discovery rate (on average, spurious diversification-rate shifts are identified ≈30% of the time), and (2) provides biased estimates of diversification-rate parameters. Understanding the statistical behavior of MEDUSA is critical both to empirical researchers—in order to clarify whether these methods can make reliable inferences from empirical datasets—and to theoretical biologists—in order to clarify the specific problems that need to be solved in order to develop more reliable approaches for detecting shifts in the rate of lineage diversification. [Akaike information criterion; extinction; lineage-specific diversification rates; phylogenetic model selection; speciation.] PMID:27037081
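For readers unfamiliar with the selection step MEDUSA relies on, the following minimal Python sketch applies the AIC to a set of candidate diversification models; the model names, log-likelihoods and parameter counts are hypothetical, not taken from the study.

```python
def aic(log_likelihood, n_params):
    """Akaike information criterion: AIC = 2k - 2 ln L."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical fits of three diversification models to the same phylogeny:
# (model name, maximized log-likelihood, number of free rate parameters)
fits = [("single rate",     -250.3, 2),
        ("one rate shift",  -246.1, 5),
        ("two rate shifts", -245.8, 8)]

scores = {name: aic(ll, k) for name, ll, k in fits}
best = min(scores, key=scores.get)      # smallest AIC wins
print(scores, "->", best)
```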
George L. Peterson; Thomas C. Brown
1998-01-01
The paired comparison (PC) method is used to investigate reliability, transitivity, and decision time for binary choices among goods and sums of money. The PC method reveals inconsistent choices and yields an individual preference order over the set of items being compared. The data reported support the transitivity assumption and demonstrate high reliability for...
Quantitative metal magnetic memory reliability modeling for welded joints
NASA Astrophysics Data System (ADS)
Xing, Haiyan; Dang, Yongbin; Wang, Ben; Leng, Jiancheng
2016-03-01
Metal magnetic memory (MMM) testing has been widely used to detect welded joints. However, load levels, environmental magnetic fields, and measurement noise make the MMM data dispersive and hinder quantitative evaluation. In order to promote the development of quantitative MMM reliability assessment, a new MMM model is presented for welded joints. Steel Q235 welded specimens are tested along the longitudinal and horizontal lines by a TSC-2M-8 instrument in tensile fatigue experiments. X-ray testing is carried out synchronously to verify the MMM results. It is found that MMM testing can detect hidden cracks earlier than X-ray testing. Moreover, the MMM gradient vector sum K_vs is sensitive to the damage degree, especially at the early and hidden damage stages. Considering the dispersion of MMM data, the statistical law of K_vs is investigated, which shows that K_vs obeys a Gaussian distribution. K_vs is therefore a suitable MMM parameter for establishing a reliability model of welded joints. Finally, an original quantitative MMM reliability model is presented based on the improved stress-strength interference theory. It is shown that the reliability degree R gradually decreases as the residual life ratio T decreases, and the maximal error between the predicted reliability degree R_1 and the verification reliability degree R_2 is 9.15%. The presented method provides a novel tool for reliability testing and evaluation of welded joints in practical engineering.
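The stress-strength interference idea underlying such a reliability model can be sketched as follows, assuming independent Gaussian stress and strength (consistent with the Gaussian law found for K_vs); the numerical values are illustrative only and the paper's improved interference model may differ in detail.

```python
from math import sqrt
from scipy.stats import norm

def interference_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    """Classical stress-strength interference with independent Gaussian variables:
    R = P(strength > stress) = Phi(beta), beta = (mu_S - mu_L) / sqrt(sd_S^2 + sd_L^2)."""
    beta = (mu_strength - mu_stress) / sqrt(sd_strength**2 + sd_stress**2)
    return norm.cdf(beta)

# Illustrative values only (not from the paper)
print(interference_reliability(mu_strength=380.0, sd_strength=25.0,
                               mu_stress=300.0, sd_stress=30.0))
```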
On the probability of exceeding allowable leak rates through degraded steam generator tubes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cizelj, L.; Sorsek, I.; Riesch-Oppermann, H.
1997-02-01
This paper discusses some possible ways of predicting the behavior of the total leak rate through damaged steam generator tubes. This failure mode is of special concern in cases where most through-wall defects may remain in operation. A particular example is the application of the alternate (bobbin coil voltage) plugging criterion to Outside Diameter Stress Corrosion Cracking at the tube support plate intersections. It is the authors' aim to discuss some possible modeling options that could be applied to solve the problem formulated as: estimate the probability that the sum of all individual leak rates through degraded tubes exceeds the predefined acceptable value. The probabilistic approach is of course aiming at a reliable and computationally tractable estimate of the failure probability. A closed-form solution is given for the special case of exponentially distributed individual leak rates. Also, some possibilities for the use of computationally efficient First and Second Order Reliability Methods (FORM and SORM) are discussed. The first numerical example compares the results of the approximate methods with the closed-form results; SORM in particular shows acceptable agreement. The second numerical example considers a realistic case of the NPP in Krsko, Slovenia.
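The exponential special case mentioned above has a simple closed form: the sum of n independent exponential leak rates follows a Gamma distribution, so the exceedance probability is a Gamma survival function. A Python sketch with illustrative numbers only (the tube count, mean leak and allowable total are assumptions, not values from the paper):

```python
from scipy.stats import gamma

def prob_total_leak_exceeds(n_tubes, mean_leak, allowable_total):
    """P(sum of n i.i.d. exponential leak rates > allowable value).

    The sum of n independent Exp(mean=m) variables is Gamma(shape=n, scale=m),
    so the exceedance probability is the Gamma survival function.
    """
    return gamma.sf(allowable_total, a=n_tubes, scale=mean_leak)

# Illustrative: 200 degraded tubes, mean individual leak 0.01, allowable total 3.0
print(prob_total_leak_exceeds(n_tubes=200, mean_leak=0.01, allowable_total=3.0))
```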
Accuracy and reliability of pulp/tooth area ratio in upper canines by peri-apical X-rays.
Azevedo, A C; Michel-Crosato, E; Biazevic, M G H; Galić, I; Merelli, V; De Luca, S; Cameriere, R
2014-11-01
Due to the real need for careful staff training in age assessment, in order to improve capacity, consistency and competence, new research on the reliability and repeatability of methods frequently used in age assessment is required. The aim of this study was twofold: first, to test the accuracy of this method for age estimation; second, to obtain data on the reliability of this technique. A sample of 81 peri-apical radiographs of upper canines (44 men and 37 women), aged between 19 and 74 years, was used; the teeth were taken from the osteological collection of Sassari (Sardinia, Italy). Three blinded observers used the technique in order to perform the age estimation. The mean real age of the 81 observations was 37.21 (CI95% 34.37; 40.05), and estimated ages ranged from 36.65 to 38.99 (CI95%-Ex1 35.42; 41.28; CI95%-Ex2 33.89; 39.41; CI95%-Ex3 35.92; 42.06). The modulus (absolute) differences found between the three observers were 3.43, 4.24 and 4.45, respectively, for Ex1×Ex2, Ex1×Ex3 and Ex2×Ex3. The modulus differences observed between real and estimated ages were 2.55 (CI95% 1.90; 3.20), 2.22 (CI95% 1.65; 2.78) and 4.39 (CI95% 3.80; 5.75), respectively, for Ex1, Ex2 and Ex3. No differences were observed among measurements. This technique can be reproduced and repeated after proper training, since high reliability and accuracy were found. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Wang, Dengfeng; Cai, Kefang
2018-04-01
This article presents a hybrid method combining a modified non-dominated sorting genetic algorithm (MNSGA-II) with grey relational analysis (GRA) to improve the static-dynamic performance of a body-in-white (BIW). First, an implicit parametric model of the BIW was built using SFE-CONCEPT software, and the validity of the implicit parametric model was verified by physical testing. Eight shape design variables were defined for the BIW beam structures based on the implicit parametric technology. Subsequently, MNSGA-II was used to determine the optimal combination of the design parameters that can improve the bending stiffness, torsion stiffness and low-order natural frequencies of the BIW without a considerable increase in mass. A set of non-dominated solutions was then obtained in the multi-objective optimization design. Finally, grey entropy theory and GRA were applied to rank all non-dominated solutions from best to worst to determine the best trade-off solution. The comparison between GRA and the technique for order of preference by similarity to ideal solution (TOPSIS) illustrated the reliability and rationality of GRA. Moreover, the effectiveness of the hybrid method was verified by the optimal results: the bending stiffness, torsion stiffness, first-order bending and first-order torsion natural frequencies were improved by 5.46%, 9.30%, 7.32% and 5.73%, respectively, while the mass of the BIW increased by 1.30%.
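The GRA ranking step can be sketched as follows; this sketch uses equal objective weights rather than the grey-entropy weights of the paper, and the Pareto solutions shown are hypothetical.

```python
import numpy as np

def grey_relational_grade(obj, larger_better, zeta=0.5):
    """Rank non-dominated solutions with grey relational analysis (GRA).

    obj: array (n_solutions, n_objectives); larger_better: bool per objective.
    Returns the grey relational grade of each solution w.r.t. the ideal sequence.
    """
    obj = np.asarray(obj, dtype=float)
    norm = np.empty_like(obj)
    for j, lb in enumerate(larger_better):
        col = obj[:, j]
        if lb:
            norm[:, j] = (col - col.min()) / (col.max() - col.min())
        else:
            norm[:, j] = (col.max() - col) / (col.max() - col.min())
    delta = 1.0 - norm                       # deviation from the ideal sequence (all ones)
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coeff.mean(axis=1)                # equal weights; the paper uses grey-entropy weights

# Hypothetical Pareto solutions: [bending stiffness, torsion stiffness, mass]
sols = [[12000, 9500, 410], [12500, 9400, 415], [11800, 9800, 408]]
grade = grey_relational_grade(sols, larger_better=[True, True, False])
print(np.argsort(-grade))                    # best trade-off solution first
```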
NASA Astrophysics Data System (ADS)
Golmakani, M. E.; Malikan, M.; Sadraee Far, M. N.; Majidi, H. R.
2018-06-01
This paper presents a formulation based on simple first-order shear deformation theory (S-FSDT) for large deflection and buckling of orthotropic single-layered graphene sheets (SLGSs). The S-FSDT has many advantages compared to the classical plate theory (CPT) and the conventional FSDT, such as not requiring a shear correction factor, containing fewer unknowns than the existing FSDT, and strong similarities with the CPT. Governing equations and boundary conditions are derived based on Hamilton's principle using the nonlocal differential constitutive relations of Eringen and the von Kármán geometrical model. Numerical results are obtained using the differential quadrature (DQ) method and the Newton–Raphson iterative scheme. Finally, some comparison studies are carried out to show the high accuracy and reliability of the present formulations compared to the nonlocal CPT and FSDT for different thicknesses, elastic foundations and nonlocal parameters.
A Second-Order Confirmatory Factor Analysis of the Moral Distress Scale-Revised for Nurses.
Sharif Nia, Hamid; Shafipour, Vida; Allen, Kelly-Ann; Heidari, Mohammad Reza; Yazdani-Charati, Jamshid; Zareiyan, Armin
2017-01-01
Moral distress is a growing problem for healthcare professionals that may lead to dissatisfaction, resignation, or occupational burnout if left unattended, and nurses experience different levels of this phenomenon. This study aims to investigate the factor structure of the Persian version of the Moral Distress Scale-Revised in intensive care and general nurses. This methodological research was conducted with 771 nurses from eight hospitals in the Mazandaran Province of Iran in 2017. Participants completed the Moral Distress Scale-Revised; the data were collected and the factor structure was assessed using construct, convergent, and divergent validity methods. The reliability of the scale was assessed using internal consistency (Cronbach's alpha, Theta, and McDonald's omega coefficients) and construct reliability. Ethical considerations: This study was approved by the Ethics Committee of Mazandaran University of Medical Sciences. The exploratory factor analysis (N = 380) showed that the Moral Distress Scale-Revised has five factors: lack of professional competence at work, ignoring ethical issues and patient conditions, futile care, carrying out the physician's orders without question and unsafe care, and providing care under personal and organizational pressures, which explained 56.62% of the overall variance. The confirmatory factor analysis (N = 391) supported the five-factor solution and the second-order latent factor model. The first-order model did not show favorable convergent and divergent validity. Ultimately, the Moral Distress Scale-Revised was found to have favorable internal consistency and construct reliability. The Moral Distress Scale-Revised was found to be a multidimensional construct. The data obtained confirmed the hypothesis of the factor structure model with a latent second-order variable. Since the convergent and divergent validity of the scale were not confirmed in this study, further assessment is necessary in future studies.
Test-Retest Reliability of “High-Order” Functional Connectivity in Young Healthy Adults
Zhang, Han; Chen, Xiaobo; Zhang, Yu; Shen, Dinggang
2017-01-01
Functional connectivity (FC) has become a leading method for resting-state functional magnetic resonance imaging (rs-fMRI) analysis. However, the majority of the previous studies utilized pairwise, temporal synchronization-based FC. Recently, high-order FC (HOFC) methods were proposed with the idea of computing “correlation of correlations” to capture high-level, more complex associations among the brain regions. There are two types of HOFC. The first type is topographical profile similarity-based HOFC (tHOFC) and its variant, associated HOFC (aHOFC), for capturing different levels of HOFC. Instead of measuring the similarity of the original rs-fMRI signals as with the traditional FC (low-order FC, or LOFC), tHOFC measures the similarity of LOFC profiles (i.e., a set of LOFC values between a region and all other regions) between each pair of brain regions. The second type is dynamics-based HOFC (dHOFC), which defines the quadruple relationship among every four brain regions by first calculating two pairwise dynamic LOFC “time series” and then measuring their temporal synchronization (i.e., temporal correlation of the LOFC fluctuations, not the BOLD fluctuations). Applications have shown the superiority of HOFC over LOFC in both disease biomarker detection and individualized diagnosis. However, no study has been carried out to assess the test-retest reliability of different HOFC metrics. In this paper, we systematically evaluate the reliability of the two types of HOFC methods using test-retest rs-fMRI data from 25 (12 females, age 24.48 ± 2.55 years) young healthy adults with seven repeated scans (interval = 3–8 days). We found that all HOFC metrics have satisfactory reliability, specifically (1) fair-to-good for tHOFC and aHOFC, and (2) fair-to-moderate for dHOFC with relatively strong connectivity strength. We further give an in-depth analysis of the biological meanings of each HOFC metric and highlight their differences compared to the LOFC from the aspects of cross-level information exchanges, within-/between-network connectivity, and modulatory connectivity. In addition, how the dynamic analysis parameter (i.e., sliding window length) affects dHOFC reliability is also investigated. Our study reveals unique functional associations characterized by the HOFC metrics. Guidance and recommendations for future applications and clinical research using HOFC are provided. This study has made a further step toward unveiling the more complex human brain connectome. PMID:28824362
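The tHOFC "correlation of correlations" idea can be reproduced in a few lines of numpy; the sketch below uses toy data, keeps the diagonal of the LOFC matrix for brevity, and is not the authors' implementation.

```python
import numpy as np

def low_order_fc(ts):
    """LOFC: Pearson correlation between ROI time series; ts has shape (n_timepoints, n_rois)."""
    return np.corrcoef(ts, rowvar=False)

def thofc(ts):
    """tHOFC: similarity of LOFC profiles ('correlation of correlations').

    For each pair of regions, correlate their rows of the LOFC matrix
    (excluding self-connections is omitted here for brevity).
    """
    lofc = low_order_fc(ts)
    return np.corrcoef(lofc)          # correlation between LOFC profiles

# Toy data: 200 time points, 10 regions
rng = np.random.default_rng(0)
ts = rng.standard_normal((200, 10))
print(thofc(ts).shape)                # (10, 10) high-order FC matrix
```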
NASA Technical Reports Server (NTRS)
Hou, Gene J.-W; Newman, Perry A. (Technical Monitor)
2004-01-01
A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measurement of safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are provided later to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).
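A minimal sketch of the MPP search described above, assuming an illustrative limit-state function already expressed in standard normal space; beta is the norm of the MPP and the FORM estimate is P_f ≈ Φ(−β).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Assumed limit-state function in standard normal space (illustrative only):
# g(u) <= 0 denotes failure.
def g(u):
    return 3.0 + 0.5 * u[0] - u[1] - 0.1 * u[0] ** 2

# MPP search: minimize the squared distance to the origin subject to g(u) = 0
res = minimize(lambda u: np.dot(u, u),
               x0=np.array([0.1, 0.1]),
               constraints={"type": "eq", "fun": g})

beta = np.linalg.norm(res.x)          # reliability (safety) index
pf = norm.cdf(-beta)                  # FORM estimate of the failure probability
print(res.x, beta, pf)
```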
Sarma-based key-group method for rock slope reliability analyses
NASA Astrophysics Data System (ADS)
Yarahmadi Bafghi, A. R.; Verdel, T.
2005-08-01
The methods used in conducting static stability analyses have remained pertinent to this day for reasons of both simplicity and speed of execution. The most well-known of these methods for purposes of stability analysis of fractured rock masses is the key-block method (KBM). This paper proposes an extension to the KBM, called the key-group method (KGM), which combines not only individual key-blocks but also groups of collapsable blocks into an iterative and progressive analysis of the stability of discontinuous rock slopes. To take intra-group forces into account, the Sarma method has been implemented within the KGM in order to generate a Sarma-based KGM, abbreviated SKGM. We discuss herein the hypothesis behind this new method, details regarding its implementation, and validation through comparison with results obtained from the distinct element method. Furthermore, as an alternative to deterministic methods, reliability analyses or probabilistic analyses have been proposed to take account of the uncertainty in analytical parameters and models. The FOSM and ASM probabilistic methods could be implemented within the KGM and SKGM framework in order to take account of the uncertainty due to physical and mechanical data (density, cohesion and angle of friction). We then show how such reliability analyses can be introduced into the SKGM to give rise to the probabilistic SKGM (PSKGM) and how it can be used for rock slope reliability analyses.
Reliability evaluation of a multistate network subject to time constraint under routing policy
NASA Astrophysics Data System (ADS)
Lin, Yi-Kuei
2013-08-01
A multistate network is a stochastic network composed of multistate arcs, in which each arc has several possible capacities and may fail due to damage, maintenance, etc. The quality of a multistate network depends on how well it meets the customer's requirements and provides the service in time. The system reliability, the probability that a given amount of data can be transmitted through a pair of minimal paths (MPs) simultaneously under the time constraint, is a proper index to evaluate the quality of a multistate network. An efficient solution procedure is first proposed to calculate it. In order to further enhance the system reliability, the network administrator decides the routing policy in advance to indicate the first- and second-priority pairs of MPs. The second-priority pair of MPs takes charge of the transmission duty if the first one fails. The system reliability under the routing policy can subsequently be evaluated.
Structural Optimization for Reliability Using Nonlinear Goal Programming
NASA Technical Reports Server (NTRS)
El-Sayed, Mohamed E.
1999-01-01
This report details the development of a reliability-based multi-objective design tool for solving structural optimization problems. Based on two different optimization techniques, namely sequential unconstrained minimization and nonlinear goal programming, the developed design method has the capability to take into account the effects of variability on the proposed design through a user-specified reliability design criterion. In its sequential unconstrained minimization mode, the developed design tool uses a composite objective function, in conjunction with weight-ordered design objectives, in order to take into account conflicting and multiple design criteria. Multiple design criteria of interest include structural weight, load-induced stress and deflection, and mechanical reliability. The nonlinear goal programming mode, on the other hand, provides a design method that eliminates the difficulty of having to define an objective function and constraints, while at the same time having the capability of handling rank-ordered design objectives or goals. For simulation purposes, the design of a pressure vessel cover plate was undertaken as a test bed for the newly developed design tool. The formulation of this structural optimization problem into sequential unconstrained minimization and goal programming form is presented. The resulting optimization problem was solved using: (i) the linear extended interior penalty function method algorithm; and (ii) Powell's conjugate directions method. Both single and multi-objective numerical test cases are included, demonstrating the design tool's capabilities as applied to this design problem.
A stochastic hybrid systems based framework for modeling dependent failure processes
Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying
2017-01-01
In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system discrete states are triggered by random shocks. The modeling is, then, based on Stochastic Hybrid Systems (SHS), whose state space is comprised of a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations are derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from literature and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework is able to yield an accurate estimation of reliability with less computational costs compared to traditional Monte Carlo-based methods. PMID:28231313
A stochastic hybrid systems based framework for modeling dependent failure processes.
Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying
2017-01-01
In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system discrete states are triggered by random shocks. The modeling is, then, based on Stochastic Hybrid Systems (SHS), whose state space is comprised of a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations are derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from literature and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework is able to yield an accurate estimation of reliability with less computational costs compared to traditional Monte Carlo-based methods.
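The two estimation steps named above (FOSM and the Markov-inequality bound) can be sketched as follows, with illustrative moment values standing in for the conditional moments produced by the SHS equations.

```python
from math import sqrt
from scipy.stats import norm

def fosm_reliability(mean_margin, var_margin):
    """First Order Second Moment estimate: beta = E[g]/std[g], R ~ Phi(beta),
    where g = threshold - degradation is the safety margin."""
    beta = mean_margin / sqrt(var_margin)
    return norm.cdf(beta)

def markov_lower_bound(mean_degradation, threshold):
    """Markov inequality for a nonnegative degradation X:
    P(X >= D) <= E[X]/D, hence R = P(X < D) >= 1 - E[X]/D."""
    return max(0.0, 1.0 - mean_degradation / threshold)

# Illustrative moments only (not taken from the paper)
print(fosm_reliability(mean_margin=2.0, var_margin=0.64))
print(markov_lower_bound(mean_degradation=1.5, threshold=3.5))
```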
An efficient and reliable predictive method for fluidized bed simulation
Lu, Liqiang; Benyahia, Sofiane; Li, Tingwen
2017-06-13
In past decades, the continuum approach was the only practical technique to simulate large-scale fluidized bed reactors because discrete approaches suffer from the cost of tracking huge numbers of particles and their collisions. This study significantly improved the computation speed of discrete particle methods in two steps: First, the time-driven hard-sphere (TDHS) algorithm with a larger time-step is proposed allowing a speedup of 20-60 times; second, the number of tracked particles is reduced by adopting the coarse-graining technique gaining an additional 2-3 orders of magnitude speedup of the simulations. A new velocity correction term was introduced and validated in TDHS to solve the over-packing issue in dense granular flow. The TDHS was then coupled with the coarse-graining technique to simulate a pilot-scale riser. The simulation results compared well with experiment data and proved that this new approach can be used for efficient and reliable simulations of large-scale fluidized bed systems.
An efficient and reliable predictive method for fluidized bed simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Liqiang; Benyahia, Sofiane; Li, Tingwen
2017-06-29
In past decades, the continuum approach was the only practical technique to simulate large-scale fluidized bed reactors because discrete approaches suffer from the cost of tracking huge numbers of particles and their collisions. This study significantly improved the computation speed of discrete particle methods in two steps: First, the time-driven hard-sphere (TDHS) algorithm with a larger time-step is proposed allowing a speedup of 20-60 times; second, the number of tracked particles is reduced by adopting the coarse-graining technique gaining an additional 2-3 orders of magnitude speedup of the simulations. A new velocity correction term was introduced and validated in TDHS to solve the over-packing issue in dense granular flow. The TDHS was then coupled with the coarse-graining technique to simulate a pilot-scale riser. The simulation results compared well with experiment data and proved that this new approach can be used for efficient and reliable simulations of large-scale fluidized bed systems.
Convex Accelerated Maximum Entropy Reconstruction
Worley, Bradley
2016-01-01
Maximum entropy (MaxEnt) spectral reconstruction methods provide a powerful framework for spectral estimation of nonuniformly sampled datasets. Many methods exist within this framework, usually defined based on the magnitude of a Lagrange multiplier in the MaxEnt objective function. An algorithm is presented here that utilizes accelerated first-order convex optimization techniques to rapidly and reliably reconstruct nonuniformly sampled NMR datasets using the principle of maximum entropy. This algorithm – called CAMERA for Convex Accelerated Maximum Entropy Reconstruction Algorithm – is a new approach to spectral reconstruction that exhibits fast, tunable convergence in both constant-aim and constant-lambda modes. A high-performance, open source NMR data processing tool is described that implements CAMERA, and brief comparisons to existing reconstruction methods are made on several example spectra. PMID:26894476
A Novel Method to Handle the Effect of Uneven Sampling Effort in Biodiversity Databases
Pardo, Iker; Pata, María P.; Gómez, Daniel; García, María B.
2013-01-01
How reliable are results on the spatial distribution of biodiversity based on databases? Many studies have evidenced the uncertainty related to this kind of analysis due to sampling effort bias and the need for its quantification. Although a number of methods are available for this purpose, little is known about their statistical limitations and discrimination capability, which could seriously constrain their use. We assess for the first time the discrimination capacity of two widely used methods and a proposed new one (FIDEGAM), all based on species accumulation curves, under different scenarios of sampling exhaustiveness using Receiver Operating Characteristic (ROC) analyses. Additionally, we examine to what extent the output of each method represents the sampling completeness in a simulated scenario where the true species richness is known. Finally, we apply FIDEGAM to a real situation and explore the spatial patterns of plant diversity in a National Park. FIDEGAM showed an excellent discrimination capability to distinguish between well and poorly sampled areas regardless of sampling exhaustiveness, whereas the other methods failed. Accordingly, FIDEGAM values were strongly correlated with the true percentage of species detected in a simulated scenario, whereas sampling completeness estimated with other methods showed no relationship due to null discrimination capability. Quantifying sampling effort is necessary to account for the uncertainty in biodiversity analyses; however, not all proposed methods are equally reliable. Our comparative analysis demonstrated that FIDEGAM was the most accurate discriminator method in all scenarios of sampling exhaustiveness, and therefore, it can be efficiently applied to most databases in order to enhance the reliability of biodiversity analyses. PMID:23326357
A novel method to handle the effect of uneven sampling effort in biodiversity databases.
Pardo, Iker; Pata, María P; Gómez, Daniel; García, María B
2013-01-01
How reliable are results on the spatial distribution of biodiversity based on databases? Many studies have evidenced the uncertainty related to this kind of analysis due to sampling effort bias and the need for its quantification. Although a number of methods are available for this purpose, little is known about their statistical limitations and discrimination capability, which could seriously constrain their use. We assess for the first time the discrimination capacity of two widely used methods and a proposed new one (FIDEGAM), all based on species accumulation curves, under different scenarios of sampling exhaustiveness using Receiver Operating Characteristic (ROC) analyses. Additionally, we examine to what extent the output of each method represents the sampling completeness in a simulated scenario where the true species richness is known. Finally, we apply FIDEGAM to a real situation and explore the spatial patterns of plant diversity in a National Park. FIDEGAM showed an excellent discrimination capability to distinguish between well and poorly sampled areas regardless of sampling exhaustiveness, whereas the other methods failed. Accordingly, FIDEGAM values were strongly correlated with the true percentage of species detected in a simulated scenario, whereas sampling completeness estimated with other methods showed no relationship due to null discrimination capability. Quantifying sampling effort is necessary to account for the uncertainty in biodiversity analyses; however, not all proposed methods are equally reliable. Our comparative analysis demonstrated that FIDEGAM was the most accurate discriminator method in all scenarios of sampling exhaustiveness, and therefore, it can be efficiently applied to most databases in order to enhance the reliability of biodiversity analyses.
Approximating genomic reliabilities for national genomic evaluation
USDA-ARS?s Scientific Manuscript database
With the introduction of standard methods for approximating effective daughter/data contribution by Interbull in 2001, conventional EDC or reliabilities contributed by daughter phenotypes are directly comparable across countries and used in routine conventional evaluations. In order to make publishe...
A hybrid approach to near-optimal launch vehicle guidance
NASA Technical Reports Server (NTRS)
Leung, Martin S. K.; Calise, Anthony J.
1992-01-01
This paper evaluates a proposed hybrid analytical/numerical approach to launch-vehicle guidance for ascent to orbit injection. The feedback-guidance approach is based on a piecewise nearly analytic zero-order solution evaluated using a collocation method. The zero-order solution is then improved through a regular perturbation analysis, wherein the neglected dynamics are corrected in the first-order term. For real-time implementation, the guidance approach requires solving a set of small-dimension nonlinear algebraic equations and performing quadrature. Assessment of performance and reliability is carried out through closed-loop simulation for a vertically launched two-stage heavy-lift-capacity vehicle to a low Earth orbit. The solutions are compared with optimal solutions generated from a multiple shooting code. In the example, the guidance approach delivers over 99.9 percent of optimal performance and terminal constraint accuracy.
Efficient method of image edge detection based on FSVM
NASA Astrophysics Data System (ADS)
Cai, Aiping; Xiong, Xiaomei
2013-07-01
For efficient object edge detection in digital images, this paper studies traditional methods and an algorithm based on SVM. The analysis shows that the Canny edge detection algorithm produces some pseudo-edges and has poor anti-noise capability. In order to provide a reliable edge extraction method, a new detection algorithm based on FSVM is proposed. It contains several steps: first, the classification samples are trained and different membership functions are assigned to different samples. Then, a new training set is formed by increasing the penalty on misclassified sub-samples, and the new FSVM classification model is trained and tested on it. Finally, the edges of the object image are extracted using the model. Experimental results show that good edge-detection images are obtained, and noise-addition experiments show that the method has good anti-noise performance.
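Scikit-learn has no FSVM implementation, but the membership-weighting idea can be approximated by passing fuzzy memberships as per-sample weights to a standard SVM; the features, membership rule and data below are assumptions for illustration only, not the authors' algorithm.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy pixel features (e.g. gradient magnitude and local variance) and edge labels
X = rng.standard_normal((300, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Fuzzy memberships: samples far from their (assumed) class centre get lower weight,
# which approximates the FSVM idea of down-weighting unreliable or noisy samples.
centres = np.array([X[y == k].mean(axis=0) for k in (0, 1)])
dist = np.linalg.norm(X - centres[y], axis=1)
membership = 1.0 - dist / (dist.max() + 1e-6)

clf = SVC(kernel="rbf", C=10.0)
clf.fit(X, y, sample_weight=membership)   # per-sample weights emulate fuzzy membership
print(clf.score(X, y))
```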
Generalized Higher Order Orthogonal Iteration for Tensor Learning and Decomposition.
Liu, Yuanyuan; Shang, Fanhua; Fan, Wei; Cheng, James; Cheng, Hong
2016-12-01
Low-rank tensor completion (LRTC) has successfully been applied to a wide range of real-world problems. Despite the broad, successful applications, existing LRTC methods may become very slow or even not applicable for large-scale problems. To address this issue, a novel core tensor trace-norm minimization (CTNM) method is proposed for simultaneous tensor learning and decomposition, and has a much lower computational complexity. In our solution, first, the equivalence relation of the trace norm of a low-rank tensor and its core tensor is induced. Second, the trace norm of the core tensor is used to replace that of the whole tensor, which leads to two much smaller scale matrix TNM problems. Finally, an efficient alternating direction augmented Lagrangian method is developed to solve our problems. Our CTNM formulation needs only O((R^N + NRI) log(√(I^N))) observations to reliably recover an Nth-order I×I×…×I tensor of n-rank (r, r, …, r), compared with the O(rI^(N-1)) observations required by those tensor TNM methods (I > R ≥ r). Extensive experimental results show that CTNM is usually more accurate than them, and is orders of magnitude faster.
Reliability of Hypernasality Rating: Comparison of 3 Different Methods for Perceptual Assessment.
Yamashita, Renata Paciello; Borg, Elisabet; Granqvist, Svante; Lohmander, Anette
2018-01-01
To compare reliability in auditory-perceptual assessment of hypernasality for 3 different methods and to explore the influence of language background. Comparative methodological study. Participants and Materials: Audio recordings of 5-year-old Swedish-speaking children with repaired cleft lip and palate consisting of 73 stimuli of 9 nonnasal single-word strings in 3 different randomized orders. Four experienced speech-language pathologists (2 native speakers of Brazilian-Portuguese and 2 native speakers of Swedish) participated as listeners. After individual training, each listener performed the hypernasality rating task. Each order of stimuli was analyzed individually using the 2-step, VISOR and Borg centiMax scale methods. Comparison of intra- and inter-rater reliability, and consistency for each method within language of the listener and between listener languages (Swedish and Brazilian-Portuguese). Good to excellent intra-rater reliability was found within each listener for all methods, 2-step: κ = 0.59-0.93; VISOR: intraclass correlation coefficient (ICC) = 0.80-0.99; Borg centiMax (cM) scale: ICC = 0.80-1.00. The highest inter-rater reliability was demonstrated for VISOR (ICC = 0.60-0.90) and Borg cM-scale (ICC = 0.40-0.80). High consistency within each method was found with the highest for the Borg cM scale (ICC = 0.89-0.91). There was a significant difference in the ratings between the Swedish and the Brazilian listeners for all methods. The category-ratio scale Borg cM was considered most reliable in the assessment of hypernasality. Language background of Brazilian-Portuguese listeners influenced the perceptual ratings of hypernasality in Swedish speech samples, despite their experience in perceptual assessment of cleft palate speech disorders.
Automated reliability assessment for spectroscopic redshift measurements
NASA Astrophysics Data System (ADS)
Jamal, S.; Le Brun, V.; Le Fèvre, O.; Vibert, D.; Schmitt, A.; Surace, C.; Copin, Y.; Garilli, B.; Moresco, M.; Pozzetti, L.
2018-03-01
Context. Future large-scale surveys, such as the ESA Euclid mission, will produce a large set of galaxy redshifts (≥106) that will require fully automated data-processing pipelines to analyze the data, extract crucial information and ensure that all requirements are met. A fundamental element in these pipelines is to associate to each galaxy redshift measurement a quality, or reliability, estimate. Aim. In this work, we introduce a new approach to automate the spectroscopic redshift reliability assessment based on machine learning (ML) and characteristics of the redshift probability density function. Methods: We propose to rephrase the spectroscopic redshift estimation into a Bayesian framework, in order to incorporate all sources of information and uncertainties related to the redshift estimation process and produce a redshift posterior probability density function (PDF). To automate the assessment of a reliability flag, we exploit key features in the redshift posterior PDF and machine learning algorithms. Results: As a working example, public data from the VIMOS VLT Deep Survey is exploited to present and test this new methodology. We first tried to reproduce the existing reliability flags using supervised classification in order to describe different types of redshift PDFs, but due to the subjective definition of these flags (classification accuracy 58%), we soon opted for a new homogeneous partitioning of the data into distinct clusters via unsupervised classification. After assessing the accuracy of the new clusters via resubstitution and test predictions (classification accuracy 98%), we projected unlabeled data from preliminary mock simulations for the Euclid space mission into this mapping to predict their redshift reliability labels. Conclusions: Through the development of a methodology in which a system can build its own experience to assess the quality of a parameter, we are able to set a preliminary basis of an automated reliability assessment for spectroscopic redshift measurements. This newly-defined method is very promising for next-generation large spectroscopic surveys from the ground and in space, such as Euclid and WFIRST. A table of the reclassified VVDS redshifts and reliability is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/611/A53
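The unsupervised partitioning step can be sketched with an off-the-shelf clustering algorithm; the PDF descriptors and their distributions below are hypothetical stand-ins for the features extracted from the redshift posterior PDFs in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical descriptors of redshift posterior PDFs (one row per galaxy):
# e.g. PDF dispersion, probability in the strongest mode, number of significant modes
features = np.column_stack([
    rng.gamma(2.0, 0.01, 500),
    rng.uniform(0.2, 1.0, 500),
    rng.integers(1, 6, 500),
])

# Unsupervised partitioning into reliability clusters, as in the proposed method
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(features)
print(np.bincount(labels))
```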
de Vroege, Lars; Emons, Wilco H M; Sijtsma, Klaas; van der Feltz-Cornelis, Christina M
2018-01-01
The Bermond-Vorst Alexithymia Questionnaire (BVAQ) has been validated in student samples and small clinical samples, but not in the general population; thus, representative general-population norms are lacking. We examined the factor structure of the BVAQ in Longitudinal Internet Studies for the Social Sciences panel data from the Dutch general population (N = 974). Factor analyses revealed a first-order five-factor model and a second-order two-factor model. However, in the second-order model, the factor interpreted as analyzing ability loaded on both the affective factor and the cognitive factor. Further analyses showed that the first-order test scores are more reliable than the second-order test scores. External and construct validity were addressed by comparing BVAQ scores with a clinical sample of patients suffering from somatic symptom and related disorder (SSRD) (N = 235). BVAQ scores differed significantly between the general population and patients suffering from SSRD, suggesting acceptable construct validity. Age was positively associated with alexithymia. Males showed higher levels of alexithymia. The BVAQ is a reliable alternative instrument for measuring alexithymia.
NASA Astrophysics Data System (ADS)
Dadashzadeh, N.; Duzgun, H. S. B.; Yesiloglu-Gultekin, N.
2017-08-01
While advanced numerical techniques in slope stability analysis are successfully used in deterministic studies, they have so far found limited use in probabilistic analyses due to their high computation cost. The first-order reliability method (FORM) is one of the most efficient probabilistic techniques to perform probabilistic stability analysis by considering the associated uncertainties in the analysis parameters. However, it is not possible to directly use FORM in numerical slope stability evaluations as it requires definition of a limit state performance function. In this study, an integrated methodology for probabilistic numerical modeling of rock slope stability is proposed. The methodology is based on response surface method, where FORM is used to develop an explicit performance function from the results of numerical simulations. The implementation of the proposed methodology is performed by considering a large potential rock wedge in Sumela Monastery, Turkey. The accuracy of the developed performance function to truly represent the limit state surface is evaluated by monitoring the slope behavior. The calculated probability of failure is compared with Monte Carlo simulation (MCS) method. The proposed methodology is found to be 72% more efficient than MCS, while the accuracy is decreased with an error of 24%.
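A minimal sketch of the response-surface step, assuming hypothetical design-of-experiments runs of a numerical slope model: a quadratic performance function is fitted by least squares and can then be handed to FORM as the explicit limit state. All inputs and outputs below are assumptions for illustration.

```python
import numpy as np

# Hypothetical design-of-experiments results from a numerical slope model:
# inputs x = (cohesion, friction angle), output y = factor of safety - 1
X = np.array([[30., 35.], [40., 35.], [30., 45.], [40., 45.], [35., 40.]])
y = np.array([-0.10, 0.05, 0.12, 0.30, 0.09])

# Quadratic response surface g(x) = a0 + sum a_i x_i + sum b_i x_i^2 (no cross terms)
def basis(X):
    return np.column_stack([np.ones(len(X)), X, X**2])

coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)

def g_surrogate(x):
    """Explicit limit-state performance function usable by FORM (g < 0 means failure)."""
    x = np.atleast_2d(x)
    return basis(x) @ coef

print(g_surrogate([35., 40.]))
```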
NASA Astrophysics Data System (ADS)
Zhou, Shiqi; Zhou, Run
2017-08-01
Using the TL (Tang and Lu, 1993) method, the Ornstein-Zernike integral equation is solved perturbatively under the mean spherical approximation (MSA) for a fluid with a potential consisting of a hard sphere plus square-well plus square-shoulder (HS + SW + SS), to obtain first-order analytic expressions for the radial distribution function (RDF) and the second-order direct correlation function, and semi-analytic expressions for common thermodynamic properties. A comprehensive comparison between the first-order MSA and the high temperature series expansion (HTSE) to third-, fifth- and seventh-order is performed over a wide parameter range for both HS + SW and HS + SW + SS model fluids, using corresponding "exact" Monte Carlo results as a reference; although the HTSE is carried out up to seventh order rather than to first order as in the first-order MSA, the comparison is considered fair from a computational complexity perspective. It is found that the performance of the first-order MSA is strongly model-dependent: as target potentials go from HS + SW to HS + SW + SS, (i) there is a dramatic drop in the performance of the first-order MSA expressions in calculating the thermodynamic properties; in particular, both the excess internal energy and the constant-volume excess heat capacity of the HS + SW + SS model cannot be predicted even qualitatively correctly. (ii) A tendency is noticed that the first-order MSA becomes more reliable with increasing temperature in dealing with the pressure, excess Helmholtz free energy, excess enthalpy and excess chemical potential. (iii) Concerning the RDF, the first-order MSA is not as disappointing as it is for the thermodynamics. (iv) In the case of the HS + SW model, the first-order MSA solution is shown to be quantitatively correct in calculating the pressure and excess chemical potential even when the reduced temperatures are as low as 0.8. On the other hand, the seventh-order HTSE is less model-dependent; in most cases of the HS + SW and HS + SW + SS models, the seventh-order HTSE improves on the fifth- and third-order HTSE in both thermodynamic properties and the RDF, and the improvements are very noticeable in both the excess internal energy and the constant-volume excess heat capacity; for a few limited cases, the seventh-order HTSE improves on the fifth-order HTSE only within the lower density domain and even shows some inadequacy over the higher density domain.
Structural reliability assessment of the Oman India Pipeline
DOE Office of Scientific and Technical Information (OSTI.GOV)
Al-Sharif, A.M.; Preston, R.
1996-12-31
Reliability techniques are increasingly finding application in design. The special design conditions for the deep water sections of the Oman India Pipeline dictate their use since the experience basis for application of standard deterministic techniques is inadequate. The paper discusses the reliability analysis as applied to the Oman India Pipeline, including selection of a collapse model, characterization of the variability in the parameters that affect pipe resistance to collapse, and implementation of first and second order reliability analyses to assess the probability of pipe failure. The reliability analysis results are used as the basis for establishing the pipe wall thickness requirements for the pipeline.
Mash, Bob; Derese, Anselme
2013-01-01
Background: Competency-based education and the validity and reliability of workplace-based assessment of postgraduate trainees have received increasing attention worldwide. Family medicine was recognised as a speciality in South Africa six years ago and a satisfactory portfolio of learning is a prerequisite to sit the national exit exam. A massive scaling up of the number of family physicians is needed in order to meet the health needs of the country. Aim: The aim of this study was to develop a reliable, robust and feasible portfolio assessment tool (PAT) for South Africa. Methods: Six raters each rated nine portfolios from the Stellenbosch University programme, using the PAT, to test for inter-rater reliability. This rating was repeated three months later to determine test–retest reliability. Following initial analysis and feedback, the PAT was modified and the inter-rater reliability again assessed on nine new portfolios. An acceptable intra-class correlation was considered to be > 0.80. Results: The total score was found to be reliable, with a coefficient of 0.92. For test–retest reliability, the difference in mean total score was 1.7%, which was not statistically significant. Amongst the subsections, only assessment of the educational meetings and the logbook showed reliability coefficients > 0.80. Conclusion: This was the first attempt to develop a reliable, robust and feasible national portfolio assessment tool to assess postgraduate family medicine training in the South African context. The tool was reliable for the total score, but the low reliability of several sections in the PAT helped us to develop 12 recommendations regarding the use of the portfolio, the design of the PAT and the training of raters.
NASA Astrophysics Data System (ADS)
Chen, Fan; Huang, Shaoxiong; Ding, Jinjin; Ding, Jinjin; Gao, Bo; Xie, Yuguang; Wang, Xiaoming
2018-01-01
This paper proposes a fast reliability assessment method for distribution grids with distributed renewable energy generation. First, the Weibull distribution and the Beta distribution are used to describe the probability distribution characteristics of wind speed and solar irradiance, respectively, and models of the wind farm, solar park and local load are built for reliability assessment. Then, based on power system production cost simulation, probability discretization and linearized power flow, an optimal power flow problem with the objective of minimizing the cost of conventional power generation is solved. Thus, reliability assessment for the distribution grid is implemented quickly and accurately. The Loss Of Load Probability (LOLP) and Expected Energy Not Supplied (EENS) are selected as the reliability indices; a MATLAB simulation of the IEEE RBTS BUS6 system indicates that the fast reliability assessment method calculates the reliability indices much faster than the Monte Carlo method while maintaining accuracy.
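For comparison, the Monte Carlo benchmark that the proposed method is evaluated against can be sketched as below; the Weibull/Beta parameters, power-curve constants and load level are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000                                   # sampled hours

# Illustrative resource parameters (assumptions)
v = rng.weibull(2.0, n) * 8.0                 # wind speed: Weibull(k=2), scale 8 m/s
G = rng.beta(2.0, 2.0, n)                     # normalised solar irradiance: Beta(2, 2)

def wind_power(v, rated=2.0, v_in=3.0, v_r=12.0, v_out=25.0):
    """Piecewise-linear wind turbine power curve (MW)."""
    p = np.where((v >= v_in) & (v < v_r), rated * (v - v_in) / (v_r - v_in), 0.0)
    return np.where((v >= v_r) & (v <= v_out), rated, p)

p_wind = wind_power(v)                        # MW
p_pv = 1.5 * G                                # MW, assumed 1.5 MW solar park
p_conv = 4.0                                  # MW of conventional capacity (assumption)
load = 6.0                                    # MW constant load (assumption)

shortfall = np.maximum(load - (p_wind + p_pv + p_conv), 0.0)
lolp = np.mean(shortfall > 0)                 # Loss Of Load Probability
eens = shortfall.mean() * 8760                # Expected Energy Not Supplied, MWh/yr
print(lolp, eens)
```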
Residual stress measurement in a metal microdevice by micro Raman spectroscopy
NASA Astrophysics Data System (ADS)
Song, Chang; Du, Liqun; Qi, Leijie; Li, Yu; Li, Xiaojun; Li, Yuanqi
2017-10-01
Large residual stresses induced during the electroforming process cannot be ignored if reliable metal microdevices are to be fabricated. Accurate measurement is the basis for studying the residual stress. Because the topological feature sizes of metal microdevices are on the micron scale, the residual stress in them can hardly be measured by common methods. In this manuscript, a methodology is proposed to measure the residual stress in metal microdevices using micro Raman spectroscopy (MRS). To estimate the residual stress in the metal material, micron-sized β-SiC particles were mixed into the electroforming solution for codeposition. First, the expression relating the Raman shifts to the induced biaxial stress for β-SiC was derived based on the theory of phonon deformation potentials and Hooke's law. Corresponding micro electroforming experiments were performed, and the residual stress in the Ni-SiC composite layer was measured by both X-ray diffraction (XRD) and MRS methods. Then, the validity of the MRS measurements was verified by comparison with the residual stress measured by the XRD method. The reliability of the MRS method was further validated by the statistical Student's t-test. The MRS measurements were found to have no systematic error in comparison with the XRD measurements, which confirms that the residual stresses measured by the MRS method are reliable. Beyond that, the MRS method, by which the residual stress in a micro inertial switch was measured, has been confirmed to be a convincing experimental tool for estimating the residual stress in metal microdevices with micron-order topological feature sizes.
ERIC Educational Resources Information Center
Demir, Engin; Budak, Yusuf; Demir, Cennet Gologlu
2017-01-01
The aim of this study was to develop the "Perceived Value Scale in regard to Teaching Profession of Prospective Teachers." The validity and reliability analyses of the scale, developed for prospective elementary school teachers, were performed. In order to determine the values of the teaching profession, first of all, the related literature…
Carvalho, Teresa; Cunha, Marina; Pinto-Gouveia, José; Duarte, Joana
2015-03-30
The PTSD Checklist-Military Version (PCL-M) is a brief self-report instrument widely used to assess Post-traumatic Stress Disorder (PTSD) symptomatology in war Veterans, according to DSM-IV. This study sought to explore the factor structure and reliability of the Portuguese version of the PCL-M. A sample of 660 Portuguese Colonial War Veterans completed the PCL-M. Several Confirmatory Factor Analyses were conducted to test different structures for PCL-M PTSD symptoms. Although the respecified first-order four-factor model based on King et al.'s model showed the best fit to the data, the respecified first- and second-order models based on the DSM-IV symptom clusters also presented an acceptable fit. In addition, the PCL-M showed adequate reliability. The Portuguese version of the PCL-M is thus a valid and reliable measure to assess the severity of PTSD symptoms as described in DSM-IV. Its use with Portuguese Colonial War Veterans may ease screening of possible PTSD cases, promote more suitable treatment planning, and enable monitoring of therapeutic outcomes. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Quasi-Static Probabilistic Structural Analyses Process and Criteria
NASA Technical Reports Server (NTRS)
Goldberg, B.; Verderaime, V.
1999-01-01
Current deterministic structural methods are easily applied to substructures and components, and analysts have built great design insight and confidence in them over the years. However, deterministic methods cannot support systems risk analyses, and it was recently reported that deterministic treatment of statistical data is inconsistent with error propagation laws, which can result in unevenly conservative structural predictions. Assuming normal distributions and using statistical data formats throughout the prevailing deterministic stress processes leads to a safety factor in statistical format which, integrated into the safety index, provides a relationship between the safety factor and first-order reliability. The safety factor embedded in the safety index expression allows a historically based risk to be determined and verified over a variety of quasi-static metallic substructures, consistent with traditional safety factor methods and NASA Std. 5001 criteria.
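The safety-factor/safety-index relationship described above can be illustrated with a short sketch. Assuming normal stress and strength, the classical first-order relation links the central safety factor and the coefficients of variation to the safety index and hence to reliability; this is a textbook formulation in the spirit of the abstract, not necessarily the exact expression used in the report, and the numbers are illustrative.

```python
# Sketch of the safety-factor / safety-index relationship for normal stress and strength.
from scipy.stats import norm

def safety_index(sf, v_strength, v_stress):
    """First-order safety index beta for normal strength R and stress S, expressed
    through the central safety factor sf = mu_R / mu_S and the coefficients of
    variation V_R and V_S: beta = (sf - 1) / sqrt((sf * V_R)^2 + V_S^2)."""
    return (sf - 1.0) / ((sf * v_strength) ** 2 + v_stress ** 2) ** 0.5

sf = 1.4                          # illustrative safety factor
beta = safety_index(sf, v_strength=0.08, v_stress=0.10)
print(f"beta = {beta:.2f}, first-order reliability = {norm.cdf(beta):.5f}")
```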
1975-04-01
to 2-16; Category 3: Fabrication Methods and Techniques, 3-01 to 3-21; Category 4: Reliability Studies, 4-01 to 4-15; Category 5: Computerized Analysis... RAC Microcircuit Thesaurus. The terms are arranged in alphabetical order with a sub-term description following each main term. Cross-referencing is... Reliability aspects of microcircuit manufacturing. 4. Reliability Studies: technical reports relating to formal reliability studies and investigations
Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis
NASA Astrophysics Data System (ADS)
Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang
2017-07-01
In traditional reliability evaluation of machine center components, the component reliability model exhibits deviation and the evaluation result is low because failure propagation is overlooked. To rectify these problems, a new reliability evaluation method based on cascading failure analysis and failure-influenced-degree assessment is proposed. A directed graph model of cascading failure among components is established from cascading failure mechanism analysis and graph theory. The failure-influenced degrees of the system components are assessed using the adjacency matrix and its transpose, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center, and the results show the following: 1) the reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; 2) the difference between the comprehensive and inherent reliability of a system component is positively correlated with its failure-influenced degree, which provides a theoretical basis for reliability allocation in machine center systems.
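A minimal sketch of the ranking idea described above: a directed cascading-failure graph is encoded as an adjacency matrix and components are ranked by a PageRank-style score. The four-component graph below is made up for illustration, not data from the paper, and the inherent/comprehensive failure probability functions are not reproduced.

```python
# Rank components by failure-influenced degree with PageRank on a cascading-failure digraph.
import numpy as np

def pagerank(A, d=0.85, tol=1e-10):
    """Power-iteration PageRank on adjacency matrix A, where A[i, j] = 1 means a
    failure of component i can propagate to component j."""
    n = A.shape[0]
    out = A.sum(axis=1, keepdims=True)
    P = np.where(out > 0, A / np.where(out == 0, 1, out), 1.0 / n)  # row-stochastic
    r = np.full(n, 1.0 / n)
    while True:
        r_new = (1 - d) / n + d * (P.T @ r)
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

A = np.array([[0, 1, 1, 0],      # illustrative propagation links between four components
              [0, 0, 1, 1],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)
print(pagerank(A))               # higher score = larger failure-influenced degree
```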
NASA Technical Reports Server (NTRS)
Daigle, Matthew J.; Sankararaman, Shankar
2013-01-01
Prognostics is centered on predicting the time of and time until adverse events in components, subsystems, and systems. It typically involves both a state estimation phase, in which the current health state of a system is identified, and a prediction phase, in which the state is projected forward in time. Since prognostics is mainly a prediction problem, prognostic approaches cannot avoid uncertainty, which arises due to several sources. Prognostics algorithms must both characterize this uncertainty and incorporate it into the predictions so that informed decisions can be made about the system. In this paper, we describe three methods to solve these problems, including Monte Carlo-, unscented transform-, and first-order reliability-based methods. Using a planetary rover as a case study, we demonstrate and compare the different methods in simulation for battery end-of-discharge prediction.
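A minimal sketch contrasting two of the prediction methods mentioned above, Monte Carlo sampling and the unscented transform, on a toy end-of-discharge model; the linear discharge function and the state statistics are illustrative assumptions, not the rover battery model used in the paper.

```python
# Compare Monte Carlo and unscented-transform prediction of end-of-discharge (EOD) time.
import numpy as np

def eod_time(x):
    q, i_load = x                      # remaining charge [Ah], load current [A]
    return q / i_load                  # hours until discharge (toy model)

mu = np.array([2.0, 0.8])              # mean state estimate (assumed)
P = np.diag([0.05 ** 2, 0.08 ** 2])    # state covariance (assumed)

# Monte Carlo prediction
rng = np.random.default_rng(1)
samples = rng.multivariate_normal(mu, P, 50_000)
mc = np.array([eod_time(s) for s in samples])

# Unscented transform with symmetric sigma points (kappa = 0, so the mean point has zero weight)
n = len(mu)
L = np.linalg.cholesky(n * P)
sigma = np.vstack([mu, mu + L.T, mu - L.T])        # 2n + 1 sigma points
w = np.full(2 * n + 1, 1.0 / (2 * n))
w[0] = 0.0
y = np.array([eod_time(s) for s in sigma])
ut_mean = w @ y
ut_var = w @ (y - ut_mean) ** 2

print(f"MC : mean {mc.mean():.3f} h, std {mc.std():.3f} h")
print(f"UT : mean {ut_mean:.3f} h, std {ut_var ** 0.5:.3f} h")
```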
Psychometric evaluation of commonly used game-specific skills tests in rugby: A systematic review
Oorschot, Sander; Chiwaridzo, Matthew; CM Smits-Engelsman, Bouwien
2017-01-01
Objectives To (1) give an overview of commonly used game-specific skills tests in rugby and (2) evaluate the available psychometric information on these tests. Methods The databases PubMed, MEDLINE, CINAHL and Africa Wide Information were systematically searched for articles published between January 1995 and March 2017. First, commonly used game-specific skills tests were identified. Second, the available psychometrics of these tests were evaluated and the methodological quality of the studies assessed using the Consensus-based Standards for the selection of health Measurement Instruments checklist. Studies included in the first step had to report detailed information on the construct and testing procedure of at least one game-specific skill, and studies included in the second step additionally had to report at least one psychometric property evaluating reliability, validity or responsiveness. Results 287 articles were identified in the first step, of which 30 met the inclusion criteria; 64 articles were identified in the second step, of which 10 were included. Reactive agility, tackling and simulated rugby games were the most commonly used tests. All 10 studies reporting psychometrics reported reliability outcomes, revealing mainly strong evidence; however, all studies scored poor or fair on methodological quality. Four studies reported validity outcomes, for which mainly moderate evidence was indicated, but all articles had fair methodological quality. Conclusion Game-specific skills tests indicated mainly high reliability and validity evidence, but the studies lacked methodological quality. Reactive agility seems to be a promising domain, but the specific tests need further development. Future high methodological quality studies are required in order to develop valid and reliable test batteries for rugby talent identification. Trial registration number PROSPERO CRD42015029747. PMID:29259812
Approximation of reliability of direct genomic breeding values
USDA-ARS?s Scientific Manuscript database
Two methods to efficiently approximate theoretical genomic reliabilities are presented. The first method is based on the direct inverse of the left hand side (LHS) of mixed model equations. It uses the genomic relationship matrix for a small subset of individuals with the highest genomic relationshi...
Developing Confidence Limits For Reliability Of Software
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J.
1991-01-01
Technique developed for estimating reliability of software by use of Moranda geometric de-eutrophication model. Pivotal method enables straightforward construction of exact bounds with associated degree of statistical confidence about reliability of software. Confidence limits thus derived provide precise means of assessing quality of software. Limits take into account number of bugs found while testing and effects of sampling variation associated with random order of discovering bugs.
Syafiuddin, Achmad; Salmiati, Salmiati; Jonbi, Jonbi; Fulazzaky, Mohamad Ali
2018-07-15
This is the first investigation of the reliability and validity of thirty kinetic and isotherm models for describing the adsorption behaviour of silver nanoparticles (AgNPs) onto different adsorbents. The purpose of this study is therefore to assess the most reliable models for describing the adsorption of AgNPs onto a given adsorbent. Fifteen kinetic models and fifteen isotherm models were used to test secondary AgNPs adsorption data collected from various data sources. Arithmetic-mean rankings were estimated from six statistical analysis methods using the MATLAB Optimization Toolbox with a least-squares curve-fitting function. The fractal-like mixed 1,2-order model for describing the adsorption kinetics, and the Fritz-Schlunder and Baudu models for describing the adsorption isotherms, can be recommended as the most reliable models for AgNPs adsorption onto natural and synthetic adsorbent materials. Applying the thirty models to AgNPs adsorption clarifies the usefulness of both groups of kinetic and isotherm equations, ranked by level of accuracy, and contributes to the understandability and usability of the proper models beyond the existing literature. Copyright © 2018 Elsevier Ltd. All rights reserved.
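A minimal sketch of the model-screening workflow described above, fitting one simple kinetic model (the Lagergren pseudo-first-order equation) by least squares; the paper's thirty models and its six ranking statistics are not reproduced, and the data points are invented for illustration.

```python
# Least-squares fit of a pseudo-first-order adsorption kinetic model to illustrative data.
import numpy as np
from scipy.optimize import curve_fit

def pseudo_first_order(t, qe, k1):
    """q(t) = qe * (1 - exp(-k1*t)): adsorbed amount versus contact time."""
    return qe * (1.0 - np.exp(-k1 * t))

t = np.array([0, 5, 10, 20, 40, 60, 120], dtype=float)        # min (illustrative)
q = np.array([0.0, 1.1, 1.9, 2.8, 3.5, 3.8, 4.0])             # mg/g (illustrative)

popt, pcov = curve_fit(pseudo_first_order, t, q, p0=[4.0, 0.05])
resid = q - pseudo_first_order(t, *popt)
r2 = 1.0 - (resid @ resid) / ((q - q.mean()) @ (q - q.mean()))
print(f"qe = {popt[0]:.2f} mg/g, k1 = {popt[1]:.3f} 1/min, R^2 = {r2:.4f}")
```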
NASA Astrophysics Data System (ADS)
Vainshtein, Igor; Baruch, Shlomi; Regev, Itai; Segal, Victor; Filis, Avishai; Riabzev, Sergey
2018-05-01
The growing demand for EO applications that work around the clock, 24 hours a day and 7 days a week, such as border surveillance systems, emphasizes the need for a highly reliable cryocooler with increased operational availability and an optimized system Integrated Logistic Support (ILS). To meet this need, RICOR developed linear and rotary cryocoolers which successfully achieved this goal. Cryocooler MTTF was analyzed by theoretical reliability evaluation methods, demonstrated by normal and accelerated life tests at the cryocooler level, and finally verified by analysis of field data from cryocoolers operating at the system level. The paper reviews theoretical reliability analysis methods and analyzes reliability test results derived from standard and accelerated life demonstration tests performed at RICOR's advanced reliability laboratory. To summarize the work process, reliability verification data are presented as feedback from fielded systems.
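For readers unfamiliar with how an accelerated life test can be extrapolated to use conditions, the sketch below applies a standard Arrhenius acceleration factor to a demonstrated MTTF. This is a generic textbook calculation, not necessarily the analysis performed at RICOR, and the activation energy, temperatures and test MTTF are all illustrative assumptions.

```python
# Extrapolate an accelerated-life MTTF to use conditions with an Arrhenius acceleration factor.
import math

def arrhenius_af(ea_ev, t_use_c, t_stress_c):
    """Acceleration factor between stress and use temperatures (degrees Celsius)."""
    k = 8.617e-5                                   # Boltzmann constant, eV/K
    t_use, t_stress = t_use_c + 273.15, t_stress_c + 273.15
    return math.exp(ea_ev / k * (1.0 / t_use - 1.0 / t_stress))

mttf_test_h = 8_000.0                              # demonstrated MTTF at stress (assumed)
af = arrhenius_af(ea_ev=0.5, t_use_c=23.0, t_stress_c=71.0)
print(f"AF = {af:.1f}, projected MTTF at use conditions = {mttf_test_h * af:,.0f} h")
```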
An Investment Level Decision Method to Secure Long-term Reliability
NASA Astrophysics Data System (ADS)
Bamba, Satoshi; Yabe, Kuniaki; Seki, Tomomichi; Shibaya, Tetsuji
The slowdown in power demand growth and in facility replacement is causing aging and lower reliability in power facilities, and this aging will be followed by a rapid increase in repair and replacement when many facilities reach the end of their lifetimes. This paper describes a method to estimate future repair and replacement costs by applying a life-cycle cost model and renewal theory to historical data. It also describes a method to decide the optimum investment plan, which replaces facilities in order of cost-effectiveness by setting a replacement priority formula, and the minimum investment level needed to maintain reliability. Estimation examples applied to substation facilities show that a reasonable, leveled future cash-out can maintain reliability by lowering the percentage of replacements caused by fatal failures.
Varga, Zsuzsanna; Cassoly, Estelle; Li, Qiyu; Oehlschlegel, Christian; Tapia, Coya; Lehr, Hans Anton; Klingbiel, Dirk; Thürlimann, Beat; Ruhstaller, Thomas
2015-01-01
Background Proliferative activity (Ki-67 Labelling Index) in breast cancer increasingly serves as an additional tool in the decision for or against adjuvant chemotherapy in midrange hormone receptor positive breast cancer. The Ki-67 Index has previously been shown to suffer from high inter-observer variability, especially in midrange (G2) breast carcinomas. In this study we conducted a systematic approach using different Ki-67 assessments on large tissue sections in order to identify the method with the highest reliability and the lowest variability. Materials and Methods Five breast pathologists retrospectively analyzed the proliferative activity of 50 G2 invasive breast carcinomas on large tissue sections by assessing Ki-67 immunohistochemistry. Ki-67 assessments were done on light microscopy and on digital images following these methods: 1) assessing five regions, 2) assessing only darkly stained nuclei, and 3) considering only condensed proliferative areas ('hotspots'). An individual review (the first described assessment from 2008) was also performed. The assessments on light microscopy were done by estimation. All measurements were performed three times. Inter-observer and intra-observer reliabilities were calculated using the approach proposed by Eliasziw et al. Clinical cutoffs (14% and 20%) were tested using Fleiss' kappa. Results There was good intra-observer reliability in 5 of 7 methods (ICC: 0.76–0.89). The two highest inter-observer reliabilities were fair to moderate (ICC: 0.71 and 0.74), obtained with two methods (region analysis and individual review) on light microscopy. Fleiss' kappa values at the 14% cut-off were highest (moderate) using the original recommendation on the light microscope (kappa 0.58). Fleiss' kappa values at the 20% cut-off were highest (kappa 0.48 each) for hotspot analysis on light microscopy and digital analysis. No methodology using digital analysis was superior to the methods on the light microscope. Conclusion Our results show that all methods on light microscopy for Ki-67 assessment in large tissue sections resulted in good intra-observer reliability. Region analysis and individual review (the original recommendation) on light microscopy yielded the highest inter-observer reliability. These results show a slight improvement over previously published data on poor reproducibility and thus might be a practical, pragmatic way for routine assessment of the Ki-67 Index in G2 breast carcinomas. PMID:25885288
Seo, Jeong-Ho; Boedijono, Dimas
2016-01-01
Purpose The aim of this study was to investigate new point-connecting measurements for the hallux valgus angle (HVA) and the first intermetatarsal angle (IMA), which can reflect the degree of subluxation of the first metatarsophalangeal joint (MTPJ). Also, this study attempted to compare the validity of midline measurements and the new point-connecting measurements for the determination of HVA and IMA values. Materials and Methods Sixty feet of hallux valgus patients who underwent surgery between 2007 and 2011 were classified in terms of the severity of HVA, congruency of the first MTPJ, and type of chevron metatarsal osteotomy. On weight-bearing dorsal-plantar radiographs, HVA and IMA values were measured and compared preoperatively and postoperatively using both the conventional and new methods. Results Compared with midline measurements, point-connecting measurements showed higher inter- and intra-observer reliability for preoperative HVA/IMA and similar or higher inter- and intra-observer reliability for postoperative HVA/IMA. Patients who underwent distal chevron metatarsal osteotomy (DCMO) had higher intraclass correlation coefficient for inter- and intra-observer reliability for pre- and post-operative HVA and IMA measured by the point-connecting method compared with the midline method. All differences in the preoperative HVAs and IMAs determined by both the midline method and point-connecting methods were significant between the deviated group and subluxated groups (p=0.001). Conclusion The point-connecting method for measuring HVA and IMA in the subluxated first MTPJ may better reflect the severity of a HV deformity with higher reliability than the midline method, and is more useful in patients with DCMO than in patients with proximal chevron metatarsal osteotomy. PMID:26996576
Park, J H; Chang, B U; Kim, Y J; Seo, J S; Choi, S W; Yun, J Y
2008-12-01
A new method has been developed for analyzing (137)Cs in a small volume of seawater. Ammonium 12-molybdophosphate (AMP) was used twice during the pretreatment procedure: the first step was to adsorb (137)Cs from the seawater samples onto AMP in order to reduce the sample volume, and the second was to remove (87)Rb, an interfering nuclide for beta counting. The AMP adsorbing (137)Cs was dissolved in sodium hydroxide solution, and (137)Cs was then precipitated as cesium chloroplatinate by adding 10% hexachloroplatinic acid. The beta rays emitted from (137)Cs were measured with a low-background gas-proportional alpha/beta counter. This method was applied to several seawater samples taken in the East Sea of Korea. Compared to the routinely used gamma-spectrometry method, the new AMP method was reliable and suitable for analyzing (137)Cs in deep seawater.
NASA Technical Reports Server (NTRS)
Kapania, Rakesh K.; Liu, Youhua
2000-01-01
At the preliminary design stage of a wing structure, an efficient simulation, one needing little computation but yielding adequately accurate results for various response quantities, is essential in the search for an optimal design in a vast design space. In the present paper, methods using sensitivities up to second order, and the direct application of neural networks, are explored. The example problem is to determine the natural frequencies of a wing given the shape variables of the structure. It is shown that when sensitivities cannot be obtained analytically, the finite difference approach is usually more reliable than a semi-analytical approach, provided an appropriate step size is used. Using second-order sensitivities is shown to yield much better results than using only first-order sensitivities. When neural networks are trained to relate the wing natural frequencies to the shape variables, a negligible computational effort is needed to accurately determine the natural frequencies of a new design.
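A minimal sketch of the sensitivity-based surrogate idea described above: a response is approximated from first- and second-order finite-difference sensitivities about a baseline design. A simple quadratic function stands in for the wing finite element analysis, which is not reproduced here, and the step size and design values are illustrative.

```python
# First- and second-order sensitivity surrogate of a response about a baseline design.
import numpy as np

def response(x):                     # stand-in for an expensive FE frequency analysis
    return 10.0 + 2.0 * x[0] - 1.5 * x[1] + 0.8 * x[0] * x[1] + 0.5 * x[1] ** 2

x0 = np.array([1.0, 1.0])            # baseline shape variables
h = 1e-4                             # finite-difference step (its choice matters, cf. abstract)
n = len(x0)
f0 = response(x0)

# First-order sensitivities (central differences) and second-order sensitivities (Hessian)
grad = np.zeros(n)
hess = np.zeros((n, n))
for i in range(n):
    e_i = np.eye(n)[i] * h
    grad[i] = (response(x0 + e_i) - response(x0 - e_i)) / (2 * h)
    for j in range(n):
        e_j = np.eye(n)[j] * h
        hess[i, j] = (response(x0 + e_i + e_j) - response(x0 + e_i - e_j)
                      - response(x0 - e_i + e_j) + response(x0 - e_i - e_j)) / (4 * h * h)

dx = np.array([0.2, -0.1])           # a new design near the baseline
f1 = f0 + grad @ dx                          # first-order prediction
f2 = f1 + 0.5 * dx @ hess @ dx               # second-order prediction
print(f"exact {response(x0 + dx):.4f}, 1st-order {f1:.4f}, 2nd-order {f2:.4f}")
```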
Mission Reliability Estimation for Repairable Robot Teams
NASA Technical Reports Server (NTRS)
Trebi-Ollennu, Ashitey; Dolan, John; Stancliff, Stephen
2010-01-01
A mission reliability estimation method has been designed to translate mission requirements into choices of robot modules in order to configure a multi-robot team to have high reliability at minimal cost. In order to build cost-effective robot teams for long-term missions, one must be able to compare alternative design paradigms in a principled way by comparing the reliability of different robot models and robot team configurations. Core modules have been created including: a probabilistic module with reliability-cost characteristics, a method for combining the characteristics of multiple modules to determine an overall reliability-cost characteristic, and a method for the generation of legitimate module combinations based on mission specifications and the selection of the best of the resulting combinations from a cost-reliability standpoint. The developed methodology can be used to predict the probability of a mission being completed, given information about the components used to build the robots, as well as information about the mission tasks. In the research for this innovation, sample robot missions were examined and compared to the performance of robot teams with different numbers of robots and different numbers of spare components. Data that a mission designer would need was factored in, such as whether it would be better to have a spare robot versus an equivalent number of spare parts, or if mission cost can be reduced while maintaining reliability using spares. This analytical model was applied to an example robot mission, examining the cost-reliability tradeoffs among different team configurations. Particularly scrutinized were teams using either redundancy (spare robots) or repairability (spare components). Using conservative estimates of the cost-reliability relationship, results show that it is possible to significantly reduce the cost of a robotic mission by using cheaper, lower-reliability components and providing spares. This suggests that the current design paradigm of building a minimal number of highly robust robots may not be the best way to design robots for extended missions.
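A minimal sketch of the redundancy-versus-repairability comparison discussed above: a robot is modelled as a series chain of modules, and the mission reliability of a single robot is compared with that of a whole spare robot and of per-module spares. The module reliabilities are illustrative assumptions, not values from the study.

```python
# Compare mission reliability: single robot, spare robot (redundancy), spare modules (repairability).
import numpy as np

modules = np.array([0.95, 0.90, 0.97, 0.92])      # per-module mission reliability (assumed)

r_robot = modules.prod()                          # one robot, series system
r_spare_robot = 1 - (1 - r_robot) ** 2            # two robots, mission needs one survivor
r_spare_parts = np.prod(1 - (1 - modules) ** 2)   # one robot, each module has a swappable spare

print(f"single robot          : {r_robot:.4f}")
print(f"robot + spare robot   : {r_spare_robot:.4f}")
print(f"robot + spare modules : {r_spare_parts:.4f}")
```

Under these assumed numbers the per-module spares outperform the spare whole robot, which points in the same direction as the abstract's observation that providing spares for cheaper components can maintain reliability at lower cost.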
Genetic analysis of longevity in Dutch dairy cattle using random regression.
van Pelt, M L; Meuwissen, T H E; de Jong, G; Veerkamp, R F
2015-06-01
Longevity, productive life, or lifespan of dairy cattle is an important trait for dairy farmers, and it is defined as the time from first calving to the last test date for milk production. Methods for genetic evaluations need to account for censored data; that is, records from cows that are still alive. The aim of this study was to investigate whether these methods also need to take account of survival being genetically a different trait across the entire lifespan of a cow. The data set comprised 112,000 cows with a total of 3,964,449 observations for survival per month from first calving until 72 mo in productive life. A random regression model with second-order Legendre polynomials was fitted for the additive genetic effect. Alternative parameterizations were (1) different trait definitions for the length of time interval for survival after first calving (1, 3, 6, and 12 mo); (2) linear or threshold model; and (3) differing the order of the Legendre polynomial. The partial derivatives of a profit function were used to transform variance components on the survival scale to those for lifespan. Survival rates were higher in early life than later in life (99 vs. 95%). When survival was defined over 12-mo intervals survival curves were smooth compared with curves when 1-, 3-, or 6-mo intervals were used. Heritabilities in each interval were very low and ranged from 0.002 to 0.031, but the heritability for lifespan over the entire period of 72 mo after first calving ranged from 0.115 to 0.149. Genetic correlations between time intervals ranged from 0.25 to 1.00. Genetic parameters and breeding values for the genetic effect were more sensitive to the trait definition than to whether a linear or threshold model was used or to the order of Legendre polynomial used. Cumulative survival up to the first 6 mo predicted lifespan with an accuracy of only 0.79 to 0.85; that is, reliability of breeding value with many daughters in the first 6 mo can be, at most, 0.62 to 0.72, and changes of breeding values are still expected when daughters are getting older. Therefore, an improved model for genetic evaluation should treat survival as different traits during the lifespan by splitting lifespan in time intervals of 6 mo or less to avoid overestimated reliabilities and changes in breeding values when daughters are getting older. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
1990-01-01
This hardware catalog covers that hardware proposed under the Biomedical Monitoring and Countermeasures Development Program supported by the Johnson Space Center. The hardware items are listed separately by item, and are in alphabetical order. Each hardware item specification consists of four pages. The first page describes background information with an illustration, definition and a history/design status. The second page identifies the general specifications, performance, rack interface requirements, problems, issues, concerns, physical description, and functional description. The level of hardware design reliability is also identified under the maintainability and reliability category. The third page specifies the mechanical design guidelines and assumptions. Described are the material types and weights, modules, and construction methods. Also described is an estimation of percentage of construction which utilizes a particular method, and the percentage of required new mechanical design is documented. The fourth page analyzes the electronics, the scope of design effort, and the software requirements. Electronics are described by percentages of component types and new design. The design effort, as well as, the software requirements are identified and categorized.
Psychometric properties of the Spanish version of the Adolescent Stress Questionnaire (ASQ-S).
Lima, Juan F; Alarcón, Rafael; Escobar, Milagros; Fernández-Baena, F Javier; Muñoz, Ángela M; Blanca, María J
2017-10-01
The aim of this study was to develop a Spanish version of the Adolescent Stress Questionnaire and to examine its psychometric properties: factor structure, measurement invariance across samples, reliability, and concurrent validity. Participants consisted of 1,560 Spanish students between 12 and 18 years of age. The results support a structure based on 10 first-order factors (corresponding to stressors on the dimensions Home Life, School Performance, School Attendance, Romantic Relationships, Peer Pressure, Teacher Interaction, Future Uncertainty, School/Leisure Conflict, Financial Pressure, and Emerging Adult Responsibility) and 1 second-order factor that subsumes the first-order factors. This model was selected for measurement invariance testing because it showed good fit indexes and was more parsimonious than the first-order factor model. This structure was replicated across 2 independent samples from the same population, as well as across 3 age groups (early, middle, and late adolescence), showing acceptable fit for all groups. Internal consistency and test-retest reliability were adequate. Evidence of concurrent validity was provided by positive associations with measures of stress manifestations, anxiety, and depression, and by a negative association with life satisfaction. The results indicate that the Spanish version of the Adolescent Stress Questionnaire is a suitable tool for assessing stressors in Spanish adolescents. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Liu, Haixing; Savić, Dragan; Kapelan, Zoran; Zhao, Ming; Yuan, Yixing; Zhao, Hongbin
2014-07-01
Flow entropy is a measure of the uniformity of pipe flows in water distribution systems; by maximizing flow entropy one can identify reliable layouts or connectivity in networks. To overcome the disadvantage of the common definition of flow entropy, which does not consider the impact of pipe diameter on reliability, an extended definition, termed diameter-sensitive flow entropy, is proposed. This new methodology is then assessed using other reliability methods, including Monte Carlo simulation, a pipe failure probability model, and a surrogate measure (the resilience index) integrated with water demand and pipe failure uncertainty. The reliability assessment is based on a sample of WDS designs derived from an optimization process for each of two benchmark networks. Correlation analysis is used to evaluate quantitatively the relationship between entropy and reliability, and a comparative analysis between the simple flow entropy and the new method is conducted. The results demonstrate that the diameter-sensitive flow entropy shows consistently much stronger correlation with the three reliability measures than simple flow entropy. Therefore, the new flow entropy can be taken as a better surrogate measure for reliability and could potentially be integrated into the optimal design problem of WDSs. Sensitivity analysis results show that the velocity parameters used in the new flow entropy have no significant impact on the relationship between diameter-sensitive flow entropy and reliability.
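A minimal sketch of the simple flow entropy that the abstract extends; the diameter-sensitive variant proposed in the paper is not reproduced here, and the pipe flows are illustrative values rather than one of the benchmark networks.

```python
# Shannon-type flow entropy of pipe flow rates: higher entropy = more uniform flow distribution.
import numpy as np

def flow_entropy(q):
    """S = -sum p_i ln p_i with p_i = q_i / Q, the share of total flow in pipe i."""
    q = np.asarray(q, dtype=float)
    p = q / q.sum()
    p = p[p > 0]                      # ignore zero-flow pipes
    return float(-(p * np.log(p)).sum())

uniform = [10, 10, 10, 10]            # evenly distributed flows (L/s, illustrative)
skewed = [37, 1, 1, 1]                # one dominant path
print(flow_entropy(uniform), flow_entropy(skewed))
```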
Reliability-based trajectory optimization using nonintrusive polynomial chaos for Mars entry mission
NASA Astrophysics Data System (ADS)
Huang, Yuechen; Li, Haiyang
2018-06-01
This paper presents a reliability-based sequential optimization (RBSO) method to solve the trajectory optimization problem with parametric uncertainties in the entry dynamics of a Mars entry mission. First, the deterministic entry trajectory optimization model is reviewed, and then the reliability-based optimization model is formulated. In addition, a modified sequential optimization method, in which the nonintrusive polynomial chaos expansion (PCE) method and the most probable point (MPP) searching method are employed, is proposed to solve the reliability-based optimization problem efficiently. The nonintrusive PCE method contributes to the transformation between the stochastic optimization (SO) and the deterministic optimization (DO) and to the efficient approximation of the trajectory solution. The MPP method, which assesses the reliability of constraint satisfaction only up to the necessary level, is employed to further improve the computational efficiency. The cycle of SO, reliability assessment and constraint update is repeated in the RBSO until the reliability requirements on constraint satisfaction are met. Finally, the RBSO is compared with traditional DO and with traditional sequential optimization based on Monte Carlo (MC) simulation in a specific Mars entry mission to demonstrate the effectiveness and efficiency of the proposed method.
Pneumothorax size measurements on digital chest radiographs: Intra- and inter- rater reliability.
Thelle, Andreas; Gjerdevik, Miriam; Grydeland, Thomas; Skorge, Trude D; Wentzel-Larsen, Tore; Bakke, Per S
2015-10-01
Detailed and reliable methods may be important for discussions on the importance of pneumothorax size in clinical decision-making. Rhea's method is widely used to estimate pneumothorax size in percent from three measurement points on chest X-rays (CXRs), and Choi's addendum is used for anteroposterior projections. The aim of this study was to examine the intrarater and interrater reliability of the Rhea and Choi method using digital CXRs on ward-based PACS monitors. Three physicians examined a retrospective series of 80 digital CXRs showing pneumothorax using Rhea and Choi's method, and repeated the assessment in random order two weeks later. We used the analysis of variance technique of Eliasziw et al. to assess the intrarater and interrater reliability in altogether 480 estimations of pneumothorax size. Estimated pneumothorax sizes ranged between 5% and 100%. The intrarater reliability coefficient was 0.98 (95% one-sided lower confidence limit 0.96), and the interrater reliability coefficient was 0.95 (95% one-sided lower confidence limit 0.93). This study has shown that the Rhea and Choi method for calculating pneumothorax size has high intrarater and interrater reliability. These results are valid across gender, side of pneumothorax, and whether the patient is diagnosed with primary or secondary pneumothorax. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Control optimization, stabilization and computer algorithms for aircraft applications
NASA Technical Reports Server (NTRS)
1975-01-01
Research related to reliable aircraft design is summarized. Topics discussed include systems reliability optimization, failure detection algorithms, analysis of nonlinear filters, design of compensators incorporating time delays, digital compensator design, estimation for systems with echoes, low-order compensator design, descent-phase controller for 4-D navigation, infinite dimensional mathematical programming problems and optimal control problems with constraints, robust compensator design, numerical methods for the Lyapunov equations, and perturbation methods in linear filtering and control.
Krasnoshchekov, Sergey V; Isayeva, Elena V; Stepanov, Nikolay F
2012-04-12
Anharmonic vibrational states of semirigid polyatomic molecules are often studied using the second-order vibrational perturbation theory (VPT2). For efficient higher-order analysis, an approach based on the canonical Van Vleck perturbation theory (CVPT), the Watson Hamiltonian and operators of creation and annihilation of vibrational quanta is employed. This method allows analysis of the convergence of perturbation theory and solves a number of theoretical problems of VPT2, e.g., yields anharmonic constants y(ijk), z(ijkl), and allows the reliable evaluation of vibrational IR and Raman anharmonic intensities in the presence of resonances. Darling-Dennison and higher-order resonance coupling coefficients can be reliably evaluated as well. The method is illustrated on classic molecules: water and formaldehyde. A number of theoretical conclusions results, including the necessity of using sextic force field in the fourth order (CVPT4) and the nearly vanishing CVPT4 contributions for bending and wagging modes. The coefficients of perturbative Dunham-type Hamiltonians in high-orders of CVPT are found to conform to the rules of equality at different orders as earlier proven analytically for diatomic molecules. The method can serve as a good substitution of the more traditional VPT2.
NASA Astrophysics Data System (ADS)
Pasam, Gopi Krishna; Manohar, T. Gowri
2016-09-01
Determination of available transfer capability (ATC) requires experience, intuition and exact judgment in order to meet several significant requirements of the deregulated environment. Based on these points, this paper proposes two heuristic approaches to compute ATC. The first heuristic algorithm integrates five methods, namely continuation repeated power flow, repeated optimal power flow, radial basis function neural network, back propagation neural network and adaptive neuro-fuzzy inference system, to obtain ATC. The second heuristic model is used to obtain multiple ATC values; from these, a specific ATC value is selected based on a number of social, economic and deregulated-environment constraints and related to specific applications such as optimization, on-line monitoring and ATC forecasting, referred to as multi-objective decision based optimal ATC. The validity of the results obtained through the proposed methods is scrupulously verified on various buses of the IEEE 24-bus reliability test system. The results and conclusions presented in this paper are useful for the planning, operation and maintenance of reliable power in any power system and for its on-line monitoring in a deregulated environment. In this way, the proposed heuristic methods contribute an effective approach to assessing multi-objective ATC using integrated methods.
Data driven CAN node reliability assessment for manufacturing system
NASA Astrophysics Data System (ADS)
Zhang, Leiming; Yuan, Yong; Lei, Yong
2017-01-01
The reliability of the Controller Area Network (CAN) is critical to the performance and safety of the system. However, direct bus-off time assessment tools are lacking in practice due to the inaccessibility of node information and the complexity of node interactions upon errors. In order to measure the mean time to bus-off (MTTB) of all the nodes, a novel data-driven node bus-off time assessment method for CAN networks is proposed that directly uses network error information. First, the corresponding network error event sequence for each node is constructed using multiple-layer network error information. Then, a generalized zero-inflated Poisson process (GZIP) model is established for each node based on the error event sequence. Finally, the stochastic model is constructed to predict the MTTB of the node. Accelerated case studies with different error injection rates are conducted on a laboratory network to demonstrate the proposed method, where the network errors are generated by a computer-controlled error injection system. Experimental results show that the MTTB of nodes predicted by the proposed method agrees well with observations in the case studies. The proposed data-driven node time-to-bus-off assessment method for CAN networks can successfully predict the MTTB of nodes by directly using network error event data.
Stabilization Approaches for Linear and Nonlinear Reduced Order Models
NASA Astrophysics Data System (ADS)
Rezaian, Elnaz; Wei, Mingjun
2017-11-01
It has been a major concern to establish reduced order models (ROMs) as reliable representatives of the dynamics inherent in high fidelity simulations, while fast computation is achieved. In practice it comes to stability and accuracy of ROMs. Given the inviscid nature of Euler equations it becomes more challenging to achieve stability, especially where moving discontinuities exist. Originally unstable linear and nonlinear ROMs are stabilized here by two approaches. First, a hybrid method is developed by integrating two different stabilization algorithms. At the same time, symmetry inner product is introduced in the generation of ROMs for its known robust behavior for compressible flows. Results have shown a notable improvement in computational efficiency and robustness compared to similar approaches. Second, a new stabilization algorithm is developed specifically for nonlinear ROMs. This method adopts Particle Swarm Optimization to enforce a bounded ROM response for minimum discrepancy between the high fidelity simulation and the ROM outputs. Promising results are obtained in its application on the nonlinear ROM of an inviscid fluid flow with discontinuities. Supported by ARL.
Lakshmanan, Shanmugam; Prakash, Mani; Lim, Chee Peng; Rakkiyappan, Rajan; Balasubramaniam, Pagavathigounder; Nahavandi, Saeid
2018-01-01
In this paper, synchronization of an inertial neural network with time-varying delays is investigated. Based on the variable transformation method, we transform the second-order differential equations into the first-order differential equations. Then, using suitable Lyapunov-Krasovskii functionals and Jensen's inequality, the synchronization criteria are established in terms of linear matrix inequalities. Moreover, a feedback controller is designed to attain synchronization between the master and slave models, and to ensure that the error model is globally asymptotically stable. Numerical examples and simulations are presented to indicate the effectiveness of the proposed method. Besides that, an image encryption algorithm is proposed based on the piecewise linear chaotic map and the chaotic inertial neural network. The chaotic signals obtained from the inertial neural network are utilized for the encryption process. Statistical analyses are provided to evaluate the effectiveness of the proposed encryption algorithm. The results ascertain that the proposed encryption algorithm is efficient and reliable for secure communication applications.
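A minimal sketch of the variable-transformation step mentioned above, rewriting a second-order ODE as a first-order system so that standard first-order tools apply; a damped linear oscillator stands in for the delayed inertial neural network model, which is not reproduced here, and the coefficients are illustrative.

```python
# Rewrite a second-order ODE x'' + c x' + k x = 0 as a first-order system z = (x, v), v = x'.
import numpy as np
from scipy.integrate import solve_ivp

def first_order_system(t, z, c=0.5, k=4.0):
    x, v = z
    return [v, -c * v - k * x]

sol = solve_ivp(first_order_system, (0.0, 10.0), [1.0, 0.0], max_step=0.01)
print(sol.y[0, -1])                  # position at t = 10
```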
Classification of speech dysfluencies using LPC based parameterization techniques.
Hariharan, M; Chee, Lim Sin; Ai, Ooi Chia; Yaacob, Sazali
2012-06-01
The goal of this paper is to discuss and compare three feature extraction methods, Linear Predictive Coefficients (LPC), Linear Prediction Cepstral Coefficients (LPCC) and Weighted Linear Prediction Cepstral Coefficients (WLPCC), for recognizing stuttered events. Speech samples from the University College London Archive of Stuttered Speech (UCLASS) were used for the analysis. The stuttered events were identified through manual segmentation and were used for feature extraction. Two simple classifiers, k-nearest neighbour (kNN) and Linear Discriminant Analysis (LDA), were employed for speech dysfluency classification. A conventional validation method was used for testing the reliability of the classifier results. The effects of different frame lengths, percentages of overlap, values of the coefficient a in a first-order pre-emphasizer, and different orders p were discussed. The speech dysfluency classification accuracy was found to be improved by applying statistical normalization before feature extraction. The experimental investigation showed that LPC, LPCC and WLPCC features can be used for identifying stuttered events, with WLPCC features slightly outperforming LPCC and LPC features.
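A minimal sketch of the front end discussed above: a first-order pre-emphasis filter followed by LPC coefficients computed with the autocorrelation (Levinson-Durbin) method. The synthetic frame and the coefficient a = 0.97 are illustrative, and the cepstral (LPCC/WLPCC) and classification stages are not reproduced.

```python
# First-order pre-emphasis followed by autocorrelation-method LPC (Levinson-Durbin).
import numpy as np

def pre_emphasis(x, a=0.97):
    """y[n] = x[n] - a * x[n-1]."""
    return np.append(x[0], x[1:] - a * x[:-1])

def lpc(x, p):
    """Order-p prediction-error filter coefficients [1, a1, ..., ap] via Levinson-Durbin."""
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + p]   # autocorrelation lags 0..p
    a = np.zeros(p + 1)
    a[0] = 1.0
    e = r[0]
    for i in range(1, p + 1):
        k = -(r[i] + a[1:i] @ r[i - 1:0:-1]) / e        # reflection coefficient
        a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1][:i]  # update a_1..a_i using old values
        e *= (1.0 - k * k)                              # residual prediction error
    return a

fs = 16_000
t = np.arange(0, 0.02, 1.0 / fs)                        # one 20 ms frame
frame = np.sin(2 * np.pi * 300 * t) + 0.3 * np.sin(2 * np.pi * 1200 * t)
print(lpc(pre_emphasis(frame), p=10))
```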
Developing safety performance functions incorporating reliability-based risk measures.
Ibrahim, Shewkar El-Bassiouni; Sayed, Tarek
2011-11-01
Current geometric design guides provide deterministic standards where the safety margin of the design output is generally unknown and there is little knowledge of the safety implications of deviating from these standards. Several studies have advocated probabilistic geometric design where reliability analysis can be used to account for the uncertainty in the design parameters and to provide a risk measure of the implication of deviation from design standards. However, there is currently no link between measures of design reliability and the quantification of safety using collision frequency. The analysis presented in this paper attempts to bridge this gap by incorporating a reliability-based quantitative risk measure such as the probability of non-compliance (P(nc)) in safety performance functions (SPFs). Establishing this link will allow admitting reliability-based design into traditional benefit-cost analysis and should lead to a wider application of the reliability technique in road design. The present application is concerned with the design of horizontal curves, where the limit state function is defined in terms of the available (supply) and stopping (demand) sight distances. A comprehensive collision and geometric design database of two-lane rural highways is used to investigate the effect of the probability of non-compliance on safety. The reliability analysis was carried out using the First Order Reliability Method (FORM). Two Negative Binomial (NB) SPFs were developed to compare models with and without the reliability-based risk measures. It was found that models incorporating the P(nc) provided a better fit to the data set than the traditional (without risk) NB SPFs for total, injury and fatality (I+F) and property damage only (PDO) collisions. Copyright © 2011 Elsevier Ltd. All rights reserved.
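A minimal sketch of the probability-of-non-compliance idea for the sight-distance limit state g = SSD_available - SSD_demanded described above. For brevity it uses a first-order second-moment approximation with normal variables rather than the full FORM analysis of the paper, and all moments are invented placeholders.

```python
# FOSM approximation of the probability of non-compliance for a sight-distance limit state.
from scipy.stats import norm

mu_supply, sd_supply = 180.0, 20.0    # available sight distance on the curve, m (assumed)
mu_demand, sd_demand = 160.0, 25.0    # stopping sight distance demand, m (assumed)

mu_g = mu_supply - mu_demand                       # mean of the limit state g
sd_g = (sd_supply ** 2 + sd_demand ** 2) ** 0.5    # std of g, assuming independence
beta = mu_g / sd_g                                 # reliability index
p_nc = norm.cdf(-beta)                             # probability of non-compliance
print(f"beta = {beta:.2f}, P(nc) = {p_nc:.3f}")
```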
A study of fault prediction and reliability assessment in the SEL environment
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Patnaik, Debabrata
1986-01-01
An empirical study on estimation and prediction of faults, prediction of fault detection and correction effort, and reliability assessment in the Software Engineering Laboratory environment (SEL) is presented. Fault estimation using empirical relationships and fault prediction using curve fitting method are investigated. Relationships between debugging efforts (fault detection and correction effort) in different test phases are provided, in order to make an early estimate of future debugging effort. This study concludes with the fault analysis, application of a reliability model, and analysis of a normalized metric for reliability assessment and reliability monitoring during development of software.
Pozos-Guillén, Amaury; Ruiz-Rodríguez, Socorro; Garrocho-Rangel, Arturo
The main purpose of the second part of this series was to provide the reader with basic aspects of the most common biostatistical methods employed in the health sciences, in order to better understand the validity, significance and reliability of the results of any article on Pediatric Dentistry. As mentioned in the first paper, Pediatric Dentists currently need basic biostatistical knowledge so that they can apply it when critically appraising a dental article during the Evidence-based Dentistry (EBD) process, or when participating in the development of a clinical study with pediatric dental patients. The EBD process provides a systematic approach to collecting, reviewing and analyzing current and relevant published evidence about oral health care in order to answer a particular clinical question; this evidence should then be applied in everyday practice. This second report describes the most commonly used statistical methods for analyzing and interpreting collected data, and the methodological criteria to be considered when choosing the most appropriate tests for a specific study. These are presented for Pediatric Dentistry practitioners interested in reading or designing original clinical or epidemiological studies.
Solving regularly and singularly perturbed reaction-diffusion equations in three space dimensions
NASA Astrophysics Data System (ADS)
Moore, Peter K.
2007-06-01
In [P.K. Moore, Effects of basis selection and h-refinement on error estimator reliability and solution efficiency for higher-order methods in three space dimensions, Int. J. Numer. Anal. Mod. 3 (2006) 21-51] a fixed, high-order h-refinement finite element algorithm, Href, was introduced for solving reaction-diffusion equations in three space dimensions. In this paper Href is coupled with continuation creating an automatic method for solving regularly and singularly perturbed reaction-diffusion equations. The simple quasilinear Newton solver of Moore, (2006) is replaced by the nonlinear solver NITSOL [M. Pernice, H.F. Walker, NITSOL: a Newton iterative solver for nonlinear systems, SIAM J. Sci. Comput. 19 (1998) 302-318]. Good initial guesses for the nonlinear solver are obtained using continuation in the small parameter ɛ. Two strategies allow adaptive selection of ɛ. The first depends on the rate of convergence of the nonlinear solver and the second implements backtracking in ɛ. Finally a simple method is used to select the initial ɛ. Several examples illustrate the effectiveness of the algorithm.
Mateescu, Cristina; Popescu, Anca Mihaela; Radu, Gabriel Lucian; Onisei, Tatiana; Raducanu, Adina Elena
2017-01-01
Purpose: This study was carried out in order to find a reliable method for the fast detection of adulterated herbal food supplements with sexual enhancement claims. As some herbal products are advertised as "all natural", their "efficiency" is often increased by the addition of active pharmaceutical ingredients such as PDE-5 inhibitors, which can be a real health threat for the consumer. Methods: Adulterants potentially present in 50 herbal food supplements with sexual improvement claims were detected using two spectroscopic methods, Raman and Fourier Transform Infrared, known for their reliability, reproducibility and easy sample preparation. A GC-MS technique was used to confirm the spectra of the potential adulterants. Results: About 22% (11 out of 50) of the herbal food supplements with sexual enhancement claims analyzed by spectroscopic and spectrometric methods proved to be "enriched" with active pharmaceutical compounds such as sildenafil and two of its analogues, tadalafil and phenolphthalein. The occurrence of phenolphthalein could be the reason for the non-relevant results obtained by the FTIR method in some samples. 91% of the adulterated herbal food supplements originated from China. Conclusion: The results of this screening highlighted the necessity for an accurate analysis of all alleged herbal aphrodisiacs on the Romanian market. This is the first such screening analysis carried out on herbal food supplements with sexual enhancement claims. PMID:28761827
NASA Astrophysics Data System (ADS)
Wallace, Jon Michael
2003-10-01
Reliability prediction of components operating in complex systems has historically been conducted in a statistically isolated manner. Current physics-based, i.e. mechanistic, component reliability approaches focus more on component-specific attributes and mathematical algorithms and not enough on the influence of the system. The result is that significant error can be introduced into the component reliability assessment process. The objective of this study is the development of a framework that infuses the needs and influence of the system into the process of conducting mechanistic-based component reliability assessments. The formulated framework consists of six primary steps. The first three steps, identification, decomposition, and synthesis, are primarily qualitative in nature and employ system reliability and safety engineering principles to construct an appropriate starting point for the component reliability assessment. The following two steps are the most unique. They involve a step to efficiently characterize and quantify the system-driven local parameter space and a subsequent step using this information to guide the reduction of the component parameter space. The local statistical space quantification step is accomplished using two proposed multivariate probability models: Multi-Response First Order Second Moment and Taylor-Based Inverse Transformation. Where existing joint probability models require preliminary distribution and correlation information of the responses, these models combine statistical information of the input parameters with an efficient sampling of the response analyses to produce the multi-response joint probability distribution. Parameter space reduction is accomplished using Approximate Canonical Correlation Analysis (ACCA) employed as a multi-response screening technique. The novelty of this approach is that each individual local parameter and even subsets of parameters representing entire contributing analyses can now be rank ordered with respect to their contribution to not just one response, but the entire vector of component responses simultaneously. The final step of the framework is the actual probabilistic assessment of the component. Although the same multivariate probability tools employed in the characterization step can be used for the component probability assessment, variations of this final step are given to allow for the utilization of existing probabilistic methods such as response surface Monte Carlo and Fast Probability Integration. The overall framework developed in this study is implemented to assess the finite-element based reliability prediction of a gas turbine airfoil involving several failure responses. Results of this implementation are compared to results generated using the conventional 'isolated' approach as well as a validation approach conducted through large sample Monte Carlo simulations. The framework resulted in a considerable improvement to the accuracy of the part reliability assessment and an improved understanding of the component failure behavior. Considerable statistical complexity in the form of joint non-normal behavior was found and accounted for using the framework. Future applications of the framework elements are discussed.
A sequential linear optimization approach for controller design
NASA Technical Reports Server (NTRS)
Horta, L. G.; Juang, J.-N.; Junkins, J. L.
1985-01-01
A linear optimization approach with a simple real arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss beam finite element model for the optimal sizing and placement of active/passive-structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity to initial conditions of the linear optimization approach is also demonstrated.
Uncertainties in obtaining high reliability from stress-strength models
NASA Technical Reports Server (NTRS)
Neal, Donald M.; Matthews, William T.; Vangel, Mark G.
1992-01-01
There has been a recent interest in determining high statistical reliability in risk assessment of aircraft components. The potential consequences are identified of incorrectly assuming a particular statistical distribution for stress or strength data used in obtaining the high reliability values. The computation of the reliability is defined as the probability of the strength being greater than the stress over the range of stress values. This method is often referred to as the stress-strength model. A sensitivity analysis was performed involving a comparison of reliability results in order to evaluate the effects of assuming specific statistical distributions. Both known population distributions, and those that differed slightly from the known, were considered. Results showed substantial differences in reliability estimates even for almost nondetectable differences in the assumed distributions. These differences represent a potential problem in using the stress-strength model for high reliability computations, since in practice it is impossible to ever know the exact (population) distribution. An alternative reliability computation procedure is examined involving determination of a lower bound on the reliability values using extreme value distributions. This procedure reduces the possibility of obtaining nonconservative reliability estimates. Results indicated the method can provide conservative bounds when computing high reliability.
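The stress-strength computation and its sensitivity to the assumed distribution can be illustrated with a short sketch: two strength models with identical mean and standard deviation (normal versus lognormal) can give visibly different estimates at high reliability. This is a generic illustration of the abstract's point, not the paper's analysis, and all moments are illustrative.

```python
# Stress-strength reliability R = P(strength > stress): closed form for normal variables,
# and a Monte Carlo check with a lognormal strength of identical mean and standard deviation.
import numpy as np
from scipy.stats import norm

mu_s, sd_s = 100.0, 10.0               # stress, MPa (assumed)
mu_r, sd_r = 160.0, 12.0               # strength, MPa (assumed)

# Both normal: R = Phi((mu_r - mu_s) / sqrt(sd_r^2 + sd_s^2))
r_normal = norm.cdf((mu_r - mu_s) / np.hypot(sd_r, sd_s))

# Lognormal strength with the same first two moments, by Monte Carlo
sigma_ln = np.sqrt(np.log(1.0 + (sd_r / mu_r) ** 2))
mu_ln = np.log(mu_r) - 0.5 * sigma_ln ** 2
rng = np.random.default_rng(2)
strength = rng.lognormal(mean=mu_ln, sigma=sigma_ln, size=1_000_000)
stress = rng.normal(mu_s, sd_s, size=1_000_000)
r_lognormal = np.mean(strength > stress)

print(f"normal strength   : R = {r_normal:.6f}")
print(f"lognormal strength: R = {r_lognormal:.6f}")
```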
NASA Astrophysics Data System (ADS)
Wang, Xiaohua
The coupling resulting from the mutual influence of material thermal and mechanical parameters is examined in the thermal stress analysis of a multilayered isotropic composite cylinder subjected to sudden axisymmetric external and internal temperatures. The method of complex frequency response functions together with the Fourier transform technique is utilized. Because the coupling parameters for some composite materials, such as carbon-carbon, are very small, the effect of coupling is neglected in the orthotropic thermal stress analysis. The stress distributions in multilayered orthotropic cylinders subjected to sudden axisymmetric temperature loading combined with dynamic pressure, as well as asymmetric temperature loading, are also obtained. The method of Fourier series together with the Laplace transform is utilized in solving the heat conduction equation and the thermal stress analysis. For brittle materials, such as carbon-carbon composites, the strength variability is represented by two- or three-parameter Weibull distributions, and the 'weakest link' principle is applied to the carbon-carbon composite cylinders. The complex frequency response analysis is performed on a multilayered orthotropic cylinder under asymmetrical thermal load. Both deterministic and random thermal stress and reliability analyses can be based on the results of this frequency response analysis. The stress and displacement distributions and reliability of rocket motors under static or dynamic line loads are analyzed by an elasticity approach. Rocket motors are modeled as long hollow multilayered cylinders with an air core, a thick isotropic propellant inner layer and a thin orthotropic Kevlar-epoxy case. The case is treated as a single orthotropic layer or a ten-layered orthotropic structure. Five material properties and the load are treated as random variables with normal distributions when the reliability of the rocket motor is analyzed by the first-order, second-moment method (FOSM).
NASA Astrophysics Data System (ADS)
Bause, Markus
2008-02-01
In this work we study mixed finite element approximations of Richards' equation for simulating variably saturated subsurface flow and simultaneous reactive solute transport. Whereas higher order schemes have proved their ability to approximate reliably reactive solute transport (cf., e.g. [Bause M, Knabner P. Numerical simulation of contaminant biodegradation by higher order methods and adaptive time stepping. Comput Visual Sci 7;2004:61-78]), the Raviart-Thomas mixed finite element method (RT0) with a first order accurate flux approximation is popular for computing the underlying water flow field (cf. [Bause M, Knabner P. Computation of variably saturated subsurface flow by adaptive mixed hybrid finite element methods. Adv Water Resour 27;2004:565-581, Farthing MW, Kees CE, Miller CT. Mixed finite element methods and higher order temporal approximations for variably saturated groundwater flow. Adv Water Resour 26;2003:373-394, Starke G. Least-squares mixed finite element solution of variably saturated subsurface flow problems. SIAM J Sci Comput 21;2000:1869-1885, Younes A, Mosé R, Ackerer P, Chavent G. A new formulation of the mixed finite element method for solving elliptic and parabolic PDE with triangular elements. J Comp Phys 149;1999:148-167, Woodward CS, Dawson CN. Analysis of expanded mixed finite element methods for a nonlinear parabolic equation modeling flow into variably saturated porous media. SIAM J Numer Anal 37;2000:701-724]). This combination might be non-optimal. Higher order techniques could increase the accuracy of the flow field calculation and thereby improve the prediction of the solute transport. Here, we analyse the application of the Brezzi-Douglas-Marini element (BDM1) with a second order accurate flux approximation to elliptic, parabolic and degenerate problems whose solutions lack the regularity that is assumed in optimal order error analyses. For the flow field calculation a superiority of the BDM1 approach to the RT0 one is observed, which however is less significant for the accompanying solute transport.
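A minimal sketch of the RT0 versus BDM1 comparison, assuming the legacy FEniCS/DOLFIN (2019.x) API is available: it solves a linear mixed Poisson model problem with a manufactured solution rather than Richards' equation, and reports the L2 flux error for both flux spaces so the higher convergence order of the BDM1 flux can be seen.

```python
# Sketch only, assuming legacy FEniCS/DOLFIN (2019.x) is installed; not the paper's solver.
from dolfin import (UnitSquareMesh, FiniteElement, MixedElement, FunctionSpace,
                    TrialFunctions, TestFunctions, Function, Expression,
                    dot, div, dx, solve, errornorm)

def flux_error(flux_family, n=32):
    mesh = UnitSquareMesh(n, n)
    V = FiniteElement(flux_family, mesh.ufl_cell(), 1)   # "RT" (lowest-order RT0) or "BDM" (BDM1)
    Q = FiniteElement("DG", mesh.ufl_cell(), 0)          # piecewise-constant scalar variable
    W = FunctionSpace(mesh, MixedElement([V, Q]))
    (sigma, u) = TrialFunctions(W)
    (tau, v) = TestFunctions(W)
    # Manufactured solution u = sin(2*pi*x)*sin(2*pi*y), sigma = grad(u), with u = 0 on the boundary.
    f = Expression("8*pi*pi*sin(2*pi*x[0])*sin(2*pi*x[1])", degree=4)
    a = (dot(sigma, tau) + div(tau) * u + div(sigma) * v) * dx
    L = -f * v * dx
    w = Function(W)
    solve(a == L, w)
    sigma_h, _ = w.split(deepcopy=True)
    sigma_exact = Expression(("2*pi*cos(2*pi*x[0])*sin(2*pi*x[1])",
                              "2*pi*sin(2*pi*x[0])*cos(2*pi*x[1])"), degree=4)
    return errornorm(sigma_exact, sigma_h, "L2")

for family in ("RT", "BDM"):
    errors = [flux_error(family, n) for n in (8, 16, 32)]
    print(family, errors)   # the BDM1 flux error should drop roughly one order faster under refinement
```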
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campillo, M.; Valiente, M.; Lacharmoise, P. D.
Hydroxyapatite is the main mineral component of bones and teeth. Fluorapatite, a bioceramic that can be obtained from hydroxyapatite by chemical substitution of the hydroxide ions with fluoride, exhibits lower mineral solubility and larger mechanical strength. Despite the widespread use of fluoride against caries, a reliable technique for unambiguous assessment of fluoridation in in vitro tests is still lacking. Here we present a method to probe fluorapatite formation in fluoridated hydroxyapatite by combining Raman scattering with thermal annealing. In synthetic minerals, we found that effectively fluoride substituted hydroxyapatite transforms into fluorapatite only after heat treatment, due to the high activation energy for this first order phase transition.
On the assessment of hydroxyapatite fluoridation by means of Raman scattering
NASA Astrophysics Data System (ADS)
Campillo, M.; Lacharmoise, P. D.; Reparaz, J. S.; Goñi, A. R.; Valiente, M.
2010-06-01
Hydroxyapatite is the main mineral component of bones and teeth. Fluorapatite, a bioceramic that can be obtained from hydroxyapatite by chemical substitution of the hydroxide ions with fluoride, exhibits lower mineral solubility and larger mechanical strength. Despite the widespread use of fluoride against caries, a reliable technique for unambiguous assessment of fluoridation in in vitro tests is still lacking. Here we present a method to probe fluorapatite formation in fluoridated hydroxyapatite by combining Raman scattering with thermal annealing. In synthetic minerals, we found that effectively fluoride substituted hydroxyapatite transforms into fluorapatite only after heat treatment, due to the high activation energy for this first order phase transition.
The Evaluation Method of the Lightning Strike on Transmission Lines Aiming at Power Grid Reliability
NASA Astrophysics Data System (ADS)
Wen, Jianfeng; Wu, Jianwei; Huang, Liandong; Geng, Yinan; Yu, Zhanqing
2018-01-01
Lightning protection of power systems has focused on reducing the flashover rate, distinguishing lines only by voltage level, without considering the functional differences between transmission lines or analyzing the effect of lightning faults on power grid reliability. As a result, the lightning protection design of general transmission lines can be excessive while that of key lines remains insufficient. In order to solve this problem, an analysis method for lightning strikes on transmission lines oriented to power grid reliability is given. Full-wave process theory is used to analyze lightning back striking; the leader propagation model is used to describe the shielding failure of transmission lines. An index of power grid reliability is introduced and the effect of transmission line faults on the reliability of the power system is discussed in detail.
New analytical exact solutions of time fractional KdV-KZK equation by Kudryashov methods
NASA Astrophysics Data System (ADS)
Saha Ray, S.
2016-04-01
In this paper, new exact solutions of the time fractional KdV-Khokhlov-Zabolotskaya-Kuznetsov (KdV-KZK) equation are obtained by the classical Kudryashov method and modified Kudryashov method respectively. For this purpose, the modified Riemann-Liouville derivative is used to convert the nonlinear time fractional KdV-KZK equation into the nonlinear ordinary differential equation. In the present analysis, the classical Kudryashov method and modified Kudryashov method are both used successively to compute the analytical solutions of the time fractional KdV-KZK equation. As a result, new exact solutions involving the symmetrical Fibonacci function, hyperbolic function and exponential function are obtained for the first time. The methods under consideration are reliable and efficient, and can be used as an alternative to establish new exact solutions of different types of fractional differential equations arising from mathematical physics. The obtained results are exhibited graphically in order to demonstrate the efficiencies and applicabilities of these proposed methods of solving the nonlinear time fractional KdV-KZK equation.
A Multi-level Fuzzy Evaluation Method for Smart Distribution Network Based on Entropy Weight
NASA Astrophysics Data System (ADS)
Li, Jianfang; Song, Xiaohui; Gao, Fei; Zhang, Yu
2017-05-01
Smart distribution network is considered as the future trend of distribution network. In order to comprehensively evaluate the construction level of smart distribution networks and give guidance to the practice of smart distribution construction, a multi-level fuzzy evaluation method based on entropy weight is proposed. Firstly, focusing on both the conventional characteristics of distribution networks and new characteristics of smart distribution networks such as self-healing and interaction, a multi-level evaluation index system which contains power supply capability, power quality, economy, reliability and interaction is established. Then, a combination weighting method based on the Delphi method and the entropy weight method is put forward, which takes into account not only the importance of the evaluation indices in the experts' subjective view, but also the objective and differentiating information contained in the index values. Thirdly, a multi-level evaluation method based on fuzzy theory is put forward. Lastly, an example is conducted based on the statistical data of some cities' distribution networks, and the evaluation method is proved effective and rational.
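The entropy-weight step described above is a small, well-defined calculation: column-normalize the index matrix, compute each index's Shannon entropy, and weight each index by its divergence 1 - e. The sketch below uses invented index values, not the cited cities' data, and assumes all indices are benefit-type for simplicity.

```python
# Entropy weight method, minimal sketch with hypothetical data.
import numpy as np

def entropy_weights(X):
    X = np.asarray(X, float)
    # Normalize each column so entries sum to 1 (proportions p_ij).
    P = X / X.sum(axis=0)
    n = X.shape[0]
    # Shannon entropy of each index, with the convention 0*log(0) = 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)
    e = -np.sum(P * logs, axis=0) / np.log(n)
    # Weight is proportional to the degree of divergence (1 - e).
    d = 1.0 - e
    return d / d.sum()

# Rows = distribution networks, columns = evaluation indices (all invented).
X = [[0.92, 0.85, 3.1, 0.97, 0.70],
     [0.88, 0.90, 2.6, 0.95, 0.80],
     [0.95, 0.78, 3.4, 0.99, 0.65]]
print(entropy_weights(X))   # objective weights for the five indices
```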
Open Rotor Computational Aeroacoustic Analysis with an Immersed Boundary Method
NASA Technical Reports Server (NTRS)
Brehm, Christoph; Barad, Michael F.; Kiris, Cetin C.
2016-01-01
Reliable noise prediction capabilities are essential to enable novel fuel efficient open rotor designs that can meet the community and cabin noise standards. Toward this end, immersed boundary methods have reached a level of maturity such that they are being frequently employed for specific real world applications within NASA. This paper demonstrates that our higher-order immersed boundary method provides the ability for aeroacoustic analysis of wake-dominated flow fields generated by highly complex geometries. This is a first-of-a-kind aeroacoustic simulation of an open rotor propulsion system employing an immersed boundary method. In addition to discussing the peculiarities of applying the immersed boundary method to this moving boundary problem, we provide a detailed aeroacoustic analysis of the noise generation mechanisms encountered in the open rotor flow. The simulation data are compared to available experimental data and other computational results employing more conventional CFD methods. The noise generation mechanisms are analyzed employing spectral analysis, proper orthogonal decomposition and the causality method.
NASA Astrophysics Data System (ADS)
Smagin, A. V.; Dolgikh, A. V.; Karelin, D. V.
2016-04-01
The results of quantitative assessment and modeling of carbon dioxide emission from urban pedolithosediments (cultural layer) in the central part of Velikii Novgorod are discussed. At the first stages after the exposure of the cultural layer to the surface in archaeological excavations, very high CO2 emission values reaching 10-15 g C/(m2 h) have been determined. These values exceed the normal equilibrium emission from the soil surface by two orders of magnitude. However, they should not be interpreted as indications of the high biological activity of the buried urban sediments. A model based on physical processes shows that the measured emission values can be reliably explained by degassing of the soil water and desorption of gases from the urban sediments. This model suggests the diffusion mechanism of the transfer of carbon dioxide from the cultural layer into the atmosphere; in addition, it includes the equations to describe nonequilibrium interphase interactions (sorption-desorption and dissolution-degassing of CO2) with the first-order kinetics. With the use of statistically reliable data on physical parameters—the effective diffusion coefficient as dependent on the aeration porosity, the effective solubility, the Henry constant for the CO2 sorption, and the kinetic constants of the CO2 desorption and degassing of the soil solution—this model reproduces the experimental data on the dynamics of CO2 emission from the surface of the exposed cultural layer obtained by the static chamber method.
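The first-order kinetics invoked in the model can be illustrated with a one-line analytical solution of dC/dt = -k (C - C_eq). The sketch below uses invented parameter values rather than the paper's calibrated constants, and only shows how an initial excess of dissolved CO2 decays toward equilibrium, feeding the transient surface emission.

```python
# Hedged illustration of first-order degassing kinetics; all parameter values are invented.
import numpy as np

k = 0.05          # first-order rate constant, 1/h (hypothetical)
C_eq = 0.4        # equilibrium dissolved CO2, relative units (hypothetical)
C0 = 5.0          # initial excess after exposure of the cultural layer (hypothetical)

t = np.linspace(0.0, 120.0, 13)                 # hours
C = C_eq + (C0 - C_eq) * np.exp(-k * t)         # analytical solution of dC/dt = -k (C - C_eq)
flux = k * (C - C_eq)                           # degassing rate feeding the surface emission

for ti, fi in zip(t, flux):
    print(f"t = {ti:5.1f} h  relative degassing rate = {fi:.3f}")
```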
AlKhalidi, Bashar A; Shtaiwi, Majed; AlKhatib, Hatim S; Mohammad, Mohammad; Bustanji, Yasser
2008-01-01
A fast and reliable method for the determination of repaglinide is highly desirable to support formulation screening and quality control. A first-derivative UV spectroscopic method was developed for the determination of repaglinide in tablet dosage form and for dissolution testing. First-derivative UV absorbance was measured at 253 nm. The developed method was validated for linearity, accuracy, precision, limit of detection (LOD), and limit of quantitation (LOQ) in comparison to the U.S. Pharmacopeia (USP) column high-performance liquid chromatographic (HPLC) method. The first-derivative UV spectrophotometric method showed excellent linearity [correlation coefficient (r) = 0.9999] in the concentration range of 1-35 microg/mL and precision (relative standard deviation < 1.5%). The LOD and LOQ were 0.23 and 0.72 microg/mL, respectively, and good recoveries were achieved (98-101.8%). Statistical comparison of results of the first-derivative UV spectrophotometric and the USP HPLC methods using the t-test showed that there was no significant difference between the 2 methods. Additionally, the method was successfully used for the dissolution test of repaglinide and was found to be reliable, simple, fast, and inexpensive.
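First-derivative UV quantification amounts to differentiating the absorbance spectrum with respect to wavelength and regressing the derivative amplitude at a fixed wavelength against concentration. The toy sketch below uses a synthetic Gaussian band with assumed parameters, not the published repaglinide spectra, and simply builds the calibration line.

```python
# Toy first-derivative UV calibration on synthetic spectra (assumed band shape, not real data).
import numpy as np

wl = np.arange(220.0, 300.0, 0.5)                        # wavelength grid, nm

def absorbance(conc, center=250.0, width=12.0):
    # Single Gaussian band whose height scales linearly with concentration (Beer-Lambert).
    return conc * 0.03 * np.exp(-((wl - center) / width) ** 2)

concs = np.array([1, 5, 10, 20, 35], float)              # micrograms/mL
signals = []
for c in concs:
    dA = np.gradient(absorbance(c), wl)                   # first-derivative spectrum dA/d(lambda)
    signals.append(dA[np.argmin(np.abs(wl - 253.0))])     # derivative amplitude read at 253 nm

slope, intercept = np.polyfit(concs, signals, 1)
r = np.corrcoef(concs, signals)[0, 1]
print(f"calibration: signal = {slope:.4e}*conc + {intercept:.2e}, r = {r:.4f}")
```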
FNAC: its role, limitations and perspective in the preoperative diagnosis of breast cancer.
Zagorianakou, P; Fiaccavento, S; Zagorianakou, N; Makrydimas, G; Stefanou, D; Agnantis, N J
2005-01-01
Fine-needle aspiration cytology (FNAC) was first described and performed in 1930. Thirty years later, it gained acceptance first in Europe and about a decade later in North America. The method is generally considered a rapid, reliable, safe diagnostic tool to distinguish non-neoplastic from neoplastic breast lesions. In developed countries, in the last 20 years, mammographic screening programmes, which have been used extensively, are designed to detect the earliest possible breast cancer. The FNAC report is extremely important because it gives the necessary information for the management of patients, in order to proceed with more invasive diagnostic methods or surgical treatment, and to decide what kind of operation to perform. In the preoperative phase, FNAC has taken on a fundamental role in the assessment of both palpable and nonpalpable lesions, using ultrasound or stereotactic guidance. Newly developed techniques, such as the Advanced Breast Biopsy Instrumentation (ABBI) and the mammotome, have the advantage of complete removal of breast lesions, but this is not possible in all the examined cases. In developing countries, economic restrictions, low budgets for health care and screening programmes put the patients at a disadvantage because of the high cost of sophisticated diagnostic methods; thus we recommend that FNAC be used as a routine diagnostic method because of its low cost compared with the others, and this policy maximizes the availability of health care to women with breast cancer. We conclude that FNAC plays an important and essential role in the management of patients with breast lesions and also offers a great potential for prediction of patient outcome, disease response to therapy and assessment of risk of developing breast cancer. The reliability and efficiency of the method depend on the quality of the samples and the experience of the medical staff that performs the aspiration.
Vaingankar, Janhavi Ajit; Subramaniam, Mythily; Abdin, Edimansyah; Picco, Louisa; Chua, Boon Yiang; Eng, Goi Khia; Sambasivam, Rajeswari; Shafie, Saleha; Zhang, Yunjue; Chong, Siow Ann
2014-06-01
The 47-item positive mental health (PMH) instrument measures the level of PMH in multiethnic adult Asian populations. This study aimed to (1) develop a short PMH instrument and (2) establish its validity and reliability among the adult Singapore population. Two separate studies were conducted among adult community-dwelling Singapore residents of Chinese, Malay or Indian ethnicity where participants completed self-administered questionnaires. In the first study, secondary data analysis was conducted using confirmatory factor analysis (CFA) to shorten the PMH instrument. In the second study, the newly developed short PMH instrument and other scales were administered to 201 residents to establish its factor structure, validity and reliability. A 20-item short PMH instrument fulfilling a higher-order six-factor structure was developed following secondary analysis. The mean age of the participants in the second study was 41 years and about 53% were women. One item with poor factor loading was further removed to generate a 19-item version of the PMH instrument. CFA demonstrated a first-order six-factor model of the short PMH instrument. The PMH-19 instrument and its subscales fulfilled criterion validity hypotheses. Internal consistency and test-retest reliability of the PMH-19 instrument were high (Cronbach's α coefficient = 0.87; intraclass correlation coefficient = 0.93, respectively). The 19-item PMH instrument is multidimensional, valid and reliable, and most importantly, with its reduced administration time, the short PMH instrument can be used to measure and evaluate PMH in Asian communities.
ERIC Educational Resources Information Center
Yener, Özen
2014-01-01
In this research, we aim to develop a 5-point Likert scale and use it in an experimental application, testing its validity and reliability in order to measure the will perception of teenagers and adults. With this aim, the items were first taken, either unchanged or in modified form, from various scales, and an item pool including 61 items…
Liang, Xiao; Khaliq, Abdul Q. M.; Xing, Yulong
2015-01-23
In this paper, we study a local discontinuous Galerkin method combined with fourth order exponential time differencing Runge-Kutta time discretization and a fourth order conservative method for solving the nonlinear Schrödinger equations. Based on different choices of numerical fluxes, we propose both energy-conserving and energy-dissipative local discontinuous Galerkin methods, and have proven the error estimates for the semi-discrete methods applied to linear Schrödinger equation. The numerical methods are proven to be highly efficient and stable for long-range soliton computations. Finally, extensive numerical examples are provided to illustrate the accuracy, efficiency and reliability of the proposed methods.
Zaki, Rafdzah; Bulgiba, Awang; Nordin, Noorhaire; Azina Ismail, Noor
2013-06-01
Reliability measures precision or the extent to which test results can be replicated. This is the first ever systematic review to identify statistical methods used to measure reliability of equipment measuring continuous variables. This study also aims to highlight the inappropriate statistical methods used in reliability analysis and their implications for medical practice. In 2010, five electronic databases were searched between 2007 and 2009 to look for reliability studies. A total of 5,795 titles were initially identified. Only 282 titles were potentially related, and finally 42 fitted the inclusion criteria. The Intra-class Correlation Coefficient (ICC) is the most popular method, with 25 (60%) studies having used this method, followed by the comparison of means (8 studies, or 19%). Out of 25 studies using the ICC, only 7 (28%) reported the confidence intervals and types of ICC used. Most studies (71%) also tested the agreement of instruments. This study finds that the Intra-class Correlation Coefficient is the most popular method used to assess the reliability of medical instruments measuring continuous outcomes. There are also inappropriate applications and interpretations of statistical methods in some studies. It is important for medical researchers to be aware of this issue, and be able to correctly perform analysis in reliability studies.
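For reference, one commonly reported form in such studies, the two-way random-effects, absolute-agreement ICC(2,1) in the Shrout and Fleiss notation, can be computed directly from ANOVA mean squares. The sketch below uses an invented ratings matrix and is only a worked example of that single coefficient, not of the full review methodology.

```python
# ICC(2,1) from ANOVA mean squares (Shrout & Fleiss convention); ratings are invented.
import numpy as np

def icc_2_1(ratings):
    Y = np.asarray(ratings, float)          # rows = subjects, columns = raters
    n, k = Y.shape
    grand = Y.mean()
    ms_rows = k * np.sum((Y.mean(axis=1) - grand) ** 2) / (n - 1)      # between-subjects MS
    ms_cols = n * np.sum((Y.mean(axis=0) - grand) ** 2) / (k - 1)      # between-raters MS
    ss_err = np.sum((Y - Y.mean(axis=1, keepdims=True)
                       - Y.mean(axis=0, keepdims=True) + grand) ** 2)
    ms_err = ss_err / ((n - 1) * (k - 1))                              # residual MS
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                 + k * (ms_cols - ms_err) / n)

ratings = [[9, 2, 5, 8],
           [6, 1, 3, 2],
           [8, 4, 6, 8],
           [7, 1, 2, 6],
           [10, 5, 6, 9],
           [6, 2, 4, 7]]
print(f"ICC(2,1) = {icc_2_1(ratings):.3f}")
```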
Hou, Kun-Mean; Zhang, Zhan
2017-01-01
Cyber Physical Systems (CPSs) need to interact with the changeable environment under various interferences. To provide continuous and high quality services, a self-managed CPS should automatically reconstruct itself to adapt to these changes and recover from failures. Such dynamic adaptation behavior introduces systemic challenges for CPS design, advice evaluation and decision process arrangement. In this paper, a formal compositional framework is proposed to systematically improve the dependability of the decision process. To guarantee the consistent observation of event orders for causal reasoning, this work first proposes a relative time-based method to improve the composability and compositionality of the timing property of events. Based on the relative time solution, a formal reference framework is introduced for self-managed CPSs, which includes a compositional FSM-based actor model (subsystems of CPS), actor-based advice and runtime decomposable decisions. To simplify self-management, a self-similar recursive actor interface is proposed for decision (actor) composition. We provide constraints and seven patterns for the composition of reliability and process time requirements. Further, two decentralized decision process strategies are proposed based on our framework, and we compare the reliability with the static strategy and the centralized processing strategy. The simulation results show that the one-order feedback strategy has high reliability, scalability and stability against the complexity of decision and random failure. This paper also shows a way to simplify the evaluation for dynamic system by improving the composability and compositionality of the subsystem. PMID:29120357
Zhou, Peng; Zuo, Decheng; Hou, Kun-Mean; Zhang, Zhan
2017-11-09
Cyber Physical Systems (CPSs) need to interact with the changeable environment under various interferences. To provide continuous and high quality services, a self-managed CPS should automatically reconstruct itself to adapt to these changes and recover from failures. Such dynamic adaptation behavior introduces systemic challenges for CPS design, advice evaluation and decision process arrangement. In this paper, a formal compositional framework is proposed to systematically improve the dependability of the decision process. To guarantee the consistent observation of event orders for causal reasoning, this work first proposes a relative time-based method to improve the composability and compositionality of the timing property of events. Based on the relative time solution, a formal reference framework is introduced for self-managed CPSs, which includes a compositional FSM-based actor model (subsystems of CPS), actor-based advice and runtime decomposable decisions. To simplify self-management, a self-similar recursive actor interface is proposed for decision (actor) composition. We provide constraints and seven patterns for the composition of reliability and process time requirements. Further, two decentralized decision process strategies are proposed based on our framework, and we compare the reliability with the static strategy and the centralized processing strategy. The simulation results show that the one-order feedback strategy has high reliability, scalability and stability against the complexity of decision and random failure. This paper also shows a way to simplify the evaluation for dynamic system by improving the composability and compositionality of the subsystem.
Ding, Yi; Peng, Kai; Yu, Miao; Lu, Lei; Zhao, Kun
2017-08-01
The performance of the two selected spatial frequency phase unwrapping methods is limited by a phase error bound beyond which errors will occur in the fringe order leading to a significant error in the recovered absolute phase map. In this paper, we propose a method to detect and correct the wrong fringe orders. Two constraints are introduced during the fringe order determination of two selected spatial frequency phase unwrapping methods. A strategy to detect and correct the wrong fringe orders is also described. Compared with the existing methods, we do not need to estimate the threshold associated with absolute phase values to determine the fringe order error, thus making it more reliable and avoiding the procedure of search in detecting and correcting successive fringe order errors. The effectiveness of the proposed method is validated by the experimental results.
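In two-frequency (selected spatial frequency) phase unwrapping, the fringe order is obtained by rounding the difference between the scaled low-frequency phase and the wrapped high-frequency phase. The sketch below shows only that basic step on synthetic data; the consistency constraints and error-correction strategy proposed in the paper are not reproduced.

```python
# Basic two-frequency fringe-order determination on synthetic phase maps.
import numpy as np

rng = np.random.default_rng(0)
f_low, f_high = 1, 16                       # selected spatial frequencies (fringes per field)
x = np.linspace(0.0, 1.0, 1000)             # normalized pixel coordinate
phi_low_abs = 2 * np.pi * f_low * x         # unit-frequency phase is already absolute
phi_high_abs = 2 * np.pi * f_high * x       # ground-truth absolute high-frequency phase

wrap = lambda p: np.angle(np.exp(1j * p))   # wrap to (-pi, pi]
phi_high = wrap(phi_high_abs + rng.normal(0.0, 0.05, x.size))   # measured, noisy wrapped phase

# Fringe order from the scaled low-frequency phase, then the absolute phase.
k = np.round((f_high / f_low * phi_low_abs - phi_high) / (2 * np.pi))
phi_unwrapped = phi_high + 2 * np.pi * k

print("max abs error vs truth:", np.max(np.abs(phi_unwrapped - phi_high_abs)))
```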
ERIC Educational Resources Information Center
Mashburn, Andrew J.; Meyer, J. Patrick; Allen, Joseph P.; Pianta, Robert C.
2014-01-01
Observational methods are increasingly being used in classrooms to evaluate the quality of teaching. Operational procedures for observing teachers are somewhat arbitrary in existing measures and vary across different instruments. To study the effect of different observation procedures on score reliability and validity, we conducted an experimental…
Exploring experiential value in online mobile gaming adoption.
Okazaki, Shintaro
2008-10-01
Despite the growing importance of the online mobile gaming industry, little research has been undertaken to explain why consumers engage in this ubiquitous entertainment. This study attempts to develop an instrument to measure experiential value in online mobile gaming adoption. The proposed scale consists of seven first-order factors of experiential value: intrinsic enjoyment, escapism, efficiency, economic value, visual appeal, perceived novelty, and perceived risklessness. The survey obtained 164 usable responses from Japanese college students. The empirical data fit our first-order model well, indicating a high level of reliability as well as convergent and discriminant validity. The single second-order model also shows an acceptable model fit.
Probabilistic Finite Element Analysis & Design Optimization for Structural Designs
NASA Astrophysics Data System (ADS)
Deivanayagam, Arumugam
This study focuses on incorporating the probabilistic nature of material properties (Kevlar® 49) into the existing deterministic finite element analysis (FEA) of a fabric-based engine containment system through Monte Carlo simulations (MCS), and on implementing probabilistic analysis in engineering designs through Reliability Based Design Optimization (RBDO). First, the emphasis is on experimental data analysis focusing on probabilistic distribution models which characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled using experimental data analysis and implemented along with an existing spiral modeling scheme (SMS) and user defined constitutive model (UMAT) for fabric based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure pattern and exit velocities of the models. Then the solutions are compared with NASA experimental tests and deterministic results. MCS with probabilistic material data give a better perspective on the results than a single deterministic simulation. The next part of the research is to implement the probabilistic material properties in engineering designs. The main aim of structural design is to obtain optimal solutions. However, even though the structures obtained from a deterministic optimization problem are cost effective, they can be highly unreliable if the uncertainty that may be associated with the system (material properties, loading, etc.) is not represented or considered in the solution process. Reliable and optimal solutions can be obtained by performing reliability analysis along with deterministic optimization, which is RBDO. In the RBDO problem formulation, in addition to structural performance constraints, reliability constraints are also considered. This part of the research starts with an introduction to reliability analysis, such as first-order and second-order reliability analyses, followed by simulation techniques that are performed to obtain the probability of failure and reliability of structures. Next, a decoupled RBDO procedure is proposed with a new reliability analysis formulation and sensitivity analysis, which is performed to remove the highly reliable constraints in the RBDO, thereby reducing the computational time and function evaluations. Finally, implementations of the reliability analysis concepts and RBDO in finite element 2D truss problems and a planar beam problem are presented and discussed.
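The first-order reliability analysis referred to above is commonly implemented with the Hasofer-Lind/Rackwitz-Fiessler (HL-RF) iteration in standard normal space. The compact sketch below applies it to a hypothetical strength-minus-load limit state, not to the Kevlar containment model, and assumes independent normal variables already transformed to standard normal space.

```python
# Compact FORM sketch via the HL-RF iteration; limit state and statistics are hypothetical.
import numpy as np
from scipy.stats import norm

def form_hlrf(g, grad_g, u0, tol=1e-8, max_iter=100):
    u = np.asarray(u0, float)
    for _ in range(max_iter):
        gu, dg = g(u), grad_g(u)
        u_new = (dg @ u - gu) / (dg @ dg) * dg      # HL-RF update in standard normal space
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    beta = np.linalg.norm(u)                        # reliability index = distance to design point
    return beta, norm.cdf(-beta)                    # first-order probability of failure

# g(u) = R - S with R = 600 + 40*u1 and S = 400 + 30*u2 (hypothetical units).
g = lambda u: (600.0 + 40.0 * u[0]) - (400.0 + 30.0 * u[1])
grad_g = lambda u: np.array([40.0, -30.0])
beta, pf = form_hlrf(g, grad_g, u0=[0.0, 0.0])
print(f"beta = {beta:.3f}, Pf = {pf:.2e}")
```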
Krutulyte, Grazina; Kimtys, Algimantas; Krisciūnas, Aleksandras
2003-01-01
The purpose of this study was to examine whether two different physiotherapy regimes caused any differences in outcome in rehabilitation after stroke. We examined 240 patients with stroke. Examination was carried out at the Rehabilitation Center of Kaunas Second Clinical Hospital. Patients were divided into 2 groups: the Bobath method was applied to the first (I) group (n=147), and the motor relearning program (MRP) method was applied to the second (II) group (n=93). In every group of patients we established samples according to sex, age, hospitalization to the rehabilitation unit after occurrence of CVA, and degree of disorder (hemiplegia, hemiparesis). The mobility of patients was evaluated according to the European Federation for Research in Rehabilitation (EFRR) scale. Activities of daily living were evaluated by the Barthel index. The groups were evaluated before physical therapy; this preliminary analysis showed no statistically reliable differences between the analyzed groups (reliability 95%). The same statistical analysis was carried out after physical therapy. The differences between patient groups were compared using the chi(2) method. The Bobath method was applied when working with the first group of patients. The aim of the method is to improve the quality of the affected body side's movements in order to keep both sides working as harmoniously as possible. While applying this method, the physical therapist guides the patient's body at key points, stimulating normal postural reactions and training normal movement patterns. The MRP method was used while working with the second group of patients. This method is based on movement science, biomechanics and training of functional movement. The program is based on the idea that a movement pattern should not be trained; it must be relearned. CONCLUSION. This study indicates that physiotherapy with task-oriented strategies, represented by MRP, is preferable to physiotherapy with facilitation/inhibition strategies, such as the Bobath programme, in the rehabilitation of stroke patients (p < 0.05).
McCreesh, Karen M; Crotty, James M; Lewis, Jeremy S
2015-03-01
Narrowing of the subacromial space has been noted as a common feature of rotator cuff (RC) tendinopathy. It has been implicated in the development of symptoms and forms the basis for some surgical and rehabilitation approaches. Various radiological methods have been used to measure the subacromial space, which is represented by a two-dimensional measurement of acromiohumeral distance (AHD). A reliable method of measurement could be used to assess the impact of rehabilitation or surgical interventions for RC tendinopathy; however, there are no published reviews assessing the reliability of AHD measurement. The aim of this review was to systematically assess the evidence for the intrarater and inter-rater reliability of radiological methods of measuring AHD, in order to identify the most reliable method for use in RC tendinopathy. An electronic literature search was carried out and studies describing the reliability of any radiological method of measuring AHD in either healthy or RC tendinopathy groups were included. Eighteen studies met the inclusion criteria and were appraised by two reviewers using the Quality Appraisal for reliability Studies checklist. Eight studies were deemed to be of high methodological quality. Study weaknesses included lack of tester blinding, inadequate description of tester experience, lack of inclusion of symptomatic populations, poor reporting of statistical methods and unclear diagnosis. There was strong evidence for the reliability of ultrasound for measuring AHD, with moderate evidence for MRI and CT measures and conflicting evidence for radiographic methods. Overall, there was lack of research in RC tendinopathy populations, with only six studies including participants with shoulder pain. The results support the reliability of ultrasound and CT or MRI for the measurement of AHD; however, more studies in symptomatic populations are required. The reliability of AHD measurement using radiographs has not been supported by the studies reviewed.
Integrating Formal Methods and Testing 2002
NASA Technical Reports Server (NTRS)
Cukic, Bojan
2002-01-01
Traditionally, qualitative program verification methodologies and program testing are studied in separate research communities. None of them alone is powerful and practical enough to provide sufficient confidence in ultra-high reliability assessment when used exclusively. Significant advances can be made by accounting not only for formal verification and program testing, but also for the impact of many other standard V&V techniques, in a unified software reliability assessment framework. The first year of this research resulted in the statistical framework that, given the assumptions on the success of the qualitative V&V and QA procedures, significantly reduces the amount of testing needed to confidently assess reliability at so-called high and ultra-high levels (10^-4 or higher). The coming years shall address the methodologies to realistically estimate the impacts of various V&V techniques on system reliability and include the impact of operational risk in reliability assessment. The objectives are to: A) combine formal correctness verification, process and product metrics, and other standard qualitative software assurance methods with statistical testing, with the aim of gaining higher confidence in software reliability assessment for high-assurance applications; B) quantify the impact of these methods on software reliability; C) demonstrate that accounting for the effectiveness of these methods reduces the number of tests needed to attain a certain confidence level; and D) quantify and justify the reliability estimate for systems developed using various methods.
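A simple calculation shows why reducing the required amount of testing matters at these levels: with zero observed failures, demonstrating reliability R at confidence C by testing alone needs roughly ln(1 - C)/ln(R) independent tests. The sketch below evaluates that standard zero-failure bound for a few illustrative reliability targets; the targets themselves are illustrative, not figures from the report.

```python
# Zero-failure test count needed to claim reliability R at confidence C.
import math

def tests_required(reliability, confidence):
    # From (reliability)**n <= 1 - confidence with zero observed failures.
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

for r in (0.999, 0.9999, 0.99999):
    print(f"R = {r}: {tests_required(r, 0.95):,} failure-free tests for 95% confidence")
```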
Method of ultrasonic measurement of texture
Thompson, R. Bruce; Smith, John F.; Lee, Seung S.; Li, Yan
1993-10-12
A method for measuring texture of metal plates or sheets using non-destructive ultrasonic investigation includes measuring the velocity of ultrasonic energy waves in lower order plate modes in one or more directions, and measuring phase velocity dispersion of higher order modes of the plate or sheet if needed. Texture or preferred grain orientation can be derived from these measurements with improved reliability and accuracy. The method can be utilized in production on moving metal plate or sheet.
Method of ultrasonic measurement of texture
Thompson, R.B.; Smith, J.F.; Lee, S.S.; Li, Y.
1993-10-12
A method for measuring texture of metal plates or sheets using non-destructive ultrasonic investigation includes measuring the velocity of ultrasonic energy waves in lower order plate modes in one or more directions, and measuring phase velocity dispersion of higher order modes of the plate or sheet if needed. Texture or preferred grain orientation can be derived from these measurements with improved reliability and accuracy. The method can be utilized in production on moving metal plate or sheet. 9 figures.
A rapid approach for automated comparison of independently derived stream networks
Stanislawski, Larry V.; Buttenfield, Barbara P.; Doumbouya, Ariel T.
2015-01-01
This paper presents an improved coefficient of line correspondence (CLC) metric for automatically assessing the similarity of two different sets of linear features. Elevation-derived channels at 1:24,000 scale (24K) are generated from a weighted flow-accumulation model and compared to 24K National Hydrography Dataset (NHD) flowlines. The CLC process conflates two vector datasets through a raster line-density differencing approach that is faster and more reliable than earlier methods. Methods are tested on 30 subbasins distributed across different terrain and climate conditions of the conterminous United States. CLC values for the 30 subbasins indicate 44–83% of the features match between the two datasets, with the majority of the mismatching features comprised of first-order features. Relatively lower CLC values result from subbasins with less than about 1.5 degrees of slope. The primary difference between the two datasets may be explained by different data capture criteria. First-order, headwater tributaries derived from the flow-accumulation model are captured more comprehensively through drainage area and terrain conditions, whereas capture of headwater features in the NHD is cartographically constrained by tributary length. The addition of missing headwaters to the NHD, as guided by the elevation-derived channels, can substantially improve the scientific value of the NHD.
NASA Technical Reports Server (NTRS)
Wallace, Dolores R.
2003-01-01
In FY01 we learned that hardware reliability models need substantial changes to account for differences in software, thus making software reliability measurements more effective, accurate, and easier to apply. These reliability models are generally based on familiar distributions or parametric methods. An obvious question is "What new statistical and probability models can be developed using non-parametric and distribution-free methods instead of the traditional parametric methods?" Two approaches to software reliability engineering appear somewhat promising. The first study, begun in FY01, is based on hardware reliability, a very well established science that has many aspects that can be applied to software. This research effort has investigated mathematical aspects of hardware reliability and has identified those applicable to software. Currently the research effort is applying and testing these approaches to software reliability measurement. These parametric models require much project data that may be difficult to apply and interpret. Projects at GSFC are often complex in both technology and schedules. Assessing and estimating reliability of the final system is extremely difficult when various subsystems are tested and completed long before others. Parametric and distribution-free techniques may offer a new and accurate way of modeling failure time and other project data to provide earlier and more accurate estimates of system reliability.
First Principle Predictions of Isotopic Shifts in H2O
NASA Technical Reports Server (NTRS)
Schwenke, David W.; Kwak, Dochan (Technical Monitor)
2002-01-01
We compute isotope independent first and second order corrections to the Born-Oppenheimer approximation for water and use them to predict isotopic shifts. For the diagonal correction, we use icMRCI wavefunctions and derivatives with respect to mass dependent, internal coordinates to generate the mass independent correction functions. For the non-adiabatic correction, we use scaled SCF/CIS wave functions and a generalization of the Handy method to obtain mass independent correction functions. We find that including the non-adiabatic correction gives significantly improved results compared to just including the diagonal correction when the Born-Oppenheimer potential energy surface is optimized for H2O-16. The agreement with experimental results for deuterium and tritium containing isotopes is nearly as good as our best empirical correction, however, the present correction is expected to be more reliable for higher, uncharacterized levels.
Sim, K S; Yeap, Z X; Tso, C P
2016-11-01
An improvement to the existing technique of quantifying signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images using piecewise cubic Hermite interpolation (PCHIP) technique is proposed. The new technique uses an adaptive tuning onto the PCHIP, and is thus named as ATPCHIP. To test its accuracy, 70 images are corrupted with noise and their autocorrelation functions are then plotted. The ATPCHIP technique is applied to estimate the uncorrupted noise-free zero offset point from a corrupted image. Three existing methods, the nearest neighborhood, first order interpolation and original PCHIP, are used to compare with the performance of the proposed ATPCHIP method, with respect to their calculated SNR values. Results show that ATPCHIP is an accurate and reliable method to estimate SNR values from SEM images.
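The underlying idea can be sketched with the ordinary SciPy PCHIP interpolator rather than the authors' adaptively tuned variant: fit the autocorrelation at small nonzero lags, extrapolate back to lag zero to estimate the noise-free value, and take the noise power as the difference from the measured zero-lag value. The example below uses a synthetic 1-D profile with assumed noise instead of an SEM image.

```python
# SNR from autocorrelation with PCHIP extrapolation to lag 0; synthetic data, standard PCHIP only.
import numpy as np
from scipy.interpolate import PchipInterpolator

rng = np.random.default_rng(1)
x = np.linspace(0, 8 * np.pi, 4096)
signal = np.sin(x) + 0.5 * np.sin(0.3 * x)            # stand-in for an image line profile
noisy = signal + rng.normal(0.0, 0.4, x.size)

d = noisy - noisy.mean()
acf = np.correlate(d, d, mode="full")[d.size - 1:] / d.size   # biased ACF, lags 0..N-1

lags = np.arange(1, 8)                                 # fit lags 1..7, skipping the noise spike at 0
pchip = PchipInterpolator(lags, acf[lags], extrapolate=True)
acf0_noise_free = float(pchip(0.0))                    # extrapolated noise-free zero-offset value

noise_var = acf[0] - acf0_noise_free                   # noise power = spike height at lag 0
print(f"estimated SNR = {acf0_noise_free / noise_var:.2f}",
      f"(true SNR = {np.var(signal) / 0.4**2:.2f})")
```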
Purposes and methods of scoring earthquake forecasts
NASA Astrophysics Data System (ADS)
Zhuang, J.
2010-12-01
There are two kinds of purposes in studies on earthquake prediction or forecasts: one is to give a systematic estimation of earthquake risks in some particular region and period in order to give advice to governments and enterprises for reducing disasters; the other is to search for reliable precursors that can be used to improve earthquake prediction or forecasts. For the first purpose, a complete score is necessary, while for the latter, a partial score, which can be used to evaluate whether the forecasts or predictions have some advantage over a well-known model, is necessary. This study reviews different scoring methods for evaluating the performance of earthquake prediction and forecasts. In particular, the gambling scoring method, which was developed recently, shows its capacity to find good points in an earthquake prediction algorithm or model that are not in a reference model, even if its overall performance is no better than the reference model.
Error Estimation for the Linearized Auto-Localization Algorithm
Guevara, Jorge; Jiménez, Antonio R.; Prieto, Jose Carlos; Seco, Fernando
2012-01-01
The Linearized Auto-Localization (LAL) algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs), using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons’ positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first order Taylor approximation of the equations. Since the method depends on such approximation, a confidence parameter τ is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weighted-based auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method. PMID:22736965
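Generic first-order (Taylor) error propagation of the kind used in the LAL analysis maps an input covariance S to an output covariance J S J^T through the Jacobian J at the working point. The sketch below propagates position uncertainty through a toy distance function; it is not the LAL trilateration formulation, and the standard deviations are invented.

```python
# First-order (Taylor) error propagation with a numerical Jacobian; toy example only.
import numpy as np

def propagate(f, x0, cov_x, h=1e-6):
    x0 = np.asarray(x0, float)
    y0 = np.atleast_1d(f(x0))
    J = np.zeros((y0.size, x0.size))
    for i in range(x0.size):
        dx = np.zeros_like(x0)
        dx[i] = h
        J[:, i] = (np.atleast_1d(f(x0 + dx)) - y0) / h     # forward-difference Jacobian column
    return J @ np.asarray(cov_x, float) @ J.T              # first-order output covariance

# Toy example: distance between a beacon at (bx, by) and a mobile node at (mx, my).
f = lambda p: np.hypot(p[0] - p[2], p[1] - p[3])
cov = np.diag([0.02**2, 0.02**2, 0.05**2, 0.05**2])        # assumed position variances (m^2)
var_d = propagate(f, [0.0, 0.0, 3.0, 4.0], cov)
print(f"std of the inter-node distance ~ {np.sqrt(var_d[0, 0]):.3f} m")
```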
The many-body Wigner Monte Carlo method for time-dependent ab-initio quantum simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sellier, J.M., E-mail: jeanmichel.sellier@parallel.bas.bg; Dimov, I.
2014-09-15
The aim of ab-initio approaches is the simulation of many-body quantum systems from the first principles of quantum mechanics. These methods are traditionally based on the many-body Schrödinger equation which represents an incredible mathematical challenge. In this paper, we introduce the many-body Wigner Monte Carlo method in the context of distinguishable particles and in the absence of spin-dependent effects. Despite these restrictions, the method has several advantages. First of all, the Wigner formalism is intuitive, as it is based on the concept of a quasi-distribution function. Secondly, the Monte Carlo numerical approach allows scalability on parallel machines that is practically unachievable by means of other techniques based on finite difference or finite element methods. Finally, this method allows time-dependent ab-initio simulations of strongly correlated quantum systems. In order to validate our many-body Wigner Monte Carlo method, as a case study we simulate a relatively simple system consisting of two particles in several different situations. We first start from two non-interacting free Gaussian wave packets. We, then, proceed with the inclusion of an external potential barrier, and we conclude by simulating two entangled (i.e. correlated) particles. The results show how, in the case of negligible spin-dependent effects, the many-body Wigner Monte Carlo method provides an efficient and reliable tool to study the time-dependent evolution of quantum systems composed of distinguishable particles.
Complementary Reliability-Based Decodings of Binary Linear Block Codes
NASA Technical Reports Server (NTRS)
Fossorier, Marc P. C.; Lin, Shu
1997-01-01
This correspondence presents a hybrid reliability-based decoding algorithm which combines the reprocessing method based on the most reliable basis and a generalized Chase-type algebraic decoder based on the least reliable positions. It is shown that reprocessing with a simple additional algebraic decoding effort achieves significant coding gain. For long codes, the order of reprocessing required to achieve asymptotic optimum error performance is reduced by approximately 1/3. This significantly reduces the computational complexity, especially for long codes. Also, a more efficient criterion for stopping the decoding process is derived based on the knowledge of the algebraic decoding solution.
2010-01-01
Background The primary aim of this study was to develop and psychometrically test a Greek-language instrument for measuring satisfaction with home care. The first empirical evidence about the level of satisfaction with these services in Greece is also provided. Methods The questionnaire resulted from literature search, on-site observation and cognitive interviews. It was applied in 2006 to a sample of 201 enrollees of five home care programs in the city of Thessaloniki and contains 31 items that measure satisfaction with individual service attributes and are expressed on a 5-point Likert scale. The latter has been usually considered in practice as an interval scale, although it is in principle ordinal. We thus treated the variable as an ordinal one, but also employed the traditional approach in order to compare the findings. Our analysis was therefore based on ordinal measures such as the polychoric correlation, Kendall's Tau b coefficient and ordinal Cronbach's alpha. Exploratory factor analysis was followed by an assessment of internal consistency reliability, test-retest reliability, construct validity and sensitivity. Results Analyses with ordinal and interval scale measures produced in essence very similar results and identified four multi-item scales. Three of these were found to be reliable and valid: socioeconomic change, staff skills and attitudes and service appropriateness. A fourth dimension -service planning- had lower internal consistency reliability and yet very satisfactory test-retest reliability, construct validity and floor and ceiling effects. The global satisfaction scale created was also quite reliable. Overall, participants were satisfied -yet not very satisfied- with home care services. More room for improvement seems to exist for the socio-economic and planning aspects of care and less for staff skills and attitudes and appropriateness of provided services. Conclusions The methods developed seem to be a promising tool for the measurement of home care satisfaction in Greece. PMID:20602759
Reliability of a survey tool for measuring consumer nutrition environment in urban food stores.
Hosler, Akiko S; Dharssi, Aliza
2011-01-01
Despite the increase in the volume and importance of food environment research, there is a general lack of reliable measurement tools. This study presents the development and reliability assessment of a tool for measuring consumer nutrition environment in urban food stores. Cross-sectional design. A racially diverse downtown portion (6 ZIP code areas) in Albany, New York. A sample of 39 food stores was visited by our research team in 2009 to 2010. These stores were randomly selected from 123 eligible food stores identified through multiple government lists and ground-truthing. The Food Retail Outlet Survey Tool was developed to assess the presence of selected food and nonfood items, placement, milk prices, physical characteristics of the store, policy implementation, and advertisements on outside windows. For in-store items, agreement of observations between experienced and lightly trained surveyors was assessed. For window advertisement assessments, inter-method agreement (on-site sketch vs digital photo), and inter-rater agreement (both on-site) among lightly trained surveyors were evaluated. Percent agreement, Kappa, and prevalence-adjusted bias-adjusted kappa were calculated for in-store observations. Interclass correlation coefficients were calculated for window observations. Twenty-seven of the 47 in-store items had 100% agreement. The prevalence-adjusted bias-adjusted kappa indicated excellent agreement (≥0.90) on all items, except aisle width (0.74) and dark-green/orange colored fresh vegetables (0.85). The store type (nonconvenience store), the order of visits (first half), and the time to complete survey (>10 minutes) were associated with lower reliability in these 2 items. Both the inter-method and inter-rater agreements for window advertisements were uniformly high (intraclass correlation coefficient ranged 0.94-1.00), indicating high reliability. The Food Retail Outlet Survey Tool is a reliable tool for quickly measuring consumer nutrition environment. It can be effectively used by an individual who attended a 30-minute group briefing and practiced with 3 to 4 stores.
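The agreement statistics named above (percent agreement, Cohen's kappa and prevalence-adjusted bias-adjusted kappa, PABAK) are quick to compute for a binary item rated by two surveyors. The sketch below uses fabricated ratings and illustrates why PABAK is preferred when an item is present in almost every store, since kappa is depressed by skewed prevalence.

```python
# Percent agreement, Cohen's kappa, and PABAK for two raters on a binary item (fabricated data).
import numpy as np

def agreement_stats(r1, r2):
    r1, r2 = np.asarray(r1), np.asarray(r2)
    po = np.mean(r1 == r2)                                 # observed agreement
    p1, p2 = r1.mean(), r2.mean()                          # proportion of "present" per rater
    pe = p1 * p2 + (1 - p1) * (1 - p2)                     # chance agreement
    kappa = (po - pe) / (1 - pe) if pe < 1 else 1.0
    pabak = 2 * po - 1                                     # PABAK for two raters, binary item
    return po, kappa, pabak

rater1 = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1]
rater2 = [1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1]
po, kappa, pabak = agreement_stats(rater1, rater2)
print(f"agreement = {po:.2f}, kappa = {kappa:.2f}, PABAK = {pabak:.2f}")
```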
Mkentane, K.; Gumedze, F.; Ngoepe, M.; Davids, L. M.; Khumalo, N. P.
2017-01-01
Introduction Curly hair is reported to contain higher lipid content than straight hair, which may influence incorporation of lipid soluble drugs. The use of race to describe hair curl variation (Asian, Caucasian and African) is unscientific yet common in medical literature (including reports of drug levels in hair). This study investigated the reliability of a geometric classification of hair (based on 3 measurements: the curve diameter, curl index and number of waves). Materials and methods After ethical approval and informed consent, proximal virgin (6cm) hair sampled from the vertex of scalp in 48 healthy volunteers were evaluated. Three raters each scored hairs from 48 volunteers at two occasions each for the 8 and 6-group classifications. One rater applied the 6-group classification to 80 additional volunteers in order to further confirm the reliability of this system. The Kappa statistic was used to assess intra and inter rater agreement. Results Each rater classified 480 hairs on each occasion. No rater classified any volunteer’s 10 hairs into the same group; the most frequently occurring group was used for analysis. The inter-rater agreement was poor for the 8-groups (k = 0.418) but improved for the 6-groups (k = 0.671). The intra-rater agreement also improved (k = 0.444 to 0.648 versus 0.599 to 0.836) for 6-groups; that for the one evaluator for all volunteers was good (k = 0.754). Conclusions Although small, this is the first study to test the reliability of a geometric classification. The 6-group method is more reliable. However, a digital classification system is likely to reduce operator error. A reliable objective classification of human hair curl is long overdue, particularly with the increasing use of hair as a testing substrate for treatment compliance in Medicine. PMID:28570555
ERIC Educational Resources Information Center
Stringfield, Sam; Reynolds, David; Schaffer, Eugene
2016-01-01
This chapter presents data from a 15-year, mixed-methods school improvement effort. The High Reliability Schools (HRS) reform made use of previous research on school effects and on High Reliability Organizations (HROs). HROs are organizations in various parts of our cultures that are required to operate successfully "the first time, every…
Reliability evaluation of microgrid considering incentive-based demand response
NASA Astrophysics Data System (ADS)
Huang, Ting-Cheng; Zhang, Yong-Jun
2017-07-01
Incentive-based demand response (IBDR) can guide customers to adjust their electricity consumption behaviour and curtail load actively. Meanwhile, distributed generation (DG) and energy storage systems (ESS) can provide time for the implementation of IBDR. The paper focuses on the reliability evaluation of a microgrid considering IBDR. Firstly, the mechanism of IBDR and its impact on power supply reliability are analysed. Secondly, the IBDR dispatch model considering the customer’s comprehensive assessment and the customer response model are developed. Thirdly, the reliability evaluation method considering IBDR based on Monte Carlo simulation is proposed. Finally, the validity of the above models and method is studied through numerical tests on the modified RBTS Bus6 test system. Simulation results demonstrated that IBDR can improve the reliability of the microgrid.
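A heavily simplified sequential Monte Carlo sketch of this kind of evaluation is shown below: a single supply resource with exponential failure and repair times, with incentive-based demand response curtailing a fixed share of load during repairs. All rates and the curtailable share are invented, and the paper's dispatch and customer response models are not reproduced.

```python
# Toy sequential Monte Carlo reliability sketch with and without demand response (invented numbers).
import numpy as np

rng = np.random.default_rng(7)
LAMBDA, MU = 4.0 / 8760.0, 1.0 / 12.0     # failure rate (1/h) and repair rate (1/h)
LOAD, DR_SHARE = 1.0, 0.3                 # per-unit load; fraction curtailable via IBDR
HOURS, N_YEARS = 8760, 2000

ens_no_dr, ens_dr = 0.0, 0.0              # energy-not-supplied accumulators (pu*h)
for _ in range(N_YEARS):
    t = 0.0
    while t < HOURS:
        t += rng.exponential(1.0 / LAMBDA)            # time to next failure
        if t >= HOURS:
            break
        repair = rng.exponential(1.0 / MU)            # outage duration
        outage = min(repair, HOURS - t)
        ens_no_dr += LOAD * outage                    # without demand response
        ens_dr += LOAD * (1.0 - DR_SHARE) * outage    # IBDR curtails part of the load
        t += repair

print(f"EENS without IBDR: {ens_no_dr / N_YEARS:.2f} pu*h/yr")
print(f"EENS with IBDR   : {ens_dr / N_YEARS:.2f} pu*h/yr")
```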
Applying reliability analysis to design electric power systems for More-electric aircraft
NASA Astrophysics Data System (ADS)
Zhang, Baozhu
The More-Electric Aircraft (MEA) is a type of aircraft that replaces conventional hydraulic and pneumatic systems with electrically powered components. These changes have significantly challenged the aircraft electric power system design. This thesis investigates how reliability analysis can be applied to automatically generate system topologies for the MEA electric power system. We first use a traditional method of reliability block diagrams to analyze the reliability level on different system topologies. We next propose a new methodology in which system topologies, constrained by a set reliability level, are automatically generated. The path-set method is used for analysis. Finally, we interface these sets of system topologies with control synthesis tools to automatically create correct-by-construction control logic for the electric power system.
Development and Validation of a Measure of Quality of Life for the Young Elderly in Sri Lanka.
de Silva, Sudirikku Hennadige Padmal; Jayasuriya, Anura Rohan; Rajapaksa, Lalini Chandika; de Silva, Ambepitiyawaduge Pubudu; Barraclough, Simon
2016-01-01
Sri Lanka has one of the fastest aging populations in the world. Measurement of quality of life (QoL) in the elderly needs instruments developed that encompass the sociocultural settings. An instrument was developed to measure QoL in the young elderly in Sri Lanka (QLI-YES), using accepted methods to generate and reduce items. The measure was validated using a community sample. Construct, criterion and predictive validity and reliability were tested. A first-order model of 24 items with 6 domains was found to have good fit indices (CMIN/df = 1.567, RMR = 0.05, CFI = 0.95, and RMSEA = 0.053). Both criterion and predictive validity were demonstrated. Good internal consistency reliability (Cronbach's α = 0.93) was shown. The development of the QLI-YES using a societal perspective relevant to the social and cultural beliefs has resulted in a robust and valid instrument to measure QoL for the young elderly in Sri Lanka. © 2015 APJPH.
Development and Validation of a Measure of Quality of Life for the Young Elderly in Sri Lanka
de Silva, Sudirikku Hennadige Padmal; Jayasuriya, Anura Rohan; Rajapaksa, Lalini Chandika; de Silva, Ambepitiyawaduge Pubudu; Barraclough, Simon
2016-01-01
Sri Lanka has one of the fastest aging populations in the world. Measurement of quality of life (QoL) in the elderly needs instruments developed that encompass the sociocultural settings. An instrument was developed to measure QoL in the young elderly in Sri Lanka (QLI-YES), using accepted methods to generate and reduce items. The measure was validated using a community sample. Construct, criterion and predictive validity and reliability were tested. A first-order model of 24 items with 6 domains was found to have good fit indices (CMIN/df = 1.567, RMR = 0.05, CFI = 0.95, and RMSEA = 0.053). Both criterion and predictive validity were demonstrated. Good internal consistency reliability (Cronbach’s α = 0.93) was shown. The development of the QLI-YES using a societal perspective relevant to the social and cultural beliefs has resulted in a robust and valid instrument to measure QoL for the young elderly in Sri Lanka. PMID:26712893
Dümichen, Erik; Eisentraut, Paul; Bannick, Claus Gerhard; Barthel, Anne-Kathrin; Senz, Rainer; Braun, Ulrike
2017-05-01
In order to determine the relevance of microplastic particles in various environmental media, comprehensive investigations are needed. However, no analytical method exists for fast identification and quantification. At present, optical spectroscopy methods like IR and Raman imaging are used. Due to their time consuming procedures and uncertain extrapolation, reliable monitoring is difficult. For analyzing polymers, Py-GC-MS is a standard method. However, due to a limited sample amount of about 0.5 mg, it is not suited for analysis of complex sample mixtures like environmental samples. Therefore, we developed a new thermoanalytical method as a first step for identifying microplastics in environmental samples. A sample amount of about 20 mg, which assures the homogeneity of the sample, is subjected to complete thermal decomposition. The specific degradation products of the respective polymer are adsorbed on a solid-phase adsorber and subsequently analyzed by thermal desorption gas chromatography mass spectrometry. For certain identification, the specific degradation products for the respective polymer were selected first. Afterwards, real environmental samples from the aquatic (three different rivers) and the terrestrial (biogas plant) systems were screened for microplastics. Mainly polypropylene (PP), polyethylene (PE) and polystyrene (PS) were identified for the samples from the biogas plant, and PE and PS from the rivers. However, this was only the first step and quantification measurements will follow.
Task Decomposition in Human Reliability Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boring, Ronald Laurids; Joe, Jeffrey Clark
2014-06-01
In the probabilistic safety assessments (PSAs) used in the nuclear industry, human failure events (HFEs) are determined as a subset of hardware failures, namely those hardware failures that could be triggered by human action or inaction. This approach is top-down, starting with hardware faults and deducing human contributions to those faults. Elsewhere, more traditionally human factors driven approaches would tend to look at opportunities for human errors first in a task analysis and then identify which of those errors is risk significant. The intersection of top-down and bottom-up approaches to defining HFEs has not been carefully studied. Ideally, both approaches should arrive at the same set of HFEs. This question remains central as human reliability analysis (HRA) methods are generalized to new domains like oil and gas. The HFEs used in nuclear PSAs tend to be top-down, defined as a subset of the PSA, whereas the HFEs used in petroleum quantitative risk assessments (QRAs) are more likely to be bottom-up, derived from a task analysis conducted by human factors experts. The marriage of these approaches is necessary in order to ensure that HRA methods developed for top-down HFEs are also sufficient for bottom-up applications.
Llamas, José M.; Cibrián, Rosa; Gandia, José L.; Paredes, Vanessa
2012-01-01
Objectives: Cone Beam Computerized Tomography (CBCT) allows the possibility of modifying some of the diagnostic tools used in orthodontics, such as cephalometry. The first step must be to study the characteristics of these devices in terms of accuracy and reliability of the most commonly used landmarks. The aims were 1- To assess intra- and inter-observer reliability in the location of anatomical landmarks belonging to hard tissues of the skull in images taken with a CBCT device, 2- To determine which of those landmarks are more vs. less reliable and 3- To introduce planes of reference so as to create cephalometric analyses appropriate to the 3D reality. Study design: Fifteen patients who had a CBCT (i-CAT®) as a diagnostic register were selected. To assess the reproducibility of landmark location and the differences in the measurements of two observers at different times, 41 landmarks were defined on the three spatial axes (X,Y,Z) and located. 3,690 measurements were taken and, as each determination has 3 coordinates, 11,070 data points were processed with the SPSS® statistical package. To discover the reproducibility of the method on landmark location, an ANOVA was undertaken using two variation factors: time (t1, t2 and t3) and observer (Ob1 and Ob2) for each axis (X, Y and Z) and landmark. The order of the CBCT scans submitted to the observers (Ob1, Ob2) at t1, t2, and t3 was different and randomly allocated. Multiple comparisons were undertaken using the Bonferroni test. The intra- and inter-examiner ICCs were calculated. Results: Intra- and inter-examiner reliability was high, both being ICC ≥ 0.99, with the best values on axis Z. Conclusions: The most reliable landmarks were: Nasion, Sella, Basion, left Porion, point A, anterior nasal spine, Pogonion, Gnathion, Menton, frontozygomatic sutures, first lower molars and upper and lower incisors. Those with less reliability were the supraorbitals, right zygion and posterior nasal spine. Key words: Cone Beam Computed Tomography, cephalometry, landmark, orthodontics, reliability. PMID:22322503
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kilpatrick, Brian M.; Tucker, Gregory S.; Lewis, Nikole K.
2017-01-01
We measure the 4.5 μm thermal emission of five transiting hot Jupiters, WASP-13b, WASP-15b, WASP-16b, WASP-62b, and HAT-P-22b using channel 2 of the Infrared Array Camera (IRAC) on the Spitzer Space Telescope. Significant intrapixel sensitivity variations in Spitzer IRAC data require careful correction in order to achieve precision on the order of several hundred parts per million (ppm) for the measurement of exoplanet secondary eclipses. We determine eclipse depths by first correcting the raw data using three independent data reduction methods. The Pixel Gain Map (PMAP), Nearest Neighbors (NNBR), and Pixel Level Decorrelation (PLD) each correct for the intrapixel sensitivity effect in Spitzer photometric time-series observations. The results from each methodology are compared against each other to establish if they reach a statistically equivalent result in every case and to evaluate their ability to minimize uncertainty in the measurement. We find that all three methods produce reliable results. For every planet examined here NNBR and PLD produce results that are in statistical agreement. However, the PMAP method appears to produce results in slight disagreement in cases where the stellar centroid is not kept consistently on the most well characterized area of the detector. We evaluate the ability of each method to reduce the scatter in the residuals as well as in the correlated noise in the corrected data. The NNBR and PLD methods consistently minimize both white and red noise levels and should be considered reliable and consistent. The planets in this study span equilibrium temperatures from 1100 to 2000 K and have brightness temperatures that require either high albedo or efficient recirculation. However, it is possible that other processes such as clouds or disequilibrium chemistry may also be responsible for producing these brightness temperatures.
NASA Astrophysics Data System (ADS)
Kilpatrick, Brian M.; Lewis, Nikole K.; Kataria, Tiffany; Deming, Drake; Ingalls, James G.; Krick, Jessica E.; Tucker, Gregory S.
2017-01-01
We measure the 4.5 μm thermal emission of five transiting hot Jupiters, WASP-13b, WASP-15b, WASP-16b, WASP-62b, and HAT-P-22b using channel 2 of the Infrared Array Camera (IRAC) on the Spitzer Space Telescope. Significant intrapixel sensitivity variations in Spitzer IRAC data require careful correction in order to achieve precision on the order of several hundred parts per million (ppm) for the measurement of exoplanet secondary eclipses. We determine eclipse depths by first correcting the raw data using three independent data reduction methods. The Pixel Gain Map (PMAP), Nearest Neighbors (NNBR), and Pixel Level Decorrelation (PLD) each correct for the intrapixel sensitivity effect in Spitzer photometric time-series observations. The results from each methodology are compared against each other to establish if they reach a statistically equivalent result in every case and to evaluate their ability to minimize uncertainty in the measurement. We find that all three methods produce reliable results. For every planet examined here NNBR and PLD produce results that are in statistical agreement. However, the PMAP method appears to produce results in slight disagreement in cases where the stellar centroid is not kept consistently on the most well characterized area of the detector. We evaluate the ability of each method to reduce the scatter in the residuals as well as in the correlated noise in the corrected data. The NNBR and PLD methods consistently minimize both white and red noise levels and should be considered reliable and consistent. The planets in this study span equilibrium temperatures from 1100 to 2000 K and have brightness temperatures that require either high albedo or efficient recirculation. However, it is possible that other processes such as clouds or disequilibrium chemistry may also be responsible for producing these brightness temperatures.
Uncertainty characterization approaches for risk assessment of DBPs in drinking water: a review.
Chowdhury, Shakhawat; Champagne, Pascale; McLellan, P James
2009-04-01
The management of risk from disinfection by-products (DBPs) in drinking water has become a critical issue over the last three decades. The areas of concern for risk management studies include (i) human health risk from DBPs, (ii) disinfection performance, (iii) technical feasibility (maintenance, management and operation) of treatment and disinfection approaches, and (iv) cost. Human health risk assessment is typically considered to be the most important phase of the risk-based decision-making or risk management studies. The factors associated with health risk assessment and other attributes are generally prone to considerable uncertainty. Probabilistic and non-probabilistic approaches have both been employed to characterize uncertainties associated with risk assessment. The probabilistic approaches include sampling-based methods (typically Monte Carlo simulation and stratified sampling) and asymptotic (approximate) reliability analysis (first- and second-order reliability methods). Non-probabilistic approaches include interval analysis, fuzzy set theory and possibility theory. However, it is generally accepted that no single method is suitable for the entire spectrum of problems encountered in uncertainty analyses for risk assessment. Each method has its own set of advantages and limitations. In this paper, the feasibility and limitations of different uncertainty analysis approaches are outlined for risk management studies of drinking water supply systems. The findings assist in the selection of suitable approaches for uncertainty analysis in risk management studies associated with DBPs and human health risk.
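To make the sampling-based option described above concrete, the following minimal Python sketch estimates an exceedance probability for a simple multiplicative exposure model by Monte Carlo simulation. All distributions, the risk model and the 1e-5 target are hypothetical placeholders for illustration only; they are not values or models taken from the review.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical input distributions (placeholders, not data from the review)
conc = rng.lognormal(mean=np.log(50.0), sigma=0.4, size=n)    # DBP concentration, ug/L
intake = rng.lognormal(mean=np.log(2.0), sigma=0.3, size=n)   # daily water intake, L/day
weight = rng.normal(70.0, 10.0, size=n)                       # body weight, kg
slope = 1.5e-5                                                 # assumed slope factor, (ug/kg/day)^-1

risk = conc * intake * slope / weight                          # simple multiplicative risk model
print("P(risk > 1e-5) ~", np.mean(risk > 1e-5))                # Monte Carlo exceedance probability

The same exceedance probability could instead be approximated with a first-order reliability method by linearizing the log of the risk at a design point, trading sampling effort for the accuracy limitations noted in the review.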
Pérez-Lozano, P; García-Montoya, E; Orriols, A; Miñarro, M; Ticó, J R; Suñé-Negre, J M
2005-10-04
A new HPLC-RP method has been developed and validated for the simultaneous determination of benzocaine, two preservatives (propylparaben (nipasol) and benzyl alcohol) and degradation products of benzocaine in a semisolid pharmaceutical dosage form (benzocaine gel). The method uses a Nucleosil 120 C18 column and gradient elution. The mobile phase consisted of a mixture of methanol and glacial acetic acid (10%, v/v) at different proportions according to a time-schedule programme, pumped at a flow rate of 2.0 ml min(-1). The DAD detector was set at 258 nm. The validation study was carried out following the ICH guidelines in order to prove that the new analytical method meets the reliability characteristics, and these characteristics showed the capacity of the analytical method to maintain, over time, the fundamental criteria for validation: selectivity, linearity, precision, accuracy and sensitivity. The method was applied during the quality control of benzocaine gel in order to quantify the drug (benzocaine), preservatives and degradation products, and proved to be a rapid and reliable quality control method.
Real-Time GNSS-Based Attitude Determination in the Measurement Domain.
Zhao, Lin; Li, Na; Li, Liang; Zhang, Yi; Cheng, Chun
2017-02-05
A multi-antenna-based GNSS receiver is capable of providing a high-precision and drift-free attitude solution. Carrier phase measurements need to be utilized to achieve high-precision attitude. The traditional attitude determination methods in the measurement domain and the position domain resolve the attitude and the ambiguity sequentially. The redundant measurements from multiple baselines have not been fully utilized to enhance the reliability of attitude determination. A multi-baseline-based attitude determination method in the measurement domain is proposed to estimate the attitude parameters and the ambiguity simultaneously. Meanwhile, the redundancy of attitude resolution has also been increased so that the reliability of ambiguity resolution and attitude determination can be enhanced. Moreover, in order to further improve the reliability of attitude determination, we propose a partial ambiguity resolution method based on the proposed attitude determination model. Static and kinematic experiments were conducted to verify the performance of the proposed method. When compared with the traditional attitude determination methods, the static experimental results show that the proposed method can improve the accuracy by at least 0.03° and enhance the continuity by 18%, at most. The kinematic result has shown that the proposed method can obtain an optimal balance between accuracy and reliability performance.
A third-order approximation method for three-dimensional wheel-rail contact
NASA Astrophysics Data System (ADS)
Negretti, Daniele
2012-03-01
Multibody train analysis is used increasingly by railway operators whenever a reliable and time-efficient method to evaluate the contact between wheel and rail is needed; particularly, the wheel-rail contact is one of the most important aspects that affects a reliable and time-efficient vehicle dynamics computation. The focus of the approach proposed here is to carry out such tasks by means of online wheel-rail elastic contact detection. In order to improve efficiency and save time, a main analytical approach is used for the definition of wheel and rail surfaces as well as for contact detection, then a final numerical evaluation is used to locate contact. The final numerical procedure consists in finding the zeros of a nonlinear function in a single variable. The overall method is based on the approximation of the wheel surface, which does not influence the contact location significantly, as shown in the paper.
Ganasegeran, Kurubaran; Selvaraj, Kamaraj; Rashid, Abdul
2017-01-01
Background The six item Confusion, Hubbub and Order Scale (CHAOS-6) has been validated as a reliable tool to measure levels of household disorder. We aimed to investigate the goodness of fit and reliability of a new Malay version of the CHAOS-6. Methods The original English version of the CHAOS-6 underwent forward-backward translation into the Malay language. The finalised Malay version was administered to 105 myocardial infarction survivors in a Malaysian cardiac health facility. We performed confirmatory factor analyses (CFAs) using structural equation modelling. A path diagram and fit statistics were yielded to determine the Malay version’s validity. Composite reliability was tested to determine the scale’s reliability. Results All 105 myocardial infarction survivors participated in the study. The CFA yielded a six-item, one-factor model with excellent fit statistics. Composite reliability for the single factor CHAOS-6 was 0.65, confirming that the scale is reliable for Malay speakers. Conclusion The Malay version of the CHAOS-6 was reliable and showed the best fit statistics for our study sample. We thus offer a simple, brief, validated, reliable and novel instrument to measure chaos, the Skala Kecelaruan, Keriuhan & Tertib Terubahsuai (CHAOS-6), for the Malaysian population. PMID:28951688
Comparison in the quality of distractors in three and four options type of multiple choice questions
Rahma, Nourelhouda A A; Shamad, Mahdi M A; Idris, Muawia E A; Elfaki, Omer Abdelgadir; Elfakey, Walyedldin E M; Salih, Karimeldin M A
2017-01-01
Introduction The number of distractors needed for high quality multiple choice questions (MCQs) will be determined by many factors. These include firstly whether English is the examinees' mother tongue or a foreign language; secondly whether the instructors who construct the questions are experts or not; thirdly the time spent on constructing the options is also an important factor. It has been observed by Tarrant et al that more time is often spent on constructing questions than on tailoring sound, reliable, and valid distractors. Objectives Firstly, to investigate the effects of reducing the number of options on the psychometric properties of the item. Secondly, to determine the frequency of functioning distractors among three- or four-option MCQs in the examination of the dermatology course in University of Bahri, College of Medicine. Materials and methods This is an experimental study performed by means of a dermatology MCQ exam. Forty MCQs, with one correct answer for each question, were constructed. Two sets of this exam paper were prepared: in the first one, four options were given, including one key answer and three distractors. In the second set, one of the three distractors was deleted randomly, and the sequence of the questions was kept in the same order. Any distractor chosen by less than 5% of the students was regarded as non-functioning. Kuder-Richardson Formula 20 (Kr-20) measures the internal consistency and reliability of an examination, with an acceptable range of 0.8–1.0. The chi square test was used to compare the distractors in the two exams. Results A significant difference was observed in the discrimination and difficulty indexes for both sets of MCQs. More distractors were non-functional for set one (of four options), but it was slightly more reliable. The reliability (Kr-20) was slightly higher for set one (of four options). The average marks in the three- and four-option sets were 34.163 and 33.140, respectively. Conclusion Compared to set 1 (of four options), set 2 (of three options) was more discriminating and associated with a lower difficulty index, but its reliability was lower. PMID:28442942
Wagner, Brian J.; Gorelick, Steven M.
1986-01-01
A simulation nonlinear multiple-regression methodology for estimating parameters that characterize the transport of contaminants is developed and demonstrated. Finite difference contaminant transport simulation is combined with a nonlinear weighted least squares multiple-regression procedure. The technique provides optimal parameter estimates and gives statistics for assessing the reliability of these estimates under certain general assumptions about the distributions of the random measurement errors. Monte Carlo analysis is used to estimate parameter reliability for a hypothetical homogeneous soil column for which concentration data contain large random measurement errors. The value of data collected spatially versus data collected temporally was investigated for estimation of velocity, dispersion coefficient, effective porosity, first-order decay rate, and zero-order production. The use of spatial data gave estimates that were 2–3 times more reliable than estimates based on temporal data for all parameters except velocity. Comparison of estimated linear and nonlinear confidence intervals based upon Monte Carlo analysis showed that the linear approximation is poor for the dispersion coefficient and the zero-order production coefficient when data are collected over time. In addition, examples demonstrate transport parameter estimation for two real one-dimensional systems. First, the longitudinal dispersivity and effective porosity of an unsaturated soil are estimated using laboratory column data. We compare the reliability of estimates based upon data from individual laboratory experiments versus estimates based upon pooled data from several experiments. Second, the simulation nonlinear regression procedure is extended to include an additional governing equation that describes delayed storage during contaminant transport. The model is applied to analyze the trends, variability, and interrelationship of parameters in a mountain stream in northern California.
The cosmic QCD phase transition with dense matter and its gravitational waves from holography
NASA Astrophysics Data System (ADS)
Ahmadvand, M.; Bitaghsir Fadafan, K.
2018-04-01
Consistent with cosmological constraints, there are scenarios with a large lepton asymmetry which can lead to a finite baryochemical potential at the cosmic QCD phase transition scale. In this paper, we investigate this possibility in holographic models. Using the holographic renormalization method, we find the first-order Hawking-Page phase transition, between the Reissner-Nordström AdS black hole and thermal charged AdS space, corresponding to the de/confinement phase transition. We obtain the gravitational wave spectra generated during the evolution of bubbles for a range of the bubble wall velocity and examine the reliability of the scenarios and consequent calculations by gravitational wave experiments.
Review of evaluation on ecological carrying capacity: The progress and trend of methodology
NASA Astrophysics Data System (ADS)
Wang, S. F.; Xu, Y.; Liu, T. J.; Ye, J. M.; Pan, B. L.; Chu, C.; Peng, Z. L.
2018-02-01
The ecological carrying capacity (ECC) has been regarded as an important reference to indicate the level of regional sustainable development since the very beginning of the twenty-first century. Through a brief review of the main progress in ECC evaluation methodologies over the recent five years, this paper systematically discusses the features and differences of these methods and expounds the current state and future development trend of ECC methodology. The result shows that further exploration of dynamic, comprehensive and intelligent assessment technologies is needed in order to form a unified and scientific ECC methodology system and to produce a reliable basis for environmental-economic decision-making.
The quality of orthodontic practice websites.
Parekh, J; Gill, D S
2014-05-01
To evaluate orthodontic practice websites for the reliability of information presented, accessibility, usability for patients and compliance with General Dental Council (GDC) regulations on ethical advertising. World Wide Web. The term 'orthodontic practice' was entered into three separate search engines. The 30 websites from the UK were selected and graded according to the LIDA tool (a validated method of evaluating healthcare websites) for accessibility, usability of the website and reliability of information on orthodontic treatment. The websites were then evaluated against the GDC's Principles for ethical advertising in nine different criteria. On average, each website fulfilled six out of nine points of the GDC's criteria, with inclusion of a complaints policy being the most poorly fulfilled criterion. The mean LIDA score (a combination of usability, reliability and accessibility) was 102/144 (standard deviation 8.38). The websites scored most poorly on reliability (average 43%, SD 11.7), with no single website reporting a clear, reliable method of content production. Average accessibility was 81% and usability 73%. In general, websites did not comply with GDC guidelines on ethical advertising. Furthermore, practitioners should consider reporting their method of information production, particularly when making claims about efficiency and speed of treatment, in order to improve reliability.
A GIS-based extended fuzzy multi-criteria evaluation for landslide susceptibility mapping
NASA Astrophysics Data System (ADS)
Feizizadeh, Bakhtiar; Shadman Roodposhti, Majid; Jankowski, Piotr; Blaschke, Thomas
2014-12-01
Landslide susceptibility mapping (LSM) is making increasing use of GIS-based spatial analysis in combination with multi-criteria evaluation (MCE) methods. We have developed a new multi-criteria decision analysis (MCDA) method for LSM and applied it to the Izeh River basin in south-western Iran. Our method is based on fuzzy membership functions (FMFs) derived from GIS analysis. It makes use of nine causal landslide factors identified by local landslide experts. Fuzzy set theory was first integrated with an analytical hierarchy process (AHP) in order to use pairwise comparisons to compare LSM criteria for ranking purposes. FMFs were then applied in order to determine the criteria weights to be used in the development of a landslide susceptibility map. Finally, a landslide inventory database was used to validate the LSM map by comparing it with known landslides within the study area. Results indicated that the integration of fuzzy set theory with AHP produced significantly improved accuracies and a high level of reliability in the resulting landslide susceptibility map. Approximately 53% of known landslides within our study area fell within zones classified as having "very high susceptibility", with the further 31% falling into zones classified as having "high susceptibility".
A GIS-based extended fuzzy multi-criteria evaluation for landslide susceptibility mapping
Feizizadeh, Bakhtiar; Shadman Roodposhti, Majid; Jankowski, Piotr; Blaschke, Thomas
2014-01-01
Landslide susceptibility mapping (LSM) is making increasing use of GIS-based spatial analysis in combination with multi-criteria evaluation (MCE) methods. We have developed a new multi-criteria decision analysis (MCDA) method for LSM and applied it to the Izeh River basin in south-western Iran. Our method is based on fuzzy membership functions (FMFs) derived from GIS analysis. It makes use of nine causal landslide factors identified by local landslide experts. Fuzzy set theory was first integrated with an analytical hierarchy process (AHP) in order to use pairwise comparisons to compare LSM criteria for ranking purposes. FMFs were then applied in order to determine the criteria weights to be used in the development of a landslide susceptibility map. Finally, a landslide inventory database was used to validate the LSM map by comparing it with known landslides within the study area. Results indicated that the integration of fuzzy set theory with AHP produced significantly improved accuracies and a high level of reliability in the resulting landslide susceptibility map. Approximately 53% of known landslides within our study area fell within zones classified as having “very high susceptibility”, with the further 31% falling into zones classified as having “high susceptibility”. PMID:26089577
Kim, Young Jae; Kim, Kwang Gi
2018-01-01
Existing drusen measurement is difficult to use in clinic because it requires a lot of time and effort for visual inspection. In order to resolve this problem, we propose an automatic drusen detection method to help clinical diagnosis of age-related macular degeneration. First, we changed the fundus image to a green channel and extracted the ROI of the macular area based on the optic disk. Next, we detected the candidate group using the difference image of the median filter within the ROI. We also segmented vessels and removed them from the image. Finally, we detected the drusen through Renyi's entropy threshold algorithm. We performed comparisons and statistical analysis between the manual detection results and automatic detection results for 30 cases in order to verify validity. As a result, the average sensitivity was 93.37% (80.95%~100%) and the average DSC was 0.73 (0.3~0.98). In addition, the value of the ICC was 0.984 (CI: 0.967~0.993, p < 0.01), showing the high reliability of the proposed automatic method. We expect that the automatic drusen detection helps clinicians to improve the diagnostic performance in the detection of drusen on fundus image.
A GIS-based extended fuzzy multi-criteria evaluation for landslide susceptibility mapping.
Feizizadeh, Bakhtiar; Shadman Roodposhti, Majid; Jankowski, Piotr; Blaschke, Thomas
2014-12-01
Landslide susceptibility mapping (LSM) is making increasing use of GIS-based spatial analysis in combination with multi-criteria evaluation (MCE) methods. We have developed a new multi-criteria decision analysis (MCDA) method for LSM and applied it to the Izeh River basin in south-western Iran. Our method is based on fuzzy membership functions (FMFs) derived from GIS analysis. It makes use of nine causal landslide factors identified by local landslide experts. Fuzzy set theory was first integrated with an analytical hierarchy process (AHP) in order to use pairwise comparisons to compare LSM criteria for ranking purposes. FMFs were then applied in order to determine the criteria weights to be used in the development of a landslide susceptibility map. Finally, a landslide inventory database was used to validate the LSM map by comparing it with known landslides within the study area. Results indicated that the integration of fuzzy set theory with AHP produced significantly improved accuracies and a high level of reliability in the resulting landslide susceptibility map. Approximately 53% of known landslides within our study area fell within zones classified as having "very high susceptibility", with the further 31% falling into zones classified as having "high susceptibility".
Improved first-order uncertainty method for water-quality modeling
Melching, C.S.; Anmangandla, S.
1992-01-01
Uncertainties are unavoidable in water-quality modeling and subsequent management decisions. Monte Carlo simulation and first-order uncertainty analysis (involving linearization at central values of the uncertain variables) have been frequently used to estimate probability distributions for water-quality model output due to their simplicity. Each method has its drawbacks: that of Monte Carlo simulation is mainly computational time; those of first-order analysis are mainly questions of accuracy and representativeness, especially for nonlinear systems and extreme conditions. An improved (advanced) first-order method is presented, where the linearization point varies to match the output level whose exceedance probability is sought. The advanced first-order method is tested on the Streeter-Phelps equation to estimate the probability distribution of critical dissolved-oxygen deficit and critical dissolved oxygen using two hypothetical examples from the literature. The advanced first-order method provides a close approximation of the exceedance probability for the Streeter-Phelps model output estimated by Monte Carlo simulation while using about two orders of magnitude less computer time, regardless of the probability distributions assumed for the uncertain model parameters.
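As a rough illustration of the comparison described above, the sketch below evaluates the Streeter-Phelps critical dissolved-oxygen deficit by brute-force Monte Carlo and at the mean parameter values. The lognormal parameter distributions and the 6 mg/L threshold are made-up placeholders, not the hypothetical examples used in the paper, and the advanced linearization-point search itself is not reproduced here.

import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Made-up lognormal parameter distributions (placeholders only)
kd = rng.lognormal(np.log(0.30), 0.2, n)    # deoxygenation rate, 1/day
ka = rng.lognormal(np.log(0.70), 0.2, n)    # reaeration rate, 1/day
L0 = rng.lognormal(np.log(15.0), 0.15, n)   # initial BOD, mg/L

def critical_deficit(kd, ka, L0):
    # Streeter-Phelps critical deficit, assuming zero initial deficit
    tc = np.log(ka / kd) / (ka - kd)          # time of the critical point
    return (kd / ka) * L0 * np.exp(-kd * tc)

Dc = critical_deficit(kd, ka, L0)
print("P(Dc > 6 mg/L), Monte Carlo:", np.mean(Dc > 6.0))
print("Dc at mean parameter values:", critical_deficit(kd.mean(), ka.mean(), L0.mean()))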
A Rewritable, Random-Access DNA-Based Storage System.
Yazdi, S M Hossein Tabatabaei; Yuan, Yongbo; Ma, Jian; Zhao, Huimin; Milenkovic, Olgica
2015-09-18
We describe the first DNA-based storage architecture that enables random access to data blocks and rewriting of information stored at arbitrary locations within the blocks. The newly developed architecture overcomes drawbacks of existing read-only methods that require decoding the whole file in order to read one data fragment. Our system is based on new constrained coding techniques and accompanying DNA editing methods that ensure data reliability, specificity and sensitivity of access, and at the same time provide exceptionally high data storage capacity. As a proof of concept, we encoded parts of the Wikipedia pages of six universities in the USA, and selected and edited parts of the text written in DNA corresponding to three of these schools. The results suggest that DNA is a versatile media suitable for both ultrahigh density archival and rewritable storage applications.
A Rewritable, Random-Access DNA-Based Storage System
NASA Astrophysics Data System (ADS)
Tabatabaei Yazdi, S. M. Hossein; Yuan, Yongbo; Ma, Jian; Zhao, Huimin; Milenkovic, Olgica
2015-09-01
We describe the first DNA-based storage architecture that enables random access to data blocks and rewriting of information stored at arbitrary locations within the blocks. The newly developed architecture overcomes drawbacks of existing read-only methods that require decoding the whole file in order to read one data fragment. Our system is based on new constrained coding techniques and accompanying DNA editing methods that ensure data reliability, specificity and sensitivity of access, and at the same time provide exceptionally high data storage capacity. As a proof of concept, we encoded parts of the Wikipedia pages of six universities in the USA, and selected and edited parts of the text written in DNA corresponding to three of these schools. The results suggest that DNA is a versatile media suitable for both ultrahigh density archival and rewritable storage applications.
Computationally efficient stochastic optimization using multiple realizations
NASA Astrophysics Data System (ADS)
Bayer, P.; Bürger, C. M.; Finkel, M.
2008-02-01
The presented study is concerned with computationally efficient methods for solving stochastic optimization problems involving multiple equally probable realizations of uncertain parameters. A new and straightforward technique is introduced that is based on dynamically ordering the stack of realizations during the search procedure. The rationale is that a small number of critical realizations govern the output of a reliability-based objective function. Using a problem typical of designing a water supply well field, several variants of this "stack ordering" approach are tested. The results are statistically assessed in terms of optimality and nominal reliability. This study demonstrates that the simple ordering of a given number of 500 realizations while applying an evolutionary search algorithm can save about half of the model runs without compromising the optimization procedure. More advanced variants of stack ordering can, if properly configured, save more than 97% of the computational effort that would be required if the entire number of realizations were considered. The findings herein are promising for similar problems of water management and reliability-based design in general, and particularly for non-convex problems that require heuristic search techniques.
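The sketch below illustrates one plausible reading of the stack-ordering idea: each candidate design is checked against the realizations in their current stack order, the check aborts as soon as the allowed number of failures is exceeded, and failing (critical) realizations are promoted to the front of the stack for the next candidate. The failure rule, the 500 uniform realizations and the 95% nominal reliability are invented for illustration and do not reproduce the well-field problem of the study.

import random

random.seed(0)

def fails(design, realization):
    # Placeholder for an expensive model run: here the design simply fails
    # whenever the uncertain parameter exceeds the design value.
    return realization > design

def check_candidate(design, stack, max_failures):
    # Evaluate the candidate against the ordered stack with early abort,
    # then move the critical (failing) realizations to the front.
    failed = []
    runs = 0
    ok = True
    for r in stack:
        runs += 1
        if fails(design, r):
            failed.append(r)
            if len(failed) > max_failures:
                ok = False
                break
    stack[:] = failed + [r for r in stack if r not in failed]   # dynamic reordering
    return ok, runs

stack = [random.random() for _ in range(500)]          # 500 equally probable realizations
print(check_candidate(0.97, stack, max_failures=25))   # allows 5% failures (95% nominal reliability)
print(check_candidate(0.80, stack, max_failures=25))   # a poor candidate is rejected after far fewer runs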
Duterme, Sophie; Vanhoof, Raymond; Vanderpas, Jean; Pierard, Denis; Huygen, Kris
2016-04-01
Report on the pitfalls of serodiagnosis of pertussis in Belgium for 2013 by the NRC Bordetella. Determine cases of acute infection using an anti-pertussis toxin (PT) IgG antibody ELISA. A total of 2471 serum samples were received. Clinical information on the duration of cough (at the moment of blood sampling) is essential for a reliable interpretation of the results. In order to avoid false negative results, 213 samples for which this information was lacking were not tested. Of a total of 2179 patients tested, 520 (23.9%) had antibody levels indicative of an acute infection, 261 (12%) samples were diagnosed as positive (indicative of a pertussis infection or vaccination during the last year), 143 (6.7%) samples were classified as doubtful and 752 (34.5%) were diagnosed as negative. The serodiagnosis of pertussis has limited value for the early diagnosis of the disease, and PCR analysis on nasopharyngeal swabs is the method of choice during the first 2 weeks and always for young children <1 year old. For sera collected during the first 2 weeks with anti-PT levels below the threshold for acute infection, a second sample collected 2-3 weeks later is needed for a definitive diagnosis. For 503 (23.0%) early samples, a second serum sample was requested but not provided. Of the 85 patients for whom a second sample was received, 12.9% were eventually diagnosed as having an acute infection. In order to generate reliable serodiagnostic results for pertussis, serum samples should preferentially be collected 3 weeks after onset of symptoms.
Determining approximate age of digital images using sensor defects
NASA Astrophysics Data System (ADS)
Fridrich, Jessica; Goljan, Miroslav
2011-02-01
The goal of temporal forensics is to establish temporal relationship among two or more pieces of evidence. In this paper, we focus on digital images and describe a method using which an analyst can estimate the acquisition time of an image given a set of other images from the same camera whose time ordering is known. This is achieved by first estimating the parameters of pixel defects, including their onsets, and then detecting their presence in the image under investigation. Both estimators are constructed using the maximum-likelihood principle. The accuracy and limitations of this approach are illustrated on experiments with three cameras. Forensic and law-enforcement analysts are expected to benefit from this technique in situations when the temporal data stored in the EXIF header is lost due to processing or editing images off-line or when the header cannot be trusted. Reliable methods for establishing temporal order between individual pieces of evidence can help reveal deception attempts of an adversary or a criminal. The causal relationship may also provide information about the whereabouts of the photographer.
Designing an experiment to measure cellular interaction forces
NASA Astrophysics Data System (ADS)
McAlinden, Niall; Glass, David G.; Millington, Owain R.; Wright, Amanda J.
2013-09-01
Optical trapping is a powerful tool in Life Science research and is becoming commonplace in many microscopy laboratories and facilities. The force applied by the laser beam on the trapped object can be accurately determined, allowing any external forces acting on the trapped object to be deduced. We aim to design a series of experiments that use an optical trap to measure and quantify the interaction force between immune cells. In order to cause minimum perturbation to the sample we plan to directly trap T cells and remove the need to introduce exogenous beads to the sample. This poses a series of challenges and raises questions that need to be answered in order to design a set of effective end-point experiments. A typical cell is large compared to the beads normally trapped and highly non-uniform: can we reliably trap such objects and prevent them from rolling and re-orientating? In this paper we show how a spatial light modulator can produce a triple-spot trap, as opposed to a single-spot trap, giving complete control over the object's orientation and preventing it from rolling due, for example, to Brownian motion. To use an optical trap as a force transducer to measure an external force you must first have a reliably calibrated system. The optical trapping force is typically measured using either the theory of equipartition and observing the Brownian motion of the trapped object or using an escape force method, e.g. the viscous drag force method. In this paper we examine the relationship between force and displacement, as well as measuring the maximum displacement from the equilibrium position before an object falls out of the trap, hence determining the conditions under which the different calibration methods should be applied.
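As a small numerical aside on the equipartition calibration mentioned above, the sketch below recovers the trap stiffness from the variance of the trapped object's positional fluctuations via k_trap = kB*T / <x^2>. The synthetic position trace and the assumed 2e-5 N/m stiffness are placeholders standing in for a real measurement.

import numpy as np

kB = 1.380649e-23                  # Boltzmann constant, J/K
T = 298.0                          # absolute temperature, K
k_true = 2e-5                      # assumed trap stiffness, N/m (placeholder)

# Synthetic displacement trace (metres) with the equilibrium Boltzmann variance kB*T/k_true
x = np.random.default_rng(2).normal(0.0, np.sqrt(kB * T / k_true), size=100_000)

k_trap = kB * T / np.var(x)        # equipartition: 0.5*k*<x^2> = 0.5*kB*T
print(f"estimated stiffness {k_trap:.2e} N/m (true {k_true:.0e} N/m)")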
Effective core potential calculations on small molecules containing transition metal atoms
NASA Astrophysics Data System (ADS)
Gropen, O.; Wahlgren, U.; Pettersson, L.
1982-04-01
A series of test calculations on diatomic oxides and hydrides of Sc, Ti, Cr, Ni and Zn have been carried out in order to test the reliability of some pseudopotential methods. Several different forms of some pseudopotential operators were used. Only the highest valence orbitals of each atomic symmetry were explicitly included in the calculations. The results indicate that there are problems associated with all the investigated operators particularly for the lighter transition elements. It is suggested that more reliable results may be obtained with pseudopotential methods using smaller cores.
A Novel Algorithm for Detecting Protein Complexes with the Breadth First Search
Tang, Xiwei; Wang, Jianxin; Li, Min; He, Yiming; Pan, Yi
2014-01-01
Most biological processes are carried out by protein complexes. A substantial number of false positives of the protein-protein interaction (PPI) data can compromise the utility of the datasets for complexes reconstruction. In order to reduce the impact of such discrepancies, a number of data integration and affinity scoring schemes have been devised. The methods encode the reliabilities (confidence) of physical interactions between pairs of proteins. The challenge now is to identify novel and meaningful protein complexes from the weighted PPI network. To address this problem, a novel protein complex mining algorithm ClusterBFS (Cluster with Breadth-First Search) is proposed. Based on the weighted density, ClusterBFS detects protein complexes of the weighted network by the breadth first search algorithm, which originates from a given seed protein used as starting-point. The experimental results show that ClusterBFS performs significantly better than the other computational approaches in terms of the identification of protein complexes. PMID:24818139
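The sketch below shows the general shape of a weighted-density, seed-driven breadth-first expansion of the kind described above; the density formula, the 0.5 threshold and the toy confidence-weighted network are illustrative assumptions, not the exact definitions used by ClusterBFS.

from collections import deque

def weighted_density(nodes, graph):
    # Sum of interaction confidences inside the node set, normalised by the
    # number of possible pairs (one common definition of weighted density).
    nodes = list(nodes)
    n = len(nodes)
    if n < 2:
        return 0.0
    w = sum(graph[u][v] for i, u in enumerate(nodes) for v in nodes[i + 1:] if v in graph[u])
    return 2.0 * w / (n * (n - 1))

def cluster_bfs(seed, graph, min_density=0.5):
    # Grow a candidate complex from a seed protein by breadth-first search,
    # accepting a neighbour only if the weighted density stays above the threshold.
    cluster = {seed}
    queue = deque([seed])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in cluster and weighted_density(cluster | {v}, graph) >= min_density:
                cluster.add(v)
                queue.append(v)
    return cluster

# Toy confidence-weighted PPI network: graph[u][v] = reliability of the interaction
ppi = {
    "A": {"B": 0.9, "C": 0.8, "D": 0.2},
    "B": {"A": 0.9, "C": 0.7},
    "C": {"A": 0.8, "B": 0.7, "D": 0.1},
    "D": {"A": 0.2, "C": 0.1},
}
print(cluster_bfs("A", ppi))   # the low-confidence protein D is left out of the complex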
Design, Implementation, and Verification of the Reliable Multicast Protocol. Thesis
NASA Technical Reports Server (NTRS)
Montgomery, Todd L.
1995-01-01
This document describes the Reliable Multicast Protocol (RMP) design, first implementation, and formal verification. RMP provides a totally ordered, reliable, atomic multicast service on top of an unreliable multicast datagram service. RMP is fully and symmetrically distributed so that no site bears an undue portion of the communications load. RMP provides a wide range of guarantees, from unreliable delivery to totally ordered delivery, to K-resilient, majority resilient, and totally resilient atomic delivery. These guarantees are selectable on a per-message basis. RMP provides many communication options, including virtual synchrony, a publisher/subscriber model of message delivery, a client/server model of delivery, mutually exclusive handlers for messages, and mutually exclusive locks. It has been commonly believed that total ordering of messages can only be achieved at great performance expense. RMP discounts this belief. The first implementation of RMP has been shown to provide high throughput performance on Local Area Networks (LANs). For two or more destinations on a single LAN, RMP provides higher throughput than any other protocol that does not use multicast or broadcast technology. The design, implementation, and verification activities of RMP have occurred concurrently. This has allowed the verification to maintain a high fidelity between the design model, implementation model, and verification model. The restrictions of implementation have influenced the design earlier than in normal sequential approaches. The protocol as a whole has matured more smoothly through the inclusion of several different perspectives into the product development.
Quantifying complexity of financial short-term time series by composite multiscale entropy measure
NASA Astrophysics Data System (ADS)
Niu, Hongli; Wang, Jun
2015-05-01
It is significant to study the complexity of financial time series since the financial market is a complex evolving dynamic system. Multiscale entropy is a prevailing method used to quantify the complexity of a time series. Because of its lower reliability of entropy estimation for short-term time series at large time scales, a modified method, the composite multiscale entropy, is applied to the financial market. To verify its effectiveness, its applications to synthetic white noise and 1/f noise with different data lengths are reproduced first in the present paper. Then it is introduced for the first time to make a reliability test with two Chinese stock indices. When applied to short-term return series, the CMSE method shows advantages in reducing deviations of entropy estimation and demonstrates more stable and reliable results compared with the conventional MSE algorithm. Finally, the composite multiscale entropy of six important stock indices from the world financial markets is investigated, and some useful and interesting empirical results are obtained.
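For readers unfamiliar with the composite variant, the sketch below contrasts it with conventional coarse-graining: at scale s it averages the sample entropy of all s coarse-grained series obtained from the s possible starting offsets, instead of using only one. The sample-entropy parameters (m = 2, tolerance 0.15 of the standard deviation, here taken per coarse-grained series for simplicity) and the white-noise test series are illustrative choices, not those of the paper.

import numpy as np

def sample_entropy(x, m=2, r=0.15):
    # SampEn(m, r): tolerance r is a fraction of the series' standard deviation
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)
    def matches(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        return (np.sum(d <= tol) - len(templ)) / 2.0       # exclude self-matches
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def coarse_grain(x, scale, offset=0):
    n = (len(x) - offset) // scale
    return np.asarray(x[offset:offset + n * scale]).reshape(n, scale).mean(axis=1)

def cmse(x, scale, m=2, r=0.15):
    # Composite MSE: average SampEn over all possible coarse-graining offsets
    return float(np.mean([sample_entropy(coarse_grain(x, scale, k), m, r) for k in range(scale)]))

rng = np.random.default_rng(3)
noise = rng.standard_normal(500)                      # short synthetic white-noise series
print([round(cmse(noise, s), 3) for s in (1, 2, 5)])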
Cheng, Jianhua; Dong, Jinlu; Landry, Rene; Chen, Daidai
2014-07-29
In order to improve the accuracy and reliability of micro-electro mechanical systems (MEMS) navigation systems, an orthogonal rotation method-based nine-gyro redundant MEMS configuration is presented. By analyzing the accuracy and reliability characteristics of an inertial navigation system (INS), criteria for redundant configuration design are introduced. The orthogonal rotation configuration is then formed through two successive rotations of a set of orthogonal inertial sensors around a space vector. A feasible installation method is given for the practical engineering realization of this proposed configuration. The performances of the novel configuration and another six configurations are comprehensively compared and analyzed. Simulation and experimentation are also conducted, and the results show that the orthogonal rotation configuration has the best reliability, accuracy and fault detection and isolation (FDI) performance when the number of gyros is nine.
Development and Psychometric Properties of Social Exclusion Questionnaire for Iranian Divorced Women
ZAREI, Fatemeh; SOLHI, Mahnaz; MERGHATI-KHOEI, Effat; TAGHDISI, Mohammad Hossein; SHOJAEIZADEH, Davoud; TAKET, Ann Rosemary; MASOOMI, Razieh; NEDJAT, Saharnaz
2017-01-01
Background: Divorce, especially in women, could be assessed from a socio-cultural perspective as well as a psychological viewpoint. This assessment requires a culturally adapted as well as valid and reliable questionnaire. This study aimed to develop and assess the psychometric properties of a questionnaire in order to address social consequences in Iranian divorced women. Methods: This was an exploratory mixed method study conducted from 2012 to 2014. Following the grounded theory approach in the first phase, social exclusion was extracted as the core of the process understood in participants. Based on this, the reliability and validity of 47 preliminarily generated items were assessed. In the second phase, the divorced women were recruited from a safe community center in Tehran through convenience sampling. Results: Exploratory factor analysis conducted on the questionnaires of 150 divorced women (mean age 41.76±8.49 yr) indicated five dimensions (discriminative marital status, economic dependence on marital status, exclusionary marital status, traumatic marital status health risks, and frightening marital status) that jointly accounted for 64% of the observed variance. An expert panel approved the face and content validity of the developed tool. The Cronbach's alpha coefficient and the Intra-class Correlation Coefficient were found to be 0.70 and 0.85, respectively. Conclusion: The present study provided a valid and reliable measure, the Social Exclusion Questionnaire for Iranian divorced women (SEQ-IDW), to address social post-divorce consequences, which might help to improve women's social health. PMID:28560195
Zhang, Yan-jun; Liu, Li-li; Hu, Jun-hua; Wu, Yun; Chao, En-xiang; Xiao, Wei
2015-11-01
First, with the qualified rate of granules as the evaluation index, significant influencing factors were screened by Plackett-Burman design. Then, with the qualified rate and moisture content as the evaluation indexes, the significant factors that affect the one-step pelletization technology were further optimized by Box-Behnken design; the experimental data were fitted by multiple regression with a second-order polynomial equation; and the response surface method was used for predictive analysis of the optimal technology. The best conditions were as follows: inlet air temperature of 85 degrees C, sample introduction speed of 33 r x min(-1), and density of the concentrated extract of 1.10. The one-step pelletization technology of Biqiu granules optimized by Plackett-Burman design and Box-Behnken response surface methodology was stable and feasible with good predictability, which provides a reliable basis for the industrialized production of Biqiu granules.
Research on test of alkali-resistant glass fibre enhanced seawater coral aggregate concrete
NASA Astrophysics Data System (ADS)
Liu, Leiyang; Wang, Xingquan
2017-12-01
It is proposed in the 13th five-year plan that reefs of the South China Sea should be constructed. In this paper, an innovative idea is proposed for the first time in order to realize local material acquisition in island construction and life dependence on the sea, namely, alkali-resistant glass fibre is mixed into coral aggregate concrete as a reinforcing material. The glass fibre is characterized by low price, low hardness, good dispersibility and convenient construction. A reliable guarantee is provided for widely applying the material in future projects. In this paper, an orthogonal test method is first applied to determine the mix proportion of grade C50 coral aggregate concrete. Then, the design plan of the mix proportion of alkali-resistant glass fibre enhanced seawater coral aggregate concrete is determined. Finally, the influence law of alkali-resistant glass fibre dosage on the tensile, compressive and flexural strength of seawater coral aggregate concrete is made clear.
Pulse oximeter sensor application during neonatal resuscitation: a randomized controlled trial.
Louis, Deepak; Sundaram, Venkataseshan; Kumar, Praveen
2014-03-01
This study was done to compare 2 techniques of pulse oximeter sensor application during neonatal resuscitation for faster signal detection. Applying the sensor to the infant first (STIF) and then to the oximeter was compared with applying the sensor to the oximeter first (STOF) and then to the infant, in neonates of ≥28 weeks gestation. The primary outcome was time from completion of sensor application to reliable signal, defined as stable display of heart rate and saturation. Time from birth to sensor application, time taken for sensor application, time from birth to reliable signal, and need to reapply the sensor were secondary outcomes. An intention-to-treat analysis was done, and subgroup analysis was done for gestation and need for resuscitation. One hundred fifty neonates were randomized with 75 to each technique. The median (IQR) time from sensor application to detection of reliable signal was longer in the STIF group compared with the STOF group (16 [15-17] vs. 10 [6-18] seconds; P < 0.001). Time taken for application of the sensor was longer with the STIF technique than with the STOF technique (12 [10-16] vs. 11 [9-15] seconds; P = 0.04). Time from birth to reliable signal did not differ between the 2 methods (STIF: 61 [52-76] seconds; STOF: 58 [47-73] seconds; P = 0.09). Time taken for signal acquisition was longer with STIF than with STOF in both subgroups. In the delivery room setting, the STOF method recognized saturation and heart rate faster than the STIF method. The time from birth to reliable signal was similar with the 2 methods.
NASA Astrophysics Data System (ADS)
Jones, Adele M.; Pham, A. Ninh; Collins, Richard N.; Waite, T. David
2009-05-01
The rate at which iron- and aluminium-natural organic matter (NOM) complexes dissociate plays a critical role in the transport of these elements given the readiness with which they hydrolyse and precipitate. Despite this, there have only been a few reliable studies on the dissociation kinetics of these complexes suggesting half-times of some hours for the dissociation of Fe(III) and Al(III) from a strongly binding component of NOM. First-order dissociation rate constants are re-evaluated here at pH 6.0 and 8.0 and 25 °C using both cation exchange resin and competing ligand methods for Fe(III) and a cation exchange resin method only for Al(III) complexes. Both methods provide similar results at a particular pH with a two-ligand model accounting satisfactorily for the dissociation kinetics results obtained. For Fe(III), half-times on the order of 6-7 h were obtained for dissociation of the strong component and 4-5 min for dissociation of the weak component. For aluminium, the half-times were on the order of 1.5 h and 1-2 min for the strong and weak components, respectively. Overall, Fe(III) complexes with NOM are more stable than analogous complexes with Al(III), implying Fe(III) may be transported further from its source upon dilution and dispersion.
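A small numerical companion to the half-times quoted above: a first-order rate constant follows from k = ln(2)/t_half, and a two-ligand model is just the sum of two exponentials. In the sketch below, the 6.5 h and 4.5 min values are taken as representative mid-points of the reported Fe(III) ranges, and the 60/40 split between strong and weak components is a purely hypothetical illustration.

import numpy as np

ln2 = np.log(2.0)

t_half_strong = 6.5 * 3600.0        # Fe(III) strong component, ~6-7 h, taken as 6.5 h (s)
t_half_weak = 4.5 * 60.0            # Fe(III) weak component, ~4-5 min, taken as 4.5 min (s)

k_strong = ln2 / t_half_strong      # first-order dissociation rate constants, 1/s
k_weak = ln2 / t_half_weak
print(f"k_strong ~ {k_strong:.2e} s^-1, k_weak ~ {k_weak:.2e} s^-1")

# Two-ligand model: fraction of Fe(III) still bound to NOM at time t,
# assuming (hypothetically) 60% of the metal starts in the strong component.
f_strong = 0.6
t = np.array([60.0, 600.0, 3600.0, 6 * 3600.0])      # seconds
bound = f_strong * np.exp(-k_strong * t) + (1.0 - f_strong) * np.exp(-k_weak * t)
print(np.round(bound, 3))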
NASA Astrophysics Data System (ADS)
Woldegiorgis, Befekadu Taddesse; van Griensven, Ann; Pereira, Fernando; Bauwens, Willy
2017-06-01
Most common numerical solutions used in CSTR-based in-stream water quality simulators are susceptible to instabilities and/or solution inconsistencies. Usually, they cope with instability problems by adopting computationally expensive small time steps. However, some simulators use fixed computation time steps and hence do not have the flexibility to do so. This paper presents a novel quasi-analytical solution for CSTR-based water quality simulators of an unsteady system. The robustness of the new method is compared with the commonly used fourth-order Runge-Kutta methods, the Euler method and three versions of the SWAT model (SWAT2012, SWAT-TCEQ, and ESWAT). The performance of each method is tested for different hypothetical experiments. Besides the hypothetical data, a real case study is used for comparison. The growth factors we derived as stability measures for the different methods and the R-factor—considered as a consistency measure—turned out to be very useful for determining the most robust method. The new method outperformed all the numerical methods used in the hypothetical comparisons. The application for the Zenne River (Belgium) shows that the new method provides stable and consistent BOD simulations whereas the SWAT2012 model is shown to be unstable for the standard daily computation time step. The new method unconditionally simulates robust solutions. Therefore, it is a reliable scheme for CSTR-based water quality simulators that use first-order reaction formulations.
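The contrast between an explicit scheme and an exponential (quasi-analytical) update can be seen on a single completely stirred reactor with first-order decay, dc/dt = (Q/V)(c_in - c) - k*c. The update below relaxes the state exactly towards its steady value over each step and is unconditionally stable, whereas explicit Euler diverges once the step exceeds 2/(Q/V + k). This is only a one-reactor sketch with invented rate values; it is not the authors' full multi-constituent scheme.

import numpy as np

def step_analytical(c, c_in, q_over_v, k, dt):
    # Exact solution over one step with inputs held constant:
    # the concentration relaxes exponentially towards the step's steady state.
    lam = q_over_v + k
    c_ss = q_over_v * c_in / lam
    return c_ss + (c - c_ss) * np.exp(-lam * dt)

def step_euler(c, c_in, q_over_v, k, dt):
    return c + dt * (q_over_v * (c_in - c) - k * c)

c_a = c_e = 10.0                                  # initial concentration, mg/L
c_in, q_over_v, k, dt = 2.0, 0.8, 1.5, 1.0        # per-day rates, daily step (dt > 2/(Q/V + k))
for _ in range(10):
    c_a = step_analytical(c_a, c_in, q_over_v, k, dt)
    c_e = step_euler(c_e, c_in, q_over_v, k, dt)
print(round(c_a, 3), round(c_e, 3))               # ~0.696 vs. a diverging oscillation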
Investigation of the Reliability of Bridge Elements Reinforced with Basalt Plastic Fibers
NASA Astrophysics Data System (ADS)
Koval', T. I.
2017-09-01
The poorly studied problem of the reliability and durability of basalt-fiber-reinforced concrete bridge elements is considered. A method of laboratory research into the behavior of specimens of the concrete under repeated cyclic dynamic loading is proposed. The first results of such experiments are presented.
He, Li-hong; Wang, Hai-yan; Lei, Xiang-dong
2016-02-01
Models based on vegetation ecophysiological processes contain many parameters, and reasonable parameter values will greatly improve simulation ability. Sensitivity analysis, as an important method to screen out the sensitive parameters, can comprehensively analyze how model parameters affect the simulation results. In this paper, we conducted a parameter sensitivity analysis of the BIOME-BGC model with a case study of simulating the net primary productivity (NPP) of a Larix olgensis forest in Wangqing, Jilin Province. First, through a comparison between field measurement data and the simulation results, we tested the BIOME-BGC model's capability of simulating the NPP of the L. olgensis forest. Then, the Morris and EFAST sensitivity methods were used to screen the parameters that had a strong influence on NPP. On this basis, we also quantitatively estimated the sensitivity of the screened parameters, and calculated the global, first-order and second-order sensitivity indices. The results showed that the BIOME-BGC model could well simulate the NPP of the L. olgensis forest in the sample plot. The Morris sensitivity method provided a reliable parameter sensitivity analysis result under the condition of a relatively small sample size. The EFAST sensitivity method could quantitatively measure the impact of a single parameter on the simulation result as well as the interaction between parameters in the BIOME-BGC model. The influential sensitive parameters for L. olgensis forest NPP were the new stem carbon to new leaf carbon allocation and the leaf carbon to nitrogen ratio; the effect of their interaction was significantly greater than that of the other parameter interactions.
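To indicate what an elementary-effects screening of the Morris type involves, the sketch below perturbs one scaled parameter at a time from random base points and summarizes each parameter by the mean absolute effect (mu*) and its standard deviation (sigma). It is a radial one-at-a-time variant written from scratch; the two-parameter toy NPP function stands in for an actual BIOME-BGC run and its form is entirely made up.

import numpy as np

def morris_screening(f, bounds, r=20, delta=0.25, seed=0):
    # Minimal elementary-effects screening: r random base points, one-at-a-time steps of
    # size delta in the unit cube, mu* = mean |EE| and sigma = std of EE per parameter.
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    k = len(bounds)
    ee = np.zeros((r, k))
    for j in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=k)
        fx = f(bounds[:, 0] + x * (bounds[:, 1] - bounds[:, 0]))
        for i in range(k):
            xp = x.copy()
            xp[i] += delta
            fxp = f(bounds[:, 0] + xp * (bounds[:, 1] - bounds[:, 0]))
            ee[j, i] = (fxp - fx) / delta
    return np.abs(ee).mean(axis=0), ee.std(axis=0)

def toy_npp(p):
    # Made-up stand-in for a BIOME-BGC run: NPP as a function of the stem:leaf carbon
    # allocation and the leaf C:N ratio (hypothetical response, for illustration only)
    alloc, cn = p
    return 500.0 * alloc / (1.0 + cn / 30.0)

mu_star, sigma = morris_screening(toy_npp, bounds=[(0.5, 1.5), (20.0, 60.0)])
print("mu*  :", np.round(mu_star, 1))
print("sigma:", np.round(sigma, 1))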
Thomas C. Brown; George L. Peterson
2009-01-01
The method of paired comparisons is used to measure individuals' preference orderings of items presented to them as discrete binary choices. This paper reviews the theory and application of the paired comparison method, describes a new computer program available for eliciting the choices, and presents an analysis of methods for scaling paired choice data to...
NASA Astrophysics Data System (ADS)
Hemdan, A.
2016-07-01
Three simple, selective, and accurate spectrophotometric methods have been developed and then validated for the analysis of Benazepril (BENZ) and Amlodipine (AML) in bulk powder and pharmaceutical dosage form. The first method is the absorption factor (AF) for zero order and amplitude factor (P-F) for first order spectrum, where both BENZ and AML can be measured from their resolved zero order spectra at 238 nm or from their first order spectra at 253 nm. The second method is the constant multiplication coupled with constant subtraction (CM-CS) for zero order and successive derivative subtraction-constant multiplication (SDS-CM) for first order spectrum, where both BENZ and AML can be measured from their resolved zero order spectra at 240 nm and 238 nm, respectively, or from their first order spectra at 214 nm and 253 nm for Benazepril and Amlodipine respectively. The third method is the novel constant multiplication coupled with derivative zero crossing (CM-DZC) which is a stability indicating assay method for determination of Benazepril and Amlodipine in presence of the main degradation product of Benazepril which is Benazeprilate (BENZT). The three methods were validated as per the ICH guidelines and the standard curves were found to be linear in the range of 5-60 μg/mL for Benazepril and 5-30 for Amlodipine, with well accepted mean correlation coefficient for each analyte. The intra-day and inter-day precision and accuracy results were well within the acceptable limits.
Hemdan, A
2016-07-05
Three simple, selective, and accurate spectrophotometric methods have been developed and then validated for the analysis of Benazepril (BENZ) and Amlodipine (AML) in bulk powder and pharmaceutical dosage form. The first method is the absorption factor (AF) for zero order and amplitude factor (P-F) for first order spectrum, where both BENZ and AML can be measured from their resolved zero order spectra at 238nm or from their first order spectra at 253nm. The second method is the constant multiplication coupled with constant subtraction (CM-CS) for zero order and successive derivative subtraction-constant multiplication (SDS-CM) for first order spectrum, where both BENZ and AML can be measured from their resolved zero order spectra at 240nm and 238nm, respectively, or from their first order spectra at 214nm and 253nm for Benazepril and Amlodipine respectively. The third method is the novel constant multiplication coupled with derivative zero crossing (CM-DZC) which is a stability indicating assay method for determination of Benazepril and Amlodipine in presence of the main degradation product of Benazepril which is Benazeprilate (BENZT). The three methods were validated as per the ICH guidelines and the standard curves were found to be linear in the range of 5-60μg/mL for Benazepril and 5-30 for Amlodipine, with well accepted mean correlation coefficient for each analyte. The intra-day and inter-day precision and accuracy results were well within the acceptable limits. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Xia, Quan; Wang, Zili; Ren, Yi; Sun, Bo; Yang, Dezhen; Feng, Qiang
2018-05-01
With the rapid development of lithium-ion battery technology in the electric vehicle (EV) industry, the lifetime of the battery cell has increased substantially; however, the reliability of the battery pack is still inadequate. Because of the complexity of the battery pack, a reliability design method for a lithium-ion battery pack considering thermal disequilibrium is proposed in this paper based on cell redundancy. Based on this method, a three-dimensional electric-thermal-flow-coupled model, a stochastic degradation model of cells under field dynamic conditions and a multi-state system reliability model of a battery pack are established. The relationships between the multi-physics coupling model, the degradation model and the system reliability model are first constructed to analyze the reliability of the battery pack, followed by analysis examples with different redundancy strategies. By comparing the reliability of battery packs with different redundant cell numbers and configurations, several conclusions for the redundancy strategy are obtained. Notably, the reliability does not monotonically increase with the number of redundant cells because of thermal disequilibrium effects. In this work, the 6 × 5 parallel-series configuration is found to be the optimal system structure in terms of reliability. In addition, the effects of the cell arrangement and cooling conditions are investigated.
Semi-automatic method for ultrasonic measurement of texture
Thompson, R. Bruce; Smith, John F.; Lee, Seung S.; Li, Yan
1990-02-13
A method for measuring texture of metal plates or sheets using non-destructive ultrasonic investigation includes measuring the velocity of ultrasonic energy waves in lower order plate modes in one or more directions, and measuring phase velocity dispersion of higher order modes of the plate or sheet if needed. Texture or preferred grain orientation can be derived from these measurements with improved reliability and accuracy. The method can be utilized in production on moving metal plate or sheet.
Semi-automatic method for ultrasonic measurement of texture
Thompson, R.B.; Smith, J.F.; Lee, S.S.; Li, Y.
1990-02-13
A method for measuring texture of metal plates or sheets using non-destructive ultrasonic investigation includes measuring the velocity of ultrasonic energy waves in lower order plate modes in one or more directions, and measuring phase velocity dispersion of higher order modes of the plate or sheet if needed. Texture or preferred grain orientation can be derived from these measurements with improved reliability and accuracy. The method can be utilized in production on moving metal plate or sheet. 9 figs.
A method for direct measurement of the first-order mass moments of human body segments.
Fujii, Yusaku; Shimada, Kazuhito; Maru, Koichi; Ozawa, Junichi; Lu, Rong-Sheng
2010-01-01
We propose a simple and direct method for measuring the first-order mass moment of a human body segment. With the proposed method, the first-order mass moment of the body segment can be directly measured by using only one precision scale and one digital camera. In the dummy mass experiment, the relative standard uncertainty of a single set of measurements of the first-order mass moment is estimated to be 1.7%. The measured value will be useful as a reference for evaluating the uncertainty of the body segment inertial parameters (BSPs) estimated using an indirect method.
Un-collided-flux preconditioning for the first order transport equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rigley, M.; Koebbe, J.; Drumm, C.
2013-07-01
Two codes were tested for the first order neutron transport equation using finite element methods. The un-collided-flux solution is used as a preconditioner for each of these methods. These codes include a least squares finite element method and a discontinuous finite element method. The performance of each code is shown on problems in one and two dimensions. The un-collided-flux preconditioner shows good speedup on each of the given methods. The un-collided-flux preconditioner has been used on the second-order equation, and here we extend those results to the first order equation. (authors)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duru, Kenneth, E-mail: kduru@stanford.edu; Dunham, Eric M.; Institute for Computational and Mathematical Engineering, Stanford University, Stanford, CA
Dynamic propagation of shear ruptures on a frictional interface in an elastic solid is a useful idealization of natural earthquakes. The conditions relating discontinuities in particle velocities across fault zones and tractions acting on the fault are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated for many wavelengths away from the fault. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a high order accurate finite difference method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along nonplanar faults; and c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts (SBP) finite difference operators in space. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. The finite difference stencils used in this paper are sixth order accurate in the interior and third order accurate close to the boundaries. However, the method is applicable to any spatial operator with a diagonal norm satisfying the SBP property. Time stepping is performed with a 4th order accurate explicit low storage Runge–Kutta scheme, thus yielding a globally fourth order accurate method in both space and time. We show numerical simulations on band limited self-similar fractal faults revealing the complexity of rupture dynamics on rough faults.
NASA Astrophysics Data System (ADS)
Duru, Kenneth; Dunham, Eric M.
2016-01-01
Dynamic propagation of shear ruptures on a frictional interface in an elastic solid is a useful idealization of natural earthquakes. The conditions relating discontinuities in particle velocities across fault zones and tractions acting on the fault are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated for many wavelengths away from the fault. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a high order accurate finite difference method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along nonplanar faults; and c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts (SBP) finite difference operators in space. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. The finite difference stencils used in this paper are sixth order accurate in the interior and third order accurate close to the boundaries. However, the method is applicable to any spatial operator with a diagonal norm satisfying the SBP property. Time stepping is performed with a 4th order accurate explicit low storage Runge-Kutta scheme, thus yielding a globally fourth order accurate method in both space and time. We show numerical simulations on band limited self-similar fractal faults revealing the complexity of rupture dynamics on rough faults.
Choosing the optimal wind turbine variant using the ”ELECTRE” method
NASA Astrophysics Data System (ADS)
Ţişcă, I. A.; Anuşca, D.; Dumitrescu, C. D.
2017-08-01
This paper presents a method of choosing the “optimal” alternative, both under certainty and under uncertainty, based on relevant analysis criteria. Taking into account that a product can be treated as a system and that the reliability of the system depends on the reliability of its components, the choice of product (the appropriate system decision) can be made using the “ELECTRE” method, depending on the level of reliability of each product. In the paper, the “ELECTRE” method is used to choose the optimal version of a wind turbine required to equip a wind farm in western Romania. The problems to be solved are related to the current situation of wind turbines, which involves reliability problems. A set of criteria has been proposed to compare two or more products from a range of available products: operating conditions, environmental conditions during operation, and time requirements. Using the hierarchical ELECTRE method, the optimal wind turbine variant and the order of preference of the variants were determined on the basis of the obtained concordance coefficients, with the limit values chosen arbitrarily.
Grigg, Josephine; Haakonssen, Eric; Rathbone, Evelyne; Orr, Robin; Keogh, Justin W L
2017-11-13
The aim of this study was to quantify the validity and intra-tester reliability of a novel method of kinematic measurement. The measurement target was the joint angles of an athlete performing a BMX Supercross (SX) gate start action through the first 1.2 s of movement in situ on a BMX SX ramp using a standard gate start procedure. The method employed GoPro® Hero 4 Silver (GoPro Inc., USA) cameras capturing data at 120 fps 720 p on a 'normal' lens setting. Kinovea 0.8.15 (Kinovea.org, France) was used for analysis. Tracking data was exported and angles computed in Matlab (Mathworks®, USA). The gold standard 3D method for joint angle measurement could not safely be employed in this environment, so a rigid angle was used. Validity was measured to be within 2°. Intra-tester reliability was measured by the same tester performing the analysis twice with an average of 55 days between analyses. Intra-tester reliability was high, with an absolute error <6° and <9 frames (0.075 s) across all angles and time points for key positions, respectively. The methodology is valid within 2° and reliable within 6° for the calculation of joint angles in the first ~1.25 s.
High Order Entropy-Constrained Residual VQ for Lossless Compression of Images
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen
1995-01-01
High order entropy coding is a powerful technique for exploiting high order statistical dependencies. However, the exponentially high complexity associated with such a method often discourages its use. In this paper, an entropy-constrained residual vector quantization method is proposed for lossless compression of images. The method consists of first quantizing the input image using a high order entropy-constrained residual vector quantizer and then coding the residual image using a first order entropy coder. The distortion measure used in the entropy-constrained optimization is essentially the first order entropy of the residual image. Experimental results show very competitive performance.
Mkentane, K; Van Wyk, J C; Sishi, N; Gumedze, F; Ngoepe, M; Davids, L M; Khumalo, N P
2017-01-01
Curly hair is reported to contain a higher lipid content than straight hair, which may influence the incorporation of lipid-soluble drugs. The use of race to describe hair curl variation (Asian, Caucasian and African) is unscientific yet common in the medical literature (including reports of drug levels in hair). This study investigated the reliability of a geometric classification of hair (based on 3 measurements: the curve diameter, curl index and number of waves). After ethical approval and informed consent, proximal virgin (6 cm) hairs sampled from the vertex of the scalp in 48 healthy volunteers were evaluated. Three raters each scored hairs from the 48 volunteers on two occasions each for the 8- and 6-group classifications. One rater applied the 6-group classification to 80 additional volunteers in order to further confirm the reliability of this system. The Kappa statistic was used to assess intra- and inter-rater agreement. Each rater classified 480 hairs on each occasion. No rater classified any volunteer's 10 hairs into the same group; the most frequently occurring group was used for analysis. The inter-rater agreement was poor for the 8-group classification (k = 0.418) but improved for the 6-group classification (k = 0.671). The intra-rater agreement also improved for the 6-group classification (k = 0.444 to 0.648 versus 0.599 to 0.836); that for the one rater who evaluated all volunteers was good (k = 0.754). Although small, this is the first study to test the reliability of a geometric classification. The 6-group method is more reliable. However, a digital classification system is likely to reduce operator error. A reliable objective classification of human hair curl is long overdue, particularly with the increasing use of hair as a testing substrate for treatment compliance in medicine.
Real-Time GNSS-Based Attitude Determination in the Measurement Domain
Zhao, Lin; Li, Na; Li, Liang; Zhang, Yi; Cheng, Chun
2017-01-01
A multi-antenna-based GNSS receiver is capable of providing high-precision and drift-free attitude solution. Carrier phase measurements need be utilized to achieve high-precision attitude. The traditional attitude determination methods in the measurement domain and the position domain resolve the attitude and the ambiguity sequentially. The redundant measurements from multiple baselines have not been fully utilized to enhance the reliability of attitude determination. A multi-baseline-based attitude determination method in the measurement domain is proposed to estimate the attitude parameters and the ambiguity simultaneously. Meanwhile, the redundancy of attitude resolution has also been increased so that the reliability of ambiguity resolution and attitude determination can be enhanced. Moreover, in order to further improve the reliability of attitude determination, we propose a partial ambiguity resolution method based on the proposed attitude determination model. The static and kinematic experiments were conducted to verify the performance of the proposed method. When compared with the traditional attitude determination methods, the static experimental results show that the proposed method can improve the accuracy by at least 0.03° and enhance the continuity by 18%, at most. The kinematic result has shown that the proposed method can obtain an optimal balance between accuracy and reliability performance. PMID:28165434
Analysis of Different Hyperspectral Variables for Diagnosing Leaf Nitrogen Accumulation in Wheat.
Tan, Changwei; Du, Ying; Zhou, Jian; Wang, Dunliang; Luo, Ming; Zhang, Yongjian; Guo, Wenshan
2018-01-01
Hyperspectral remote sensing is a rapid non-destructive method for diagnosing nitrogen status in wheat crops. In this study, a quantitative correlation was established between leaf nitrogen accumulation (LNA) and the following parameters: raw hyperspectral reflectance, first-order differential hyperspectra, and hyperspectral characteristics of wheat. An integrated linear regression of LNA was obtained with raw hyperspectral reflectance (measurement wavelength = 790.4 nm). Furthermore, an exponential regression of LNA was obtained with first-order differential hyperspectra (measurement wavelength = 831.7 nm). Coefficients (R²) were 0.813 and 0.847; root mean squared errors (RMSE) were 2.02 g·m⁻² and 1.72 g·m⁻²; and relative errors (RE) were 25.97% and 20.85%, respectively. Both techniques were considered optimal in the diagnosis of wheat LNA. Nevertheless, the better one was the new normalized variable (SDr - SDb)/(SDr + SDb), based on vegetation indices, with R² = 0.935, RMSE = 0.98, and RE = 11.25%. In addition, (SDr - SDb)/(SDr + SDb) was reliable when applied to a different cultivar or even wheat grown elsewhere. This indicated a superior fit and better performance for (SDr - SDb)/(SDr + SDb). For diagnosing LNA in wheat, the newly normalized variable (SDr - SDb)/(SDr + SDb) was more effective than the previously reported raw hyperspectral reflectance, first-order differential hyperspectra, and red-edge parameters.
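A minimal sketch of how such a normalized derivative index could be computed and calibrated against LNA by linear regression; the SDr/SDb values and LNA data below are invented toy numbers, not the study's measurements.

```python
import numpy as np

# SDr and SDb: sums of first-order derivative reflectance over the red-edge
# and blue-edge regions (toy values below, not the study's data).
sd_r = np.array([0.92, 1.05, 1.21, 1.38, 1.52, 1.70])
sd_b = np.array([0.38, 0.37, 0.35, 0.33, 0.30, 0.26])
lna  = np.array([3.1, 4.0, 5.2, 6.4, 7.5, 8.9])        # leaf N accumulation, g m^-2

idx = (sd_r - sd_b) / (sd_r + sd_b)                     # normalized variable

slope, intercept = np.polyfit(idx, lna, 1)              # simple linear calibration
pred = slope * idx + intercept
rmse = np.sqrt(np.mean((lna - pred) ** 2))
r2 = 1.0 - np.sum((lna - pred) ** 2) / np.sum((lna - lna.mean()) ** 2)
print(f"R^2 = {r2:.3f}, RMSE = {rmse:.2f} g m^-2")
```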
On the Accuracy of Probabilistic Buckling Load Prediction
NASA Technical Reports Server (NTRS)
Arbocz, Johann; Starnes, James H.; Nemeth, Michael P.
2001-01-01
The buckling strength of thin-walled stiffened or unstiffened, metallic or composite shells is of major concern in aeronautical and space applications. The difficulty to predict the behavior of axially compressed thin-walled cylindrical shells continues to worry design engineers as we enter the third millennium. Thanks to extensive research programs in the late sixties and early seventies and the contributions of many eminent scientists, it is known that buckling strength calculations are affected by the uncertainties in the definition of the parameters of the problem such as definition of loads, material properties, geometric variables, edge support conditions, and the accuracy of the engineering models and analysis tools used in the design phase. The NASA design criteria monographs from the late sixties account for these design uncertainties by the use of a lump sum safety factor. This so-called 'empirical knockdown factor gamma' usually results in overly conservative design. Recently new reliability based probabilistic design procedure for buckling critical imperfect shells have been proposed. It essentially consists of a stochastic approach which introduces an improved 'scientific knockdown factor lambda(sub a)', that is not as conservative as the traditional empirical one. In order to incorporate probabilistic methods into a High Fidelity Analysis Approach one must be able to assess the accuracy of the various steps that must be executed to complete a reliability calculation. In the present paper the effect of size of the experimental input sample on the predicted value of the scientific knockdown factor lambda(sub a) calculated by the First-Order, Second-Moment Method is investigated.
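The First-Order, Second-Moment step mentioned above can be illustrated with a short sketch: given a hypothetical sample of measured knockdown values, it estimates a reliability-based allowable knockdown at a specified reliability level under a normality assumption. The sample values and target reliability are invented for the example and are not the paper's data.

```python
import math
from scipy.stats import norm

# Illustrative knockdown measurements (not from the paper).
lam_samples = [0.62, 0.58, 0.71, 0.66, 0.69, 0.64, 0.60, 0.67]
n = len(lam_samples)
mean_lam = sum(lam_samples) / n
std_lam = math.sqrt(sum((x - mean_lam) ** 2 for x in lam_samples) / (n - 1))

# First-Order Second-Moment style estimate: assuming the knockdown is
# normally distributed, the allowable value at a target reliability is
# mean minus beta standard deviations.
target_reliability = 0.999
beta = norm.ppf(target_reliability)
lambda_a = mean_lam - beta * std_lam
print(f"mean={mean_lam:.3f}, std={std_lam:.3f}, lambda_a={lambda_a:.3f}")
```

The effect of sample size enters through the spread of the estimated mean and standard deviation, which is precisely the sensitivity the paper investigates.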
Prevention of medication errors: detection and audit.
Montesi, Germana; Lechi, Alessandro
2009-06-01
1. Medication errors have important implications for patient safety, and their identification is a main target in improving clinical practice, in order to prevent adverse events. 2. Error detection is the first crucial step. Approaches to this are likely to be different in research and routine care, and the most suitable must be chosen according to the setting. 3. The major methods for detecting medication errors and associated adverse drug-related events are chart review, computerized monitoring, administrative databases and claims data, together with direct observation, incident reporting, and patient monitoring. All of these methods have both advantages and limitations. 4. Reporting discloses medication errors, can trigger warnings, and encourages the diffusion of a culture of safe practice. Combining and comparing data from various sources increases the reliability of the system. 5. Error prevention can be planned by means of retroactive and proactive tools, such as audit and Failure Mode, Effect, and Criticality Analysis (FMECA). Audit is also an educational activity, which promotes high-quality care; it should be carried out regularly. In an audit cycle we can compare what is actually done against reference standards and put in place corrective actions to improve the performances of individuals and systems. 6. Patient safety must be the first aim in every setting, in order to build safer systems, learning from errors and reducing the human and fiscal costs.
NASA Astrophysics Data System (ADS)
Ohn-Bar, Eshed; Martin, Sujitha; Trivedi, Mohan Manubhai
2013-10-01
We focus on vision-based hand activity analysis in the vehicular domain. The study is motivated by the overarching goal of understanding driver behavior, in particular as it relates to attentiveness and risk. First, the unique advantages and challenges for a nonintrusive, vision-based solution are reviewed. Next, two approaches for hand activity analysis, one relying on static (appearance only) cues and another on dynamic (motion) cues, are compared. The motion-cue-based hand detection uses temporally accumulated edges in order to maintain the most reliable and relevant motion information. The accumulated image is fitted with ellipses in order to produce the location of the hands. The method is used to identify three hand activity classes: (1) two hands on the wheel, (2) hand on the instrument panel, (3) hand on the gear shift. The static-cue-based method extracts features in each frame in order to learn a hand presence model for each of the three regions. A second-stage classifier (linear support vector machine) produces the final activity classification. Experimental evaluation with different users and environmental variations under real-world driving shows the promise of applying the proposed systems for both postanalysis of captured driving data as well as for real-time driver assistance.
NASA Astrophysics Data System (ADS)
Li, Yan; Kowalski, Piotr M.
2018-07-01
In order to get a better understanding of the selective order-disorder transition in pyrochlore compounds, we used ab initio methods to calculate the formation energies of coupled cation antisite and anion Frenkel pair defects and the energy barriers for oxygen migration for a number of families of A2B2O7 pyrochlore-type compounds. While these parameters have previously been computed with force-field-based methods, the ab initio results provide more reliable values that can be confidently used in subsequent analysis. We found a fairly good correlation between the formation energies of the coupled defects and the stability field of pyrochlores. In line with previous studies, the compounds that crystallize in the defect fluorite structure are found to have smaller coupled defect formation energies than those crystallizing in the pyrochlore phase, although the correlation is not as sharp as in the case of the isolated anion Frenkel pair defect. The investigation of the energy barriers for oxygen migration shows that this barrier alone is not a good indicator of the tendency toward the order-disorder phase transition in pyrochlores. However, we found that the oxygen migration barrier is reduced in the presence of the cation antisite defect. This points to a disordering-induced enhancement of oxygen diffusion in pyrochlore compounds.
How to Make Reliable, Washable, and Wearable Textronic Devices
Tao, Xuyuan; Koncar, Vladan; Huang, Tzu-Hao; Shen, Chien-Lung; Ko, Ya-Chi; Jou, Gwo-Tsuen
2017-01-01
In this paper, the washability of wearable textronic (textile-electronic) devices has been studied. Two different approaches aiming at designing, producing, and testing robust washable and reliable smart textile systems are presented. The common point of the two approaches is the use of flexible conductive PCB in order to interface the miniaturized rigid (traditional) electronic devices to conductive threads and tracks within the textile flexible fabric and to connect them to antenna, textile electrodes, sensors, actuators, etc. The first approach consists in the use of TPU films (thermoplastic polyurethane) that are deposited by the press under controlled temperature and pressure parameters in order to protect the conductive thread and electrical contacts. The washability of conductive threads and contact resistances between flexible PCB and conductive threads are tested. The second approach is focused on the protection of the whole system—composed of a rigid electronic device, flexible PCB, and textile substrate—by a barrier made of latex. Three types of prototypes were realized and washed. Their reliabilities are studied. PMID:28338607
Harvey, Judson W.; Wagner, Brian J.; Bencala, Kenneth E.
1996-01-01
Stream water was locally recharged into shallow groundwater flow paths that returned to the stream (hyporheic exchange) in St. Kevin Gulch, a Rocky Mountain stream in Colorado contaminated by acid mine drainage. Two approaches were used to characterize hyporheic exchange: sub-reach-scale measurement of hydraulic heads and hydraulic conductivity to compute streambed fluxes (hydrometric approach) and reach-scale modeling of in-stream solute tracer injections to determine characteristic length and timescales of exchange with storage zones (stream tracer approach). Subsurface data were the standard of comparison used to evaluate the reliability of the stream tracer approach to characterize hyporheic exchange. The reach-averaged hyporheic exchange flux (1.5 mL s⁻¹ m⁻¹), determined by hydrometric methods, was largest when stream base flow was low (10 L s⁻¹); hyporheic exchange persisted when base flow was 10-fold higher, decreasing by approximately 30%. Reliability of the stream tracer approach to detect hyporheic exchange was assessed using first-order uncertainty analysis that considered model parameter sensitivity. The stream tracer approach did not reliably characterize hyporheic exchange at high base flow: the model was apparently more sensitive to exchange with surface water storage zones than with the hyporheic zone. At low base flow the stream tracer approach reliably characterized exchange between the stream and gravel streambed (timescale of hours) but was relatively insensitive to slower exchange with deeper alluvium (timescale of tens of hours) that was detected by subsurface measurements. The stream tracer approach was therefore not equally sensitive to all timescales of hyporheic exchange. We conclude that while the stream tracer approach is an efficient means to characterize surface-subsurface exchange, future studies will need to more routinely consider decreasing sensitivities of tracer methods at higher base flow and a potential bias toward characterizing only a fast component of hyporheic exchange. Stream tracer models with multiple rate constants to consider both fast exchange with streambed gravel and slower exchange with deeper alluvium appear to be warranted.
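The first-order uncertainty analysis mentioned above is, in essence, delta-method error propagation: the output variance is approximated from the parameter covariance and the local sensitivities (Jacobian). A minimal sketch follows, with a toy two-parameter storage-zone response standing in for the actual transient-storage model; the function, parameter values, and covariance are assumptions.

```python
import numpy as np

def f(p):
    # Toy response: e.g. an exchange coefficient times the square root of
    # a storage-zone cross-sectional area (purely illustrative).
    alpha, area = p
    return alpha * np.sqrt(area)

p0 = np.array([1e-4, 0.5])                       # nominal parameter values (assumed)
cov = np.diag([(2e-5) ** 2, (0.1) ** 2])         # parameter covariance (assumed)

# Central-difference Jacobian of f at p0.
eps = 1e-6
jac = np.zeros(2)
for i in range(2):
    dp = np.zeros(2)
    dp[i] = eps * max(abs(p0[i]), 1.0)
    jac[i] = (f(p0 + dp) - f(p0 - dp)) / (2 * dp[i])

# First-order (delta-method) variance: Var(f) ~ J C J^T.
var_f = jac @ cov @ jac
print(f"f = {f(p0):.3e}, 1-sigma ~ {np.sqrt(var_f):.3e}")
```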
Oh, Hyun Jun; Yang, Il-Hyung
2016-01-01
Objectives: To propose a novel method for determining the three-dimensional (3D) root apex position of maxillary teeth using a two-dimensional (2D) panoramic radiograph image and a 3D virtual maxillary cast model. Methods: The subjects were 10 adult orthodontic patients treated with non-extraction. The multiple camera matrices were used to define transformative relationships between tooth images of the 2D panoramic radiographs and the 3D virtual maxillary cast models. After construction of the root apex-specific projective (RASP) models, overdetermined equations were used to calculate the 3D root apex position with a direct linear transformation algorithm and the known 2D co-ordinates of the root apex in the panoramic radiograph. For verification of the estimated 3D root apex position, the RASP and 3D-CT models were superimposed using a best-fit method. Then, the values of estimation error (EE; mean, standard deviation, minimum error and maximum error) between the two models were calculated. Results: The intraclass correlation coefficient values exhibited good reliability for the landmark identification. The mean EE of all root apices of maxillary teeth was 1.88 mm. The EE values, in descending order, were as follows: canine, 2.30 mm; first premolar, 1.93 mm; second premolar, 1.91 mm; first molar, 1.83 mm; second molar, 1.82 mm; lateral incisor, 1.80 mm; and central incisor, 1.53 mm. Conclusions: Camera calibration technology allows reliable determination of the 3D root apex position of maxillary teeth without the need for 3D-CT scan or tooth templates. PMID:26317151
NASA Astrophysics Data System (ADS)
Hosseini-Hashemi, Shahrokh; Sepahi-Boroujeni, Amin; Sepahi-Boroujeni, Saeid
2018-04-01
Normal impact performance of a system including a fullerene molecule and a single-layered graphene sheet is studied in the present paper. Firstly, through a mathematical approach, a new contact law is derived to describe the overall non-bonding interaction forces of the "hollow indenter-target" system. Preliminary verifications show that the derived contact law gives a reliable picture of the force field of the system which is in good agreement with the results of molecular dynamics (MD) simulations. Afterwards, the equation of the transversal motion of the graphene sheet is formulated on the basis of both the nonlocal theory of elasticity and the assumptions of classical plate theory. Then, to derive the dynamic behavior of the system, a set including the proposed contact law and the equations of motion of both the graphene sheet and the fullerene molecule is solved numerically. In order to evaluate the outcomes of this method, the problem is also modeled by MD simulation. Despite intrinsic differences between the analytical and MD methods, as well as various errors arising due to the transient nature of the problem, acceptable agreement is established between the analytical and MD outcomes. As a result, the proposed analytical method can be reliably used to address similar impact problems. Furthermore, it is found that a single-layered graphene sheet is capable of trapping fullerenes approaching with low velocities. Otherwise, in the case of rebound, the sheet effectively absorbs a predominant portion of the fullerene energy.
Limit states and reliability-based pipeline design. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zimmerman, T.J.E.; Chen, Q.; Pandey, M.D.
1997-06-01
This report provides the results of a study to develop limit states design (LSD) procedures for pipelines. Limit states design, also known as load and resistance factor design (LRFD), provides a unified approach to dealing with all relevant failure modes and combinations of concern. It explicitly accounts for the uncertainties that naturally occur in the determination of the loads which act on a pipeline and in the resistance of the pipe to failure. The load and resistance factors used are based on reliability considerations; however, the designer is not faced with carrying out probabilistic calculations. This work is done during development and periodic updating of the LSD document. This report provides background information concerning limit states and reliability-based design (Section 2), gives the limit states design procedures that were developed (Section 3) and provides results of the reliability analyses that were undertaken in order to partially calibrate the LSD method (Section 4). An appendix contains LSD design examples in order to demonstrate use of the method. Section 3, Limit States Design, has been written in the format of a recommended practice. It has been structured so that, in future, it can easily be converted to a limit states design code format. Throughout the report, figures and tables are given at the end of each section, with the exception of Section 3, where to facilitate understanding of the LSD method, they have been included with the text.
Anderson, Donald D; Segal, Neil A; Kern, Andrew M; Nevitt, Michael C; Torner, James C; Lynch, John A
2012-01-01
Recent findings suggest that contact stress is a potent predictor of subsequent symptomatic osteoarthritis development in the knee. However, much larger numbers of knees (likely on the order of hundreds, if not thousands) need to be reliably analyzed to achieve the statistical power necessary to clarify this relationship. This study assessed the reliability of new semiautomated computational methods for estimating contact stress in knees from large population-based cohorts. Ten knees of subjects from the Multicenter Osteoarthritis Study were included. Bone surfaces were manually segmented from sequential 1.0 Tesla magnetic resonance imaging slices by three individuals on two nonconsecutive days. Four individuals then registered the resulting bone surfaces to corresponding bone edges on weight-bearing radiographs, using a semi-automated algorithm. Discrete element analysis methods were used to estimate contact stress distributions for each knee. Segmentation and registration reliabilities (day-to-day and interrater) for peak and mean medial and lateral tibiofemoral contact stress were assessed with Shrout-Fleiss intraclass correlation coefficients (ICCs). The segmentation and registration steps of the modeling approach were found to have excellent day-to-day (ICC 0.93-0.99) and good inter-rater reliability (0.84-0.97). This approach for estimating compartment-specific tibiofemoral contact stress appears to be sufficiently reliable for use in large population-based cohorts.
NASA Astrophysics Data System (ADS)
Zularisam, A. W.; Wahida, Norul
2017-07-01
Nickel (II) is one of the most toxic contaminants, recognised as a carcinogenic and mutagenic agent, and needs complete removal from wastewater before disposal. In the present study, a novel adsorbent called mesoparticle graphene sand composite (MGSCaps) was synthesised from arenga palm sugar and sand by using a green, simple, low-cost and efficient methodology. Subsequently, this composite was characterised and identified using field emission scanning electron microscopy (FESEM), X-ray diffraction (XRD) and elemental mapping (EM). The adsorption process was investigated and optimised under experimental parameters such as pH, contact time and bed depth. The results showed that the interaction between nickel (II) and MGSCaps was not an ion-to-ion interaction, hence removal of Ni (II) can be applied at any pH. The results also showed that the higher the contact time and bed depth, the higher the removal percentage of nickel (II). Adsorption kinetic data were modelled using pseudo-first-order and pseudo-second-order equations. The experimental results indicated that the pseudo-second-order kinetic equation was the most suitable to describe the experimental adsorption kinetics data, with a maximum of 40% nickel (II) removal in the first hour. The equilibrium adsorption data were fitted with the Langmuir and Freundlich isotherm equations. The data suggested that the best-fitting model is the Freundlich equation, with correlation R² = 0.9974. Based on the obtained results, it can be stated that the adsorption method using MGSCaps is an efficient, facile and reliable method for the removal of nickel (II) from wastewater.
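As a hedged illustration of the kinetic fitting described above, the sketch below fits a pseudo-second-order model to toy uptake data with SciPy; the time series, initial guesses, and units are assumptions, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Pseudo-second-order kinetic model: q(t) = k2*qe^2*t / (1 + k2*qe*t),
# where qe is the equilibrium uptake and k2 the rate constant.
def pseudo_second_order(t, qe, k2):
    return (k2 * qe ** 2 * t) / (1.0 + k2 * qe * t)

t = np.array([5, 10, 20, 30, 60, 90, 120], dtype=float)   # contact time, min (toy)
q = np.array([1.2, 2.0, 3.1, 3.7, 4.5, 4.8, 4.9])          # uptake, mg/g (toy)

popt, _ = curve_fit(pseudo_second_order, t, q, p0=[5.0, 0.01])
q_pred = pseudo_second_order(t, *popt)
r2 = 1 - np.sum((q - q_pred) ** 2) / np.sum((q - q.mean()) ** 2)
print(f"qe = {popt[0]:.2f} mg/g, k2 = {popt[1]:.4f} g/(mg·min), R^2 = {r2:.4f}")
```

The Freundlich isotherm, q = Kf * C**(1/n), can be fitted to the equilibrium data with the same curve_fit call, swapping the model function.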
Probabilistic Design Storm Method for Improved Flood Estimation in Ungauged Catchments
NASA Astrophysics Data System (ADS)
Berk, Mario; Špačková, Olga; Straub, Daniel
2017-12-01
The design storm approach with event-based rainfall-runoff models is a standard method for design flood estimation in ungauged catchments. The approach is conceptually simple and computationally inexpensive, but the underlying assumptions can lead to flawed design flood estimations. In particular, the implied average recurrence interval (ARI) neutrality between rainfall and runoff neglects uncertainty in other important parameters, leading to an underestimation of design floods. The selection of a single representative critical rainfall duration in the analysis leads to an additional underestimation of design floods. One way to overcome these nonconservative approximations is the use of a continuous rainfall-runoff model, which is associated with significant computational cost and requires rainfall input data that are often not readily available. As an alternative, we propose a novel Probabilistic Design Storm method that combines event-based flood modeling with basic probabilistic models and concepts from reliability analysis, in particular the First-Order Reliability Method (FORM). The proposed methodology overcomes the limitations of the standard design storm approach, while utilizing the same input information and models without excessive computational effort. Additionally, the Probabilistic Design Storm method allows deriving so-called design charts, which summarize representative design storm events (combinations of rainfall intensity and other relevant parameters) for floods with different return periods. These can be used to study the relationship between rainfall and runoff return periods. We demonstrate, investigate, and validate the method by means of an example catchment located in the Bavarian Pre-Alps, in combination with a simple hydrological model commonly used in practice.
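To make the FORM step concrete, here is a minimal sketch of the Hasofer-Lind-Rackwitz-Fiessler (HLRF) iteration applied to a toy rainfall-runoff limit state; the limit-state function, variable transformations, and capacity threshold are invented for illustration and are not the catchment model used in the paper.

```python
import numpy as np
from scipy.stats import norm

# Limit state g(u) <= 0 means failure; u are independent standard-normal
# variables mapped to (hypothetical) physical rainfall parameters.
def g(u):
    rain = 40.0 + 10.0 * u[0]      # rainfall depth (assumed normal)
    cn = 70.0 + 5.0 * u[1]         # runoff parameter (assumed normal)
    peak = 0.05 * rain * cn        # toy rainfall-runoff response
    return 200.0 - peak            # failure when peak exceeds 200

def grad_g(u, eps=1e-6):
    return np.array([(g(u + eps * e) - g(u - eps * e)) / (2 * eps)
                     for e in np.eye(len(u))])

u = np.zeros(2)
for _ in range(50):                # HLRF fixed-point iteration
    gval, gg = g(u), grad_g(u)
    u_new = (gg @ u - gval) * gg / (gg @ gg)
    if np.linalg.norm(u_new - u) < 1e-8:
        u = u_new
        break
    u = u_new

beta = np.linalg.norm(u)           # reliability index
print(f"beta = {beta:.3f}, Pf ~ {norm.cdf(-beta):.2e}")
```

The converged design point (the most likely failing combination of rainfall intensity and the other parameter) is exactly the kind of representative event the proposed design charts summarise.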
NASA Astrophysics Data System (ADS)
Park, Sahnggi; Kim, Kap-Joong; Kim, Duk-Jun; Kim, Gyungock
2009-02-01
Third order ring resonators are designed and their resonance frequency deviations are analyzed experimentally by processing them with E-beam lithography and ICP etching in a CMOS nano-Fabrication laboratory. We developed a reliable method to identify and reduce experimentally the degree of deviation of each ring resonance frequency before completion of the fabrication process. The identified deviations can be minimized by the way to be presented in this paper. It is expected that this method will provide a significant clue to make a high order multi-channel ring resonators.
78 FR 21929 - Transmission Relay Loadability Reliability Standard; Notice of Compliance Filing
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-12
... Relay Loadability Reliability Standard; Notice of Compliance Filing Take notice that on February 19... Relay Loadability Reliability Standard, Order No. 733, 130 FERC ¶ 61,221 (2010) (Order No. 733); order..., 136 FERC ¶ 61,185 (2011). \2\ Transmission Relay Loadability Reliability Standard, 138 FERC ¶ 61,197...
Gillespie, Alex; Reader, Tom W
2016-01-01
Background Letters of complaint written by patients and their advocates reporting poor healthcare experiences represent an under-used data source. The lack of a method for extracting reliable data from these heterogeneous letters hinders their use for monitoring and learning. To address this gap, we report on the development and reliability testing of the Healthcare Complaints Analysis Tool (HCAT). Methods HCAT was developed from a taxonomy of healthcare complaints reported in a previously published systematic review. It introduces the novel idea that complaints should be analysed in terms of severity. Recruiting three groups of educated lay participants (n=58, n=58, n=55), we refined the taxonomy through three iterations of discriminant content validity testing. We then supplemented this refined taxonomy with explicit coding procedures for seven problem categories (each with four levels of severity), stage of care and harm. These combined elements were further refined through iterative coding of a UK national sample of healthcare complaints (n= 25, n=80, n=137, n=839). To assess reliability and accuracy for the resultant tool, 14 educated lay participants coded a referent sample of 125 healthcare complaints. Results The seven HCAT problem categories (quality, safety, environment, institutional processes, listening, communication, and respect and patient rights) were found to be conceptually distinct. On average, raters identified 1.94 problems (SD=0.26) per complaint letter. Coders exhibited substantial reliability in identifying problems at four levels of severity; moderate and substantial reliability in identifying stages of care (except for ‘discharge/transfer’ that was only fairly reliable) and substantial reliability in identifying overall harm. Conclusions HCAT is not only the first reliable tool for coding complaints, it is the first tool to measure the severity of complaints. It facilitates service monitoring and organisational learning and it enables future research examining whether healthcare complaints are a leading indicator of poor service outcomes. HCAT is freely available to download and use. PMID:26740496
An intelligent detecting system for permeability prediction of MBR.
Han, Honggui; Zhang, Shuo; Qiao, Junfei; Wang, Xiaoshuang
2018-01-01
The membrane bioreactor (MBR) has been widely used to purify wastewater in wastewater treatment plants. However, a critical difficulty of the MBR is membrane fouling. To reduce membrane fouling, in this work, an intelligent detecting system is developed to evaluate the performance of the MBR by predicting the membrane permeability. This intelligent detecting system consists of two main parts. First, a soft computing method, based on the partial least squares method and the recurrent fuzzy neural network, is designed to find the nonlinear relations between the membrane permeability and the other variables. Second, a completely new platform connecting the sensors and the software is built, in order to enable the intelligent detecting system to handle complex algorithms. Finally, the simulation and experimental results demonstrate the reliability and effectiveness of the proposed intelligent detecting system, underlining its potential for online membrane permeability prediction and the detection of membrane fouling in MBRs.
Fusion of Local Statistical Parameters for Buried Underwater Mine Detection in Sonar Imaging
NASA Astrophysics Data System (ADS)
Maussang, F.; Rombaut, M.; Chanussot, J.; Hétet, A.; Amate, M.
2008-12-01
Detection of buried underwater objects, and especially mines, is a current crucial strategic task. Images provided by sonar systems allowing to penetrate in the sea floor, such as the synthetic aperture sonars (SASs), are of great interest for the detection and classification of such objects. However, the signal-to-noise ratio is fairly low and advanced information processing is required for a correct and reliable detection of the echoes generated by the objects. The detection method proposed in this paper is based on a data-fusion architecture using the belief theory. The input data of this architecture are local statistical characteristics extracted from SAS data corresponding to the first-, second-, third-, and fourth-order statistical properties of the sonar images, respectively. The interest of these parameters is derived from a statistical model of the sonar data. Numerical criteria are also proposed to estimate the detection performances and to validate the method.
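The first- to fourth-order local statistical characteristics referred to above are, in generic terms, sliding-window estimates of mean, variance, skewness, and kurtosis. A minimal sketch follows; the window size and the Rayleigh-speckle test image are assumptions, and the authors' exact parameter definitions may differ.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from scipy.ndimage import uniform_filter

def local_statistics(img, win=9):
    # 1st and 2nd order via fast box filters.
    mean = uniform_filter(img, win)
    var = uniform_filter(img ** 2, win) - mean ** 2
    # 3rd and 4th order with an explicit (slow but readable) window loop.
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    sk = np.zeros_like(img)
    ku = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = padded[i:i + win, j:j + win].ravel()
            sk[i, j] = skew(w)
            ku[i, j] = kurtosis(w)
    return mean, var, sk, ku

img = np.random.rayleigh(scale=1.0, size=(64, 64))   # speckle-like toy image
mean, var, sk, ku = local_statistics(img)
```

These four maps are the kind of per-pixel features a belief-theory fusion stage can then combine into a detection decision.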
Adaptive resolution simulation of oligonucleotides
NASA Astrophysics Data System (ADS)
Netz, Paulo A.; Potestio, Raffaello; Kremer, Kurt
2016-12-01
Nucleic acids are characterized by a complex hierarchical structure and a variety of interaction mechanisms with other molecules. These features suggest the need of multiscale simulation methods in order to grasp the relevant physical properties of deoxyribonucleic acid (DNA) and RNA using in silico experiments. Here we report an implementation of a dual-resolution modeling of a DNA oligonucleotide in physiological conditions; in the presented setup only the nucleotide molecule and the solvent and ions in its proximity are described at the atomistic level; in contrast, the water molecules and ions far from the DNA are represented as computationally less expensive coarse-grained particles. Through the analysis of several structural and dynamical parameters, we show that this setup reliably reproduces the physical properties of the DNA molecule as observed in reference atomistic simulations. These results represent a first step towards a realistic multiscale modeling of nucleic acids and provide a quantitatively solid ground for their simulation using dual-resolution methods.
PRESAGE® as a new calibration method for high intensity focused ultrasound therapy
NASA Astrophysics Data System (ADS)
Costa, M.; McErlean, C.; Rivens, I.; Adamovics, J.; Leach, M. O.; ter Haar, G.; Doran, S. J.
2015-01-01
High Intensity Focused Ultrasound (HIFU) is a non-invasive cancer therapy that makes use of the mainly thermal effects of ultrasound to destroy tissue. In order to achieve reliable treatment planning, it is necessary to characterise the ultrasound source (transducer) and to understand how the wave propagates in tissue and how the energy is deposited in the focal region. This novel exploratory study investigated how HIFU affects PRESAGE®, an optical phantom used for radiotherapy dosimetry, as a potentially rapid method of calibrating the transducer. Samples of two different formulations were exposed to focused ultrasound and imaged using Optical Computed Tomography. First results showed that PRESAGE® changes colour on ultrasound exposure (darker green regions were observed), with the alterations being related to the acoustic power and sample composition. Future work will involve quantification of these alterations and understanding how to relate them to the mechanisms of action of HIFU.
Reliability and cost analysis methods
NASA Technical Reports Server (NTRS)
Suich, Ronald C.
1991-01-01
In the design phase of a system, how does a design engineer or manager choose between a subsystem with .990 reliability and a more costly subsystem with .995 reliability? When is the increased cost justified? High reliability is not necessarily an end in itself but may be desirable in order to reduce the expected cost due to subsystem failure. However, this may not be the wisest use of funds since the expected cost due to subsystem failure is not the only cost involved. The subsystem itself may be very costly. We should not consider either the cost of the subsystem or the expected cost due to subsystem failure separately but should minimize the total of the two costs, i.e., the total of the cost of the subsystem plus the expected cost due to subsystem failure. This final report discusses the Combined Analysis of Reliability, Redundancy, and Cost (CARRAC) methods which were developed under Grant Number NAG 3-1100 from the NASA Lewis Research Center. CARRAC methods and a CARRAC computer program employ five models which can be used to cover a wide range of problems. The models contain an option which can include repair of failed modules.
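A hedged numerical illustration of the trade-off described above (subsystem cost plus expected cost due to subsystem failure); the reliabilities, prices, and failure-consequence cost are invented for the example and do not come from the CARRAC report.

```python
# Total-cost comparison of two candidate subsystems: minimize subsystem cost
# plus expected cost due to subsystem failure. All numbers are illustrative.
candidates = {
    "A (R = 0.990)": {"reliability": 0.990, "cost": 1.0e6},
    "B (R = 0.995)": {"reliability": 0.995, "cost": 1.6e6},
}
cost_of_failure = 5.0e7   # assumed consequence cost if the subsystem fails

for name, c in candidates.items():
    expected_failure_cost = (1.0 - c["reliability"]) * cost_of_failure
    total = c["cost"] + expected_failure_cost
    print(f"{name}: total expected cost = ${total:,.0f}")

# With these numbers, B avoids only 0.005 * 5e7 = $250k of expected failure
# cost while costing $600k more, so the cheaper, less reliable subsystem A
# minimizes total expected cost.
```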
Atomic Decay Data for Modeling K Lines of Iron Peak and Light Odd-Z Elements*
NASA Technical Reports Server (NTRS)
Palmeri, P.; Quinet, P.; Mendoza, C.; Bautista, M. A.; Garcia, J.; Witthoeft, M. C.; Kallman, T. R.
2012-01-01
Complete data sets of level energies, transition wavelengths, A-values, radiative and Auger widths and fluorescence yields for K-vacancy levels of the F, Na, P, Cl, K, Sc, Ti, V, Cr, Mn, Co, Cu and Zn isonuclear sequences have been computed by a Hartree-Fock method that includes relativistic corrections as implemented in Cowan's atomic structure computer suite. The atomic parameters for more than 3 million fine-structure K lines have been determined. Ions with electron number N greater than 9 are treated for the first time, and detailed comparisons with available measurements and theoretical data for ions with N less than or equal to 9 are carried out in order to estimate reliable accuracy ratings.
Equilibrium properties and phase diagram of two-dimensional Yukawa systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartmann, P.; Donko, Z.; Kutasi, K.
Properties of two-dimensional strongly coupled Yukawa systems are explored through molecular dynamics simulations. An effective coupling coefficient γ* for the liquid phase is introduced on the basis of the constancy of the first peak amplitude of the pair-correlation functions. Thermodynamic quantities are calculated from the pair-correlation function. The solid-liquid transition of the system is investigated through the analysis of the bond-angular order parameter. The static structure function satisfies the consistency relation, attesting to the reliability of the computational method. The response is shown to be governed by the correlational part of the inverse compressibility. An analysis of the velocity autocorrelation demonstrates that the latter also exhibits a universal behavior.
Train integrity detection risk analysis based on PRISM
NASA Astrophysics Data System (ADS)
Wen, Yuan
2018-04-01
GNSS based Train Integrity Monitoring System (TIMS) is an effective and low-cost detection scheme for train integrity detection. However, as an external auxiliary system of CTCS, GNSS may be influenced by external environments, such as uncertainty of wireless communication channels, which may lead to the failure of communication and positioning. In order to guarantee the reliability and safety of train operation, a risk analysis method of train integrity detection based on PRISM is proposed in this article. First, we analyze the risk factors (in GNSS communication process and the on-board communication process) and model them. Then, we evaluate the performance of the model in PRISM based on the field data. Finally, we discuss how these risk factors influence the train integrity detection process.
About Non-Line-Of-Sight Satellite Detection and Exclusion in a 3D Map-Aided Localization Algorithm
Peyraud, Sébastien; Bétaille, David; Renault, Stéphane; Ortiz, Miguel; Mougel, Florian; Meizel, Dominique; Peyret, François
2013-01-01
Reliable GPS positioning in city environments is a key issue: signals are prone to multipath, with poor satellite geometry in many streets. Using a 3D urban model to forecast satellite visibility in urban contexts in order to improve GPS localization is the main topic of the present article. A virtual image processing step that detects and eliminates possible faulty measurements is the core of this method. This image is generated using the position estimated a priori by the navigation process itself, under road constraints. This position is then updated by measurements to line-of-sight satellites only. This closed-loop real-time processing has shown promising first full-scale test results. PMID:23344379
Soil variability in engineering applications
NASA Astrophysics Data System (ADS)
Vessia, Giovanna
2014-05-01
Natural geomaterials, such as soils and rocks, show spatial variability and heterogeneity of physical and mechanical properties. These can be measured by in-field and laboratory testing. The heterogeneity concerns different values of litho-technical parameters pertaining to similar lithological units placed close to each other. On the contrary, the variability is inherent to the formation and evolution processes experienced by each geological unit (homogeneous geomaterials on average) and captured as a spatial structure of fluctuation of physical property values about their mean trend, e.g. the unit weight, the hydraulic permeability, the friction angle, the cohesion, among others. The preceding spatial variations shall be managed by engineering models to accomplish reliable designing of structures and infrastructures. Matheron (1962) introduced Geostatistics as the most comprehensive tool to manage spatial correlation of parameter measures used in a wide range of earth science applications. In the field of engineering geology, Vanmarcke (1977) developed the first pioneering attempts to describe and manage the inherent variability in geomaterials, although Terzaghi (1943) had already highlighted that spatial fluctuations of physical and mechanical parameters used in geotechnical designing cannot be neglected. A few years later, Mandelbrot (1983) and Turcotte (1986) interpreted the internal arrangement of geomaterials according to Fractal Theory. In the same years, Vanmarcke (1983) proposed the Random Field Theory, providing mathematical tools to deal with the inherent variability of each geological unit or stratigraphic succession that can be treated as one material. In this approach, measurement fluctuations of physical parameters are interpreted through the spatial variability structure, consisting of the correlation function and the scale of fluctuation. Fenton and Griffiths (1992) combined random field simulation with the finite element method to produce the Random Finite Element Method (RFEM). This method has been used to investigate the random behavior of soils in the context of a variety of classical geotechnical problems. Afterward, some following studies collected the worldwide variability values of many technical parameters of soils (Phoon and Kulhawy 1999a) and their spatial correlation functions (Phoon and Kulhawy 1999b). In Italy, Cherubini et al. (2007) calculated the spatial variability structure of sandy and clayey soils from standard cone penetration test readings. The large extent of the worldwide measured spatial variability of soils and rocks heavily affects the reliability of geotechnical designing, as do other uncertainties introduced by testing devices and engineering models. So far, several methods have been provided to deal with the preceding sources of uncertainties in engineering designing models (e.g. First Order Reliability Method, Second Order Reliability Method, Response Surface Method, High Dimensional Model Representation, etc.). Nowadays, the efforts in this field have been focusing on (1) measuring the spatial variability of different rocks and soils and (2) developing numerical models that take the spatial variability into account as an additional physical variable. References: Cherubini C., Vessia G. and Pula W. 2007. Statistical soil characterization of Italian sites for reliability analyses. Proc. 2nd Int. Workshop on Characterization and Engineering Properties of Natural Soils, 3-4: 2681-2706. Griffiths D.V. and Fenton G.A. 1993.
Seepage beneath water retaining structures founded on spatially random soil, Géotechnique, 43(6): 577-587. Mandelbrot B.B. 1983. The Fractal Geometry of Nature. San Francisco: W H Freeman. Matheron G. 1962. Traité de Géostatistique appliquée. Tome 1, Editions Technip, Paris, 334 p. Phoon K.K. and Kulhawy F.H. 1999a. Characterization of geotechnical variability. Can Geotech J, 36(4): 612-624. Phoon K.K. and Kulhawy F.H. 1999b. Evaluation of geotechnical property variability. Can Geotech J, 36(4): 625-639. Terzaghi K. 1943. Theoretical Soil Mechanics. New York: John Wiley and Sons. Turcotte D.L. 1986. Fractals and fragmentation. J Geophys Res, 91: 1921-1926. Vanmarcke E.H. 1977. Probabilistic modeling of soil profiles. J Geotech Eng Div, ASCE, 103: 1227-1246. Vanmarcke E.H. 1983. Random fields: analysis and synthesis. MIT Press, Cambridge.
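To make the random-field idea in the entry above concrete, here is a minimal sketch that generates a one-dimensional stationary Gaussian field with an exponential (Markovian) correlation model via Cholesky factorisation of the covariance matrix; the soil property, its mean, standard deviation, and scale of fluctuation are assumed values chosen only for illustration.

```python
import numpy as np

def gaussian_field_1d(z, mean, std, theta, rng):
    """z: depths (m); theta: scale of fluctuation (m); exponential correlation."""
    dz = np.abs(z[:, None] - z[None, :])
    cov = std ** 2 * np.exp(-2.0 * dz / theta)              # Markovian model
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(z)))    # jitter for stability
    return mean + L @ rng.standard_normal(len(z))

rng = np.random.default_rng(0)
depth = np.linspace(0.0, 10.0, 101)
# e.g. undrained shear strength: mean 50 kPa, COV 20 %, theta = 1.5 m (assumed)
su = gaussian_field_1d(depth, mean=50.0, std=10.0, theta=1.5, rng=rng)
```

Each realisation of such a field can then be fed into a finite element model, which is the essence of the RFEM approach cited above.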
Hubert, C; Lebrun, P; Houari, S; Ziemons, E; Rozet, E; Hubert, Ph
2014-01-01
The understanding of the method is a major concern when developing a stability-indicating method, and even more so when dealing with impurity assays from complex matrices. In the presented case study, a Quality-by-Design approach was applied in order to optimize a routinely used method. An analytical issue that occurred at the last stage of a long-term stability study, in which unexpected impurities perturbed the monitoring of characterized impurities, needed to be resolved. A compliant Quality-by-Design (QbD) methodology based on a Design of Experiments (DoE) approach was evaluated within the framework of a Liquid Chromatography (LC) method. This approach allows the investigation of Critical Process Parameters (CPPs), which have an impact on Critical Quality Attributes (CQAs) and, consequently, on LC selectivity. Using polynomial regression response modeling as well as Monte Carlo simulations for error propagation, the Design Space (DS) was computed in order to determine robust working conditions for the developed stability-indicating method. This QbD-compliant development was conducted in two phases, allowing the Design Space knowledge acquired during the first phase to be used to define the experimental domain of the second phase, which constitutes a learning process. The selected working condition was then fully validated using accuracy profiles based on statistical tolerance intervals in order to evaluate the reliability of the results generated by this LC/ESI-MS stability-indicating method. A comparison was made between the traditional Quality-by-Testing (QbT) approach and the QbD strategy, highlighting the benefit of the QbD strategy in the case of an unexpected impurities issue. On this basis, the advantages of a systematic use of the QbD methodology were discussed. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhao, Ke; Ji, Yaoyao; Pan, Boan; Li, Ting
2018-02-01
Continuous-wave near-infrared spectroscopy (NIRS) devices have been highlighted for their clinical and health-care applications in noninvasive hemodynamic measurement. The baseline shift of the measured signal has attracted considerable attention because of its clinical importance; nonetheless, currently published correction methods have low reliability or high variability. In this study, we identified a polynomial fitting function well suited to baseline removal in NIRS. Unlike previous studies on baseline correction for near-infrared spectroscopic evaluation of non-hemodynamic particles, we focused on baseline fitting and the corresponding correction method for NIRS and found that a 4th-order polynomial fitting function outperforms the 2nd-order function reported in previous research. Through experimental tests of hemodynamic parameters on a solid phantom, we compared the fitting quality of the 4th-order and 2nd-order polynomials by recording and analyzing the R values and the SSE (sum of squares due to error) values. The R values of the 4th-order polynomial fits are all higher than 0.99 and significantly higher than the corresponding 2nd-order values, while the SSE values of the 4th order are significantly smaller than those of the 2nd order. By using this high-reliability, low-variability 4th-order polynomial fitting function, the baseline can be removed online to obtain more accurate NIRS measurements.
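The comparison of 2nd- and 4th-order baseline fits by R and SSE can be sketched in a few lines of Python. The drifting synthetic signal and parameter values below are placeholders, not the phantom data from the study.

```python
import numpy as np

def fit_polynomial_baseline(t, signal, order):
    """Fit a polynomial baseline and return (baseline, R, SSE)."""
    coeffs = np.polyfit(t, signal, order)           # least-squares polynomial fit
    baseline = np.polyval(coeffs, t)
    sse = float(np.sum((signal - baseline) ** 2))   # sum of squares due to error
    r = float(np.corrcoef(signal, baseline)[0, 1])  # correlation between fit and data
    return baseline, r, sse

# Synthetic drifting baseline plus noise (placeholder for a real NIRS recording).
t = np.linspace(0.0, 10.0, 2000)                    # time in minutes
drift = 0.2 * t - 0.04 * t**2 + 0.003 * t**3
signal = drift + 0.05 * np.random.default_rng(1).standard_normal(t.size)

for order in (2, 4):
    _, r, sse = fit_polynomial_baseline(t, signal, order)
    print(f"order {order}: R = {r:.4f}, SSE = {sse:.3f}")

# Baseline-corrected signal, as one might compute before deriving hemodynamic parameters.
baseline4, _, _ = fit_polynomial_baseline(t, signal, 4)
corrected = signal - baseline4
```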
Total Longitudinal Moment Calculation and Reliability Analysis of Yacht Structures
NASA Astrophysics Data System (ADS)
Zhi, Wenzheng; Lin, Shaofen
In order to check the reliability of a yacht built of FRP (fiber-reinforced plastic) materials, this paper analyzes the vertical forces and the calculation method for the overall longitudinal bending moment acting on the yacht. In particular, the paper focuses on the effect of speed on the still-water bending moment. Then, considering the mechanical properties of cap-type stiffeners in composite materials, the ultimate bearing capacity of the yacht is worked out, and finally the reliability of the yacht is calculated using response surface methodology. The results can be used in yacht design and operation.
Shrivastava, Vimal K; Londhe, Narendra D; Sonawane, Rajendra S; Suri, Jasjit S
2016-04-01
Psoriasis is an autoimmune skin disease with red and scaly plaques on the skin, affecting about 125 million people worldwide. Currently, dermatologists use visual and haptic methods to diagnose disease severity, which does not help them in stratification and risk assessment of the lesion stage and grade. Further, current methods add complexity during the monitoring and follow-up phase. These diagnostic tools lead to subjectivity in decision making and are unreliable and laborious. This paper presents a first comparative performance study of its kind using a principal component analysis (PCA)-based CADx system for psoriasis risk stratification and image classification utilizing: (i) 11 higher order spectra (HOS) features, (ii) 60 texture features, and (iii) 86 color features, and their seven combinations. In aggregate, 540 image samples (270 healthy and 270 diseased) from 30 psoriasis patients of Indian ethnic origin are used in our database. Machine learning using PCA is used for dominant feature selection, and the selected features are then fed to a support vector machine (SVM) classifier to obtain optimized performance. Three different protocols are implemented using the three kinds of feature sets. A reliability index of the CADx is computed. Among all feature combinations, the CADx system shows optimal performance of 100% accuracy, 100% sensitivity and specificity when all three sets of features are combined. Further, our experimental results with increasing data size show that all feature combinations yield a high reliability index throughout the PCA cutoffs, except the color feature set and the combination of color and texture feature sets. HOS features are powerful for psoriasis disease classification and stratification. Although the three feature sets (HOS, texture, and color) perform competitively on their own, the machine learning system performs best when they are combined. The system is fully automated, reliable and accurate. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
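The PCA-plus-SVM pipeline described above can be sketched with scikit-learn. The feature matrix, labels and retained-variance cutoff below are synthetic placeholders for illustration, not the psoriasis dataset or the study's exact protocol.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
# Placeholder feature matrix: 540 samples x 157 features (11 HOS + 60 texture + 86 color).
X = rng.standard_normal((540, 157))
y = np.repeat([0, 1], 270)          # 0 = healthy, 1 = diseased (synthetic labels)

# PCA keeps the dominant components; the 95% retained-variance cutoff is an assumption.
model = make_pipeline(StandardScaler(), PCA(n_components=0.95), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(model, X, y, cv=5)
print("CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```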
Pinchi, Vilma; De Luca, Federica; Ricciardi, Federico; Focardi, Martina; Piredda, Valentina; Mazzeo, Elena; Norelli, Gian-Aristide
2014-05-01
Paediatricians, radiologists, anthropologists and medico-legal specialists are often called as experts in order to provide age estimation (AE) for forensic purposes. The literature recommends performing the X-rays of the left hand and wrist (HW-XR) for skeletal age estimation. The method most frequently employed is the Greulich and Pyle (GP) method. In addition, the so-called bone-specific techniques are also applied including the method of Tanner Whitehouse (TW) in the latest versions TW2 and TW3. To compare skeletal age and chronological age in a large sample of children and adolescents using GP, TW2 and TW3 methods in order to establish which of these is the most reliable for forensic purposes. The sample consisted of 307 HW-XRs of Italian children or adolescents, 145 females and 162 males aged between 6 and 20 years. The radiographies were scored according to the GP, TW2RUS and TW3RUS methods by one investigator. The results' reliability was assessed using intraclass correlation coefficient. Wilcoxon signed-rank test and Student t-test were performed to search for significant differences between skeletal and chronological ages. The distributions of the differences between estimated and chronological age, by means of boxplots, show how median differences for TW3 and GP methods are generally very close to 0. Hypothesis tests' results were obtained, with respect to the sex, both for the entire group of individuals and people grouped by age. Results show no significant differences among estimated and chronological age for TW3 and, to a lesser extent, GP. The TW2 proved to be the worst of the three methods. Our results support the conclusion that the TW2 method is not reliable for AE for forensic purpose. The GP and TW3 methods have proved to be reliable in males. For females, the best method was found to be TW3. When performing forensic age estimation in subjects around 14 years of age, it could be advisable to use and associate the TW3 and GP methods. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Medelius, Petro; Jolley, Scott; Fitzpatrick, Lilliana; Vinje, Rubiela; Williams, Martha; Clayton, LaNetra; Roberson, Luke; Smith, Trent; Santiago-Maldonado, Edgardo
2007-01-01
Wiring is a major operational component on aerospace hardware that accounts for substantial weight and volumetric space. Over time wire insulation can age and fail, often leading to catastrophic events such as system failure or fire. The next generation of wiring must be reliable and sustainable over long periods of time. These features will be achieved by the development of a wire insulation capable of autonomous self-healing that mitigates failure before it reaches a catastrophic level. In order to develop a self-healing insulation material, three steps must occur. First, methods of bonding similar materials must be developed that are capable of being initiated autonomously. This process will lead to the development of a manual repair system for polyimide wire insulation. Second, ways to initiate these bonding methods that lead to materials that are similar to the primary insulation must be developed. Finally, steps one and two must be integrated to produce a material that has no residues from the process that degrades the insulating properties of the final repaired insulation. The self-healing technology, teamed with the ability to identify and locate damage, will greatly improve reliability and safety of electrical wiring of critical systems. This paper will address these topics, discuss the results of preliminary testing, and remaining development issues related to self-healing wire insulation.
Top-down and bottom-up definitions of human failure events in human reliability analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boring, Ronald Laurids
2014-10-01
In the probabilistic risk assessments (PRAs) used in the nuclear industry, human failure events (HFEs) are determined as a subset of hardware failures, namely those hardware failures that could be triggered by human action or inaction. This approach is top-down, starting with hardware faults and deducing human contributions to those faults. Elsewhere, more traditionally human factors driven approaches would tend to look at opportunities for human errors first in a task analysis and then identify which of those errors is risk significant. The intersection of top-down and bottom-up approaches to defining HFEs has not been carefully studied. Ideally, both approaches should arrive at the same set of HFEs. This question is crucial, however, as human reliability analysis (HRA) methods are generalized to new domains like oil and gas. The HFEs used in nuclear PRAs tend to be top-down—defined as a subset of the PRA—whereas the HFEs used in petroleum quantitative risk assessments (QRAs) often tend to be bottom-up—derived from a task analysis conducted by human factors experts. The marriage of these approaches is necessary in order to ensure that HRA methods developed for top-down HFEs are also sufficient for bottom-up applications.
NASA Astrophysics Data System (ADS)
Aggarwal, Anil Kr.; Kumar, Sanjeev; Singh, Vikram
2017-03-01
The binary-state assumption (i.e., success or failed state) used in conventional reliability analysis is inappropriate for complex industrial systems due to a lack of sufficient probabilistic information. For large complex systems, the uncertainty of each individual parameter increases the uncertainty of the system reliability. In this paper, the concept of fuzzy reliability is used for reliability analysis of the system, and the effects of the coverage factor and the failure and repair rates of subsystems on the fuzzy availability of a fault-tolerant crystallization system of a sugar plant are analyzed. Mathematical modeling of the system is carried out using the mnemonic rule to derive the Chapman-Kolmogorov differential equations, and these governing differential equations are solved with the fourth-order Runge-Kutta method.
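The numerical step of integrating Chapman-Kolmogorov state equations with a fourth-order Runge-Kutta scheme can be illustrated with a much smaller model than the crystallization system. The three-state structure and the failure/repair rates below are assumptions for illustration only.

```python
import numpy as np

# Generator matrix Q for a simple 3-state Markov availability model
# (state 0: both units up, 1: one unit failed, 2: system down); rates are assumed.
lam, mu = 0.02, 0.5            # failure and repair rates per hour
Q = np.array([[-2*lam,      2*lam,  0.0],
              [    mu, -(mu+lam),   lam],
              [   0.0,        mu,   -mu]])

def dPdt(P):
    """Chapman-Kolmogorov equations dP/dt = P Q for the state probability vector P."""
    return P @ Q

def rk4_step(P, h):
    """One classical 4th-order Runge-Kutta step of size h."""
    k1 = dPdt(P)
    k2 = dPdt(P + 0.5*h*k1)
    k3 = dPdt(P + 0.5*h*k2)
    k4 = dPdt(P + h*k3)
    return P + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)

P = np.array([1.0, 0.0, 0.0])  # start with both units operating
h = 0.1
for _ in range(int(1000/h)):
    P = rk4_step(P, h)
print("steady availability ~", P[0] + P[1])   # states 0 and 1 keep the system available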
Soft error evaluation and vulnerability analysis in Xilinx Zynq-7010 system-on chip
NASA Astrophysics Data System (ADS)
Du, Xuecheng; He, Chaohui; Liu, Shuhuan; Zhang, Yao; Li, Yonghong; Xiong, Ceng; Tan, Pengkang
2016-09-01
Radiation-induced soft errors are an increasingly important threat to the reliability of modern electronic systems. In order to evaluate the system-on-chip's reliability and soft errors, the fault tree analysis method was used in this work. The system fault tree was constructed based on the Xilinx Zynq-7010 All Programmable SoC, and the soft error rates of different components in the Zynq-7010 SoC were measured using an americium-241 alpha radiation source. Furthermore, parameters used to evaluate the system's reliability and safety, such as failure rate, unavailability and mean time to failure (MTTF), were calculated using Isograph Reliability Workbench 11.0. Based on the fault tree analysis of the system-on-chip, the critical blocks and the system reliability were evaluated through qualitative and quantitative analysis.
Differential Characteristics Based Iterative Multiuser Detection for Wireless Sensor Networks
Chen, Xiaoguang; Jiang, Xu; Wu, Zhilu; Zhuang, Shufeng
2017-01-01
High throughput, low latency and reliable communication has always been a hot topic for wireless sensor networks (WSNs) in various applications. Multiuser detection is widely used to suppress the bad effect of multiple access interference in WSNs. In this paper, a novel multiuser detection method based on differential characteristics is proposed to suppress multiple access interference. The proposed iterative receive method consists of three stages. Firstly, a differential characteristics function is presented based on the optimal multiuser detection decision function; then on the basis of differential characteristics, a preliminary threshold detection is utilized to find the potential wrongly received bits; after that an error bit corrector is employed to correct the wrong bits. In order to further lower the bit error ratio (BER), the differential characteristics calculation, threshold detection and error bit correction process described above are iteratively executed. Simulation results show that after only a few iterations the proposed multiuser detection method can achieve satisfactory BER performance. Besides, BER and near far resistance performance are much better than traditional suboptimal multiuser detection methods. Furthermore, the proposed iterative multiuser detection method also has a large system capacity. PMID:28212328
Statistical Bayesian method for reliability evaluation based on ADT data
NASA Astrophysics Data System (ADS)
Lu, Dawei; Wang, Lizhi; Sun, Yusheng; Wang, Xiaohong
2018-05-01
Accelerated degradation testing (ADT) is frequently conducted in the laboratory to predict a product's reliability under normal operating conditions. Two kinds of methods, degradation path models and stochastic process models, are utilized to analyze degradation data, and the latter is the more popular. However, limitations remain, such as an imprecise solution process and imprecise estimation of the degradation rate, which may affect the accuracy of the acceleration model and of the extrapolated values. Moreover, the commonly adopted solution to this problem, the Bayesian method, loses key information when unifying the degradation data. In this paper, a new data processing and parameter inference method based on the Bayesian method is proposed to handle degradation data and solve the problems above. First, a Wiener process and an acceleration model are chosen. Second, the initial values of the degradation model and the parameters of the prior and posterior distributions at each stress level are calculated by updating and iterating the parameter estimates. Third, the lifetime and reliability values are estimated on the basis of the estimated parameters. Finally, a case study is provided to demonstrate the validity of the proposed method. The results illustrate that the proposed method is effective and accurate in estimating the lifetime and reliability of a product.
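A simplified, non-Bayesian sketch of the Wiener-process step is given below: degradation paths are simulated, drift and diffusion are estimated by closed-form maximum likelihood, and reliability is extrapolated with the standard inverse-Gaussian first-passage formula. All values, and the use of plain MLE instead of the paper's prior/posterior updating, are assumptions for illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

def simulate_wiener(drift, sigma, dt, n_steps, n_units):
    """Simulate degradation paths of a Wiener process X(t) = drift*t + sigma*B(t)."""
    inc = drift*dt + sigma*np.sqrt(dt)*rng.standard_normal((n_units, n_steps))
    return np.cumsum(inc, axis=1)

def mle_wiener(paths, dt):
    """Closed-form MLE of drift and diffusion from equally spaced increments."""
    inc = np.diff(np.concatenate([np.zeros((paths.shape[0], 1)), paths], axis=1), axis=1)
    drift_hat = inc.mean() / dt
    sigma_hat = np.sqrt(((inc - drift_hat*dt) ** 2).mean() / dt)
    return drift_hat, sigma_hat

paths = simulate_wiener(drift=0.05, sigma=0.1, dt=1.0, n_steps=200, n_units=10)
drift_hat, sigma_hat = mle_wiener(paths, dt=1.0)

# Reliability at time t for failure threshold Df (inverse-Gaussian first-passage result).
Df, t = 15.0, 400.0
R = norm.cdf((Df - drift_hat*t) / (sigma_hat*np.sqrt(t))) \
    - np.exp(2*drift_hat*Df/sigma_hat**2) * norm.cdf(-(Df + drift_hat*t) / (sigma_hat*np.sqrt(t)))
print(f"drift = {drift_hat:.4f}, sigma = {sigma_hat:.4f}, R({t:.0f}) = {R:.4f}")
```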
Yong, Alan; Hough, Susan E.; Cox, Brady R.; Rathje, Ellen M.; Bachhuber, Jeff; Dulberg, Ranon; Hulslander, David; Christiansen, Lisa; and Abrams, Michael J.
2011-01-01
We report about a preliminary study to evaluate the use of semi-automated imaging analysis of remotely-sensed DEM and field geophysical measurements to develop a seismic-zonation map of Port-au-Prince, Haiti. For in situ data, VS30 values are derived from the MASW technique deployed in and around the city. For satellite imagery, we use an ASTER GDEM of Hispaniola. We apply both pixel- and object-based imaging methods on the ASTER GDEM to explore local topography (absolute elevation values) and classify terrain types such as mountains, alluvial fans and basins/near-shore regions. We assign NEHRP seismic site class ranges based on available VS30 values. A comparison of results from imagery-based methods to results from traditional geologic-based approaches reveals good overall correspondence. We conclude that image analysis of RS data provides reliable first-order site characterization results in the absence of local data and can be useful to refine detailed site maps with sparse local data.
Fast and Accurate Circuit Design Automation through Hierarchical Model Switching.
Huynh, Linh; Tagkopoulos, Ilias
2015-08-21
In computer-aided biological design, the trifecta of characterized part libraries, accurate models and optimal design parameters is crucial for producing reliable designs. As the number of parts and model complexity increase, however, it becomes exponentially more difficult for any optimization method to search the solution space, hence creating a trade-off that hampers efficient design. To address this issue, we present a hierarchical computer-aided design architecture that uses a two-step approach for biological design. First, a simple model of low computational complexity is used to predict circuit behavior and assess candidate circuit branches through branch-and-bound methods. Then, a complex, nonlinear circuit model is used for a fine-grained search of the reduced solution space, thus achieving more accurate results. Evaluation with a benchmark of 11 circuits and a library of 102 experimental designs with known characterization parameters demonstrates a speed-up of 3 orders of magnitude when compared to other design methods that provide optimality guarantees.
LEA Detection and Tracking Method for Color-Independent Visual-MIMO
Kim, Jai-Eun; Kim, Ji-Won; Kim, Ki-Doo
2016-01-01
Communication performance in the color-independent visual-multiple input multiple output (visual-MIMO) technique is deteriorated by light emitting array (LEA) detection and tracking errors in the received image because the image sensor included in the camera must be used as the receiver in the visual-MIMO system. In this paper, in order to improve detection reliability, we first set up the color-space-based region of interest (ROI) in which an LEA is likely to be placed, and then use the Harris corner detection method. Next, we use Kalman filtering for robust tracking by predicting the most probable location of the LEA when the relative position between the camera and the LEA varies. In the last step of our proposed method, the perspective projection is used to correct the distorted image, which can improve the symbol decision accuracy. Finally, through numerical simulation, we show the possibility of robust detection and tracking of the LEA, which results in a symbol error rate (SER) performance improvement. PMID:27384563
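The prediction step of the tracking stage can be sketched with a constant-velocity Kalman filter over the LEA centroid. The state layout, noise levels and synthetic detections below are assumptions for illustration; the Harris-corner detection, ROI selection and perspective-correction stages are omitted.

```python
import numpy as np

class KalmanTracker2D:
    """Constant-velocity Kalman filter for a 2D point with state (x, y, vx, vy)."""
    def __init__(self, x0, y0, dt=1.0, q=1e-2, r=2.0):
        self.x = np.array([x0, y0, 0.0, 0.0], dtype=float)
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.P = np.eye(4) * 10.0        # initial state covariance
        self.Q = np.eye(4) * q           # process noise (assumed)
        self.R = np.eye(2) * r           # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                # predicted LEA centroid for the next frame

    def update(self, z):
        y = np.asarray(z, float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

# Noisy per-frame detections of a moving LEA centroid (placeholder for corner-detector output).
rng = np.random.default_rng(3)
truth = np.stack([np.linspace(100, 200, 30), np.linspace(50, 80, 30)], axis=1)
meas = truth + rng.normal(scale=2.0, size=truth.shape)

tracker = KalmanTracker2D(*meas[0])
for z in meas[1:]:
    pred = tracker.predict()             # a search window can be centred on this prediction
    tracker.update(z)
print("final estimate:", tracker.x[:2])
```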
LEA Detection and Tracking Method for Color-Independent Visual-MIMO.
Kim, Jai-Eun; Kim, Ji-Won; Kim, Ki-Doo
2016-07-02
Communication performance in the color-independent visual-multiple input multiple output (visual-MIMO) technique is deteriorated by light emitting array (LEA) detection and tracking errors in the received image because the image sensor included in the camera must be used as the receiver in the visual-MIMO system. In this paper, in order to improve detection reliability, we first set up the color-space-based region of interest (ROI) in which an LEA is likely to be placed, and then use the Harris corner detection method. Next, we use Kalman filtering for robust tracking by predicting the most probable location of the LEA when the relative position between the camera and the LEA varies. In the last step of our proposed method, the perspective projection is used to correct the distorted image, which can improve the symbol decision accuracy. Finally, through numerical simulation, we show the possibility of robust detection and tracking of the LEA, which results in a symbol error rate (SER) performance improvement.
England, Andrew; Cassidy, Simon; Eachus, Peter; Dominguez, Alejandro; Hogg, Peter
2016-01-01
Objective: The aim of this article was to apply psychometric theory to develop and validate a visual grading scale for assessing the visual perception of digital image quality anteroposterior (AP) pelvis. Methods: Psychometric theory was used to guide scale development. Seven phantom and seven cadaver images of visually and objectively predetermined quality were used to help assess scale reliability and validity. 151 volunteers scored phantom images, and 184 volunteers scored cadaver images. Factor analysis and Cronbach's alpha were used to assess scale validity and reliability. Results: A 24-item scale was produced. Aggregated mean volunteer scores for each image correlated with the rank order of the visually and objectively predetermined image qualities. Scale items had good interitem correlation (≥0.2) and high factor loadings (≥0.3). Cronbach's alpha (reliability) revealed that the scale has acceptable levels of internal reliability for both phantom and cadaver images (α = 0.8 and 0.9, respectively). Factor analysis suggested that the scale is multidimensional (assessing multiple quality themes). Conclusion: This study represents the first full development and validation of a visual image quality scale using psychometric theory. It is likely that this scale will have clinical, training and research applications. Advances in knowledge: This article presents data to create and validate visual grading scales for radiographic examinations. The visual grading scale, for AP pelvis examinations, can act as a validated tool for future research, teaching and clinical evaluations of image quality. PMID:26943836
The Standardization of the Clock Drawing Test (CDT) for People with Stroke Using Rasch Analysis
Yoo, Doo Han; Hong, Deok Gi; Lee, Jae Shin
2014-01-01
[Purpose] The aim of this study was to standardize the clock drawing test (CDT) for people with stroke using Rasch analysis. [Subjects and Methods] Seventeen items of the CDT identified through a literature review were performed by 159 stroke patients. The data were analyzed with Winstep version 3.57 using the Rasch model to examine the unidimensionality of the items’ fit, the distribution of the items’ difficulty, and the reliability and appropriateness of the rating scale. [Result] Ten of the 159 participants (6.2%) were considered misfit subjects, and one item of the CDT was determined to be a misfit item based on Rasch analysis. The rating scales were judged as suitable because the observed averages showed an array of vertical orders and MNSQ values < 2. The separation index and reliability of the subjects (1.98, 0.80) and items (6.45, 0.97) showed relatively high values. [Conclusion] This study is the first to examine the CDT scale in stroke patients by Rasch analysis. The CDT is expected to be useful for screening stroke patients with cognitive problems. PMID:24409026
Research on the Fusion of Dependent Evidence Based on Rank Correlation Coefficient.
Shi, Fengjian; Su, Xiaoyan; Qian, Hong; Yang, Ning; Han, Wenhua
2017-10-16
In order to meet the higher accuracy and system reliability requirements, the information fusion for multi-sensor systems is an increasing concern. Dempster-Shafer evidence theory (D-S theory) has been investigated for many applications in multi-sensor information fusion due to its flexibility in uncertainty modeling. However, classical evidence theory assumes that the evidence is independent of each other, which is often unrealistic. Ignoring the relationship between the evidence may lead to unreasonable fusion results, and even lead to wrong decisions. This assumption severely prevents D-S evidence theory from practical application and further development. In this paper, an innovative evidence fusion model to deal with dependent evidence based on rank correlation coefficient is proposed. The model first uses rank correlation coefficient to measure the dependence degree between different evidence. Then, total discount coefficient is obtained based on the dependence degree, which also considers the impact of the reliability of evidence. Finally, the discount evidence fusion model is presented. An example is illustrated to show the use and effectiveness of the proposed method.
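A small sketch of the two key ingredients, rank correlation between sources and classical discounting of a basic probability assignment, is given below. The mapping from rank correlation to discount factor is an illustrative assumption, not the paper's total discount coefficient, and the masses are synthetic.

```python
import numpy as np
from scipy.stats import spearmanr

def discount(m, alpha, frame):
    """Shafer discounting: scale all masses by alpha and move 1-alpha to the whole frame."""
    md = {A: alpha * v for A, v in m.items()}
    md[frame] = md.get(frame, 0.0) + (1.0 - alpha)
    return md

frame = frozenset({"a", "b", "c"})
# Two bodies of evidence over singleton hypotheses (masses sum to 1, synthetic values).
m1 = {frozenset({"a"}): 0.6, frozenset({"b"}): 0.3, frozenset({"c"}): 0.1}
m2 = {frozenset({"a"}): 0.5, frozenset({"b"}): 0.4, frozenset({"c"}): 0.1}

keys = sorted(m1, key=sorted)
rho, _ = spearmanr([m1[k] for k in keys], [m2[k] for k in keys])

# Illustrative rule: the more dependent two sources are, the more the second is discounted.
alpha = 1.0 - 0.5 * max(rho, 0.0)
print("rank correlation:", rho, "-> discount factor:", alpha)
print(discount(m2, alpha, frame))
```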
Research on the Fusion of Dependent Evidence Based on Rank Correlation Coefficient
Su, Xiaoyan; Qian, Hong; Yang, Ning; Han, Wenhua
2017-01-01
In order to meet the higher accuracy and system reliability requirements, the information fusion for multi-sensor systems is an increasing concern. Dempster–Shafer evidence theory (D–S theory) has been investigated for many applications in multi-sensor information fusion due to its flexibility in uncertainty modeling. However, classical evidence theory assumes that the evidence is independent of each other, which is often unrealistic. Ignoring the relationship between the evidence may lead to unreasonable fusion results, and even lead to wrong decisions. This assumption severely prevents D–S evidence theory from practical application and further development. In this paper, an innovative evidence fusion model to deal with dependent evidence based on rank correlation coefficient is proposed. The model first uses rank correlation coefficient to measure the dependence degree between different evidence. Then, total discount coefficient is obtained based on the dependence degree, which also considers the impact of the reliability of evidence. Finally, the discount evidence fusion model is presented. An example is illustrated to show the use and effectiveness of the proposed method. PMID:29035341
Synthesizing cognition in neuromorphic electronic systems
Neftci, Emre; Binas, Jonathan; Rutishauser, Ueli; Chicca, Elisabetta; Indiveri, Giacomo; Douglas, Rodney J.
2013-01-01
The quest to implement intelligent processing in electronic neuromorphic systems lacks methods for achieving reliable behavioral dynamics on substrates of inherently imprecise and noisy neurons. Here we report a solution to this problem that involves first mapping an unreliable hardware layer of spiking silicon neurons into an abstract computational layer composed of generic reliable subnetworks of model neurons and then composing the target behavioral dynamics as a “soft state machine” running on these reliable subnets. In the first step, the neural networks of the abstract layer are realized on the hardware substrate by mapping the neuron circuit bias voltages to the model parameters. This mapping is obtained by an automatic method in which the electronic circuit biases are calibrated against the model parameters by a series of population activity measurements. The abstract computational layer is formed by configuring neural networks as generic soft winner-take-all subnetworks that provide reliable processing by virtue of their active gain, signal restoration, and multistability. The necessary states and transitions of the desired high-level behavior are then easily embedded in the computational layer by introducing only sparse connections between some neurons of the various subnets. We demonstrate this synthesis method for a neuromorphic sensory agent that performs real-time context-dependent classification of motion patterns observed by a silicon retina. PMID:23878215
NASA Astrophysics Data System (ADS)
Duru, K.; Dunham, E. M.; Bydlon, S. A.; Radhakrishnan, H.
2014-12-01
Dynamic propagation of shear ruptures on a frictional interface is a useful idealization of a natural earthquake. The conditions relating slip rate and fault shear strength are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated, far away from fault zones, to seismic stations and remote areas. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a numerical method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along rough faults; c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts finite differences in space. The finite difference stencils are 6th order accurate in the interior and 3rd order accurate close to the boundaries. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. Time stepping is performed with a 4th order accurate explicit low storage Runge-Kutta scheme. We have performed extensive numerical experiments using a slip-weakening friction law on non-planar faults, including recent SCEC benchmark problems. We also show simulations on fractal faults revealing the complexity of rupture dynamics on rough faults. We are presently extending our method to rate-and-state friction laws and off-fault plasticity.
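The summation-by-parts (SBP) property that underpins the energy estimates can be demonstrated on the smallest member of the family. The sketch below builds the classical 2nd-order SBP first-derivative operator (the paper's operators are 6th-order interior / 3rd-order boundary and longer to write out) and checks the discrete integration-by-parts identity numerically.

```python
import numpy as np

def sbp_first_derivative(n, h):
    """Classical 2nd-order summation-by-parts first-derivative operator D = H^{-1} Q."""
    H = np.eye(n) * h
    H[0, 0] = H[-1, -1] = 0.5 * h                 # boundary-modified diagonal norm matrix
    Q = np.zeros((n, n))
    for i in range(n - 1):                        # skew-symmetric interior part
        Q[i, i + 1] = 0.5
        Q[i + 1, i] = -0.5
    Q[0, 0], Q[-1, -1] = -0.5, 0.5                # boundary closure
    return np.linalg.inv(H) @ Q, H, Q

n, h = 51, 1.0 / 50
D, H, Q = sbp_first_derivative(n, h)

# SBP property: Q + Q^T = diag(-1, 0, ..., 0, 1), the discrete analogue of integration by parts.
B = Q + Q.T
print(np.allclose(B, np.diag([-1.0] + [0.0]*(n - 2) + [1.0])))   # expected: True

x = np.linspace(0.0, 1.0, n)
print(np.max(np.abs(D @ np.sin(2*np.pi*x) - 2*np.pi*np.cos(2*np.pi*x))))  # truncation error
```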
Generalized quantum kinetic expansion: Higher-order corrections to multichromophoric Förster theory
NASA Astrophysics Data System (ADS)
Wu, Jianlan; Gong, Zhihao; Tang, Zhoufei
2015-08-01
For a general two-cluster energy transfer network, a new methodology of the generalized quantum kinetic expansion (GQKE) method is developed, which predicts an exact time-convolution equation for the cluster population evolution under the initial condition of the local cluster equilibrium state. The cluster-to-cluster rate kernel is expanded over the inter-cluster couplings. The lowest second-order GQKE rate recovers the multichromophoric Förster theory (MCFT) rate. The higher-order corrections to the MCFT rate are systematically included using the continued fraction resummation form, resulting in the resummed GQKE method. The reliability of the GQKE methodology is verified in two model systems, revealing the relevance of higher-order corrections.
Efficient Unsteady Flow Visualization with High-Order Access Dependencies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jiang; Guo, Hanqi; Yuan, Xiaoru
We present a novel high-order access dependencies based model for efficient pathline computation in unsteady flow visualization. By taking longer access sequences into account to model more sophisticated data access patterns in particle tracing, our method greatly improves the accuracy and reliability in data access prediction. In our work, high-order access dependencies are calculated by tracing uniformly-seeded pathlines in both forward and backward directions in a preprocessing stage. The effectiveness of our proposed approach is demonstrated through a parallel particle tracing framework with high-order data prefetching. Results show that our method achieves higher data locality and hence improves the efficiency of pathline computation.
Seeking Temporal Predictability in Speech: Comparing Statistical Approaches on 18 World Languages.
Jadoul, Yannick; Ravignani, Andrea; Thompson, Bill; Filippi, Piera; de Boer, Bart
2016-01-01
Temporal regularities in speech, such as interdependencies in the timing of speech events, are thought to scaffold early acquisition of the building blocks in speech. By providing on-line clues to the location and duration of upcoming syllables, temporal structure may aid segmentation and clustering of continuous speech into separable units. This hypothesis tacitly assumes that learners exploit predictability in the temporal structure of speech. Existing measures of speech timing tend to focus on first-order regularities among adjacent units, and are overly sensitive to idiosyncrasies in the data they describe. Here, we compare several statistical methods on a sample of 18 languages, testing whether syllable occurrence is predictable over time. Rather than looking for differences between languages, we aim to find across languages (using clearly defined acoustic, rather than orthographic, measures), temporal predictability in the speech signal which could be exploited by a language learner. First, we analyse distributional regularities using two novel techniques: a Bayesian ideal learner analysis, and a simple distributional measure. Second, we model higher-order temporal structure-regularities arising in an ordered series of syllable timings-testing the hypothesis that non-adjacent temporal structures may explain the gap between subjectively-perceived temporal regularities, and the absence of universally-accepted lower-order objective measures. Together, our analyses provide limited evidence for predictability at different time scales, though higher-order predictability is difficult to reliably infer. We conclude that temporal predictability in speech may well arise from a combination of individually weak perceptual cues at multiple structural levels, but is challenging to pinpoint.
Seeking Temporal Predictability in Speech: Comparing Statistical Approaches on 18 World Languages
Jadoul, Yannick; Ravignani, Andrea; Thompson, Bill; Filippi, Piera; de Boer, Bart
2016-01-01
Temporal regularities in speech, such as interdependencies in the timing of speech events, are thought to scaffold early acquisition of the building blocks in speech. By providing on-line clues to the location and duration of upcoming syllables, temporal structure may aid segmentation and clustering of continuous speech into separable units. This hypothesis tacitly assumes that learners exploit predictability in the temporal structure of speech. Existing measures of speech timing tend to focus on first-order regularities among adjacent units, and are overly sensitive to idiosyncrasies in the data they describe. Here, we compare several statistical methods on a sample of 18 languages, testing whether syllable occurrence is predictable over time. Rather than looking for differences between languages, we aim to find across languages (using clearly defined acoustic, rather than orthographic, measures), temporal predictability in the speech signal which could be exploited by a language learner. First, we analyse distributional regularities using two novel techniques: a Bayesian ideal learner analysis, and a simple distributional measure. Second, we model higher-order temporal structure—regularities arising in an ordered series of syllable timings—testing the hypothesis that non-adjacent temporal structures may explain the gap between subjectively-perceived temporal regularities, and the absence of universally-accepted lower-order objective measures. Together, our analyses provide limited evidence for predictability at different time scales, though higher-order predictability is difficult to reliably infer. We conclude that temporal predictability in speech may well arise from a combination of individually weak perceptual cues at multiple structural levels, but is challenging to pinpoint. PMID:27994544
Reliability of the Inverse Water Volumetry Method to Measure the Volume of the Upper Limb.
Beek, Martinus A; te Slaa, Alexander; van der Laan, Lijckle; Mulder, Paul G H; Rutten, Harm J T; Voogd, Adri C; Luiten, Ernest J T; Gobardhan, Paul D
2015-06-01
Lymphedema of the upper extremity is a common side effect of lymph node dissection or irradiation of the axilla. Several techniques are being applied in order to examine the presence and severity of lymphedema. Measurement of circumference of the upper extremity is most frequently performed. An alternative is the water-displacement method. The aim of this study was to determine the reliability and the reproducibility of the "Inverse Water Volumetry apparatus" (IWV-apparatus) for the measurement of arm volumes. The IWV-apparatus is based on the water-displacement method. Measurements were performed by three breast cancer nurse practitioners on ten healthy volunteers in three weekly sessions. The intra-class correlation coefficient, defined as the ratio of the subject component to the total variance, equaled 0.99. The reliability index is calculated as 0.14 kg. This indicates that only changes in a patient's arm volume measurement of more than 0.14 kg would represent a true change in arm volume, which is about 6% of the mean arm volume of 2.3 kg. The IWV-apparatus proved to be a reliable and reproducible method to measure arm volume.
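The intraclass correlation coefficient used above, the ratio of the between-subject variance component to the total variance, can be computed with a short one-way ANOVA sketch. The synthetic measurements below (ten volunteers, nine repeated readings each) are placeholders, not the study data.

```python
import numpy as np

def icc_oneway(data):
    """One-way random-effects ICC(1,1): between-subject variance over total variance.
    `data` is an (n_subjects, k_measurements) array."""
    n, k = data.shape
    grand = data.mean()
    msb = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)                   # between-subject MS
    msw = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))   # within-subject MS
    return (msb - msw) / (msb + (k - 1) * msw)

rng = np.random.default_rng(5)
true_volume = rng.normal(2.3, 0.25, size=10)                       # ten volunteers, ~2.3 kg arm volume
measurements = true_volume[:, None] + rng.normal(0.0, 0.03, size=(10, 9))  # 3 raters x 3 sessions
print("ICC(1,1) =", round(icc_oneway(measurements), 3))
```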
NASA Technical Reports Server (NTRS)
Mcclain, W. D.
1977-01-01
A recursively formulated, first-order, semianalytic artificial satellite theory, based on the generalized method of averaging is presented in two volumes. Volume I comprehensively discusses the theory of the generalized method of averaging applied to the artificial satellite problem. Volume II presents the explicit development in the nonsingular equinoctial elements of the first-order average equations of motion. The recursive algorithms used to evaluate the first-order averaged equations of motion are also presented in Volume II. This semianalytic theory is, in principle, valid for a term of arbitrary degree in the expansion of the third-body disturbing function (nonresonant cases only) and for a term of arbitrary degree and order in the expansion of the nonspherical gravitational potential function.
A Capacity Forecast Model for Volatile Data in Maintenance Logistics
NASA Astrophysics Data System (ADS)
Berkholz, Daniel
2009-05-01
Maintenance, repair and overhaul processes (MRO processes) are elaborate and complex. Rising demands on these after sales services require reliable production planning and control methods particularly for maintaining valuable capital goods. Downtimes lead to high costs and an inability to meet delivery due dates results in severe contract penalties. Predicting the required capacities for maintenance orders in advance is often difficult due to unknown part conditions unless the goods are actually inspected. This planning uncertainty results in extensive capital tie-up by rising stock levels within the whole MRO network. The article outlines an approach to planning capacities when maintenance data forecasting is volatile. It focuses on the development of prerequisites for a reliable capacity planning model. This enables a quick response to maintenance orders by employing appropriate measures. The information gained through the model is then systematically applied to forecast both personnel capacities and the demand for spare parts. The improved planning reliability can support MRO service providers in shortening delivery times and reducing stock levels in order to enhance the performance of their maintenance logistics.
In situ hydrogen consumption kinetics as an indicator of subsurface microbial activity
Harris, S.H.; Smith, R.L.; Suflita, J.M.
2007-01-01
There are few methods available for broadly assessing microbial community metabolism directly within a groundwater environment. In this study, hydrogen consumption rates were estimated from in situ injection/withdrawal tests conducted in two geochemically varying, contaminated aquifers as an approach towards developing such a method. The hydrogen consumption first-order rates varied from 0.002 nM h-1 for an uncontaminated, aerobic site to 2.5 nM h-1 for a contaminated site where sulfate reduction was a predominant process. The method could accommodate the over three orders of magnitude range in rates that existed between subsurface sites. In a denitrifying zone, the hydrogen consumption rate (0.02 nM h-1) was immediately abolished in the presence of air or an antibiotic mixture, suggesting that such measurements may also be sensitive to the effects of environmental perturbations on field microbial activities. Comparable laboratory determinations with sediment slurries exhibited hydrogen consumption kinetics that differed substantially from the field estimates. Because anaerobic degradation of organic matter relies on the rapid consumption of hydrogen and subsequent maintenance at low levels, such in situ measures of hydrogen turnover can serve as a key indicator of the functioning of microbial food webs and may be more reliable than laboratory determinations. © 2007 Federation of European Microbiological Societies.
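Estimating a first-order consumption rate constant from an injection/withdrawal time series amounts to a log-linear regression on the concentration decline. The synthetic concentrations and rate below are placeholders, not the field data.

```python
import numpy as np

def first_order_rate(t_hours, conc_nM):
    """Estimate k in C(t) = C0 * exp(-k t) by linear regression on log concentrations."""
    slope, intercept = np.polyfit(t_hours, np.log(conc_nM), 1)
    return -slope, np.exp(intercept)          # k (per hour) and fitted initial concentration

# Synthetic test: 10 nM injected hydrogen consumed at k = 0.25 h^-1, with measurement noise.
rng = np.random.default_rng(11)
t = np.linspace(0.0, 12.0, 13)
conc = 10.0 * np.exp(-0.25 * t) * np.exp(rng.normal(0.0, 0.05, t.size))
k, c0 = first_order_rate(t, conc)
print(f"k = {k:.3f} per hour, C0 = {c0:.2f} nM")
# A rough initial consumption rate comparable to the reported nM h-1 figures is k * C0.
print(f"initial consumption rate ~ {k * c0:.2f} nM per hour")
```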
Enhancement of observability and protection of smart power system
NASA Astrophysics Data System (ADS)
Siddique, Abdul Hasib
It is important for a modern power grid to be smarter in order to provide a reliable and sustainable supply of electricity. The traditional way of receiving data over a wired system is old and outdated technology, and for a quicker and better response from the electric system it is important to consider wireless systems as a feasible option. In order to enhance observability and protection, it is important to integrate wireless technology with the modern power system. In this thesis, a wireless-network-based architecture for wide-area monitoring and an alternative method for performing current measurements for the protection of generators and motors have been adopted. The project has two parts: the first deals with wide-area monitoring of the power system, and the second focuses on applications of wireless technology from the protection point of view. A number of wireless methods have been adopted in both parts, including ZigBee, analog transmission (both AM and FM) and digital transmission. The main aim of the project was to propose a cost-effective wide-area monitoring and protection method that enhances the observability and stability of the power grid. A new concept of wireless integration in the power protection system has been implemented in this thesis work.
FY12 End of Year Report for NEPP DDR2 Reliability
NASA Technical Reports Server (NTRS)
Guertin, Steven M.
2013-01-01
This document reports the status of the NASA Electronic Parts and Packaging (NEPP) Double Data Rate 2 (DDR2) Reliability effort for FY2012. The task expanded the focus of evaluating reliability effects targeted for device examination. FY11 work highlighted the need to test many more parts and to examine more operating conditions, in order to provide useful recommendations for NASA users of these devices. This year's efforts focused on development of test capabilities, particularly focusing on those that can be used to determine overall lot quality and identify outlier devices, and test methods that can be employed on components for flight use. Flight acceptance of components potentially includes considerable time for up-screening (though this time may not currently be used for much reliability testing). Manufacturers are much more knowledgeable about the relevant reliability mechanisms for each of their devices. We are not in a position to know what the appropriate reliability tests are for any given device, so although reliability testing could be focused for a given device, we are forced to perform a large campaign of reliability tests to identify devices with degraded reliability. With the available up-screening time for NASA parts, it is possible to run many device performance studies. This includes verification of basic datasheet characteristics. Furthermore, it is possible to perform significant pattern sensitivity studies. By doing these studies we can establish higher reliability of flight components. In order to develop these approaches, it is necessary to develop test capability that can identify reliability outliers. To do this we must test many devices to ensure outliers are in the sample, and we must develop characterization capability to measure many different parameters. For FY12 we increased capability for reliability characterization and sample size. We increased sample size this year by moving from loose devices to dual inline memory modules (DIMMs) with an approximate reduction of 20 to 50 times in terms of per device under test (DUT) cost. By increasing sample size we have improved our ability to characterize devices that may be considered reliability outliers. This report provides an update on the effort to improve DDR2 testing capability. Although focused on DDR2, the methods being used can be extended to DDR and DDR3 with relative ease.
Eliasson, Kristina; Palm, Peter; Nyman, Teresia; Forsman, Mikael
2017-07-01
A common way to conduct practical risk assessments is to observe a job and report the observed long-term risks for musculoskeletal disorders. The aim of this study was to evaluate the inter- and intra-observer reliability of ergonomists' risk assessments without the support of an explicit risk assessment method. Twenty-one experienced ergonomists assessed the risk level (low, moderate, high risk) of eight upper body regions, as well as the global risk, of 10 video-recorded work tasks. Intra-observer reliability was assessed by having nine of the ergonomists repeat the procedure at least three weeks after the first assessment. Each ergonomist made the risk assessment based on his/her experience and knowledge. The statistical parameters of reliability included agreement in %, kappa, linearly weighted kappa, intraclass correlation and Kendall's coefficient of concordance. The average inter-observer agreement on the global risk was 53% and the corresponding weighted kappa (Kw) was 0.32, indicating fair reliability. The intra-observer agreement was 61% and 0.41 (Kw). This study indicates that risk assessments of the upper body, without the use of an explicit observational method, have non-acceptable reliability. It is therefore recommended to use systematic risk assessment methods to a higher degree. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
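The agreement statistics named above (raw agreement, kappa, linearly weighted kappa) can be reproduced with scikit-learn. The two observers' ordinal ratings below are synthetic placeholders.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Synthetic ordinal risk ratings (0 = low, 1 = moderate, 2 = high) from two observers.
rater_a = np.array([0, 1, 2, 2, 1, 0, 1, 2, 1, 0, 2, 1])
rater_b = np.array([0, 1, 1, 2, 1, 0, 2, 2, 1, 1, 2, 1])

agreement = np.mean(rater_a == rater_b)                          # raw agreement
kappa = cohen_kappa_score(rater_a, rater_b)                      # unweighted kappa
kappa_w = cohen_kappa_score(rater_a, rater_b, weights="linear")  # linearly weighted kappa
print(f"agreement = {agreement:.0%}, kappa = {kappa:.2f}, weighted kappa = {kappa_w:.2f}")
```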
Acar, Nihat; Karakasli, Ahmet; Karaarslan, Ahmet; Mas, Nermin Ng; Hapa, Onur
2017-01-01
Volumetric measurements of benign tumors enable surgeons to trace volume changes during follow-up periods. For a volumetric measurement technique to be applicable, it should be easy, rapid, and inexpensive and should carry high interobserver reliability. We aimed to assess the interobserver reliability of a volumetric measurement technique using the Cavalieri principle of stereological methods. Computed tomography (CT) scans of 15 patients with a histopathologically confirmed diagnosis of enchondroma, with varying tumor sizes and localizations, were retrospectively reviewed for interobserver reliability evaluation of the volumetric stereological measurement with the Cavalieri principle, V = t × [(SU × d)/SL]² × ΣP. The volumes of the 15 tumors collected by the observers are given in Table 1. There was no statistically significant difference between the first and second observers (p = 0.000; intraclass correlation coefficient = 0.970) or between the first and third observers (p = 0.000; intraclass correlation coefficient = 0.981). No statistically significant difference was detected between the second and third observers (p = 0.000; intraclass correlation coefficient = 0.976). The Cavalieri principle with the stereological technique using CT scans is an easy, rapid, and inexpensive technique for the volumetric evaluation of enchondromas, with trustworthy interobserver reliability.
A 3D Model for Eddy Current Inspection in Aeronautics: Application to Riveted Structures
NASA Astrophysics Data System (ADS)
Paillard, S.; Pichenot, G.; Lambert, M.; Voillaume, H.; Dominguez, N.
2007-03-01
The eddy current technique is currently an operational tool used for fastener inspection, which is an important issue for the maintenance of aircraft structures. Industry calls for faster, more sensitive and more reliable NDT techniques for the detection and characterization of potential flaws near rivets. In order to reduce development time and to optimize the design and performance assessment of an inspection procedure, the CEA and EADS have started collaborative work aimed at extending the modeling features of the CIVA nondestructive testing simulation platform to handle the configuration of a layered planar structure with a rivet and an embedded flaw nearby. To this end, an approach based on the Volume Integral Method using the Green dyadic formalism, which greatly increases computational efficiency, has been developed. The first step, modeling the rivet without a flaw as a hole in a multilayered structure, has been completed and validated against experimental data in several configurations.
Dos Santos Ribeiro, I C N; Lima Neto, F P; Santos, C A F
2012-12-19
Allelic patterns and genetic distances were examined in a collection of 103 foreign and Brazilian mango (Mangifera indica) accessions in order to develop a reference database to support cultivar protection and breeding programs. An UPGMA dendrogram was generated using Jaccard's coefficients from a distance matrix based on 50 alleles of 12 microsatellite loci. The base pair number was estimated by the method of inverse mobility. The cophenetic correlation was 0.8. The accessions had a coefficient of similarity from 30 to 100%, which reflects high genetic variability. Three groups were observed in the UPGMA dendrogram; the first group was formed predominantly by foreign accessions, the second group was formed by Brazilian accessions, and the Dashehari accession was isolated from the others. The 50 microsatellite alleles did not separate all 103 accessions, indicating that there are duplicates in this mango collection. These 12 microsatellites need to be validated in order to establish a reliable set to identify mango cultivars.
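The Jaccard-distance/UPGMA workflow described above, including the cophenetic correlation check, maps directly onto SciPy primitives. The binary allele matrix below is a random placeholder for the 103 accessions x 50 alleles data.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, cophenet, fcluster

rng = np.random.default_rng(2012)
# Placeholder binary matrix: 103 accessions x 50 microsatellite alleles (1 = allele present).
alleles = rng.integers(0, 2, size=(103, 50)).astype(bool)

dist = pdist(alleles, metric="jaccard")             # 1 - Jaccard similarity for each pair
tree = linkage(dist, method="average")              # UPGMA = average linkage
coph_corr, _ = cophenet(tree, dist)                 # cophenetic correlation coefficient
groups = fcluster(tree, t=3, criterion="maxclust")  # cut the dendrogram into 3 groups
print(f"cophenetic correlation = {coph_corr:.2f}, group sizes = {np.bincount(groups)[1:]}")
```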
Domain Decomposition Algorithms for First-Order System Least Squares Methods
NASA Technical Reports Server (NTRS)
Pavarino, Luca F.
1996-01-01
Least squares methods based on first-order systems have been recently proposed and analyzed for second-order elliptic equations and systems. They produce symmetric and positive definite discrete systems by using standard finite element spaces, which are not required to satisfy the inf-sup condition. In this paper, several domain decomposition algorithms for these first-order least squares methods are studied. Some representative overlapping and substructuring algorithms are considered in their additive and multiplicative variants. The theoretical and numerical results obtained show that the classical convergence bounds (on the iteration operator) for standard Galerkin discretizations are also valid for least squares methods.
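As an illustration of the first-order reformulation referred to above, a standard least-squares functional for the Poisson problem can be written as below. This is a common textbook form with notation of my own choosing, not necessarily the exact functional analyzed in the paper.

```latex
% First-order reformulation of -\Delta u = f by introducing the flux sigma = -grad u.
\[
\boldsymbol{\sigma} + \nabla u = 0, \qquad
\nabla \cdot \boldsymbol{\sigma} = f \quad \text{in } \Omega, \qquad
u = 0 \ \text{on } \partial\Omega,
\]
% Least-squares functional whose minimization over standard finite element spaces
% yields a symmetric positive definite system without requiring the inf-sup condition.
\[
\mathcal{F}(\boldsymbol{\sigma}, u) \;=\;
\|\boldsymbol{\sigma} + \nabla u\|_{0,\Omega}^{2}
\;+\; \|\nabla \cdot \boldsymbol{\sigma} - f\|_{0,\Omega}^{2}
\;\longrightarrow\; \min .
\]
```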
Measurement and Reliability of Response Inhibition
Congdon, Eliza; Mumford, Jeanette A.; Cohen, Jessica R.; Galvan, Adriana; Canli, Turhan; Poldrack, Russell A.
2012-01-01
Response inhibition plays a critical role in adaptive functioning and can be assessed with the Stop-signal task, which requires participants to suppress prepotent motor responses. Evidence suggests that this ability to inhibit a prepotent motor response (reflected as Stop-signal reaction time (SSRT)) is a quantitative and heritable measure of interindividual variation in brain function. Although attention has been given to the optimal method of SSRT estimation, and initial evidence exists in support of its reliability, there is still variability in how Stop-signal task data are treated across samples. In order to examine this issue, we pooled data across three separate studies and examined the influence of multiple SSRT calculation methods and outlier calling on reliability (using Intra-class correlation). Our results suggest that an approach which uses the average of all available sessions, all trials of each session, and excludes outliers based on predetermined lenient criteria yields reliable SSRT estimates, while not excluding too many participants. Our findings further support the reliability of SSRT, which is commonly used as an index of inhibitory control, and provide support for its continued use as a neurocognitive phenotype. PMID:22363308
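A sketch of the recommended procedure, the integration method for SSRT averaged over all available sessions with lenient outlier exclusion, is given below. The bounds, synthetic reaction times and stop-signal delays are illustrative assumptions, not the study's exact criteria or data.

```python
import numpy as np

def ssrt_integration(go_rt, ssd, p_respond):
    """Integration method: SSRT = quantile of the go RT distribution at the probability
    of responding on stop trials, minus the mean stop-signal delay (SSD)."""
    nth_rt = np.quantile(np.asarray(go_rt, float), p_respond)
    return nth_rt - np.mean(ssd)

def ssrt_across_sessions(sessions, min_ssrt=50.0, max_ssrt=500.0):
    """Average SSRT over all available sessions, excluding outliers by lenient
    predetermined bounds (values in ms are assumptions)."""
    estimates = [ssrt_integration(*s) for s in sessions]
    kept = [e for e in estimates if min_ssrt <= e <= max_ssrt]
    return np.mean(kept) if kept else np.nan

rng = np.random.default_rng(9)
sessions = []
for _ in range(3):                                    # three sessions for one participant
    go_rt = rng.normal(450, 80, 120)                  # go reaction times (ms)
    ssd = rng.choice([100, 150, 200, 250, 300], 40)   # staircased stop-signal delays (ms)
    sessions.append((go_rt, ssd, 0.5))                # ~50% responding on stop trials
print("SSRT estimate (ms):", round(ssrt_across_sessions(sessions), 1))
```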
Finn, Natalie K; Torres, Elisa M; Ehrhart, Mark G; Roesch, Scott C; Aarons, Gregory A
2016-08-01
The Implementation Leadership Scale (ILS) is a brief, pragmatic, and efficient measure that can be used for research or organizational development to assess leader behaviors and actions that actively support effective implementation of evidence-based practices (EBPs). The ILS was originally validated with mental health clinicians. This study validates the ILS factor structure with providers in community-based organizations (CBOs) providing child welfare services. Participants were 214 service providers working in 12 CBOs that provide child welfare services. All participants completed the ILS, reporting on their immediate supervisor. Confirmatory factor analyses were conducted to examine the factor structure of the ILS. Internal consistency reliability and measurement invariance were also examined. Confirmatory factor analyses showed acceptable fit to the hypothesized first- and second-order factor structure. Internal consistency reliability was strong and there was partial measurement invariance for the first-order factor structure when comparing child welfare and mental health samples. The results support the use of the ILS to assess leadership for implementation of EBPs in child welfare organizations. © The Author(s) 2016.
Wu, Zhao; Xiong, Naixue; Huang, Yannong; Xu, Degang; Hu, Chunyang
2015-01-01
The services composition technology provides flexible methods for building service composition applications (SCAs) in wireless sensor networks (WSNs). The high reliability and high performance of SCAs help services composition technology promote the practical application of WSNs. The optimization methods for reliability and performance used for traditional software systems are mostly based on the instantiations of software components, which are inapplicable and inefficient in the ever-changing SCAs in WSNs. In this paper, we consider the SCAs with fault tolerance in WSNs. Based on a Universal Generating Function (UGF) we propose a reliability and performance model of SCAs in WSNs, which generalizes a redundancy optimization problem to a multi-state system. Based on this model, an efficient optimization algorithm for reliability and performance of SCAs in WSNs is developed based on a Genetic Algorithm (GA) to find the optimal structure of SCAs with fault-tolerance in WSNs. In order to examine the feasibility of our algorithm, we have evaluated the performance. Furthermore, the interrelationships between the reliability, performance and cost are investigated. In addition, a distinct approach to determine the most suitable parameters in the suggested algorithm is proposed. PMID:26561818
How reliable is the linear noise approximation of gene regulatory networks?
2013-01-01
Background The linear noise approximation (LNA) is commonly used to predict how noise is regulated and exploited at the cellular level. These predictions are exact for reaction networks composed exclusively of first order reactions or for networks involving bimolecular reactions and large numbers of molecules. It is however well known that gene regulation involves bimolecular interactions with molecule numbers as small as a single copy of a particular gene. It is therefore questionable how reliable are the LNA predictions for these systems. Results We implement in the software package intrinsic Noise Analyzer (iNA), a system size expansion based method which calculates the mean concentrations and the variances of the fluctuations to an order of accuracy higher than the LNA. We then use iNA to explore the parametric dependence of the Fano factors and of the coefficients of variation of the mRNA and protein fluctuations in models of genetic networks involving nonlinear protein degradation, post-transcriptional, post-translational and negative feedback regulation. We find that the LNA can significantly underestimate the amplitude and period of noise-induced oscillations in genetic oscillators. We also identify cases where the LNA predicts that noise levels can be optimized by tuning a bimolecular rate constant whereas our method shows that no such regulation is possible. All our results are confirmed by stochastic simulations. Conclusion The software iNA allows the investigation of parameter regimes where the LNA fares well and where it does not. We have shown that the parametric dependence of the coefficients of variation and Fano factors for common gene regulatory networks is better described by including terms of higher order than LNA in the system size expansion. This analysis is considerably faster than stochastic simulations due to the extensive ensemble averaging needed to obtain statistically meaningful results. Hence iNA is well suited for performing computationally efficient and quantitative studies of intrinsic noise in gene regulatory networks. PMID:24266939
Quantitative Susceptibility Mapping of Human Brain Reflects Spatial Variation in Tissue Composition
Li, Wei; Wu, Bing; Liu, Chunlei
2011-01-01
Image phase from gradient echo MRI provides a unique contrast that reflects brain tissue composition variations, such as iron and myelin distribution. Phase imaging is emerging as a powerful tool for the investigation of functional brain anatomy and disease diagnosis. However, the quantitative value of phase is compromised by its nonlocal and orientation dependent properties. There is an increasing need for reliable quantification of magnetic susceptibility, the intrinsic property of tissue. In this study, we developed a novel and accurate susceptibility mapping method that is also phase-wrap insensitive. The proposed susceptibility mapping method utilized two complementary equations: (1) the Fourier relationship of phase and magnetic susceptibility; and (2) the first-order partial derivative of the first equation in the spatial frequency domain. In numerical simulation, this method reconstructed the susceptibility map almost free of streaking artifact. Further, the iterative implementation of this method allowed for high quality reconstruction of susceptibility maps of human brain in vivo. The reconstructed susceptibility map provided excellent contrast of iron-rich deep nuclei and white matter bundles from surrounding tissues. Further, it also revealed anisotropic magnetic susceptibility in brain white matter. Hence, the proposed susceptibility mapping method may provide a powerful tool for the study of brain physiology and pathophysiology. Further elucidation of anisotropic magnetic susceptibility in vivo may allow us to gain more insight into the white matter microarchitectures. PMID:21224002
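The "Fourier relationship of phase and magnetic susceptibility" invoked above is commonly written in terms of the unit dipole kernel. The following is a standard textbook form with notation and sign convention of my own choosing, not necessarily the exact expression used by the authors.

```latex
% Fourier-domain relation between gradient-echo phase and magnetic susceptibility
% (dipole-kernel form; notation and sign convention are assumptions).
\[
\Phi(\mathbf{k}) \;=\; \gamma\, B_{0}\, T_{E}
\left( \tfrac{1}{3} - \frac{k_{z}^{2}}{|\mathbf{k}|^{2}} \right) X(\mathbf{k}),
\qquad \mathbf{k} \neq 0,
\]
% where Phi and X denote the Fourier transforms of the tissue phase and susceptibility
% distributions. The kernel vanishes on the cone k_z^2 = |k|^2 / 3, which is why a
% second, complementary equation (its first-order k-space derivative) helps stabilize
% the inversion and suppress streaking artifacts.
```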
Testing Standard Reliability Criteria
ERIC Educational Resources Information Center
Sherry, David
2017-01-01
Maul's paper, "Rethinking Traditional Methods of Survey Validation" (Andrew Maul), contains two stages. First he presents empirical results that cast doubt on traditional methods for validating psychological measurement instruments. These results motivate the second stage, a critique of current conceptions of psychological measurement…
NASA Astrophysics Data System (ADS)
Tacnet, Jean-Marc; Dupouy, Guillaume; Carladous, Simon; Dezert, Jean; Batton-Hubert, Mireille
2017-04-01
In mountain areas, natural phenomena such as snow avalanches, debris-flows and rock-falls put people and objects at risk with sometimes dramatic consequences. Risk is classically considered a combination of hazard (the combination of the intensity and frequency of the phenomenon) and vulnerability, which corresponds to the consequences of the phenomenon on exposed people and material assets. Risk management consists in identifying the risk level as well as choosing the best strategies for risk prevention, i.e. mitigation. In the context of natural phenomena in mountainous areas, technical and scientific knowledge is often lacking. Risk management decisions are therefore based on imperfect information. This information comes from more or less reliable sources ranging from historical data and expert assessments to numerical simulations. Finally, risk management decisions are the result of complex knowledge management and reasoning processes. Tracing the information and propagating information quality from data acquisition to decisions are therefore important steps in the decision-making process. One major goal today is therefore to assist decision-making while considering the availability, quality and reliability of information content and sources. A global integrated framework is proposed to improve the risk management process in a context of information imperfection provided by more or less reliable sources: uncertainty as well as imprecision, inconsistency and incompleteness are considered. Several methods are used and associated in an original way: sequential decision context description, development of specific multi-criteria decision-making methods, imperfection propagation in numerical modeling and information fusion. This framework not only assists in decision-making but also traces the process and evaluates the impact of information quality on decision-making. We focus on and present two main developments. The first relates to uncertainty and imprecision propagation in numerical modeling, using both the classical Monte-Carlo probabilistic approach and a so-called hybrid approach based on possibility theory. The second deals with new multi-criteria decision-making methods which consider information imperfection, source reliability, importance and conflict, using fuzzy sets as well as possibility and belief function theories. Implemented methods consider information imperfection propagation and information fusion in total aggregation methods such as AHP (Saaty, 1980) or partial aggregation methods such as the Electre outranking method (see Soft Electre Tri) or decisions in certain but also risky or uncertain contexts (see the new COWA-ER and FOWA-ER: Cautious and Fuzzy Ordered Weighted Averaging-Evidential Reasoning). For example, the ER-MCDA methodology considers expert assessment as a multi-criteria decision process based on imperfect information provided by more or less heterogeneous, reliable and conflicting sources: it mixes AHP, fuzzy sets theory, possibility theory and belief function theory using the DSmT (Dezert-Smarandache Theory) framework, which provides powerful fusion rules.
Estimates Of The Orbiter RSI Thermal Protection System Thermal Reliability
NASA Technical Reports Server (NTRS)
Kolodziej, P.; Rasky, D. J.
2002-01-01
In support of the Space Shuttle Orbiter post-flight inspection, structure temperatures are recorded at selected positions on the windward, leeward, starboard and port surfaces. Statistical analysis of this flight data and a non-dimensional load interference (NDLI) method are used to estimate the thermal reliability at positions where reusable surface insulation (RSI) is installed. In this analysis, structure temperatures that exceed the design limit define the critical failure mode. At thirty-three positions the RSI thermal reliability is greater than 0.999999 for the missions studied. This is not the overall system level reliability of the thermal protection system installed on an Orbiter. The results from two Orbiters, OV-102 and OV-105, are in good agreement. The original RSI designs on the OV-102 Orbital Maneuvering System pods, which had low reliability, were significantly improved on OV-105. The NDLI method was also used to estimate thermal reliability from an assessment of TPS uncertainties that was completed shortly before the first Orbiter flight. Results from the flight data analysis and the pre-flight assessment agree at several positions near each other. The NDLI method is also effective for optimizing RSI designs to provide uniform thermal reliability on the acreage surface of reusable launch vehicles.
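The abstract does not spell out the NDLI formulation. As a hedged illustration of the general load-interference idea, for a normally distributed load L (experienced structure temperature) and strength S (design temperature limit), the classical first-order result is

\[
R = P(S > L) = \Phi\!\left(\frac{\mu_S - \mu_L}{\sqrt{\sigma_S^{2} + \sigma_L^{2}}}\right),
\]

where \(\Phi\) is the standard normal cumulative distribution function; a structure temperature exceeding the design limit corresponds to \(S < L\), i.e. the critical failure mode defined above. This is the textbook interference expression, not necessarily the exact non-dimensional form used in the report.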
Emery, John M.; Field, Richard V.; Foulk, James W.; ...
2015-05-26
Laser welds are prevalent in complex engineering systems and they frequently govern failure. The weld process often results in partial penetration of the base metals, leaving sharp crack-like features with a high degree of variability in the geometry and material properties of the welded structure. Furthermore, accurate finite element predictions of the structural reliability of components containing laser welds require the analysis of a large number of finite element meshes with very fine spatial resolution, where each mesh has different geometry and/or material properties in the welded region to address variability. We found that traditional modeling approaches could not be efficiently employed. Consequently, a method for constructing a surrogate model, based on stochastic reduced-order models, is proposed to represent the laser welds within the component. Here, the uncertainty in weld microstructure and geometry is captured by calibrating plasticity parameters to experimental observations of necking because, owing to the ductility of the welds, necking, and thus peak load, plays the pivotal role in structural failure. The proposed method is exercised for a simplified verification problem and compared with traditional Monte Carlo simulation with rather remarkable results.
A comparison of matrix methods for calculating eigenvalues in acoustically lined ducts
NASA Technical Reports Server (NTRS)
Watson, W.; Lansing, D. L.
1976-01-01
Three approximate methods - finite differences, weighted residuals, and finite elements - were used to solve the eigenvalue problem which arises in finding the acoustic modes and propagation constants in an absorptively lined two-dimensional duct without airflow. The matrix equations derived for each of these methods were solved for the eigenvalues corresponding to various values of wall impedance. Two matrix orders, 20 x 20 and 40 x 40, were used. The cases considered included values of wall admittance for which exact eigenvalues were known and for which several nearly equal roots were present. Ten of the lower order eigenvalues obtained from the three approximate methods were compared with solutions calculated from the exact characteristic equation in order to make an assessment of the relative accuracy and reliability of the three methods. The best results were given by the finite element method using a cubic polynomial. Excellent accuracy was consistently obtained, even for nearly equal eigenvalues, by using a 20 x 20 order matrix.
No meditation-related changes in the auditory N1 during first-time meditation.
Barnes, L J; McArthur, G M; Biedermann, B A; de Lissa, P; Polito, V; Badcock, N A
2018-05-01
Recent studies link meditation expertise with enhanced low-level attention, measured through auditory event-related potentials (ERPs). In this study, we tested the reliability and validity of a recent finding that the N1 ERP in first-time meditators is smaller during meditation than non-meditation - an effect not present in long-term meditators. In the first experiment, we replicated the finding in first-time meditators. In two subsequent experiments, we discovered that this finding was not due to stimulus-related instructions, but was explained by an effect of the order of conditions. Extended exposure to the same tones has been linked with N1 decrement in other studies, and may explain N1 decrement across our two conditions. We give examples of existing meditation and ERP studies that may include similar condition order effects. The role of condition order among first-time meditators in this study indicates the importance of counterbalancing meditation and non-mediation conditions in meditation studies that use event-related potentials. Copyright © 2018 Elsevier B.V. All rights reserved.
Model of load balancing using reliable algorithm with multi-agent system
NASA Astrophysics Data System (ADS)
Afriansyah, M. F.; Somantri, M.; Riyadi, M. A.
2017-04-01
Massive technology development parallels the growth in internet users, which increases network traffic and the load on the system. Using a reliable algorithm together with mobile agents for distributed load balancing is a viable solution to handle the load issue in a large-scale system. A mobile agent collects resource information and can migrate according to its given task. We propose a reliable load balancing algorithm using least time first byte (LFB) combined with information from the mobile agent. The methodology consisted of defining the system identification, specification requirements, network topology and system infrastructure design. The simulation sent 1800 requests over 10 s from users to the servers and collected the data for analysis. The software simulation was based on Apache JMeter, observing the response time and reliability of each server and comparing them with the existing method. Results of the simulation show that the LFB method with mobile agents distributes load efficiently to all backend servers without bottlenecks, with low risk of server overload, and reliably.
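A minimal sketch of the selection rule suggested by the abstract, assuming "least time first byte" means routing each request to the backend with the smallest recently probed time-to-first-byte; the class and method names are hypothetical and this is not the authors' implementation.

```python
import random

class LFBBalancer:
    """Least-time-first-byte backend selection (illustrative sketch).

    ttfb holds the latest time-to-first-byte (seconds) reported for each
    backend, e.g. by probing/mobile agents; requests go to the fastest one.
    """
    def __init__(self, backends):
        self.ttfb = {b: float("inf") for b in backends}

    def report(self, backend, seconds):
        # called by a monitoring/mobile agent after probing a backend
        self.ttfb[backend] = seconds

    def pick(self):
        best = min(self.ttfb.values())
        candidates = [b for b, t in self.ttfb.items() if t == best]
        return random.choice(candidates)   # break ties randomly

lb = LFBBalancer(["srv1", "srv2", "srv3"])
lb.report("srv1", 0.12); lb.report("srv2", 0.05); lb.report("srv3", 0.05)
print(lb.pick())   # srv2 or srv3
```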
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fukuda, Ryoichi, E-mail: fukuda@ims.ac.jp; Ehara, Masahiro; Elements Strategy Initiative for Catalysts and Batteries
A perturbative approximation of the state specific polarizable continuum model (PCM) symmetry-adapted cluster-configuration interaction (SAC-CI) method is proposed for efficient calculations of the electronic excitations and absorption spectra of molecules in solutions. This first-order PCM SAC-CI method considers the solvent effects on the energies of excited states up to the first-order with using the zeroth-order wavefunctions. This method can avoid the costly iterative procedure of the self-consistent reaction field calculations. The first-order PCM SAC-CI calculations well reproduce the results obtained by the iterative method for various types of excitations of molecules in polar and nonpolar solvents. The first-order contribution is significant for the excitation energies. The results obtained by the zeroth-order PCM SAC-CI, which considers the fixed ground-state reaction field for the excited-state calculations, are deviated from the results by the iterative method about 0.1 eV, and the zeroth-order PCM SAC-CI cannot predict even the direction of solvent shifts in n-hexane for many cases. The first-order PCM SAC-CI is applied to studying the solvatochromisms of (2,2′-bipyridine)tetracarbonyltungsten [W(CO)₄(bpy), bpy = 2,2′-bipyridine] and bis(pentacarbonyltungsten)pyrazine [(OC)₅W(pyz)W(CO)₅, pyz = pyrazine]. The SAC-CI calculations reveal the detailed character of the excited states and the mechanisms of solvent shifts. The energies of metal to ligand charge transfer states are significantly sensitive to solvents. The first-order PCM SAC-CI well reproduces the observed absorption spectra of the tungsten carbonyl complexes in several solvents.
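The working equations of the first-order PCM SAC-CI are not reproduced in the abstract; the underlying idea is the generic first-order perturbation estimate, in which the solvent interaction operator is evaluated with the zeroth-order (fixed-reaction-field) wavefunction instead of being iterated to self-consistency,

\[
E \approx E^{(0)} + \big\langle \Psi^{(0)} \big| \hat V_{\mathrm{solv}} \big| \Psi^{(0)} \big\rangle ,
\]

which is why the method avoids the iterative self-consistent reaction field procedure while still capturing most of the state-specific solvent shift.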
Development and validation of the Simulation Learning Effectiveness Inventory.
Chen, Shiah-Lian; Huang, Tsai-Wei; Liao, I-Chen; Liu, Chienchi
2015-10-01
To develop and psychometrically test the Simulation Learning Effectiveness Inventory. High-fidelity simulation helps students develop clinical skills and competencies. Yet, reliable instruments measuring learning outcomes are scant. A descriptive cross-sectional survey was used to validate the psychometric properties of the instrument measuring students' perception of simulation learning effectiveness. A purposive sample of 505 nursing students who had taken simulation courses was recruited from a department of nursing of a university in central Taiwan from January to June 2010. The study was conducted in two phases. In Phase I, question items were developed based on the literature review and the preliminary psychometric properties of the inventory were evaluated using exploratory factor analysis. Phase II was conducted to evaluate the reliability and validity of the finalized inventory using confirmatory factor analysis. The results of exploratory and confirmatory factor analyses revealed the instrument was composed of seven factors, named course arrangement, equipment resource, debriefing, clinical ability, problem-solving, confidence and collaboration. A further second-order analysis showed comparable fits between a three-factor second-order model (preparation, process and outcome) and the seven-factor first-order model. Internal consistency was supported by adequate Cronbach's alphas and composite reliability. Convergent and discriminant validities were also supported by confirmatory factor analysis. The study provides evidence that the Simulation Learning Effectiveness Inventory is reliable and valid for measuring student perception of learning effectiveness. The instrument is helpful in building the evidence-based knowledge of the effect of simulation teaching on students' learning outcomes. © 2015 John Wiley & Sons Ltd.
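For reference, the internal-consistency statistic reported here (Cronbach's alpha) can be computed directly from an item-score matrix. A short illustrative sketch follows, using made-up Likert-scale data rather than the inventory's actual responses.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# toy data: 5 respondents x 4 items on a Likert scale
scores = [[4, 5, 4, 4], [3, 3, 4, 3], [5, 5, 5, 4], [2, 3, 2, 3], [4, 4, 5, 4]]
print(round(cronbach_alpha(scores), 3))
```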
NASA Astrophysics Data System (ADS)
Khatibinia, M.; Salajegheh, E.; Salajegheh, J.; Fadaee, M. J.
2013-10-01
A new discrete gravitational search algorithm (DGSA) and a metamodelling framework are introduced for reliability-based design optimization (RBDO) of reinforced concrete structures. The RBDO of structures with soil-structure interaction (SSI) effects is investigated in accordance with performance-based design. The proposed DGSA is based on the standard gravitational search algorithm (GSA) to optimize the structural cost under deterministic and probabilistic constraints. The Monte Carlo simulation (MCS) method is considered the most reliable method for estimating reliability probabilities. In order to reduce the computational time of MCS, the proposed metamodelling framework is employed to predict the responses of the SSI system in the RBDO procedure. The metamodel consists of a weighted least squares support vector machine (WLS-SVM) and a wavelet kernel function, which is called WWLS-SVM. Numerical results demonstrate the efficiency and computational advantages of DGSA and the proposed metamodel for RBDO of reinforced concrete structures.
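To make the role of MCS concrete, below is a hedged sketch of a crude Monte Carlo failure-probability estimate for a generic limit state g(X) = R − S. In the RBDO loop described above, the expensive SSI response would be replaced by the trained WWLS-SVM metamodel; the limit state and distributions here are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def limit_state(x):
    # illustrative limit state g(X) = R - S: failure when g < 0
    resistance, load = x[:, 0], x[:, 1]
    return resistance - load

n = 100_000
samples = np.column_stack([
    rng.normal(loc=3.0, scale=0.3, size=n),   # resistance R
    rng.normal(loc=2.0, scale=0.4, size=n),   # load effect S
])
pf = np.mean(limit_state(samples) < 0.0)      # crude MCS estimate of P(failure)
print(pf)
```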
Formal Specifications for an Electrical Power Grid System Stability and Reliability
2015-09-01
... analyze the power grid system requirements and express the critical runtime behavior using first-order logic. First, we identify observable... Verification System, and Type systems to name a few [5]. Theorem proving's specification dimension is dependent on the expressive power of the formal...
A Probabilistic Collocation Based Iterative Kalman Filter for Landfill Data Assimilation
NASA Astrophysics Data System (ADS)
Qiang, Z.; Zeng, L.; Wu, L.
2016-12-01
Due to the strong spatial heterogeneity of landfills, uncertainty is ubiquitous in the gas transport process. To accurately characterize the landfill properties, the ensemble Kalman filter (EnKF) has been employed to assimilate the measurements, e.g., the gas pressure. As a Monte Carlo (MC) based method, the EnKF usually requires a large ensemble size, which poses a high computational cost for large-scale problems. In this work, we propose a probabilistic collocation based iterative Kalman filter (PCIKF) to estimate permeability in a liquid-gas coupling model. This method employs polynomial chaos expansion (PCE) to represent and propagate the uncertainties of model parameters and states, and an iterative form of the Kalman filter to assimilate the current gas pressure data. To further reduce the computation cost, the functional ANOVA (analysis of variance) decomposition is conducted, and only the first-order ANOVA components are retained for the PCE. Illustrated with numerical case studies, this proposed method shows significant superiority in computation efficiency compared with the traditional MC based iterative EnKF. The developed method has promising potential in reliable prediction and management of landfill gas production.
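A minimal sketch of the idea of keeping only first-order terms: a first-order Hermite polynomial chaos expansion of a toy forward model in two standard-normal inputs, fit by least squares. The landfill gas-transport model and the iterative Kalman update are not reproduced here; the names and the toy model are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(xi):
    # placeholder forward model (e.g. gas pressure as a function of two inputs)
    return np.exp(0.3 * xi[:, 0]) + 0.5 * xi[:, 1] ** 2

# first-order Hermite PCE basis in two standard-normal inputs: {1, xi1, xi2}
n = 500
xi = rng.standard_normal((n, 2))
basis = np.column_stack([np.ones(n), xi[:, 0], xi[:, 1]])
coef, *_ = np.linalg.lstsq(basis, model(xi), rcond=None)

mean_pce = coef[0]                 # PCE mean (coefficient of the constant term)
var_pce = np.sum(coef[1:] ** 2)    # variance captured by the first-order terms
print(mean_pce, var_pce)
```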
A Survey of Techniques for Modeling and Improving Reliability of Computing Systems
Mittal, Sparsh; Vetter, Jeffrey S.
2015-04-24
Recent trends of aggressive technology scaling have greatly exacerbated the occurrences and impact of faults in computing systems. This has made `reliability' a first-order design constraint. To address the challenges of reliability, several techniques have been proposed. In this study, we provide a survey of architectural techniques for improving resilience of computing systems. We especially focus on techniques proposed for microarchitectural components, such as processor registers, functional units, cache and main memory etc. In addition, we discuss techniques proposed for non-volatile memory, GPUs and 3D-stacked processors. To underscore the similarities and differences of the techniques, we classify them based on their key characteristics. We also review the metrics proposed to quantify vulnerability of processor structures. Finally, we believe that this survey will help researchers, system-architects and processor designers in gaining insights into the techniques for improving reliability of computing systems.
A Survey of Techniques for Modeling and Improving Reliability of Computing Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mittal, Sparsh; Vetter, Jeffrey S.
Recent trends of aggressive technology scaling have greatly exacerbated the occurrences and impact of faults in computing systems. This has made `reliability' a first-order design constraint. To address the challenges of reliability, several techniques have been proposed. In this study, we provide a survey of architectural techniques for improving resilience of computing systems. We especially focus on techniques proposed for microarchitectural components, such as processor registers, functional units, cache and main memory etc. In addition, we discuss techniques proposed for non-volatile memory, GPUs and 3D-stacked processors. To underscore the similarities and differences of the techniques, we classify them based on their key characteristics. We also review the metrics proposed to quantify vulnerability of processor structures. Finally, we believe that this survey will help researchers, system-architects and processor designers in gaining insights into the techniques for improving reliability of computing systems.
Reliability of doming and toe flexion testing to quantify foot muscle strength.
Ridge, Sarah Trager; Myrer, J William; Olsen, Mark T; Jurgensmeier, Kevin; Johnson, A Wayne
2017-01-01
Quantifying the strength of the intrinsic foot muscles has been a challenge for clinicians and researchers. The reliable measurement of this strength is important in order to assess weakness, which may contribute to a variety of functional issues in the foot and lower leg, including plantar fasciitis and hallux valgus. This study reports 3 novel methods for measuring foot strength - doming (previously unmeasured), hallux flexion, and flexion of the lesser toes. Twenty-one healthy volunteers performed the strength tests during two testing sessions which occurred one to five days apart. Each participant performed each series of strength tests (doming, hallux flexion, and lesser toe flexion) four times during the first testing session (twice with each of two raters) and two times during the second testing session (once with each rater). Intra-class correlation coefficients were calculated to test for reliability for the following comparisons: between raters during the same testing session on the same day (inter-rater, intra-day, intra-session), between raters on different days (inter-rater, inter-day, inter-session), between days for the same rater (intra-rater, inter-day, inter-session), and between sessions on the same day by the same rater (intra-rater, intra-day, inter-session). ICCs showed good to excellent reliability for all tests between days, raters, and sessions. Average doming strength was 99.96 ± 47.04 N. Average hallux flexion strength was 65.66 ± 24.5 N. Average lateral toe flexion was 50.96 ± 22.54 N. These simple tests using relatively low cost equipment can be used for research or clinical purposes. If repeated testing will be conducted on the same participant, it is suggested that the same researcher or clinician perform the testing each time for optimal reliability.
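The intra-class correlation coefficients reported above can be computed from a subjects-by-raters score matrix. Below is a sketch of ICC(2,1) (two-way random effects, absolute agreement, single measure) with toy values; this is one plausible ICC form for the comparisons described, since the paper's exact ICC model is not stated in the abstract.

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    data: (n_subjects x k_raters) matrix of scores.
    """
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    msr = k * np.sum((data.mean(axis=1) - grand) ** 2) / (n - 1)   # subjects mean square
    msc = n * np.sum((data.mean(axis=0) - grand) ** 2) / (k - 1)   # raters mean square
    sse = np.sum((data - grand) ** 2) - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))                                # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

strength = [[100, 104], [62, 60], [51, 55], [88, 90], [47, 45]]  # toy force values (N), 2 raters
print(round(icc_2_1(strength), 3))
```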
Kim, Gyu Ri; Netuveli, Gopalakrishnan; Blane, David; Peasey, Anne; Malyutina, Sofia; Simonova, Galina; Kubinova, Ruzena; Pajak, Andrzej; Croezen, Simone; Bobak, Martin; Pikhart, Hynek
2015-01-01
Objectives: The aim was to assess the reliability and validity of the quality of life (QoL) instrument CASP-19, and of three shorter 12-item versions (CASP-12), in a large population sample of older adults from the HAPIEE (Health, Alcohol, and Psychosocial factors In Eastern Europe) study. Methods: From the Czech Republic, Russia, and Poland, 13,210 HAPIEE participants aged 50 or older completed the retirement questionnaire including CASP-19 at baseline. Three shorter 12-item versions were also derived from the original 19-item instrument. Psychometric validation used confirmatory factor analysis, Cronbach's alpha, Pearson's correlation, and construct validity. Results: The second-order four-factor model of CASP-19 did not provide a good fit to the data. The two-factor CASP-12v.3, including residual covariances for negative items to account for their method effect, had the best fit to the data in all countries (CFI = 0.98, TLI = 0.97, RMSEA = 0.05, and WRMR = 1.65 in the Czech Republic; 0.96, 0.94, 0.07, and 2.70 in Poland; and 0.93, 0.90, 0.08, and 3.04 in Russia). Goodness-of-fit indices for the two-factor structure were substantially better than for the second-order models. Conclusions: This large population-based study is the first validation study of the CASP scale in Central and Eastern Europe (CEE), and includes a general population sample in Russia, Poland, and the Czech Republic. The results of this study have demonstrated that the CASP-12v.3 is a valid and reliable tool for assessing QoL among adults aged 50 years or older. This version of CASP is recommended for use in future studies investigating QoL in the CEE populations. PMID:25059754
Mehrkash, Milad; Azhari, Mojtaba; Mirdamadi, Hamid Reza
2014-01-01
The importance of elastic wave propagation problem in plates arises from the application of ultrasonic elastic waves in non-destructive evaluation of plate-like structures. However, precise study and analysis of acoustic guided waves especially in non-homogeneous waveguides such as functionally graded plates are so complicated that exact elastodynamic methods are rarely employed in practical applications. Thus, the simple approximate plate theories have attracted much interest for the calculation of wave fields in FGM plates. Therefore, in the current research, the classical plate theory (CPT), first-order shear deformation theory (FSDT) and third-order shear deformation theory (TSDT) are used to obtain the transient responses of flexural waves in FGM plates subjected to transverse impulsive loadings. Moreover, comparing the results with those based on a well recognized hybrid numerical method (HNM), we examine the accuracy of the plate theories for several plates of various thicknesses under excitations of different frequencies. The material properties of the plate are assumed to vary across the plate thickness according to a simple power-law distribution in terms of volume fractions of constituents. In all analyses, spatial Fourier transform together with modal analysis are applied to compute displacement responses of the plates. A comparison of the results demonstrates the reliability ranges of the approximate plate theories for elastic wave propagation analysis in FGM plates. Furthermore, based on various examples, it is shown that whenever the plate theories are used within the appropriate ranges of plate thickness and frequency content, solution process in wave number-time domain based on modal analysis approach is not only sufficient but also efficient for finding the transient waveforms in FGM plates. Copyright © 2013 Elsevier B.V. All rights reserved.
Measuring the flexoelectric coefficient of bulk barium titanate from a shock wave experiment
NASA Astrophysics Data System (ADS)
Hu, Taotao; Deng, Qian; Liang, Xu; Shen, Shengping
2017-08-01
In this paper, a phenomenon of polarization introduced by shock waves is experimentally studied. Although this phenomenon has been reported previously in the community of physics, this is the first time to link it to flexoelectricity, the coupling between electric polarization and strain gradients in dielectrics. As the shock waves propagate in a dielectric material, electric polarization is thought to be induced by the strain gradient at the shock front. First, we control the first-order hydrogen gas gun to impact and generate shock waves in unpolarized bulk barium titanate (BT) samples. Then, a high-precision oscilloscope is used to measure the voltage generated by the flexoelectric effect. Based on experimental results, strain elastic wave theory, and flexoelectric theory, a longitudinal flexoelectric coefficient of the bulk BT sample is calculated to be μ₁₁ = 17.33 × 10⁻⁶ C/m, which is in accord with the published transverse flexoelectric coefficient. This method effectively suppresses the majority of drawbacks in the quasi-static and low frequency dynamic techniques and provides more reliable results of flexoelectric behaviors.
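The flexoelectric coupling invoked here relates polarization to the strain gradient; in the one-dimensional longitudinal reduction relevant to a planar shock front (a simplification of the full fourth-rank tensor relation), it reads

\[
P_1 = \mu_{11}\,\frac{\partial \varepsilon_{11}}{\partial x_1},
\]

so the steep strain gradient at the shock front produces a transient polarization, and hence a measurable voltage, even in an unpoled sample.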
Bains, William; Xiao, Yao; Yu, Changyong
2015-01-01
The components of life must survive in a cell long enough to perform their function in that cell. Because the rate of attack by water increases with temperature, we can, in principle, predict a maximum temperature above which an active terrestrial metabolism cannot function by analysis of the decomposition rates of the components of life, and comparison of those rates with the metabolites’ minimum metabolic half-lives. The present study is a first step in this direction, providing an analytical framework and method, and analyzing the stability of 63 small molecule metabolites based on literature data. Assuming that attack by water follows a first order rate equation, we extracted decomposition rate constants from literature data and estimated their statistical reliability. The resulting rate equations were then used to give a measure of confidence in the half-life of the metabolite concerned at different temperatures. There is little reliable data on metabolite decomposition or hydrolysis rates in the literature, the data is mostly confined to a small number of classes of chemicals, and the data available are sometimes mutually contradictory because of varying reaction conditions. However, a preliminary analysis suggests that terrestrial biochemistry is limited to environments below ~150–180 °C. We comment briefly on why pressure is likely to have a small effect on this. PMID:25821932
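For reference, the first-order rate law assumed in the analysis, the associated half-life, and the usual Arrhenius temperature dependence (given here as a generic sketch; the paper's fitted parameters are not reproduced) are

\[
C(t) = C_0\,e^{-kt}, \qquad t_{1/2} = \frac{\ln 2}{k}, \qquad k(T) = A\,e^{-E_a/(RT)},
\]

which is why the decomposition half-lives shorten rapidly with temperature and set an upper temperature bound for an active metabolism.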
Sampling design by the core-food approach for the Taiwan total diet study on veterinary drugs.
Chen, Chien-Chih; Tsai, Ching-Lun; Chang, Chia-Chin; Ni, Shih-Pei; Chen, Yi-Tzu; Chiang, Chow-Feng
2017-06-01
The core-food (CF) approach, first adopted in the United States in the 1980s, has been widely used by many countries to assess the exposure to dietary hazards at a population level. However, the reliability of exposure estimates (C × CR) depends critically on sampling methods designed so that the detected chemical concentration (C) of each CF matches the corresponding consumption rate (CR) estimated from the surveyed intake data. In order to reduce the uncertainty of food matching, this study presents a sampling design scheme, namely the subsample method, for the 2016 Taiwan total diet study (TDS) on veterinary drugs. We first combined the four sets of national dietary recall data that covered the entire age strata (1-65+ years), and aggregated them into 307 CFs by their similarity in nutritional values, manufacturing and cooking methods. The 40 CFs pertinent to veterinary drug residues were selected for this study, and 16 subsamples for each CF were designed by weighing their quantities in CR, product brands, manufacturing, processing and cooking methods. The calculated food matching rates of each CF from this study were 84.3-97.3%, which were higher than those obtained from many previous studies using the representative food (RF) method (53.1-57.8%). The subsample method not only considers the variety of food processing and cooking methods, but also provides better food matching and reduces the uncertainty of exposure assessment.
NASA Astrophysics Data System (ADS)
Abdenov, A. Zh; Trushin, V. A.; Abdenova, G. A.
2018-01-01
The paper considers the question of filling the relevant SIEM nodes based on calculations of objective assessments in order to improve the reliability of subjective expert assessments. The proposed methodology is necessary for the most accurate security risk assessment of information systems. This technique is also intended to establish real-time operational information protection in enterprise information systems. Risk calculations are based on objective estimates of the probabilities of adverse events and on predictions of the magnitude of damage from information security violations. Calculations of objective assessments are necessary to increase the reliability of the proposed expert assessments.
Apeldoorn, Adri T.; van Helvoirt, Hans; Ostelo, Raymond W.; Meihuizen, Hanneke; Kamper, Steven J.; van Tulder, Maurits W.; de Vet, Henrica C. W.
2016-01-01
Study design Observational inter-rater reliability study. Objectives To examine: (1) the inter-rater reliability of a modified version of Delitto et al.’s classification-based algorithm for patients with low back pain; (2) the influence of different levels of familiarity with the system; and (3) the inter-rater reliability of algorithm decisions in patients who clearly fit into a subgroup (clear classifications) and those who do not (unclear classifications). Methods Patients were examined twice on the same day by two of three participating physical therapists with different levels of familiarity with the system. Patients were classified into one of four classification groups. Raters were blind to the others’ classification decision. In order to quantify the inter-rater reliability, percentages of agreement and Cohen’s Kappa were calculated. Results A total of 36 patients were included (clear classification n = 23; unclear classification n = 13). The overall rate of agreement was 53% and the Kappa value was 0·34 [95% confidence interval (CI): 0·11–0·57], which indicated only fair inter-rater reliability. Inter-rater reliability for patients with a clear classification (agreement 52%, Kappa value 0·29) was not higher than for patients with an unclear classification (agreement 54%, Kappa value 0·33). Familiarity with the system (i.e. trained with written instructions and previous research experience with the algorithm) did not improve the inter-rater reliability. Conclusion Our pilot study challenges the inter-rater reliability of the classification procedure in clinical practice. Therefore, more knowledge is needed about factors that affect the inter-rater reliability, in order to improve the clinical applicability of the classification scheme. PMID:27559279
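For reference, Cohen's kappa corrects the raw percentage agreement for agreement expected by chance. A short sketch with illustrative subgroup labels follows; the labels and ratings below are hypothetical and are not the study data.

```python
import numpy as np

def cohens_kappa(rater_a, rater_b, categories):
    """Cohen's kappa for two raters assigning each patient to one category."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    po = np.mean(a == b)                                              # observed agreement
    pe = sum(np.mean(a == c) * np.mean(b == c) for c in categories)   # chance agreement
    return (po - pe) / (1 - pe)

groups = ["manipulation", "stabilization", "specific exercise", "traction"]
r1 = ["manipulation", "traction", "stabilization", "manipulation", "specific exercise"]
r2 = ["manipulation", "stabilization", "stabilization", "traction", "specific exercise"]
print(round(cohens_kappa(r1, r2, groups), 2))
```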
NASA Technical Reports Server (NTRS)
Martin, J. M. L.; Lee, Timothy J.
1993-01-01
The protonation of N2O and the intramolecular proton transfer in N2OH(+) are studied using various basis sets and a variety of methods, including second-order many-body perturbation theory (MP2), singles and doubles coupled cluster (CCSD), the augmented coupled cluster (CCSD(T)), and complete active space self-consistent field (CASSCF) methods. For geometries, MP2 leads to serious errors even for HNNO(+); for the transition state, only CCSD(T) produces a reliable geometry due to serious nondynamical correlation effects. The proton affinity at 298.15 K is estimated at 137.6 kcal/mol, in close agreement with recent experimental determinations of 137.3 +/- 1 kcal/mol.
A novel evaluation strategy for fatigue reliability of flexible nanoscale films
NASA Astrophysics Data System (ADS)
Zheng, Si-Xue; Luo, Xue-Mei; Wang, Dong; Zhang, Guang-Ping
2018-03-01
In order to evaluate the fatigue reliability of nanoscale metal films on flexible substrates, here we propose an effective evaluation approach to obtain the critical fatigue cracking strain, based on direct observation of fatigue damage sites through a conventional dynamic bending testing technique. By this method, the fatigue properties and damage behaviors of 930 nm-thick Au films and 600 nm-thick Mo-W multilayers with an individual layer thickness of 100 nm on flexible polyimide substrates were investigated. A Coffin-Manson relationship between the fatigue life and the applied strain range was obtained for the Au films and Mo-W multilayers. The characterization of fatigue damage behaviors verifies the feasibility of this method, which appears easier and more effective compared with the other testing methods.
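The Coffin-Manson form referred to above is a power law between the applied strain range and the fatigue life; a common way to write it (symbols generic, the fitted constants for these films are not given here) is

\[
\frac{\Delta\varepsilon_p}{2} = \varepsilon_f'\,(2N_f)^{c}
\qquad\text{or, equivalently,}\qquad
\Delta\varepsilon = C\,N_f^{-\beta},
\]

where \(N_f\) is the number of cycles to fatigue cracking and \(\varepsilon_f'\), \(c\) (or \(C\), \(\beta\)) are fitting constants.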
Lauricella, Leticia L; Costa, Priscila B; Salati, Michele; Pego-Fernandes, Paulo M; Terra, Ricardo M
2018-06-01
Database quality measurement should be considered a mandatory step to ensure an adequate level of confidence in data used for research and quality improvement. Several metrics have been described in the literature, but no standardized approach has been established. We aimed to describe a methodological approach applied to measure the quality and inter-rater reliability of a regional multicentric thoracic surgical database (Paulista Lung Cancer Registry). Data from the first 3 years of the Paulista Lung Cancer Registry underwent an audit process with 3 metrics: completeness, consistency, and inter-rater reliability. The first 2 methods were applied to the whole data set, and the last method was calculated using 100 cases randomized for direct auditing. Inter-rater reliability was evaluated using percentage of agreement between the data collector and auditor and through calculation of Cohen's κ and intraclass correlation. The overall completeness per section ranged from 0.88 to 1.00, and the overall consistency was 0.96. Inter-rater reliability showed many variables with high disagreement (>10%). For numerical variables, intraclass correlation was a better metric than inter-rater reliability. Cohen's κ showed that most variables had moderate to substantial agreement. The methodological approach applied to the Paulista Lung Cancer Registry showed that completeness and consistency metrics did not sufficiently reflect the real quality status of a database. The inter-rater reliability associated with κ and intraclass correlation was a better quality metric than completeness and consistency metrics because it could determine the reliability of specific variables used in research or benchmark reports. This report can be a paradigm for future studies of data quality measurement. Copyright © 2018 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
Multi-kw dc power distribution system study program
NASA Technical Reports Server (NTRS)
Berkery, E. A.; Krausz, A.
1974-01-01
The first phase of the Multi-kw dc Power Distribution Technology Program is reported and involves the test and evaluation of a technology breadboard in a specifically designed test facility according to design concepts developed in a previous study on space vehicle electrical power processing, distribution, and control. The static and dynamic performance, fault isolation, reliability, electromagnetic interference characteristics, and operability factors of high-voltage distribution systems were studied in order to gain a technology base for the use of high voltage dc systems in future aerospace vehicles. Detailed technical descriptions are presented and include data for the following: (1) dynamic interactions due to operation of solid state and electromechanical switchgear; (2) multiplexed and computer controlled supervision and checkout methods; (3) pulse width modulator design; and (4) cable design factors.
Obtaining Reliable Predictions of Terrestrial Energy Coupling From Real-Time Solar Wind Measurement
NASA Technical Reports Server (NTRS)
Weimer, Daniel R.
2001-01-01
The first draft of a manuscript titled "Variable time delays in the propagation of the interplanetary magnetic field" has been completed, for submission to the Journal of Geophysical Research. In the preparation of this manuscript all data and analysis programs had been updated to the highest temporal resolution possible, at 16 seconds or better. The program which computes the "measured" IMF propagation time delays from these data has also undergone another improvement. In another significant development, a technique has been developed in order to predict IMF phase plane orientations, and the resulting time delays, using only measurements from a single satellite at L1. The "minimum variance" method is used for this computation. Further work will be done on optimizing the choice of several parameters for the minimum variance calculation.
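A hedged sketch of the minimum variance step described above: the phase-plane normal is taken as the eigenvector of the IMF covariance matrix with the smallest eigenvalue, and the delay for a planar structure convected with the solar wind follows from simple geometry. The function names and toy values are hypothetical; this is not the author's code.

```python
import numpy as np

def phase_plane_normal(b_samples):
    """Minimum variance analysis: return the direction along which the measured
    IMF varies least, i.e. the eigenvector of the field covariance matrix with
    the smallest eigenvalue (taken as the phase-plane normal)."""
    cov = np.cov(np.asarray(b_samples, dtype=float), rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return eigvecs[:, 0]

def propagation_delay(normal, dr, v_sw):
    """Delay for a planar structure convected with the solar wind:
    dt = (n . dr) / (n . v_sw), with dr the separation vector (km) and
    v_sw the solar wind velocity (km/s)."""
    return np.dot(normal, dr) / np.dot(normal, v_sw)

# toy IMF samples (nT, GSE components) and a rough L1-to-Earth geometry
b = np.array([[4.0, -2.0, 1.0], [3.6, -1.1, 1.3], [4.3, -2.6, 0.7], [3.9, -1.4, 1.1]])
n_hat = phase_plane_normal(b)
print(propagation_delay(n_hat, dr=np.array([-1.5e6, 0.0, 0.0]),
                        v_sw=np.array([-400.0, 0.0, 0.0])))   # seconds, toy values
```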
Reliability and performance evaluation of systems containing embedded rule-based expert systems
NASA Technical Reports Server (NTRS)
Beaton, Robert M.; Adams, Milton B.; Harrison, James V. A.
1989-01-01
A method for evaluating the reliability of real-time systems containing embedded rule-based expert systems is proposed and investigated. It is a three-stage technique that addresses the impact of knowledge-base uncertainties on the performance of expert systems. In the first stage, a Markov reliability model of the system is developed which identifies the key performance parameters of the expert system. In the second stage, the evaluation method is used to determine the values of the expert system's key performance parameters. The performance parameters can be evaluated directly by using a probabilistic model of uncertainties in the knowledge-base or by using sensitivity analyses. In the third and final stage, the performance parameters of the expert system are combined with performance parameters for other system components and subsystems to evaluate the reliability and performance of the complete system. The evaluation method is demonstrated in the context of a simple expert system used to supervise the performance of an FDI algorithm associated with an aircraft longitudinal flight-control system.
Abuali, M M; Katariwala, R; LaBombardi, V J
2012-05-01
The agar proportion method (APM) for determining Mycobacterium tuberculosis susceptibilities is a qualitative method that requires 21 days in order to produce the results. The Sensititre method allows for a quantitative assessment. Our objective was to compare the accuracy, time to results, and ease of use of the Sensititre method to the APM. 7H10 plates in the APM and 96-well microtiter dry MYCOTB panels containing 12 antibiotics at full dilution ranges in the Sensititre method were inoculated with M. tuberculosis and read for colony growth. Thirty-seven clinical isolates were tested using both methods and 26 challenge strains of blinded susceptibilities were tested using the Sensititre method only. The Sensititre method displayed 99.3% concordance with the APM. The APM provided reliable results on day 21, whereas the Sensititre method displayed consistent results by day 10. The Sensititre method provides a more rapid, quantitative, and efficient method of testing both first- and second-line drugs when compared to the gold standard. It will give clinicians a sense of the degree of susceptibility, thus, guiding the therapeutic decision-making process. Furthermore, the microwell plate format without the need for instrumentation will allow its use in resource-poor settings.
Computational understanding of Li-ion batteries
NASA Astrophysics Data System (ADS)
Urban, Alexander; Seo, Dong-Hwa; Ceder, Gerbrand
2016-03-01
Over the last two decades, computational methods have made tremendous advances, and today many key properties of lithium-ion batteries can be accurately predicted by first principles calculations. For this reason, computations have become a cornerstone of battery-related research by providing insight into fundamental processes that are not otherwise accessible, such as ionic diffusion mechanisms and electronic structure effects, as well as a quantitative comparison with experimental results. The aim of this review is to provide an overview of state-of-the-art ab initio approaches for the modelling of battery materials. We consider techniques for the computation of equilibrium cell voltages, 0-Kelvin and finite-temperature voltage profiles, ionic mobility and thermal and electrolyte stability. The strengths and weaknesses of different electronic structure methods, such as DFT+U and hybrid functionals, are discussed in the context of voltage and phase diagram predictions, and we review the merits of lattice models for the evaluation of finite-temperature thermodynamics and kinetics. With such a complete set of methods at hand, first principles calculations of ordered, crystalline solids, i.e., of most electrode materials and solid electrolytes, have become reliable and quantitative. However, the description of molecular materials and disordered or amorphous phases remains an important challenge. We highlight recent exciting progress in this area, especially regarding the modelling of organic electrolytes and solid-electrolyte interfaces.
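One of the quantities mentioned above, the equilibrium cell voltage, follows from first-principles total energies through the standard average-intercalation-voltage expression (neglecting volume and entropy contributions), commonly used in such studies:

\[
\bar V = -\,\frac{E\!\left[\mathrm{Li}_{x_2}\mathrm{Host}\right] - E\!\left[\mathrm{Li}_{x_1}\mathrm{Host}\right] - (x_2 - x_1)\,E\!\left[\mathrm{Li}\right]}{(x_2 - x_1)\,e},
\]

where the energies are total energies of the lithiated host at compositions \(x_1\) and \(x_2\) and of metallic Li, and \(e\) is the elementary charge.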
NASA Astrophysics Data System (ADS)
Miao, Yongchun; Kang, Rongxue; Chen, Xuefeng
2017-12-01
In recent years, with the gradual extension of reliability research, the study of production system reliability has become a hot topic in various industries. The man-machine-environment system is a complex system composed of human factors, machinery equipment and environment. The reliability of each individual factor must be analyzed in order to progress to the study of three-factor reliability. Meanwhile, the dynamic relationship among man, machine and environment should be considered to establish an effective fuzzy evaluation mechanism to truly and effectively analyze the reliability of such systems. In this paper, based on systems engineering, fuzzy theory, reliability theory, human error, environmental impact and machinery equipment failure theory, the reliabilities of the human factor, machinery equipment and environment of a chemical production system were studied by the method of fuzzy evaluation. Finally, the reliability of the man-machine-environment system was calculated to obtain the weighted result, which indicated that the reliability value of this chemical production system was 86.29. Through the given evaluation domain it can be seen that the reliability of the man-machine-environment integrated system is in a good status, and effective measures for further improvement were proposed according to the fuzzy calculation results.
Aye, Thanda; Oo, Khin Saw; Khin, Myo Thuzar; Kuramoto-Ahuja, Tsugumi; Maruyama, Hitoshi
2017-01-01
[Purpose] The purpose of this study was to investigate the reliability of the Test of Gross Motor Development, second edition (TGMD-2) for Kindergarten children in Myanmar. [Subjects and Methods] Fifty healthy Kindergarten children (23 males, 27 females) whose parents/guardians had given written consent participated. All 12 gross motor skills of the TGMD-2 were explained and demonstrated to the subjects before the assessment. Each subject individually performed two trials of each gross motor skill and the performance was video recorded. Three raters separately watched the video recordings and rated them for inter-rater reliability. A second assessment was done one month later with 25 of the 50 subjects for test-retest reliability. The video recordings of 12 subjects were randomly selected from the first 50 recordings for intra-rater reliability six weeks after the first assessment. Agreement on the locomotor and object control raw scores and the gross motor quotient (GMQ) was calculated. [Results] All reliability coefficients for the locomotor and object control raw scores and the GMQ indicated good to excellent reliability. [Conclusion] The results indicate that the TGMD-2 is a highly reliable and appropriate tool for assessing gross motor skill development of Kindergarten children in Myanmar. PMID:29184278
Aye, Thanda; Oo, Khin Saw; Khin, Myo Thuzar; Kuramoto-Ahuja, Tsugumi; Maruyama, Hitoshi
2017-10-01
[Purpose] The purpose of this study was to investigate the reliability of the Test of Gross Motor Development, second edition (TGMD-2) for Kindergarten children in Myanmar. [Subjects and Methods] Fifty healthy Kindergarten children (23 males, 27 females) whose parents/guardians had given written consent participated. All 12 gross motor skills of the TGMD-2 were explained and demonstrated to the subjects before the assessment. Each subject individually performed two trials of each gross motor skill and the performance was video recorded. Three raters separately watched the video recordings and rated them for inter-rater reliability. A second assessment was done one month later with 25 of the 50 subjects for test-retest reliability. The video recordings of 12 subjects were randomly selected from the first 50 recordings for intra-rater reliability six weeks after the first assessment. Agreement on the locomotor and object control raw scores and the gross motor quotient (GMQ) was calculated. [Results] All reliability coefficients for the locomotor and object control raw scores and the GMQ indicated good to excellent reliability. [Conclusion] The results indicate that the TGMD-2 is a highly reliable and appropriate tool for assessing gross motor skill development of Kindergarten children in Myanmar.
Optimal least-squares finite element method for elliptic problems
NASA Technical Reports Server (NTRS)
Jiang, Bo-Nan; Povinelli, Louis A.
1991-01-01
An optimal least squares finite element method is proposed for two dimensional and three dimensional elliptic problems and its advantages are discussed over the mixed Galerkin method and the usual least squares finite element method. In the usual least squares finite element method, the second order equation -∇·(∇u) + u = f is recast as a first order system -∇·p + u = f, ∇u - p = 0. The error analysis and numerical experiment show that, in this usual least squares finite element method, the rate of convergence for the flux p is one order lower than optimal. In order to get an optimal least squares method, the irrotationality condition ∇ × p = 0 should be included in the first order system.
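A sketch of the least-squares functional implied by the abstract: the usual method minimizes the residuals of the first-order system only, while the optimal variant also penalizes the curl of the flux (the irrotationality condition), which restores the optimal convergence rate for p,

\[
I(u,\mathbf p) = \big\|{-\nabla\cdot\mathbf p} + u - f\big\|_0^{2}
+ \big\|\nabla u - \mathbf p\big\|_0^{2}
+ \big\|\nabla\times\mathbf p\big\|_0^{2},
\]

minimized over the chosen finite element spaces for u and p.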
A method of measuring three-dimensional scapular attitudes using the optotrak probing system.
Hébert, L J; Moffet, H; McFadyen, B J; St-Vincent, G
2000-01-01
To develop a method to obtain accurate three-dimensional scapular attitudes and to assess their concurrent validity and reliability. In this methodological study, the three-dimensional scapular attitudes were calculated in degrees, using a rotation matrix (cyclic Cardanic sequence), from spatial coordinates obtained with the probing of three non-collinear landmarks, first on an anatomical model and second on a healthy subject. Although abnormal movement of the scapula is related to shoulder impingement syndrome, it is not clearly understood whether or not scapular motion impairment is a predisposing factor. Characterization of three-dimensional scapular attitudes in planes and at joint angles for which sub-acromial impingement is more likely to occur has not been established. The Optotrak probing system was used. An anatomical model of the scapula was built and allowed us to impose scapular attitudes of known direction and magnitude. A local coordinate reference system was defined with three non-collinear anatomical landmarks to assess accuracy and concurrent validity of the probing method with fixed markers. Axial rotation angles were calculated from a rotation matrix using a cyclic Cardanic sequence of rotations. The same three non-collinear body landmarks were digitized on one healthy subject and the three-dimensional scapular attitudes obtained were compared between sessions in order to assess the reliability. The measure of three-dimensional scapular attitudes calculated from data using the Optotrak probing system was accurate, with means of the differences between imposed and calculated rotation angles ranging from 1.5 degrees to 4.2 degrees. The greatest variations were observed around the third axis of the Cardanic sequence associated with posterior-anterior transverse rotations. The mean difference between the Optotrak probing system method and fixed markers was 1.73 degrees, showing good concurrent validity. Differences between the two methods were generally very low for one- and two-direction displacements and the largest discrepancies were observed for imposed displacements combining movement about the three axes. The between-sessions variation of three-dimensional scapular attitudes was less than 10% for most of the arm positions adopted by a healthy subject, suggesting good reliability. The Optotrak probing system used with a standardized protocol leads to accurate, valid and reliable measures of scapular attitudes. Although abnormal range of motion of the scapula is often related to shoulder pathologies, reliable outcome measures to quantify three-dimensional scapular motion on subjects are not available. It is important to establish a standardized protocol to characterize three-dimensional scapular motion on subjects using a method for which the accuracy and validity are known. The method used in the present study has provided such a protocol and will now allow us to verify to what extent scapular motion impairment is linked to the development of specific shoulder pathologies.
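A minimal sketch of the final computational step, extracting Cardan angles from a rotation matrix built from three digitized landmarks. The "XYZ" sequence below is illustrative only, since the specific cyclic Cardanic sequence used in the study is not stated in the abstract.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def scapular_attitudes(axes_global):
    """Cardan angles (degrees) of a scapular coordinate system.

    axes_global: 3x3 matrix whose columns are the scapula's local unit axes,
    built from three digitized, non-collinear landmarks and expressed in the
    global (laboratory) frame.
    """
    rot = Rotation.from_matrix(np.asarray(axes_global, dtype=float))
    return rot.as_euler("XYZ", degrees=True)   # illustrative rotation sequence

# identity orientation -> zero attitudes
print(scapular_attitudes(np.eye(3)))
```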
Open Rotor Computational Aeroacoustic Analysis with an Immersed Boundary Method
NASA Technical Reports Server (NTRS)
Brehm, Christoph; Barad, Michael F.; Kiris, Cetin C.
2016-01-01
Reliable noise prediction capabilities are essential to enable novel fuel-efficient open rotor designs that can meet the community and cabin noise standards. Toward this end, immersed boundary methods have reached a level of maturity where more and more complex flow problems can be tackled with this approach. This paper demonstrates that our higher-order immersed boundary method provides the ability for aeroacoustic analysis of wake-dominated flow fields generated by a contra-rotating open rotor. This is a first-of-a-kind aeroacoustic simulation of an open rotor propulsion system employing an immersed boundary method. In addition to discussing the methodologies of how to apply the immersed boundary method to this moving boundary problem, we will provide a detailed validation of the aeroacoustic analysis approach employing the Launch Ascent and Vehicle Aerodynamics (LAVA) solver. Two free-stream Mach numbers, M=0.2 and M=0.78, are considered in this analysis, corresponding to nominal take-off and cruise flow conditions. The simulation data are compared to available experimental data and other computational results employing more conventional CFD methods. Spectral analysis is used to determine the dominant wave propagation pattern in the acoustic near-field.
Langarika-Rocafort, Argia; Emparanza, José Ignacio; Aramendi, José F; Castellano, Julen; Calleja-González, Julio
2017-01-01
To examine the intra-observer reliability and agreement between five methods of measurement for dorsiflexion during the Weight Bearing Dorsiflexion Lunge Test (WBLT) and to assess the degree of agreement between three methods in female athletes. Repeated measurements study design. Volleyball club. Twenty-five volleyball players. Dorsiflexion was evaluated using five methods: heel-wall distance, first toe-wall distance, inclinometer at the tibia, inclinometer at the Achilles tendon and the dorsiflexion angle obtained by a simple trigonometric function. For the statistical analysis, agreement was studied using the Bland-Altman method, the Standard Error of Measurement and the Minimum Detectable Change. Reliability analysis was performed using the Intraclass Correlation Coefficient (ICC). Measurement methods using the inclinometer had more than 6° of measurement error. The angle calculated by the trigonometric function had 3.28° of error. Inclinometer-based methods had ICC values < 0.90. Distance-based methods and the trigonometric angle measurement had ICC values > 0.90. Concerning the agreement between methods, there was between 1.93° and 14.42° of bias, and between 4.24° and 7.96° of random error. To assess the dorsiflexion angle in the WBLT, the angle calculated by the trigonometric function is the most repeatable method. The methods of measurement cannot be used interchangeably. Copyright © 2016 Elsevier Ltd. All rights reserved.
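The Bland-Altman quantities used here (bias and random error) come directly from the paired differences between two methods. A short sketch with made-up dorsiflexion angles follows; the values are illustrative, not the study data.

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bias and 95% limits of agreement between two measurement methods."""
    a, b = np.asarray(method_a, float), np.asarray(method_b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd   # bias, lower LoA, upper LoA

# toy dorsiflexion angles (degrees) from two methods on the same ankles
angle_trig = [42.1, 38.5, 45.0, 40.2, 36.8]
angle_incl = [44.0, 41.2, 46.5, 43.8, 39.0]
print(bland_altman(angle_trig, angle_incl))
```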
Nimon, Kim; Zientek, Linda Reichwein; Henson, Robin K.
2012-01-01
The purpose of this article is to help researchers avoid common pitfalls associated with reliability including incorrectly assuming that (a) measurement error always attenuates observed score correlations, (b) different sources of measurement error originate from the same source, and (c) reliability is a function of instrumentation. To accomplish our purpose, we first describe what reliability is and why researchers should care about it with focus on its impact on effect sizes. Second, we review how reliability is assessed with comment on the consequences of cumulative measurement error. Third, we consider how researchers can use reliability generalization as a prescriptive method when designing their research studies to form hypotheses about whether or not reliability estimates will be acceptable given their sample and testing conditions. Finally, we discuss options that researchers may consider when faced with analyzing unreliable data. PMID:22518107
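For context, the textbook attenuation relation and its disattenuation correction show one concrete way reliability enters effect sizes; the article's first pitfall is precisely that this simple attenuation picture does not always hold:

\[
r_{x'y'} = \rho_{xy}\sqrt{r_{xx}\,r_{yy}},
\qquad
\hat\rho_{xy} = \frac{r_{x'y'}}{\sqrt{r_{xx}\,r_{yy}}},
\]

where \(r_{xx}\) and \(r_{yy}\) are the score reliabilities, \(r_{x'y'}\) is the observed correlation and \(\hat\rho_{xy}\) the disattenuated estimate.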
NASA Astrophysics Data System (ADS)
Molina-Cardín, Alberto; Campuzano, Saioa A.; Rivero, Mercedes; Osete, María Luisa; Gómez-Paccard, Miriam; Pérez-Fuentes, José Carlos; Pavón-Carrasco, F. Javier; Chauvin, Annick; Palencia-Ortas, Alicia
2017-04-01
In this work we present the first archaeomagnetic intensity database for the Iberian Peninsula covering the last 3 millennia. In addition to previously published archaeointensities (about 100 data points), we present twenty new high-quality archaeointensities. The new data have been obtained following the Thellier and Thellier method including pTRM-checks and have been corrected for the effect of the anisotropy of thermoremanent magnetization upon archaeointensity estimates. Importantly, about 50% of the new data obtained correspond to the first millennium BC, a period for which it was not previously possible to develop an intensity palaeosecular variation curve due to the lack of high-quality archaeointensity data. The different qualities of the data included in the Iberian dataset have been evaluated following different palaeomagnetic criteria, such as the number of specimens analysed, the laboratory protocol applied and the kind of material analysed. Finally, we present the first intensity palaeosecular variation curve for the Iberian Peninsula centred at Madrid for the last 3000 years. In order to obtain the most reliable secular variation curve, it has been generated using only selected high-quality data from the catalogue.
Health e-mavens: identifying active online health information users.
Sun, Ye; Liu, Miao; Krakow, Melinda
2016-10-01
Given the rapid increase in Internet use for effective health communication, it is important for health practitioners to be able to identify and mobilize active users of online health information across various web-based health intervention programmes. We propose the concept of 'health e-mavens' to characterize individuals actively engaged in online health information seeking and sharing activities. This study aimed to address three goals: (i) to test the factor structure of health e-mavenism, (ii) to assess the reliability and validity of this construct and (iii) to determine what predictors are associated with health e-mavenism. This study was a secondary analysis of nationally representative data from the 2010 Health Tracking Survey. We assessed the factor structure of health e-mavenism using confirmatory factor analysis and examined socio-demographic variables, health-related factors and use of technology as potential predictors of health e-mavenism through ordered regression analysis. Confirmatory factor analyses showed that a second-order two-factor structure best captured the health e-maven construct: health e-mavenism comprised two second-order factors, information acquisition (encompassing the first-order dimensions of information tracking and consulting) and information transmission (encompassing information posting and sharing). Both first-order and second-order factors exhibited good reliabilities. Several factors were found to be significant predictors of health e-mavenism. This study offers a starting point for further inquiries about health e-mavens. It is a fruitful construct for health promotion research in the age of new media technologies. We conclude with specific recommendations to further develop the health e-maven concept through continued empirical research. © 2015 The Authors. Health Expectations. Published by John Wiley & Sons Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
L’vova, M. M.; L’vov, S. Yu.; Komarov, V. B.
Methods of increasing the operating reliability of power transformers, autotransformers and shunt reactors are considered, in order to reduce the risk of the damage that accompanies internal short circuits and of equipment fires and explosions.
Exact solutions to the time-fractional differential equations via local fractional derivatives
NASA Astrophysics Data System (ADS)
Guner, Ozkan; Bekir, Ahmet
2018-01-01
This article utilizes the local fractional derivative and the exp-function method to construct exact solutions of nonlinear time-fractional differential equations (FDEs). To illustrate the validity of the method, it is applied to the time-fractional Camassa-Holm equation and the time-fractional generalized fifth-order KdV equation. Moreover, exact solutions are obtained for the equations formed by different parameter values of the time-fractional generalized fifth-order KdV equation. The method is a reliable and efficient mathematical tool for solving FDEs and can be applied to other nonlinear FDEs.
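As background on the exp-function method (a generic sketch; the specific transform and balancing used with the local fractional derivative in this article may differ), the FDE is first reduced to an ordinary differential equation through a fractional complex transform, and the solution is then sought as a ratio of finite exponential sums:

```latex
\xi = kx + \frac{c\,t^{\alpha}}{\Gamma(1+\alpha)}, \qquad 0 < \alpha \le 1,
\qquad
U(\xi) = \frac{\displaystyle\sum_{n=-c_1}^{d_1} a_n\, e^{n\xi}}{\displaystyle\sum_{m=-p_1}^{q_1} b_m\, e^{m\xi}},
```

where the integer bounds are fixed by balancing the highest-order linear and nonlinear terms, and the coefficients a_n, b_m follow from setting the resulting polynomial in e^{\xi} to zero.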
Comparison of measured and computed phase functions of individual tropospheric ice crystals
NASA Astrophysics Data System (ADS)
Stegmann, Patrick G.; Tropea, Cameron; Järvinen, Emma; Schnaiter, Martin
2016-07-01
Airplanes passing the incus (Latin: anvil) regions of tropical cumulonimbus clouds are at risk of suffering an engine power-loss event and engine damage due to ice ingestion (Mason et al., 2006 [1]). Research in this field relies on optical measurement methods to characterize ice crystals; however, the design and implementation of such methods presently suffer from the lack of reliable and efficient means of predicting the light scattering from ice crystals. The nascent discipline of direct measurement of phase functions of ice crystals, in conjunction with particle imaging and forward modelling through geometrical-optics derivative and Transition matrix codes, for the first time allows us to obtain a deeper understanding of the optical properties of real tropospheric ice crystals. In this manuscript, a sample phase function obtained via the Particle Habit Imaging and Polar Scattering (PHIPS) probe during a measurement campaign in flight over Brazil will be compared to three different light scattering codes. These include a newly developed first-order geometrical optics code taking into account the influence of the Gaussian beam illumination used in the PHIPS device, as well as the reference ray tracing code of Macke and the T-matrix code of Kahnert.
Higgs Boson Production in Association with a Jet at Next-to-Next-to-Leading Order.
Boughezal, Radja; Caola, Fabrizio; Melnikov, Kirill; Petriello, Frank; Schulze, Markus
2015-08-21
We present precise predictions for Higgs boson production in association with a jet. We work in the Higgs effective field theory framework and compute next-to-next-to-leading order QCD corrections to the gluon-gluon and quark-gluon channels, which is sufficient for reliable LHC phenomenology. We present fully differential results as well as total cross sections for the LHC. Our next-to-next-to-leading order predictions reduce the unphysical scale dependence by more than a factor of 2 and enhance the total rate by about twenty percent compared to next-to-leading order QCD predictions. Our results demonstrate for the first time satisfactory convergence of the perturbative series.
Pan, Sha-sha; Huang, Fu-rong; Xiao, Chi; Xian, Rui-yi; Ma, Zhi-guo
2015-10-01
To explore rapid and reliable methods for the detection of Epicarpium citri grandis (ECG), Fourier Transform Attenuated Total Reflection Infrared Spectroscopy (FTIR/ATR) and fluorescence spectrum imaging technology were combined with Multilayer Perceptron (MLP) neural network pattern recognition for the identification of ECG, and the two methods were compared. Infrared spectra and fluorescence spectral images were collected for 118 samples, 81 of ECG and 37 of other kinds. Based on the differences in the spectra, the spectral data in the 550-1 800 cm(-1) wavenumber range and the 400-720 nm wavelength range were taken as the objects of discriminant analysis. Principal component analysis (PCA) was then applied to reduce the dimensionality of the spectroscopic data of ECG, and the MLP neural network was used to classify them. The effects of different data preprocessing methods on the model were compared: multiplicative scatter correction (MSC), standard normal variate correction (SNV), first-order derivative (FD), second-order derivative (SD) and Savitzky-Golay (SG) smoothing. The results showed that, after Savitzky-Golay (SG) pretreatment of the infrared spectra, the MLP neural network with a sigmoid hidden-layer function gave the best discrimination of ECG, with 100% correct classification for both the training and testing sets. For the fluorescence spectral imaging technology, multiplicative scatter correction (MSC) was the most suitable pretreatment; after preprocessing, a three-layer MLP neural network with a sigmoid hidden-layer function achieved 100% correct classification of the training set and 96.7% of the testing set. It was shown that FTIR/ATR and fluorescence spectral imaging technology combined with an MLP neural network can be used for the identification of ECG and have the advantages of being rapid and reliable.
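A minimal sketch of the classification pipeline described above (Savitzky-Golay smoothing, PCA dimensionality reduction, sigmoid MLP), using common Python libraries with placeholder spectra, labels, and hyperparameters; it is not the authors' implementation or their settings.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Placeholder data: 118 spectra x 600 spectral points, labels 1 = ECG, 0 = other
rng = np.random.default_rng(0)
X = rng.normal(size=(118, 600))
y = np.r_[np.ones(81, dtype=int), np.zeros(37, dtype=int)]

X = savgol_filter(X, window_length=11, polyorder=2, axis=1)    # SG pretreatment

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
pca = PCA(n_components=10).fit(X_tr)                           # reduce dimensionality
clf = MLPClassifier(hidden_layer_sizes=(10,), activation='logistic',
                    max_iter=2000, random_state=0)             # sigmoid hidden layer
clf.fit(pca.transform(X_tr), y_tr)
print("test accuracy:", clf.score(pca.transform(X_te), y_te))
```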
A multiplex primer design algorithm for target amplification of continuous genomic regions.
Ozturk, Ahmet Rasit; Can, Tolga
2017-06-19
Targeted Next Generation Sequencing (NGS) assays are cost-efficient and reliable alternatives to Sanger sequencing. For sequencing of a very large set of genes, the target enrichment approach is suitable. However, for smaller genomic regions, the target amplification method is more efficient than both the target enrichment method and Sanger sequencing. The major difficulty of the target amplification method is the preparation of amplicons, in terms of required time, equipment, and labor. Multiplex PCR (MPCR) is a good solution to these problems. We propose a novel method to design MPCR primers for a continuous genomic region, following the best practices of clinically reliable PCR design processes. On an experimental setup with 48 different combinations of factors, we have shown that multiple parameters might affect finding the first feasible solution. Increasing the length of the initial primer candidate selection sequence gives better results, whereas waiting for a longer time to find the first feasible solution does not have a significant impact. We generated MPCR primer designs for the HBB whole gene, MEFV coding regions, and human exons between 2000 and 2100 bp long. Our benchmarking experiments show that the proposed MPCR approach is able to produce reliable NGS assay primers for a given sequence in a reasonable amount of time.
Quality Evaluation of Raw Moutan Cortex Using the AHP and Gray Correlation-TOPSIS Method
Zhou, Sujuan; Liu, Bo; Meng, Jiang
2017-01-01
Background: Raw Moutan cortex (RMC) is an important Chinese herbal medicine. Comprehensive and objective quality evaluation of Chinese herbal medicine has been one of the most important issues in modern herbal development. Objective: To evaluate and compare the quality of RMC using the weighted gray correlation-Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) method. Materials and Methods: The percentage composition of gallic acid, catechin, oxypaeoniflorin, paeoniflorin, quercetin, benzoylpaeoniflorin and paeonol in different batches of RMC was determined, and MATLAB programming was then adopted to construct the gray correlation-TOPSIS assessment model for quality evaluation of RMC. Results: The quality evaluation results of the model evaluation and the objective evaluation were consistent, reliable, and stable. Conclusion: The gray correlation-TOPSIS model can be well applied to the quality evaluation of traditional Chinese medicine with multiple components and has broad prospects in application. SUMMARY The experiment constructs a model to evaluate the quality of RMC using the weighted gray correlation-Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) method. Results show the model is reliable and provides a feasible way of evaluating the quality of traditional Chinese medicine with multiple components. PMID:28839384
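As background, a plain weighted TOPSIS ranking (without the gray-correlation extension used in the paper) can be sketched as follows; the decision matrix and weights are invented for illustration and do not correspond to the RMC batches or constituents studied here.

```python
import numpy as np

def topsis(matrix, weights, benefit=None):
    """Rank alternatives (rows) on criteria (columns) by closeness to the ideal solution."""
    X = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float) / np.sum(weights)
    if benefit is None:
        benefit = [True] * X.shape[1]          # all criteria treated as "larger is better"
    V = w * X / np.linalg.norm(X, axis=0)      # vector-normalized, weighted decision matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti,  axis=1)
    return d_neg / (d_pos + d_neg)             # closeness coefficient in [0, 1]

# Hypothetical: 4 batches scored on 7 constituent contents (all benefit criteria)
scores = topsis(np.random.default_rng(1).random((4, 7)), weights=np.ones(7))
print(np.argsort(scores)[::-1])                # batch ranking, best first
```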
Gong, Ang; Zhao, Xiubin; Pang, Chunlei; Duan, Rong; Wang, Yong
2015-12-02
For Global Navigation Satellite System (GNSS) single-frequency, single-epoch attitude determination, this paper proposes a new reliable method with a baseline vector constraint. First, prior knowledge of baseline length, heading, and pitch obtained from other navigation equipment or sensors is used to rigorously reconstruct the objective function. Then, the searching strategy is improved: a gradually enlarged ellipsoidal search space is substituted for the non-ellipsoidal search space to ensure that the correct ambiguity candidates are within it, and the search can be carried out directly by the least-squares ambiguity decorrelation adjustment (LAMBDA) method. Some of the vector candidates are further eliminated by a derived approximate inequality, which accelerates the searching process. Experimental results show that, compared to the traditional method with only a baseline length constraint, this new method can utilize a priori three-dimensional baseline knowledge to fix the ambiguity reliably and achieve a high success rate. Experimental tests also verify that it is not very sensitive to baseline vector error and can perform robustly when the angular error is not great.
Stanifer, John W.; Karia, Francis; Voils, Corrine I.; Turner, Elizabeth L.; Maro, Venance; Shimbi, Dionis; Kilawe, Humphrey; Lazaro, Matayo; Patel, Uptal D.
2015-01-01
Introduction Non-communicable diseases are a growing global burden, and structured surveys can identify critical gaps to address this epidemic. In sub-Saharan Africa, there are very few well-tested survey instruments measuring population attributes related to non-communicable diseases. To meet this need, we have developed and validated the first instrument evaluating knowledge, attitudes and practices pertaining to chronic kidney disease in a Swahili-speaking population. Methods and Results Between December 2013 and June 2014, we conducted a four-stage, mixed-methods study among adults from the general population of northern Tanzania. In stage 1, the survey instrument was constructed in English by a group of cross-cultural experts from multiple disciplines and through content analysis of focus group discussions to ensure local significance. Following translation, in stage 2, we piloted the survey through cognitive and structured interviews, and in stage 3, in order to obtain initial evidence of reliability and construct validity, we recruited and then administered the instrument to a random sample of 606 adults. In stage 4, we conducted analyses to establish test-retest reliability and known-groups validity which was informed by thematic analysis of the qualitative data in stages 1 and 2. The final version consisted of 25 items divided into three conceptual domains: knowledge, attitudes and practices. Each item demonstrated excellent test-retest reliability with established content and construct validity. Conclusions We have developed a reliable and valid cross-cultural survey instrument designed to measure knowledge, attitudes and practices of chronic kidney disease in a Swahili-speaking population of Northern Tanzania. This instrument may be valuable for addressing gaps in non-communicable diseases care by understanding preferences regarding healthcare, formulating educational initiatives, and directing development of chronic disease management programs that incorporate chronic kidney disease across sub-Saharan Africa. PMID:25811781
Human Reliability Assessments: Using the Past (Shuttle) to Predict the Future (Orion)
NASA Technical Reports Server (NTRS)
DeMott, Diana; Bigler, Mark
2016-01-01
NASA (National Aeronautics and Space Administration) Johnson Space Center (JSC) Safety and Mission Assurance (S&MA) uses two human reliability analysis (HRA) methodologies. The first is a simplified method which is based on how much time is available to complete the action, with consideration included for environmental and personal factors that could influence the human's reliability. This method is expected to provide a conservative value or placeholder as a preliminary estimate. This preliminary estimate or screening value is used to determine which placeholder needs a more detailed assessment. The second methodology is used to develop a more detailed human reliability assessment on the performance of critical human actions. This assessment needs to consider more than the time available; it includes factors such as the importance of the action, the context, environmental factors, potential human stresses, previous experience, training, physical design interfaces, available procedures/checklists and internal human stresses. The more detailed assessment is expected to be more realistic than that based primarily on time available. When performing an HRA on a system or process that has an operational history, we have information specific to the task based on this history and experience. In the case of a Probabilistic Risk Assessment (PRA) that is based on a new design and has no operational history, providing a "reasonable" assessment of potential crew actions becomes more challenging. In order to determine what is expected of future operational parameters, input from individuals who had relevant experience and were familiar with the systems and processes previously implemented by NASA was used to provide the "best" available data. Personnel from Flight Operations, Flight Directors, Launch Test Directors, Control Room Console Operators and Astronauts were all interviewed to provide a comprehensive picture of previous NASA operations. Verification of the assumptions and expectations expressed in the assessments will be needed when the procedures, flight rules and operational requirements are developed and then finalized.
Zhonggang, Liang; Hong, Yan
2006-10-01
A new method for calculating the fractal dimension of short-term heart rate variability (HRV) signals is presented. The method is based on the wavelet transform and filter banks. The implementation is as follows: first, the fractal component is extracted from the HRV signal using the wavelet transform; next, the power spectrum distribution of the fractal component is estimated using an auto-regressive model and the exponent γ is estimated by the least-squares method; finally, the fractal dimension of the HRV signal is estimated from the formula D = 2 - (γ - 1)/2. To validate the stability and reliability of the proposed method, 24 fractal signals with a fractal dimension of 1.6 were simulated using fractional Brownian motion; the results show that the method is stable and reliable.
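A simplified sketch of the spectral-slope route to the fractal dimension described above; it uses a Welch periodogram in place of the paper's wavelet/auto-regressive estimate, so it should be read as an illustration of the formula D = 2 - (γ - 1)/2 rather than a reproduction of the method.

```python
import numpy as np
from scipy.signal import welch

def fractal_dimension_from_slope(x, fs=4.0):
    """Estimate D from the 1/f^gamma power-law slope of the signal's power spectrum."""
    f, pxx = welch(x, fs=fs, nperseg=256)
    mask = f > 0
    slope, _ = np.polyfit(np.log(f[mask]), np.log(pxx[mask]), 1)
    gamma = -slope                       # P(f) ~ f^(-gamma)
    return 2.0 - (gamma - 1.0) / 2.0     # formula quoted in the abstract

# Stand-in test signal: Brownian-like noise, for which gamma ~ 2 and hence D ~ 1.5
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=4096))
print(round(fractal_dimension_from_slope(x), 2))
```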
Stochastic many-body perturbation theory for anharmonic molecular vibrations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hermes, Matthew R.; Hirata, So, E-mail: sohirata@illinois.edu; CREST, Japan Science and Technology Agency, 4-1-8 Honcho, Kawaguchi, Saitama 332-0012
2014-08-28
A new quantum Monte Carlo (QMC) method for anharmonic vibrational zero-point energies and transition frequencies is developed, which combines the diagrammatic vibrational many-body perturbation theory based on the Dyson equation with Monte Carlo integration. The infinite sums of the diagrammatic and thus size-consistent first- and second-order anharmonic corrections to the energy and self-energy are expressed as sums of a few m- or 2m-dimensional integrals of wave functions and a potential energy surface (PES) (m is the vibrational degrees of freedom). Each of these integrals is computed as the integrand (including the value of the PES) divided by the value of a judiciously chosen weight function evaluated on demand at geometries distributed randomly but according to the weight function via the Metropolis algorithm. In this way, the method completely avoids cumbersome evaluation and storage of high-order force constants necessary in the original formulation of the vibrational perturbation theory; it furthermore allows even higher-order force constants essentially up to an infinite order to be taken into account in a scalable, memory-efficient algorithm. The diagrammatic contributions to the frequency-dependent self-energies that are stochastically evaluated at discrete frequencies can be reliably interpolated, allowing the self-consistent solutions to the Dyson equation to be obtained. This method, therefore, can compute directly and stochastically the transition frequencies of fundamentals and overtones as well as their relative intensities as pole strengths, without fixed-node errors that plague some QMC. It is shown that, for an identical PES, the new method reproduces the correct deterministic values of the energies and frequencies within a few cm^-1 and pole strengths within a few thousandths. With the values of a PES evaluated on the fly at random geometries, the new method captures a noticeably greater proportion of anharmonic effects.
Impact of coverage on the reliability of a fault tolerant computer
NASA Technical Reports Server (NTRS)
Bavuso, S. J.
1975-01-01
A mathematical reliability model is established for a reconfigurable fault tolerant avionic computer system utilizing state-of-the-art computers. System reliability is studied in light of the coverage probabilities associated with the first and second independent hardware failures. Coverage models are presented as a function of detection, isolation, and recovery probabilities. Upper and lower bounds are established for the coverage probabilities, and the method for computing values for the coverage probabilities is investigated. Further, an architectural variation is proposed which is shown to enhance coverage.
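The role of coverage can be illustrated with a textbook duplex-redundancy model (not the specific avionic architecture of the paper): with module reliability R and coverage probability c (the probability that a first failure is detected, isolated, and recovered from), the system survives if both modules work or if the single failure is covered.

```python
import numpy as np

def duplex_reliability(r_module, coverage):
    """Reliability of a two-module redundant system with imperfect coverage c."""
    r = np.asarray(r_module, dtype=float)
    return r**2 + 2.0 * coverage * r * (1.0 - r)

# Sensitivity of system reliability to coverage (hypothetical module reliability 0.99)
for c in (0.90, 0.99, 0.999, 1.0):
    print(c, duplex_reliability(0.99, c))
```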
Vieira, Maria Aparecida; Ohara, Conceição Vieira da Silva; de Domenico, Edvane Birelo Lopes
2016-01-01
Abstract Objective: to construct an instrument for the assessment of graduates of undergraduate nursing courses and to validate this instrument through the consensus of specialists. Method: methodological study. In order to elaborate the instrument, documental analysis and a literature review were undertaken. Validation took place through use of the Delphi Conference, between September 2012 and September 2013, in which 36 specialists from Brazilian Nursing participated. In order to analyze reliability, the Cronbach alpha coefficient, the item/total correlation, and the Pearson correlation coefficient were calculated. Results: the instrument was constructed with the participation of specialist nurses representing all regions of Brazil, with experience in lecturing and research. The first Delphi round led to changes in the first instrument, which was restructured and submitted to another round, with a response rate of 94.44%. In the second round, the instrument was validated with a Cronbach alpha of 0.75. Conclusion: the final instrument possessed three dimensions related to the characterization of the graduate, insertion in the job market, and evaluation of the professional training process. This instrument may be used across the territory of Brazil as it is based on the curricular guidelines and contributes to the process of regulation of the quality of the undergraduate courses in Nursing. PMID:27305184
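For reference, Cronbach's alpha mentioned above follows the standard formula; the small, self-contained sketch below uses invented item scores, not the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = instrument items."""
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()       # sum of item variances
    total_var = X.sum(axis=1).var(ddof=1)        # variance of total scores
    return (k / (k - 1.0)) * (1.0 - item_var / total_var)

rng = np.random.default_rng(0)
trait = rng.normal(size=(36, 1))                 # shared trait, 36 respondents
items = trait + 0.8 * rng.normal(size=(36, 10))  # 10 correlated items
print(round(cronbach_alpha(items), 2))
```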
Decentralized Quasi-Newton Methods
NASA Astrophysics Data System (ADS)
Eisen, Mark; Mokhtari, Aryan; Ribeiro, Alejandro
2017-05-01
We introduce the decentralized Broyden-Fletcher-Goldfarb-Shanno (D-BFGS) method as a variation of the BFGS quasi-Newton method for solving decentralized optimization problems. The D-BFGS method is of interest in problems that are not well conditioned, making first order decentralized methods ineffective, and in which second order information is not readily available, making second order decentralized methods impossible. D-BFGS is a fully distributed algorithm in which nodes approximate curvature information of themselves and their neighbors through the satisfaction of a secant condition. We additionally provide a formulation of the algorithm in asynchronous settings. Convergence of D-BFGS is established formally in both the synchronous and asynchronous settings and strong performance advantages relative to first order methods are shown numerically.
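To make the secant condition concrete, here is a plain (centralized) BFGS curvature update; the decentralized D-BFGS algorithm of the paper builds neighbor-aggregated analogues of these quantities, which this sketch does not attempt to reproduce.

```python
import numpy as np

def bfgs_update(B, s, y):
    """Update Hessian approximation B so that B_new @ s = y (secant condition)."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

# Hypothetical quadratic f(x) = 0.5 x^T A x: one step of curvature learning
A = np.array([[3.0, 0.5], [0.5, 1.0]])
x0, x1 = np.array([1.0, 1.0]), np.array([0.6, 0.9])
s, y = x1 - x0, A @ x1 - A @ x0                 # step and gradient difference
B = bfgs_update(np.eye(2), s, y)
print(np.allclose(B @ s, y))                    # secant condition holds: True
```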
78 FR 55249 - Transmission Relay Loadability Reliability Standard; Notice of Compliance Filing
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-10
...; RM11-16-000] Transmission Relay Loadability Reliability Standard; Notice of Compliance Filing Take.... \\1\\ Transmission Relay Loadability Reliability Standard, Order No. 733, 130 FERC ] 61, 221 (2010..., Order No. 733-B, 136 FERC ] 61,185 (2011). \\2\\ Transmission Relay Loadability Reliability Standard, 138...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skees, J. Daniel
2010-06-15
The Energy Policy Act of 2005 established mandatory reliability standard enforcement under a system in which the Federal Energy Regulatory Commission and the Electric Reliability Organization would have their own spheres of responsibility and authority. Recent orders, however, reflect the Commission's frustration with the reliability standard drafting process and suggest that the Electric Reliability Organization's discretion is likely to receive less deference in the future. (author)
A Novel Ontology Approach to Support Design for Reliability considering Environmental Effects
Sun, Bo; Li, Yu; Ye, Tianyuan
2015-01-01
Environmental effects are not considered sufficiently in product design. Reliability problems caused by environmental effects are very prominent. This paper proposes a method to apply an ontology approach in product design, so that reuse of environmental effects knowledge is achieved during product reliability design and analysis. First, the relationship between environmental effects and product reliability is analyzed. Then an environmental effects ontology is designed to describe environmental effects domain knowledge, and related concepts of environmental effects are formally defined using the ontology approach. This model can be applied to organize environmental effects knowledge in different environments. Finally, rubber seals used in the subhumid acid rain environment are taken as an example to illustrate the application of the ontological model to reliability design and analysis. PMID:25821857
A novel ontology approach to support design for reliability considering environmental effects.
Sun, Bo; Li, Yu; Ye, Tianyuan; Ren, Yi
2015-01-01
Environmental effects are not considered sufficiently in product design. Reliability problems caused by environmental effects are very prominent. This paper proposes a method to apply an ontology approach in product design, so that reuse of environmental effects knowledge is achieved during product reliability design and analysis. First, the relationship between environmental effects and product reliability is analyzed. Then an environmental effects ontology is designed to describe environmental effects domain knowledge, and related concepts of environmental effects are formally defined using the ontology approach. This model can be applied to organize environmental effects knowledge in different environments. Finally, rubber seals used in the subhumid acid rain environment are taken as an example to illustrate the application of the ontological model to reliability design and analysis.
NASA Astrophysics Data System (ADS)
Alfano, M.; Bisagni, C.
2017-01-01
The objective of the running EU project DESICOS (New Robust DESign Guideline for Imperfection Sensitive COmposite Launcher Structures) is to formulate an improved shell design methodology in order to meet the demand of the aerospace industry for lighter structures. Within the project, this article discusses the development of a probability-based methodology developed at Politecnico di Milano. It is based on the combination of the Stress-Strength Interference Method and the Latin Hypercube Method with the aim of predicting the buckling response of three sandwich composite cylindrical shells, assuming a loading condition of pure compression. The three shells are made of the same material, but have different stacking sequences and geometric dimensions. One of them presents three circular cut-outs. Different types of input imperfections, treated as random variables, are taken into account independently and in combination: variability in longitudinal Young's modulus, ply misalignment, geometric imperfections, and boundary imperfections. The methodology enables a first assessment of the structural reliability of the shells through the calculation of a probabilistic buckling factor for a specified level of probability. The factor depends highly on the reliability level, on the number of adopted samples, and on the assumptions made in modeling the input imperfections. The main advantage of the developed procedure is its versatility, as it can be applied to the buckling analysis of laminated composite shells and sandwich composite shells including different types of imperfections.
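A minimal sketch of the sampling side of such a methodology, assuming a hypothetical closed-form response in place of the finite element shell model: Latin Hypercube samples of the imperfection variables are propagated and a probabilistic buckling factor is read off as a low quantile of the response for the chosen reliability level.

```python
import numpy as np
from scipy.stats import qmc, norm

n = 1000
# Hypothetical random inputs: longitudinal modulus E, ply misalignment, geometric imperfection
sampler = qmc.LatinHypercube(d=3, seed=0)
u = sampler.random(n)                                     # uniform samples in [0, 1]^3
E     = norm(loc=150e9, scale=5e9).ppf(u[:, 0])           # Pa
theta = norm(loc=0.0,   scale=1.0).ppf(u[:, 1])           # degrees
imp   = norm(loc=0.1,   scale=0.03).ppf(u[:, 2])          # mm

# Stand-in response surface for the buckling load (NOT a shell model)
P = 1.0e6 * (E / 150e9) * (1.0 - 0.02 * np.abs(theta)) * (1.0 - 0.5 * imp)

nominal = 1.0e6
buckling_factor = np.quantile(P, 0.01) / nominal          # factor at 99% reliability
print(round(buckling_factor, 3))
```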
Toward a unifying taxonomy and definition for meditation
Nash, Jonathan D.; Newberg, Andrew
2013-01-01
One of the well-documented concerns confronting scholarly discourse about meditation is the plethora of semantic constructs and the lack of a unified definition and taxonomy. In recent years there have been several notable attempts to formulate new lexicons in order to define and categorize meditation methods. While these constructs have been useful and have encountered varying degrees of acceptance, they have also been subject to misinterpretation and debate, leaving the field devoid of a consensual paradigm. This paper attempts to influence this ongoing discussion by proposing two new models which hold the potential for enhanced scientific reliability and acceptance. Regarding the quest for a universally acceptable taxonomy, we suggest a paradigm shift away from the norm of fabricating new terminology from a first-person perspective. As an alternative, we propose a new taxonomic system based on the historically well-established and commonly accepted third-person paradigm of Affect and Cognition, borrowed, in part, from the psychological and cognitive sciences. With regard to the elusive definitional problem, we propose a model of meditation which clearly distinguishes “method” from “state” and is conceptualized as a dynamic process which is inclusive of six related but distinct stages. The overall goal is to provide researchers with a reliable nomenclature with which to categorize and classify diverse meditation methods, and a conceptual framework which can provide direction for their research and a theoretical basis for their findings. PMID:24312060
Laser notching ceramics for reliable fracture toughness testing
Barth, Holly D.; Elmer, John W.; Freeman, Dennis C.; ...
2015-09-19
A new method for notching ceramics was developed using a picosecond laser for fracture toughness testing of alumina samples. The test geometry incorporated a single-edge-V-notch that was notched using picosecond laser micromachining. This method has been used in the past for cutting ceramics, and is known to remove material with little to no thermal effect on the surrounding material matrix. This study showed that laser-assisted machining for fracture toughness testing of ceramics was reliable, quick, and cost effective. In order to assess the laser notched single-edge-V-notch beam method, fracture toughness results were compared to results from other more traditional methods, specifically surface-crack in flexure and the chevron notch bend tests. Lastly, the results showed that picosecond laser notching produced precise notches in post-failure measurements, and that the measured fracture toughness results showed improved consistency compared to traditional fracture toughness methods.
Psychometric Properties of the Children's Automatic Thoughts Scale (CATS) in Chinese Adolescents.
Sun, Ling; Rapee, Ronald M; Tao, Xuan; Yan, Yulei; Wang, Shanshan; Xu, Wei; Wang, Jianping
2015-08-01
The Children's Automatic Thoughts Scale (CATS) is a 40-item self-report questionnaire designed to measure children's negative thoughts. This study examined the psychometric properties of the Chinese translation of the CATS. Participants included 1,993 students (average age = 14.73) from three schools in Mainland China. A subsample of the participants was retested after 4 weeks. Confirmatory factor analysis replicated the original structure with four first-order factors loading on a single higher-order factor. The convergent and divergent validity of the CATS were good. The CATS demonstrated high internal consistency and test-retest reliability. Boys scored higher on the CATS-hostility subscale, but there were no other gender differences. Older adolescents (15-18 years) reported higher scores than younger adolescents (12-14 years) on the total score and on the physical threat, social threat, and hostility subscales. The CATS proved to be a reliable and valid measure of automatic thoughts in Chinese adolescents.
Thermal inactivation kinetics of hepatitis A virus in homogenized clam meat (Mercenaria mercenaria).
Bozkurt, H; D'Souza, D H; Davidson, P M
2015-09-01
Epidemiological evidence suggests that hepatitis A virus (HAV) is the most common pathogen transmitted by bivalve molluscs such as clams, cockles, mussels and oysters. This study aimed to generate thermal inactivation kinetics for HAV as a first step to design adequate thermal processes to control clam-associated HAV outbreaks. Survivor curves and thermal death curves were generated for different treatment times (0-6 min) at different temperatures (50-72°C), and the Weibull and first-order models were compared. D-values for HAV ranged from 47·37 ± 1·23 to 1·55 ± 0·12 min for the first-order model and from 64·43 ± 3·47 to 1·25 ± 0·45 min for the Weibull model at temperatures from 50 to 72°C. z-Values for HAV in clams were 12·97 ± 0·59°C and 14·83 ± 0·28°C using the Weibull and first-order models respectively. The calculated activation energies for the first-order and Weibull models were 145 and 170 kJ mol⁻¹ respectively. The Weibull model described the thermal inactivation behaviour of HAV better than the first-order model. This study provides novel and precise information on thermal inactivation kinetics of HAV in homogenized clams. This will enable reliable thermal process calculations for HAV inactivation in clams and closely related seafood. © 2015 The Society for Applied Microbiology.
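The two competing models quoted above have the standard forms below (log10 survival ratio as a function of treatment time t at temperature T); the z-value follows from the slope of log10 D against temperature. The Weibull scale parameter δ plays the role of a D-like value.

```latex
% First-order (log-linear) model with decimal reduction time D(T)
\log_{10}\!\frac{N(t)}{N_0} = -\,\frac{t}{D(T)},
\qquad
% Weibull model with scale \delta(T) and shape p
\log_{10}\!\frac{N(t)}{N_0} = -\left(\frac{t}{\delta(T)}\right)^{p},
\qquad
% z-value: temperature rise giving a tenfold drop in D
\log_{10} D(T) = \log_{10} D(T_{\mathrm{ref}}) - \frac{T - T_{\mathrm{ref}}}{z}.
```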
Reliability of a science admission test (HAM-Nat) at Hamburg medical school
Hissbach, Johanna; Klusmann, Dietrich; Hampe, Wolfgang
2011-01-01
Objective: The University Hospital in Hamburg (UKE) started to develop a test of knowledge in natural sciences for admission to medical school in 2005 (Hamburger Auswahlverfahren für Medizinische Studiengänge, Naturwissenschaftsteil, HAM-Nat). This study is a step towards establishing the HAM-Nat. We are investigating parallel-forms reliability, the effect of a crash course in chemistry on test results, and correlations of HAM-Nat test results with a test of scientific reasoning (similar to a subtest of the "Test for Medical Studies", TMS). Methods: 316 first-year students participated in the study in 2007. They completed different versions of the HAM-Nat test which consisted of items that had already been used (HN2006) and new items (HN2007). Four weeks later half of the participants were tested on the HN2007 version of the HAM-Nat again, while the other half completed the test of scientific reasoning. Within this four-week interval students were offered a five-day chemistry course. Results: Parallel-forms reliability for four different test versions ranged from rtt=.53 to rtt=.67. The retest reliabilities of the HN2007 halves were rtt=.54 and rtt=.61. Correlations of the two HAM-Nat versions with the test of scientific reasoning were r=.34 and r=.21. The crash course in chemistry had no effect on HAM-Nat scores. Conclusions: The results suggest that further versions of the test of natural sciences will not easily conform to the standards of internal consistency, parallel-forms reliability and retest reliability. Much care has to be taken in order to assemble items which could be used interchangeably for the construction of new test versions. The test of scientific reasoning and the HAM-Nat are tapping different constructs. Participation in a chemistry course did not improve students' achievement, probably because the content of the course was not coordinated with the test and many students lacked motivation to do well in the second test. PMID:21866246
Numerical simulation of the cavitation characteristics of a mixed-flow pump
NASA Astrophysics Data System (ADS)
Chen, T.; Li, S. R.; Li, W. Z.; Liu, Y. L.; Wu, D. Z.; Wang, L. Q.
2013-12-01
As a kind of general equipment for fluid transportation, pumps are widely used in industry, including many applications involving high-pressure, high-temperature and toxic fluid transport. The performance of pumps affects the safety and reliability of the whole special equipment system. Cavitation in pumps causes loss of performance and erosion of the blades, which can affect the running stability and reliability of the pump system. In this paper, a numerical method for cavitation performance prediction is presented. In order to investigate the accuracy of the method, CFD flow analysis and cavitation performance predictions of a mixed-flow pump were carried out, and the numerical results were compared with the test results.
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1994-01-01
The straightforward automatic-differentiation and the hand-differentiated incremental iterative methods are interwoven to produce a hybrid scheme that captures some of the strengths of each strategy. With this compromise, discrete aerodynamic sensitivity derivatives are calculated with the efficient incremental iterative solution algorithm of the original flow code. Moreover, the principal advantage of automatic differentiation is retained (i.e., all complicated source code for the derivative calculations is constructed quickly with accuracy). The basic equations for second-order sensitivity derivatives are presented; four methods are compared. Each scheme requires that large systems are solved first for the first-order derivatives and, in all but one method, for the first-order adjoint variables. Of these latter three schemes, two require no solutions of large systems thereafter. For the other two for which additional systems are solved, the equations and solution procedures are analogous to those for the first order derivatives. From a practical viewpoint, implementation of the second-order methods is feasible only with software tools such as automatic differentiation, because of the extreme complexity and large number of terms. First- and second-order sensitivities are calculated accurately for two airfoil problems, including a turbulent flow example; both geometric-shape and flow-condition design variables are considered. Several methods are tested; results are compared on the basis of accuracy, computational time, and computer memory. For first-order derivatives, the hybrid incremental iterative scheme obtained with automatic differentiation is competitive with the best hand-differentiated method; for six independent variables, it is at least two to four times faster than central finite differences and requires only 60 percent more memory than the original code; the performance is expected to improve further in the future.
Comparison of in vivo 3D cone-beam computed tomography tooth volume measurement protocols.
Forst, Darren; Nijjar, Simrit; Flores-Mir, Carlos; Carey, Jason; Secanell, Marc; Lagravere, Manuel
2014-12-23
The objective of this study is to analyze a set of previously developed and proposed image segmentation protocols for precision, in terms of both intra- and inter-rater reliability, for in vivo tooth volume measurements using cone-beam computed tomography (CBCT) images. Six 3D volume segmentation procedures were proposed and tested for intra- and inter-rater reliability to quantify maxillary first molar volumes. Ten randomly selected maxillary first molars were measured in vivo, in random order, three times with 10 days separation between measurements. Intra- and inter-rater agreement for all segmentation procedures was assessed using the intra-class correlation coefficient (ICC). The highest precision was for automated thresholding with manual refinements. A tooth volume measurement protocol for CBCT images employing automated segmentation with manual human refinement on a 2D slice-by-slice basis in all three planes of space possessed excellent intra- and inter-rater reliability. Three-dimensional volume measurements of the entire tooth structure are more precise than 3D volume measurements of only the dental roots apical to the cemento-enamel junction (CEJ).
The synchronisation of fractional-order hyperchaos compound system
NASA Astrophysics Data System (ADS)
Noghredani, Naeimadeen; Riahi, Aminreza; Pariz, Naser; Karimpour, Ali
2018-02-01
This paper presents a new compound synchronisation scheme among four hyperchaotic memristor systems with incommensurate fractional-order derivatives. First, a new controller was designed based on an adaptive technique to minimise the errors and guarantee compound synchronisation of the four fractional-order memristor chaotic systems. Given the suitability of compound synchronisation as a reliable solution for secure communication, we then examined the application of the proposed adaptive compound synchronisation scheme in the presence of noise for secure communication. In addition, the unpredictability and complexity of the drive systems enhance the security of the communication. The corresponding theoretical analysis and simulation results validated the effectiveness of the proposed synchronisation scheme using MATLAB.
Vahabi, Zahra; Amirfattahi, Rasoul; Shayegh, Farzaneh; Ghassemi, Fahimeh
2015-09-01
Considerable efforts have been made to predict seizures. Among these, the methods that quantify synchronization between brain areas are the most important. However, to date, a practically acceptable result has not been reported. In this paper, we use a synchronization measurement method that is derived from the ability of the bi-spectrum to determine the nonlinear properties of a system. In this method, first, the temporal variation of the bi-spectrum of different channels of electrocorticography (ECoG) signals is obtained via an extended wavelet-based time-frequency analysis method; then, to compare different channels, the bi-phase correlation measure is introduced. Since, in this way, the temporal variation of the amount of nonlinear coupling between brain regions, which had not been considered before, is taken into account, the results are more reliable than conventional phase-synchronization measures. It is shown that, for 21 patients of the FSPEEG database, bi-phase correlation can discriminate the pre-ictal and ictal states with very low false positive rates (FPRs) (average: 0.078/h) and high sensitivity (100%). However, the proposed seizure predictor still cannot significantly outperform a random predictor for all patients.
NASA Astrophysics Data System (ADS)
Sun, Ruochen; Yuan, Huiling; Liu, Xiaoli
2017-11-01
The heteroscedasticity treatment in residual error models directly impacts the model calibration and prediction uncertainty estimation. This study compares three methods to deal with the heteroscedasticity, including the explicit linear modeling (LM) method and nonlinear modeling (NL) method using hyperbolic tangent function, as well as the implicit Box-Cox transformation (BC). Then a combined approach (CA) combining the advantages of both LM and BC methods has been proposed. In conjunction with the first order autoregressive model and the skew exponential power (SEP) distribution, four residual error models are generated, namely LM-SEP, NL-SEP, BC-SEP and CA-SEP, and their corresponding likelihood functions are applied to the Variable Infiltration Capacity (VIC) hydrologic model over the Huaihe River basin, China. Results show that the LM-SEP yields the poorest streamflow predictions with the widest uncertainty band and unrealistic negative flows. The NL and BC methods can better deal with the heteroscedasticity and hence their corresponding predictive performances are improved, yet the negative flows cannot be avoided. The CA-SEP produces the most accurate predictions with the highest reliability and effectively avoids the negative flows, because the CA approach is capable of addressing the complicated heteroscedasticity over the study basin.
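In generic form (the paper's exact parameterizations may differ), the explicit approaches model the residual standard deviation as a function of the simulated flow, while the Box-Cox approach transforms the flows so that the residuals become approximately homoscedastic:

```latex
% Explicit linear model of the residual standard deviation
\sigma_t = \sigma_0 + \sigma_1\,\hat{y}_t,
\qquad
% Implicit treatment via the Box-Cox transformation of observed and simulated flows
z(y) =
\begin{cases}
\dfrac{(y+c)^{\lambda}-1}{\lambda}, & \lambda \neq 0,\\[6pt]
\ln(y+c), & \lambda = 0,
\end{cases}
\qquad
\varepsilon_t = z(y_t) - z(\hat{y}_t).
```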
Darbani, Behrooz; Stewart, C Neal; Noeparvar, Shahin; Borg, Søren
2014-10-20
This report investigates for the first time the potential inter-treatment bias source of cell number for gene expression studies. Cell-number bias can affect gene expression analysis when comparing samples with unequal total cellular RNA content or with different RNA extraction efficiencies. For maximal reliability of analysis, therefore, comparisons should be performed at the cellular level. This could be accomplished using an appropriate correction method that can detect and remove the inter-treatment bias for cell-number. Based on inter-treatment variations of reference genes, we introduce an analytical approach to examine the suitability of correction methods by considering the inter-treatment bias as well as the inter-replicate variance, which allows use of the best correction method with minimum residual bias. Analyses of RNA sequencing and microarray data showed that the efficiencies of correction methods are influenced by the inter-treatment bias as well as the inter-replicate variance. Therefore, we recommend inspecting both of the bias sources in order to apply the most efficient correction method. As an alternative correction strategy, sequential application of different correction approaches is also advised. Copyright © 2014 Elsevier B.V. All rights reserved.
A reference estimator based on composite sensor pattern noise for source device identification
NASA Astrophysics Data System (ADS)
Li, Ruizhe; Li, Chang-Tsun; Guan, Yu
2014-02-01
It has been proved that Sensor Pattern Noise (SPN) can serve as an imaging device fingerprint for source camera identification. Reference SPN estimation is a very important procedure within the framework of this application. Most previous works built reference SPN by averaging the SPNs extracted from 50 images of blue sky. However, this method can be problematic. Firstly, in practice we may face the problem of source camera identification in the absence of the imaging cameras and reference SPNs, which means only natural images with scene details are available for reference SPN estimation rather than blue sky images. It is challenging because the reference SPN can be severely contaminated by image content. Secondly, the number of available reference images sometimes is too few for existing methods to estimate a reliable reference SPN. In fact, existing methods lack consideration of the number of available reference images as they were designed for the datasets with abundant images to estimate the reference SPN. In order to deal with the aforementioned problem, in this work, a novel reference estimator is proposed. Experimental results show that our proposed method achieves better performance than the methods based on the averaged reference SPN, especially when few reference images used.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pope, G.A.; Sepehrnoori, K.
1994-08-01
This research consists of the parallel development of a new chemical flooding simulator and the application of the existing UTCHEM simulation code to model surfactant flooding. The new code is based upon a completely new numerical method that combines for the first time higher order finite difference methods, flux limiters, and implicit algorithms. Early results indicate that this approach has significant advantages in some problems and will likely enable simulation of much larger and more realistic chemical floods once it is fully developed. Additional improvements have also been made to the UTCHEM code and it has been applied for the first time to the study of stochastic reservoirs with and without horizontal wells to evaluate methods to reduce the cost and risk of surfactant flooding. During the first year of this contract, significant progress has been made on both of these tasks. The authors have found that there are indeed significant differences between the performance predictions based upon the traditional layered reservoir description and the more realistic and flexible descriptions using geostatistics. These preliminary studies of surfactant flooding using horizontal wells show that although they have significant potential to greatly reduce project life and thus improve the economics of the process, their use requires accurate reservoir descriptions and simulations to be effective. Much more needs to be done to fully understand and optimize their use and develop reliable design criteria.
Structural reliability methods: Code development status
NASA Astrophysics Data System (ADS)
Millwater, Harry R.; Thacker, Ben H.; Wu, Y.-T.; Cruse, T. A.
1991-05-01
The Probabilistic Structures Analysis Method (PSAM) program integrates state of the art probabilistic algorithms with structural analysis methods in order to quantify the behavior of Space Shuttle Main Engine structures subject to uncertain loadings, boundary conditions, material parameters, and geometric conditions. An advanced, efficient probabilistic structural analysis software program, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) was developed as a deliverable. NESSUS contains a number of integrated software components to perform probabilistic analysis of complex structures. A nonlinear finite element module NESSUS/FEM is used to model the structure and obtain structural sensitivities. Some of the capabilities of NESSUS/FEM are shown. A Fast Probability Integration module NESSUS/FPI estimates the probability given the structural sensitivities. A driver module, PFEM, couples the FEM and FPI. NESSUS, version 5.0, addresses component reliability, resistance, and risk.
Structural reliability methods: Code development status
NASA Technical Reports Server (NTRS)
Millwater, Harry R.; Thacker, Ben H.; Wu, Y.-T.; Cruse, T. A.
1991-01-01
The Probabilistic Structures Analysis Method (PSAM) program integrates state of the art probabilistic algorithms with structural analysis methods in order to quantify the behavior of Space Shuttle Main Engine structures subject to uncertain loadings, boundary conditions, material parameters, and geometric conditions. An advanced, efficient probabilistic structural analysis software program, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) was developed as a deliverable. NESSUS contains a number of integrated software components to perform probabilistic analysis of complex structures. A nonlinear finite element module NESSUS/FEM is used to model the structure and obtain structural sensitivities. Some of the capabilities of NESSUS/FEM are shown. A Fast Probability Integration module NESSUS/FPI estimates the probability given the structural sensitivities. A driver module, PFEM, couples the FEM and FPI. NESSUS, version 5.0, addresses component reliability, resistance, and risk.
Human Reliability Assessments: Using the Past (Shuttle) to Predict the Future (ORION)
NASA Technical Reports Server (NTRS)
Mott, Diana L.; Bigler, Mark A.
2017-01-01
NASA uses two HRA assessment methodologies. The first is a simplified method which is based on how much time is available to complete the action, with consideration included for environmental and personal factors that could influence the human's reliability. This method is expected to provide a conservative value or placeholder as a preliminary estimate. This preliminary estimate is used to determine which placeholder needs a more detailed assessment. The second methodology is used to develop a more detailed human reliability assessment on the performance of critical human actions. This assessment needs to consider more than the time available; it includes factors such as the importance of the action, the context, environmental factors, potential human stresses, previous experience, training, physical design interfaces, available procedures/checklists and internal human stresses. The more detailed assessment is still expected to be more realistic than that based primarily on time available. When performing an HRA on a system or process that has an operational history, we have information specific to the task based on this history and experience. In the case of a PRA model that is based on a new design and has no operational history, providing a "reasonable" assessment of potential crew actions becomes more problematic. In order to determine what is expected of future operational parameters, input from individuals who had relevant experience and were familiar with the systems and processes previously implemented by NASA was used to provide the "best" available data. Personnel from Flight Operations, Flight Directors, Launch Test Directors, Control Room Console Operators and Astronauts were all interviewed to provide a comprehensive picture of previous NASA operations. Verification of the assumptions and expectations expressed in the assessments will be needed when the procedures, flight rules and operational requirements are developed and then finalized.
Navier-Stokes computation of compressible turbulent flows with a second order closure, part 1
NASA Technical Reports Server (NTRS)
Haminh, Hieu; Kollmann, Wolfgang; Vandromme, Dany
1990-01-01
A second order closure turbulence model for compressible flows is developed and implemented in a 2D Reynolds-averaged Navier-Stokes solver. From the beginning, where a kappa-epsilon turbulence model was implemented in the bidiagonal implicit method of MacCormack (referred to as the MAC3 code), to the final stage of implementing a full second order closure in the efficient line Gauss-Seidel algorithm, a great deal of work was done, individually and collectively. Besides the collaboration itself, the final product of this work is a second order closure derived from the Launder, Reece, and Rodi model to account for near wall effects, which has been called the FRAME model (for FRench-AMerican-Effort). During the reporting period, two different problems were worked out. The first was to provide Ames researchers with a reliable compressible boundary layer code including a wide collection of turbulence models for quick testing of new terms, both in two-equation and in second order closure form (LRR and FRAME). The second topic was to complete the implementation of the FRAME model in the MAC5 code. The work related to these two different contributions is reported, together with the modeling of dilatation in the presence of strong shocks; this last effort, conducted at the Center for Turbulence Research with Zeman, also aimed to cross-check earlier assumptions by Rubesin and Vandromme.
Long-Term Reliability of SiGe/Si HBTs From Accelerated Lifetime Testing
NASA Technical Reports Server (NTRS)
Bhattacharya, Pallab
2001-01-01
Accelerated lifetime tests were performed on double-mesa structure Si(0.7)Ge(0.3)/Si npn heterojunction bipolar transistors, grown by molecular beam epitaxy, in the temperature range of 175 C-275 C. The transistors (with 5x20 sq micron emitter area) have DC current gains of approximately 40-50 and f_T and f_max of up to 22 GHz and 25 GHz, respectively. It is found that a gradual degradation in these devices is caused by the recombination enhanced impurity diffusion (REID) of boron atoms from the p-type base region and the associated formation of parasitic energy barriers to electron transport from the emitter to collector layers. This REID has been quantitatively modeled and explained, to the first order of approximation, and the agreement with the measured data is good. The mean time to failure (MTTF) of these devices at room temperature under 1.35 x 10^4 A/sq cm current density operation is estimated from the extrapolation of the Arrhenius plots of device lifetime versus reciprocal temperature. The results of the reliability tests offer valuable feedback for SiGe heterostructure design in order to improve the long-term reliability of the devices and circuits made with them. Hot electron induced degradation of the base-emitter junction was also observed during the accelerated lifetime testing. In order to improve the HBT reliability endangered by the hot electrons, deuterium sintering techniques have been proposed. The preliminary results from this study show that a deuterium-sintered HBT is, indeed, more resistant to hot-electron induced base-emitter junction degradation.
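The room-temperature MTTF extrapolation described above follows the usual Arrhenius treatment; a minimal sketch with invented stress-test lifetimes (not the measured SiGe HBT data):

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

# Hypothetical accelerated-test results: junction temperature (K) vs. median lifetime (h)
T = np.array([448.0, 498.0, 548.0])           # roughly 175, 225, 275 degC
mttf = np.array([5.0e4, 6.0e3, 1.0e3])

# Arrhenius model: MTTF = A * exp(Ea / (kB * T))  ->  ln(MTTF) is linear in 1/T
slope, intercept = np.polyfit(1.0 / T, np.log(mttf), 1)
Ea = slope * K_B                               # activation energy (eV)
mttf_room = np.exp(intercept + slope / 298.0)  # extrapolated MTTF at 25 degC (h)
print(round(Ea, 2), f"{mttf_room:.2e}")
```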
Whitfield, Richard H; Newcombe, Robert G; Woollard, Malcolm
2003-12-01
The introduction of the European Resuscitation Guidelines (2000) for cardiopulmonary resuscitation (CPR) and automated external defibrillation (AED) prompted the development of an up-to-date and reliable method of assessing the quality of performance of CPR in combination with the use of an AED. The Cardiff Test of basic life support (BLS) and AED version 3.1 was developed to meet this need and uses standardised checklists to retrospectively evaluate performance from analyses of video recordings and data drawn from a laptop computer attached to a training manikin. This paper reports the inter- and intra-observer reliability of this test. Data used to assess reliability were obtained from an investigation of CPR and AED skill acquisition in a lay responder AED training programme. Six observers were recruited to evaluate performance in 33 data sets, repeating their evaluation after a minimum interval of 3 weeks. More than 70% of the 42 variables considered in this study had a kappa score of 0.70 or above for inter-observer reliability or were drawn from computer data and therefore not subject to evaluator variability. 85% of the 42 variables had kappa scores for intra-observer reliability of 0.70 or above or were drawn from computer data. The standard deviations for inter- and intra-observer measures of time to first shock were 11.6 and 7.7 s, respectively. The inter- and intra-observer reliability for the majority of the variables in the Cardiff Test of BLS and AED version 3.1 is satisfactory. However, reliability is less acceptable with respect to shaking when checking for responsiveness, initial check/clearing of the airway, checks for signs of circulation, time to first shock and performance of interventions in the correct sequence. Further research is required to determine if modifications to the method of assessing these variables can increase reliability.
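The kappa scores quoted above can be reproduced with standard tooling; a small sketch using hypothetical pass/fail checklist ratings from two observers (not the study's video data):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical checklist item scored by two observers over 33 performances
rng = np.random.default_rng(3)
obs_a = rng.integers(0, 2, size=33)
obs_b = obs_a.copy()
obs_b[rng.choice(33, size=4, replace=False)] ^= 1   # observers disagree on 4 cases

print(round(cohen_kappa_score(obs_a, obs_b), 2))    # inter-observer agreement (kappa)
```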
NASA Astrophysics Data System (ADS)
Jeanmairet, Guillaume; Sharma, Sandeep; Alavi, Ali
2017-01-01
In this article we report a stochastic evaluation of the recently proposed multireference linearized coupled cluster theory [S. Sharma and A. Alavi, J. Chem. Phys. 143, 102815 (2015)]. In this method, both the zeroth-order and first-order wavefunctions are sampled stochastically by propagating simultaneously two populations of signed walkers. The sampling of the zeroth-order wavefunction follows a set of stochastic processes identical to the one used in the full configuration interaction quantum Monte Carlo (FCIQMC) method. To sample the first-order wavefunction, the usual FCIQMC algorithm is augmented with a source term that spawns walkers in the sampled first-order wavefunction from the zeroth-order wavefunction. The second-order energy is also computed stochastically but requires no additional overhead outside of the added cost of sampling the first-order wavefunction. This fully stochastic method opens up the possibility of simultaneously treating large active spaces to account for static correlation and recovering the dynamical correlation using perturbation theory. The method is used to study a few benchmark systems including the carbon dimer and aromatic molecules. We have computed the singlet-triplet gaps of benzene and m-xylylene. For m-xylylene, which has proved difficult for standard complete active space self consistent field theory with perturbative correction, we find the singlet-triplet gap to be in good agreement with the experimental values.
A Human Reliability Based Usability Evaluation Method for Safety-Critical Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phillippe Palanque; Regina Bernhaupt; Ronald Boring
2006-04-01
Recent years have seen an increasing use of sophisticated interaction techniques, including in the field of safety-critical interactive software [8]. The use of such techniques has been required in order to increase the bandwidth between the users and systems and thus to help them deal efficiently with increasingly complex systems. These techniques come from research and innovation done in the field of human-computer interaction (HCI). A significant effort is currently being undertaken by the HCI community in order to apply and extend current usability evaluation techniques to these new kinds of interaction techniques. However, very little has been done to improve the reliability of software offering these kinds of interaction techniques. Even testing basic graphical user interfaces remains a challenge that has rarely been addressed in the field of software engineering [9]. However, the non-reliability of interactive software can jeopardize usability evaluation by showing unexpected or undesired behaviors. The aim of this SIG is to provide a forum for both researchers and practitioners interested in testing interactive software. Our goal is to define a roadmap of activities to cross-fertilize usability and reliability testing of these kinds of systems to minimize duplicate efforts in both communities.
Methods for slow axis beam quality improvement of high power broad area diode lasers
NASA Astrophysics Data System (ADS)
An, Haiyan; Xiong, Yihan; Jiang, Ching-Long J.; Schmidt, Berthold; Treusch, Georg
2014-03-01
For high brightness direct diode laser systems, it is of fundamental importance to improve the slow axis beam quality of the incorporated laser diodes regardless of which beam combining technology is applied. To further advance our products in terms of increased brightness at a high power level, we must optimize the slow axis beam quality despite the far field blooming at high current levels. The latter is caused predominantly by the built-in index step in combination with the thermal lens effect. Most of the methods for beam quality improvement reported in publications sacrifice the device efficiency and reliable output power. In order to improve the beam quality as well as maintain the efficiency and reliable output power, we investigated methods of influencing local heat generation to reduce the thermal gradient across the slow axis direction, optimizing the built-in index step, and discriminating against high order modes. Based on our findings, we have combined different methods in our new device design. Subsequently, the beam parameter product (BPP) of a 10% fill factor bar has improved by approximately 30% at 7 W/emitter without efficiency penalty. This technology has enabled fiber coupled high brightness multi-kilowatt direct diode laser systems. In this paper, we will elaborate on the methods used as well as the results achieved.
Mosmuller, David; Tan, Robin; Mulder, Frans; Bachour, Yara; de Vet, Henrica; Don Griot, Peter
2016-10-01
It is essential to have a reliable assessment method in order to compare the results of cleft lip and palate surgery. In this study the computer-based program SymNose, a method for quantitative assessment of the nose and lip, is assessed for usability and reliability. The symmetry of the nose and lip was measured twice in 50 six-year-old complete and incomplete unilateral cleft lip and palate patients by four observers. For the frontal view the asymmetry level of the nose and upper lip was evaluated, and for the basal view the asymmetry level of the nose and nostrils was evaluated. The mean inter-observer reliability when tracing each image once or twice was 0.70 and 0.75, respectively. Tracing the photographs with 2 observers and 4 observers gave a mean inter-observer score of 0.86 and 0.92, respectively. The mean intra-observer reliability varied between 0.80 and 0.84. SymNose is a practical and reliable tool for the retrospective assessment of large caseloads of 2D photographs of cleft patients for research purposes. Moderate to high single inter-observer reliability was found. For future research with SymNose, reliable outcomes can be achieved by using the average outcomes of single tracings by two observers. Copyright © 2016 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Sankararaman, Shankar; Goebel, Kai
2013-01-01
This paper investigates the use of the inverse first-order reliability method (inverse-FORM) to quantify the uncertainty in the remaining useful life (RUL) of aerospace components. The prediction of remaining useful life is an integral part of system health prognosis, and directly helps in online health monitoring and decision-making. However, the prediction of remaining useful life is affected by several sources of uncertainty, and therefore it is necessary to quantify the uncertainty in the remaining useful life prediction. While system parameter uncertainty and physical variability can be easily included in inverse-FORM, this paper extends the methodology to include: (1) future loading uncertainty; (2) process noise; and (3) uncertainty in the state estimate. The inverse-FORM method has been used in this paper to (1) quickly obtain probability bounds on the remaining useful life prediction; and (2) calculate the entire probability distribution of remaining useful life prediction, and the results are verified against Monte Carlo sampling. The proposed methodology is illustrated using a numerical example.
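A minimal sketch of the inverse-FORM idea of reading RUL percentiles off a sphere of radius |beta| in standard normal space is given below; the two-variable degradation model, its parameters, and the brute-force search over directions are illustrative assumptions, not the paper's aerospace application.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical degradation model mapped from standard normal inputs u:
#   RUL = (failure threshold - current damage) / degradation rate.
def rul(u):
    rate = np.exp(-3.0 + 0.3 * u[0])       # lognormal degradation rate
    damage = 0.40 + 0.05 * u[1]            # uncertain state estimate
    return (1.0 - damage) / rate

def rul_quantile(target_prob, n_dirs=3600):
    """First-order (inverse-FORM style) estimate of r with P(RUL <= r) = target_prob."""
    beta = norm.ppf(target_prob)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_dirs, endpoint=False)
    vals = np.array([rul(abs(beta) * np.array([np.cos(t), np.sin(t)]))
                     for t in thetas])
    # The quantile estimate is the extreme RUL value on the sphere of radius |beta|.
    return vals.min() if target_prob < 0.5 else vals.max()

for p in (0.05, 0.50, 0.95):
    print("quantile %.2f -> RUL estimate %.1f" % (p, rul_quantile(p)))
```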
Reliable Channel-Adapted Error Correction: Bacon-Shor Code Recovery from Amplitude Damping
NASA Astrophysics Data System (ADS)
Piedrafita, Álvaro; Renes, Joseph M.
2017-12-01
We construct two simple error correction schemes adapted to amplitude damping noise for Bacon-Shor codes and investigate their prospects for fault-tolerant implementation. Both consist solely of Clifford gates and require far fewer qubits, relative to the standard method, to achieve exact correction to a desired order in the damping rate. The first, employing one-bit teleportation and single-qubit measurements, needs only one-fourth as many physical qubits, while the second, using just stabilizer measurements and Pauli corrections, needs only half. The improvements stem from the fact that damping events need only be detected, not corrected, and that effective phase errors arising due to undamped qubits occur at a lower rate than damping errors. For error correction that is itself subject to damping noise, we show that existing fault-tolerance methods can be employed for the latter scheme, while the former can be made to avoid potential catastrophic errors and can easily cope with damping faults in ancilla qubits.
An Automated Technique for Estimating Daily Precipitation over the State of Virginia
NASA Technical Reports Server (NTRS)
Follansbee, W. A.; Chamberlain, L. W., III
1981-01-01
Digital IR and visible imagery obtained from a geostationary satellite located over the equator at 75 deg west longitude were provided by NASA and used to obtain a linear relationship between cloud top temperature and hourly precipitation. Two computer programs written in FORTRAN were used. The first program computes the satellite estimate field from the hourly digital IR imagery. The second program computes the final estimate for the entire state area by comparing five preliminary estimates of 24 hour precipitation with control raingage readings and determining which of the five methods gives the best estimate for the day. The final estimate is then produced by incorporating control gage readings into the winning method. In presenting reliable precipitation estimates for every cell in Virginia in near real time on a daily ongoing basis, the techniques require on the order of 125 to 150 daily gage readings by dependable, highly motivated observers distributed as uniformly as feasible across the state.
Identification of complex stiffness tensor from waveform reconstruction
NASA Astrophysics Data System (ADS)
Leymarie, N.; Aristégui, C.; Audoin, B.; Baste, S.
2002-03-01
An inverse method is proposed in order to determine the viscoelastic properties of composite-material plates from the plane-wave transmitted acoustic field. Analytical formulations of both the plate transmission coefficient and its first and second derivatives are established, and included in a two-step inversion scheme. Two objective functions to be minimized are then designed by considering the well-known maximum-likelihood principle and by using an analytic signal formulation. Through these innovative objective functions, the robustness of the inversion process against high level of noise in waveforms is improved and the method can be applied to a very thin specimen. The suitability of the inversion process for viscoelastic property identification is demonstrated using simulated data for composite materials with different anisotropy and damping degrees. A study of the effect of the rheologic model choice on the elastic property identification emphasizes the relevance of using a phenomenological description considering viscosity. Experimental characterizations show then the good reliability of the proposed approach. Difficulties arise experimentally for particular anisotropic media.
Progress toward automatic classification of human brown adipose tissue using biomedical imaging
NASA Astrophysics Data System (ADS)
Gifford, Aliya; Towse, Theodore F.; Walker, Ronald C.; Avison, Malcom J.; Welch, E. B.
2015-03-01
Brown adipose tissue (BAT) is a small but significant tissue, which may play an important role in obesity and the pathogenesis of metabolic syndrome. Interest in studying BAT in adult humans is increasing, but in order to quantify BAT volume in a single measurement or to detect changes in BAT over the time course of a longitudinal experiment, BAT needs to first be reliably differentiated from surrounding tissue. Although the uptake of the radiotracer 18F-Fluorodeoxyglucose (18F-FDG) in adipose tissue on positron emission tomography (PET) scans following cold exposure is accepted as an indication of BAT, it is not a definitive indicator, and to date there exists no standardized method for segmenting BAT. Consequently, there is a strong need for robust automatic classification of BAT based on properties measured with biomedical imaging. In this study we begin the process of developing an automated segmentation method based on properties obtained from fat-water MRI and PET-CT scans acquired on ten healthy adult subjects.
Development of MCAERO wing design panel method with interactive graphics module
NASA Technical Reports Server (NTRS)
Hawk, J. D.; Bristow, D. R.
1984-01-01
A reliable and efficient iterative method has been developed for designing wing section contours corresponding to a prescribed subcritical pressure distribution. The design process is initialized by using MCAERO (MCAIR 3-D Subsonic Potential Flow Analysis Code) to analyze a baseline configuration. A second program DMCAERO is then used to calculate a matrix containing the partial derivative of potential at each control point with respect to each unknown geometry parameter by applying a first-order expansion to the baseline equations in MCAERO. This matrix is calculated only once but is used in each iteration cycle to calculate the geometry perturbation and to analyze the perturbed geometry. The potential on the new geometry is calculated by linear extrapolation from the baseline solution. This extrapolated potential is converted to velocity by numerical differentiation, and velocity is converted to pressure by using Bernoulli's equation. There is an interactive graphics option which allows the user to graphically display the results of the design process and to interactively change either the geometry or the prescribed pressure distribution.
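The following Python sketch mimics the design loop described above on a toy problem: a one-time potential sensitivity matrix is computed about a baseline, the potential of each perturbed geometry is linearly extrapolated, converted to velocity by numerical differentiation and to pressure through Bernoulli's equation, and the geometry is updated from the pressure residual. The toy "analysis" function and all numbers are stand-ins for the MCAERO/DMCAERO codes, which are not reproduced here.

```python
import numpy as np

# Toy stand-in for the panel-code analysis: geometry parameters -> potential
# at the control points (a made-up smooth function, not the real code).
N_PTS, V_INF = 41, 1.0
X = np.linspace(0.0, 1.0, N_PTS)
DX = X[1] - X[0]

def analyze_potential(geom):
    thickness = np.interp(X, np.linspace(0.0, 1.0, geom.size), geom)
    return V_INF * X + 0.5 * thickness * np.sin(np.pi * X)

def pressure(phi):
    v = np.gradient(phi, DX)               # velocity by numerical differentiation
    return 1.0 - (v / V_INF) ** 2          # Bernoulli: Cp = 1 - (V/Vinf)^2

def potential_sensitivity(geom0, eps=1e-6):
    """One-time partial derivatives of potential w.r.t. geometry (the DMCAERO role)."""
    phi0 = analyze_potential(geom0)
    J = np.empty((N_PTS, geom0.size))
    for j in range(geom0.size):
        g = geom0.copy()
        g[j] += eps
        J[:, j] = (analyze_potential(g) - phi0) / eps
    return phi0, J

geom0 = np.full(6, 0.05)                                              # baseline geometry
phi0, J = potential_sensitivity(geom0)
cp_target = pressure(analyze_potential(np.linspace(0.02, 0.10, 6)))   # prescribed Cp

geom = geom0.copy()
for _ in range(20):
    phi = phi0 + J @ (geom - geom0)        # potential extrapolated from the baseline
    v = np.gradient(phi, DX)
    residual = cp_target - pressure(phi)
    # Chain rule through Bernoulli: dCp/dgeom = -2 (v/Vinf^2) * d(dphi/dx)/dgeom
    dcp = -2.0 * (v / V_INF**2)[:, None] * np.column_stack(
        [np.gradient(J[:, j], DX) for j in range(geom.size)])
    geom += np.linalg.lstsq(dcp, residual, rcond=None)[0]

print("recovered geometry parameters:", np.round(geom, 4))
```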
Cassagne, E; Caillaud, P D; Besancenot, J P; Thibaudon, M
2007-10-01
Pollen of Poaceae is among the most allergenic pollen in Europe with pollen of birch. It is therefore useful to elaborate models to help pollen allergy sufferers. The objective of this study was to construct forecast models that could predict the first day characterized by a certain level of allergic risk called here the Starting Date of the Allergic Risk (SDAR). Models result from four forecast methods (three summing and one multiple regression analysis) used in the literature. They were applied on Nancy and Strasbourg from 1988 to 2005 and were tested on 2006. Mean Absolute Error and Actual forecast ability test are the parameters used to choose best models, assess and compare their accuracy. It was found, on the whole, that all the models presented a good forecast accuracy which was equivalent. They were all reliable and were used in order to forecast the SDAR in 2006 with contrasting results in forecasting precision.
Coupled Neutron Transport for HZETRN
NASA Technical Reports Server (NTRS)
Slaba, Tony C.; Blattnig, Steve R.
2009-01-01
Exposure estimates inside space vehicles, surface habitats, and high altitude aircraft exposed to space radiation are highly influenced by secondary neutron production. The deterministic transport code HZETRN has been identified as a reliable and efficient tool for such studies, but improvements to the underlying transport models and numerical methods are still necessary. In this paper, the forward-backward (FB) and directionally coupled forward-backward (DC) neutron transport models are derived, numerical methods for the FB model are reviewed, and a computationally efficient numerical solution is presented for the DC model. Both models are compared to the Monte Carlo codes HETC-HEDS, FLUKA, and MCNPX, and the DC model is shown to agree closely with the Monte Carlo results. Finally, it is found in the development of either model that the decoupling of low energy neutrons from the light particle transport procedure adversely affects low energy light ion fluence spectra and exposure quantities. A first order correction is presented to resolve the problem, and it is shown to be both accurate and efficient.
Pasias, Ioannis N; Kiriakou, Ioannis K; Proestos, Charalampos
2017-08-15
A fully validated approach for the determination of diastase activity and hydroxymethylfurfural content in honeys was presented in accordance with the official methods. The methods were applied to real honey sample analysis and, due to the vast number of collected data sets, reliable conclusions could be drawn about the correlation between the composition and the quality criteria. The limits of detection and quantification were calculated. Accuracy, precision and uncertainty were estimated for the first time in the kinetic and spectrometric techniques using the certified reference material, and the determined values were in good accordance with the certified values. PCA and cluster analysis were performed in order to examine the correlation among the artificial feeding of honeybees with carbohydrate supplements and the chemical composition and properties of the honey. Diastase activity, sucrose content and hydroxymethylfurfural content were easily differentiated, and these parameters were used as indicators of the adulteration of the honey. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Okabe, Shigemitsu; Tsuboi, Toshihiro; Takami, Jun
The power-frequency withstand voltage tests on electric power equipment are regulated in JEC, with the lifetime reliability evaluated using a Weibull distribution function. The evaluation method is still controversial in terms of how a plural number of faults is considered, and some alternative methods have been proposed on this subject. The present paper first discusses the physical meanings of the various kinds of evaluation methods and secondly examines their effects on the power-frequency withstand voltage tests. Further, an appropriate method is investigated for an oil-filled transformer and a gas insulated switchgear, taking notice of the dielectric breakdown or partial discharge mechanisms under various insulating material and structure conditions; the tentative conclusion is that the conventional method would be most pertinent under the present conditions.
NASA Astrophysics Data System (ADS)
Lee, Chang-Chun; Huang, Pei-Chen
2018-05-01
The long-term reliability of multi-stacked coatings subjected to bending or rolling loads is a severe challenge to extending the lifespan of such structures. In addition, the adhesive strength of dissimilar materials is regarded as the major mechanical reliability concern among multi-stacked films. However, the significant scale mismatch, from several nanometers to micrometers, among the multi-stacked coatings causes numerical accuracy and convergence issues in fracture-based simulation approaches. For these reasons, this study proposed FEA-based multi-level submodeling and the multi-point constraint (MPC) technique to overcome the foregoing scale-mismatch issue. The results indicated that a decent region of first- and second-order submodeling can achieve a small error of 1.27% compared with the experimental result while significantly reducing the mesh density and computing time. Moreover, the MPC method adopted in the FEA simulation also showed only a 0.54% error when the boundary of the selected local region was away from the critical region of concern, following the Saint-Venant principle. In this investigation, two FEA-based approaches were used to overcome the evident scale-mismatch issue when the adhesive strengths of micro- and nano-scale multi-stacked coatings were taken into account.
The Turkish Adaptation of the Friendship Qualities Scale: A Validity and Reliability Study
ERIC Educational Resources Information Center
Erkan Atik, Zeynep; Cok, Figen; Esen Coban, Aysel; Dogan, Turkan; Guney Karaman, Neslihan
2014-01-01
In this study, the authors have aimed to adapt the Friendship Qualities Scale (FQS) in order to determine friendship relation levels among adolescents. A total of 603 high school students from Ankara Turkey, were selected using convenient sampling to participate in this study. During the course of this study, the FQS was first translated into…
Parsing Heuristic and Forward Search in First-Graders' Game-Play Behavior
ERIC Educational Resources Information Center
Paz, Luciano; Goldin, Andrea P.; Diuk, Carlos; Sigman, Mariano
2015-01-01
Seventy-three children between 6 and 7 years of age were presented with a problem having ambiguous subgoal ordering. Performance in this task showed reliable fingerprints: (a) a non-monotonic dependence of performance as a function of the distance between the beginning and the end-states of the problem, (b) very high levels of performance when the…
2018-01-01
On-chip LiDAR sensors for vehicle collision avoidance are a rapidly expanding area of research and development. The assessment of reliable obstacle detection using data collected by LiDAR sensors has become a key issue that the scientific community is actively exploring. The design of a self-tuning methodology and its implementation are presented in this paper, to maximize the reliability of the LiDAR sensor network for obstacle detection in ‘Internet of Things’ (IoT) mobility scenarios. The Webots Automobile 3D simulation tool for emulating sensor interaction in complex driving environments is selected in order to achieve that objective. Furthermore, a model-based framework is defined that employs a point-cloud clustering technique, and an error-based prediction model library that is composed of a multilayer perceptron neural network, and k-nearest neighbors and linear regression models. Finally, a reinforcement learning technique, specifically a Q-learning method, is implemented to determine the number of LiDAR sensors that are required to increase sensor reliability for obstacle localization tasks. In addition, an IoT driving assistance user scenario connecting a five-LiDAR-sensor network is designed and implemented to validate the accuracy of the computational intelligence-based framework. The results demonstrated that the self-tuning method is an appropriate strategy to increase the reliability of the sensor network while minimizing detection thresholds. PMID:29748521
Castaño, Fernando; Beruvides, Gerardo; Villalonga, Alberto; Haber, Rodolfo E
2018-05-10
On-chip LiDAR sensors for vehicle collision avoidance are a rapidly expanding area of research and development. The assessment of reliable obstacle detection using data collected by LiDAR sensors has become a key issue that the scientific community is actively exploring. The design of a self-tuning methodology and its implementation are presented in this paper, to maximize the reliability of the LiDAR sensor network for obstacle detection in 'Internet of Things' (IoT) mobility scenarios. The Webots Automobile 3D simulation tool for emulating sensor interaction in complex driving environments is selected in order to achieve that objective. Furthermore, a model-based framework is defined that employs a point-cloud clustering technique, and an error-based prediction model library that is composed of a multilayer perceptron neural network, and k-nearest neighbors and linear regression models. Finally, a reinforcement learning technique, specifically a Q-learning method, is implemented to determine the number of LiDAR sensors that are required to increase sensor reliability for obstacle localization tasks. In addition, an IoT driving assistance user scenario connecting a five-LiDAR-sensor network is designed and implemented to validate the accuracy of the computational intelligence-based framework. The results demonstrated that the self-tuning method is an appropriate strategy to increase the reliability of the sensor network while minimizing detection thresholds.
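A minimal tabular Q-learning sketch of the sensor-count selection idea follows: the actions are candidate numbers of LiDAR sensors and the reward trades detection reliability against sensor cost. The reliability model and all numbers are hypothetical stand-ins, not data from the Webots simulations.

```python
import random

ACTIONS = [1, 2, 3, 4, 5]        # candidate numbers of LiDAR sensors
ALPHA, EPS = 0.1, 0.2            # learning rate and exploration rate
                                 # single-step task, so no discount factor is needed

def simulated_reward(n_sensors):
    detection_prob = 1.0 - 0.45 ** n_sensors          # assumed per-sensor miss rate
    detected = random.random() < detection_prob
    return (1.0 if detected else 0.0) - 0.05 * n_sensors   # reward minus sensor cost

q = {a: 0.0 for a in ACTIONS}
for episode in range(5000):
    a = random.choice(ACTIONS) if random.random() < EPS else max(q, key=q.get)
    r = simulated_reward(a)
    q[a] += ALPHA * (r - q[a])    # one-step Q-learning update

print("learned Q-values:", {a: round(v, 3) for a, v in q.items()})
print("selected sensor count:", max(q, key=q.get))
```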
Yong, A.; Hough, S.E.; Cox, B.R.; Rathje, E.M.; Bachhuber, J.; Dulberg, R.; Hulslander, D.; Christiansen, L.; Abrams, M.J.
2011-01-01
We report on a preliminary study to evaluate the use of semi-automated imaging analysis of remotely sensed DEM data and field geophysical measurements to develop a seismic-zonation map of Port-au-Prince, Haiti. For in situ data, Vs30 values are derived from the MASW technique deployed in and around the city. For satellite imagery, we use an ASTER GDEM of Hispaniola. We apply both pixel- and object-based imaging methods to the ASTER GDEM to explore local topography (absolute elevation values) and classify terrain types such as mountains, alluvial fans and basins/near-shore regions. We assign NEHRP seismic site class ranges based on available Vs30 values. A comparison of results from imagery-based methods to results from traditional geologic-based approaches reveals good overall correspondence. We conclude that image analysis of RS data provides reliable first-order site characterization results in the absence of local data and can be useful to refine detailed site maps with sparse local data. © 2011 American Society for Photogrammetry and Remote Sensing.
Neutron Transport Models and Methods for HZETRN and Coupling to Low Energy Light Ion Transport
NASA Technical Reports Server (NTRS)
Blattnig, S.R.; Slaba, T.C.; Heinbockel, J.H.
2008-01-01
Exposure estimates inside space vehicles, surface habitats, and high altitude aircraft exposed to space radiation are highly influenced by secondary neutron production. The deterministic transport code HZETRN has been identified as a reliable and efficient tool for such studies, but improvements to the underlying transport models and numerical methods are still necessary. In this paper, the forward-backward (FB) and directionally coupled forward-backward (DC) neutron transport models are derived, numerical methods for the FB model are reviewed, and a computationally efficient numerical solution is presented for the DC model. Both models are compared to the Monte Carlo codes HETC-HEDS and FLUKA, and the DC model is shown to agree closely with the Monte Carlo results. Finally, it is found in the development of either model that the decoupling of low energy neutrons from the light ion (A<4) transport procedure adversely affects low energy light ion fluence spectra and exposure quantities. A first order correction is presented to resolve the problem, and it is shown to be both accurate and efficient.
Vemić, Ana; Rakić, Tijana; Malenović, Anđelija; Medenica, Mirjana
2015-01-01
The aim of this paper is to present a development of liquid chromatographic method when chaotropic salts are used as mobile phase additives following the QbD principles. The effect of critical process parameters (column chemistry, salt nature and concentration, acetonitrile content and column temperature) on the critical quality attributes (retention of the first and last eluting peak and separation of the critical peak pairs) was studied applying the design of experiments-design space methodology (DoE-DS). D-optimal design is chosen in order to simultaneously examine both categorical and numerical factors in minimal number of experiments. Two ways for the achievement of quality assurance were performed and compared. Namely, the uncertainty originating from the models was assessed by Monte Carlo simulations propagating the error equal to the variance of the model residuals and propagating the error originating from the model coefficients' calculation. The baseline separation of pramipexole and its five impurities is achieved fulfilling all the required criteria while the method validation proved its reliability. Copyright © 2014 Elsevier B.V. All rights reserved.
[A novel quantitative approach to study dynamic anaerobic process at micro scale].
Zhang, Zhong-Liang; Wu, Jing; Jiang, Jian-Kai; Jiang, Jie; Li, Huai-Zhi
2012-11-01
Anaerobic digestion is attracting more and more interest because of its advantages, such as low cost and the recovery of clean energy. In order to overcome the drawbacks of the existing methods for studying the dynamic anaerobic process, a novel microscopic quantitative approach at the granule level was developed, combining both microdevice and quantitative image analysis techniques. This experiment displayed the process and characteristics of gas production at static state for the first time, and the results indicated that the method was of satisfactory repeatability. The gas production process at static state could be divided into three stages: a rapid linear increasing stage, a decelerated increasing stage and a slow linear increasing stage. The rapid linear increasing stage was long and the biogas rate was high under a high initial organic loading rate. The results showed that it was feasible to carry out the anaerobic process in the microdevice; furthermore, this novel method was reliable and could clearly display the dynamic process of the anaerobic reaction at the micro scale. The results are helpful for understanding the anaerobic process.
Characterize kinematic rupture history of large earthquakes with Multiple Haskell sources
NASA Astrophysics Data System (ADS)
Jia, Z.; Zhan, Z.
2017-12-01
Earthquakes are often regarded as continuous rupture along a single fault, but the occurrence of complex large events involving multiple faults and dynamic triggering challenges this view. Such rupture complexities cause difficulties in existing finite fault inversion algorithms, because they rely on specific parameterizations and regularizations to obtain physically meaningful solutions. Furthermore, it is difficult to assess reliability and uncertainty of obtained rupture models. Here we develop a Multi-Haskell Source (MHS) method to estimate rupture process of large earthquakes as a series of sub-events of varying location, timing and directivity. Each sub-event is characterized by a Haskell rupture model with uniform dislocation and constant unilateral rupture velocity. This flexible yet simple source parameterization allows us to constrain first-order rupture complexity of large earthquakes robustly. Additionally, relatively few parameters in the inverse problem yields improved uncertainty analysis based on Markov chain Monte Carlo sampling in a Bayesian framework. Synthetic tests and application of MHS method on real earthquakes show that our method can capture major features of large earthquake rupture process, and provide information for more detailed rupture history analysis.
Finger vein recognition based on personalized weight maps.
Yang, Gongping; Xiao, Rongyang; Yin, Yilong; Yang, Lu
2013-09-10
Finger vein recognition is a promising biometric recognition technology, which verifies identities via the vein patterns in the fingers. Binary pattern based methods have been thoroughly studied in order to cope with the difficulties of extracting the blood vessel network. However, current binary pattern based finger vein matching methods treat every bit of the feature codes derived from different images of various individuals as equally important and assign the same weight value to them. In this paper, we propose a finger vein recognition method based on personalized weight maps (PWMs). The different bits have different weight values according to their stabilities in a certain number of training samples from an individual. Firstly, we present the concept of PWM, and then propose the finger vein recognition framework, which mainly consists of preprocessing, feature extraction, and matching. Finally, we design extensive experiments to evaluate the effectiveness of our proposal. Experimental results show that PWM achieves not only better performance, but also high robustness and reliability. In addition, PWM can be used as a general framework for binary pattern based recognition.
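A minimal sketch of the personalized-weight-map idea is given below: each bit's weight reflects its stability across an individual's training codes, and matching uses a weighted Hamming distance. The random binary codes are placeholders for real vein-pattern features, and the weighting scheme is a simplified reading of the approach, not the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
train_codes = rng.integers(0, 2, size=(8, 256))            # 8 enrolment samples, 256 bits

majority = (train_codes.mean(axis=0) >= 0.5).astype(int)   # template code
stability = np.abs(train_codes.mean(axis=0) - 0.5) * 2.0   # 0 = unstable bit, 1 = stable bit
pwm = stability / stability.sum()                          # personalized weight map

def weighted_hamming(code, template, weights):
    return np.sum(weights * (code != template))

probe = majority.copy()
probe[rng.choice(256, size=30, replace=False)] ^= 1         # noisy probe of the same finger
print("weighted distance (genuine):", round(weighted_hamming(probe, majority, pwm), 3))
print("weighted distance (impostor):",
      round(weighted_hamming(rng.integers(0, 2, 256), majority, pwm), 3))
```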
Finger Vein Recognition Based on Personalized Weight Maps
Yang, Gongping; Xiao, Rongyang; Yin, Yilong; Yang, Lu
2013-01-01
Finger vein recognition is a promising biometric recognition technology, which verifies identities via the vein patterns in the fingers. Binary pattern based methods have been thoroughly studied in order to cope with the difficulties of extracting the blood vessel network. However, current binary pattern based finger vein matching methods treat every bit of the feature codes derived from different images of various individuals as equally important and assign the same weight value to them. In this paper, we propose a finger vein recognition method based on personalized weight maps (PWMs). The different bits have different weight values according to their stabilities in a certain number of training samples from an individual. Firstly, we present the concept of PWM, and then propose the finger vein recognition framework, which mainly consists of preprocessing, feature extraction, and matching. Finally, we design extensive experiments to evaluate the effectiveness of our proposal. Experimental results show that PWM achieves not only better performance, but also high robustness and reliability. In addition, PWM can be used as a general framework for binary pattern based recognition. PMID:24025556
The reliability of the Australasian Triage Scale: a meta-analysis
Ebrahimi, Mohsen; Heydari, Abbas; Mazlom, Reza; Mirhaghi, Amir
2015-01-01
BACKGROUND: Although the Australasian Triage Scale (ATS) was developed two decades ago, its reliability has not been defined; therefore, we present a meta-analysis of the reliability of the ATS in order to reveal to what extent the ATS is reliable. DATA SOURCES: Electronic databases were searched to March 2014. The included studies were those that reported sample size, reliability coefficients, and an adequate description of the ATS reliability assessment. The guidelines for reporting reliability and agreement studies (GRRAS) were used. Two reviewers independently examined abstracts and extracted data. The effect size was obtained by the z-transformation of reliability coefficients. Data were pooled with random-effects models, and meta-regression was done based on the method of moments estimator. RESULTS: Six studies were included in this study at last. The pooled coefficient for the ATS was a substantial 0.428 (95%CI 0.340–0.509). The rate of mis-triage was less than fifty percent. The agreement on the adult version is higher than on the pediatric version. CONCLUSION: The ATS has shown an acceptable level of overall reliability in the emergency department, but it needs more development to reach an almost perfect agreement. PMID:26056538
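The pooling step described above can be sketched as follows: Fisher z-transform the reliability coefficients, combine them with DerSimonian-Laird random-effects weights, and back-transform the pooled value. The coefficients and sample sizes in the sketch are illustrative placeholders, not the six included studies.

```python
import numpy as np

r = np.array([0.35, 0.48, 0.41, 0.52, 0.30, 0.45])   # study reliability coefficients
n = np.array([120, 85, 200, 60, 150, 95])            # study sample sizes

z = np.arctanh(r)                      # Fisher z-transformation
v = 1.0 / (n - 3)                      # within-study variance of z
w = 1.0 / v
z_fixed = np.sum(w * z) / np.sum(w)
Q = np.sum(w * (z - z_fixed) ** 2)     # heterogeneity statistic
tau2 = max(0.0, (Q - (len(r) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_star = 1.0 / (v + tau2)              # DerSimonian-Laird random-effects weights
z_pooled = np.sum(w_star * z) / np.sum(w_star)
se = np.sqrt(1.0 / np.sum(w_star))

print("pooled coefficient: %.3f (95%% CI %.3f-%.3f)"
      % (np.tanh(z_pooled), np.tanh(z_pooled - 1.96 * se), np.tanh(z_pooled + 1.96 * se)))
```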
Leading for the long haul: a mixed-method evaluation of the Sustainment Leadership Scale (SLS).
Ehrhart, Mark G; Torres, Elisa M; Green, Amy E; Trott, Elise M; Willging, Cathleen E; Moullin, Joanna C; Aarons, Gregory A
2018-01-19
Despite our progress in understanding the organizational context for implementation and specifically the role of leadership in implementation, its role in sustainment has received little attention. This paper took a mixed-method approach to examine leadership during the sustainment phase of the Exploration, Preparation, Implementation, Sustainment (EPIS) framework. Utilizing the Implementation Leadership Scale as a foundation, we sought to develop a short, practical measure of sustainment leadership that can be used for both applied and research purposes. Data for this study were collected as a part of a larger mixed-method study of evidence-based intervention, SafeCare®, sustainment. Quantitative data were collected from 157 providers using web-based surveys. Confirmatory factor analysis was used to examine the factor structure of the Sustainment Leadership Scale (SLS). Qualitative data were collected from 95 providers who participated in one of 15 focus groups. A framework approach guided qualitative data analysis. Mixed-method integration was also utilized to examine convergence of quantitative and qualitative findings. Confirmatory factor analysis supported the a priori higher order factor structure of the SLS with subscales indicating a single higher order sustainment leadership factor. The SLS demonstrated excellent internal consistency reliability. Qualitative analyses offered support for the dimensions of sustainment leadership captured by the quantitative measure, in addition to uncovering a fifth possible factor, available leadership. This study found qualitative and quantitative support for the pragmatic SLS measure. The SLS can be used for assessing leadership of first-level leaders to understand how staff perceive leadership during sustainment and to suggest areas where leaders could direct more attention in order to increase the likelihood that EBIs are institutionalized into the normal functioning of the organization.
NASA Astrophysics Data System (ADS)
Bozeman, Richard J., Jr.
1992-02-01
The invention discloses methods and apparatus for detecting vibrations from machines which indicate an impending malfunction for the purpose of preventing additional damage and allowing for an orderly shutdown or a change in mode of operation. The method and apparatus is especially suited for reliable operation in providing thruster control data concerning unstable vibration in an electrical environment which is typically noisy and in which unrecognized ground loops may exist.
NASA Astrophysics Data System (ADS)
Bozeman, Richard J., Jr.
1994-05-01
The invention discloses methods and apparatus for detecting vibrations from machines which indicate an impending malfunction for the purpose of preventing additional damage and allowing for an orderly shutdown or a change in mode of operation. The method and apparatus is especially suited for reliable operation in providing thruster control data concerning unstable vibration in an electrical environment which is typically noisy and in which unrecognized ground loops may exist.
Smart accelerometer. [vibration damage detection
NASA Technical Reports Server (NTRS)
Bozeman, Richard J., Jr. (Inventor)
1994-01-01
The invention discloses methods and apparatus for detecting vibrations from machines which indicate an impending malfunction for the purpose of preventing additional damage and allowing for an orderly shutdown or a change in mode of operation. The method and apparatus is especially suited for reliable operation in providing thruster control data concerning unstable vibration in an electrical environment which is typically noisy and in which unrecognized ground loops may exist.
Reliability of a Measure of Institutional Discrimination against Minorities
1979-12-01
… statistical measure of the … of institutional discrimination are discussed. Two methods of dealing with the problem of reliability of the measure in small samples are presented. The first is based upon classical statistical theory and the second derives from a series of computer-generated Monte Carlo …
Cell tracking for cell image analysis
NASA Astrophysics Data System (ADS)
Bise, Ryoma; Sato, Yoichi
2017-04-01
Cell image analysis is important for research and discovery in biology and medicine. In this paper, we present our cell tracking methods, which are capable of obtaining fine-grained cell behavior metrics. In order to address difficulties under dense culture conditions, where cell detection cannot be done reliably because cells often touch and have blurry intercellular boundaries, we propose two methods: global data association, and jointly solving cell detection and association. We also show the effectiveness of the proposed methods by applying them to biological research.
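One building block of such data association, frame-to-frame matching of detected cells by minimising total displacement with the Hungarian algorithm, is sketched below; the centroid coordinates are invented, and the global multi-frame association and joint detection-association described above are not reproduced.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Detected cell centroids in two consecutive frames (illustrative values).
frame_t = np.array([[10.0, 12.0], [40.0, 8.0], [25.0, 30.0]])
frame_t1 = np.array([[11.5, 13.0], [26.0, 31.5], [41.0, 9.0], [60.0, 5.0]])

# Pairwise displacement costs, then the optimal one-to-one assignment.
cost = np.linalg.norm(frame_t[:, None, :] - frame_t1[None, :, :], axis=2)
rows, cols = linear_sum_assignment(cost)
for i, j in zip(rows, cols):
    print("cell %d -> detection %d (distance %.2f)" % (i, j, cost[i, j]))
# Detections left unmatched in frame_t1 are candidate new cells or division events.
```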
Lee, Jinhyung; Berenson, Abbey B; Patel, Pooja R
2015-12-01
To determine demographical and behavioral characteristics associated with contraceptive use at coitarche, or first sexual experience, to determine which populations are at greatest risk of contraceptive nonuse during early sexual experiences. Cross-sectional study. We used the National Survey of Family Growth 2006-2010 database to abstract pertinent variables, including race, highest education, annual family income, parental living situation, importance of religion, age at coitarche, number of sexual partners, type of first contraception, and source of first contraception. Generalized linear models with logit link and binomial distribution were applied to examine the association between use of contraceptive methods at coitarche and the variables abstracted. Of the 5931 female participants included in the study, 1071 (18%) did not use contraceptive methods at coitarche. Only 199 (2%) of the female participants included in this study used the more reliable hormonal contraceptive methods at coitarche. Black females were significantly more likely than white females to use contraceptive methods at coitarche (p < 0.01). Females who initiated coitarche from 16 to 20 years of age were significantly more likely to use contraception at coitarche than females who had their first sexual experience at less than 16 years of age (p < 0.001). Females with greater educational background and greater family income were also significantly more likely to use contraception at coitarche (p < 0.001). Finally, females who obtained their first contraceptive methods from a spouse, partner, or friend were more likely to use contraception at coitarche than females who obtained their first method from a medical facility (p < 0.001). This study highlights several key differences between females who use contraceptive methods at coitarche versus those who do not. Greater effort needs to be focused on increasing access to more reliable contraceptive methods for young females, as females who obtain methods from nonmedical facilities are more likely to use contraceptive methods at coitarche.
Modelling utility-scale wind power plants. Part 1: Economics
NASA Astrophysics Data System (ADS)
Milligan, Michael R.
1999-10-01
As the worldwide use of wind turbine generators continues to increase in utility-scale applications, it will become increasingly important to assess the economic and reliability impact of these intermittent resources. Although the utility industry in the United States appears to be moving towards a restructured environment, basic economic and reliability issues will continue to be relevant to companies involved with electricity generation. This article is the first of two which address modelling approaches and results obtained in several case studies and research projects at the National Renewable Energy Laboratory (NREL). This first article addresses the basic economic issues associated with electricity production from several generators that include large-scale wind power plants. An important part of this discussion is the role of unit commitment and economic dispatch in production cost models. This paper includes overviews and comparisons of the prevalent production cost modelling methods, including several case studies applied to a variety of electric utilities. The second article discusses various methods of assessing capacity credit and results from several reliability-based studies performed at NREL.
Risky Group Decision-Making Method for Distribution Grid Planning
NASA Astrophysics Data System (ADS)
Li, Cunbin; Yuan, Jiahang; Qi, Zhiqiang
2015-12-01
With the rapid growth of electricity consumption and of renewable energy, more and more research is devoted to distribution grid planning. To address the drawbacks of existing research, this paper proposes a new risky group decision-making method for distribution grid planning. First, a mixed index system with qualitative and quantitative indices is built. To account for the fuzziness of linguistic evaluations, a cloud model is chosen to realize the "qualitative to quantitative" transformation, and interval-number decision matrices are constructed according to the "3En" principle. An m-dimensional interval-number decision vector is regarded as a super-cuboid in the m-dimensional attribute space, and a two-level orthogonal experiment is used to arrange points within it uniformly and dispersedly. The number of points is determined by the number of runs of the two-level orthogonal array, and these points form a distribution point set that represents the decision-making alternative. In order to eliminate the influence of correlation among indices, the Mahalanobis distance is used to calculate the distance from each solution to the others, so that the dynamic set of solutions serves as the reference. Second, because the decision-maker's attitude can affect the results, this paper defines a prospect value function based on the SNR from the Mahalanobis-Taguchi system and obtains the comprehensive prospect value of each alternative as well as their ranking. Finally, the validity and reliability of the method are illustrated by examples, which show that the method is more valuable than, and superior to, the alternatives.
Sicilia, Alvaro; González-Cutre, David
2011-05-01
The purpose of this study was to validate the Spanish version of the Exercise Dependence Scale-Revised (EDS-R). To achieve this goal, a sample of 531 sport center users was used and the psychometric properties of the EDS-R were examined through different analyses. The results supported both the first-order seven-factor model and the higher-order model (seven first-order factors and one second-order factor). The structure of both models was invariant across age. Correlations among the subscales indicated a related factor model, supporting construct validity of the scale. Alpha values over .70 (except for Reduction in Other Activities) and suitable levels of temporal stability were obtained. Users practicing more than three days per week had higher scores in all subscales than the group practicing with a frequency of three days or fewer. The findings of this study provided reliability and validity for the EDS-R in a Spanish context.
NASA Astrophysics Data System (ADS)
Agarwal, P.; El-Sayed, A. A.
2018-06-01
In this paper, a new numerical technique for solving the fractional-order diffusion equation is introduced. This technique basically depends on the non-standard finite difference (NSFD) method and the Chebyshev collocation method, where the fractional derivatives are described in the Caputo sense. The Chebyshev collocation method together with the NSFD method is used to convert the problem into a system of algebraic equations. These equations are solved numerically using Newton's iteration method. The applicability, reliability, and efficiency of the presented technique are demonstrated through some numerical examples.
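To make the problem setting concrete, the sketch below solves a time-fractional diffusion equation with a Caputo derivative using an implicit L1 finite-difference discretization; this is a related standard scheme given only for illustration, not the NSFD-Chebyshev collocation technique proposed in the paper.

```python
import numpy as np
from math import gamma

# D_t^alpha u = d * u_xx on (0,1), u(0,t) = u(1,t) = 0, u(x,0) = sin(pi x),
# with the Caputo derivative approximated by the L1 formula.
alpha, d = 0.8, 1.0
nx, nt, T = 41, 200, 0.5
dx, dt = 1.0 / (nx - 1), T / nt
x = np.linspace(0.0, 1.0, nx)

# Second-difference operator on interior nodes (Dirichlet boundaries).
A = (np.diag(-2.0 * np.ones(nx - 2)) + np.diag(np.ones(nx - 3), 1)
     + np.diag(np.ones(nx - 3), -1)) / dx**2

c = dt**(-alpha) / gamma(2.0 - alpha)             # L1 scaling factor
b = (np.arange(1, nt + 1)**(1.0 - alpha)
     - np.arange(0, nt)**(1.0 - alpha))           # L1 weights b_k

u = [np.sin(np.pi * x)[1:-1]]                     # history of interior solutions
M = c * np.eye(nx - 2) - d * A                    # implicit system matrix
for n in range(1, nt + 1):
    # History term: sum_{k=1}^{n-1} b_k (u^{n-k} - u^{n-k-1})
    hist = sum(b[k] * (u[n - k] - u[n - k - 1]) for k in range(1, n))
    rhs = c * (u[n - 1] - hist)
    u.append(np.linalg.solve(M, rhs))

print("solution at x = 0.5, t = %.2f: %.4f" % (T, u[-1][(nx - 2) // 2]))
```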
NASA Astrophysics Data System (ADS)
Zhang, Zisheng; Li, Yanhu; Li, Jiaojiao; Liu, Zhiqiang; Li, Qing
2013-03-01
In order to improve the reliability, stability and automation of electrostatic precipitators (ESP), the vibration motor circuits for the ESP and the ladder-diagram program for vibration control are investigated using a high-performance Schneider PLC and the Twidosoft programming software. Operational results show that, after adopting the PLC, the vibration motor can run automatically; compared with a traditional vibration control system based on a single-chip microcomputer, it has higher reliability, better stability and a higher dust removal rate, keeping dust emission concentrations <= 50 mg m-3, and thus provides a new method for vibration control of ESPs.
Reliability Estimation of Aero-engine Based on Mixed Weibull Distribution Model
NASA Astrophysics Data System (ADS)
Yuan, Zhongda; Deng, Junxiang; Wang, Dawei
2018-02-01
An aero-engine is a complex mechanical and electronic system, and in the reliability analysis of such systems the Weibull distribution model plays an irreplaceable role. To date, only the two-parameter and three-parameter Weibull distribution models are widely used. Because of the diversity of engine failure modes, a single Weibull distribution model carries a large error. By contrast, a mixed Weibull distribution model can take a variety of engine failure modes into account, so it is a good statistical analysis model. In addition to the concept of a dynamic weight coefficient, a three-parameter correlation coefficient optimization method is applied to enhance the Weibull distribution model so that the reliability estimation becomes more accurate, and the precision of the mixed distribution reliability model is thus greatly improved. All of this is advantageous for popularizing the mixed Weibull distribution model in engineering applications.
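A minimal sketch of fitting a two-component mixed Weibull model by maximum likelihood and evaluating the resulting reliability function is given below; the synthetic failure times, the two assumed failure modes, and the direct Nelder-Mead optimisation are illustrative choices, not the paper's aero-engine data or its correlation-coefficient optimization method.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(0)
# Hypothetical failure times mixing two modes (infant mortality and wear-out).
t = np.concatenate([weibull_min.rvs(0.8, scale=200.0, size=60, random_state=rng),
                    weibull_min.rvs(3.5, scale=1500.0, size=140, random_state=rng)])

def neg_log_lik(p):
    w = 1.0 / (1.0 + np.exp(-p[0]))                  # mixing weight in (0, 1)
    b1, e1, b2, e2 = np.exp(p[1:])                   # shapes and scales > 0
    pdf = (w * weibull_min.pdf(t, b1, scale=e1)
           + (1.0 - w) * weibull_min.pdf(t, b2, scale=e2))
    return -np.sum(np.log(pdf + 1e-300))

res = minimize(neg_log_lik,
               x0=[0.0, np.log(1.0), np.log(100.0), np.log(3.0), np.log(1000.0)],
               method="Nelder-Mead", options={"maxiter": 20000})
w = 1.0 / (1.0 + np.exp(-res.x[0]))
b1, e1, b2, e2 = np.exp(res.x[1:])

def reliability(time):
    # R(t) of the fitted two-component mixed Weibull model.
    return w * np.exp(-(time / e1) ** b1) + (1.0 - w) * np.exp(-(time / e2) ** b2)

print("estimated mixing weight %.2f; R(500 h) = %.3f" % (w, reliability(500.0)))
```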
Improving 1D Site Specific Velocity Profiles for the Kik-Net Network
NASA Astrophysics Data System (ADS)
Holt, James; Edwards, Benjamin; Pilz, Marco; Fäh, Donat; Rietbrock, Andreas
2017-04-01
Ground motion prediction equations (GMPEs) form the cornerstone of modern seismic hazard assessments. When produced to a high standard they provide reliable estimates of ground motion/spectral acceleration for a given site and earthquake scenario. This information is crucial for engineers to optimise design and for regulators who enforce legal minimum safe design capacities. Classically, GMPEs were built upon the assumption that variability around the median model could be treated as aleatory. As understanding improved, it was noted that the propagation could be segregated into the response of the average path from the source and the response of the site. This is because the heterogeneity of the near-surface lithology is significantly different from that of the bulk path. It was then suggested that the semi-ergodic approach could be taken if the site response could be determined, moving uncertainty away from aleatory to epistemic. The determination of reliable site-specific response models is therefore becoming increasingly critical for ground motion models used in engineering practice. Today it is common practice to include proxies for site response within the scope of a GMPE, such as Vs30 or site classification, in an effort to reduce the overall uncertainty of the prediction at a given site. However, these proxies are not always reliable enough to give confident ground motion estimates, due to the complexity of the near-surface. Other approaches to quantifying the response of the site include detailed numerical simulations (1/2/3D - linear, EQL, non-linear etc.). However, in order to be reliable, they require highly detailed and accurate velocity and, for non-linear analyses, material property models. It is possible to obtain this information through invasive methods, but this is expensive and not feasible for most projects. Here we propose an alternative method to derive reliable velocity profiles (and their uncertainty), calibrated using almost 20 years of recorded data from the Kik-Net network. First, using a reliable subset of sites, the empirical surface to borehole (S/B) ratio is calculated in the frequency domain using all events recorded at that site. In a subsequent step, we use numerical simulation to produce 1D SH transfer function curves using a suite of stochastic velocity models. Comparing the resulting amplification with the empirical S/B ratio, we find optimal 1D velocity models and their uncertainty. The method will be tested to determine the level of initial information required to obtain a reliable Vs profile (e.g., starting Vs model, only Vs30, site-class, H/V ratio etc.) and then applied and tested against data from other regions using site-to-reference or empirical spectral model amplification.
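The empirical surface-to-borehole ratio step can be sketched as follows: for each event, taper the two records, take their Fourier amplitude spectra, form the surface/borehole ratio, and log-average over events. The synthetic traces below merely stand in for KiK-net waveform pairs, and no windowing or smoothing choices of the actual study are implied.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, n = 0.01, 4096
freqs = np.fft.rfftfreq(n, dt)

def amp_spectrum(trace):
    return np.abs(np.fft.rfft(trace * np.hanning(n)))   # tapered amplitude spectrum

ratios = []
for _ in range(20):                                      # 20 recorded events
    borehole = rng.standard_normal(n)
    surface = 3.0 * borehole + rng.standard_normal(n)    # crude "amplified" surface record
    ratios.append(amp_spectrum(surface) / (amp_spectrum(borehole) + 1e-12))

sb_ratio = np.exp(np.mean(np.log(ratios), axis=0))       # log-average over events
band = (freqs > 0.5) & (freqs < 20.0)
print("mean S/B ratio in the 0.5-20 Hz band: %.2f" % sb_ratio[band].mean())
```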
Application of data fusion technology based on D-S evidence theory in fire detection
NASA Astrophysics Data System (ADS)
Cai, Zhishan; Chen, Musheng
2015-12-01
Judgment and identification based on single fire characteristic parameter information in fire detection is subject to environmental disturbances, and accordingly its detection performance is limited with the increase of false positive rate and false negative rate. The compound fire detector employs information fusion technology to judge and identify multiple fire characteristic parameters in order to improve the reliability and accuracy of fire detection. The D-S evidence theory is applied to the multi-sensor data-fusion: first normalize the data from all sensors to obtain the normalized basic probability function of the fire occurrence; then conduct the fusion processing using the D-S evidence theory; finally give the judgment results. The results show that the method meets the goal of accurate fire signal identification and increases the accuracy of fire alarm, and therefore is simple and effective.
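To illustrate the fusion step, the sketch below combines basic probability assignments from three hypothetical sensors (temperature, smoke, CO) over the frame {fire, no fire} with Dempster's rule; the mass values are invented for illustration, as would be produced by the normalisation step described above.

```python
# Dempster-Shafer fusion over the focal sets {'fire'}, {'nofire'} and theta (the full frame).
def dempster_combine(m1, m2):
    masses = {"fire": 0.0, "nofire": 0.0, "theta": 0.0}
    conflict = 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            if a == "theta":
                masses[b] += pa * pb          # theta intersected with B is B
            elif b == "theta" or a == b:
                masses[a] += pa * pb
            else:
                conflict += pa * pb           # {fire} and {nofire} are disjoint
    return {k: v / (1.0 - conflict) for k, v in masses.items()}

temperature = {"fire": 0.60, "nofire": 0.25, "theta": 0.15}
smoke = {"fire": 0.70, "nofire": 0.20, "theta": 0.10}
co = {"fire": 0.55, "nofire": 0.30, "theta": 0.15}

fused = dempster_combine(dempster_combine(temperature, smoke), co)
print(fused)   # fused belief in 'fire' is sharper than any single sensor's
```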
New methods for analyzing semantic graph based assessments in science education
NASA Astrophysics Data System (ADS)
Vikaros, Lance Steven
This research investigated how the scoring of semantic graphs (known by many as concept maps) could be improved and automated in order to address issues of inter-rater reliability and scalability. As part of the NSF funded SENSE-IT project to introduce secondary school science students to sensor networks (NSF Grant No. 0833440), semantic graphs illustrating how temperature change affects water ecology were collected from 221 students across 16 schools. The graphing task did not constrain students' use of terms, as is often done with semantic graph based assessment due to coding and scoring concerns. The graphing software used provided real-time feedback to help students learn how to construct graphs, stay on topic and effectively communicate ideas. The collected graphs were scored by human raters using assessment methods expected to boost reliability, which included adaptations of traditional holistic and propositional scoring methods, use of expert raters, topical rubrics, and criterion graphs. High levels of inter-rater reliability were achieved, demonstrating that vocabulary constraints may not be necessary after all. To investigate a new approach to automating the scoring of graphs, thirty-two different graph features characterizing graphs' structure, semantics, configuration and process of construction were then used to predict human raters' scoring of graphs in order to identify feature patterns correlated to raters' evaluations of graphs' topical accuracy and complexity. Results led to the development of a regression model able to predict raters' scoring with 77% accuracy, with 46% accuracy expected when used to score new sets of graphs, as estimated via cross-validation tests. Although such performance is comparable to other graph and essay based scoring systems, cross-context testing of the model and methods used to develop it would be needed before it could be recommended for widespread use. Still, the findings suggest techniques for improving the reliability and scalability of semantic graph based assessments without requiring constraint of how ideas are expressed.
A reliable algorithm for optimal control synthesis
NASA Technical Reports Server (NTRS)
Vansteenwyk, Brett; Ly, Uy-Loi
1992-01-01
In recent years, powerful design tools for linear time-invariant multivariable control systems have been developed based on direct parameter optimization. In this report, an algorithm for reliable optimal control synthesis using parameter optimization is presented. Specifically, a robust numerical algorithm is developed for the evaluation of the H(sup 2)-like cost functional and its gradients with respect to the controller design parameters. The method is specifically designed to handle defective degenerate systems and is based on the well-known Pade series approximation of the matrix exponential. Numerical test problems in control synthesis for simple mechanical systems and for a flexible structure with densely packed modes illustrate positively the reliability of this method when compared to a method based on diagonalization. Several types of cost functions have been considered: a cost function for robust control consisting of a linear combination of quadratic objectives for deterministic and random disturbances, and one representing an upper bound on the quadratic objective for worst case initial conditions. Finally, a framework for multivariable control synthesis has been developed combining the concept of closed-loop transfer recovery with numerical parameter optimization. The procedure enables designers to synthesize not only observer-based controllers but also controllers of arbitrary order and structure. Numerical design solutions rely heavily on the robust algorithm due to the high order of the synthesis model and the presence of near-overlapping modes. The design approach is successfully applied to the design of a high-bandwidth control system for a rotorcraft.
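As a pointer to how a quadratic cost can be evaluated robustly through the matrix exponential (which SciPy itself computes by a Pade approximation), the sketch below uses Van Loan's augmented-matrix identity to obtain a finite-horizon integral of x'Qx for an arbitrary stable second-order example; this is a generic illustration, not the report's H(sup 2) cost-and-gradient algorithm for defective systems.

```python
import numpy as np
from scipy.linalg import expm

# Evaluate J = x0' ( int_0^T e^{A' s} Q e^{A s} ds ) x0 with one matrix exponential.
A = np.array([[0.0, 1.0], [-4.0, -0.4]])     # arbitrary stable system matrix
Q = np.eye(2)                                # state weighting
x0 = np.array([1.0, 0.0])
T = 5.0

M = np.block([[-A.T, Q], [np.zeros((2, 2)), A]])
F = expm(M * T)                              # Pade-based matrix exponential
F12, F22 = F[:2, 2:], F[2:, 2:]
W = F22.T @ F12                              # W = int_0^T e^{A' s} Q e^{A s} ds
print("finite-horizon quadratic cost J = %.4f" % (x0 @ W @ x0))
```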
NASA Technical Reports Server (NTRS)
Oswald, Fred B.; Savage, Michael; Zaretsky, Erwin V.
2015-01-01
The U.S. Space Shuttle fleet was originally intended to have a life of 100 flights for each vehicle, lasting over a 10-year period, with minimal scheduled maintenance or inspection. The first space shuttle flight was that of the Space Shuttle Columbia (OV-102), launched April 12, 1981. The disaster that destroyed Columbia occurred on its 28th flight, February 1, 2003, nearly 22 years after its first launch. In order to minimize risk of losing another Space Shuttle, a probabilistic life and reliability analysis was conducted for the Space Shuttle rudder/speed brake actuators to determine the number of flights the actuators could sustain. A life and reliability assessment of the actuator gears was performed in two stages: a contact stress fatigue model and a gear tooth bending fatigue model. For the contact stress analysis, the Lundberg-Palmgren bearing life theory was expanded to include gear-surface pitting for the actuator as a system. The mission spectrum of the Space Shuttle rudder/speed brake actuator was combined into equivalent effective hinge moment loads including an actuator input preload for the contact stress fatigue and tooth bending fatigue models. Gear system reliabilities are reported for both models and their combination. Reliability of the actuator bearings was analyzed separately, based on data provided by the actuator manufacturer. As a result of the analysis, the reliability of one half of a single actuator was calculated to be 98.6 percent for 12 flights. Accordingly, each actuator was subsequently limited to 12 flights before removal from service in the Space Shuttle.
The Effectiveness of Group Logo Therapy on the Hope among the Leukemic Patients
Ebrahimi, Nazila; Bahari, Farshad; Zare-Bahramabadi, Mehdi
2014-01-01
Background: The present study investigated the effectiveness of group logo therapy in increasing hope among leukemic patients. Methods: The sample comprised 80 leukemic patients who were referred to Golestan Hospital in fall 2012 and responded to Snyder's Hope Scale. The research design included pre-, post- and follow-up tests with a control group. First, both groups completed the pre-test. The experimental group then received 10 sessions of counseling through group logo therapy, whereas the control group received no specific training. Afterwards, both groups completed a post-test, and follow-up tests were administered after an interval of one month to evaluate the permanence of the effect. The data were analyzed with SPSS using analysis of covariance, and the reliability coefficient was measured with Cronbach's alpha. Results: The results showed that logo therapy training increased hope in the leukemic patients (p<0.0001); moreover, the follow-up assessment showed the same result (p<0.0001). Conclusion: Group logo therapy can be effective in increasing hope among leukemic patients, and this effect appears to be lasting. PMID:25250142
A Novel Test Method for Fuel Thermal Stability
1993-02-01
[Front-matter/table-of-contents fragment: 1.1 The Problem; 1.2 The Innovations; ... OPPORTUNITY; 2.1 The Problem; 2.2 The Innovations and Opportunity for Sol...] ... a reliable instrument and test method to evaluate these fuels. 1.2 The Innovations: The first innovation is the application of Fourier-Transform Infrared
Brosseau, Lucie; Laroche, Chantal; Guitard, Paulette; King, Judy; Poitras, Stéphane; Casimiro, Lynn; Barette, Julie Alexandra; Cardinal, Dominique; Cavallo, Sabrina; Laferrière, Lucie; Martini, Rose; Champoux, Nicholas; Taverne, Jennifer; Paquette, Chanyque; Tremblay, Sébastien; Sutton, Ann; Galipeau, Roseline; Tourigny, Jocelyne; Toupin-April, Karine; Loew, Laurianne; Demers, Catrine; Sauvé-Schenk, Katrine; Paquet, Nicole; Savard, Jacinthe; Lagacé, Josée; Pharand, Denyse; Vaillancourt, Véronique
2017-01-01
Objectives: The primary objective was to produce a French-Canadian translation of AMSTAR (a measurement tool to assess systematic reviews) and to examine the validity of the translation's contents. The secondary and tertiary objectives were to assess the inter-rater reliability and factorial construct validity of this French-Canadian version of AMSTAR. Methods: A modified approach to Vallerand's methodology (1989) for cross-cultural validation was used. First, a parallel back-translation of AMSTAR was performed by both professionals and future professionals. Next, a first committee of experts (P1) examined the translations to create a first draft of the French-Canadian version of the AMSTAR tool. This draft was then evaluated and modified by a second committee of experts (P2). Following that, 18 future professionals (master's students in physiotherapy) rated this second draft of the instrument for clarity using a seven-point scale (1: very clear; 7: very ambiguous). Lastly, the principal co-investigators then reviewed the problematic elements and proposed final changes. Four independent raters used this French-Canadian version of AMSTAR to assess 20 systematic reviews that were published in French after the year 2000. An intraclass correlation coefficient (ICC) and kappa coefficient were calculated to measure the tool's inter-rater reliability. A Cronbach's alpha coefficient was also calculated to measure internal consistency. In addition, factor analysis was used to evaluate construct validity in order to determine the number of dimensions. Results: The statements on the final version of the AMSTAR tool received an average ambiguity rating of between 1.0 and 1.4. No statement received an average rating above 1.4, which indicates a high level of clarity. Inter-rater reliability (n = 4) for the instrument's total score was moderate, with an intraclass correlation coefficient of 0.61 (95% confidence interval [CI]: 0.29, 0.97). Inter-rater reliability for 82% of the individual items was good, according to the kappa values obtained. Internal consistency was excellent, with a Cronbach's alpha coefficient of 0.91 (95% CI: 0.83, 0.99). The French-Canadian version of AMSTAR is a unidimensional tool, as confirmed by factor analysis and communality values greater than 0.30. Conclusion: A valid French-Canadian version of AMSTAR was created using this rigorous five-step process. This version is unidimensional, with moderate inter-rater reliability for the elements overall, and with excellent internal consistency. This tool could be valuable to French-Canadian professionals and researchers, and could also be of interest to the international Francophone community.
Fast grasping of unknown objects using cylinder searching on a single point cloud
NASA Astrophysics Data System (ADS)
Lei, Qujiang; Wisse, Martijn
2017-03-01
Grasping of unknown objects with neither appearance data nor object models given in advance is very important for robots that work in an unfamiliar environment. The goal of this paper is to quickly synthesize an executable grasp for one unknown object by using cylinder searching on a single point cloud. Specifically, a 3D camera is first used to obtain a partial point cloud of the target unknown object. An original method is then employed to post-process the partial point cloud to minimize the uncertainty which may lead to grasp failure. In order to accelerate the grasp searching, the surface normal of the target object is then used to constrain the synthesis of the cylinder grasp candidates. Operability analysis is then used to select all executable grasp candidates, followed by force balance optimization to choose the most reliable grasp as the final grasp execution. To verify the effectiveness of our algorithm, simulations on a Universal Robot UR5 arm and an under-actuated Lacquey Fetch gripper are used to examine its performance, and successful results are obtained.
Element enrichment factor calculation using grain-size distribution and functional data regression.
Sierra, C; Ordóñez, C; Saavedra, A; Gallego, J R
2015-01-01
In environmental geochemistry studies it is common practice to normalize element concentrations in order to remove the effect of grain size. Linear regression with respect to a particular grain size or conservative element is a widely used method of normalization. In this paper, the utility of functional linear regression, in which the grain-size curve is the independent variable and the concentration of pollutant the dependent variable, is analyzed and applied to detrital sediment. After implementing functional linear regression and classical linear regression models to normalize and calculate enrichment factors, we concluded that the former regression technique has some advantages over the latter. First, functional linear regression directly considers the grain-size distribution of the samples as the explanatory variable. Second, as the regression coefficients are not constant values but functions depending on the grain size, it is easier to comprehend the relationship between grain size and pollutant concentration. Third, regularization can be introduced into the model in order to establish equilibrium between reliability of the data and smoothness of the solutions. Copyright © 2014 Elsevier Ltd. All rights reserved.
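A minimal sketch of the functional-regression idea described above, under simplifying assumptions: the pollutant concentration is regressed on the whole grain-size curve, with the coefficient function expanded in a small polynomial basis and a ridge penalty standing in for the regularization mentioned in the abstract. All data are synthetic and this is not the paper's implementation.

```python
# Functional linear regression sketch: y_i = integral x_i(s) * beta(s) ds + noise.
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_sizes, n_basis, lam = 60, 40, 5, 1e-2
s = np.linspace(0.0, 1.0, n_sizes)               # normalized grain-size axis
X = rng.random((n_samples, n_sizes))             # grain-size distributions (rows)
X /= X.sum(axis=1, keepdims=True)
beta_true = np.sin(2 * np.pi * s)                # "true" coefficient function
y = X @ beta_true + 0.01 * rng.normal(size=n_samples)

Phi = np.vander(s, n_basis, increasing=True)     # polynomial basis evaluated on s
Z = X @ Phi                                      # functional covariates -> scalar design
coef = np.linalg.solve(Z.T @ Z + lam * np.eye(n_basis), Z.T @ y)   # ridge solution
beta_hat = Phi @ coef                            # estimated coefficient function beta(s)
print("fitted beta(s) at coarse/medium/fine sizes:", beta_hat[[0, n_sizes // 2, -1]])
```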
Assessing the evolutionary rate of positional orthologous genes in prokaryotes using synteny data
Lemoine, Frédéric; Lespinet, Olivier; Labedan, Bernard
2007-01-01
Background Comparison of completely sequenced microbial genomes has revealed how fluid these genomes are. Detecting synteny blocks requires reliable methods for determining the orthologs among the whole set of homologs detected by exhaustive comparisons between each pair of completely sequenced genomes. This is a complex and difficult problem in the field of comparative genomics but will help to better understand the way prokaryotic genomes are evolving. Results We have developed a suite of programs that automate three essential steps to study conservation of gene order, and validated them with a set of 107 bacteria and archaea that cover the majority of the prokaryotic taxonomic space. We identified the whole set of shared homologs between two or more species and computed the evolutionary distance separating each pair of homologs. We applied two strategies to extract from the set of homologs a collection of valid orthologs shared by at least two genomes. The first computes the Reciprocal Smallest Distance (RSD) using the PAM distances separating pairs of homologs. The second method groups homologs in families and reconstructs each family's evolutionary tree, distinguishing bona fide orthologs as well as paralogs created after the last speciation event. Although the phylogenetic tree method often succeeds where RSD fails, the reverse could occasionally be true. Accordingly, we used the data obtained with either method, or their intersection, to enumerate the orthologs that are adjacent in each pair of genomes, the Positional Orthologous Genes (POGs), and to further study their properties. Once all these synteny blocks had been detected, we showed that POGs are subject to more evolutionary constraints than orthologs outside synteny groups, whatever the taxonomic distance separating the compared organisms. Conclusion The suite of programs described in this paper allows a reliable detection of orthologs and is useful for evaluating gene order conservation in prokaryotes whatever their taxonomic distance. Thus, our approach will ease the rapid identification of POGs in the next few years, as we expect to be inundated with thousands of completely sequenced microbial genomes. PMID:18047665
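The core of the Reciprocal Smallest Distance idea named above fits in a few lines: a gene pair (a, b) across two genomes is an ortholog candidate when each gene is the other's smallest-distance homolog. The sketch below uses a random distance matrix as a stand-in for the PAM distances; it is only an illustration of the criterion, not the authors' program suite.

```python
# Reciprocal Smallest Distance (RSD) sketch on a toy distance matrix.
import numpy as np

rng = np.random.default_rng(2)
dist = rng.random((5, 6))          # rows: genes of genome A, cols: genes of genome B

orthologs = []
for a in range(dist.shape[0]):
    b = int(np.argmin(dist[a]))            # closest B-gene to a
    if int(np.argmin(dist[:, b])) == a:    # is a also the closest A-gene to b?
        orthologs.append((a, b, float(dist[a, b])))
print(orthologs)
```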
Convergence of the Light-Front Coupled-Cluster Method in Scalar Yukawa Theory
NASA Astrophysics Data System (ADS)
Usselman, Austin
We use Fock-state expansions and the Light-Front Coupled-Cluster (LFCC) method to study mass eigenvalue problems in quantum field theory. Specifically, we study convergence of the method in scalar Yukawa theory. In this theory, a single charged particle is surrounded by a cloud of neutral particles. The charged particle can create or annihilate neutral particles, causing the n-particle state to depend on the (n+1)- and (n-1)-particle states. The Fock-state expansion leads to an infinite set of coupled equations where truncation is required. The wave functions for the particle states are expanded in a basis of symmetric polynomials, and a generalized eigenvalue problem is solved for the mass eigenvalue. The mass eigenvalue problem is solved for multiple values of the coupling strength while the number of particle states and the polynomial basis order are increased. Convergence of the mass eigenvalue solutions is then obtained. Three mass ratios between the charged particle and neutral particles were studied: a massive charged particle, equal masses, and massive neutral particles. Relative probability between states can also be explored for a more detailed understanding of the process of convergence with respect to the number of Fock sectors. The reliance on higher-order particle states depended on how large the mass of the charged particle was: the higher the mass of the charged particle, the more the system depended on higher-order particle states. The LFCC method solves this same mass eigenvalue problem using an exponential operator. This exponential operator can be truncated to form a finite system of equations that can be solved using a built-in system solver provided in most computational environments, such as MATLAB and Mathematica. The first approximation in the LFCC method allows only one particle to be created by the new operator and proved to be not powerful enough to match the Fock-state expansion. The second-order approximation allowed one and two particles to be created by the new operator and converged to the Fock-state expansion results. This showed the LFCC method to be a reliable replacement method for solving quantum field theory problems.
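The numerical core referred to above, a generalized eigenvalue problem arising from projection onto a truncated polynomial basis, can be illustrated with a few lines of generic linear algebra. The matrices below are random symmetric stand-ins, not the scalar-Yukawa matrices.

```python
# Sketch of solving a generalized eigenvalue problem H c = lambda N c for the
# lowest eigenvalue, as one would for a mass-squared in a truncated basis.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(3)
k = 8                                   # basis size (truncation order)
A = rng.normal(size=(k, k))
H = A + A.T                             # symmetric "Hamiltonian-like" matrix
B = rng.normal(size=(k, k))
N = B @ B.T + k * np.eye(k)             # symmetric positive-definite norm matrix

eigvals, eigvecs = eigh(H, N)           # generalized eigenvalue problem
print("lowest eigenvalue (mass-squared analogue):", eigvals[0])
```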
Nuclear surface diffuseness revealed in nucleon-nucleus diffraction
NASA Astrophysics Data System (ADS)
Hatakeyama, S.; Horiuchi, W.; Kohama, A.
2018-05-01
The nuclear surface provides useful information on the nuclear radius, nuclear structure, as well as properties of nuclear matter. We discuss the relationship between the nuclear surface diffuseness and the elastic scattering differential cross section at the first diffraction peak of high-energy nucleon-nucleus scattering, as an efficient tool to extract nuclear surface information from the limited experimental data available for short-lived unstable nuclei. The high-energy reaction is described by a reliable microscopic reaction theory, the Glauber model. Extending the idea of the black sphere model, we find a one-to-one correspondence between the nuclear bulk structure information and the proton-nucleus elastic scattering diffraction peak. This implies that we can extract both the nuclear radius and the diffuseness simultaneously, using the position of the first diffraction peak of the elastic scattering differential cross section and its magnitude. We confirm the reliability of this approach by using realistic density distributions obtained by a mean-field model.
Smart substrates: Making multi-chip modules smarter
NASA Astrophysics Data System (ADS)
Wunsch, T. F.; Treece, R. K.
1995-05-01
A novel multi-chip module (MCM) design and manufacturing methodology which utilizes active CMOS circuits in what is normally a passive substrate realizes the 'smart substrate' for use in highly testable, high-reliability MCMs. The active devices are used to test the bare substrate, diagnose assembly errors or integrated circuit (IC) failures that require rework, and improve the testability of the final MCM assembly. A static random access memory (SRAM) MCM has been designed and fabricated in the Sandia Microelectronics Development Laboratory in order to demonstrate the technical feasibility of this concept and to examine design and manufacturing issues which will ultimately determine the economic viability of this approach. The smart substrate memory MCM represents a first in MCM packaging. At the time the first modules were fabricated, no other company or MCM vendor had incorporated active devices in the substrate to improve manufacturability and testability, and thereby improve MCM reliability and reduce cost.
USDA-ARS?s Scientific Manuscript database
Rangeland environments are particularly susceptible to erosion due to extreme rainfall events and low vegetation cover. Landowners and managers need access to reliable erosion evaluation methods in order to protect productivity and hydrologic integrity of their rangelands and make resource allocati...
78 FR 48422 - Agency Information Collection Activities: Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-08
... quantitative data through surveys with working-age (age 18-61) and older American (age 62 and older) consumers in order to develop and refine survey instruments that will enable the CFPB to reliably and... conducting research to identify methods and strategies to educate and counsel seniors, and developing goals...
Murphy, Douglas J; Bruce, David A; Mercer, Stewart W; Eva, Kevin W
2009-05-01
To investigate the reliability and feasibility of six potential workplace-based assessment methods in general practice training: criterion audit, multi-source feedback from clinical and non-clinical colleagues, patient feedback (the CARE Measure), referral letters, significant event analysis, and video analysis of consultations. Performance of GP registrars (trainees) was evaluated with each tool to assess the reliability and feasibility of the tools, given the raters and number of assessments needed. Participant experience of the process was determined by questionnaire. 171 GP registrars and their trainers, drawn from nine deaneries (representing all four countries in the UK), participated. The ability of each tool to differentiate between doctors (reliability) was assessed using generalisability theory. Decision studies were then conducted to determine the number of observations required to achieve an acceptably high reliability for "high-stakes assessment" using each instrument. Finally, descriptive statistics were used to summarise participants' ratings of their experience using these tools. Multi-source feedback from colleagues and patient feedback on consultations emerged as the two methods most likely to offer a reliable and feasible opinion of workplace performance. Reliability coefficients of 0.8 were attainable with 41 CARE Measure patient questionnaires and six clinical and/or five non-clinical colleagues per doctor when assessed on two occasions. For the other four methods tested, 10 or more assessors were required per doctor in order to achieve a reliable assessment, making the feasibility of their use in high-stakes assessment extremely low. Participant feedback did not raise any major concerns regarding the acceptability, feasibility, or educational impact of the tools. The combination of patient and colleague views of doctors' performance, coupled with reliable competence measures, may offer a suitable evidence base on which to monitor progress and completion of doctors' training in general practice.
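The decision-study logic in this abstract (how many observations are needed to reach a reliability of 0.8) can be illustrated with the Spearman-Brown prophecy formula as a simplified stand-in for a full generalisability-theory analysis. The single-observation reliabilities below are assumed values chosen only so that the outputs resemble the numbers of assessments quoted above; they are not taken from the study.

```python
# Number of observations needed to reach a target reliability, given the
# reliability of a single observation (Spearman-Brown prophecy formula).
import math

def n_needed(target, single_obs_reliability):
    r = single_obs_reliability
    return math.ceil(target * (1 - r) / (r * (1 - target)))

for label, r1 in [("patient questionnaire", 0.089), ("colleague feedback", 0.41)]:
    print(label, "->", n_needed(0.8, r1), "observations for reliability 0.8")
```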
Inter-rater reliability of an observation-based ergonomics assessment checklist for office workers.
Pereira, Michelle Jessica; Straker, Leon Melville; Comans, Tracy Anne; Johnston, Venerina
2016-12-01
To establish the inter-rater reliability of an observation-based ergonomics assessment checklist for computer workers. A 37-item (38-item if a laptop was part of the workstation) comprehensive observational ergonomics assessment checklist, comparable to government guidelines and up to date with empirical evidence, was developed. Two trained practitioners assessed full-time office workers performing their usual computer-based work and evaluated the suitability of the workstations used. The practitioners assessed each participant consecutively. The order of assessors was randomised, and the second assessor was blinded to the findings of the first. Unadjusted kappa coefficients between the raters were obtained for the overall checklist and for subsections formed from question items relevant to specific workstation equipment. Twenty-seven office workers were recruited. Inter-rater reliability between the two trained practitioners was moderate to good for all except one checklist component. This checklist therefore has mostly moderate to good reliability between two trained practitioners. Practitioner Summary: This reliable ergonomics assessment checklist for computer workers was designed using accessible government guidelines and supplemented with up-to-date evidence. Employers in Queensland (Australia) can fulfil legislative requirements by using this reliable checklist to identify and subsequently address potential risk factors for work-related injury to provide a safe working environment.
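A minimal sketch of the agreement statistic named above: an unadjusted Cohen's kappa between two raters' item-level judgements. The ratings are made-up placeholders, not the study's data.

```python
# Unadjusted Cohen's kappa between two raters on checklist-style judgements.
from sklearn.metrics import cohen_kappa_score

rater_a = ["ok", "risk", "ok", "ok", "risk", "ok", "risk", "ok"]
rater_b = ["ok", "risk", "ok", "risk", "risk", "ok", "risk", "ok"]
print("kappa:", cohen_kappa_score(rater_a, rater_b))
```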
76 FR 71966 - Notice of Commissioner and Staff Attendance at ReliabilityFirst
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-21
... Attendance at ReliabilityFirst Corporation Meetings The Federal Energy Regulatory Commission hereby gives...: ReliabilityFirst Corporation, Annual Meeting of Members and Board of Directors Meetings, Grand Hyatt... Reliability Corporation Docket No. RC11-1, North American Electric Reliability Corporation Docket No. RC11-2...
Heterogeneous dissipative composite structures
NASA Astrophysics Data System (ADS)
Ryabov, Victor; Yartsev, Boris; Parshina, Ludmila
2018-05-01
The paper suggests mathematical models of decaying vibrations in layered anisotropic plates and orthotropic rods based on the Hamilton variation principle, first-order shear deformation laminated plate theory (FSDT), and the viscous-elastic correspondence principle of linear viscoelasticity theory. In the description of the physical relationships between the materials of the layers forming stiff polymeric composites, the effect of vibration frequency and ambient temperature is assumed to be negligible, whereas for the viscous-elastic polymer layer, the temperature-frequency dependence of the elastic dissipation and stiffness properties is taken into account by means of experimentally determined generalized curves. Minimization of the Hamilton functional makes it possible to describe decaying vibration of anisotropic structures by an algebraic problem of complex eigenvalues. The system of algebraic equations is generated through the Ritz method using Legendre polynomials as coordinate functions. First, real solutions are found. To find the complex natural frequencies of the system, the obtained real natural frequencies are taken as input values, and then, by means of a third-order iteration method, the complex natural frequencies are calculated. The paper provides convergence estimates for the numerical procedures. Reliability of the obtained results is confirmed by a good correlation between analytical and experimental values of natural frequencies and loss factors in the lower vibration tones for two series of unsupported orthotropic rods formed by stiff GRP and CRP layers and a viscoelastic polymer layer. Analysis of the numerical test data has shown that the dissipation and stiffness properties of heterogeneous composite plates and rods depend considerably on the relative thickness of the viscoelastic polymer layer, the orientation of the stiff composite layers, the vibration frequency and the ambient temperature.
Stefanović, Stefica Cerjan; Bolanča, Tomislav; Luša, Melita; Ukić, Sime; Rogošić, Marko
2012-02-24
This paper describes the development of an ad hoc methodology for the determination of inorganic anions in oilfield water, since the composition of such waters often differs significantly from the average (concentration of components and/or matrix). Therefore, fast and reliable method development has to be performed in order to ensure the monitoring of desired properties under new conditions. The method development was based on a computer-assisted multi-criteria decision making strategy. The criteria used were: maximal value of the objective functions used, maximal robustness of the separation method, minimal analysis time, and maximal retention distance between the two nearest components. Artificial neural networks were used for modeling of anion retention. The reliability of the developed method was extensively tested by the validation of performance characteristics. Based on the validation results, the developed method shows satisfactory performance characteristics, proving the successful application of the computer-assisted methodology in the described case study. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Goh, A. T. C.; Kulhawy, F. H.
2005-05-01
In urban environments, one major concern with deep excavations in soft clay is the potentially large ground deformations in and around the excavation. Excessive movements can damage adjacent buildings and utilities. There are many uncertainties associated with the calculation of the ultimate or serviceability performance of a braced excavation system. These include the variabilities of the loadings, geotechnical soil properties, and engineering and geometrical properties of the wall. A risk-based approach to serviceability performance failure is necessary to incorporate systematically the uncertainties associated with the various design parameters. This paper demonstrates the use of an integrated neural network-reliability method to assess the risk of serviceability failure through the calculation of the reliability index. By first performing a series of parametric studies using the finite element method and then approximating the non-linear limit state surface (the boundary separating the safe and failure domains) through a neural network model, the reliability index can be determined with the aid of a spreadsheet. Two illustrative examples are presented to show how the serviceability performance for braced excavation problems can be assessed using the reliability index.
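The reliability-index calculation at the heart of this approach can be illustrated with a first-order example under strong simplifying assumptions: a serviceability limit state g = delta_limit - delta_predicted with independent normal variables, for which the index has a closed form. The moments below are placeholders; in the paper the limit state surface comes from finite element runs approximated by a neural network, not from a closed form.

```python
# First-order reliability index for a simple serviceability limit state.
from scipy.stats import norm

mu_limit, sd_limit = 60.0, 6.0      # allowable wall deflection, mm (assumed)
mu_pred, sd_pred = 45.0, 9.0        # predicted wall deflection, mm (assumed)

beta = (mu_limit - mu_pred) / (sd_limit ** 2 + sd_pred ** 2) ** 0.5
p_failure = norm.cdf(-beta)
print(f"reliability index beta = {beta:.2f}, P(serviceability failure) = {p_failure:.3e}")
```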
Empirical methods for assessing meaningful neuropsychological change following epilepsy surgery.
Sawrie, S M; Chelune, G J; Naugle, R I; Lüders, H O
1996-11-01
Traditional methods for assessing the neurocognitive effects of epilepsy surgery are confounded by practice effects, test-retest reliability issues, and regression to the mean. This study employs 2 methods for assessing individual change that allow direct comparison of changes across both individuals and test measures. Fifty-one medically intractable epilepsy patients completed a comprehensive neuropsychological battery twice, approximately 8 months apart, prior to any invasive monitoring or surgical intervention. First, a Reliable Change (RC) index score was computed for each test score to take into account the reliability of that measure, and a cutoff score was empirically derived to establish the limits of statistically reliable change. These indices were subsequently adjusted for expected practice effects. The second approach used a regression technique to establish "change norms" along a common metric that models both expected practice effects and regression to the mean. The RC index scores provide the clinician with a statistical means of determining whether a patient's retest performance is "significantly" changed from baseline. The regression norms for change allow the clinician to evaluate the magnitude of a given patient's change on 1 or more variables along a common metric that takes into account the reliability and stability of each test measure. Case data illustrate how these methods provide an empirically grounded means for evaluating neurocognitive outcomes following medical interventions such as epilepsy surgery.
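A minimal sketch of a practice-adjusted Reliable Change index of the kind described above: the retest-minus-baseline difference, corrected for the mean practice effect, divided by the standard error of the difference derived from the test-retest reliability. The numeric values are illustrative, not the study's.

```python
# Practice-adjusted Reliable Change (RC) index sketch.
import math

def reliable_change(x1, x2, sd_baseline, retest_r, practice_effect=0.0):
    sem = sd_baseline * math.sqrt(1.0 - retest_r)      # standard error of measurement
    s_diff = math.sqrt(2.0) * sem                      # SE of the difference score
    return (x2 - x1 - practice_effect) / s_diff

rc = reliable_change(x1=95, x2=80, sd_baseline=15, retest_r=0.85, practice_effect=2.0)
print("RC index:", round(rc, 2),
      "(reliable decline)" if rc < -1.96 else "(no reliable change at the 95% level)")
```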
Benej, Martin; Bendlova, Bela; Vaclavikova, Eliska; Poturnajova, Martina
2011-10-06
Reliable and effective primary screening of mutation carriers is the key condition for common diagnostic use. The objective of this study is to validate high resolution melting (HRM) analysis for routine primary mutation screening and to accomplish its optimization, evaluation and validation. Due to their heterozygous nature, germline point mutations of the c-RET proto-oncogene, associated with multiple endocrine neoplasia type 2 (MEN2), are suitable for HRM analysis. Early identification of mutation carriers has a major impact on patients' survival due to the early onset of medullary thyroid carcinoma (MTC) and its resistance to conventional therapy. The authors performed a series of validation assays according to the International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) guidelines for validation of analytical procedures, along with appropriate design and optimization experiments. After validated evaluation of HRM, the method was utilized for primary screening of 28 pathogenic c-RET mutations distributed among nine exons of the c-RET gene. Validation experiments confirm the repeatability, robustness, accuracy and reproducibility of HRM. All c-RET gene pathogenic variants were detected with no occurrence of false-positive/false-negative results. The data provide basic information about the design, establishment and validation of HRM for primary screening of genetic variants in order to distinguish heterozygous point mutation carriers from wild-type sequence carriers. HRM analysis is a powerful and reliable tool for rapid and cost-effective primary screening, e.g., of c-RET gene germline and/or sporadic mutations, and can be used as a first-line potential diagnostic tool.
Synchronous Control Method and Realization of Automated Pharmacy Elevator
NASA Astrophysics Data System (ADS)
Liu, Xiang-Quan
First, the control method for the elevator's synchronous motion is presented, and a synchronous control structure for dual servo motors based on PMAC is implemented. Second, the synchronous control program of the elevator is implemented by using the PMAC linear interpolation motion model and a position error compensation method. Finally, the PID parameters of the servo motors are tuned. The experiment proves that the control method has high stability and reliability.
Chaos-induced modulation of reliability boosts output firing rate in downstream cortical areas.
Tiesinga, P H E
2004-03-01
The reproducibility of neural spike train responses to an identical stimulus across different presentations (trials) has been studied extensively. Reliability, the degree of reproducibility of spike trains, was found to depend in part on the amplitude and frequency content of the stimulus [J. Hunter and J. Milton, J. Neurophysiol. 90, 387 (2003)]. The responses across different trials can sometimes be interpreted as the response of an ensemble of similar neurons to a single stimulus presentation. How does the reliability of the activity of neural ensembles affect information transmission between different cortical areas? We studied a model neural system consisting of two ensembles of neurons with Hodgkin-Huxley-type channels. The first ensemble was driven by an injected sinusoidal current that oscillated in the gamma-frequency range (40 Hz) and its output spike trains in turn drove the second ensemble by fast excitatory synaptic potentials with short term depression. We determined the relationship between the reliability of the first ensemble and the response of the second ensemble. In our paradigm the neurons in the first ensemble were initially in a chaotic state with unreliable and imprecise spike trains. The neurons became entrained to the oscillation and responded reliably when the stimulus power was increased by less than 10%. The firing rate of the first ensemble increased by 30%, whereas that of the second ensemble could increase by an order of magnitude. We also determined the response of the second ensemble when its input spike trains, which had non-Poisson statistics, were replaced by an equivalent ensemble of Poisson spike trains. The resulting output spike trains were significantly different from the original response, as assessed by the metric introduced by Victor and Purpura [J. Neurophysiol. 76, 1310 (1996)]. These results are a proof of principle that weak temporal modulations in the power of gamma-frequency oscillations in a given cortical area can strongly affect firing rate responses downstream by way of reliability in spite of rather modest changes in firing rate in the originating area.
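The spike-train comparison cited at the end of this abstract, the Victor-Purpura metric, is a small dynamic program: the minimal cost of transforming one spike train into another when adding or deleting a spike costs 1 and moving a spike by dt costs q*|dt|. The sketch below is a generic implementation on arbitrary example spike times, not the study's analysis code.

```python
# Victor-Purpura spike-train distance (edit-distance dynamic program).
import numpy as np

def victor_purpura(a, b, q):
    na, nb = len(a), len(b)
    D = np.zeros((na + 1, nb + 1))
    D[:, 0] = np.arange(na + 1)          # delete all spikes of a
    D[0, :] = np.arange(nb + 1)          # insert all spikes of b
    for i in range(1, na + 1):
        for j in range(1, nb + 1):
            D[i, j] = min(D[i - 1, j] + 1,
                          D[i, j - 1] + 1,
                          D[i - 1, j - 1] + q * abs(a[i - 1] - b[j - 1]))
    return D[na, nb]

train1 = [0.010, 0.035, 0.062, 0.090]    # spike times in seconds (arbitrary example)
train2 = [0.012, 0.040, 0.091]
print("VP distance (q = 100 1/s):", victor_purpura(train1, train2, q=100.0))
```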
Analysis of real-time numerical integration methods applied to dynamic clamp experiments.
Butera, Robert J; McCarthy, Maeve L
2004-12-01
Real-time systems are frequently used as an experimental tool, whereby simulated models interact in real time with neurophysiological experiments. The most demanding of these techniques is known as the dynamic clamp, where simulated ion channel conductances are artificially injected into a neuron via intracellular electrodes for measurement and stimulation. Methodologies for implementing the numerical integration of the gating variables in real time typically employ first-order numerical methods, either Euler or exponential Euler (EE). EE is often used for rapidly integrating ion channel gating variables. We find via simulation studies that for small time steps, both methods are comparable, but at larger time steps, EE performs worse than Euler. We derive error bounds for both methods, and find that the error can be characterized in terms of two ratios: time step over time constant, and voltage measurement error over the slope factor of the steady-state activation curve of the voltage-dependent gating variable. These ratios reliably bound the simulation error and yield results consistent with the simulation analysis. Our bounds quantitatively illustrate how measurement error restricts the accuracy that can be obtained by using smaller step sizes. Finally, we demonstrate that Euler can be computed with identical computational efficiency as EE.
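A short sketch comparing the two first-order schemes discussed above on a single gating variable dm/dt = (m_inf - m)/tau, with m_inf and tau held fixed over the step; under that assumption the exponential Euler update is exact, which is why it is popular for gating variables. Parameter values are arbitrary.

```python
# Forward Euler vs. exponential Euler for one gating-variable step.
import math

m_inf, tau = 0.8, 2.0        # steady-state value and time constant (ms), assumed
m0, dt = 0.1, 0.5            # initial value and time step (ms), assumed

m_euler = m0 + dt * (m_inf - m0) / tau
m_expeuler = m_inf + (m0 - m_inf) * math.exp(-dt / tau)
m_exact = m_expeuler         # exact when m_inf and tau are constant over the step

print(f"Euler: {m_euler:.4f}  exponential Euler: {m_expeuler:.4f}  exact: {m_exact:.4f}")
print(f"step/time-constant ratio dt/tau = {dt / tau:.2f} governs the Euler error")
```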
Didactic satellite based on Android platform for space operation demonstration and development
NASA Astrophysics Data System (ADS)
Ben Bahri, Omar; Besbes, Kamel
2018-03-01
Space technology plays a pivotal role in society development. It offers new methods for telemetry, monitoring and control. However, this sector requires training, research and skills development, and the lack of instruments, materials and budgets makes satellite technology difficult to understand. The objective of this paper is to describe a demonstration prototype of a smart phone device for the study of space operations. The first task was carried out to give a demonstration of spatial imagery and attitude determination missions through wireless communication. The smart phone's Bluetooth was used to achieve this goal, together with a new method to enable real-time transmission. In addition, an algorithm built around a quaternion-based Kalman filter was included in order to assess the reliability of the prototype's orientation estimate. The second task was carried out to provide a demonstration of the attitude control mission using the smart phone's orientation sensor, including a new method for an autonomous guided mode. As a result, the acquisition platform showed real-time measurement with good accuracy for orientation detection and image transmission. In addition, the prototype kept its balance during the demonstration based on the attitude control method.
Analysis of multiple soybean phytonutrients by near-infrared reflectance spectroscopy.
Zhang, Gaoyang; Li, Penghui; Zhang, Wenfei; Zhao, Jian
2017-05-01
Improvement of the nutritional quality of soybean is usually facilitated by a vast range of soybean germplasm with sufficient information about their multiple phytonutrients. In order to acquire this essential information from a huge number of soybean samples, a rapid analytic method is urgently required. Here, a nondestructive near-infrared reflectance spectroscopy (NIRS) method was developed for rapid and accurate simultaneous measurement of 25 nutritional components in soybean, including the fatty acids palmitic acid, stearic acid, oleic acid, linoleic acid, and linolenic acid; vitamin E (VE), α-VE, γ-VE, and δ-VE; saponins; isoflavonoids; and flavonoids. Modified partial least squares regression and first, second, third, and fourth derivative transformations were applied for model development. The 1 minus variance ratio (1-VR) value of the optimal models ranged from a low of 0.64 to a high of 0.95. The predicted values of phytonutrients in soybean using NIRS technology are comparable to those obtained using traditional spectroscopic or chemical methods. A robust NIRS model can be adopted as a reliable method to evaluate complex plant constituents for screening large-scale samples of soybean germplasm resources or genetic populations for improvement of nutritional qualities.
2010-01-01
Background Tasks chosen to evaluate motor performance should reflect the movement deficits characteristic of the target population and present an appropriate challenge for the patients who would be evaluated. A reaching task that evaluates impairment characteristics of people with shoulder impingement syndrome (SIS) was developed to evaluate the motor performance of this population. The objectives of this study were to characterize the reproducibility of this reaching task in people with and without SIS and to evaluate the impact of the number of trials on reproducibility. Methods Thirty subjects with SIS and twenty healthy subjects participated in the first measurement session to evaluate intrasession reliability. Ten healthy subjects were retested within 2 to 7 days to assess intersession reliability. At each measurement session, upper extremity kinematic patterns were evaluated during a reaching task. Ten trials were recorded. Thereafter, the upper extremity position at the end of reaching and total joint excursion that occurred during reaching were calculated. Intraclass correlation coefficient (ICC) and minimal detectable change (MDC) were used to estimate intra and intersession reliability. Results Intrasession reliability for total joint excursion was good to very good when based on the first two trials (0.77
NASA Astrophysics Data System (ADS)
Qiu, Lei; Yuan, Shenfang; Shi, Xiaoling; Huang, Tianxiang
2012-07-01
Piezoelectric transducer (PZT) and Lamb wave based structural health monitoring (SHM) methods have been widely studied for on-line SHM of high-performance structures. To monitor large-scale structures, a dense PZT array is required. In order to improve the placement efficiency and reduce the wiring burden of the PZT array, the concept of the piezoelectric transducers layer (PSL) was proposed. The PSL consists of PZTs, a flexible interlayer with printed wires, and a signal input/output interface. For on-line SHM on real aircraft structures, there are two main issues: electromagnetic interference and connection reliability of the PSL. To address these issues, an electromagnetic shielding design method of the PSL to reduce spatial electromagnetic noise and crosstalk is proposed, and a combined welding-cementation process based connection reliability design method is proposed to enhance the connection reliability between the PZTs and the flexible interlayer. Two experiments on electromagnetic interference suppression are performed to validate the shielding design of the PSL. The experimental results show that the amplitudes of the spatial electromagnetic noise and crosstalk output from the shielded PSL developed in this paper are 15 dB and 25 dB lower, respectively, than those of the ordinary PSL. Two other experiments, on temperature durability (-55 °C to 80 °C) and strength durability (160-1600 με, one million load cycles), are applied to the PSL to validate the connection reliability. The low repeatability errors (less than 3% and less than 5%, respectively) indicate that the developed PSL has high connection reliability and a long fatigue life.
Radiographic measurement reliability of lumbar lordosis in ankylosing spondylitis.
Lee, Jung Sub; Goh, Tae Sik; Park, Shi Hwan; Lee, Hong Seok; Suh, Kuen Tak
2013-04-01
Intraobserver and interobserver reliabilities of several different methods to measure lumbar lordosis have been reported. However, this has not been studied so far in patients with ankylosing spondylitis (AS). We evaluated the inter- and intraobserver reliabilities of six specific measures of global lumbar lordosis in patients with AS. Ninety-one consecutive patients with AS who met the most recently modified New York criteria were enrolled and underwent anteroposterior and lateral radiographs of the whole spine. The radiographs were divided into non-ankylosis (no bony bridge in the lumbar spine), incomplete ankylosis (lumbar spines partially connected by bony bridges) and complete ankylosis groups to evaluate the reliability of the Cobb L1-S1, Cobb L1-L5, centroid, posterior tangent L1-S1, posterior tangent L1-L5, and TRALL methods. The radiographs comprised 39 non-ankylosis, 27 incomplete ankylosis and 25 complete ankylosis cases. Intra- and interobserver intraclass correlation coefficients (ICCs) of all six methods were generally high. The ICCs were all ≥0.77 (excellent) for the six radiographic methods in the combined group. However, a comparison of the ICCs, 95% confidence intervals and mean absolute differences (MAD) between groups with varying degrees of ankylosis showed that the reliability of the lordosis measurements decreased in proportion to the severity of ankylosis. The Cobb L1-S1, Cobb L1-L5 and posterior tangent L1-S1 methods demonstrated higher ICCs for both inter- and intraobserver comparisons, whereas the other methods showed lower ICCs in all groups. The intraobserver MAD was similar for the Cobb L1-S1 and Cobb L1-L5 methods (2.7°-4.3°), but the other methods showed higher intraobserver MAD. Only the Cobb L1-L5 method showed a low interobserver MAD in all groups. These results are the first to provide a reliability analysis of different global lumbar lordosis measurement methods in AS. The findings in this study demonstrate that the Cobb L1-L5 method is reliable for measuring global lumbar lordosis in AS.
Short assessment of the Big Five: robust across survey methods except telephone interviewing.
Lang, Frieder R; John, Dennis; Lüdtke, Oliver; Schupp, Jürgen; Wagner, Gert G
2011-06-01
We examined measurement invariance and age-related robustness of a short 15-item Big Five Inventory (BFI-S) of personality dimensions, which is well suited for applications in large-scale multidisciplinary surveys. The BFI-S was assessed in three different interviewing conditions: computer-assisted or paper-assisted face-to-face interviewing, computer-assisted telephone interviewing, and a self-administered questionnaire. Randomized probability samples from a large-scale German panel survey and a related probability telephone study were used in order to test method effects on self-report measures of personality characteristics across early, middle, and late adulthood. Exploratory structural equation modeling was used in order to test for measurement invariance of the five-factor model of personality trait domains across different assessment methods. For the short inventory, findings suggest strong robustness of self-report measures of personality dimensions among young and middle-aged adults. In old age, telephone interviewing was associated with greater distortions in reliable personality assessment. It is concluded that the greater mental workload of telephone interviewing limits the reliability of self-report personality assessment. Face-to-face surveys and self-administrated questionnaire completion are clearly better suited than phone surveys when personality traits in age-heterogeneous samples are assessed.
Sajnóg, Adam; Hanć, Anetta; Barałkiewicz, Danuta
2018-05-15
Analysis of clinical specimens by imaging techniques allows the content and distribution of trace elements on the surface of the examined sample to be determined. In order to obtain reliable results, the developed procedure should be based not only on a properly prepared sample and properly performed calibration. It is also necessary to carry out all phases of the procedure in accordance with the principles of chemical metrology, whose main pillars are the use of validated analytical methods, establishing the traceability of the measurement results, and the estimation of the uncertainty. This review paper discusses aspects related to the sampling, preparation and analysis of clinical samples by laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS), with emphasis on metrological aspects, i.e. selected validation parameters of the analytical method, the traceability of the measurement result and the uncertainty of the result. This work promotes the introduction of metrology principles into chemical measurement, with emphasis on LA-ICP-MS, a comparative method that requires a rigorous approach to the development of the analytical procedure in order to acquire reliable quantitative results. Copyright © 2018 Elsevier B.V. All rights reserved.
Light aeroplane engine development
NASA Technical Reports Server (NTRS)
Fell, L F R
1925-01-01
It has frequently been stated and written that in order to popularize light aircraft the first essential is the production of a reliable engine capable of being easily maintained and having a long life, at the same time selling at a low figure. It is desired to point out the difficulties in the way of realizing this ideal before remarking on the claims of the various types for adoption.
Susan J. Prichard; Eva C. Karau; Roger D. Ottmar; Maureen C. Kennedy; James B. Cronan; Clinton S. Wright; Robert E. Keane
2014-01-01
Reliable predictions of fuel consumption are critical in the eastern United States (US), where prescribed burning is frequently applied to forests and air quality is of increasing concern. CONSUME and the First Order Fire Effects Model (FOFEM), predictive models developed to estimate fuel consumption and emissions from wildland fires, have not been systematically...
ERIC Educational Resources Information Center
Bacanli, Hasan; Surucu, Mustafa; Ilhan, Tahsin
2013-01-01
The aim of the current study was to develop a short form of the Coping Styles Scale based on the COPE Inventory. The scale was administered to a total of 275 undergraduate students (114 female and 74 male) in the first study. In order to test the factor structure of the Coping Styles Scale Brief Form, principal components factor analysis and direct oblique rotation was…
Reliable femoral frame construction based on MRI dedicated to muscles position follow-up.
Dubois, G; Bonneau, D; Lafage, V; Rouch, P; Skalli, W
2015-10-01
In vivo follow-up of muscle shape variation represents a challenge when evaluating muscle development due to disease or treatment. Recent developments in muscle reconstruction techniques indicate MRI as a clinical tool for the follow-up of the thigh muscles. The comparison of 3D muscle shapes from two different sequences is not easy because there is no common frame. This study proposes an innovative method for the reconstruction of a reliable femoral frame based on the centers of the femoral head and both condyles. In order to robustify the definition of the condylar spheres, an original method was developed that combines the estimation of the diameters of both condyles from the lateral antero-posterior distance with the estimation of the sphere centers from an optimization process. The influence of the spacing between MR slices and of the origin position was studied. For all axes, the proposed method presented an angular error lower than 1° with a slice spacing of 10 mm, and the optimal position of the origin was identified at 56% of the distance between the femoral head center and the barycenter of both condyles. The high reliability of this method provides a robust frame for clinical follow-up based on MRI.
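The sphere-center estimation step referred to above can be illustrated with a generic least-squares sphere fit in its algebraic form; this is a simplified stand-in for the authors' combined diameter-plus-optimization procedure, and the surface points below are synthetic.

```python
# Algebraic least-squares sphere fit: ||p||^2 = 2 p.c + (r^2 - ||c||^2).
import numpy as np

def fit_sphere(points):
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = np.sum(points ** 2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, k = sol[:3], sol[3]
    radius = np.sqrt(k + center @ center)
    return center, radius

rng = np.random.default_rng(4)
true_c, true_r = np.array([10.0, -5.0, 30.0]), 24.0   # assumed "femoral head" (mm)
u, v = rng.uniform(0, np.pi, 200), rng.uniform(0, 2 * np.pi, 200)
pts = true_c + true_r * np.c_[np.sin(u) * np.cos(v), np.sin(u) * np.sin(v), np.cos(u)]
pts += 0.2 * rng.normal(size=pts.shape)               # segmentation noise
print(fit_sphere(pts))
```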
NASA Astrophysics Data System (ADS)
Maalek, R.; Lichti, D. D.; Ruwanpura, J.
2015-08-01
The application of terrestrial laser scanners (TLSs) on construction sites for automating construction progress monitoring and controlling structural dimension compliance is growing markedly. However, current research in construction management relies on the planned building information model (BIM) to assign the accumulated point clouds to their corresponding structural elements, which may not be reliable in cases where the dimensions of the as-built structure differ from those of the planned model and/or the planned model is not available with sufficient detail. In addition, outliers exist in construction site datasets due to data artefacts caused by moving objects, occlusions and dust. In order to overcome the aforementioned limitations, a novel method for robust classification and segmentation of planar and linear features is proposed to reduce the effects of outliers present in the LiDAR data collected from construction sites. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a robust clustering method. A method is also proposed to robustly extract the points belonging to the flat-slab floors and/or ceilings without performing the aforementioned stages, in order to preserve computational efficiency. The applicability of the proposed method is investigated in two scenarios, namely, a laboratory with 30 million points and an actual construction site with over 150 million points. The results obtained from the two experiments validate the suitability of the proposed method for robust segmentation of planar and linear features in contaminated datasets, such as those collected from construction sites.
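The planarity/collinearity classification referred to above can be illustrated with classical (non-robust) principal components analysis of a point neighbourhood: the eigenvalue spectrum of the local covariance matrix indicates whether points are coplanar, collinear, or volumetric. The paper uses a robust PCA variant to resist outliers; this simplified sketch only shows the eigenvalue criterion, with an assumed ratio threshold.

```python
# Classify a point neighbourhood from the eigenvalues of its covariance matrix.
import numpy as np

def classify_neighbourhood(points, ratio=0.05):
    centered = points - points.mean(axis=0)
    eigvals = np.sort(np.linalg.eigvalsh(np.cov(centered.T)))[::-1]  # l1 >= l2 >= l3
    if eigvals[2] / eigvals[0] < ratio and eigvals[1] / eigvals[0] >= ratio:
        return "planar"      # two dominant directions -> coplanar points
    if eigvals[1] / eigvals[0] < ratio:
        return "linear"      # one dominant direction -> collinear points
    return "volumetric"

rng = np.random.default_rng(5)
plane_pts = np.c_[rng.uniform(-1, 1, 500), rng.uniform(-1, 1, 500),
                  0.01 * rng.normal(size=500)]
print(classify_neighbourhood(plane_pts))   # expected: "planar"
```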
Jakupovic, Vedran; Solakovic, Suajb; Celebic, Nedim; Kulovic, Dzenan
2018-01-01
Introduction: Diabetes is a progressive condition which requires various modes of treatment. Adequate therapy prescribed at the right time helps patients postpone the development of complications. Adherence to complicated therapy is a challenge for both patients and HCPs and is a subject of research in many disciplines. Improvement in communication between HCPs and patients is very important for patients' adherence to therapy. Aim: The aim of this research was to explore the validity and reliability of a modified SERVQUAL instrument in an attempt to explore ways of motivating diabetic patients to accept prescribed insulin therapy. Material and Methods: We used a modified SERVQUAL questionnaire as the research instrument. It was necessary to check the validity and reliability of the new modified instrument. Results: The results show that the modified SERVQUAL instrument has excellent reliability (α=0.908), so we can say that it precisely measures Expectations, Perceptions and Motivation in patients. Factor analysis (EFA method) with Varimax rotation extracted 4 factors which together explain 52.902% of the variance of the results on this subscale. A bifactorial solution could be seen in the scree plot (break at the second factor). Conclusion: The results of this research show that the modified SERVQUAL instrument, created to measure patients' expectations and perceptions, is valid and reliable. Reliability and validity were also demonstrated for the additional dimension created specifically for this research: motivation to accept insulin therapy. PMID:29670478
NASA Astrophysics Data System (ADS)
Chen, Shanjun; Duan, Haibin; Deng, Yimin; Li, Cong; Zhao, Guozhi; Xu, Yan
2017-12-01
Autonomous aerial refueling is a significant technology that can greatly extend the endurance of unmanned aerial vehicles. A reliable method that can accurately estimate the position and attitude of the probe relative to the drogue is the key to such a capability. A drogue pose estimation method based on an infrared vision sensor is introduced with the general goal of yielding an accurate and reliable drogue state estimate. First, by employing direct least squares ellipse fitting and the convex hull in OpenCV, a feature point matching and interference point elimination method is proposed. In addition, considering conditions in which some infrared LEDs are damaged or occluded, a missing point estimation method based on perspective transformation and affine transformation is designed. Finally, an accurate and robust pose estimation algorithm improved by the runner-root algorithm is proposed. The feasibility of the designed visual measurement system is demonstrated by flight test, and the results indicate that our proposed method enables precise and reliable pose estimation of the probe relative to the drogue, even in some poor conditions.
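A minimal sketch of the two OpenCV primitives named above, direct least squares ellipse fitting and the convex hull, applied to placeholder 2D points standing in for detected infrared LED centroids. This only illustrates the calls; it is not the paper's matching or elimination logic.

```python
# Ellipse fit and convex hull over candidate marker centroids with OpenCV.
import numpy as np
import cv2

# Candidate LED centroids (pixel coordinates), roughly on an ellipse plus one outlier
pts = np.array([[320, 180], [360, 190], [390, 220], [395, 260],
                [370, 300], [330, 310], [300, 280], [295, 230], [500, 100]],
               dtype=np.float32)

hull = cv2.convexHull(pts)                       # outer boundary of the candidate set
center, axes, angle = cv2.fitEllipse(pts[:8])    # direct least squares ellipse fit
print("hull vertices:", len(hull), "| ellipse center:", center,
      "axes:", axes, "angle:", angle)
```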
First order augmentation to tensor voting for boundary inference and multiscale analysis in 3D.
Tong, Wai-Shun; Tang, Chi-Keung; Mordohai, Philippos; Medioni, Gérard
2004-05-01
Most computer vision applications require the reliable detection of boundaries. In the presence of outliers, missing data, orientation discontinuities, and occlusion, this problem is particularly challenging. We propose to address it by complementing the tensor voting framework, which was limited to second order properties, with first order representation and voting. First order voting fields and a mechanism to vote for 3D surface and volume boundaries and curve endpoints in 3D are defined. Boundary inference is also useful for a second difficult problem in grouping, namely, automatic scale selection. We propose an algorithm that automatically infers the smallest scale that can preserve the finest details. Our algorithm then proceeds with progressively larger scales to ensure continuity where it has not been achieved. Therefore, the proposed approach does not oversmooth features or delay the handling of boundaries and discontinuities until model misfit occurs. The interaction of smooth features, boundaries, and outliers is accommodated by the unified representation, making possible the perceptual organization of data in curves, surfaces, volumes, and their boundaries simultaneously. We present results on a variety of data sets to show the efficacy of the improved formalism.
Zarei, Fatemeh; Solhi, Mahnaz; Merghati-Khoei, Effat; Taghdisi, Mohammad Hossein; Shojaeizadeh, Davoud; Taket, Ann Rosemary; Masoomi, Razieh; Nedjat, Saharnaz
2017-05-01
Divorce, especially in women, can be assessed from a socio-cultural perspective as well as a psychological viewpoint. This assessment requires a culturally adapted, valid and reliable questionnaire. This study aimed to develop and assess the psychometric properties of a questionnaire addressing the social consequences of divorce in Iranian women. This was an exploratory mixed-method study conducted from 2012 to 2014. In the first phase, following a grounded theory approach, social exclusion emerged as the core of the process described by participants. Based on this, the reliability and validity of 47 preliminarily generated items were assessed. In the second phase, divorced women were recruited from a safe community center in Tehran through convenience sampling. Exploratory factor analysis conducted on the questionnaires of 150 divorced women (mean age 41.76±8.49 yr) indicated five dimensions (discriminative marital status, economic dependence on marital status, exclusionary marital status, traumatic marital status health risks, and frightening marital status) that jointly accounted for 64% of the observed variance. An expert panel approved the face and content validity of the developed tool. The Cronbach's alpha coefficient and the intra-class correlation coefficient were found to be 0.70 and 0.85, respectively. The present study provided a valid and reliable measure, the Social Exclusion Questionnaire in Iranian Divorced Women (SEQ-IDW), to address post-divorce social consequences, which might help to improve women's social health.
Emergency First Responders' Experience with Colorimetric Detection Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandra L. Fox; Keith A. Daum; Carla J. Miller
2007-10-01
Nationwide, first responders from state and federal support teams respond to hazardous materials incidents, industrial chemical spills, and potential weapons of mass destruction (WMD) attacks. Although first responders have sophisticated chemical, biological, radiological, and explosive detectors available for assessment of the incident scene, simple colorimetric detectors have a role in response actions. The large number of colorimetric chemical detection methods available on the market can make the selection of the proper methods difficult. Although each detector has unique aspects to provide qualitative or quantitative data about the unknown chemicals present, not all detectors provide consistent, accurate, and reliable results. Included here, in a consumer-report-style format, we provide "boots on the ground" information directly from first responders about how well colorimetric chemical detection methods meet their needs in the field and how they procure these methods.
Nakling, Jakob; Buhaug, Harald; Backe, Bjorn
2005-10-01
In a large unselected population of normal spontaneous pregnancies, to estimate the biologic variation of the interval from the first day of the last menstrual period to the start of pregnancy, and the biologic variation of gestational length to delivery; and to estimate the random error of routine ultrasound assessment of gestational age in the mid-second trimester. Cohort study of 11,238 singleton pregnancies, with spontaneous onset of labour and a reliable last menstrual period. The day of delivery was predicted with two independent methods: according to the rule of Nägele, and based on ultrasound examination in gestational weeks 17-19. For both methods, the mean difference between the observed and predicted day of delivery was calculated. The variances of the differences were combined to estimate the variances of the two partitions of pregnancy. The biologic variation of the time from the last menstrual period to pregnancy start was estimated to be 7.0 days (standard deviation), and the standard deviation of the time to spontaneous delivery was estimated to be 12.4 days. The estimate of the standard deviation of the random error of ultrasound-assessed foetal age was 5.2 days. Even when the last menstrual period is reliable, the biologic variation of the time from the last menstrual period to the real start of pregnancy is substantial and must be taken into account. Reliable information about the first day of the last menstrual period is not equivalent to reliable information about the start of pregnancy.
Barium Nitrate Raman Laser Development for Remote Sensing of Ozone
NASA Technical Reports Server (NTRS)
McCray, Christopher L.; Chyba, Thomas H.
1997-01-01
In order to understand the impact of anthropogenic emissions upon the earth's environment, scientists require remote sensing techniques which are capable of providing range-resolved measurements of clouds, aerosols, and the concentrations of several chemical constituents of the atmosphere. The differential absorption lidar (DIAL) technique is a very promising method to measure concentration profiles of chemical species such as ozone and water vapor as well as detect the presence of aerosols and clouds. If a suitable DIAL system could be deployed in space, it would provide a global data set of tremendous value. Such systems, however, need to be compact, reliable, and very efficient. In order to measure atmospheric gases with the DIAL technique, the laser transmitter must generate suitable on-line and off-line wavelength pulse pairs. The on-line pulse is resonant with an absorption feature of the species of interest. The off-line pulse is tuned so that it encounters significantly less absorption. The relative backscattered power for the two pulses enables the range-resolved concentration to be computed. Preliminary experiments at NASA LaRC suggested that the solid-state Raman-shifting material, Ba(NO3)2, could be utilized to produce these pulse pairs. A Raman oscillator pumped at 532 nm by a frequency-doubled Nd:YAG laser can create first Stokes laser output at 563 nm and second Stokes output at 599 nm. With frequency doublers, UV output at 281 nm and 299 nm can be subsequently obtained. This all-solid-state system has the potential to be very efficient, compact, and reliable. Raman shifting in Ba(NO3)2 has previously been performed in both the visible and the infrared. The first Raman oscillator in the visible region was investigated in 1986 using plane-plane and unstable telescopic resonator configurations. However, most of the recent research has focused on the development of infrared sources for eye-safe lidar applications.
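The range-resolved retrieval described above follows the standard two-wavelength DIAL relation; the form below is the textbook expression rather than one quoted in the report, so treat the symbols as assumptions:

\[
N(R) \approx \frac{1}{2\,\Delta\sigma\,\Delta R}\,
\ln\!\left[\frac{P_{\mathrm{off}}(R+\Delta R)\,P_{\mathrm{on}}(R)}
               {P_{\mathrm{off}}(R)\,P_{\mathrm{on}}(R+\Delta R)}\right],
\]

where \(N(R)\) is the number density of the absorbing species in the range cell of width \(\Delta R\), \(\Delta\sigma\) is the differential absorption cross section between the on-line and off-line wavelengths, and \(P_{\mathrm{on}}\), \(P_{\mathrm{off}}\) are the backscattered powers of the two pulses.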
Probabilistic fatigue methodology for six nines reliability
NASA Technical Reports Server (NTRS)
Everett, R. A., Jr.; Bartlett, F. D., Jr.; Elber, Wolf
1990-01-01
Fleet readiness and flight safety strongly depend on the degree of reliability that can be designed into rotorcraft flight-critical components. The current U.S. Army fatigue life specification for new rotorcraft is the so-called six nines reliability, or a probability of failure of one in a million. The progress of a round robin established by the American Helicopter Society (AHS) Subcommittee for Fatigue and Damage Tolerance is reviewed to investigate reliability-based fatigue methodology. The participants in this cooperative effort are from the U.S. Army Aviation Systems Command (AVSCOM) and the rotorcraft industry. One phase of the joint activity examined fatigue reliability under uniquely defined conditions for which only one answer was correct. The other phases were set up to learn how the different industry methods of defining fatigue strength affected the mean fatigue life and reliability calculations. Hence, constant-amplitude and spectrum fatigue test data were provided so that each participant could perform their standard fatigue life analysis. As a result of this round robin, the probabilistic logic which includes both fatigue strength and spectrum loading variability in developing a consistent reliability analysis was established. In this first study, the reliability analysis was limited to the linear cumulative damage approach. However, it is expected that superior fatigue life prediction methods will ultimately be developed through this open AHS forum. To that end, these preliminary results were useful in identifying some topics for additional study.
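The linear cumulative damage approach mentioned above is Miner's rule. The sketch below shows one common way to fold fatigue-strength scatter into a probability-of-failure estimate by Monte Carlo; the S-N curve, distribution parameters, load spectrum, and retirement life are hypothetical placeholders, not values from the round robin.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical S-N curve: N(S) = A * S**(-b), with lognormal scatter on A.
# All parameter values below are placeholders, not round-robin data.
b = 4.0
A_median = 1.0e15
A_sigma_ln = 0.4

# Hypothetical load spectrum: (stress amplitude, cycles per flight hour)
spectrum = [(300.0, 50), (200.0, 400), (120.0, 5000)]

# Miner's rule: damage per flight hour = sum_i n_i / N_i = (sum_i n_i * S_i**b) / A
spectrum_severity = sum(n * S**b for S, n in spectrum)

# Monte Carlo over fatigue-strength scatter: life in flight hours = A / severity
A_samples = A_median * np.exp(rng.normal(0.0, A_sigma_ln, size=1_000_000))
lives = A_samples / spectrum_severity

service_life = 100.0  # hypothetical declared retirement life, flight hours
p_fail = np.mean(lives < service_life)
print(f"Median life ~ {np.median(lives):.0f} h, "
      f"P(failure before {service_life:.0f} h) ~ {p_fail:.1e}")
```

Lowering the declared retirement life shifts the failure probability toward the six-nines target; a production analysis would also carry the spectrum-loading variability the round robin examined.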
Improved Adaptive LSB Steganography Based on Chaos and Genetic Algorithm
NASA Astrophysics Data System (ADS)
Yu, Lifang; Zhao, Yao; Ni, Rongrong; Li, Ting
2010-12-01
We propose a novel, high-performance steganographic method for JPEG images. First, we propose an improved adaptive LSB steganography that achieves high capacity while preserving first-order statistics. Second, to minimize visual degradation of the stego image, we shuffle the bit order of the message using a chaotic map whose parameters are selected by a genetic algorithm. Shuffling the message's bit order provides a new way to improve the performance of steganography. Experimental results show that our method outperforms classical steganographic methods in image quality while preserving histogram characteristics and providing high capacity.
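As a hedged sketch of the chaos-driven shuffling idea (the paper's actual map, key handling, and genetic-algorithm fitness are not specified in the abstract, so the logistic map and parameters here are assumptions), one can permute message bits by ranking the iterates of a chaotic sequence:

```python
import numpy as np

def chaotic_permutation(n_bits: int, x0: float = 0.37, r: float = 3.99) -> np.ndarray:
    """Permutation of bit indices derived from logistic-map iterates (illustrative only)."""
    x = np.empty(n_bits)
    xi = x0
    for i in range(n_bits):
        xi = r * xi * (1.0 - xi)   # logistic map iteration
        x[i] = xi
    return np.argsort(x)           # ranking the iterates yields a permutation

def shuffle_bits(bits: np.ndarray, x0: float, r: float) -> np.ndarray:
    return bits[chaotic_permutation(bits.size, x0, r)]

def unshuffle_bits(shuffled: np.ndarray, x0: float, r: float) -> np.ndarray:
    perm = chaotic_permutation(shuffled.size, x0, r)
    out = np.empty_like(shuffled)
    out[perm] = shuffled           # invert the permutation
    return out

# Round-trip check with a random message
msg = np.random.randint(0, 2, size=64)
assert np.array_equal(msg, unshuffle_bits(shuffle_bits(msg, 0.37, 3.99), 0.37, 3.99))
```

In the scheme described by the abstract, the map parameters (here x0 and r) would be chosen by the genetic algorithm so that the embedded, shuffled message minimizes visual degradation of the stego image.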
Detecting long-term growth trends using tree rings: a critical evaluation of methods.
Peters, Richard L; Groenendijk, Peter; Vlam, Mart; Zuidema, Pieter A
2015-05-01
Tree-ring analysis is often used to assess long-term trends in tree growth. A variety of growth-trend detection methods (GDMs) exist to disentangle age/size trends in growth from long-term growth changes. However, these detrending methods strongly differ in approach, with possible implications for their output. Here, we critically evaluate the consistency, sensitivity, reliability and accuracy of the four most widely used GDMs: conservative detrending (CD) applies mathematical functions to correct for decreasing ring widths with age; basal area correction (BAC) transforms diameter into basal area growth; regional curve standardization (RCS) detrends individual tree-ring series using average age/size trends; and size class isolation (SCI) calculates growth trends within separate size classes. First, we evaluated whether these GDMs produce consistent results when applied to an empirical tree-ring data set of Melia azedarach, a tropical tree species from Thailand. Three GDMs yielded similar results (a growth decline over time), but the widely used CD method did not detect any change. Second, we assessed the sensitivity (probability of correct growth-trend detection), reliability (100% minus probability of detecting false trends) and accuracy (whether the strength of imposed trends is correctly detected) of these GDMs, by applying them to simulated growth trajectories with different imposed trends: no trend, strong trends (-6% and +6% change per decade) and weak trends (-2%, +2%). All methods except CD showed high sensitivity, reliability and accuracy to detect strong imposed trends. However, these were considerably lower in the weak or no-trend scenarios. BAC showed good sensitivity and accuracy, but low reliability, indicating uncertainty of trend detection using this method. Our study reveals that the choice of GDM influences results of growth-trend studies. We recommend applying multiple methods when analysing trends and encourage performing sensitivity and reliability analysis. Finally, we recommend SCI and RCS, as these methods showed the highest reliability in detecting long-term growth trends. © 2014 John Wiley & Sons Ltd.
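To make the basal area correction (BAC) step concrete, here is a minimal sketch of converting a ring-width series into basal area increments; the ring-width values and units are hypothetical, and the actual GDM comparisons in the paper involve more careful standardization than this.

```python
import numpy as np

def basal_area_increment(ring_widths_mm: np.ndarray) -> np.ndarray:
    """Convert annual ring widths (mm) into basal area increments (mm^2).

    BAI_t = pi * (r_t^2 - r_{t-1}^2), where r_t is the cumulative radius at the
    end of year t. This is the basal area correction idea: express growth as
    cross-sectional area added per year rather than as raw ring width.
    """
    radii = np.cumsum(ring_widths_mm)                 # cumulative radius each year
    radii_prev = np.concatenate(([0.0], radii[:-1]))  # radius at start of each year
    return np.pi * (radii**2 - radii_prev**2)

# Hypothetical ring-width series (mm/year) for one tree
rw = np.array([2.0, 1.8, 1.7, 1.5, 1.4, 1.3, 1.2, 1.1])
print(basal_area_increment(rw).round(1))
```

Note that monotonically declining ring widths can still produce rising basal area increments, which is one reason BAC behaves differently from conservative detrending in trend studies.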
Wang, Shuoliang; Liu, Pengcheng; Zhao, Hui; Zhang, Yuan
2017-11-29
The micro-tube experiment is used to understand the mechanisms governing microscopic fluid percolation and is widely applied in both microelectromechanical engineering and petroleum engineering. However, the pressure difference measured across the experimental setup is not equal to the actual pressure difference across the microtube. Taking into account the additional pressure losses between the outlet of the microtube and the outlet of the entire setup, we propose a new method for predicting the dynamic capillary pressure using the Level-set method. We first demonstrate that the Level-set method reliably describes microscopic flow by comparing micro-model flow-test results against its predictions. In the proposed approach, the Level-set method is applied to predict the pressure distribution along the microtube when fluids flow through it at a given flow rate; the microtube used in the calculation has the same dimensions as the one used in the experiment. From the simulation results, the pressure difference across the curved interface (i.e., the dynamic capillary pressure) can be obtained directly. We also show that the dynamic capillary force must be properly evaluated in the micro-tube experiment in order to obtain the actual pressure difference across the microtube.
NASA Astrophysics Data System (ADS)
Stuchi, Teresa; Cardozo Dias, P.
2013-05-01
In a letter to Robert Hooke, Isaac Newton drew the orbit of a mass moving under a constant attracting central force. How he drew the orbit may indicate how and when he developed dynamic categories. Some historians claim that Newton used a method contrived by Hooke; others that he used some method of curvature. We prove geometrically that Hooke’s method is a second-order symplectic area-preserving algorithm, and that the method of curvature is a first-order algorithm without special features; we then integrate the Hamiltonian equations. Integration by the method of curvature can also be done by exploring geometric properties of curves. We compare three methods: Hooke’s method, the method of curvature, and a first-order method. A fourth-order algorithm sets a standard of comparison. We analyze which of these methods best explains Newton’s drawing.
NASA Astrophysics Data System (ADS)
Cardozo Dias, Penha Maria; Stuchi, T. J.
2013-11-01
In a letter to Robert Hooke, Isaac Newton drew the orbit of a mass moving under a constant attracting central force. The drawing of the orbit may indicate how and when Newton developed dynamic categories. Some historians claim that Newton used a method contrived by Hooke; others that he used some method of curvature. We prove that Hooke’s method is a second-order symplectic area-preserving algorithm, and the method of curvature is a first-order algorithm without special features; then we integrate the Hamiltonian equations. Integration by the method of curvature can also be done, exploring the geometric properties of curves. We compare three methods: Hooke’s method, the method of curvature and a first-order method. A fourth-order algorithm sets a standard of comparison. We analyze which of these methods best explains Newton’s drawing.
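As a hedged numerical illustration of the contrast drawn above, the sketch below integrates motion under a constant-magnitude central attraction with an impulse-then-straight-line ("kick-drift") update, which mirrors the structure of Hooke's construction, and with a plain first-order explicit Euler step. The force magnitude, step size, and initial conditions are arbitrary choices, not values from Newton's letter.

```python
import numpy as np

F = 1.0        # magnitude of the constant central attraction (arbitrary)
dt = 0.05      # step size (arbitrary)
steps = 2000

def accel(r):
    return -F * r / np.linalg.norm(r)   # constant-magnitude pull toward the origin

def energy(r, v):
    # For a force of constant magnitude F toward the origin, the potential is F*|r|.
    return 0.5 * v @ v + F * np.linalg.norm(r)

def hooke_step(r, v):
    # Impulse at the vertex (kick), then straight-line travel (drift).
    # With velocities read at mid-steps, this polygon of positions coincides with
    # the second-order leapfrog trajectory; the (r, v) map is area-preserving.
    v = v + dt * accel(r)
    return r + dt * v, v

def euler_step(r, v):
    # Plain first-order explicit Euler, neither symplectic nor area-preserving.
    return r + dt * v, v + dt * accel(r)

r0, v0 = np.array([1.0, 0.0]), np.array([0.0, 0.8])
E0 = energy(r0, v0)

r_h, v_h = r0.copy(), v0.copy()
r_e, v_e = r0.copy(), v0.copy()
for _ in range(steps):
    r_h, v_h = hooke_step(r_h, v_h)
    r_e, v_e = euler_step(r_e, v_e)

print("energy drift, Hooke-style kick-drift:", abs(energy(r_h, v_h) - E0))
print("energy drift, explicit Euler        :", abs(energy(r_e, v_e) - E0))
```

The kick-drift orbit stays within a bounded energy band, while the explicit Euler orbit accumulates a steadily growing energy error, which is the qualitative difference between the area-preserving and first-order schemes analyzed in the paper.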
Cohen, Emmanuel; Bernard, Jonathan Y.; Ponty, Amandine; Ndao, Amadou; Amougou, Norbert; Saïd-Mohamed, Rihlat; Pasquet, Patrick
2015-01-01
Background The social valorisation of overweight in African populations could promote high-risk eating behaviours and therefore become a risk factor of obesity. However, existing scales to assess body image are usually not accurate enough to allow comparative studies of body weight perception in different African populations. This study aimed to develop and validate the Body Size Scale (BSS) to estimate African body weight perception. Methods Anthropometric measures of 80 Cameroonians and 81 Senegalese were used to evaluate three criteria of adiposity: body mass index (BMI), overall percentage of fat, and endomorphy (the fat component of the somatotype). To develop the BSS, the participants were photographed in full face and profile positions. Models were selected for their representativeness of the wide variability in adiposity, with a progressive increase along the scale. Then, for the validation protocol, participants self-administered the BSS to assess self-perceived current body size (CBS) and desired body size (DBS) and to provide a “body self-satisfaction index.” This protocol included construct validity, test-retest reliability and convergent validity and was carried out with three independent samples of 201, 103 and 1,115 Cameroonians, respectively. Results The BSS comprises two sex-specific scales of nine model photos each, ordered by increasing adiposity. Most participants were able to correctly order the BSS by increasing adiposity, using three different words to define body size. Test-retest reliability was consistent in estimating CBS, DBS and the “body self-satisfaction index.” The CBS was highly correlated with objective BMI, and two different indexes assessed with the BSS were consistent with declarations obtained in interviews. Conclusion The BSS is the first scale with photos of real African models taken in both full face and profile and representing a wide and representative variability in adiposity. The validation protocol proved its reliability for estimating body weight perception in Africans. PMID:26536030