ERIC Educational Resources Information Center
Kang, Namjun
If content analysis is to satisfy the requirement of objectivity, measures and procedures must be reliable. Reliability is usually measured by the proportion of agreement of all categories identically coded by different coders. For such data to be empirically meaningful, a high degree of inter-coder reliability must be demonstrated. Researchers in…
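The agreement-based reliability measure described above can be sketched in a few lines. The two coders' category labels below are invented for illustration, and Cohen's kappa is added as the usual chance-corrected companion statistic:

```python
# Sketch: inter-coder reliability as proportion agreement plus Cohen's kappa.
# The coded units and category labels are hypothetical.
from collections import Counter

def percent_agreement(a, b):
    """Proportion of units coded identically by two coders."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected agreement between two coders."""
    n = len(a)
    po = percent_agreement(a, b)                       # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)  # chance agreement
    return (po - pe) / (1 - pe)

coder1 = ["pos", "neg", "pos", "neu", "pos", "neg", "neu", "pos"]
coder2 = ["pos", "neg", "neu", "neu", "pos", "pos", "neu", "pos"]
print(percent_agreement(coder1, coder2))   # 0.75
print(round(cohens_kappa(coder1, coder2), 3))
```

Kappa is lower than raw agreement because some identical codings are expected by chance alone.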
Bio-Oil Analysis Laboratory Procedures | Bioenergy | NREL
NREL develops standard procedures that have been validated and allow for reliable bio-oil analysis. The procedures include determination of the different hydroxyl groups (-OH) in pyrolysis bio-oil: aliphatic-OH, phenolic-OH, and carboxylic-OH.
Kuhn-Tucker optimization based reliability analysis for probabilistic finite elements
NASA Technical Reports Server (NTRS)
Liu, W. K.; Besterfield, G.; Lawrence, M.; Belytschko, T.
1988-01-01
The fusion of the probabilistic finite element method (PFEM) and reliability analysis for fracture mechanics is considered. Reliability analysis with specific application to fracture mechanics is presented, and computational procedures are discussed. Explicit expressions for the optimization procedure with regard to fracture mechanics are given. The results show that the PFEM is a very powerful tool in determining the second-moment statistics. The method can determine the probability of failure or fracture subject to randomness in load, material properties and crack length, orientation, and location.
Bayesian Inference for NASA Probabilistic Risk and Reliability Analysis
NASA Technical Reports Server (NTRS)
Dezfuli, Homayoon; Kelly, Dana; Smith, Curtis; Vedros, Kurt; Galyean, William
2009-01-01
This document, Bayesian Inference for NASA Probabilistic Risk and Reliability Analysis, is intended to provide guidelines for the collection and evaluation of risk and reliability-related data. It is aimed at scientists and engineers familiar with risk and reliability methods and provides a hands-on approach to the investigation and application of a variety of risk and reliability data assessment methods, tools, and techniques. This document provides both a broad perspective on data collection and evaluation issues and a narrow focus on the methods to implement a comprehensive information repository. The topics addressed herein cover the fundamentals of how data and information are to be used in risk and reliability analysis models and their potential role in decision making. Understanding these topics is essential to attaining the risk-informed decision-making environment sought by NASA requirements and procedures such as NPR 8000.4 (Agency Risk Management Procedural Requirements), NPR 8705.05 (Probabilistic Risk Assessment Procedures for NASA Programs and Projects), and the System Safety requirements of NPR 8715.3 (NASA General Safety Program Requirements).
Evaluation of Reliability Coefficients for Two-Level Models via Latent Variable Analysis
ERIC Educational Resources Information Center
Raykov, Tenko; Penev, Spiridon
2010-01-01
A latent variable analysis procedure for evaluation of reliability coefficients for 2-level models is outlined. The method provides point and interval estimates of group means' reliability, overall reliability of means, and conditional reliability. In addition, the approach can be used to test simple hypotheses about these parameters. The…
Meta-Analysis of Scale Reliability Using Latent Variable Modeling
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.
2013-01-01
A latent variable modeling approach is outlined that can be used for meta-analysis of reliability coefficients of multicomponent measuring instruments. Important limitations of efforts to combine composite reliability findings across multiple studies are initially pointed out. A reliability synthesis procedure is discussed that is based on…
5 CFR 841.411 - Appeals procedure.
Code of Federal Regulations, 2011 CFR
2011-01-01
... agency's actuarial analysis are sufficient and reliable (As a general rule, at least 5 years of data... reliable.); (2) The assumptions used in the agency's actuarial analysis are justified; (3) When all...
Reliability Generalization (RG) Analysis: The Test Is Not Reliable
ERIC Educational Resources Information Center
Warne, Russell
2008-01-01
Literature shows that most researchers are unaware of some of the characteristics of reliability. This paper clarifies some misconceptions by describing the procedures, benefits, and limitations of reliability generalization while using it to illustrate the nature of score reliability. Reliability generalization (RG) is a meta-analytic method…
Reliability Analysis for the Internationally Administered 2002 Series GED (General Educational Development) Tests
ERIC Educational Resources Information Center
Setzer, J. Carl; He, Yi
2009-01-01
Reliability refers to the consistency, or stability, of test scores when the measurement procedure is administered repeatedly to groups of examinees (American Educational Research Association [AERA], American Psychological…
Uncertainties in obtaining high reliability from stress-strength models
NASA Technical Reports Server (NTRS)
Neal, Donald M.; Matthews, William T.; Vangel, Mark G.
1992-01-01
There has been a recent interest in determining high statistical reliability in risk assessment of aircraft components. The potential consequences are identified of incorrectly assuming a particular statistical distribution for stress or strength data used in obtaining the high reliability values. The computation of the reliability is defined as the probability of the strength being greater than the stress over the range of stress values. This method is often referred to as the stress-strength model. A sensitivity analysis was performed involving a comparison of reliability results in order to evaluate the effects of assuming specific statistical distributions. Both known population distributions, and those that differed slightly from the known, were considered. Results showed substantial differences in reliability estimates even for almost nondetectable differences in the assumed distributions. These differences represent a potential problem in using the stress-strength model for high reliability computations, since in practice it is impossible to ever know the exact (population) distribution. An alternative reliability computation procedure is examined involving determination of a lower bound on the reliability values using extreme value distributions. This procedure reduces the possibility of obtaining nonconservative reliability estimates. Results indicated the method can provide conservative bounds when computing high reliability.
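A minimal Monte Carlo sketch of the stress-strength computation defined above, assuming (purely for illustration) normally distributed stress and strength; the report's point is precisely that such distributional assumptions dominate the answer at high reliability:

```python
# Monte Carlo sketch of the stress-strength model R = P(strength > stress).
# The normal distributions and their parameters are illustrative assumptions,
# not values from the report.
import random

def stress_strength_reliability(n=100_000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        stress = rng.gauss(300.0, 30.0)     # assumed stress ~ N(300, 30)
        strength = rng.gauss(450.0, 40.0)   # assumed strength ~ N(450, 40)
        hits += strength > stress
    return hits / n

r = stress_strength_reliability()
print(round(r, 4))   # analytic value is Phi(3) ~ 0.9987
```

Shifting either distribution's tail only slightly moves this estimate by more than the margin that matters in high-reliability work, which motivates the lower-bound procedure the abstract describes.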
Meta-Analysis of Coefficient Alpha
ERIC Educational Resources Information Center
Rodriguez, Michael C.; Maeda, Yukiko
2006-01-01
The meta-analysis of coefficient alpha across many studies is becoming more common in psychology by a methodology labeled reliability generalization. Existing reliability generalization studies have not used the sampling distribution of coefficient alpha for precision weighting and other common meta-analytic procedures. A framework is provided for…
1992-04-01
contractor’s existing data collection, analysis and corrective action system shall be utilized, with modification only as necessary to meet the…either from test or from analysis of field data. The procedures of MIL-STD-756B assume that the reliability of a…to generate sufficient data to report a statistically valid reliability figure for a class of software. Casual data gathering accumulates data more…
Southern forest inventory and analysis volume equation user’s guide
Christopher M. Oswalt; Roger C. Conner
2011-01-01
Reliable volume estimation procedures are fundamental to the mission of the Forest Inventory and Analysis (FIA) program. Moreover, public access to FIA program procedures is imperative. Here we present the volume estimation procedures used by the southern FIA program of the U.S. Department of Agriculture Forest Service Southern Research Station. The guide presented...
Retest Reliability of the Rosenzweig Picture-Frustration Study and Similar Semiprojective Techniques
ERIC Educational Resources Information Center
Rosenzweig, Saul; And Others
1975-01-01
The research dealing with the reliability of the Rosenzweig Picture-Frustration Study is surveyed. Analysis of various split-half, and retest procedures are reviewed and their relative effectiveness evaluated. Reliability measures as applied to projective techniques in general are discussed. (Author/DEP)
Validity and Reliability of the School Physical Activity Environment Questionnaire
ERIC Educational Resources Information Center
Martin, Jeffrey J.; McCaughtry, Nate; Flory, Sara; Murphy, Anne; Wisdom, Kimberlydawn
2011-01-01
The goal of the current study was to establish the factor validity of the Questionnaire Assessing School Physical Activity Environment (Robertson-Wilson, Levesque, & Holden, 2007) using confirmatory factor analysis procedures. Another goal was to establish internal reliability and test-retest reliability. The confirmatory factor analysis…
Hart, Nicolas H.; Nimphius, Sophia; Spiteri, Tania; Cochrane, Jodie L.; Newton, Robert U.
2015-01-01
Musculoskeletal examinations provide informative and valuable quantitative insight into muscle and bone health. DXA is one mainstream tool used to accurately and reliably determine body composition components and bone mass characteristics in-vivo. Presently, whole-body scan models separate the body into axial and appendicular regions; however, there is a need for localised appendicular segmentation models to further examine regions of interest within the upper and lower extremities. Similarly, inconsistencies pertaining to patient positioning exist in the literature that influence measurement precision and analysis outcomes, highlighting a need for a standardised procedure. This paper provides standardised and reproducible: 1) positioning and analysis procedures using DXA and 2) reliable segmental examinations through descriptive appendicular boundaries. Whole-body scans were performed on forty-six (n = 46) football athletes (age: 22.9 ± 4.3 yrs; height: 1.85 ± 0.07 m; weight: 87.4 ± 10.3 kg; body fat: 11.4 ± 4.5 %) using DXA. All segments across all scans were analysed three times by the main investigator on three separate days, and by three independent investigators a week following the original analysis. To examine intra-rater and inter-rater reliability between days and between researchers, coefficients of variation (CV) and intraclass correlation coefficients (ICC) were determined. Positioning and segmental analysis procedures presented in this study produced very high, nearly perfect intra-tester (CV ≤ 2.0%; ICC ≥ 0.988) and inter-tester (CV ≤ 2.4%; ICC ≥ 0.980) reliability, demonstrating excellent reproducibility within and between practitioners. Standardised examinations of axial and appendicular segments are necessary. Future studies aiming to quantify and report segmental analyses of the upper- and lower-body musculoskeletal properties using whole-body DXA scans are encouraged to use the patient positioning and image analysis procedures outlined in this paper.
Key points Musculoskeletal examinations using DXA technology require highly standardised and reproducible patient positioning and image analysis procedures to accurately measure and monitor axial, appendicular and segmental regions of interest. Internal rotation and fixation of the lower-limbs is strongly recommended during whole-body DXA scans to prevent undesired movement, improve frontal mass accessibility and enhance ankle joint visibility during scan performance and analysis. Appendicular segmental analyses using whole-body DXA scans are highly reliable for all regional upper-body and lower-body segmentations, with hard-tissue (CV ≤ 1.5%; R ≥ 0.990) achieving greater reliability and lower error than soft-tissue (CV ≤ 2.4%; R ≥ 0.980) masses when using our appendicular segmental boundaries. PMID:26336349
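The reliability statistics reported above (CV% and ICC) can be sketched as follows. The repeated lean-mass readings are hypothetical, and the ICC shown is the simple one-way random-effects form, not necessarily the exact model used in the study:

```python
# Sketch: between-day coefficient of variation (CV%) and a one-way
# random-effects ICC for repeated segment measurements.
# The "athletes x analysis days" matrix below is invented, not study data.
import numpy as np

def cv_percent(x):
    """CV% of one subject's repeated measurements."""
    x = np.asarray(x, dtype=float)
    return 100.0 * x.std(ddof=1) / x.mean()

def icc_oneway(data):
    """ICC(1,1): data shaped (subjects, repeated measurements)."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    msb = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # between-subject
    msw = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))  # within
    return (msb - msw) / (msb + (k - 1) * msw)

# rows = athletes, columns = analysis days (hypothetical lean mass, kg)
lean = np.array([[9.1, 9.2, 9.1],
                 [10.4, 10.5, 10.4],
                 [8.7, 8.6, 8.7],
                 [11.2, 11.3, 11.2]])
print(round(cv_percent(lean[0]), 2))
print(round(icc_oneway(lean), 3))
```

With small within-subject scatter relative to between-subject differences, the ICC approaches 1, mirroring the "nearly perfect" values the study reports.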
Orbiter Autoland reliability analysis
NASA Technical Reports Server (NTRS)
Welch, D. Phillip
1993-01-01
The Space Shuttle Orbiter is the only space reentry vehicle in which the crew is seated upright. This position presents some physiological effects requiring countermeasures to prevent a crewmember from becoming incapacitated. This also introduces a potential need for automated vehicle landing capability. Autoland is a primary procedure that was identified as a requirement for landing following an extended duration orbiter mission. This report documents the results of the reliability analysis performed on the hardware required for an automated landing. A reliability block diagram was used to evaluate system reliability. The analysis considers the manual and automated landing modes currently available on the Orbiter. (Autoland is presently a backup system only.) Results of this study indicate a +/- 36 percent probability of successfully extending a nominal mission to 30 days. Enough variations were evaluated to verify that the reliability could be altered with mission planning and procedures. If the crew is modeled as being fully capable after 30 days, the probability of a successful manual landing is comparable to that of Autoland because much of the hardware is used for both manual and automated landing modes. The analysis indicates that the reliability for the manual mode is limited by the hardware and depends greatly on crew capability. Crew capability for a successful landing after 30 days has not yet been determined.
NASA trend analysis procedures
NASA Technical Reports Server (NTRS)
1993-01-01
This publication is primarily intended for use by NASA personnel engaged in managing or implementing trend analysis programs. 'Trend analysis' refers to the observation of current activity in the context of the past in order to infer the expected level of future activity. NASA trend analysis was divided into 5 categories: problem, performance, supportability, programmatic, and reliability. Problem trend analysis uncovers multiple occurrences of historical hardware or software problems or failures in order to focus future corrective action. Performance trend analysis observes changing levels of real-time or historical flight vehicle performance parameters such as temperatures, pressures, and flow rates as compared to specification or 'safe' limits. Supportability trend analysis assesses the adequacy of the spaceflight logistics system; example indicators are repair-turn-around time and parts stockage levels. Programmatic trend analysis uses quantitative indicators to evaluate the 'health' of NASA programs of all types. Finally, reliability trend analysis attempts to evaluate the growth of system reliability based on a decreasing rate of occurrence of hardware problems over time. Procedures for conducting all five types of trend analysis are provided in this publication, prepared through the joint efforts of the NASA Trend Analysis Working Group.
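As a minimal illustration of the trend-analysis idea above (observing the past to infer the expected level of future activity), a least-squares line fitted to hypothetical per-quarter problem counts yields a one-period forecast; a negative slope is consistent with reliability growth:

```python
# Sketch: the simplest problem/reliability trend check -- fit a linear
# trend to per-period problem counts and extrapolate one period ahead.
# The quarterly counts are invented for illustration.
def linear_trend(y):
    """Least-squares slope and intercept for evenly spaced periods 0..n-1."""
    n = len(y)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(y) / n
    slope = sum((x - mx) * (v - my) for x, v in zip(xs, y)) / \
            sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    return slope, intercept

counts = [14, 11, 9, 8, 6, 5]          # hardware problems per quarter
slope, intercept = linear_trend(counts)
forecast = slope * len(counts) + intercept   # expected next-quarter count
```

A decreasing rate of problem occurrence over time is exactly the signal the reliability trend category looks for; real programs would use a formal growth model rather than a straight line.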
Application of the differential decay-curve method to γ-γ fast-timing lifetime measurements
NASA Astrophysics Data System (ADS)
Petkov, P.; Régis, J.-M.; Dewald, A.; Kisyov, S.
2016-10-01
A new procedure is proposed for the analysis of delayed-coincidence lifetime experiments, focused on the fast-timing case, following the approach of the differential decay-curve method. Examples of applying the procedure to experimental data demonstrate its reliability for lifetimes even in the sub-nanosecond range. The procedure is expected to improve both precision and reliability, improve the treatment of systematic errors and scarce data, and provide an option for cross-checks with results obtained by other analysis methods.
A Model for Estimating the Reliability and Validity of Criterion-Referenced Measures.
ERIC Educational Resources Information Center
Edmonston, Leon P.; Randall, Robert S.
A decision model designed to determine the reliability and validity of criterion referenced measures (CRMs) is presented. General procedures which pertain to the model are discussed as to: Measures of relationship, Reliability, Validity (content, criterion-oriented, and construct validation), and Item Analysis. The decision model is presented in…
Reliable and valid assessment of Lichtenstein hernia repair skills.
Carlsen, C G; Lindorff-Larsen, K; Funch-Jensen, P; Lund, L; Charles, P; Konge, L
2014-08-01
Lichtenstein hernia repair is a common surgical procedure and one of the first procedures performed by a surgical trainee. However, formal assessment tools developed for this procedure are few and sparsely validated. The aim of this study was to determine the reliability and validity of an assessment tool designed to measure surgical skills in Lichtenstein hernia repair. Key issues were identified through a focus group interview. On this basis, an assessment tool with eight items was designed. Ten surgeons and surgical trainees (four experts, three intermediates, and three novices) were video recorded while performing Lichtenstein hernia repair. The videos were blindly and individually assessed by three raters (surgical consultants) using the assessment tool. Based on these assessments, validity and reliability were explored. The internal consistency of the items was high (Cronbach's alpha = 0.97). The inter-rater reliability was very good, with an intra-class correlation coefficient (ICC) = 0.93. Generalizability analysis showed a coefficient above 0.8 even with one rater. The coefficient improved to 0.92 if three raters were used. One-way analysis of variance found a significant difference between the three groups, which indicates construct validity, p < 0.001. Lichtenstein hernia repair skills can be assessed blindly by a single rater in a reliable and valid fashion with the new procedure-specific assessment tool. We recommend this tool for future assessment of trainees performing Lichtenstein hernia repair to ensure that the objectives of competency-based surgical training are met.
Structural reliability analysis of laminated CMC components
NASA Technical Reports Server (NTRS)
Duffy, Stephen F.; Palko, Joseph L.; Gyekenyesi, John P.
1991-01-01
For laminated ceramic matrix composite (CMC) materials to realize their full potential in aerospace applications, design methods and protocols are a necessity. The focus here is the time-independent failure response of these materials, and a reliability analysis associated with the initiation of matrix cracking is presented. A public domain computer algorithm is highlighted that was coupled with the laminate analysis of a finite element code and serves as a design aid for analyzing structural components made from laminated CMC materials. Issues relevant to the effect of component size are discussed, and a parameter estimation procedure is presented. The estimation procedure allows three parameters to be calculated from a failure population that has an underlying Weibull distribution.
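A sketch of Weibull parameter estimation from a failure population, simplified to two parameters (shape and scale) via median-rank regression; the failure strengths below are invented, and the abstract's actual procedure estimates three parameters:

```python
# Sketch: two-parameter Weibull fit (shape beta, scale eta) by
# median-rank regression. A simplification of the three-parameter
# estimation described in the abstract; failure data are hypothetical.
import math

def weibull_mrr(failures):
    x = sorted(failures)
    n = len(x)
    # Bernard's median-rank approximation for plotting positions
    pts = [((i - 0.3) / (n + 0.4), xi) for i, xi in enumerate(x, start=1)]
    xs = [math.log(xi) for _, xi in pts]
    ys = [math.log(-math.log(1.0 - f)) for f, _ in pts]
    mx, my = sum(xs) / n, sum(ys) / n
    # ln(-ln(1-F)) = beta*ln(t) - beta*ln(eta): slope gives beta
    beta = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / \
           sum((a - mx) ** 2 for a in xs)
    eta = math.exp(mx - my / beta)
    return beta, eta

strengths = [310, 350, 370, 390, 405, 420, 440, 465, 480, 510]  # hypothetical
beta, eta = weibull_mrr(strengths)
```

Tightly clustered failure data give a large shape parameter; the scale parameter lands near the characteristic strength (the 63.2nd percentile of the population).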
Structural system reliability calculation using a probabilistic fault tree analysis method
NASA Technical Reports Server (NTRS)
Torng, T. Y.; Wu, Y.-T.; Millwater, H. R.
1992-01-01
The development of a new probabilistic fault tree analysis (PFTA) method for calculating structural system reliability is summarized. The proposed PFTA procedure includes: developing a fault tree to represent the complex structural system, constructing an approximation function for each bottom event, determining a dominant sampling sequence for all bottom events, and calculating the system reliability using an adaptive importance sampling method. PFTA is suitable for complicated structural problems that require computationally intensive calculations. A computer program has been developed to implement the PFTA.
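The PFTA flow above can be caricatured with a toy fault tree; the bottom-event probabilities are invented, and plain Monte Carlo stands in for the adaptive importance sampling of the actual method:

```python
# Sketch: Monte Carlo evaluation of a tiny probabilistic fault tree.
# Top event = (E1 AND E2) OR E3. Bottom-event probabilities are
# hypothetical; exact answer is p1*p2 + p3 - p1*p2*p3 = 0.01495.
import random

def system_failure_prob(p1=0.05, p2=0.10, p3=0.01, n=200_000, seed=7):
    rng = random.Random(seed)
    fails = 0
    for _ in range(n):
        e1 = rng.random() < p1
        e2 = rng.random() < p2
        e3 = rng.random() < p3
        fails += (e1 and e2) or e3
    return fails / n

print(system_failure_prob())
```

For rare top events, plain sampling wastes most draws on non-failures, which is exactly why the method described above resorts to importance sampling.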
Reliability models: the influence of model specification in generation expansion planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stremel, J.P.
1982-10-01
This paper is a critical evaluation of reliability methods used for generation expansion planning. It is shown that the methods for treating uncertainty are critical for determining the relative reliability value of expansion alternatives. It is also shown that the specification of the reliability model will not favor all expansion options equally. Consequently, the model is biased. In addition, reliability models should be augmented with an economic value of reliability (such as the cost of emergency procedures or energy not served). Generation expansion evaluations which ignore the economic value of excess reliability can be shown to be inconsistent. The conclusions are that, in general, a reliability model simplifies generation expansion planning evaluations. However, for a thorough analysis, the expansion options should be reviewed for candidates which may be unduly rejected because of the bias of the reliability model. And this implies that for a consistent formulation in an optimization framework, the reliability model should be replaced with a full economic optimization which includes the costs of emergency procedures and interruptions in the objective function.
Covariate-free and Covariate-dependent Reliability.
Bentler, Peter M
2016-12-01
Classical test theory reliability coefficients are said to be population specific. Reliability generalization, a meta-analysis method, is the main procedure for evaluating the stability of reliability coefficients across populations. A new approach is developed to evaluate the degree of invariance of reliability coefficients to population characteristics. Factor or common variance of a reliability measure is partitioned into parts that are, and are not, influenced by control variables, resulting in a partition of reliability into a covariate-dependent and a covariate-free part. The approach can be implemented in a single sample and can be applied to a variety of reliability coefficients.
Field reliability of competency and sanity opinions: A systematic review and meta-analysis.
Guarnera, Lucy A; Murrie, Daniel C
2017-06-01
We know surprisingly little about the interrater reliability of forensic psychological opinions, even though courts and other authorities have long called for known error rates for scientific procedures admitted as courtroom testimony. This is particularly true for opinions produced during routine practice in the field, even for some of the most common types of forensic evaluations: evaluations of adjudicative competency and legal sanity. To address this gap, we used meta-analytic procedures and study space methodology to systematically review studies that examined the interrater reliability (particularly the field reliability) of competency and sanity opinions. Of 59 identified studies, 9 addressed the field reliability of competency opinions and 8 addressed the field reliability of sanity opinions. These studies presented a wide range of reliability estimates; pairwise percentage agreements ranged from 57% to 100% and kappas ranged from .28 to 1.0. Meta-analytic combinations of reliability estimates obtained by independent evaluators returned estimates of κ = .49 (95% CI: .40-.58) for competency opinions and κ = .41 (95% CI: .29-.53) for sanity opinions. This wide range of reliability estimates underscores the extent to which different evaluation contexts tend to produce different reliability rates. Unfortunately, our study space analysis illustrates that available field reliability studies typically provide little information about contextual variables crucial to understanding their findings. Given these concerns, we offer suggestions for improving research on the field reliability of competency and sanity opinions, as well as suggestions for improving reliability rates themselves.
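The meta-analytic combination of kappa estimates reported above takes, in its simplest fixed-effect form, an inverse-variance weighting; the study-level kappas and standard errors below are hypothetical:

```python
# Sketch: fixed-effect inverse-variance pooling of kappa estimates with a
# 95% CI, the generic form of the meta-analytic combination described in
# the abstract. Study kappas and standard errors are invented.
import math

def pool_fixed(kappas, ses):
    w = [1.0 / (s * s) for s in ses]                 # inverse-variance weights
    k_bar = sum(wi * ki for wi, ki in zip(w, kappas)) / sum(w)
    se = math.sqrt(1.0 / sum(w))
    ci = (k_bar - 1.96 * se, k_bar + 1.96 * se)
    return k_bar, ci

kappas = [0.35, 0.52, 0.44, 0.61]    # hypothetical per-study field kappas
ses = [0.08, 0.05, 0.10, 0.07]
k_bar, (ci_lo, ci_hi) = pool_fixed(kappas, ses)
```

Precise studies dominate the pooled value; a random-effects version would widen the interval when, as here, evaluation contexts genuinely differ.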
NASA Technical Reports Server (NTRS)
Rais-Rohani, Masoud
2001-01-01
This report describes the preliminary results of an investigation on component reliability analysis and reliability-based design optimization of thin-walled circular composite cylinders with average diameter and average length of 15 inches. Structural reliability is based on axial buckling strength of the cylinder. Both Monte Carlo simulation and First Order Reliability Method are considered for reliability analysis with the latter incorporated into the reliability-based structural optimization problem. To improve the efficiency of reliability sensitivity analysis and design optimization solution, the buckling strength of the cylinder is estimated using a second-order response surface model. The sensitivity of the reliability index with respect to the mean and standard deviation of each random variable is calculated and compared. The reliability index is found to be extremely sensitive to the applied load and elastic modulus of the material in the fiber direction. The cylinder diameter was found to have the third highest impact on the reliability index. Also the uncertainty in the applied load, captured by examining different values for its coefficient of variation, is found to have a large influence on cylinder reliability. The optimization problem for minimum weight is solved subject to a design constraint on element reliability index. The methodology, solution procedure and optimization results are included in this report.
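For a linear limit state with independent normal variables, the reliability index and its sensitivities to each mean and standard deviation (the quantities compared in the report) have closed forms; the strength and load statistics below are illustrative, not the cylinder's:

```python
# Sketch: reliability index for the linear limit state g = R - S with
# independent normal R (capacity) and S (load), plus the sensitivity of
# beta to each mean and standard deviation. Numbers are hypothetical.
import math

def beta_and_sensitivities(mu_R, sd_R, mu_S, sd_S):
    sd_g = math.hypot(sd_R, sd_S)          # std. dev. of the limit state
    beta = (mu_R - mu_S) / sd_g            # reliability index
    sens = {
        "dbeta/dmu_R": 1.0 / sd_g,
        "dbeta/dmu_S": -1.0 / sd_g,
        "dbeta/dsd_R": -beta * sd_R / sd_g**2,
        "dbeta/dsd_S": -beta * sd_S / sd_g**2,
    }
    return beta, sens

beta, sens = beta_and_sensitivities(mu_R=1200.0, sd_R=100.0,
                                    mu_S=800.0, sd_S=80.0)
```

Ranking the magnitudes of these sensitivities is what identifies the dominant random variables, analogous to the load and fiber-direction modulus found most influential in the report.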
Reliability techniques for computer executive programs
NASA Technical Reports Server (NTRS)
1972-01-01
Computer techniques for increasing the stability and reliability of executive and supervisory systems were studied. Program segmentation characteristics are discussed along with a validation system designed to retain the natural top-down outlook in coding. An analysis of redundancy techniques and rollback procedures is included.
Reliability Analysis of Money Habitudes
ERIC Educational Resources Information Center
Delgadillo, Lucy M.; Bushman, Brittani S.
2015-01-01
Use of the Money Habitudes exercise has gained popularity among various financial professionals. This article reports on the reliability of this resource. A survey administered to young adults at a western state university was conducted, and each Habitude or "domain" was analyzed using Cronbach's alpha procedures. Results showed all six…
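Cronbach's alpha, the statistic applied to each Habitude domain, can be computed directly from an item-score matrix; the 5x4 response matrix below is invented for illustration:

```python
# Sketch: Cronbach's alpha for one domain from item-level scores.
# The respondents-by-items matrix is hypothetical.
import numpy as np

def cronbach_alpha(items):
    """items: (respondents, items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

scores = np.array([[4, 5, 4, 4],
                   [2, 2, 3, 2],
                   [5, 4, 5, 5],
                   [3, 3, 3, 4],
                   [1, 2, 1, 2]])
alpha = cronbach_alpha(scores)
```

Alpha approaches 1 when the items move together across respondents, as these invented items do.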
US line-ups outperform UK line-ups
Seale-Carlisle, Travis M.
2016-01-01
In the USA and the UK, many thousands of police suspects are identified by eyewitnesses every year. Unfortunately, many of those suspects are innocent, which becomes evident when they are exonerated by DNA testing, often after having been imprisoned for years. It is, therefore, imperative to use identification procedures that best enable eyewitnesses to discriminate innocent from guilty suspects. Although police investigators in both countries often administer line-up procedures, the details of how line-ups are presented are quite different and an important direct comparison has yet to be conducted. We investigated whether these two line-up procedures differ in terms of (i) discriminability (using receiver operating characteristic analysis) and (ii) reliability (using confidence–accuracy characteristic analysis). A total of 2249 participants watched a video of a crime and were later tested using either a six-person simultaneous photo line-up procedure (USA) or a nine-person sequential video line-up procedure (UK). The US line-up procedure yielded significantly higher discriminability and significantly higher reliability. The results do not pinpoint the reason for the observed difference between the two procedures, but they do suggest that there is much room for improvement with the UK line-up. PMID:27703695
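Discriminability comparisons of this kind rest on the area under the ROC curve, which for raw scores reduces to the rank (Mann-Whitney) statistic; the per-witness confidence scores below are hypothetical:

```python
# Sketch: ROC area by the rank (Mann-Whitney) formulation -- the
# probability that a randomly chosen guilty-suspect identification
# outscores an innocent-suspect one. Scores are invented.
def auc(signal, noise):
    """P(random signal score > random noise score), ties counted half."""
    pairs = [(s > n0) + 0.5 * (s == n0) for s in signal for n0 in noise]
    return sum(pairs) / len(pairs)

guilty_scores = [0.9, 0.8, 0.75, 0.6, 0.55]    # confidence, guilty suspect
innocent_scores = [0.4, 0.5, 0.3, 0.6, 0.2]    # confidence, innocent suspect
print(auc(guilty_scores, innocent_scores))
```

An AUC of 0.5 means no discriminability; comparing two procedures amounts to comparing their AUCs (or, in the study itself, partial AUCs over the operated range).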
Code of Federal Regulations, 2010 CFR
2010-01-01
... assessment, it will seek evidence relevant to the assessment, including an analysis of the military needs of a selected country or countries, technical analysis, and intelligence information from the.... (c) Analysis. BIS will conduct its analysis by evaluating whether the reasonable and reliable...
Christopher M. Oswalt; Adam M. Saunders
2009-01-01
Sound estimation procedures are a desideratum for generating credible population estimates to evaluate the status and trends in resource conditions. As such, volume estimation is an integral component of the U.S. Department of Agriculture, Forest Service, Forest Inventory and Analysis (FIA) program's reporting. In effect, reliable volume estimation procedures are...
NASA Technical Reports Server (NTRS)
Lee, Alice T.; Gunn, Todd; Pham, Tuan; Ricaldi, Ron
1994-01-01
This handbook documents the three software analysis processes the Space Station Software Analysis team uses to assess space station software, including their backgrounds, theories, tools, and analysis procedures. Potential applications of these analysis results are also presented. The first section describes how software complexity analysis provides quantitative information on code, such as code structure and risk areas, throughout the software life cycle. Software complexity analysis allows an analyst to understand the software structure, identify critical software components, assess risk areas within a software system, identify testing deficiencies, and recommend program improvements. Performing this type of analysis during the early design phases of software development can positively affect the process, and may prevent later, much larger, difficulties. The second section describes how software reliability estimation and prediction analysis, or software reliability, provides a quantitative means to measure the probability of failure-free operation of a computer program, and describes the two tools used by JSC to determine failure rates and design tradeoffs between reliability, costs, performance, and schedule.
A Bayesian approach to parameter and reliability estimation in the Poisson distribution.
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1972-01-01
For life testing procedures, a Bayesian analysis is developed with respect to a random intensity parameter in the Poisson distribution. Bayes estimators are derived for the Poisson parameter and the reliability function based on uniform and gamma prior distributions of that parameter. A Monte Carlo procedure is implemented to make possible an empirical mean-squared error comparison between Bayes and existing minimum variance unbiased, as well as maximum likelihood, estimators. As expected, the Bayes estimators have mean-squared errors that are appreciably smaller than those of the other two.
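The conjugate Poisson-gamma update behind the Bayes estimators reads, in sketch form (prior hyperparameters and observed counts invented):

```python
# Sketch: Bayes estimate of a Poisson rate under a conjugate gamma prior,
# matching the setup described in the abstract. Hyperparameters and the
# observed counts are hypothetical.
def poisson_gamma_posterior(counts, a=2.0, b=1.0):
    """Gamma(a, b) prior (shape a, rate b) updated by Poisson counts."""
    a_post = a + sum(counts)     # shape gains the total count
    b_post = b + len(counts)     # rate gains the number of intervals
    return a_post, b_post

counts = [3, 1, 4, 2, 2]             # observed failures per interval
a_post, b_post = poisson_gamma_posterior(counts)
lam_bayes = a_post / b_post          # posterior mean of the rate
mle = sum(counts) / len(counts)      # maximum-likelihood estimate, for contrast
```

The Bayes estimate shrinks the sample rate toward the prior mean, which is the mechanism behind the smaller mean-squared errors reported in the abstract's Monte Carlo comparison.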
Sørensen, Hans Eibe; Slater, Stanley F
2008-08-01
Atheoretical measure purification may lead to construct-deficient measures. The purpose of this paper is to provide a theoretically driven procedure for the development and empirical validation of symmetric component measures of multidimensional constructs. Particular emphasis is placed on establishing a formalized three-step procedure for achieving a posteriori content validity. The procedure is then applied to the development and empirical validation of two symmetric component measures of market orientation: customer orientation and competitor orientation. Analysis suggests that average variance extracted is particularly critical to reliability in the respecification of multi-indicator measures. In relation to this, the results also identify possible deficiencies in using Cronbach's alpha for establishing reliable and valid measures.
NASA Technical Reports Server (NTRS)
Gerke, R. David; Sandor, Mike; Agarwal, Shri; Moor, Andrew F.; Cooper, Kim A.
2000-01-01
Engineers within the commercial and aerospace industries are using trade-off and risk analysis to aid in reducing spacecraft system cost while increasing performance and maintaining high reliability. In many cases, Commercial Off-The-Shelf (COTS) components, which include Plastic Encapsulated Microcircuits (PEMs), are candidate packaging technologies for spacecraft due to their lower cost, lower weight, and enhanced functionality. Establishing and implementing a parts program that effectively and reliably makes use of these potentially less reliable, but state-of-the-art, devices has become a significant portion of the job for the parts engineer. Assembling a reliable high-performance electronic system that includes COTS components requires that the end user assume a risk. To minimize that risk, companies have developed methodologies by which they use accelerated stress testing to assess the product and reduce the risk to the total system. Currently, there are no industry-standard procedures for accomplishing this risk mitigation. This paper presents the approaches for reducing the risk of using PEMs devices in space flight systems as developed by two independent laboratories. The JPL procedure is primarily a tailored screening with an accelerated-stress philosophy, while the APL procedure is primarily a lot-qualification procedure. Both laboratories have successfully reduced the risk of using the particular devices for their respective systems and mission requirements.
Interpreting Variance Components as Evidence for Reliability and Validity.
ERIC Educational Resources Information Center
Kane, Michael T.
The reliability and validity of measurement is analyzed by a sampling model based on generalizability theory. A model for the relationship between a measurement procedure and an attribute is developed from an analysis of how measurements are used and interpreted in science. The model provides a basis for analyzing the concept of an error of…
Transferable Competences of Young People with a High Dropout Risk in Vocational Training in Germany
ERIC Educational Resources Information Center
Frey, Andreas; Balzer, Lars; Ruppert, Jean-Jacques
2014-01-01
This paper examines whether the subjective beliefs on their competences of 409 trainees in machinery, sales, and logistics constitute a reliable and valid way to measure transferable competences. The analysis of results attributes satisfactory to good reliability values to the assessment procedure. Furthermore, it could be shown that young people…
Accelerated stress testing of terrestrial solar cells
NASA Technical Reports Server (NTRS)
Prince, J. L.; Lathrop, J. W.
1979-01-01
A program to investigate the reliability characteristics of unencapsulated low-cost terrestrial solar cells using accelerated stress testing is described. Reliability (or parametric degradation) factors appropriate to the cell technologies and use conditions were studied and a series of accelerated stress tests was synthesized. An electrical measurement procedure and a data analysis and management system were derived, and stress-test fixturing and material-flow procedures were set up after consideration was given to the number of cells to be stress tested and measured and the nature of the information to be obtained from the process. Selected results and conclusions are presented.
Surface electrical properties experiment study phase, volume 3
NASA Technical Reports Server (NTRS)
1973-01-01
The reliability and quality assurance system and procedures used in developing test equipment for the Lunar Experiment projects are described. The subjects discussed include the following: (1) documentation control, (2) design review, (3) parts and materials selection, (4) material procurement, (5) inspection procedures, (6) qualification and special testing, and (7) failure modes and effects analysis.
Reliability of sensor-based real-time workflow recognition in laparoscopic cholecystectomy.
Kranzfelder, Michael; Schneider, Armin; Fiolka, Adam; Koller, Sebastian; Reiser, Silvano; Vogel, Thomas; Wilhelm, Dirk; Feussner, Hubertus
2014-11-01
Laparoscopic cholecystectomy is a very common minimally invasive surgical procedure that may be improved by autonomous or cooperative assistance support systems. Model-based surgery with a precise definition of distinct procedural tasks (PT) of the operation was implemented and tested to depict and analyze the process of this procedure. Reliability of real-time workflow recognition in laparoscopic cholecystectomy ([Formula: see text] cases) was evaluated by continuous sensor-based data acquisition. Ten PTs were defined including begin/end preparation of Calot's triangle, clipping/cutting cystic artery and duct, begin/end gallbladder dissection, begin/end hemostasis, gallbladder removal, and end of operation. Data acquisition was achieved with continuous instrument detection, room/table light status, intra-abdominal pressure, table tilt, irrigation/aspiration volume, and coagulation/cutting current application. Two independent observers recorded the start and endpoint of each step by analysis of the sensor data. The data were cross-checked with laparoscopic video recordings serving as the gold standard for PT identification. Bland-Altman analysis revealed for 95% of cases a difference of annotation results within the limits of agreement, ranging from [Formula: see text]309 s (PT 7) to +368 s (PT 5). Laparoscopic video and sensor data matched to a greater or lesser extent within the different procedural tasks. In the majority of cases, the observer results exceeded those obtained from the laparoscopic video. Empirical knowledge was required to detect phase transit. A set of sensors used to monitor laparoscopic cholecystectomy procedures was sufficient to enable expert observers to reliably identify each PT. In the future, computer systems may automate the task identification process provided a more robust data inflow is available.
Automatic yield-line analysis of slabs using discontinuity layout optimization
Gilbert, Matthew; He, Linwei; Smith, Colin C.; Le, Canh V.
2014-01-01
The yield-line method of analysis is a long established and extremely effective means of estimating the maximum load sustainable by a slab or plate. However, although numerous attempts to automate the process of directly identifying the critical pattern of yield-lines have been made over the past few decades, to date none has proved capable of reliably analysing slabs of arbitrary geometry. Here, it is demonstrated that the discontinuity layout optimization (DLO) procedure can successfully be applied to such problems. The procedure involves discretization of the problem using nodes inter-connected by potential yield-line discontinuities, with the critical layout of these then identified using linear programming. The procedure is applied to various benchmark problems, demonstrating that highly accurate solutions can be obtained, and showing that DLO provides a truly systematic means of directly and reliably automatically identifying yield-line patterns. Finally, since the critical yield-line patterns for many problems are found to be quite complex in form, a means of automatically simplifying these is presented. PMID:25104905
Highly reliable oxide VCSELs for datacom applications
NASA Astrophysics Data System (ADS)
Aeby, Ian; Collins, Doug; Gibson, Brian; Helms, Christopher J.; Hou, Hong Q.; Lou, Wenlin; Bossert, David J.; Wang, Charlie X.
2003-06-01
In this paper we describe the processes and procedures that have been developed to ensure high reliability for Emcore's 850 nm oxide-confined GaAs VCSELs. Evidence from ongoing accelerated life testing and other reliability studies confirming that this process yields reliable products is discussed. We present data and analysis techniques used to determine the activation energy and acceleration factors for the dominant wear-out failure mechanisms for our devices, as well as our estimated MTTF of greater than 2 million use hours. We conclude with a summary of internal verification and field-return-rate validation data.
NASA Astrophysics Data System (ADS)
Martowicz, Adam; Uhl, Tadeusz
2012-10-01
The paper discusses the applicability of a reliability- and performance-based multi-criteria robust design optimization technique for micro-electromechanical systems, considering their technological uncertainties. Micro-devices are now commonly applied, especially in the automotive industry, taking advantage of combining both the mechanical structure and the electronic control circuit on one board. Their frequent use motivates the elaboration of virtual prototyping tools that can be applied in design optimization with the introduction of technological uncertainties and reliability. The authors present a procedure for the optimization of micro-devices, which is based on the theory of reliability-based robust design optimization. This takes into consideration the performance of a micro-device and its reliability assessed by means of uncertainty analysis. The procedure assumes that, for each checked design configuration, the assessment of uncertainty propagation is performed with the meta-modeling technique. The described procedure is illustrated with an example of the optimization carried out for a finite element model of a micro-mirror. The multi-physics approach allowed the introduction of several physical phenomena to correctly model the electrostatic actuation and the squeezing effect present between electrodes. The optimization was preceded by sensitivity analysis to establish the design and uncertain domains. The genetic algorithms fulfilled the defined optimization task effectively. The best discovered individuals are characterized by a minimized value of the multi-criteria objective function while simultaneously satisfying the constraint on material strength. The restriction of the maximum equivalent stresses was introduced with the conditionally formulated objective function with a penalty component. The yielded results were successfully verified with a global uniform search through the input design domain.
Di Berardino, F; Tognola, G; Paglialonga, A; Alpini, D; Grandori, F; Cesarani, A
2010-08-01
To assess whether different compact disk recording protocols, used to prepare speech test material, affect the reliability and comparability of speech audiometry testing. We conducted acoustic analysis of compact disks used in clinical practice, to determine whether speech material had been recorded using similar procedures. To assess the impact of different recording procedures on speech test outcomes, normal hearing subjects were tested using differently prepared compact disks, and their psychometric curves compared. Acoustic analysis revealed that speech material had been recorded using different protocols. The major difference was the gain between the levels at which the speech material and the calibration signal had been recorded. Although correct calibration of the audiometer was performed for each compact disk before testing, speech recognition thresholds and maximum intelligibility thresholds differed significantly between compact disks (p < 0.05), and were influenced by the gain between the recording level of the speech material and the calibration signal. To ensure the reliability and comparability of speech test outcomes obtained using different compact disks, it is recommended to check for possible differences in the recording gains used to prepare the compact disks, and then to compensate for any differences before testing.
An Independent Evaluation of the FMEA/CIL Hazard Analysis Alternative Study
NASA Technical Reports Server (NTRS)
Ray, Paul S.
1996-01-01
The present instruments of safety and reliability risk control for a majority of the National Aeronautics and Space Administration (NASA) programs/projects consist of Failure Mode and Effects Analysis (FMEA), Hazard Analysis (HA), Critical Items List (CIL), and Hazard Report (HR). This extensive analytical approach was introduced in the early 1970s and was implemented for the Space Shuttle Program by NHB 5300.4 (1D-2). Since the Challenger accident in 1986, the process has been expanded considerably, resulting in the introduction of similar and/or duplicated activities in the safety/reliability risk analysis. A study initiated in 1995 to search for an alternative to the current FMEA/CIL Hazard Analysis methodology generated a proposed method on April 30, 1996. The objective of this Summer Faculty Study was to participate in and conduct an independent evaluation of the proposed alternative to simplify the present safety and reliability risk control procedure.
Determination of Phenols and Trimethylamine in Industrial Effluents
NASA Technical Reports Server (NTRS)
Levaggi, D. A.; Feldstein, M.
1971-01-01
For regulatory control of certain odorous compounds, the analysis of phenols and trimethylamine in industrial effluents is necessary. The Bay Area Air Pollution Control District laboratory has been determining these gases by gas chromatographic techniques. The procedures for sample collection, preparation for analysis, and determination are described in detail. Typical data from various sources, showing the effect of proposed regulations, are presented. Extensive sampling and use of these procedures have shown them to be accurate, reliable, and suitable for all types of source effluents.
Application and Evaluation of an Expert Judgment Elicitation Procedure for Correlations.
Zondervan-Zwijnenburg, Mariëlle; van de Schoot-Hubeek, Wenneke; Lek, Kimberley; Hoijtink, Herbert; van de Schoot, Rens
2017-01-01
The purpose of the current study was to apply and evaluate a procedure to elicit expert judgments about correlations, and to update this information with empirical data. The result is a face-to-face group elicitation procedure with as its central element a trial roulette question that elicits experts' judgments expressed as distributions. During the elicitation procedure, a concordance probability question was used to provide feedback to the experts on their judgments. We evaluated the elicitation procedure in terms of validity and reliability by means of an application with a small sample of experts. Validity means that the elicited distributions accurately represent the experts' judgments. Reliability concerns the consistency of the elicited judgments over time. Four behavioral scientists provided their judgments with respect to the correlation between cognitive potential and academic performance for two separate populations enrolled at a specific school in the Netherlands that provides special education to youth with severe behavioral problems: youth with autism spectrum disorder (ASD), and youth with diagnoses other than ASD. Measures of face-validity, feasibility, convergent validity, coherence, and intra-rater reliability showed promising results. Furthermore, the current study illustrates the use of the elicitation procedure and elicited distributions in a social science application. The elicited distributions were used as a prior for the correlation, and updated with data for both populations collected at the school of interest. The current study shows that the newly developed elicitation procedure combining the trial roulette method with the elicitation of correlations is a promising tool, and that the results of the procedure are useful as prior information in a Bayesian analysis.
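One simple way to combine an elicited prior for a correlation with observed data can be sketched as follows. This is a hedged illustration only, not the study's elicitation procedure (which used trial-roulette distributions): it assumes a normal prior on the Fisher-z scale and uses the standard approximation that a sample correlation from n pairs has z-scale variance 1/(n - 3).

```python
import math

def fisher_z(r):
    # Fisher z-transform maps a correlation in (-1, 1) to the real line
    return math.atanh(r)

def update_correlation(prior_mean_r, prior_sd_z, r_obs, n):
    # Conjugate normal update on the z scale; the likelihood variance of the
    # observed z is approximately 1/(n - 3)
    mu0, var0 = fisher_z(prior_mean_r), prior_sd_z ** 2
    z_obs, var_lik = fisher_z(r_obs), 1.0 / (n - 3)
    var_post = 1.0 / (1.0 / var0 + 1.0 / var_lik)
    mu_post = var_post * (mu0 / var0 + z_obs / var_lik)
    # Return the posterior mean back-transformed to a correlation,
    # plus the posterior variance on the z scale
    return math.tanh(mu_post), var_post
```

With a prior centered at r = 0.3 and 50 observed pairs giving r = 0.6, the posterior mean lands between the two values, pulled mostly toward the data as the sample grows.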
Reliability considerations for the total strain range version of strainrange partitioning
NASA Technical Reports Server (NTRS)
Wirsching, P. H.; Wu, Y. T.
1984-01-01
A proposed total strainrange version of strainrange partitioning (SRP) to enhance the manner in which SRP is applied to life prediction is considered, with emphasis on how advanced reliability technology can be applied to perform risk analysis and to derive safety check expressions. Uncertainties existing in the design factors associated with life prediction of a component which experiences the combined effects of creep and fatigue can be identified. Examples illustrate how reliability analyses of such a component can be performed when all design factors in the SRP model are random variables reflecting these uncertainties. The Rackwitz-Fiessler and Wu algorithms are used, and estimates of the safety index and the probability of failure are demonstrated for a SRP problem. Methods of analysis of creep-fatigue data with emphasis on procedures for producing synoptic statistics are presented. An attempt to demonstrate the importance of the contribution of the uncertainties associated with small sample sizes (fatigue data) to risk estimates is discussed. The procedure for deriving a safety check expression for possible use in a design criteria document is presented.
Reliability of segmental accelerations measured using a new wireless gait analysis system.
Kavanagh, Justin J; Morrison, Steven; James, Daniel A; Barrett, Rod
2006-01-01
The purpose of this study was to determine the inter- and intra-examiner reliability, and stride-to-stride reliability, of an accelerometer-based gait analysis system which measured 3D accelerations of the upper and lower body during self-selected slow, preferred and fast walking speeds. Eight subjects attended two testing sessions in which accelerometers were attached to the head, neck, lower trunk, and right shank. In the initial testing session, two different examiners attached the accelerometers and performed the same testing procedures. A single examiner repeated the procedure in a subsequent testing session. All data were collected using a new wireless gait analysis system, which features near real-time data transmission via a Bluetooth network. Reliability for each testing condition (4 locations, 3 directions, 3 speeds) was quantified using a waveform similarity statistic known as the coefficient of multiple determination (CMD). CMDs ranged from 0.60 to 0.98 across all test conditions and were not significantly different for inter-examiner (0.86), intra-examiner (0.87), and stride-to-stride reliability (0.86). The highest repeatability for the effect of location, direction and walking speed were for the shank segment (0.94), the vertical direction (0.91) and the fast walking speed (0.91), respectively. Overall, these results indicate that a high degree of waveform repeatability was obtained using the new gait system under test-retest conditions involving single and dual examiners. Furthermore, differences in acceleration waveform repeatability associated with the reapplication of accelerometers were small in relation to normal motor variability.
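The waveform-similarity statistic used in the study above can be written out compactly. This is a minimal, Kadaba-style sketch of a coefficient of multiple determination; the exact variant and data layout used in the paper are assumptions here.

```python
def cmd(waveforms):
    # waveforms: M repeated strides, each a list of N frame values.
    # CMD = 1 - (within-frame variance across strides) / (total variance);
    # a value of 1.0 means the repeated waveforms are identical.
    m, n = len(waveforms), len(waveforms[0])
    frame_means = [sum(w[j] for w in waveforms) / m for j in range(n)]
    grand_mean = sum(frame_means) / n
    within = sum((w[j] - frame_means[j]) ** 2
                 for w in waveforms for j in range(n)) / (n * (m - 1))
    total = sum((w[j] - grand_mean) ** 2
                for w in waveforms for j in range(n)) / (m * n - 1)
    return 1.0 - within / total
```

Identical repeated waveforms yield CMD = 1.0, while any stride-to-stride deviation pulls the statistic below 1, which is how the 0.60 to 0.98 range above should be read.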
Sajnóg, Adam; Hanć, Anetta; Barałkiewicz, Danuta
2018-05-15
Analysis of clinical specimens by imaging techniques makes it possible to determine the content and distribution of trace elements on the surface of the examined sample. In order to obtain reliable results, the developed procedure should be based not only on a properly prepared sample and a properly performed calibration. It is also necessary to carry out all phases of the procedure in accordance with the principles of chemical metrology, whose main pillars are the use of validated analytical methods, establishing the traceability of the measurement results, and the estimation of uncertainty. This review paper discusses aspects related to the sampling, preparation, and analysis of clinical samples by laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS), with emphasis on metrological aspects, i.e. selected validation parameters of the analytical method, the traceability of the measurement result, and the uncertainty of the result. This work promotes the introduction of metrology principles for chemical measurement, with emphasis on LA-ICP-MS, a comparative method that requires a rigorous approach to the development of the analytical procedure in order to acquire reliable quantitative results. Copyright © 2018 Elsevier B.V. All rights reserved.
USDA-ARS?s Scientific Manuscript database
The validity of data in science depends on the reliability of the collection procedure and the methods used for data analysis. This becomes a major challenge in studies of mucosal immunology since samples for analysis come from various anatomical sources which differ remarkably in their content and ...
Human reliability in petrochemical industry: an action research.
Silva, João Alexandre Pinheiro; Camarotto, João Alberto
2012-01-01
This paper aims to identify conflicts and gaps between the operators' strategies and actions and the organizational managerial approach to human reliability. To achieve these goals, the adopted research approach encompasses a literature review, combining action research methodology with Ergonomic Workplace Analysis in field research. The results suggest that the studied company has a classical and mechanistic point of view, focusing on error identification and on building barriers through procedures, checklists, and other prescriptive alternatives to improve performance in the reliability area. However, the action research cycle made evident the fundamental role of the worker as an agent of maintenance and construction of system reliability.
A Z-number-based decision making procedure with ranking fuzzy numbers method
NASA Astrophysics Data System (ADS)
Mohamad, Daud; Shaharani, Saidatull Akma; Kamis, Nor Hanimah
2014-12-01
The theory of fuzzy sets has been in the limelight of various applications in decision making problems due to its usefulness in portraying human perception and subjectivity. Generally, the evaluation in the decision making process is represented in the form of linguistic terms and the calculation is performed using fuzzy numbers. In 2011, Zadeh extended this concept by presenting the idea of the Z-number, a 2-tuple of fuzzy numbers that describes the restriction and the reliability of the evaluation. The element of reliability in the evaluation is essential, as it affects the final result. Since this concept can still be considered new, available methods that incorporate reliability for solving decision making problems are still scarce. In this paper, a decision making procedure based on Z-numbers is proposed. Due to the limitations of their basic properties, Z-numbers are first transformed into fuzzy numbers for simpler calculations. A method of ranking fuzzy numbers is then used to prioritize the alternatives. A risk analysis problem is presented to illustrate the effectiveness of the proposed procedure.
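The abstract does not specify the exact transformation, so the sketch below uses the widely cited Kang et al. (2012) conversion as an assumption: weight the restriction A by the centroid alpha of the reliability B, then scale A by the square root of alpha; a simple centroid then ranks the resulting fuzzy numbers. Triangular fuzzy numbers are represented as (left, mode, right) tuples.

```python
import math

def z_to_fuzzy(a, b):
    # Z = (A, B) with triangular A and B, each given as (left, mode, right).
    # alpha = centroid of B (the reliability weight); the restriction A is
    # then scaled by sqrt(alpha) into an ordinary fuzzy number.
    alpha = sum(b) / 3.0
    return tuple(math.sqrt(alpha) * x for x in a)

def centroid_rank(tri):
    # Centroid of a triangular fuzzy number; a larger centroid ranks higher
    return sum(tri) / 3.0
```

Because sqrt(alpha) is at most 1, lower reliability shrinks the fuzzy evaluation toward zero, which is exactly how the reliability component influences the final ranking.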
Code of Federal Regulations, 2011 CFR
2011-01-01
... HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability Program General Provisions § 712.1 Purpose. This part establishes the policies and procedures for a Human Reliability Program... judgment and reliability may be impaired by physical or mental/personality disorders, alcohol abuse, use of...
77 FR 39691 - Commission Information Collection Activities (FERC-725); Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-05
... information collection FERC-725, Certification of Electric Reliability Organization; Procedures for Electric Reliability Standards, to the Office of Management and Budget (OMB) for review of the information collection..., Certification of Electric Reliability Organization; Procedures for Electric Reliability Standards. OMB Control...
Reliability of hospital cost profiles in inpatient surgery.
Grenda, Tyler R; Krell, Robert W; Dimick, Justin B
2016-02-01
With increased policy emphasis on shifting risk from payers to providers through mechanisms such as bundled payments and accountable care organizations, hospitals are increasingly in need of metrics to understand their costs relative to peers. However, it is unclear whether Medicare payments for surgery can reliably compare hospital costs. We used national Medicare data to assess patients undergoing colectomy, pancreatectomy, and open incisional hernia repair from 2009 to 2010 (n = 339,882 patients). We first calculated risk-adjusted hospital total episode payments for each procedure. We then used hierarchical modeling techniques to estimate the reliability of total episode payments for each procedure and explored the impact of hospital caseload on payment reliability. Finally, we quantified the number of hospitals meeting published reliability benchmarks. Mean risk-adjusted total episode payments ranged from $13,262 (standard deviation [SD] $14,523) for incisional hernia repair to $25,055 (SD $22,549) for pancreatectomy. The reliability of hospital episode payments varied widely across procedures and depended on sample size. For example, mean episode payment reliability for colectomy (mean caseload, 157) was 0.80 (SD 0.18), whereas for pancreatectomy (mean caseload, 13) the mean reliability was 0.45 (SD 0.27). Many hospitals met published reliability benchmarks for each procedure. For example, 90% of hospitals met reliability benchmarks for colectomy, 40% for pancreatectomy, and 66% for incisional hernia repair. Episode payments for inpatient surgery are a reliable measure of hospital costs for commonly performed procedures, but are less reliable for lower volume operations. These findings suggest that hospital cost profiles based on Medicare claims data may be used to benchmark efficiency, especially for more common procedures. Copyright © 2016 Elsevier Inc. All rights reserved.
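The reliability notion used in the study above has a compact signal-to-noise form: true between-hospital variance over itself plus sampling noise, which shrinks with caseload. The sketch below uses made-up variance components; the paper's hierarchical models estimate these from claims data, so the numbers here are assumptions for illustration only.

```python
def payment_reliability(var_between, var_within, caseload):
    # Reliability of a hospital's mean episode payment:
    # between-hospital (signal) variance divided by signal plus
    # within-hospital (noise) variance averaged over the caseload.
    return var_between / (var_between + var_within / caseload)
```

With the same variance components, a colectomy-sized caseload (mean 157 above) yields a much higher reliability than a pancreatectomy-sized caseload (mean 13), matching the pattern the study reports.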
Michels, Nele R M; Driessen, Erik W; Muijtjens, Arno M M; Van Gaal, Luc F; Bossaert, Leo L; De Winter, Benedicte Y
2009-12-01
A portfolio is used to mentor and assess students' clinical performance at the workplace. However, students and raters often perceive the portfolio as a time-consuming instrument. In this study, we investigated whether assessment during medical internship by a portfolio can combine reliability and feasibility. The domain-oriented reliability of 61 double-rated portfolios was measured, using a generalisability analysis with portfolio tasks and raters as sources of variation in measuring the performance of a student. We obtained reliability (Phi coefficient) of 0.87 with this internship portfolio containing 15 double-rated tasks. The generalisability analysis showed that an acceptable level of reliability (Phi = 0.80) was maintained when the number of portfolio tasks was decreased to 13 or 9 using one and two raters, respectively. Our study shows that a portfolio can be a reliable method for the assessment of workplace learning. The possibility of reducing the number of tasks or raters while maintaining a sufficient level of reliability suggests an increase in feasibility of portfolio use for both students and raters.
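The trade-off between tasks, raters, and reliability in a generalisability (decision) study of this kind can be sketched as follows. The variance components below are illustrative assumptions, not the study's estimates.

```python
def phi_coefficient(var_person, var_task, var_rater, var_residual,
                    n_tasks, n_raters):
    # Phi from a person x task x rater G-study: universe-score (person)
    # variance over itself plus all error components, each averaged over
    # the number of tasks and/or raters used in the decision study.
    error = (var_task / n_tasks + var_rater / n_raters
             + var_residual / (n_tasks * n_raters))
    return var_person / (var_person + error)
```

Sweeping n_tasks and n_raters over a grid reproduces the kind of projection the authors made: fewer tasks can keep Phi above a target such as 0.80 if more raters are used, and vice versa.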
The Application of a Residual Risk Evaluation Technique Used for Expendable Launch Vehicles
NASA Technical Reports Server (NTRS)
Latimer, John A.
2009-01-01
This presentation provides a Residual Risk Evaluation Technique (RRET) developed by Kennedy Space Center (KSC) Safety and Mission Assurance (S&MA) Launch Services Division. This technique is one of many procedures used by S&MA at KSC to evaluate residual risks for each Expendable Launch Vehicle (ELV) mission. RRET is a straightforward technique that incorporates the proven methodology of risk management, fault tree analysis, and reliability prediction. RRET derives a system reliability impact indicator from the system baseline reliability and the system residual risk reliability values. The system reliability impact indicator provides a quantitative measure of the reduction in the system baseline reliability due to the identified residual risks associated with the designated ELV mission. An example is discussed to provide insight into the application of RRET.
Probabilistic Finite Element Analysis & Design Optimization for Structural Designs
NASA Astrophysics Data System (ADS)
Deivanayagam, Arumugam
This study focuses on implementing the probabilistic nature of material properties (Kevlar® 49) into the existing deterministic finite element analysis (FEA) of fabric-based engine containment systems through Monte Carlo simulations (MCS), and on implementing probabilistic analysis in engineering designs through Reliability Based Design Optimization (RBDO). First, the emphasis is on experimental data analysis focusing on probabilistic distribution models which characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled using experimental data analysis and implemented along with an existing spiral modeling scheme (SMS) and user-defined constitutive model (UMAT) for fabric-based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure pattern and exit velocities of the models. The solutions are then compared with NASA experimental tests and deterministic results. MCS with probabilistic material data give a better perspective on results than a single deterministic simulation. The next part of the research is to implement the probabilistic material properties in engineering designs. The main aim of structural design is to obtain optimal solutions. However, in a deterministic optimization problem, even though the structure is cost-effective, it becomes highly unreliable if the uncertainty that may be associated with the system (material properties, loading, etc.) is not represented or considered in the solution process. A reliable and optimal solution can be obtained by performing reliability optimization along with the deterministic optimization, which is RBDO. In the RBDO problem formulation, in addition to structural performance constraints, reliability constraints are also considered.
This part of the research starts with an introduction to reliability analysis, including first-order and second-order reliability analysis, followed by simulation techniques that are performed to obtain the probability of failure and the reliability of structures. Next, a decoupled RBDO procedure is proposed with a new reliability analysis formulation including sensitivity analysis, which is performed to remove the highly reliable constraints in the RBDO, thereby reducing the computational time and the number of function evaluations. Finally, the implementation of these reliability analysis concepts and RBDO in 2D finite element truss problems and a planar beam problem is presented and discussed.
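For the simplest limit state g = R - S with independent normal resistance R and load S, the first-order reliability index has a closed form, and a crude Monte Carlo simulation recovers the same probability of failure. This is a minimal sketch of the concepts named above; the means and standard deviations are arbitrary assumptions for illustration.

```python
import math
import random

def form_beta(mu_r, sd_r, mu_s, sd_s):
    # Reliability index for the linear limit state g = R - S with
    # independent normal R and S (exact here; FORM in general linearizes g
    # at the most probable failure point)
    return (mu_r - mu_s) / math.sqrt(sd_r ** 2 + sd_s ** 2)

def std_normal_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def mcs_pf(mu_r, sd_r, mu_s, sd_s, n=200_000, seed=1):
    # Crude Monte Carlo estimate of the probability of failure P(g < 0)
    rng = random.Random(seed)
    fails = sum(rng.gauss(mu_r, sd_r) - rng.gauss(mu_s, sd_s) < 0.0
                for _ in range(n))
    return fails / n

beta = form_beta(30.0, 3.0, 20.0, 4.0)   # reliability index
pf_analytic = std_normal_cdf(-beta)       # probability of failure
```

The analytic probability of failure, Phi(-beta), and the Monte Carlo estimate agree to within sampling error, which is the basic consistency check underlying both simulation-based reliability analysis and decoupled RBDO.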
El-Housseiny, Azza A; Alsadat, Farah A; Alamoudi, Najlaa M; El Derwi, Douaa A; Farsi, Najat M; Attar, Moaz H; Andijani, Basil M
2016-04-14
Early recognition of dental fear is essential for the effective delivery of dental care. This study aimed to test the reliability and validity of the Arabic version of the Children's Fear Survey Schedule-Dental Subscale (CFSS-DS). A school-based sample of 1546 children was randomly recruited. The Arabic version of the CFSS-DS was completed by children during class time. The scale was tested for internal consistency and test-retest reliability. To test criterion validity, children's behavior was assessed using the Frankl scale during dental examination, and results were compared with children's CFSS-DS scores. To test the scale's construct validity, scores on "fear of going to the dentist soon" were correlated with CFSS-DS scores. Factor analysis was also used. The Arabic version of the CFSS-DS showed high reliability regarding both test-retest reliability (intraclass correlation = 0.83, p < 0.001) and internal consistency (Cronbach's α = 0.88). It showed good criterion validity: children with negative behavior had significantly higher fear scores (t = 13.67, p < 0.001). It also showed moderate construct validity (Spearman's rho correlation, r = 0.53, p < 0.001). Factor analysis identified the following factors: "fear of invasive dental procedures," "fear of less invasive dental procedures" and "fear of strangers." The Arabic version of the CFSS-DS is a reliable and valid measure of dental fear in Arabic-speaking children. Pediatric dentists and researchers may use this validated version of the CFSS-DS to measure dental fear in Arabic-speaking children.
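Cronbach's alpha, the internal-consistency statistic reported above, is straightforward to compute from raw item scores. A minimal sketch with toy data (not the study's scale or sample):

```python
def cronbach_alpha(items):
    # items: one list of scores per item, all over the same respondents.
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k, n = len(items), len(items[0])

    def var(xs):
        # Sample variance with the (n - 1) denominator
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1.0 - sum(var(it) for it in items) / var(totals))
```

Perfectly parallel items give alpha = 1.0; any disagreement between items pulls alpha below 1, so the study's alpha = 0.88 indicates items that track one another closely without being redundant.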
Muehrer, Rebecca J; Lanuza, Dorothy M; Brown, Roger L; Djamali, Arjang
2015-01-01
This study describes the development and psychometric testing of the Sexual Concerns Questionnaire (SCQ) in kidney transplant (KTx) recipients. Construct validity was assessed using the Kroonenberg and Lewis exploratory/confirmatory procedure and testing hypothesized relationships with established questionnaires. Configural and weak invariance were examined across gender, dialysis history, relationship status, and transplant type. Reliability was assessed with Cronbach's alpha, composite reliability, and test-retest reliability. Factor analysis resulted in a 7-factor solution and suggests good model fit. Construct validity was also supported by the tests of hypothesized relationships. Configural and weak invariance were supported for all subgroups. Reliability of the SCQ was also supported. Findings indicate the SCQ is a valid and reliable measure of KTx recipients' sexual concerns.
Reliability and validity of the Japanese version of the Organizational Justice Questionnaire.
Inoue, Akiomi; Kawakami, Norito; Tsutsumi, Akizumi; Shimazu, Akihito; Tsuchiya, Masao; Ishizaki, Masao; Tabata, Masaji; Akiyama, Miki; Kitazume, Akiko; Kuroda, Mitsuyo; Kivimäki, Mika
2009-01-01
Previous European studies, which reported that low procedural justice and low interactional justice were associated with increased health problems, used a modified version of Moorman's Organizational Justice Questionnaire (OJQ; Elovainio et al., 2002) to assess organizational justice. We translated the modified OJQ into Japanese and examined the internal consistency reliability and the factor-based and construct validity of this measure. A back-translation procedure confirmed that the translation was appropriate, pending a minor revision. A total of 185 men and 58 women at a manufacturing factory in Japan were surveyed using a mailed questionnaire including the OJQ and other job stressors. Cronbach alpha coefficients of the two OJQ subscales were high (0.85-0.94) for both sexes. The hypothesized two factors (i.e., procedural justice and interactional justice) were extracted by the factor analysis for men; for women, procedural justice was further split into two separate dimensions, supporting a three- rather than two-factor structure. Convergent validity was supported by expected correlations of the OJQ with job control, supervisor support, effort-reward imbalance, and job future ambiguity, in particular among the men. The present study shows that the Japanese version of the OJQ has acceptable levels of reliability and validity, at least for male employees.
FMEA and RAM Analysis for the Multi Canister Overpack (MCO) Handling Machine
DOE Office of Scientific and Technical Information (OSTI.GOV)
SWENSON, C.E.
2000-06-01
The Failure Modes and Effects Analysis and the Reliability, Availability, and Maintainability Analysis performed for the Multi-Canister Overpack Handling Machine (MHM) have shown that the current design provides a safe system, but that the reliability of the system (primarily due to the complexity of the interlocks and permissive controls) is relatively low. No specific failure modes were identified with significant consequences to the public, or with significant expected impact on nearby workers. The overall reliability calculation for the MHM shows a 98.1 percent probability of operating for eight hours without failure, and an availability of the MHM of 90 percent. The majority of the reliability issues are found in the interlocks and controls. The availability of appropriate spare parts and maintenance personnel, coupled with well-written operating procedures, will play a more important role in successful mission completion for the MHM than in other, less complicated systems.
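Under a constant-failure-rate (exponential) model, the two figures quoted above can be related to an implied failure rate and mean repair time. This is a hedged sketch: the report's actual reliability model is not given here, so the exponential assumption is ours:

```python
import math

mission_hours = 8.0
reliability = 0.981          # P(no failure in 8 h), from the analysis

# R(t) = exp(-lambda * t)  =>  lambda = -ln(R) / t
failure_rate = -math.log(reliability) / mission_hours   # failures per hour
mtbf = 1.0 / failure_rate                               # mean time between failures, h

# Steady-state availability A = MTBF / (MTBF + MTTR); 90 % availability then
# implies a mean time to repair of MTBF * (1 - A) / A under this simple model.
availability = 0.90
mttr = mtbf * (1.0 - availability) / availability
```

The sketch shows why a high 8-hour mission reliability can coexist with only 90 percent availability: the implied MTBF of roughly 417 hours is consistent with repair outages on the order of two days per failure.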
Reliability Growth Testing Effectiveness.
1984-01-01
interface boundaries. f. Test facility and equipment descriptions and requirements. g. Procedures and timing for corrective actions. h. Blocks of time and ... apportionment, FMEA and stress analysis. Instead, reliability growth management provides a means of viewing all the reliability program activities in an ... [table fragment: program options ranging from vendor surveillance to a reliability growth testing program, with normalized increases in acquisition cost of 0, 2.5%, 25%, and 60%, and relative changes of 1:1 to 4:1]
Lou, Yanni; Lu, Linghui; Li, Yuan; Liu, Meng; Bredle, Jason M; Jia, Liqun
2015-10-01
The study objective was to determine the reliability and validity of the Chinese version of the Functional Assessment of Chronic Illness Therapy - Ascites Index (FACIT-AI). A forward-backward translation procedure was adopted to develop the Chinese version of the FACIT-AI, which was tested in 69 patients with malignant ascites. Cronbach's α, split-half reliability, and test-retest reliability were used to assess the reliability of the scale. The content validity index was used to assess the content validity, while factor analysis was used for construct validity and correlation analysis was used for criterion validity. The Cronbach's α was 0.772 for the total scale, and the split-half reliability was 0.693. The test-retest correlation was 0.972. The content validity index for the scale was 0.8-1.0. Four factors were extracted by factor analysis, and these contributed 63.51% of the total variance. Item-total correlations ranged from 0.591 to 0.897, and these were correlated with visual analog scale scores (correlation coefficient, 0.889; P<0.01). The Chinese version of the FACIT-AI has good reliability and validity and can be used as a tool to measure quality of life in Chinese patients with malignant ascites.
Critical evaluation of sample pretreatment techniques.
Hyötyläinen, Tuulia
2009-06-01
Sample preparation before chromatographic separation is the most time-consuming and error-prone part of the analytical procedure. Therefore, selecting and optimizing an appropriate sample preparation scheme is a key factor in the final success of the analysis, and the judicious choice of an appropriate procedure greatly influences the reliability and accuracy of a given analysis. The main objective of this review is to critically evaluate the applicability, disadvantages, and advantages of various sample preparation techniques. Particular emphasis is placed on extraction techniques suitable for both liquid and solid samples.
A guide to onboard checkout. Volume 2: Environmental control and life support
NASA Technical Reports Server (NTRS)
1971-01-01
A description of space station equipment for environmental control and life support is presented. Reliability and maintenance procedures are reviewed. Failure analysis and checkout tests are discussed. The strategy for software checkout is noted.
Doble, Brett; Wordsworth, Sarah; Rogers, Chris A; Welbourn, Richard; Byrne, James; Blazeby, Jane M
2017-08-01
This review aims to evaluate the current literature on the procedural costs of bariatric surgery for the treatment of severe obesity. Using a published framework for the conduct of micro-costing studies for surgical interventions, existing cost estimates from the literature are assessed for their accuracy, reliability and comprehensiveness based on their consideration of seven 'important' cost components. MEDLINE, PubMed, key journals and reference lists of included studies were searched up to January 2017. Eligible studies had to report per-case, total procedural costs for any type of bariatric surgery broken down into two or more individual cost components. A total of 998 citations were screened, of which 13 studies were included for analysis. Included studies were mainly conducted from a US hospital perspective, assessed either gastric bypass or adjustable gastric banding procedures and considered a range of different cost components. The mean total procedural costs for all included studies was US$14,389 (range, US$7423 to US$33,541). No study considered all of the recommended 'important' cost components and estimation methods were poorly reported. The accuracy, reliability and comprehensiveness of the existing cost estimates are, therefore, questionable. There is a need for a comparative cost analysis of the different approaches to bariatric surgery, with the most appropriate costing approach identified to be micro-costing methods. Such an analysis will not only be useful in estimating the relative cost-effectiveness of different surgeries but will also ensure appropriate reimbursement and budgeting by healthcare payers to ensure barriers to access this effective treatment by severely obese patients are minimised.
ERIC Educational Resources Information Center
Delaney, Michael F.; And Others
1985-01-01
Describes a simple and reliable new quantitative analysis experiment using liquid chromatography for the determination of caffeine, saccharin, and sodium benzoate in beverages. Background information, procedures used, and typical results obtained are provided. (JN)
The Importance of Human Reliability Analysis in Human Space Flight: Understanding the Risks
NASA Technical Reports Server (NTRS)
Hamlin, Teri L.
2010-01-01
HRA is a method used to describe, qualitatively and quantitatively, the occurrence of human failures in the operation of complex systems that affect availability and reliability. Modeling human actions and their corresponding failures in a PRA (Probabilistic Risk Assessment) provides a more complete picture of the risk and risk contributions. A high quality HRA can provide valuable information on potential areas for improvement, including training, procedures, equipment design, and the need for automation.
Exploring the Dimensionality of a Brief School Readiness Screener for Use with Latino/a Children
ERIC Educational Resources Information Center
Quirk, Matthew; Rebelez, Jennica; Furlong, Michael
2014-01-01
This study contributed to the school readiness literature by examining the factor structure and reliability of a revised version of the Kindergarten Student Entrance Profile (KSEP). Teachers rated 579 Latino/a children during the first month of kindergarten using the KSEP. Factor analysis procedures (exploratory factor analysis [EFA] and…
Methodological Choices in the Content Analysis of Textbooks for Measuring Alignment with Standards
ERIC Educational Resources Information Center
Polikoff, Morgan S.; Zhou, Nan; Campbell, Shauna E.
2015-01-01
With the recent adoption of the Common Core standards in many states, there is a need for quality information about textbook alignment to standards. While there are many existing content analysis procedures, these generally have little, if any, validity or reliability evidence. One exception is the Surveys of Enacted Curriculum (SEC), which has…
Lei, Pingguang; Lei, Guanghe; Tian, Jianjun; Zhou, Zengfen; Zhao, Miao; Wan, Chonghua
2014-10-01
This paper aims to develop the irritable bowel syndrome (IBS) scale of the system of Quality of Life Instruments for Chronic Diseases (QLICD-IBS) by the modular approach and to validate it by both classical test theory and generalizability theory. The QLICD-IBS was developed based on programmed decision procedures with multiple nominal and focus group discussions, in-depth interviews, and quantitative statistical procedures. One hundred twelve inpatients with IBS provided data measuring QOL three times before and after treatment. The psychometric properties of the scale were evaluated with respect to validity, reliability, and responsiveness employing correlation analysis, factor analyses, multi-trait scaling analysis, t tests, and also G studies and D studies of generalizability theory analysis. Multi-trait scaling analysis, correlation, and factor analyses confirmed good construct validity and criterion-related validity when using the SF-36 as a criterion. Test-retest reliability coefficients (Pearson r and intra-class correlation (ICC)) for the overall score and all domains were higher than 0.80; the internal consistency α for all domains at both measurements was higher than 0.70 except for the social domain (0.55 and 0.67, respectively). The overall score and scores for all domains/facets showed statistically significant changes after treatment, with moderate or higher effect sizes (standardized response mean, SRM) ranging from 0.72 to 1.02 at the domain level. G coefficients and the index of dependability (Ф coefficients) further confirmed the reliability of the scale with more exact variance components. The QLICD-IBS has good validity, reliability, and responsiveness, with some notable strengths, and can be used as a quality of life instrument for patients with IBS.
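The responsiveness statistic used above, the standardized response mean (SRM), is simply the mean pre/post change divided by the standard deviation of the changes; a minimal sketch with invented scores, not the study's data:

```python
from statistics import mean, stdev

def srm(pre, post):
    """Standardized response mean: mean(change) / SD(change)."""
    changes = [b - a for a, b in zip(pre, post)]
    return mean(changes) / stdev(changes)

# Hypothetical pre- and post-treatment scores for five patients
pre  = [40, 45, 50, 55, 60]
post = [50, 52, 61, 63, 74]
effect = srm(pre, post)
```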
Wan, Chonghua; Li, Hezhan; Fan, Xuejin; Yang, Ruixue; Pan, Jiahua; Chen, Wenru; Zhao, Rong
2014-06-04
Quality of life (QOL) for patients with coronary heart disease (CHD) is now of worldwide concern, yet specific instruments are few and none has been developed by the modular approach. This paper aims to develop the CHD scale of the system of Quality of Life Instruments for Chronic Diseases (QLICD-CHD) by the modular approach and to validate it by both classical test theory and generalizability theory. The QLICD-CHD was developed based on programmed decision procedures with multiple nominal and focus group discussions, in-depth interviews, pre-testing, and quantitative statistical procedures. 146 inpatients with CHD provided data measuring QOL three times before and after treatment. The psychometric properties of the scale were evaluated with respect to validity, reliability, and responsiveness employing correlation analysis, factor analyses, multi-trait scaling analysis, t-tests, and also G studies and D studies of generalizability theory analysis. Multi-trait scaling analysis, correlation, and factor analyses confirmed good construct validity and criterion-related validity when using the SF-36 as a criterion. The internal consistency α and test-retest reliability coefficients (Pearson r and intra-class correlation (ICC)) for the overall instrument and all domains were higher than 0.70 and 0.80, respectively. The overall instrument and all domains except the social domain showed statistically significant changes after treatment, with moderate effect sizes (standardized response mean, SRM) ranging from 0.32 to 0.67. G coefficients and the index of dependability (Ф coefficients) further confirmed the reliability of the scale with more exact variance components. The QLICD-CHD has good validity, reliability, and moderate responsiveness, with some notable strengths, and can be used as a quality of life instrument for patients with CHD.
However, in order to obtain better reliability, the numbers of items for social domain should be increased or the items' quality, not quantity, should be improved.
Optimization of life support systems and their systems reliability
NASA Technical Reports Server (NTRS)
Fan, L. T.; Hwang, C. L.; Erickson, L. E.
1971-01-01
The identification, analysis, and optimization of life support systems and subsystems have been investigated. For each system or subsystem considered, the procedure involves the establishment of a set of system equations (or mathematical model) based on theory and experimental evidence; the analysis and simulation of the model; the optimization of the operation, control, and reliability; analysis of the sensitivity of the system based on the model; and, if possible, experimental verification of the theoretical and computational results. Research activities include: (1) modeling of air flow in a confined space; (2) review of several different gas-liquid contactors utilizing centrifugal force; (3) review of carbon dioxide reduction contactors in space vehicles and other enclosed structures; (4) application of modern optimal control theory to environmental control of confined spaces; (5) optimal control of a class of nonlinear diffusional distributed parameter systems; (6) optimization of system reliability of life support systems and subsystems; (7) modeling, simulation, and optimal control of the human thermal system; and (8) analysis and optimization of the water-vapor electrolysis cell.
Kolber, Morey J.; Pizzini, Matias; Robinson, Ashley; Yanez, Dania; Hanney, William J.
2013-01-01
Purpose/Aim: The purpose of this study was to investigate the reliability, minimal detectable change (MDC), and concurrent validity of active spinal mobility measurements using a gravity‐based bubble inclinometer and iPhone® application. Materials/Methods: Two investigators each used a bubble inclinometer and an iPhone® with inclinometer application to measure total thoracolumbo‐pelvic flexion, isolated lumbar flexion, total thoracolumbo‐pelvic extension, and thoracolumbar lateral flexion in 30 asymptomatic participants using a blinded repeated measures design. Results: The procedures used in this investigation for measuring spinal mobility yielded good intrarater and interrater reliability, with Intraclass Correlation Coefficients (ICC) for bubble inclinometry ≥ 0.81 and the iPhone® ≥ 0.80. The MDC90 for the interrater analysis ranged from 4° to 9°. The concurrent validity between bubble inclinometry and the iPhone® application was good, with ICC values ≥ 0.86. The 95% level of agreement indicates that although these measuring instruments are equivalent, individual differences of up to 18° may exist when using these devices interchangeably. Conclusions: The bubble inclinometer and iPhone® possess good intrarater and interrater reliability as well as concurrent validity when strict measurement procedures are adhered to. This study provides preliminary evidence to suggest that smart phone applications may offer clinical utility comparable to inclinometry for quantifying spinal mobility. Clinicians should be aware of the potential disagreement when using these devices interchangeably. Level of Evidence: 2b (Observational study of reliability) PMID:23593551
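The intraclass correlation coefficients reported in studies like the one above can be computed from a one-way ANOVA decomposition; a minimal ICC(1,1) sketch with made-up ratings (two raters, five subjects), not data from any study here:

```python
def icc_oneway(scores):
    """One-way random-effects ICC(1,1).

    scores: list of [rater1, rater2, ...] rows, one row per subject.
    ICC = (MSB - MSW) / (MSB + (k - 1) * MSW), with MSB the between-subject
    and MSW the within-subject mean square.
    """
    n = len(scores)
    k = len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((x - row_means[i]) ** 2
              for i, row in enumerate(scores) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Two raters in near-perfect agreement -> ICC close to 1
ratings = [[40, 41], [52, 52], [61, 60], [75, 76], [88, 88]]
icc = icc_oneway(ratings)
```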
Accuracy of remotely sensed data: Sampling and analysis procedures
NASA Technical Reports Server (NTRS)
Congalton, R. G.; Oderwald, R. G.; Mead, R. A.
1982-01-01
A review and update of the discrete multivariate analysis techniques used for accuracy assessment is given. A listing of the computer program written to implement these techniques is given. New work on evaluating accuracy assessment using Monte Carlo simulation with different sampling schemes is given. The results of matrices from the mapping effort of the San Juan National Forest is given. A method for estimating the sample size requirements for implementing the accuracy assessment procedures is given. A proposed method for determining the reliability of change detection between two maps of the same area produced at different times is given.
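Among the discrete multivariate techniques used for accuracy assessment, the kappa (KHAT) statistic computed from an error matrix is a standard measure of agreement corrected for chance; a minimal sketch with a hypothetical two-class matrix, not data from this work:

```python
def kappa(matrix):
    """KHAT (Cohen's kappa) from a square error matrix (rows = map, cols = reference).

    kappa = (observed agreement - chance agreement) / (1 - chance agreement)
    """
    n = sum(sum(row) for row in matrix)
    observed = sum(matrix[i][i] for i in range(len(matrix))) / n
    row_tot = [sum(row) for row in matrix]
    col_tot = [sum(col) for col in zip(*matrix)]
    expected = sum(r * c for r, c in zip(row_tot, col_tot)) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical error matrix for a two-class map
m = [[45, 5],
     [10, 40]]
k = kappa(m)
```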
Assessment of NDE reliability data
NASA Technical Reports Server (NTRS)
Yee, B. G. W.; Couchman, J. C.; Chang, F. H.; Packman, D. F.
1975-01-01
Twenty sets of relevant nondestructive test (NDT) reliability data were identified, collected, compiled, and categorized. A criterion for the selection of data for statistical analysis considerations was formulated, and a model to grade the quality and validity of the data sets was developed. Data input formats, which record the pertinent parameters of the defect/specimen and inspection procedures, were formulated for each NDE method. A comprehensive computer program was written and debugged to calculate the probability of flaw detection at several confidence limits by the binomial distribution. This program also selects the desired data sets for pooling and tests the statistical pooling criteria before calculating the composite detection reliability. An example of the calculated reliability of crack detection in bolt holes by an automatic eddy current method is presented.
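The binomial calculation of detection probability at a confidence limit described above has a simple closed form in the all-detected case; a sketch of that special case (the classic "90/95" demonstration), with the numbers here chosen for illustration rather than taken from the data sets:

```python
# If all n flaws are detected, the lower (1 - alpha) confidence bound p_L on
# the probability of detection satisfies p_L**n = alpha, so p_L = alpha**(1/n).
# (With misses, the bound comes from inverting the full binomial tail sum.)
def pod_lower_bound_all_detected(n, confidence=0.95):
    alpha = 1.0 - confidence
    return alpha ** (1.0 / n)

# Classic result: 29 consecutive detections demonstrate >= 90% POD
# at 95% confidence.
p_lower = pod_lower_bound_all_detected(29)
```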
Relating design and environmental variables to reliability
NASA Astrophysics Data System (ADS)
Kolarik, William J.; Landers, Thomas L.
The combination of space application and nuclear power source demands high reliability hardware. The possibilities of failure, either an inability to provide power or a catastrophic accident, must be minimized. Nuclear power experience on the ground has led to highly sophisticated probabilistic risk assessment procedures, most of which require quantitative information to adequately assess such risks. In the area of hardware risk analysis, reliability information plays a key role. One of the lessons learned from the Three Mile Island experience is that thorough analyses of critical components are essential. Nuclear grade equipment shows some reliability advantages over commercial equipment; however, no statistically significant difference has been found. A recent study pertaining to spacecraft electronics reliability examined some 2500 malfunctions on more than 300 aircraft. The study classified the equipment failures into seven general categories. Design deficiencies and lack of environmental protection accounted for about half of all failures. Within each class, limited reliability modeling was performed using a Weibull failure model.
Method for the Study of Category III Airborne Procedure Reliability
DOT National Transportation Integrated Search
1973-03-01
A method for the study of Category 3 airborne-procedure reliability is presented. The method, based on PERT concepts, is considered to have utility at the outset of a procedure-design cycle and during the early accumulation of actual performance data...
NASA Technical Reports Server (NTRS)
Sobel, Larry; Buttitta, Claudio; Suarez, James
1993-01-01
Probabilistic predictions based on the Integrated Probabilistic Assessment of Composite Structures (IPACS) code are presented for the material and structural response of unnotched and notched, 1M6/3501-6 Gr/Ep laminates. Comparisons of predicted and measured modulus and strength distributions are given for unnotched unidirectional, cross-ply, and quasi-isotropic laminates. The predicted modulus distributions were found to correlate well with the test results for all three unnotched laminates. Correlations of strength distributions for the unnotched laminates are judged good for the unidirectional laminate and fair for the cross-ply laminate, whereas the strength correlation for the quasi-isotropic laminate is deficient because IPACS did not yet have a progressive failure capability. The paper also presents probabilistic and structural reliability analysis predictions for the strain concentration factor (SCF) for an open-hole, quasi-isotropic laminate subjected to longitudinal tension. A special procedure was developed to adapt IPACS for the structural reliability analysis. The reliability results show the importance of identifying the most significant random variables upon which the SCF depends, and of having accurate scatter values for these variables.
ERIC Educational Resources Information Center
Beltyukova, Svetlana A.; Stone, Gregory M.; Ellis, Lee W.
2008-01-01
Purpose: Speech intelligibility research typically relies on traditional evidence of reliability and validity. This investigation used Rasch analysis to enhance understanding of the functioning and meaning of scores obtained with 2 commonly used procedures: word identification (WI) and magnitude estimation scaling (MES). Method: Narrative samples…
Preliminary development of the adolescent students' basic psychological needs at school scale.
Tian, Lili; Han, Mengmeng; Huebner, E Scott
2014-04-01
The aim of the present study was to develop and provide evidence for the validity of a new measure of adolescent students' psychological need satisfaction at school, using a sample of Chinese students. We conducted four studies with four independent samples (total n = 1872). The first study aimed to develop items for the new instrument and to ascertain its factorial structure using exploratory factor analysis procedures. The second study aimed to examine the instrument's factorial structure using confirmatory factor analysis procedures as well as to assess its internal consistency reliability, convergent and divergent validity. The third study aimed to assess its measurement invariance across gender and age. The fourth study aimed to test its test-retest reliability over time and predictive validity. These preliminary results showed that the new instrument has promising psychometric properties. The potential contributions of the new instrument for future research and educational practices were discussed. Copyright © 2014 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.
NDE detectability of fatigue type cracks in high strength alloys
NASA Technical Reports Server (NTRS)
Christner, B. K.; Rummel, W. D.
1983-01-01
Specimens suitable for investigating the reliability of production nondestructive evaluation (NDE) to detect tightly closed fatigue cracks in high strength alloys representative of those materials used in spacecraft engine/booster construction were produced. Inconel 718 was selected as representative of nickel base alloys and Haynes 188 was selected as representative of cobalt base alloys used in this application. Cleaning procedures were developed to insure the reusability of the test specimens and a flaw detection reliability assessment of the fluorescent penetrant inspection method was performed using the test specimens produced to characterize their use for future reliability assessments and to provide additional NDE flaw detection reliability data for high strength alloys. The statistical analysis of the fluorescent penetrant inspection data was performed to determine the detection reliabilities for each inspection at a 90% probability/95% confidence level.
Peterson, Jennifer R.; Hill, Catherine C.; Kirkpatrick, Kimberly
2016-01-01
Impulsive choice is typically measured by presenting smaller-sooner (SS) versus larger-later (LL) rewards, with biases towards the SS indicating impulsivity. The current study tested rats on different impulsive choice procedures with LL delay manipulations to assess same-form and alternate-form test-retest reliability. In the systematic-GE procedure (Green & Estle, 2003), the LL delay increased after several sessions of training; in the systematic-ER procedure (Evenden & Ryan, 1996), the delay increased within each session; and in the adjusting-M procedure (Mazur, 1987), the delay changed after each block of trials within a session based on each rat’s choices in the previous block. In addition to measuring choice behavior, we also assessed temporal tracking of the LL delays using the median times of responding during LL trials. The two systematic procedures yielded similar results in both choice and temporal tracking measures following extensive training, whereas the adjusting procedure resulted in relatively more impulsive choices and poorer temporal tracking. Overall, the three procedures produced acceptable same form test-retest reliability over time, but the adjusting procedure did not show significant alternate form test-retest reliability with the other two procedures. The results suggest that systematic procedures may supply better measurements of impulsive choice in rats. PMID:25490901
Camera-tracking gaming control device for evaluation of active wrist flexion and extension.
Shefer Eini, Dalit; Ratzon, Navah Z; Rizzo, Albert A; Yeh, Shih-Ching; Lange, Belinda; Yaffe, Batia; Daich, Alexander; Weiss, Patrice L; Kizony, Rachel
Cross-sectional. Measuring wrist range of motion (ROM) is an essential procedure in hand therapy clinics. To test the reliability and validity of a dynamic ROM assessment, the Camera Wrist Tracker (CWT). Wrist flexion and extension ROM of 15 patients with distal radius fractures and 15 matched controls were assessed with the CWT and with a universal goniometer. One-way model intraclass correlation coefficient analysis indicated high test-retest reliability for extension (ICC = 0.92) and moderate reliability for flexion (ICC = 0.49). The standard error for extension was 2.45° and for flexion was 4.07°. Repeated-measures analysis revealed a significant main effect for group; ROM was greater in the control group (F[1, 28] = 47.35; P < .001). The concurrent validity of the CWT was partially supported. The results indicate that the CWT may provide highly reliable scores for dynamic wrist extension ROM, and moderately reliable scores for flexion, in people recovering from a distal radius fracture. N/A. Copyright © 2016 Hanley & Belfus. Published by Elsevier Inc. All rights reserved.
Thermal Adaptation Methods of Urban Plaza Users in Asia's Hot-Humid Regions: A Taiwan Case Study.
Wu, Chen-Fa; Hsieh, Yen-Fen; Ou, Sheng-Jung
2015-10-27
Thermal adaptation studies provide researchers great insight into how people respond to thermal discomfort. This research aims to assess outdoor urban plaza conditions in hot and humid regions of Asia by conducting an evaluation of thermal adaptation. We also propose questionnaire items appropriate for determining the thermal adaptation strategies adopted by urban plaza users. A literature review was conducted, and first-hand data from field observations and interviews were used to collect information on thermal adaptation strategies. Item analysis, exploratory factor analysis (EFA), and confirmatory factor analysis (CFA) were applied to refine the questionnaire items and to determine the reliability of the questionnaire evaluation procedure. The reliability and validity of the items and of the construction process were also analyzed. Researchers then developed an evaluation procedure for assessing the thermal adaptation strategies of urban plaza users in hot and humid regions of Asia and formulated a questionnaire survey that was distributed in Taichung's Municipal Plaza in Taiwan. Results showed that most users responded with behavioral adaptation when experiencing thermal discomfort. However, if the thermal discomfort could not be alleviated, they then adopted psychological strategies. In conclusion, the evaluation procedure for assessing thermal adaptation strategies and the questionnaire developed in this study can be applied to future research on thermal adaptation strategies adopted by urban plaza users in hot and humid regions of Asia.
Kramp, Kelvin H; van Det, Marc J; Veeger, Nic J G M; Pierie, Jean-Pierre E N
2016-06-01
There is no widely used method to evaluate procedure-specific laparoscopic skills. The first aim of this study was to develop a procedure-based assessment method. The second aim was to compare its validity, reliability and feasibility with currently available global rating scales (GRSs). An independence-scaled procedural assessment was created by linking the procedural key steps of the laparoscopic cholecystectomy to an independence scale. Subtitled and blinded videos of a novice, an intermediate and an almost competent trainee, were evaluated with GRSs (OSATS and GOALS) and the independence-scaled procedural assessment by seven surgeons, three senior trainees and six scrub nurses. Participants received a short introduction to the GRSs and independence-scaled procedural assessment before assessment. The validity was estimated with the Friedman and Wilcoxon test and the reliability with the intra-class correlation coefficient (ICC). A questionnaire was used to evaluate user opinion. Independence-scaled procedural assessment and GRS scores improved significantly with surgical experience (OSATS p = 0.001, GOALS p < 0.001, independence-scaled procedural assessment p < 0.001). The ICCs of the OSATS, GOALS and independence-scaled procedural assessment were 0.78, 0.74 and 0.84, respectively, among surgeons. The ICCs increased when the ratings of scrub nurses were added to those of the surgeons. The independence-scaled procedural assessment was not considered more of an administrative burden than the GRSs (p = 0.692). A procedural assessment created by combining procedural key steps to an independence scale is a valid, reliable and acceptable assessment instrument in surgery. In contrast to the GRSs, the reliability of the independence-scaled procedural assessment exceeded the threshold of 0.8, indicating that it can also be used for summative assessment. It furthermore seems that scrub nurses can assess the operative competence of surgical trainees.
NASA Astrophysics Data System (ADS)
Franck, Bas A. M.; Dreschler, Wouter A.; Lyzenga, Johannes
2004-12-01
In this study we investigated the reliability and convergence characteristics of an adaptive multidirectional pattern search procedure, relative to a nonadaptive multidirectional pattern search procedure. The procedure was designed to optimize three speech-processing strategies: noise reduction, spectral enhancement, and spectral lift. The search is based on a paired-comparison paradigm, in which subjects evaluated the listening comfort of speech-in-noise fragments. The procedural and nonprocedural factors that influence the reliability and convergence of the procedure were studied using various test conditions, combining different tests, initial settings, background noise types, and step size configurations. Seven normal-hearing subjects participated in this study. The results indicate that the reliability of the optimization strategy may benefit from the use of an adaptive step size. Decreasing the step size increases accuracy, while increasing the step size can be beneficial to create clear perceptual differences in the comparisons. The reliability also depends on the starting point, stop criterion, step size constraints, background noise, algorithms used, and the presence of drifting cues and suboptimal settings. There appears to be a trade-off between reliability and convergence, i.e., when the step size is enlarged the reliability improves, but the convergence deteriorates.
Quantitative metabolomics of the thermophilic methylotroph Bacillus methanolicus.
Carnicer, Marc; Vieira, Gilles; Brautaset, Trygve; Portais, Jean-Charles; Heux, Stephanie
2016-06-01
The gram-positive bacterium Bacillus methanolicus MGA3 is a promising candidate for methanol-based biotechnologies. Accurate determination of intracellular metabolites is crucial for engineering this bacterium into an efficient microbial cell factory. Because of the diversity of chemical and cell properties, an experimental protocol validated on B. methanolicus is needed. Here a systematic evaluation of different techniques for establishing a reliable basis for metabolome investigations is presented. Metabolome analysis focused on metabolites closely linked with the central methanol metabolism of B. methanolicus. As an alternative to cold-solvent-based procedures, a solvent-free quenching strategy using stainless steel beads cooled to -20 °C was assessed. The precision, the consistency of the measurements, and the extent of metabolite leakage from quenched cells were evaluated in procedures with and without cell separation. The most accurate and reliable performance was provided by the method without cell separation, as significant metabolite leakage occurred in the procedures based on fast filtration. As a biological test case, the best protocol was used to assess the metabolome of B. methanolicus grown in chemostat on methanol at two different growth rates, and its validity was demonstrated. The presented protocol is a first and helpful step towards reliable metabolomics data for the thermophilic methylotroph B. methanolicus and will help in designing an efficient methylotrophic cell factory.
2011-01-01
areas. We quantified morphometric features by geometric and fractal analysis of traced lesion boundaries. Although no single parameter can reliably...These include acoustic descriptors (“echogenicity,” “heterogeneity,” “shadowing”) and morphometric descriptors (“area,” “aspect ratio,” “border...quantitative descriptors; some morphometric features (such as border irregularity) also were particularly effective in lesion classification. Our
Modal Analysis for Grid Operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
The purpose of the MANGO software is to improve the small-signal stability of power systems by adjusting operator-controllable variables based on PMU measurements. System oscillation problems are one of the major threats to grid stability and reliability in California and the Western Interconnection. These problems result in power fluctuations and lower grid operation efficiency, and may even lead to large-scale grid breakup and outages. The software addresses this problem by automatically generating recommended operation procedures, termed Modal Analysis for Grid Operation (MANGO), to improve damping of inter-area oscillation modes. The MANGO procedure includes three steps: recognizing small-signal stability problems, implementing operating point adjustment using modal sensitivity, and evaluating the effectiveness of the adjustment. The MANGO software package is designed to help implement the MANGO procedure.
1975-07-01
AD-A016 282 ASSESSING THE RELIABILITY AND VALIDITY OF MULTI-ATTRIBUTE UTILITY PROCEDURES: AN...more complicated and use data from actual experiments. Example 1: Analysis of raters making importance judgments about attributes. In MAU studies...generalizability of JUDGE as contrasted to ÜASC. To do this, we will reanalyze the data for each system separately. This is valid since the initial
[Reliability and validity of the Braden Scale for predicting pressure sore risk].
Boes, C
2000-12-01
For more accurate and objective pressure sore risk assessment, various risk assessment tools have been developed, mainly in the USA and Great Britain. The Braden Scale for Predicting Pressure Sore Risk is one such example. Based on an analysis of the German and English literature on the Braden Scale, the scientific quality criteria of reliability and validity are traced and consequences for applying the scale in Germany are demonstrated. Analysis of 4 reliability studies shows an exclusive focus on interrater reliability. Further, although the 19 validity studies examined cover many different settings, they are limited to the criteria of sensitivity and specificity (accuracy). Reported sensitivity and specificity levels range from 35 to 100%. The recommended cut-off points range from 10 to 19 points. The studies prove not to be comparable with each other. Furthermore, distortions can be found in these studies that affect the accuracy of the scale. The results of the analysis presented here show insufficient proof of reliability and validity in the American studies. In Germany, the Braden Scale has not yet been tested against scientific criteria. Such testing is needed before the scale is used in different German settings. In the course of such testing, the design and procedures of the American studies can serve as a basis, as can the problems identified in the present analysis.
Digging Deeper: Crisis Management in the Coal Industry
ERIC Educational Resources Information Center
Miller, Barbara M.; Horsley, J. Suzanne
2009-01-01
This study explores crisis management/communication practices within the coal industry through the lens of high reliability organization (HRO) concepts and sensemaking theory. In-depth interviews with industry executives and an analysis of an emergency procedures manual were used to provide an exploratory examination of the status of crisis…
ERIC Educational Resources Information Center
National Bureau of Standards (DOC), Washington, DC.
These guidelines provide a handbook for use by federal organizations in structuring physical security and risk management programs for their automatic data processing facilities. This publication discusses security analysis, natural disasters, supporting utilities, system reliability, procedural measures and controls, off-site facilities,…
Computer Simulation of Human Behavior: Assessment of Creativity.
ERIC Educational Resources Information Center
Greene, John F.
The major purpose of this study is to further the development of procedures which minimize current limitations of creativity instruments, thus yielding a reliable and functional means for assessing creativity. Computerized content analysis and multiple regression are employed to simulate the creativity ratings of trained judges. The computerized…
On Quality and Measures in Software Engineering
ERIC Educational Resources Information Center
Bucur, Ion I.
2006-01-01
Complexity measures are mainly used to estimate vital information about reliability and maintainability of software systems from regular analysis of the source code. Such measures also provide constant feedback during a software project to assist the control of the development procedure. There exist several models to classify a software product's…
An Assertiveness Inventory for Adults
ERIC Educational Resources Information Center
Gay, Melvin L.; And Others
1975-01-01
The Adult Self-Expression Scale is a 48-item, self-report measure of assertiveness designed for use with adults in general. Scale was found to have high test-retest reliability and moderate-to-high construct validity, as established by correlations with Adjective Check List scales and by a discriminant analysis procedure. (Author)
NASA Technical Reports Server (NTRS)
Noah, S. T.; Kim, Y. B.
1991-01-01
A general approach is developed for determining the periodic solutions and their stability of nonlinear oscillators with piecewise-smooth characteristics. A modified harmonic balance/Fourier transform procedure is devised for the analysis. The procedure avoids certain numerical differentiation employed previously in determining the periodic solutions, therefore enhancing the reliability and efficiency of the method. Stability of the solutions is determined via perturbations of their state variables. The method is applied to a forced oscillator interacting with a stop of finite stiffness. Flip and fold bifurcations are found to occur. This led to the identification of parameter ranges in which chaotic response occurred.
Sensitivity of wildlife habitat models to uncertainties in GIS data
NASA Technical Reports Server (NTRS)
Stoms, David M.; Davis, Frank W.; Cogan, Christopher B.
1992-01-01
Decision makers need to know the reliability of output products from GIS analysis. For many GIS applications, it is not possible to compare these products to an independent measure of 'truth'. Sensitivity analysis offers an alternative means of estimating reliability. In this paper, we present a GIS-based statistical procedure for estimating the sensitivity of wildlife habitat models to uncertainties in input data and model assumptions. The approach is demonstrated in an analysis of habitat associations derived from a GIS database for the endangered California condor. Alternative data sets were generated to compare results over a reasonable range of assumptions about several sources of uncertainty. Sensitivity analysis indicated that condor habitat associations are relatively robust, and the results have increased our confidence in our initial findings. Uncertainties and methods described in the paper have general relevance for many GIS applications.
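The general idea of such a sensitivity analysis is to re-run the model under randomly perturbed inputs and observe how much the output statistic moves. The sketch below is generic and entirely illustrative: the toy suitability rule, cell values, and ±50 m elevation error are invented and are not the condor model:

```python
import random

def habitat_suitable(elevation_m, slope_deg):
    # Toy habitat rule standing in for a real GIS model.
    return 500 <= elevation_m <= 2000 and slope_deg < 30

def suitability_fraction(cells):
    return sum(habitat_suitable(e, s) for e, s in cells) / len(cells)

def sensitivity(cells, elev_error_m, trials=200, seed=1):
    """Re-evaluate the model under random input perturbations and
    report the spread of the output statistic."""
    rng = random.Random(seed)
    results = []
    for _ in range(trials):
        perturbed = [(e + rng.gauss(0, elev_error_m), s) for e, s in cells]
        results.append(suitability_fraction(perturbed))
    return min(results), max(results)

cells = [(400, 10), (800, 5), (1500, 25), (2100, 12), (1200, 40)]
lo, hi = sensitivity(cells, elev_error_m=50)
print(f"suitable fraction ranges {lo:.2f}-{hi:.2f} under ±50 m elevation error")
```

A narrow output range under plausible input error suggests the habitat associations are robust, which is the kind of conclusion the paper reports.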
Assessment of change in dynamic psychotherapy.
Høglend, P; Bøgwald, K P; Amlo, S; Heyerdahl, O; Sørbye, O; Marble, A; Sjaastad, M C; Bentsen, H
2000-01-01
Five scales have been developed to assess changes that are consistent with the therapeutic rationales and procedures of dynamic psychotherapy. Seven raters evaluated 50 patients before and 36 patients again after brief dynamic psychotherapy. A factor analysis indicated that the scales represent a dimension that is discriminable from general symptoms. A summary measure, Dynamic Capacity, was rated with acceptable reliability by a single rater. However, average scores of three raters were needed for good reliability of change ratings. The scales seem to be sufficiently fine-grained to capture statistically and clinically significant changes during brief dynamic psychotherapy.
The verification of LANDSAT data in the geographical analysis of wetlands in west Tennessee
NASA Technical Reports Server (NTRS)
Rehder, J.; Quattrochi, D. A.
1978-01-01
The reliability of LANDSAT imagery as a medium for identifying, delimiting, monitoring, measuring, and mapping wetlands in west Tennessee was assessed to verify LANDSAT as an accurate, efficient cartographic tool that could be employed by a wide range of users to study wetland dynamics. The verification procedure was based on the visual interpretation and measurement of multispectral imagery. The accuracy testing procedure was predicated on surrogate ground truth data gleaned from medium altitude imagery of the wetlands. Fourteen sites or case study areas were selected from individual 9 x 9 inch photo frames on the aerial photography. These sites were then used as data control calibration parameters for assessing the cartographic accuracy of the LANDSAT imagery. An analysis of results obtained from the verification tests indicated that 1:250,000 scale LANDSAT imagery was the most reliable for visually mapping and measuring wetlands using the area grid technique. The mean areal percentage of accuracy was 93.54 percent (real) and 96.93 percent (absolute). As a test of accuracy, the LANDSAT 1:250,000 scale overall wetland measurements were compared with an area cell mensuration of the swamplands from 1:130,000 scale color infrared U-2 aircraft imagery. The comparative totals substantiated the results from the LANDSAT verification procedure.
Doi, Kentaro; Tanaka, Shinsuke; Iida, Hideo; Eto, Hitomi; Kato, Harunosuke; Aoi, Noriyuki; Kuno, Shinichiro; Hirohi, Toshitsugu; Yoshimura, Kotaro
2013-11-01
The heterogeneous stromal vascular fraction (SVF), containing adipose-derived stem/progenitor cells (ASCs), can be easily isolated through enzymatic digestion of aspirated adipose tissue. In clinical settings, however, strict control of technical procedures according to standard operating procedures and validation of cell-processing conditions are required. Therefore, we evaluated the efficiency and reliability of an automated system for SVF isolation from adipose tissue. SVF cells, freshly isolated using the automated procedure, showed comparable number and viability to those from manual isolation. Flow cytometric analysis confirmed an SVF cell composition profile similar to that after manual isolation. In addition, the ASC yield after 1 week in culture was also not significantly different between the two groups. Our clinical study, in which SVF cells isolated with the automated system were transplanted with aspirated fat tissue for soft tissue augmentation/reconstruction in 42 patients, showed satisfactory outcomes with no serious side-effects. Taken together, our results suggested that the automated isolation system is as reliable a method as manual isolation and may also be useful in clinical settings. Automated isolation is expected to enable cell-based clinical trials in small facilities with an aseptic room, without the necessity of a good manufacturing practice-level cell processing area. Copyright © 2012 John Wiley & Sons, Ltd.
Inter-rater reliability of surgical reviews for AREN03B2: a COG renal tumor committee study.
Hamilton, Thomas E; Barnhart, Douglas; Gow, Kenneth; Ferrer, Fernando; Kandel, Jessica; Glick, Richard; Dasgupta, Roshni; Naranjo, Arlene; He, Ying; Gratias, Eric; Geller, James; Mullen, Elizabeth; Ehrlich, Peter
2014-01-01
The Children's Oncology Group (COG) renal tumor study (AREN03B2) requires real-time central review of radiology, pathology, and the surgical procedure to determine appropriate risk-based therapy. The purpose of this study was to determine the inter-rater reliability of the surgical reviews. Of the first 3200 enrolled AREN03B2 patients, a sample of 100 enriched for blood vessel involvement, spill, rupture, and lymph node involvement was selected for analysis. The surgical assessment was then performed independently by two blinded surgical reviewers and compared to the original assessment, which had been completed by another of the committee surgeons. Variables assessed included surgeon-determined local tumor stage, overall disease stage, type of renal procedure performed, presence of tumor rupture, occurrence of intraoperative tumor spill, blood vessel involvement, presence of peritoneal implants, and interpretation of residual disease. Inter-rater reliability was measured using Fleiss' kappa statistics with two-sided hypothesis tests (kappa, p-value). Local tumor stage correlated in all 3 reviews except in one case (Kappa=0.9775, p<0.001). Similarly, overall disease stage had excellent correlation (0.9422, p<0.001). There was strong correlation for type of renal procedure (0.8357, p<0.001), presence of tumor rupture (0.6858, p<0.001), intraoperative tumor spill (0.6493, p<0.001), and blood vessel involvement (0.6470, p<0.001). Variables that had lower correlation were determination of the presence of peritoneal implants (0.2753, p<0.001) and interpretation of residual disease status (0.5310, p<0.001). The inter-rater reliability of the surgical review is high, based on the strong consistency of the 3 independent review results. This analysis provides validation and establishes precedent for real-time central surgical review to determine treatment assignment in a risk-based strategy for multimodal cancer therapy. © 2014.
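Fleiss' kappa, the agreement statistic used in studies like this one, can be computed directly from a subjects-by-categories count matrix. A minimal sketch (the counts are illustrative, not study data):

```python
def fleiss_kappa(counts):
    """Fleiss' kappa from an N-subjects x k-categories matrix,
    where counts[i][j] is how many raters put subject i in category j."""
    n_raters = sum(counts[0])          # raters per subject (assumed constant)
    n_subjects = len(counts)
    # Per-subject observed agreement, averaged over subjects
    p_obs = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ) / n_subjects
    # Chance agreement from the marginal category proportions
    total = n_subjects * n_raters
    p_j = [sum(row[j] for row in counts) / total for j in range(len(counts[0]))]
    p_exp = sum(p * p for p in p_j)
    return (p_obs - p_exp) / (1 - p_exp)

# Three raters, perfect agreement on every case → kappa = 1.0
print(fleiss_kappa([[3, 0], [0, 3], [3, 0]]))  # → 1.0
```

Values near 1 indicate near-perfect agreement (as for tumor stage above); values near 0 indicate agreement no better than chance.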
The Effect of Guessing on Item Reliability under Answer-Until-Correct Scoring
ERIC Educational Resources Information Center
Kane, Michael; Moloney, James
1978-01-01
The answer-until-correct (AUC) procedure requires that examinees respond to a multi-choice item until they answer it correctly. Using a modified version of Horst's model for examinee behavior, this paper compares the effect of guessing on item reliability for the AUC procedure and the zero-one scoring procedure. (Author/CTM)
ERIC Educational Resources Information Center
van Iterson, Loretta; Augustijn, Paul B.; de Jong, Peter F.; van der Leij, Aryan
2013-01-01
The goal of this study was to investigate reliable cognitive change in epilepsy by developing computational procedures to determine reliable change index scores (RCIs) for the Dutch Wechsler Intelligence Scales for Children. First, RCIs were calculated based on stability coefficients from a reference sample. Then, these RCIs were applied to a…
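A reliable change index of the kind described reduces to dividing the observed score change by the standard error of a difference score. A minimal Jacobson-Truax style sketch; the SD and test-retest reliability below are invented placeholders, not the Dutch Wechsler values:

```python
import math

def reliable_change_index(score_1, score_2, sd, r_xx):
    """RCI: score change divided by the standard error of the
    difference between two measurements of the same person."""
    sem = sd * math.sqrt(1 - r_xx)        # standard error of measurement
    se_diff = sem * math.sqrt(2)          # SE of a difference score
    return (score_2 - score_1) / se_diff

# Hypothetical IQ scores: SD = 15, test-retest reliability r = 0.90.
rci = reliable_change_index(100, 114, sd=15, r_xx=0.90)
print(round(rci, 2), abs(rci) > 1.96)  # a 14-point change exceeds chance at the 5% level
```

Changes whose |RCI| exceeds 1.96 are conventionally labeled reliable at the 5% level; smaller changes are indistinguishable from measurement noise.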
What Makes AS Marking Reliable? An Experiment with Some Stages from the Standardisation Process
ERIC Educational Resources Information Center
Greatorex, Jackie; Bell, John F.
2008-01-01
It is particularly important that GCSE and A-level marking is valid and reliable as it affects the life chances of many young people in England. Current developments in marking technology are coinciding with potential changes in procedures to ensure valid and reliable marking. In this research the effectiveness of procedures to facilitate the…
A DNA fingerprinting procedure for ultra high-throughput genetic analysis of insects.
Schlipalius, D I; Waldron, J; Carroll, B J; Collins, P J; Ebert, P R
2001-12-01
Existing procedures for the generation of polymorphic DNA markers are not optimal for insect studies in which the organisms are often tiny and background molecular information is often non-existent. We have used a new high throughput DNA marker generation protocol called randomly amplified DNA fingerprints (RAF) to analyse the genetic variability in three separate strains of the stored grain pest, Rhyzopertha dominica. This protocol is quick, robust and reliable even though it requires minimal sample preparation, minute amounts of DNA and no prior molecular analysis of the organism. Arbitrarily selected oligonucleotide primers routinely produced approximately 50 scoreable polymorphic DNA markers, between individuals of three independent field isolates of R. dominica. Multivariate cluster analysis using forty-nine arbitrarily selected polymorphisms generated from a single primer reliably separated individuals into three clades corresponding to their geographical origin. The resulting clades were quite distinct, with an average genetic difference of 37.5 +/- 6.0% between clades and of 21.0 +/- 7.1% between individuals within clades. As a prelude to future gene mapping efforts, we have also assessed the performance of RAF under conditions commonly used in gene mapping. In this analysis, fingerprints from pooled DNA samples accurately and reproducibly reflected RAF profiles obtained from individual DNA samples that had been combined to create the bulked samples.
Dexter, Franklin; Ledolter, Johannes; Hindman, Bradley J
2016-01-01
In this Statistical Grand Rounds, we review methods for the analysis of the diversity of procedures among hospitals, the activities among anesthesia providers, etc. We apply multiple methods and consider their relative reliability and usefulness for perioperative applications, including calculations of SEs. We also review methods for comparing the similarity of procedures among hospitals, activities among anesthesia providers, etc. We again apply multiple methods and consider their relative reliability and usefulness for perioperative applications. The applications include strategic analyses (e.g., hospital marketing) and human resource analytics (e.g., comparisons among providers). Measures of diversity of procedures and activities (e.g., Herfindahl and Gini-Simpson index) are used for quantification of each facility (hospital) or anesthesia provider, one at a time. Diversity can be thought of as a summary measure. Thus, if the diversity of procedures for 48 hospitals is studied, the diversity (and its SE) is being calculated for each hospital. Likewise, the effective numbers of common procedures at each hospital can be calculated (e.g., by using the exponential of the Shannon index). Measures of similarity are pairwise assessments. Thus, if quantifying the similarity of procedures among cases with a break or handoff versus cases without a break or handoff, a similarity index represents a correlation coefficient. There are several different measures of similarity, and we compare their features and applicability for perioperative data. We rely extensively on sensitivity analyses to interpret observed values of the similarity index.
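The diversity measures named above reduce to a few formulas over the proportions of each procedure at a facility. A minimal sketch with an invented case-mix vector:

```python
import math

def diversity(counts):
    """Herfindahl index, Gini-Simpson index, and the effective number
    of common procedures (exponential of the Shannon index) for one
    facility, from raw procedure counts."""
    total = sum(counts)
    p = [c / total for c in counts if c > 0]
    herfindahl = sum(x * x for x in p)
    gini_simpson = 1 - herfindahl
    shannon = -sum(x * math.log(x) for x in p)
    return herfindahl, gini_simpson, math.exp(shannon)

# Hypothetical hospital with four equally common procedures:
h, gs, effective = diversity([25, 25, 25, 25])
print(h, gs, effective)  # effective number of procedures ≈ 4
```

A facility concentrated on one procedure drives the Herfindahl index toward 1 and the effective number toward 1; an even case mix across k procedures gives an effective number of k.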
de Vries, Anna H; Muijtjens, Arno M M; van Genugten, Hilde G J; Hendrikx, Ad J M; Koldewijn, Evert L; Schout, Barbara M A; van der Vleuten, Cees P M; Wagner, Cordula; Tjiam, Irene M; van Merriënboer, Jeroen J G
2018-06-05
The current shift towards competency-based residency training has increased the need for objective assessment of skills. In this study, we developed and validated an assessment tool that measures technical and non-technical competency in transurethral resection of bladder tumour (TURBT). The 'Test Objective Competency' (TOCO)-TURBT tool was designed by means of cognitive task analysis (CTA), which included expert consensus. The tool consists of 51 items, divided into 3 phases: preparatory (n = 15), procedural (n = 21), and completion (n = 15). For validation of the TOCO-TURBT tool, 2 TURBT procedures were performed and videotaped by 25 urologists and 51 residents in a simulated setting. The participants' degree of competence was assessed by a panel of eight independent expert urologists using the TOCO-TURBT tool. Each procedure was assessed by two raters. Feasibility, acceptability and content validity were evaluated by means of a quantitative cross-sectional survey. Regression analyses were performed to assess the strength of the relation between experience and test scores (construct validity). Reliability was analysed by generalizability theory. The majority of assessors and urologists indicated the TOCO-TURBT tool to be a valid assessment of competency and would support the implementation of the TOCO-TURBT assessment as a certification method for residents. Construct validity was clearly established for all outcome measures of the procedural phase (all r > 0.5, p < 0.01). Generalizability-theory analysis showed high reliability (coefficient Phi ≥ 0.8) when using the format of two assessors and two cases. This study provides first evidence that the TOCO-TURBT tool is a feasible, valid and reliable assessment tool for measuring competency in TURBT. The tool has the potential to be used for future certification of competencies for residents and urologists. 
The methodology of CTA might be valuable in the development of assessment tools in other areas of clinical practice.
Meng, Jiang; Dong, Xiao-ping; Zhou, Yi-sheng; Jiang, Zhi-hong; Leung, Kelvin Sze-Yin; Zhao, Zhong-zhen
2007-02-01
To optimize the extraction procedure of essential oil from H. cordata using SFE-CO2 and to analyze the chemical composition of the essential oil. The extraction procedure for essential oil from fresh H. cordata was optimized with an orthogonal experiment. The essential oil of fresh H. cordata was analysed by GC-MS. The optimized preparative procedure was as follows: the essential oil of H. cordata was extracted at a temperature of 35 °C and a pressure of 15,000 kPa for 20 min. Thirty-eight chemical components were identified and their relative contents quantified. The optimized preparative procedure is reliable and can guarantee the quality of the essential oil.
[Surgical technique in patients with chronic pancreatitis].
Pronin, N A; Natalskiy, A A; Tarasenko, S V; Pavlov, A V; Fedoseev, V A
To justify ligation of the vascular branches of the anterior pancreaticoduodenal arterial arch, or of the gastroduodenal artery proximal to its bifurcation. The method was tested on substantial clinical material: 147 patients with recurrent chronic pancreatitis. The interventions comprised the Frey and Beger procedures and the Berne variant of the latter. Comparative analysis showed significant advantages of vascular ligation during pancreatectomy.
Perceived Uncertainty Sources in Wind Power Plant Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Damiani, Rick R
This presentation for the Fourth Wind Energy Systems Engineering Workshop covers some of the uncertainties that still impact turbulent wind operation and how these affect design and structural reliability; identifies key sources and prioritization for R and D; and summarizes an analysis of current procedures, industry best practice, standards, and expert opinions.
Path Analysis on Educational Fiscal Decision-Making Mechanism in China
ERIC Educational Resources Information Center
Zhao, Hongbin; Sun, Baicai
2007-01-01
In China's current educational fiscal decision making, the problems are as follows: having no law to rely on, or failing to abide by available laws; the absence of equity and efficiency; and the lack of standardized decision-making procedures. It is necessary to set up an effective fiscal decision-making mechanism in education and rationally devise reliable paths.
Investigating the Stability of Four Methods for Estimating Item Bias.
ERIC Educational Resources Information Center
Perlman, Carole L.; And Others
The reliability of item bias estimates was studied for four methods: (1) the transformed delta method; (2) Shepard's modified delta method; (3) Rasch's one-parameter residual analysis; and (4) the Mantel-Haenszel procedure. Bias statistics were computed for each sample using all methods. Data were from administration of multiple-choice items from…
ERIC Educational Resources Information Center
Albanese, Mark A.; Jacobs, Richard M.
1990-01-01
The reliability and validity of a procedure to measure diagnostic-reasoning and problem-solving skills taught in predoctoral orthodontic education were studied using 68 second year dental students. The procedure includes stimulus material and 33 multiple-choice items. It is a feasible way of assessing problem-solving skills in dentistry education…
Space solar array reliability: A study and recommendations
NASA Astrophysics Data System (ADS)
Brandhorst, Henry W., Jr.; Rodiek, Julie A.
2008-12-01
Providing reliable power over the anticipated mission life is critical to all satellites; therefore solar arrays are one of the most vital links to satellite mission success. Furthermore, solar arrays are exposed to the harshest environment of virtually any satellite component. In the past 10 years 117 satellite solar array anomalies have been recorded with 12 resulting in total satellite failure. Through an in-depth analysis of satellite anomalies listed in the Airclaim's Ascend SpaceTrak database, it is clear that solar array reliability is a serious, industry-wide issue. Solar array reliability directly affects the cost of future satellites through increased insurance premiums and a lack of confidence by investors. Recommendations for improving reliability through careful ground testing, standardization of testing procedures such as the emerging AIAA standards, and data sharing across the industry will be discussed. The benefits of creating a certified module and array testing facility that would certify in-space reliability will also be briefly examined. Solar array reliability is an issue that must be addressed to both reduce costs and ensure continued viability of the commercial and government assets on orbit.
Kalwitzki, T; Huter, K; Runte, R; Breuninger, K; Janatzek, S; Gronemeyer, S; Gansweid, B; Rothgang, H
2017-03-01
Introduction: In the broad-based consortium project "Reha XI - Identifying rehabilitative requirements in medical service assessments: evaluation and implementation", a comprehensive analysis of the corresponding procedures was carried out by the medical services of the German Health Insurance Funds (MDK). On the basis of this analysis, a Good Practice Standard (GPS) for assessments was drawn up and scientifically evaluated. This article discusses the findings and applicability of the GPS as the basis for a nationwide standardized procedure in Germany as required by the Second Act to Strengthen Long-Term Care (PSG II) under Vol. XI Para. 18 (6) of the German Social Welfare Code. Method: The consortium project comprised four project phases: 1. Qualitative and quantitative situation analysis of the procedures for ascertaining rehabilitative needs in care assessments carried out by the MDK; 2. Development of a Good Practice Standard (GPS) in a structured, consensus-based procedure; 3. Scientific evaluation of the validity, reliability and practicability of the assessment procedure according to the GPS in the MDK's operational practice; 4. Survey of long-term care insurance funds with respect to the appropriateness of the rehabilitation recommendations drawn up by care assessors in line with the GPS for providing a qualified recommendation for the applicant. The evaluation carried out in the third project phase was subject to methodological limitations that may have given rise to distortions in the findings. Findings: On the basis of the situation analysis, 7 major thematic areas were identified in which improvements were implemented by applying the GPS. For the evaluation of the GPS, a total of 3 247 applicants were assessed in line with the GPS; in 6.3% of the applicants, an indication for medical rehabilitation was determined. 
The GPS procedure showed a high degree of reliability and practicability, but the values for the validity of the assessment procedure were highly unsatisfactory. The degree of acceptance by the long-term care insurance funds with respect to the recommendations for rehabilitation following the GPS procedure was high. Conclusion: The application of a general standard across all MDKs shows marked improvements in the quality of the assessment procedure and leads more frequently to the ascertainment of an indication for medical rehabilitation. The methodological problems and the unsatisfactory findings with respect to the validity of the assessors' decisions require further scientific scrutiny. © Georg Thieme Verlag KG Stuttgart · New York.
Foster, J D; Miskovic, D; Allison, A S; Conti, J A; Ockrim, J; Cooper, E J; Hanna, G B; Francis, N K
2016-06-01
Laparoscopic rectal resection is technically challenging, with outcomes dependent upon technical performance. No robust objective assessment tool exists for laparoscopic rectal resection surgery. This study aimed to investigate the application of the objective clinical human reliability analysis (OCHRA) technique for assessing technical performance of laparoscopic rectal surgery and explore the validity and reliability of this technique. Laparoscopic rectal cancer resection operations were described in the format of a hierarchical task analysis. Potential technical errors were defined. The OCHRA technique was used to identify technical errors enacted in videos of twenty consecutive laparoscopic rectal cancer resection operations from a single site. The procedural task, spatial location, and circumstances of all identified errors were logged. Clinical validity was assessed through correlation with clinical outcomes; reliability was assessed by test-retest. A total of 335 execution errors were identified, with a median of 15 per operation. More errors were observed during pelvic tasks compared with abdominal tasks (p < 0.001). Within the pelvis, more errors were observed during dissection on the right side than the left (p = 0.03). Test-retest confirmed reliability (r = 0.97, p < 0.001). A significant correlation was observed between error frequency and mesorectal specimen quality (rs = 0.52, p = 0.02) and with blood loss (rs = 0.609, p = 0.004). OCHRA offers a valid and reliable method for evaluating technical performance of laparoscopic rectal surgery.
Reliable assessment of laparoscopic performance in the operating room using videotape analysis.
Chang, Lily; Hogle, Nancy J; Moore, Brianna B; Graham, Mark J; Sinanan, Mika N; Bailey, Robert; Fowler, Dennis L
2007-06-01
The Global Operative Assessment of Laparoscopic Skills (GOALS) is a valid assessment tool for objectively evaluating the technical performance of laparoscopic skills in surgery residents. We hypothesized that GOALS would reliably differentiate between an experienced (expert) and an inexperienced (novice) laparoscopic surgeon (construct validity) based on a blinded videotape review of a laparoscopic cholecystectomy procedure. Ten board-certified surgeons actively engaged in the practice and teaching of laparoscopy reviewed and evaluated the videotaped operative performance of one novice and one expert laparoscopic surgeon using GOALS. Each reviewer recorded a score for both the expert and the novice videotape reviews in each of the 5 domains in GOALS (depth perception, bimanual dexterity, efficiency, tissue handling, and overall competence). The scores for the expert and the novice were compared and statistically analyzed using single-factor analysis of variance (ANOVA). The expert scored significantly higher than the novice in the domains of depth perception (p = .005), bimanual dexterity (p = .001), efficiency (p = .001), and overall competence (p = .001); there was no difference between the two for tissue handling. Interrater reliability was Cronbach alpha = .93 for the reviewers of the novice tape and Cronbach alpha = .87 for the expert tape. The Global Operative Assessment of Laparoscopic Skills is a valid, objective assessment tool for evaluating technical surgical performance when used to blindly evaluate an intraoperative videotape recording of a laparoscopic procedure.
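The interrater reliability quoted above is Cronbach's alpha. A minimal sketch of the computation, treating raters as the "items" of the alpha formula and using invented GOALS-style domain scores (not the study's data):

```python
# Cronbach's alpha sketch: rows are raters ("items"), columns are scored units.

def variance(xs):
    # Sample variance (n - 1 denominator)
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    k = len(items)
    totals = [sum(col) for col in zip(*items)]
    item_var = sum(variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Hypothetical GOALS domain scores (1-5) from three raters over five domains
rater_scores = [
    [4, 3, 5, 4, 4],
    [4, 4, 5, 3, 4],
    [4, 3, 5, 4, 5],
]
alpha = cronbach_alpha(rater_scores)
print(round(alpha, 2))
```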
Gariepy, Aileen M.; Creinin, Mitchell D.; Schwarz, Eleanor B.; Smith, Kenneth J.
2011-01-01
OBJECTIVE To estimate the probability of successful sterilization after a hysteroscopic or laparoscopic sterilization procedure. METHODS An evidence-based clinical decision analysis using a Markov model was performed to estimate the probability of a successful sterilization procedure using laparoscopic sterilization, hysteroscopic sterilization in the operating room, and hysteroscopic sterilization in the office. Procedure and follow-up testing probabilities for the model were estimated from published sources. RESULTS In the base case analysis, the proportion of women having a successful sterilization procedure on the first attempt is 99% for laparoscopic, 88% for hysteroscopic in the operating room, and 87% for hysteroscopic in the office. The probability of having a successful sterilization procedure within one year is 99% with laparoscopic, 95% for hysteroscopic in the operating room, and 94% for hysteroscopic in the office. These estimates for hysteroscopic success include approximately 6% of women who attempt hysteroscopic sterilization but are ultimately sterilized laparoscopically. Approximately 5% of women who have a failed hysteroscopic attempt decline further sterilization attempts. CONCLUSIONS Women choosing laparoscopic sterilization are more likely than those choosing hysteroscopic sterilization to have a successful sterilization procedure within one year. However, the risk of failed sterilization and subsequent pregnancy must be considered when choosing a method of sterilization. PMID:21775842
Gariepy, Aileen M; Creinin, Mitchell D; Schwarz, Eleanor B; Smith, Kenneth J
2011-08-01
To estimate the probability of successful sterilization after a hysteroscopic or laparoscopic sterilization procedure. An evidence-based clinical decision analysis using a Markov model was performed to estimate the probability of a successful sterilization procedure using laparoscopic sterilization, hysteroscopic sterilization in the operating room, and hysteroscopic sterilization in the office. Procedure and follow-up testing probabilities for the model were estimated from published sources. In the base case analysis, the proportion of women having a successful sterilization procedure on the first attempt is 99% for laparoscopic sterilization, 88% for hysteroscopic sterilization in the operating room, and 87% for hysteroscopic sterilization in the office. The probability of having a successful sterilization procedure within 1 year is 99% with laparoscopic sterilization, 95% for hysteroscopic sterilization in the operating room, and 94% for hysteroscopic sterilization in the office. These estimates for hysteroscopic success include approximately 6% of women who attempt hysteroscopic sterilization but are ultimately sterilized laparoscopically. Approximately 5% of women who have a failed hysteroscopic attempt decline further sterilization attempts. Women choosing laparoscopic sterilization are more likely than those choosing hysteroscopic sterilization to have a successful sterilization procedure within 1 year. However, the risk of failed sterilization and subsequent pregnancy must be considered when choosing a method of sterilization.
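The pathway described in this abstract can be caricatured as a small absorbing Markov chain. The transition probabilities below are illustrative assumptions chosen only to resemble the published base-case figures, not values taken from the paper:

```python
# Toy absorbing Markov chain for a sterilization pathway.
# States: 0 = awaiting/retrying, 1 = sterilized (absorbing), 2 = declined (absorbing).
import numpy as np

P = np.array([
    [0.10, 0.85, 0.05],   # each cycle: retry, succeed, or decline (assumed)
    [0.00, 1.00, 0.00],
    [0.00, 0.00, 1.00],
])

state = np.array([1.0, 0.0, 0.0])   # everyone starts awaiting the procedure
for _ in range(12):                  # twelve monthly cycles, roughly one year
    state = state @ P

p_success_year = state[1]
print(round(p_success_year, 3))
```

Propagating the state vector through the transition matrix is the basic mechanism of a Markov decision analysis; the published model additionally layers on procedure-specific and follow-up-testing probabilities.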
NASA Astrophysics Data System (ADS)
Dobronets, Boris S.; Popova, Olga A.
2018-05-01
The paper considers a new approach to regression modeling that uses aggregated data presented in the form of density functions. Approaches to improving the reliability of aggregation of empirical data are considered: improving accuracy and estimating errors. We discuss procedures of data aggregation as a preprocessing stage prior to regression modeling. An important feature of the study is a demonstration of how to represent the aggregated data. It is proposed to use piecewise polynomial models, including spline aggregate functions. We show that the proposed approach to data aggregation can be interpreted as a frequency distribution; to study its properties, the density function concept is used. Various types of mathematical models of data aggregation are discussed. For the construction of regression models, it is proposed to use data representation procedures based on piecewise polynomial models. New approaches to modeling functional dependencies based on spline aggregations are proposed.
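One way to sketch the aggregation idea, assuming a histogram as the frequency-distribution estimate and a cubic spline as the piecewise polynomial representation (both illustrative choices, not necessarily the paper's constructions):

```python
# Aggregate raw observations into an empirical density, then represent the
# density with a piecewise polynomial (spline) for downstream modeling.
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)
samples = rng.normal(loc=10.0, scale=2.0, size=5000)   # raw empirical data

# Aggregate: histogram normalized to a density estimate
counts, edges = np.histogram(samples, bins=30, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Represent the aggregated data as a piecewise polynomial model
density = CubicSpline(centers, counts)

# The spline can now stand in for the raw data, e.g. as a regression input;
# its mass over the sampled range should be close to 1.
grid = np.linspace(centers[0], centers[-1], 400)
vals = np.clip(density(grid), 0, None)
mass = np.sum(vals) * (grid[1] - grid[0])
print(round(mass, 2))
```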
Reliability based fatigue design and maintenance procedures
NASA Technical Reports Server (NTRS)
Hanagud, S.
1977-01-01
A stochastic model has been developed to describe the probability of the fatigue process by assuming a varying hazard rate. This stochastic model can be used to obtain the desired probability of a crack of a certain length at a given location after a certain number of cycles or a certain time. Quantitative estimation of the developed model is also discussed. Application of the model to develop a procedure for reliability-based, cost-effective fail-safe structural design is presented. This design procedure includes the reliability improvement due to inspection and repair. Methods of obtaining optimum inspection and maintenance schemes are treated.
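The notion of a varying hazard rate can be illustrated with a Weibull-type hazard, an assumed functional form that is not necessarily the paper's model; the probability of a crack by N cycles then has a closed form:

```python
# With hazard h(t) = (beta/eta) * (t/eta)**(beta - 1), the cumulative hazard
# integrates to (t/eta)**beta, so P(crack by n) = 1 - exp(-(n/eta)**beta).
import math

def crack_probability(n_cycles, eta, beta):
    """P(crack by n_cycles) under a Weibull hazard with scale eta, shape beta."""
    return 1.0 - math.exp(-((n_cycles / eta) ** beta))

# Illustrative parameters: characteristic life 1e5 cycles, increasing hazard
eta, beta = 1e5, 2.5
for n in (1e4, 5e4, 1e5):
    print(int(n), round(crack_probability(n, eta, beta), 4))
```

With beta > 1 the hazard rate rises with accumulated cycles, which is the qualitative behavior a fatigue model needs; inspection and repair would reset or truncate this distribution.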
Factor analysis of social skills inventory responses of Italians and Americans.
Galeazzi, Aldo; Franceschina, Emilio; Holmes, George R
2002-06-01
The Social Skills Inventory is a 90-item self-report procedure designed to measure social and communication skills. The inventory measures six dimensions, namely, Emotional Expressivity, Emotional Sensitivity, Emotional Control, Social Expressivity, Social Sensitivity, and Social Control. The Italian version was administered in several cities in Northern Italy to 500 Italian participants ranging in age from 15 to 59 years. Factor analysis appears to confirm the adequacy of the inventory for the Italian adult population. Results indicate strong similarities between the Italian and American populations with respect to the measure of social skills. Indexes of internal reliability and test-retest reliability are good for almost all subscales of the inventory, which should encourage the use of this inventory with Italian samples.
Pretty, Iain A; Maupomé, Gerardo
2004-04-01
Dentists are involved in diagnosing disease in every aspect of their clinical practice. A range of tests, systems, guides and equipment--which can be generally referred to as diagnostic procedures--are available to aid in diagnostic decision making. In this era of evidence-based dentistry, and given the increasing demand for diagnostic accuracy and properly targeted health care, it is important to assess the value of such diagnostic procedures. Doing so allows dentists to weight appropriately the information these procedures supply, to purchase new equipment if it proves more reliable than existing equipment or even to discard a commonly used procedure if it is shown to be unreliable. This article, the first in a 6-part series, defines several concepts used to express the usefulness of diagnostic procedures, including reliability and validity, and describes some of their operating characteristics (statistical measures of performance), in particular, specificity and sensitivity. Subsequent articles in the series will discuss the value of diagnostic procedures used in daily dental practice and will compare today's most innovative procedures with established methods.
Lamarão, Andressa M.; Costa, Lucíola C. M.; Comper, Maria L. C.; Padula, Rosimeire S.
2014-01-01
Background: Observational instruments, such as the Rapid Entire Body Assessment (REBA), quickly assess biomechanical risks present in the workplace. However, in order to use these instruments, it is necessary to conduct a translation/cross-cultural adaptation of the instrument and test its measurement properties. Objectives: To perform the translation and cross-cultural adaptation to Brazilian-Portuguese and test the reliability of the REBA instrument. Method: The procedures of translation and cross-cultural adaptation to Brazilian-Portuguese were conducted following proposed guidelines that involved translation, synthesis of translations, back translation, committee review and testing of the pre-final version. In addition, intra- and inter-rater reliability and percent agreement were obtained with the linear weighted kappa coefficient, reported with 95% confidence intervals and 2×2 cross-tabulations. Results: The procedures for translation and adaptation were adequate and the necessary adjustments were made to the instrument. The intra- and inter-rater reliability showed values of 0.104 to 0.504, ranging from very poor to moderate. The percent agreement values ranged from 5.66% to 69.81%. The percent agreement was closest to 100% for the item 'upper arm' (69.81%) for intra-rater 1 and for the items 'legs' and 'upper arm' (62.26%) for intra-rater 2. Conclusions: The processes of translation and cross-cultural adaptation were conducted on the REBA instrument and a Brazilian version of the instrument was obtained. However, although reliability testing was conducted on the translated and adapted version, the reliability values are unacceptable according to the guideline standards, indicating that reliability must be re-evaluated. Therefore, caution should be taken in interpreting the biomechanical risks measured by this instrument. PMID:25003273
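The linear weighted kappa statistic used above can be sketched as follows; the 2-rater contingency table over ordinal risk levels is invented for illustration:

```python
# Linearly weighted kappa: disagreements are penalized in proportion to how
# many ordinal categories apart the two raters are.
import numpy as np

def linear_weighted_kappa(table):
    table = np.asarray(table, dtype=float)
    n = table.sum()
    k = table.shape[0]
    i, j = np.indices(table.shape)
    w = np.abs(i - j) / (k - 1)          # linear disagreement weights
    p_obs = table / n
    p_exp = np.outer(table.sum(axis=1), table.sum(axis=0)) / n**2
    return 1 - (w * p_obs).sum() / (w * p_exp).sum()

# Hypothetical cross-tabulation of two raters over 4 ordinal risk levels
table = [
    [10, 3, 1, 0],
    [ 4, 8, 3, 1],
    [ 1, 3, 7, 2],
    [ 0, 1, 2, 6],
]
kappa = linear_weighted_kappa(table)
print(round(kappa, 3))
```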
Criado-Fornelio, A; Buling, A; Barba-Carretero, J C
2009-02-01
We developed and validated a real-time polymerase chain reaction (PCR) assay using fluorescent hybridization probes and melting curve analysis to identify the PKD1 exon 29 (C-->A) mutation, which is implicated in polycystic kidney disease of cats. DNA was isolated from peripheral blood of 20 Persian cats. Use of the new real-time PCR and melting curve analysis on these samples indicated that 13 cats (65%) were wild-type homozygotes and seven cats (35%) were heterozygotes. Both PCR-RFLP and sequencing procedures were in full agreement with the real-time PCR test results. Sequence analysis showed that the mutant gene had the expected base change compared to the wild-type gene. The new procedure is not only very reliable but also faster than the techniques currently applied for diagnosis of the mutation.
Chang, Hing-Chiu; Bilgin, Ali; Bernstein, Adam; Trouard, Theodore P.
2018-01-01
Over the past several years, significant efforts have been made to improve the spatial resolution of diffusion-weighted imaging (DWI), aiming at better detecting subtle lesions and more reliably resolving white-matter fiber tracts. A major concern with high-resolution DWI is the limited signal-to-noise ratio (SNR), which may significantly offset the advantages of high spatial resolution. Although the SNR of DWI data can be improved by denoising in post-processing, existing denoising procedures may potentially reduce the anatomic resolvability of high-resolution imaging data. Additionally, non-Gaussian noise induced signal bias in low-SNR DWI data may not always be corrected with existing denoising approaches. Here we report an improved denoising procedure, termed diffusion-matched principal component analysis (DM-PCA), which comprises 1) identifying a group of (not necessarily neighboring) voxels that demonstrate very similar magnitude signal variation patterns along the diffusion dimension, 2) correcting low-frequency phase variations in complex-valued DWI data, 3) performing PCA along the diffusion dimension for real- and imaginary-components (in two separate channels) of phase-corrected DWI voxels with matched diffusion properties, 4) suppressing the noisy PCA components in real- and imaginary-components, separately, of phase-corrected DWI data, and 5) combining real- and imaginary-components of denoised DWI data. Our data show that the new two-channel (i.e., for real- and imaginary-components) DM-PCA denoising procedure performs reliably without noticeably compromising anatomic resolvability. Non-Gaussian noise induced signal bias could also be reduced with the new denoising method. The DM-PCA based denoising procedure should prove highly valuable for high-resolution DWI studies in research and clinical uses. PMID:29694400
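The core PCA step of the procedure can be sketched in a toy, magnitude-only form; this omits the phase correction and the separate real/imaginary channels of the full DM-PCA pipeline, and all dimensions and noise levels are invented:

```python
# PCA denoising along the diffusion dimension: voxels with similar signal
# profiles are stacked into a matrix, and trailing (noisy) principal
# components are suppressed via truncated SVD.
import numpy as np

rng = np.random.default_rng(1)

n_voxels, n_dirs, n_true = 200, 32, 3
profiles = rng.normal(size=(n_true, n_dirs))       # a few shared diffusion profiles
weights = rng.normal(size=(n_voxels, n_true))
clean = weights @ profiles                          # low-rank "clean" signal
noisy = clean + 0.5 * rng.normal(size=clean.shape)  # additive noise

u, s, vt = np.linalg.svd(noisy, full_matrices=False)
k = 3                                # retained components (assumed known here)
denoised = (u[:, :k] * s[:k]) @ vt[:k]

err_noisy = np.linalg.norm(noisy - clean)
err_denoised = np.linalg.norm(denoised - clean)
print(err_denoised < err_noisy)
```

In the real method the group of voxels is chosen by matching diffusion signal patterns rather than taking a fixed block, and the retained rank is determined from the noise level rather than assumed.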
Probabilistic Structural Analysis Methods (PSAM) for Select Space Propulsion System Components
NASA Technical Reports Server (NTRS)
1999-01-01
Probabilistic Structural Analysis Methods (PSAM) are described for the probabilistic structural analysis of engine components for current and future space propulsion systems. Components for these systems are subjected to stochastic thermomechanical launch loads. Uncertainties or randomness also occur in material properties, structural geometry, and boundary conditions. Material property stochasticity, such as in modulus of elasticity or yield strength, exists in every structure and is a consequence of variations in material composition and manufacturing processes. Procedures are outlined for computing the probabilistic structural response or reliability of the structural components. The response variables include static or dynamic deflections, strains, and stresses at one or several locations, natural frequencies, fatigue or creep life, etc. Sample cases illustrate how the PSAM methods and codes simulate input uncertainties and compute probabilistic response or reliability using a finite element model with probabilistic methods.
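The kind of probabilistic response computation outlined above can be sketched as a Monte Carlo estimate of failure probability; the geometry, distributions, and limit state below are illustrative assumptions, not PSAM itself:

```python
# Monte Carlo failure probability for a bar in tension whose service load
# and yield strength are both random (illustrative distributions).
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

area = 1e-4                                        # m^2, deterministic here
load = rng.normal(30e3, 4e3, size=n)               # N, random service load
yield_strength = rng.normal(400e6, 30e6, size=n)   # Pa, random material property

stress = load / area
p_fail = float(np.mean(stress > yield_strength))   # limit state: stress > yield
print(p_fail)
```

PSAM couples this idea to finite element response quantities (deflections, frequencies, life) and uses more efficient probabilistic methods than brute-force sampling, but the limit-state logic is the same.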
Analysis of cost regression and post-accident absence
NASA Astrophysics Data System (ADS)
Wojciech, Drozd
2017-07-01
The article presents issues related to the costs of work safety. It proves the thesis that economic aspects cannot be overlooked in effective management of occupational health and safety and that adequate expenditure on safety can bring tangible benefits to the company. Reliable analysis is essential for describing the problem of work safety; the article attempts to carry out such an analysis using the procedures of mathematical statistics [1, 2, 3].
The Use Of Computational Human Performance Modeling As Task Analysis Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacques Hugo; David Gertman
2012-07-01
During a review of the Advanced Test Reactor safety basis at the Idaho National Laboratory, human factors engineers identified ergonomic and human reliability risks involving the inadvertent exposure of a fuel element to the air during manual fuel movement and inspection in the canal. There were clear indications that these risks increased the probability of human error and possible severe physical outcomes to the operator. In response to this concern, a detailed study was conducted to determine the probability of the inadvertent exposure of a fuel element. Due to practical and safety constraints, the task network analysis technique was employed to study the work procedures at the canal. Discrete-event simulation software was used to model the entire procedure as well as the salient physical attributes of the task environment, such as distances walked, the effect of dropped tools, the effect of hazardous body postures, and physical exertion due to strenuous tool handling. The model also allowed analysis of the effect of cognitive processes such as visual perception demands, auditory information and verbal communication. The model made it possible to obtain reliable predictions of operator performance and workload estimates. It was also found that operator workload as well as the probability of human error in the fuel inspection and transfer task were influenced by the concurrent nature of certain phases of the task and the associated demand on cognitive and physical resources. More importantly, it was possible to determine with reasonable accuracy the stages as well as physical locations in the fuel handling task where operators would be most at risk of losing their balance and falling into the canal. The model also provided sufficient information for a human reliability analysis that indicated that the postulated fuel exposure accident was less than credible.
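A toy task-network simulation in the spirit of this analysis might look as follows; the step names, durations, and error probabilities are invented, and this is far simpler than the discrete-event software the study used:

```python
# Monte Carlo over a serial task network: each step has a random duration
# and a small per-step human error probability; simulation yields workload
# (total time) and overall error-rate estimates.
import random

random.seed(3)

# (mean duration s, sd, per-step error probability), all illustrative
steps = {
    "attach tool":     (30, 5, 0.002),
    "lift element":    (60, 10, 0.005),
    "walk to station": (45, 8, 0.001),
    "inspect":         (120, 20, 0.004),
    "return element":  (60, 10, 0.005),
}

def run_once():
    total, error = 0.0, False
    for mean, sd, p_err in steps.values():
        total += max(0.0, random.gauss(mean, sd))
        if random.random() < p_err:
            error = True
    return total, error

n = 20_000
results = [run_once() for _ in range(n)]
mean_time = sum(t for t, _ in results) / n
p_error = sum(e for _, e in results) / n
print(round(mean_time, 1), round(p_error, 4))
```

A real task-network model would add branching, concurrency, and workload-dependent error rates, which is where the concurrency effects reported in the study come from.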
Morris, Marie C; Gallagher, Tom K; Ridgway, Paul F
2012-01-01
The objective was to systematically review the literature to identify and grade tools used for end-point assessment of competence in procedural skills (e.g., phlebotomy, IV cannulation, suturing) in medical students prior to certification. The authors electronically searched eight bibliographic databases: ERIC, Medline, CINAHL, EMBASE, PsycINFO, PsycLIT, EBM Reviews and the Cochrane databases. Two reviewers independently reviewed the literature to identify procedural assessment tools used specifically for assessing medical students within the PRISMA framework, the inclusion/exclusion criteria and the search period. Papers on OSATS and DOPS were excluded as they focused on post-registration assessment and clinical rather than simulated competence. Of 659 abstracted articles, 56 identified procedural assessment tools. Only 11 specifically assessed medical students. The final 11 studies consisted of 1 randomised controlled trial, 4 comparative and 6 descriptive studies, yielding 12 heterogeneous procedural assessment tools for analysis. Seven tools addressed four discrete pre-certification skills: basic suture (3), airway management (2), nasogastric tube insertion (1) and intravenous cannulation (1). One tool used a generic assessment of procedural skills. Two tools focused on postgraduate laparoscopic skills and one on osteopathic students and thus were not included in this review. The levels of evidence are low with regard to reliability (κ = 0.65-0.71), and only minimum validity (face and content) is achieved. In conclusion, there are no tools designed specifically to assess competence in procedural skills in a final certification examination. There is a need to develop standardised tools with proven reliability and validity for assessment of procedural skills competence at the end of medical training. Medicine graduates must have comparable levels of procedural skills acquisition on entering the clinical workforce, irrespective of the country of training.
Assessment of Change in Dynamic Psychotherapy
Høglend, Per; Bøgwald, Kjell-Petter; Amlo, Svein; Heyerdahl, Oscar; Sørbye, Øystein; Marble, Alice; Sjaastad, Mary Cosgrove; Bentsen, Håvard
2000-01-01
Five scales have been developed to assess changes that are consistent with the therapeutic rationales and procedures of dynamic psychotherapy. Seven raters evaluated 50 patients before and 36 patients again after brief dynamic psychotherapy. A factor analysis indicated that the scales represent a dimension that is discriminable from general symptoms. A summary measure, Dynamic Capacity, was rated with acceptable reliability by a single rater. However, average scores of three raters were needed for good reliability of change ratings. The scales seem to be sufficiently fine-grained to capture statistically and clinically significant changes during brief dynamic psychotherapy. PMID:11069131
TDRSS telecommunications system, PN code analysis
NASA Technical Reports Server (NTRS)
Dixon, R.; Gold, R.; Kaiser, F.
1976-01-01
The pseudo noise (PN) codes required to support the TDRSS telecommunications services are analyzed and the impact of alternate coding techniques on the user transponder equipment, the TDRSS equipment, and all factors that contribute to the acquisition and performance of these telecommunication services is assessed. Possible alternatives to the currently proposed hybrid FH/direct sequence acquisition procedures are considered and compared relative to acquisition time, implementation complexity, operational reliability, and cost. The hybrid FH/direct sequence technique is analyzed and rejected in favor of a recommended approach which minimizes acquisition time and user transponder complexity while maximizing probability of acquisition and overall link reliability.
Light aircraft crash safety program
NASA Technical Reports Server (NTRS)
Thomson, R. G.; Hayduk, R. J.
1974-01-01
NASA is embarked upon research and development tasks aimed at providing the general aviation industry with a reliable crashworthy airframe design technology. The goals of the NASA program are: reliable analytical techniques for predicting the nonlinear behavior of structures; significant design improvements of airframes; and simulated full-scale crash test data. The analytical tools will include both simplified procedures for estimating energy absorption characteristics and more complex computer programs for analysis of general airframe structures under crash loading conditions. The analytical techniques being developed both in-house and under contract are described, and a comparison of some analytical predictions with experimental results is shown.
NASA Technical Reports Server (NTRS)
1973-01-01
The ALERT program, a system for communicating common problems with parts, materials, and processes, is condensed and catalogued. Expanded information on selected topics is provided by relating the problem area (failure) to the cause, the investigations and findings, the suggestions for avoidance (inspections, screening tests, proper part applications), and failure analysis procedures. The basic objective of ALERT is to avoid the recurrence of parts, materials, and processes problems, thus improving the reliability of equipment produced for and used by the government.
A Turkish Version of the Critical-Care Pain Observation Tool: Reliability and Validity Assessment.
Aktaş, Yeşim Yaman; Karabulut, Neziha
2017-08-01
The study aim was to evaluate the validity and reliability of the Critical-Care Pain Observation Tool in critically ill patients. A repeated measures design was used for the study. A convenience sample of 66 patients who had undergone open-heart surgery in the cardiovascular surgery intensive care unit in Ordu, Turkey, was recruited for the study. The patients were evaluated by using the Critical-Care Pain Observation Tool at rest, during a nociceptive procedure (suctioning), and 20 minutes after the procedure while they were conscious and intubated after surgery. The Turkish version of the Critical-Care Pain Observation Tool has shown statistically acceptable levels of validity and reliability. Inter-rater reliability was supported by moderate-to-high-weighted κ coefficients (weighted κ coefficient = 0.55 to 1.00). For concurrent validity, significant associations were found between the scores on the Critical-Care Pain Observation Tool and the Behavioral Pain Scale scores. Discriminant validity was also supported by higher scores during suctioning (a nociceptive procedure) versus non-nociceptive procedures. The internal consistency of the Critical-Care Pain Observation Tool was 0.72 during a nociceptive procedure and 0.71 during a non-nociceptive procedure. The validity and reliability of the Turkish version of the Critical-Care Pain Observation Tool was determined to be acceptable for pain assessment in critical care, especially for patients who cannot communicate verbally.
De Haene, Lucia; Dalgaard, Nina Thorup; Montgomery, Edith; Grietens, Hans; Verschueren, Karine
2013-06-01
Although forced migration research on refugee family functioning clearly points to the potential breakdown of parental availability and responsiveness in the context of cumulative migration stressors, studies exploring attachment security in refugee children are surprisingly lacking so far. The authors report their findings from a 2-site, small-scale administration of an attachment measure for refugee children aged between 4 and 9 years, adapted from a reliable and validated doll-play procedure. We evaluated interrater reliability and conducted a qualitative analysis of refugee children's narrative responses to identify migration-specific representational markers of attachment quality. The level of agreement among 3 independent coders ranged from .54 to 1.00 for both study samples, providing initial psychometric evidence of the measure's value in assessing child attachment security in this population. The exploratory analysis of migration-related narrative markers pointed to specific parameters to be used in parent-child observational assessments in future validation of the attachment measure, such as parental withdrawal or trauma communication within the parent-child dyad.
DOT National Transportation Integrated Search
2012-11-30
The objective of this project was to develop technical relationships between reliability improvement strategies and reliability performance metrics. This project defined reliability, explained the importance of travel time distributions for measuring...
A Simple and Reliable Method of Design for Standalone Photovoltaic Systems
NASA Astrophysics Data System (ADS)
Srinivasarao, Mantri; Sudha, K. Rama; Bhanu, C. V. K.
2017-06-01
Standalone photovoltaic (SAPV) systems are seen as a promising method of electrifying areas of the developing world that lack power grid infrastructure. Proliferation of these systems requires a design procedure that is simple and reliable and exhibits good performance over the system's lifetime. The proposed methodology uses simple empirical formulae and easily available parameters to design SAPV systems, that is, array size with energy storage. After arriving at different array sizes (areas), performance curves are obtained for optimal design of the SAPV system with a high degree of reliability in terms of autonomy at a specified value of loss of load probability (LOLP). Based on the array-to-load ratio (ALR) and levelized energy cost (LEC) through life cycle cost (LCC) analysis, it is shown that the proposed methodology gives better performance, requires simple data and is more reliable when compared with a conventional design using monthly average daily load and insolation.
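The loss of load probability (LOLP) concept central to the sizing procedure can be sketched by stepping a simple battery energy balance through a synthetic year; all parameters below are illustrative, not the paper's design values:

```python
# LOLP sketch: count the fraction of days a PV array plus battery cannot
# serve a constant daily load, given daily insolation.
import math

def lolp(array_area_m2, batt_kwh, daily_load_kwh, insolation_kwh_m2, eff=0.15):
    soc, unmet_days = batt_kwh, 0          # start with a full battery
    for g in insolation_kwh_m2:
        soc += array_area_m2 * g * eff     # PV energy into storage
        soc = min(soc, batt_kwh)           # battery capacity limit
        soc -= daily_load_kwh              # serve the load
        if soc < 0:                        # load not fully served today
            unmet_days += 1
            soc = 0.0
    return unmet_days / len(insolation_kwh_m2)

# Synthetic year of daily insolation (kWh/m^2/day) with a seasonal swing
year = [4.5 + 1.5 * math.sin(2 * math.pi * d / 365) for d in range(365)]

small = lolp(array_area_m2=8, batt_kwh=10, daily_load_kwh=5, insolation_kwh_m2=year)
large = lolp(array_area_m2=12, batt_kwh=20, daily_load_kwh=5, insolation_kwh_m2=year)
print(round(small, 3), round(large, 3))
```

Sweeping array area and battery size in this way traces out the performance curves the abstract refers to; the design point is then chosen where LOLP meets the specified target at acceptable levelized cost.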
An Analysis of the Ratings and Interrater Reliability of High School Band Contests
ERIC Educational Resources Information Center
Hash, Phillip M.
2012-01-01
The purpose of this study was to examine procedures for analyzing ratings of large-group festivals and provide data with which to compare results from similar events. Data consisted of ratings from senior division concert band contests sponsored by the South Carolina Band Directors Association from 2008 to 2010. Three concert-performance and two…
Thermal Adaptation Methods of Urban Plaza Users in Asia’s Hot-Humid Regions: A Taiwan Case Study
Wu, Chen-Fa; Hsieh, Yen-Fen; Ou, Sheng-Jung
2015-01-01
Thermal adaptation studies provide researchers with great insight into how people respond to thermal discomfort. This research aims to assess outdoor urban plaza conditions in hot and humid regions of Asia by conducting an evaluation of thermal adaptation. We also propose questionnaire items appropriate for determining the thermal adaptation strategies adopted by urban plaza users. A literature review was conducted, and first-hand data collected by field observations and interviews were used to gather information on thermal adaptation strategies. Item analysis, exploratory factor analysis (EFA), and confirmatory factor analysis (CFA) were applied to refine the questionnaire items and determine the reliability of the questionnaire evaluation procedure. The reliability and validity of the items and of the construction process were also analyzed. The researchers then developed an evaluation procedure for assessing the thermal adaptation strategies of urban plaza users in hot and humid regions of Asia and formulated a questionnaire survey that was distributed in Taichung's Municipal Plaza in Taiwan. Results showed that most users responded with behavioral adaptation when experiencing thermal discomfort; if the thermal discomfort could not be alleviated, they then adopted psychological strategies. In conclusion, the evaluation procedure for assessing thermal adaptation strategies and the questionnaire developed in this study can be applied to future research on thermal adaptation strategies adopted by urban plaza users in hot and humid regions of Asia. PMID:26516881
Eksborg, Staffan
2013-01-01
Pharmacokinetic studies are important for optimizing drug dosing, but they require proper validation of the pharmacokinetic procedures used. However, simple and reliable statistical methods suitable for evaluating the predictive performance of pharmacokinetic analysis are essentially lacking. The aim of the present study was to construct and evaluate a graphical procedure for quantification of the predictive performance of individual and population pharmacokinetic compartment analysis. Original data from previously published pharmacokinetic compartment analyses after intravenous, oral, and epidural administration, and digitized data obtained from published scatter plots of observed vs predicted drug concentrations from population pharmacokinetic studies using the NPEM algorithm, the NONMEM computer program and Bayesian forecasting procedures, were used for estimating the predictive performance according to the proposed graphical method and by the method of Sheiner and Beal. The graphical plot proposed in the present paper proved to be a useful tool for evaluation of the predictive performance of both individual and population compartment pharmacokinetic analysis. The proposed method is simple to use and gives valuable information concerning time- and concentration-dependent inaccuracies that might occur in individual and population pharmacokinetic compartment analysis. Predictive performance can be quantified as the fraction of concentration ratios within arbitrarily specified ranges, e.g. within the range 0.8-1.2.
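The quantification rule stated in the last sentence, the fraction of predicted/observed concentration ratios within a specified range such as 0.8-1.2, is straightforward to sketch (the concentrations below are invented):

```python
# Fraction of predicted/observed concentration ratios within a target range.

def fraction_within(observed, predicted, low=0.8, high=1.2):
    ratios = [p / o for o, p in zip(observed, predicted)]
    inside = sum(low <= r <= high for r in ratios)
    return inside / len(ratios)

observed  = [1.0, 2.5, 4.0, 8.0, 12.0, 15.0]   # e.g. measured drug concentrations
predicted = [1.1, 2.2, 5.2, 7.6, 11.0, 21.0]   # model predictions

f = fraction_within(observed, predicted)
print(f)
```

Computing this fraction within successive time or concentration windows is what exposes the time- and concentration-dependent inaccuracies the abstract mentions.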
Jippes, Mariëlle; Driessen, Erik W; Broers, Nick J; Majoor, Gerard D; Gijselaers, Wim H; van der Vleuten, Cees P M
2013-09-01
Because successful change implementation depends on organizational readiness for change, the authors developed and assessed the validity of a questionnaire, based on a theoretical model of organizational readiness for change, designed to measure, specifically, a medical school's organizational readiness for curriculum change (MORC). In 2012, a panel of medical education experts judged and adapted a preliminary MORC questionnaire through a modified Delphi procedure. The authors administered the resulting questionnaire to medical school faculty involved in curriculum change and tested the psychometric properties using exploratory and confirmatory factor analysis, and generalizability analysis. The mean relevance score of the Delphi panel (n = 19) reached 4.2 on a five-point Likert-type scale (1 = not relevant and 5 = highly relevant) in the second round, meeting predefined criteria for completing the Delphi procedure. Faculty (n = 991) from 131 medical schools in 56 countries completed MORC. Exploratory factor analysis yielded three underlying factors (motivation, capability, and external pressure) in 12 subscales with 53 items. The scale structure suggested by exploratory factor analysis was confirmed by confirmatory factor analysis. Cronbach alpha ranged from 0.67 to 0.92 for the subscales. Generalizability analysis showed that the MORC results of 5 to 16 faculty members can reliably evaluate a school's organizational readiness for change. MORC is a valid, reliable questionnaire for measuring organizational readiness for curriculum change in medical schools. It can identify which elements in a change process require special attention so as to increase the chance of successful implementation.
Efficient sensitivity analysis and optimization of a helicopter rotor
NASA Technical Reports Server (NTRS)
Lim, Joon W.; Chopra, Inderjit
1989-01-01
Aeroelastic optimization of a system essentially consists of the determination of the optimum values of design variables which minimize the objective function and satisfy certain aeroelastic and geometric constraints. The process of aeroelastic optimization analysis is illustrated. To carry out aeroelastic optimization effectively, one needs a reliable analysis procedure to determine the steady response and stability of a rotor system in forward flight. The rotor dynamic analysis used in the present study, developed in-house at the University of Maryland, is based on finite elements in space and time. The analysis consists of two major phases: vehicle trim and rotor steady response (coupled trim analysis), and aeroelastic stability of the blade. For a reduction of helicopter vibration, the optimization process requires the sensitivity derivatives of the objective function and aeroelastic stability constraints. For this, the derivatives of steady response, hub loads, and blade stability roots are calculated using a direct analytical approach. An automated optimization procedure is developed by coupling the rotor dynamic analysis, design sensitivity analysis, and the constrained optimization code CONMIN.
Aggregative Learning Method and Its Application for Communication Quality Evaluation
NASA Astrophysics Data System (ADS)
Akhmetov, Dauren F.; Kotaki, Minoru
2007-12-01
In this paper, the so-called Aggregative Learning Method (ALM) is proposed to improve and simplify the learning and classification abilities of different data processing systems. It provides a universal basis for the design and analysis of a wide class of mathematical models. A procedure was elaborated for time series model reconstruction and analysis in linear and nonlinear cases. Data approximation accuracy (during the learning phase) and data classification quality (during the recall phase) are estimated from the introduced statistical parameters. The validity and efficiency of the proposed approach have been demonstrated through its application to monitoring of wireless communication quality, namely, for a Fixed Wireless Access (FWA) system. The procedure was shown to require only modest memory and computation resources, especially in the data classification (recall) stage. Characterized by high computational efficiency and a simple decision making procedure, the derived approaches can be useful for simple and reliable real-time surveillance and control system design.
Advanced approach to the analysis of a series of in-situ nuclear forward scattering experiments
NASA Astrophysics Data System (ADS)
Vrba, Vlastimil; Procházka, Vít; Smrčka, David; Miglierini, Marcel
2017-03-01
This study introduces a sequential fitting procedure as a specific approach to nuclear forward scattering (NFS) data evaluation. The principles and usage of this advanced evaluation method are described in detail, and its utilization is demonstrated on NFS in-situ investigations of fast processes. Such experiments frequently consist of hundreds of time spectra which need to be evaluated. The introduced procedure allows the analysis of these experiments and significantly decreases the time needed for the data evaluation. The key contributions of the study are the sequential use of the output fitting parameters of a previous data set as the input parameters for the next data set, and the option to crosscheck model suitability by applying the procedure in ascending and descending directions through the data sets. The described fitting methodology is beneficial for checking model validity and the reliability of the obtained results.
Piepho, H P
1994-11-01
Multilocation trials are often used to analyse the adaptability of genotypes in different environments and to find for each environment the genotype that is best adapted, i.e., the one that is highest yielding in that environment. For this purpose, it is of interest to obtain a reliable estimate of the mean yield of a cultivar in a given environment. This article compares two different statistical estimation procedures for this task: the Additive Main Effects and Multiplicative Interaction (AMMI) analysis and Best Linear Unbiased Prediction (BLUP). A modification of a cross-validation procedure commonly used with AMMI is suggested for trials that are laid out as a randomized complete block design. The use of these procedures is exemplified using five faba bean datasets from German registration trials. BLUP was found to outperform AMMI in four of the five faba bean datasets.
Compound estimation procedures in reliability
NASA Technical Reports Server (NTRS)
Barnes, Ron
1990-01-01
At NASA, components and subsystems of components in the Space Shuttle and Space Station generally go through a number of redesign stages. While data on failures for various design stages are sometimes available, the classical procedures for evaluating reliability only utilize the failure data on the present design stage of the component or subsystem. Often, few or no failures have been recorded on the present design stage. Previously, Bayesian estimators for the reliability of a single component, conditioned on the failure data for the present design, were developed. These new estimators permit NASA to evaluate the reliability, even when few or no failures have been recorded. Point estimates for the latter evaluation were not possible with the classical procedures. Since different design stages of a component (or subsystem) generally have a good deal in common, the development of new statistical procedures for evaluating the reliability, which consider the entire failure record for all design stages, has great intuitive appeal. A typical subsystem consists of a number of different components and each component has evolved through a number of redesign stages. The present investigations considered compound estimation procedures and related models. Such models permit the statistical consideration of all design stages of each component and thus incorporate all the available failure data to obtain estimates for the reliability of the present version of the component (or subsystem). A number of models were considered to estimate the reliability of a component conditioned on its total failure history from two design stages. It was determined that reliability estimators for the present design stage, conditioned on the complete failure history for two design stages have lower risk than the corresponding estimators conditioned only on the most recent design failure data. 
Several models were explored, and preliminary models involving the bivariate Poisson distribution and the Consael process (a bivariate Poisson process) were developed. Possible shortcomings of the models are noted. An example is given to illustrate the procedures. These investigations are ongoing, with the aim of developing estimators that extend to components (and subsystems) with three or more design stages.
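The central point above, that a Bayesian estimator stays well defined even when no failures have been recorded on the present design stage, can be illustrated with the simplest conjugate setup. This is a loose illustration, not the authors' estimators: it assumes a Gamma(a, b) prior on a constant failure rate, with invented prior and exposure values.

```python
def posterior_mean_rate(failures, time, a=0.5, b=1.0):
    """Posterior mean failure rate under a Gamma(a, b) prior with
    `failures` observed over `time` units of exposure (Poisson model)."""
    return (a + failures) / (b + time)

# With zero recorded failures the classical MLE is 0/time = 0, which is
# unusable as a point estimate; the Bayesian posterior mean is still positive.
print(posterior_mean_rate(0, 100.0))
```

Pooling failure histories across design stages, as the abstract proposes, amounts to letting earlier stages inform the prior rather than fixing a and b by hand.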
18 CFR 39.3 - Electric Reliability Organization certification.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 18 Conservation of Power and Water Resources 1 2012-04-01 2012-04-01 false Electric Reliability... CERTIFICATION OF THE ELECTRIC RELIABILITY ORGANIZATION; AND PROCEDURES FOR THE ESTABLISHMENT, APPROVAL, AND ENFORCEMENT OF ELECTRIC RELIABILITY STANDARDS § 39.3 Electric Reliability Organization certification. (a) Any...
18 CFR 39.3 - Electric Reliability Organization certification.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 18 Conservation of Power and Water Resources 1 2011-04-01 2011-04-01 false Electric Reliability... CERTIFICATION OF THE ELECTRIC RELIABILITY ORGANIZATION; AND PROCEDURES FOR THE ESTABLISHMENT, APPROVAL, AND ENFORCEMENT OF ELECTRIC RELIABILITY STANDARDS § 39.3 Electric Reliability Organization certification. (a) Any...
18 CFR 39.3 - Electric Reliability Organization certification.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 18 Conservation of Power and Water Resources 1 2014-04-01 2014-04-01 false Electric Reliability... CERTIFICATION OF THE ELECTRIC RELIABILITY ORGANIZATION; AND PROCEDURES FOR THE ESTABLISHMENT, APPROVAL, AND ENFORCEMENT OF ELECTRIC RELIABILITY STANDARDS § 39.3 Electric Reliability Organization certification. (a) Any...
18 CFR 39.3 - Electric Reliability Organization certification.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 18 Conservation of Power and Water Resources 1 2013-04-01 2013-04-01 false Electric Reliability... CERTIFICATION OF THE ELECTRIC RELIABILITY ORGANIZATION; AND PROCEDURES FOR THE ESTABLISHMENT, APPROVAL, AND ENFORCEMENT OF ELECTRIC RELIABILITY STANDARDS § 39.3 Electric Reliability Organization certification. (a) Any...
Suicide reporting content analysis: abstract development and reliability.
Gould, Madelyn S; Midle, Jennifer Bassett; Insel, Beverly; Kleinman, Marjorie
2007-01-01
Despite substantial research on media influences and the development of media guidelines on suicide reporting, research on the specifics of media stories that facilitate suicide contagion has been limited. The goal of the present study was to develop a content analytic strategy to code features in media suicide reports presumed to be influential in suicide contagion and to determine the interrater reliability of the qualitative characteristics abstracted from newspaper stories. A random subset of 151 articles from a database of 1851 newspaper suicide stories published during 1988 through 1996, which were collected as part of a national study in the United States to identify factors associated with the initiation of youth suicide clusters, was evaluated. Using a well-defined content-analysis procedure, the agreement between raters in scoring key concepts of suicide reports from the headline, the pictorial presentation, and the text was evaluated. The results show that while the majority of variables in the content analysis were highly reliable as assessed by the kappa statistic, with excellent percentages of agreement, the reliability of complicated constructs, such as sensationalizing, glorifying, or romanticizing the suicide, was comparatively low. The data emphasize that before effective guidelines and responsible suicide reporting can ensue, further explication of suicide story constructs is necessary to ensure the implementation of, and compliance with, responsible reporting on behalf of the media.
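The kappa statistic used above corrects raw percent agreement for agreement expected by chance. A minimal two-rater (Cohen's kappa) sketch follows; the label sequences are invented and stand in for two coders scoring the same articles.

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' label sequences of equal length."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n   # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in c1) / (n * n)  # chance agreement
    return (po - pe) / (1 - pe)

# Invented ratings: 4 of 6 items agree, but half the agreement is chance.
rater1 = ["yes", "yes", "no", "yes", "no", "no"]
rater2 = ["yes", "no", "no", "yes", "no", "yes"]
print(round(cohens_kappa(rater1, rater2), 3))  # 0.333
```

A kappa near 0 means agreement no better than chance even when raw percent agreement looks respectable, which is why complicated constructs can score "excellent agreement" yet low reliability.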
10 CFR 712.18 - Transferring HRP certification.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Transferring HRP certification. 712.18 Section 712.18 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability Program Procedures § 712.18 Transferring HRP certification. (a) For HRP certification to be...
10 CFR 712.22 - Hearing officer's report and recommendation.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Hearing officer's report and recommendation. 712.22 Section 712.22 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability Program Procedures § 712.22 Hearing officer's report and recommendation. Within...
An Examination of the True Reliability of Lower Limb Stiffness Measures During Overground Hopping.
Diggin, David; Anderson, Ross; Harrison, Andrew J
2016-06-01
Evidence suggests reports describing the reliability of leg-spring (kleg) and joint stiffness (kjoint) measures are contaminated by artifacts originating from digital filtering procedures. In addition, the intraday reliability of kleg and kjoint requires investigation. This study examined the effects of experimental procedures on the inter- and intraday reliability of kleg and kjoint. Thirty-two participants completed 2 trials of single-legged hopping at 1.5, 2.2, and 3.0 Hz at the same time of day across 3 days. On the final test day a fourth experimental bout took place 6 hours before or after participants' typical testing time. Kinematic and kinetic data were collected throughout. Stiffness was calculated using models of kleg and kjoint. Classifications of measurement agreement were established using thresholds for absolute and relative reliability statistics. Results illustrated that kleg and kankle exhibited strong agreement. In contrast, kknee and khip demonstrated weak-to-moderate consistency. Results suggest limits in kjoint reliability persist despite employment of appropriate filtering procedures. Furthermore, diurnal fluctuations in lower-limb muscle-tendon stiffness exhibit little effect on intraday reliability. The present findings support the existence of kleg as an attractor state during hopping, achieved through fluctuations in kjoint variables. Limits to kjoint reliability appear to represent biological function rather than measurement artifact.
Strategic planning decision making using fuzzy SWOT-TOPSIS with reliability factor
NASA Astrophysics Data System (ADS)
Mohamad, Daud; Afandi, Nur Syamimi; Kamis, Nor Hanimah
2015-10-01
Strategic planning is a process of decision making and action for long-term activities in an organization. The Strengths, Weaknesses, Opportunities and Threats (SWOT) analysis has been commonly used to help organizations strategize their future direction by analyzing the internal and external environment. However, SWOT analysis has some limitations, as it is unable to appropriately prioritize multiple alternative strategic decisions. Some efforts have been made to solve this problem by incorporating Multi Criteria Decision Making (MCDM) methods. Nevertheless, another important aspect of obtaining the decision has raised concern: the reliability of the information. Decision makers evaluate differently depending on their level of confidence or sureness in the evaluation. This study proposes a decision making procedure for strategic planning using the SWOT-TOPSIS method, incorporating the reliability factor of the evaluation based on Z-numbers. An example using a local authority on the east coast of Malaysia illustrates how to determine the ranking of strategic options and to prioritize the factors in each SWOT category.
Davis, Matthew A Cody; Spriggs, Amy; Rodgers, Alexis; Campbell, Jonathan
2018-06-01
Deficits in social skills are often exhibited in individuals with comorbid Down syndrome (DS) and autism spectrum disorder (ASD), and there is a paucity of research to help guide intervention for this population. In the present study, a multiple probe study across behaviors, replicated across participants, assessed the effectiveness of peer-delivered simultaneous prompting in teaching social skills to adults with DS-ASD, using visual analysis techniques and Tau-U statistics to measure effect. Peer-mediators with DS and intellectual disability (ID) delivered simultaneous prompting sessions reliably (i.e., > 80% reliability) to teach social skills to adults with ID and a dual diagnosis of DS-ASD, with small (Tau Weighted = .55, 90% CI [.29, .82]) to medium effects (Tau Weighted = .75, 90% CI [.44, 1]). Statistical and visual analysis findings suggest a promising social skills intervention for individuals with DS-ASD, as well as reliable delivery of simultaneous prompting procedures by individuals with DS.
The Chinese version of the Outcome Expectations for Exercise scale: validation study.
Lee, Ling-Ling; Chiu, Yu-Yun; Ho, Chin-Chih; Wu, Shu-Chen; Watson, Roger
2011-06-01
Estimates of the reliability and validity of the English nine-item Outcome Expectations for Exercise (OEE) scale have been tested and found to be valid for use in various settings, particularly among older people, with good internal consistency and validity. Data on the use of the OEE scale among older Chinese people living in the community and how cultural differences might affect the administration of the OEE scale are limited. To test the validity and reliability of the Chinese version of the Outcome Expectations for Exercise scale among older people. A cross-sectional validation study was designed to test the Chinese version of the OEE scale (OEE-C). Reliability was examined by testing both the internal consistency for the overall scale and the squared multiple correlation coefficient for the single item measure. The validity of the scale was tested on the basis of both a traditional psychometric test and a confirmatory factor analysis using structural equation modelling. The Mokken Scaling Procedure (MSP) was used to investigate if there were any hierarchical, cumulative sets of items in the measure. The OEE-C scale was tested in a group of older people in Taiwan (n=108, mean age=77.1). There was acceptable internal consistency (alpha=.85) and model fit in the scale. Evidence of the validity of the measure was demonstrated by the tests for criterion-related validity and construct validity. There was a statistically significant correlation between exercise outcome expectations and exercise self-efficacy (r=.34, p<.01). An analysis of the Mokken Scaling Procedure found that nine items of the scale were all retained in the analysis and the resulting scale was reliable and statistically significant (p=.0008). The results obtained in the present study provided acceptable levels of reliability and validity evidence for the Chinese Outcome Expectations for Exercise scale when used with older people in Taiwan. 
Future testing of the OEE-C scale needs to be carried out to see whether these results are generalisable to older Chinese people living in urban areas. Copyright © 2010 Elsevier Ltd. All rights reserved.
Interim reliability evaluation program, Browns Ferry 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mays, S.E.; Poloski, J.P.; Sullivan, W.H.
1981-01-01
Probabilistic risk analysis techniques, i.e., event tree and fault tree analysis, were utilized to provide a risk assessment of the Browns Ferry Nuclear Plant Unit 1. Browns Ferry 1 is a General Electric boiling water reactor of the BWR 4 product line with a Mark 1 (drywell and torus) containment. Within the guidelines of the IREP Procedure and Schedule Guide, dominant accident sequences that contribute to public health and safety risks were identified and grouped according to release categories.
Wagner, Brian J.; Gorelick, Steven M.
1986-01-01
A simulation nonlinear multiple-regression methodology for estimating parameters that characterize the transport of contaminants is developed and demonstrated. Finite difference contaminant transport simulation is combined with a nonlinear weighted least squares multiple-regression procedure. The technique provides optimal parameter estimates and gives statistics for assessing the reliability of these estimates under certain general assumptions about the distributions of the random measurement errors. Monte Carlo analysis is used to estimate parameter reliability for a hypothetical homogeneous soil column for which concentration data contain large random measurement errors. The value of data collected spatially versus data collected temporally was investigated for estimation of velocity, dispersion coefficient, effective porosity, first-order decay rate, and zero-order production. The use of spatial data gave estimates that were 2–3 times more reliable than estimates based on temporal data for all parameters except velocity. Comparison of estimated linear and nonlinear confidence intervals based upon Monte Carlo analysis showed that the linear approximation is poor for dispersion coefficient and zero-order production coefficient when data are collected over time. In addition, examples demonstrate transport parameter estimation for two real one-dimensional systems. First, the longitudinal dispersivity and effective porosity of an unsaturated soil are estimated using laboratory column data. We compare the reliability of estimates based upon data from individual laboratory experiments versus estimates based upon pooled data from several experiments. Second, the simulation nonlinear regression procedure is extended to include an additional governing equation that describes delayed storage during contaminant transport. The model is applied to analyze the trends, variability, and interrelationship of parameters in a mountain stream in northern California.
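The Monte Carlo reliability idea above can be shown on a toy problem: repeatedly perturb synthetic data with measurement noise, re-estimate a parameter each time, and summarize the spread of the estimates. This is a much-simplified stand-in for the paper's transport model, using a first-order decay c(t) = c0·exp(-kt) with invented values and log-linear least squares in place of the full nonlinear regression.

```python
import math
import random

def estimate_k(ts, cs):
    """Least-squares slope of ln(c) vs t; its negative estimates the decay rate k."""
    ys = [math.log(c) for c in cs]
    n = len(ts)
    tbar, ybar = sum(ts) / n, sum(ys) / n
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
             / sum((t - tbar) ** 2 for t in ts))
    return -slope

random.seed(0)
k_true, c0 = 0.5, 10.0                       # invented "true" parameters
ts = [0.5 * i for i in range(1, 11)]         # invented sampling times
estimates = []
for _ in range(500):                         # Monte Carlo replicates
    cs = [c0 * math.exp(-k_true * t) * math.exp(random.gauss(0, 0.05))
          for t in ts]                       # multiplicative measurement noise
    estimates.append(estimate_k(ts, cs))

mean_k = sum(estimates) / len(estimates)
sd_k = (sum((e - mean_k) ** 2 for e in estimates) / (len(estimates) - 1)) ** 0.5
print(round(mean_k, 3), round(sd_k, 4))
```

The standard deviation of the replicate estimates is the Monte Carlo measure of parameter reliability; repeating the exercise with spatial versus temporal sampling designs is what the study uses to compare their value.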
Reproducibility of Automated Voice Range Profiles, a Systematic Literature Review.
Printz, Trine; Rosenberg, Tine; Godballe, Christian; Dyrvig, Anne-Kirstine; Grøntved, Ågot Møller
2018-05-01
Reliable voice range profiles are of great importance when measuring effects and side effects from surgery affecting voice capacity. Automated recording systems are increasingly used, but the reproducibility of results is uncertain. Our objective was to identify and review the existing literature on test-retest accuracy of the automated voice range profile assessment. Systematic review. PubMed, Scopus, Cochrane Library, ComDisDome, Embase, and CINAHL (EBSCO). We conducted a systematic literature search of six databases from 1983 to 2016. The following keywords were used: phonetogram, voice range profile, and acoustic voice analysis. Inclusion criteria were automated recording procedure, healthy voices, and no intervention between test and retest. Test-retest values concerning fundamental frequency and voice intensity were reviewed. Of 483 abstracts, 231 full-text articles were read, resulting in six articles included in the final results. The studies found high reliability, but data are few and heterogeneous. The reviewed articles generally reported high reliability of the voice range profile, and thus clinical usefulness, but uncertainty remains because of low sample sizes and different procedures for selecting, collecting, and analyzing data. More data are needed, and clinical conclusions must be drawn with caution. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Reliability Stress-Strength Models for Dependent Observations with Applications in Clinical Trials
NASA Technical Reports Server (NTRS)
Kushary, Debashis; Kulkarni, Pandurang M.
1995-01-01
We consider the applications of stress-strength models in studies involving clinical trials. When studying the effects and side effects of certain procedures (treatments), it is often the case that observations are correlated due to subject effect, repeated measurements and observing many characteristics simultaneously. We develop maximum likelihood estimator (MLE) and uniform minimum variance unbiased estimator (UMVUE) of the reliability which in clinical trial studies could be considered as the chances of increased side effects due to a particular procedure compared to another. The results developed apply to both univariate and multivariate situations. Also, for the univariate situations we develop simple to use lower confidence bounds for the reliability. Further, we consider the cases when both stress and strength constitute time dependent processes. We define the future reliability and obtain methods of constructing lower confidence bounds for this reliability. Finally, we conduct simulation studies to evaluate all the procedures developed and also to compare the MLE and the UMVUE.
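The quantity these stress-strength models target is the reliability R = P(strength > stress). As a hedged, nonparametric sketch of that idea (the paper itself derives MLE and UMVUE forms under dependence, which are not reproduced here), the empirical estimate over two invented independent samples is:

```python
def reliability(stress, strength):
    """Empirical P(strength > stress) over all pairs of the two samples."""
    pairs = [(x, y) for x in stress for y in strength]
    return sum(y > x for x, y in pairs) / len(pairs)

# Invented samples: in a clinical-trial reading, "strength exceeding stress"
# could stand for one procedure's response exceeding another's.
stress = [1.2, 0.8, 1.5, 1.0]
strength = [1.4, 1.6, 0.9, 2.0]
print(reliability(stress, strength))  # 0.75
```

This pairwise count is the Mann-Whitney-type estimator of P(Y > X); the paper's contribution is handling correlated observations, where such independence-based counts are no longer adequate.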
Modeling reliability measurement of interface on information system: Towards the forensic of rules
NASA Astrophysics Data System (ADS)
Nasution, M. K. M.; Sitompul, Darwin; Harahap, Marwan
2018-02-01
Today almost all machines depend on software. Software and hardware systems also depend on rules, that is, the procedures for their use. If a procedure or program can be reliably characterized using the concepts of graphs, logic, and probability, then regulatory strength can also be measured accordingly. Therefore, this paper initiates an enumeration model to measure the reliability of interfaces, based on the case of information systems governed by rules of use set by the relevant agencies. The enumeration model is obtained from software reliability calculation.
General Staining and Segmentation Procedures for High Content Imaging and Analysis.
Chambers, Kevin M; Mandavilli, Bhaskar S; Dolman, Nick J; Janes, Michael S
2018-01-01
Automated quantitative fluorescence microscopy, also known as high content imaging (HCI), is a rapidly growing analytical approach in cell biology. Because automated image analysis relies heavily on robust demarcation of cells and subcellular regions, reliable methods for labeling cells are a critical component of the HCI workflow. Labeling of cells for image segmentation is typically performed with fluorescent probes that bind DNA for nuclear-based cell demarcation or with those which react with proteins for image analysis based on whole cell staining. These reagents, along with instrument and software settings, play an important role in the successful segmentation of cells in a population for automated and quantitative image analysis. In this chapter, we describe standard procedures for labeling and image segmentation in both live and fixed cell samples. The chapter also provides troubleshooting guidelines for some of the common problems associated with these aspects of HCI.
10 CFR 712.21 - Office of Hearings and Appeals.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Office of Hearings and Appeals. 712.21 Section 712.21 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability Program Procedures § 712.21 Office of Hearings and Appeals. (a) The certification review hearing...
15 CFR 747.5 - SIRL application review process.
Code of Federal Regulations, 2010 CFR
2010-01-01
... establish the reliability of the proposed parties to the application, it may deny the application, or modify... potential impact of the proposed transaction on the security situation in Iraq; and (v) The reliability of... SIRL procedure and may seek authorization under standard license procedures. (c) Validity period. SIRLs...
Reliability verification of vehicle speed estimate method in forensic videos.
Kim, Jong-Hyuk; Oh, Won-Taek; Choi, Ji-Hun; Park, Jong-Chan
2018-06-01
In various types of traffic accidents, including car-to-car crashes, vehicle-pedestrian collisions, and hit-and-run accidents, driver overspeed is one of the critical issues of traffic accident analysis. Hence, analysis of the vehicle's speed at the moment of the accident is necessary. The present article proposes a vehicle speed estimate method (VSEM) that applies a virtual plane and a virtual reference line to a forensic video. The reliability of the VSEM was verified by comparing its results on videos of a test vehicle against speeds from a global positioning system (GPS)-based Vbox. The VSEM verified by these procedures was then applied to real traffic accident examples to evaluate its usability. Copyright © 2018 Elsevier B.V. All rights reserved.
Roberts, M A; Milich, R; Loney, J; Caputo, J
1981-09-01
The convergent and discriminant validities of three teacher rating scale measures of the traits of hyperactivity, aggression, and inattention were explored, using the multitrait-multimethod matrix approach of Campbell and Fiske (1959), as well as an analysis of variance procedure (Stanley, 1961). In the present study teachers rated children from their elementary school classrooms on the above traits. The results provided strong evidence for convergent validity. The data also indicated that these traits can be reliably differentiated by teachers, suggesting that research aimed at better understanding the unique contributions of hyperactivity, aggression, and inattention is warranted. The respective benefits of analyzing multitrait-multimethod matrices by employing the ANOVA procedure or by using the Campbell and Fiske (1959) criteria were discussed.
Low-thrust mission risk analysis, with application to a 1980 rendezvous with the comet Encke
NASA Technical Reports Server (NTRS)
Yen, C. L.; Smith, D. B.
1973-01-01
A computerized failure process simulation procedure is used to evaluate the risk in a solar electric space mission. The procedure uses currently available thrust-subsystem reliability data and performs approximate simulations of the thrust subsystem burn operation, the system failure processes, and the retargeting operations. The method is applied to assess the risks in carrying out a 1980 rendezvous mission to the comet Encke. Analysis of the results and evaluation of the effects of various risk factors on the mission show that system component failure rates are the limiting factors in attaining a high mission reliability. It is also shown that a well-designed trajectory and system operation mode can be used effectively to partially compensate for unreliable thruster performance.
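A failure-process simulation of this kind can be sketched in miniature: draw exponential failure times for each component and estimate the probability that all of them survive a given burn duration. The component failure rates and duration below are invented placeholders, not the study's thrust-subsystem data.

```python
import random

def mission_success_prob(rates, duration, trials=10000, seed=1):
    """Monte Carlo estimate of P(no component fails before `duration`),
    assuming independent exponential failure times with the given rates."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        # The simulated mission survives if every component's failure time
        # exceeds the burn duration.
        if all(rng.expovariate(r) > duration for r in rates):
            ok += 1
    return ok / trials

# Invented per-hour failure rates for three components over a 1000-hour burn;
# the analytic answer is exp(-(sum of rates) * duration) ~ 0.70.
p = mission_success_prob([1e-4, 5e-5, 2e-4], duration=1000.0)
print(round(p, 3))
```

A full risk simulation, as in the study, additionally models repairable states and retargeting rather than treating any single failure as mission loss.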
A survey of automated methods for sensemaking support
NASA Astrophysics Data System (ADS)
Llinas, James
2014-05-01
Complex, dynamic problems in general present a challenge for the design of analysis support systems and tools, largely because there is limited reliable a priori procedural knowledge descriptive of the dynamic processes in the environment. Problem domains that are non-cooperative or adversarial introduce added difficulties involving suboptimal observational data and/or data containing the effects of deception or covertness. The fundamental nature of analysis in these environments is based on composite approaches involving mining or foraging over the evidence, discovery and learning processes, and the synthesis of fragmented hypotheses; together, these can be labeled as sensemaking procedures. This paper reviews and analyzes the features, benefits, and limitations of a variety of automated techniques that offer possible support to sensemaking processes in these problem domains.
Post-staining electroblotting for efficient and reliable peptide blotting.
Lee, Der-Yen; Chang, Geen-Dong
2015-01-01
Post-staining electroblotting has been previously described to transfer Coomassie blue-stained proteins from polyacrylamide gel onto polyvinylidene difluoride (PVDF) membranes. Actually, stained peptides can also be efficiently and reliably transferred. Because of selective staining procedures for peptides and increased retention of stained peptides on the membrane, even peptides with molecular masses less than 2 kDa such as bacitracin and granuliberin R are transferred with satisfactory results. For comparison, post-staining electroblotting is about 16-fold more sensitive than the conventional electroblotting for visualization of insulin on the membrane. Therefore, the peptide blots become practicable and more accessible to further applications, e.g., blot overlay detection or immunoblotting analysis. In addition, the efficiency of peptide transfer is favorable for N-terminal sequence analysis. With this method, peptide blotting can be normalized for further analysis such as blot overlay assay, immunoblotting, and N-terminal sequencing for identification of peptide in crude or partially purified samples.
Ang, Rebecca P; Chong, Wan Har; Huan, Vivien S; Yeo, Lay See
2007-01-01
This article reports the development and initial validation of scores obtained from the Adolescent Concerns Measure (ACM), a scale which assesses concerns of Asian adolescent students. In Study 1, findings from exploratory factor analysis using 619 adolescents suggested a 24-item scale with four correlated factors--Family Concerns (9 items), Peer Concerns (5 items), Personal Concerns (6 items), and School Concerns (4 items). Initial estimates of convergent validity for ACM scores were also reported. The four-factor structure of ACM scores derived from Study 1 was confirmed via confirmatory factor analysis in Study 2 using a two-fold cross-validation procedure with a separate sample of 811 adolescents. Support was found for both the multidimensional and hierarchical models of adolescent concerns using the ACM. Internal consistency and test-retest reliability estimates were adequate for research purposes. ACM scores show promise as a reliable and potentially valid measure of Asian adolescents' concerns.
Optimizing Probability of Detection Point Estimate Demonstration
NASA Technical Reports Server (NTRS)
Koshti, Ajay M.
2017-01-01
Probability of detection (POD) analysis is used to assess the reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. Real flaws such as cracks and crack-like flaws must be detectable by these NDE methods, and a reliably detectable crack size is required for safe-life analysis of fracture-critical parts. The paper discusses optimizing probability of detection (POD) demonstration experiments using the point estimate method, which NASA uses for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. The optimization provides an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false calls (POF) while keeping the flaw sizes in the set as small as possible.
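The binomial logic behind the point estimate method can be sketched numerically. The following is an illustrative Python sketch, not NASA's mh1823 software; the function name and the 0.90 POD example are assumptions chosen to show why a clean 29-of-29 demonstration supports a 90% POD claim at 95% confidence.

```python
from math import comb

def prob_pass(pod: float, n: int = 29, max_misses: int = 0) -> float:
    """Probability of passing a demonstration that allows at most
    `max_misses` missed flaws out of n, given a true per-flaw POD."""
    return sum(comb(n, k) * (1 - pod) ** k * pod ** (n - k)
               for k in range(max_misses + 1))

# With the common 29-of-29 criterion, a procedure whose true POD is
# only 0.90 passes less than 5% of the time, which is why a clean
# 29/29 result demonstrates 90% POD at 95% confidence.
print(round(prob_pass(0.90), 4))  # 0.0471
```

Raising `n` or lowering `max_misses` tightens the demonstrated POD, at the cost of a lower probability of passing for a genuinely capable procedure.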
Foltran, Fabiana A; Silva, Luciana C C B; Sato, Tatiana O; Coury, Helenice J C G
2013-01-01
The recording of human movement is an essential requirement for biomechanical, clinical, and occupational analysis, allowing assessment of postural variation, occupational risks, and preventive programs in physical therapy and rehabilitation. The flexible electrogoniometer (EGM), considered a reliable and accurate device, is used for dynamic recordings of different joints. Despite these advantages, the EGM is susceptible to measurement errors known as crosstalk, of which there are two known types: crosstalk due to sensor rotation and inherent crosstalk. Correction procedures have been proposed to correct these errors; however, no study has combined both procedures in clinical measures of wrist movement with the aim of optimizing the correction. The objective was to evaluate the effects of mathematical correction procedures on: 1) crosstalk due to forearm rotation, 2) inherent sensor crosstalk, and 3) the combination of these two procedures. Forty-three healthy subjects had their maximum range of motion of wrist flexion/extension and ulnar/radial deviation recorded by EGM. The results were analyzed descriptively, and procedures were compared by differences. There was no significant difference in measurements before and after the application of correction procedures (P<0.05). Furthermore, the differences between the correction procedures were less than 5° in most cases, having little impact on the measurements. Considering the time-consuming data analysis, the specific technical knowledge involved, and the inefficient results, the correction procedures are not recommended for wrist recordings by EGM.
Bravo, G; Bragança, S; Arezes, P M; Molenbroek, J F M; Castellucci, H I
2018-05-22
Despite offering many benefits, direct manual anthropometric measurement methods can be problematic due to their vulnerability to measurement errors. The purpose of this literature review was to determine whether or not currently published anthropometric studies of school children, related to ergonomics, mentioned or evaluated the variables precision, reliability, or accuracy of the direct manual measurement method. Two bibliographic databases, and the bibliographic references of all the selected papers, were used to find relevant published papers in the fields considered in this study. Forty-six (46) studies met the criteria previously defined for this literature review. However, only ten (10) studies mentioned at least one of the analyzed variables, and none evaluated all of them; only reliability was assessed, by three papers. Moreover, regarding the factors that affect precision, reliability, and accuracy, the reviewed papers presented large differences. This was particularly clear in the instruments used for the measurements, which were not consistent across studies. Additionally, there was a clear lack of information regarding the evaluators' training and the procedures for anthropometric data collection, which are assumed to be the most important issues affecting precision, reliability, and accuracy. Based on this review, it was possible to conclude that the considered anthropometric studies had not focused their attention on the analysis of the precision, reliability, and accuracy of manual measurement methods. Hence, with the aim of avoiding measurement errors and misleading data, anthropometric studies should put more effort and care into testing measurement error and defining the procedures used to collect anthropometric data.
Trouli, Marianna N; Vernon, Howard T; Kakavelakis, Kyriakos N; Antonopoulou, Maria D; Paganas, Aristofanis N; Lionis, Christos D
2008-07-22
Neck pain is a highly prevalent condition resulting in major disability. Standard scales for measuring disability in patients with neck pain have a pivotal role in research and clinical settings. The Neck Disability Index (NDI) is a valid and reliable tool, designed to measure disability in activities of daily living due to neck pain. The purpose of our study was the translation and validation of the NDI in a Greek primary care population with neck complaints. The original version of the questionnaire was used. Based on international standards, the translation strategy comprised forward translations, reconciliation, backward translation and pre-testing steps. The validation procedure concerned the exploration of internal consistency (Cronbach alpha), test-retest reliability (Intraclass Correlation Coefficient, Bland and Altman method), construct validity (exploratory factor analysis) and responsiveness (Spearman correlation coefficient, Standard Error of Measurement and Minimal Detectable Change) of the questionnaire. Data quality was also assessed through completeness of data and floor/ceiling effects. The translation procedure resulted in the Greek modified version of the NDI, which was culturally adapted through the pre-testing phase. The validation procedure revealed a large amount of missing data due to low applicability, which were assessed with two methods. Floor or ceiling effects were not observed. Cronbach alpha was calculated as 0.85, which was interpreted as good internal consistency. The intraclass correlation coefficient was found to be 0.93 (95% CI 0.84-0.97), which was considered very good test-retest reliability. Factor analysis yielded one factor with Eigenvalue 4.48 explaining 44.77% of variance. The Spearman correlation coefficient (0.3; P = 0.02) revealed some relation between the change score in the NDI and the Global Rating of Change (GROC). The SEM and MDC were calculated as 0.64 and 1.78, respectively.
The Greek version of the NDI measures disability in patients with neck pain in a reliable, valid and responsive manner. It is considered a useful tool for research and clinical settings in Greek Primary Health Care.
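For readers unfamiliar with the internal-consistency statistic reported above, Cronbach's alpha can be computed from item-level scores with the classic variance formula. The sketch below is generic, with invented toy data, not the study's NDI dataset.

```python
def cronbach_alpha(items):
    """items: one list of scores per questionnaire item, respondents in the
    same order. alpha = k/(k-1) * (1 - sum(item variances) / var(totals))."""
    k, n = len(items), len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Toy data: three respondents, two perfectly consistent items -> alpha = 1.0
print(cronbach_alpha([[1, 2, 3], [1, 2, 3]]))  # 1.0
```

Values near 0.85, as reported for the Greek NDI, indicate that the items covary strongly relative to their individual variances.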
The reliability of a quality appraisal tool for studies of diagnostic reliability (QAREL).
Lucas, Nicholas; Macaskill, Petra; Irwig, Les; Moran, Robert; Rickards, Luke; Turner, Robin; Bogduk, Nikolai
2013-09-09
The aim of this project was to investigate the reliability of a new 11-item quality appraisal tool for studies of diagnostic reliability (QAREL). The tool was tested on studies reporting the reliability of any physical examination procedure. The reliability of physical examination is a challenging area to study given the complex testing procedures, the range of tests, and lack of procedural standardisation. Three reviewers used QAREL to independently rate 29 articles, comprising 30 studies, published during 2007. The articles were identified from a search of relevant databases using the following string: "Reproducibility of results (MeSH) OR reliability (t.w.) AND Physical examination (MeSH) OR physical examination (t.w.)." A total of 415 articles were retrieved and screened for inclusion. The reviewers undertook an independent trial assessment prior to data collection, followed by a general discussion about how to score each item. At no time did the reviewers discuss individual papers. Reliability was assessed for each item using multi-rater kappa (κ). Multi-rater reliability estimates ranged from κ = 0.27 to 0.92 across all items. Six items were recorded with good reliability (κ > 0.60), three with moderate reliability (κ = 0.41 - 0.60), and two with fair reliability (κ = 0.21 - 0.40). Raters found it difficult to agree about the spectrum of patients included in a study (Item 1) and the correct application and interpretation of the test (Item 10). In this study, we found that QAREL was a reliable assessment tool for studies of diagnostic reliability when raters agreed upon criteria for the interpretation of each item. Nine out of 11 items had good or moderate reliability, and two items achieved fair reliability. The heterogeneity in the tests included in this study may have resulted in an underestimation of the reliability of these two items. We discuss these and other factors that could affect our results and make recommendations for the use of QAREL.
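As an illustration of the multi-rater kappa statistic used in the QAREL study, the sketch below implements Fleiss' kappa, a standard multi-rater agreement measure; the toy rating matrix is invented, not QAREL data.

```python
def fleiss_kappa(ratings):
    """ratings[i][j] = number of raters who assigned item i to category j.
    Every row must sum to the same number of raters m (m >= 2)."""
    N, C = len(ratings), len(ratings[0])
    m = sum(ratings[0])
    # Marginal category proportions and per-item observed agreement.
    p_j = [sum(row[j] for row in ratings) / (N * m) for j in range(C)]
    P_i = [(sum(c * c for c in row) - m) / (m * (m - 1)) for row in ratings]
    P_bar = sum(P_i) / N          # mean observed agreement
    P_e = sum(p * p for p in p_j)  # chance-expected agreement
    return (P_bar - P_e) / (1 - P_e)

# Three raters, two items, two categories: perfect agreement -> kappa = 1
print(fleiss_kappa([[3, 0], [0, 3]]))  # 1.0
```

The study's benchmarks map onto this scale: kappa > 0.60 good, 0.41-0.60 moderate, 0.21-0.40 fair agreement.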
Some computational techniques for estimating human operator describing functions
NASA Technical Reports Server (NTRS)
Levison, W. H.
1986-01-01
Computational procedures for improving the reliability of human operator describing functions are described. Special attention is given to the estimation of standard errors associated with mean operator gain and phase shift as computed from an ensemble of experimental trials. This analysis pertains to experiments using sum-of-sines forcing functions. Both open-loop and closed-loop measurement environments are considered.
Individual Differences in Human Reliability Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeffrey C. Joe; Ronald L. Boring
2014-06-01
While human reliability analysis (HRA) methods include uncertainty in quantification, the nominal model of human error in HRA typically assumes that operator performance does not vary significantly when they are given the same initiating event, indicators, procedures, and training, and that any differences in operator performance are simply aleatory (i.e., random). While this assumption generally holds true when performing routine actions, variability in operator response has been observed in multiple studies, especially in complex situations that go beyond training and procedures. As such, complexity can lead to differences in operator performance (e.g., operator understanding and decision-making). Furthermore, psychological research has shown that there are a number of known antecedents (i.e., attributable causes) that consistently contribute to observable and systematically measurable (i.e., not random) differences in behavior. This paper reviews examples of individual differences taken from operational experience and the psychological literature. The impact of these differences in human behavior and their implications for HRA are then discussed. We propose that individual differences should not be treated as aleatory, but rather as epistemic. Ultimately, by understanding the sources of individual differences, it is possible to remove some epistemic uncertainty from analyses.
Remans, Tony; Keunen, Els; Bex, Geert Jan; Smeets, Karen; Vangronsveld, Jaco; Cuypers, Ann
2014-10-01
Reverse transcription-quantitative PCR (RT-qPCR) has been widely adopted to measure differences in mRNA levels; however, biological and technical variation strongly affects the accuracy of the reported differences. RT-qPCR specialists have warned that, unless researchers minimize this variability, they may report inaccurate differences and draw incorrect biological conclusions. The Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) guidelines describe procedures for conducting and reporting RT-qPCR experiments. The MIQE guidelines enable others to judge the reliability of reported results; however, a recent literature survey found low adherence to these guidelines. Additionally, even experiments that use appropriate procedures remain subject to individual variation that statistical methods cannot correct. For example, since ideal reference genes do not exist, the widely used method of normalizing RT-qPCR data to reference genes generates background noise that affects the accuracy of measured changes in mRNA levels. However, current RT-qPCR data reporting styles ignore this source of variation. In this commentary, we direct researchers to appropriate procedures, outline a method to present the remaining uncertainty in data accuracy, and propose an intuitive way to select reference genes to minimize uncertainty. Reporting the uncertainty in data accuracy also serves for quality assessment, enabling researchers and peer reviewers to confidently evaluate the reliability of gene expression data. © 2014 American Society of Plant Biologists. All rights reserved.
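As background for the normalization issue discussed above, the standard 2^-ddCt relative-quantification calculation (Livak and Schmittgen) is sketched below. The Ct values are invented for illustration, and the commentary's own proposals go beyond this simple single-reference scheme.

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method, which assumes ~100%
    amplification efficiency for both the target and the reference gene."""
    d_treated = ct_target_treated - ct_ref_treated  # normalize to reference
    d_control = ct_target_control - ct_ref_control
    return 2 ** -(d_treated - d_control)

# The target crosses threshold 2 cycles earlier (relative to the reference)
# in treated samples: roughly a 4-fold increase in mRNA level.
print(fold_change_ddct(24.0, 18.0, 26.0, 18.0))  # 4.0
```

Because the entire result hinges on the reference gene's stability, any variation in the reference Ct propagates directly into the reported fold change, which is exactly the background noise the commentary urges researchers to report.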
10 CFR 712.19 - Removal from HRP.
Code of Federal Regulations, 2010 CFR
2010-01-01
... OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability Program... immediately remove that individual from HRP duties pending a determination of the individual's reliability. A... HRP duties pending a determination of the individual's reliability is an interim, precautionary action...
Affordable MMICs for Air Force systems
NASA Astrophysics Data System (ADS)
Kemerley, Robert T.; Fayette, Daniel F.
1991-05-01
The paper deals with a program directed at demonstrating affordable MMIC chips - the microwave/mm-wave monolithic integrated circuit (MIMIC) program. Focus is placed on experiments involving the growth and characterization of III-V materials, and the design, fabrication, and evaluation of ICs in the 1 to 60 GHz frequency range, as well as efforts related to reliability testing, failure analysis, and the generation of qualified manufacturers list procedures for GaAs MMICs and modules. Attributes associated with GaAs-technology devices, quality, reliability, and performance in select environments are discussed, including the behavior of these structures over temperature ranges, electrostatic discharge sensitivity, and susceptibility to environmental stresses.
The pitfalls of hair analysis for toxicants in clinical practice: three case reports.
Frisch, Melissa; Schwartz, Brian S
2002-01-01
Hair analysis is used to assess exposure to heavy metals in patients presenting with nonspecific symptoms and is a commonly used procedure in patients referred to our clinic. We are frequently called on to evaluate patients who have health-related concerns as a result of hair analysis. Three patients first presented to outside physicians with nonspecific, multisystemic symptoms. A panel of analytes was measured in hair, and one or more values were interpreted as elevated. As a result of the hair analysis and other unconventional diagnostic tests, the patients presented to us believing they suffered from metal toxicity. In this paper we review the clinical efficacy of this procedure within the context of a patient population with somatic disorders and no clear risk factors for metal intoxication. We also review limitations of hair analysis in this setting; these limitations include patient factors such as low pretest probability of disease and test factors such as the lack of validation of analytic techniques, the inability to discern between exogenous contaminants and endogenous toxicants in hair, the variability of analytic procedures, low interlaboratory reliability, and the increased likelihood of false positive test results in the measurement of panels of analytes. PMID:11940463
Optimisation of nasal swab analysis by liquid scintillation counting.
Dai, Xiongxin; Liblong, Aaron; Kramer-Tremblay, Sheila; Priest, Nicholas; Li, Chunsheng
2012-06-01
When responding to an emergency radiological incident, rapid methods are needed to provide physicians and radiation protection personnel with an early estimate of the possible internal dose resulting from the inhalation of radionuclides. This information is needed so that appropriate medical treatment and radiological protection control procedures can be implemented. Nasal swab analysis, which employs swabs swiped inside a nostril followed by liquid scintillation counting of alpha and beta activity on the swab, can provide valuable information to quickly identify contamination of the affected population. In this study, various parameters (such as alpha/beta discrimination, swab materials, counting time, and volume of scintillation cocktail) were evaluated in order to optimise the effectiveness of the nasal swab analysis method. An improved nasal swab procedure was developed by replacing cotton swabs with polyurethane-tipped swabs. Liquid scintillation counting was performed using a Hidex 300SL counter with alpha/beta pulse shape discrimination capability. Results show that the new method is more reliable than existing methods using cotton swabs and effectively meets the analysis requirements for screening personnel in an emergency situation. This swab analysis procedure is also applicable to wipe tests of surface contamination, minimising the source self-absorption effect on liquid scintillation counting.
Reliability analysis applied to structural tests
NASA Technical Reports Server (NTRS)
Diamond, P.; Payne, A. O.
1972-01-01
The application of reliability theory to predict, from structural fatigue test data, the risk of failure of a structure under service conditions because its load-carrying capability is progressively reduced by the extension of a fatigue crack, is considered. The procedure is applicable to both safe-life and fail-safe structures and, for a prescribed safety level, it will enable an inspection procedure to be planned or, if inspection is not feasible, it will evaluate the life to replacement. The theory has been further developed to cope with the case of structures with initial cracks, such as can occur in modern high-strength materials which are susceptible to the formation of small flaws during the production process. The method has been applied to a structure of high-strength steel and the results are compared with those obtained by the current life estimation procedures. This has shown that the conventional methods can be unconservative in certain cases, depending on the characteristics of the structure and the design operating conditions. The suitability of the probabilistic approach to the interpretation of the results from full-scale fatigue testing of aircraft structures is discussed and the assumptions involved are examined.
Toward an explicit analysis of generalization: A stimulus control interpretation
Kirby, Kimberly C.; Bickel, Warren K.
1988-01-01
Producing generality of treatment effects to new settings has been a critical concern for applied behavior analysts, but a systematic and reliable means of producing generality has yet to be provided. We argue that the principles of stimulus control and reinforcement underlie the production of most generalized effects; therefore, we suggest interpreting generalization programming in terms of stimulus control. The generalization programming procedures identified by Stokes and Baer (1977) are discussed in terms of both the stimulus control tactics explicitly identified and those that may be operating but are not explicitly identified. Our interpretation clarifies the critical components of Stokes and Baer's procedures and places greater emphasis on planning for generalization as a part of training procedures. PMID:22478006
Contingency interaction analysis in psychotherapy.
Canfield, M L; Walker, W R; Brown, L G
1991-02-01
This article introduces (a) a computerized coding procedure that rates words and utterances in terms of emotion, cognition, and contract and (b) a contingency method of analyzing verbal interactions. Using transcripts of sessions conducted by 3 master therapists with 1 client, the rating procedure and contingency correlation analyses supported the study's hypotheses. Therapists' utterances were characterized by significantly different amounts of emotion, cognition, and contracts, indicating that communication styles varied in the relative emphasis placed on these attributes. Differences suggest that the therapists responded differently to emotional, cognitive, and contract utterances and that the client's responses were different across the 3 therapist interviews. Split halves of the interviews within therapists and within client sessions were not different, providing further evidence of reliability of the coding and contingency procedures.
ERIC Educational Resources Information Center
Meyer, J. Patrick; Liu, Xiang; Mashburn, Andrew J.
2014-01-01
Researchers often use generalizability theory to estimate relative error variance and reliability in teaching observation measures. They also use it to plan future studies and design the best possible measurement procedures. However, designing the best possible measurement procedure comes at a cost, and researchers must stay within their budget…
Launch vehicle systems design analysis
NASA Technical Reports Server (NTRS)
Ryan, Robert; Verderaime, V.
1993-01-01
Current launch vehicle design emphasis is on low life-cycle cost. This paper applies total quality management (TQM) principles to a conventional systems design analysis process to provide low-cost, high-reliability designs. Suggested TQM techniques include Steward's systems information flow matrix method, quality leverage principle, quality through robustness and function deployment, Pareto's principle, Pugh's selection and enhancement criteria, and other design process procedures. TQM quality performance at least-cost can be realized through competent concurrent engineering teams and brilliance of their technical leadership.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levine, R.H.
1993-12-01
A variety of approaches has been used in the past to assess the environmental impact of anthropogenic contaminants. One reliable index for aquatic environments is the analysis of diatom species distribution; the focus in this case being on the Savannah River. The completed objectives of this study were: (A) the development and use of procedures for measuring diatom distribution in the water column and (B) the development and evaluation of sediment sampling methods for retrospective analysis.
Object Segmentation and Ground Truth in 3D Embryonic Imaging.
Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C
2016-01-01
Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets.
Use of Landsat data to predict the trophic state of Minnesota lakes
NASA Technical Reports Server (NTRS)
Lillesand, T. M.; Johnson, W. L.; Deuell, R. L.; Lindstrom, O. M.; Meisner, D. E.
1983-01-01
Near-concurrent Landsat Multispectral Scanner (MSS) and ground data were obtained for 60 lakes distributed in two Landsat scene areas. The ground data included measurement of Secchi disk depth, chlorophyll-a, total phosphorus, turbidity, color, and total nitrogen, as well as Carlson Trophic State Index (TSI) values derived from the first three parameters. The Landsat data best correlated with the TSI values. Prediction models were developed to classify some 100 'test' lakes appearing in the two analysis scenes on the basis of TSI estimates. Clouds, wind, poor image data, small lake size, and shallow lake depth caused some problems in lake TSI prediction. Overall, however, the Landsat-predicted TSI estimates were judged to be very reliable for the Secchi-derived TSI estimation, moderately reliable for prediction of the chlorophyll-a TSI, and unreliable for the phosphorus value. Numerous Landsat data extraction procedures were compared, and the success of the Landsat TSI prediction models was a strong function of the procedure employed.
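For reference, the Carlson TSI values mentioned above are computed from Secchi depth, chlorophyll-a, and total phosphorus with simple logarithmic formulas. The sketch below uses the equations from Carlson's 1977 index, which this abstract does not restate, so treat the constants as a standard assumption rather than this study's calibration.

```python
from math import log

def tsi_secchi(sd_m: float) -> float:
    """Carlson TSI from Secchi disk depth in meters."""
    return 60.0 - 14.41 * log(sd_m)

def tsi_chla(chl_ug_l: float) -> float:
    """Carlson TSI from chlorophyll-a in micrograms per liter."""
    return 9.81 * log(chl_ug_l) + 30.6

def tsi_tp(tp_ug_l: float) -> float:
    """Carlson TSI from total phosphorus in micrograms per liter."""
    return 14.42 * log(tp_ug_l) + 4.15

# A 1 m Secchi depth corresponds to TSI = 60 (eutrophic range);
# clearer water (deeper Secchi depth) gives a lower index.
print(tsi_secchi(1.0))  # 60.0
```

All three scales are designed so that, for a typical lake, a doubling of algal biomass raises the index by about 10 units, which is what makes the three estimates comparable.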
Di Nardo, Francesco; Mengoni, Michele; Morettini, Micaela
2013-05-01
The present study provides a novel MATLAB-based parameter estimation procedure for the individual assessment of the hepatic insulin degradation (HID) process from standard frequently-sampled intravenous glucose tolerance test (FSIGTT) data. Direct access to the source code, offered by MATLAB, enabled us to design an optimization procedure based on the alternating use of Gauss-Newton's and Levenberg-Marquardt's algorithms, which assures full convergence of the process and contains computational time. Reliability was tested by direct comparison with the application, in eighteen non-diabetic subjects, of the well-known kinetic analysis software package SAAM II, and by application to different data. Agreement between MATLAB and SAAM II was supported by intraclass correlation coefficients ≥0.73; no significant differences between corresponding mean parameter estimates and predictions of HID rate; and consistent residual analysis. Moreover, the MATLAB optimization procedure resulted in a significant 51% reduction of the CV% for the parameter worst estimated by SAAM II, and maintained all model-parameter CV% <20%. In conclusion, our MATLAB-based procedure is suggested as a suitable tool for the individual assessment of the HID process. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
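To illustrate the kind of damped least-squares optimization the abstract describes, here is a minimal Levenberg-Marquardt loop for a toy two-parameter exponential model. The model, starting values, and fixed damping factor are illustrative assumptions, not the authors' HID model or their alternating Gauss-Newton/Levenberg-Marquardt scheme.

```python
import math

def lm_fit(xs, ys, a, b, lam=1e-3, iters=200):
    """Minimal Levenberg-Marquardt fit of y = a*exp(-b*x).
    lam -> 0 recovers a plain Gauss-Newton step; here lam is kept fixed
    for simplicity instead of being adapted at each iteration."""
    for _ in range(iters):
        r = [y - a * math.exp(-b * x) for x, y in zip(xs, ys)]
        Ja = [math.exp(-b * x) for x in xs]            # df/da
        Jb = [-a * x * math.exp(-b * x) for x in xs]   # df/db
        Saa = sum(j * j for j in Ja)
        Sbb = sum(j * j for j in Jb)
        Sab = sum(p * q for p, q in zip(Ja, Jb))
        ga = sum(j * e for j, e in zip(Ja, r))
        gb = sum(j * e for j, e in zip(Jb, r))
        # Damped normal equations, solved directly in the 2x2 case.
        Aaa, Abb = Saa * (1 + lam), Sbb * (1 + lam)
        det = Aaa * Abb - Sab * Sab
        a += (ga * Abb - gb * Sab) / det
        b += (Aaa * gb - Sab * ga) / det
    return a, b

xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ys = [2.0 * math.exp(-0.5 * x) for x in xs]  # noise-free synthetic data
a, b = lm_fit(xs, ys, a=1.8, b=0.6)          # recovers a ~ 2.0, b ~ 0.5
```

Alternating with an undamped Gauss-Newton step, as the authors do, speeds up the final convergence while the damped step keeps early iterations from diverging.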
Cross-Cultural Validation of the Patient Perception of Integrated Care Survey.
Tietschert, Maike V; Angeli, Federica; van Raak, Arno J A; Ruwaard, Dirk; Singer, Sara J
2017-07-20
To test the cross-cultural validity of the U.S. Patient Perception of Integrated Care (PPIC) Survey in a Dutch sample using a standardized procedure. Primary data collected from patients of five primary care centers in the south of the Netherlands, through survey research from 2014 to 2015. Cross-sectional data collected from patients who saw multiple health care providers during 6 months preceding data collection. The PPIC survey includes 59 questions that measure patient perceived care integration across providers, settings, and time. Data analysis followed a standardized procedure guiding data preparation, psychometric analysis, and included invariance testing with the U.S. dataset. Latent scale structures of the Dutch and U.S. survey were highly comparable. Factor "Integration with specialist" had lower reliability scores and noninvariance. For the remaining factors, internal consistency and invariance estimates were strong. The standardized cross-cultural validation procedure produced strong support for comparable psychometric characteristics of the Dutch and U.S. surveys. Future research should examine the usability of the proposed procedure for contexts with greater cultural differences. © Health Research and Educational Trust.
Babor, Thomas F; Xuan, Ziming; Proctor, Dwayne
2008-03-01
The purposes of this study were to develop reliable procedures to monitor the content of alcohol advertisements broadcast on television and in other media, and to detect violations of the content guidelines of the alcohol industry's self-regulation codes. A set of rating-scale items was developed to measure the content guidelines of the 1997 version of the U.S. Beer Institute Code. Six focus groups were conducted with 60 college students to evaluate the face validity of the items and the feasibility of the procedure. A test-retest reliability study was then conducted with 74 participants, who rated five alcohol advertisements on two occasions separated by 1 week. Average correlations across all advertisements using three reliability statistics (r, rho, and kappa) were almost all statistically significant and the kappas were good for most items, which indicated high test-retest agreement. We also found high interrater reliabilities (intraclass correlations) among raters for item-level and guideline-level violations, indicating that regardless of the specific item, raters were consistent in their general evaluations of the advertisements. Naïve (untrained) raters can provide consistent (reliable) ratings of the main content guidelines proposed in the U.S. Beer Institute Code. The rating procedure may have future applications for monitoring compliance with industry self-regulation codes and for conducting research on the ways in which alcohol advertisements are perceived by young adults and other vulnerable populations.
Towards an integrated quality control procedure for eddy-covariance data
NASA Astrophysics Data System (ADS)
Vitale, Domenico; Papale, Dario
2017-04-01
The eddy-covariance technique is nowadays the most reliable and direct way to calculate the main surface fluxes: sensible and latent heat and net ecosystem exchange, the last being the difference between the CO2 assimilated by photosynthesis and the CO2 released to the atmosphere by ecosystem respiration. Despite improvements in instrument accuracy and software, the technique is not applicable under conditions that violate its instrumental requirements or its physical assumptions, chiefly the presence of well-developed, stationary turbulence. Under such conditions the calculated fluxes are unreliable and must be flagged and discarded. To detect these unavoidable "bad" fluxes and build datasets of the highest quality, several tests applied both to high-frequency (10-20 Hz) raw data and to half-hourly time series have been developed over the years. Nevertheless, there is a growing need for a standardized quality control procedure suitable not only for the analysis of long-term data but also for near-real-time processing. In this paper, we review established quality assessment procedures and present an innovative quality control strategy that integrates the existing consolidated procedures with robust, advanced statistical tests better suited to time series data. The performance of the proposed strategy is evaluated both on simulated data and on eddy-covariance data distributed by the ICOS research infrastructure. We conclude that the proposed strategy flags and excludes unrealistic fluxes while remaining reproducible and retaining the largest possible amount of high-quality data.
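One of the consolidated half-hourly tests such strategies build on is the classic stationarity check, in which the covariance over a full averaging period is compared with the mean covariance of its subrecords. The sketch below is a generic illustration of that idea (in the style of the Foken-Wichura test, with an assumed 30% threshold and six subrecords); it is not the specific strategy proposed in the paper:

```python
def stationarity_flag(w, c, n_sub=6, threshold=0.30):
    """Foken-Wichura-style stationarity test for one averaging period.

    w, c: equally sampled vertical-wind and scalar series. The covariance of
    the whole period is compared with the mean covariance of n_sub subrecords;
    a large relative difference indicates non-stationarity. Assumes the
    whole-period covariance is not (near) zero.
    """
    def cov(x, y):
        mx, my = sum(x) / len(x), sum(y) / len(y)
        return sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)

    whole = cov(w, c)
    size = len(w) // n_sub
    subs = [cov(w[i*size:(i+1)*size], c[i*size:(i+1)*size]) for i in range(n_sub)]
    rn = abs((sum(subs) / n_sub - whole) / whole)   # relative non-stationarity
    return rn > threshold                            # True -> flag the flux
```

A trending record is flagged because the subrecord covariances (computed with local means) differ strongly from the whole-record covariance, which absorbs the trend.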
Assessment of NDE Reliability Data
NASA Technical Reports Server (NTRS)
Yee, B. G. W.; Chang, F. H.; Couchman, J. C.; Lemon, G. H.; Packman, P. F.
1976-01-01
Twenty sets of relevant Nondestructive Evaluation (NDE) reliability data have been identified, collected, compiled, and categorized. A criterion for the selection of data for statistical analysis considerations has been formulated. A model to grade the quality and validity of the data sets has been developed. Data input formats, which record the pertinent parameters of the defect/specimen and inspection procedures, have been formulated for each NDE method. A comprehensive computer program has been written to calculate the probability of flaw detection at several confidence levels by the binomial distribution. This program also selects the desired data sets for pooling and tests the statistical pooling criteria before calculating the composite detection reliability. Probability of detection curves at 95 and 50 percent confidence levels have been plotted for individual sets of relevant data as well as for several sets of merged data with common sets of NDE parameters.
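The binomial calculation described, probability of flaw detection with a one-sided lower confidence bound, can be sketched as follows. This is a generic Clopper-Pearson-style computation, not the original program's code, and it omits the report's data pooling and grading logic:

```python
from math import comb

def pod_lower_bound(detections, trials, confidence=0.95, tol=1e-10):
    """One-sided lower confidence bound on probability of detection (POD),
    from the binomial distribution (Clopper-Pearson style, by bisection)."""
    if detections == 0:
        return 0.0

    def tail(p):  # P(X >= detections | trials, p), increasing in p
        return sum(comb(trials, i) * p**i * (1 - p)**(trials - i)
                   for i in range(detections, trials + 1))

    lo, hi = 0.0, 1.0
    while hi - lo > tol:          # solve tail(p) = 1 - confidence
        mid = (lo + hi) / 2
        if tail(mid) < 1 - confidence:
            lo = mid
        else:
            hi = mid
    return lo
```

For example, 29 detections in 29 trials yields a 95%-confidence lower bound on the POD just above 0.90, the familiar "90/95" criterion used in NDE reliability work.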
Beard, J D; Marriott, J; Purdie, H; Crossley, J
2011-01-01
To compare user satisfaction and acceptability, reliability and validity of three different methods of assessing the surgical skills of trainees by direct observation in the operating theatre across a range of different surgical specialties and index procedures. A 2-year prospective, observational study in the operating theatres of three teaching hospitals in Sheffield. The assessment methods were procedure-based assessment (PBA), Objective Structured Assessment of Technical Skills (OSATS) and Non-technical Skills for Surgeons (NOTSS). The specialties were obstetrics and gynaecology (O&G) and upper gastrointestinal, colorectal, cardiac, vascular and orthopaedic surgery. Two to four typical index procedures were selected from each specialty. Surgical trainees were directly observed performing typical index procedures and assessed using a combination of two of the three methods (OSATS or PBA and NOTSS for O&G, PBA and NOTSS for the other specialties) by the consultant clinical supervisor for the case and the anaesthetist and/or scrub nurse, as well as one or more independent assessors from the research team. Information on user satisfaction and acceptability of each assessment method from both assessor and trainee perspectives was obtained from structured questionnaires. The reliability of each method was measured using generalisability theory. Aspects of validity included the internal structure of each tool and correlation between tools, construct validity, predictive validity, interprocedural differences, the effect of assessor designation and the effect of assessment on performance. Of the 558 patients who were consented, a total of 437 (78%) cases were included in the study: 51 consultant clinical supervisors, 56 anaesthetists, 39 nurses, 2 surgical care practitioners and 4 independent assessors provided 1635 assessments on 85 trainees undertaking the 437 cases. A total of 749 PBAs, 695 NOTSS and 191 OSATSs were performed. 
Non-O&G clinical supervisors and trainees provided mixed, but predominantly positive, responses about a range of applications of PBA. Most felt that PBA was important in surgical education, and would use it again in the future and did not feel that it added time to the operating list. The overall satisfaction of O&G clinical supervisors and trainees with OSATS was not as high, and a majority of those who used both preferred PBA. A majority of anaesthetists and nurses felt that NOTSS allowed them to rate interpersonal skills (communication, teamwork and leadership) more easily than cognitive skills (situation awareness and decision-making), that it had formative value and that it was a valuable adjunct to the assessment of technical skills. PBA demonstrated high reliability (G > 0.8 for only three assessor judgements on the same index procedure). OSATS had lower reliability (G > 0.8 for five assessor judgements on the same index procedure). Both were less reliable on a mix of procedures because of strong procedure-specific factors. A direct comparison of PBA between O&G and non-O&G cases showed a striking difference in reliability. Within O&G, a good level of reliability (G > 0.8) could not be obtained using a feasible number of assessments. Conversely, the reliability within non-O&G cases was exceptionally high, with only two assessor judgements being required. The reasons for this difference probably include the more summative purpose of assessment in O&G and the much higher proportion of O&G trainees in this study with training concerns (42% vs 4%). The reliability of NOTSS was lower than that for PBA. Reliability for the same procedure (G > 0.8) required six assessor judgements. However, as procedure-specific factors exerted a lesser influence on NOTSS, reliability on a mix of procedures could be achieved using only eight assessor judgements. NOTSS also demonstrated a valid internal structure. 
The strongest correlations between NOTSS and PBA or OSATS were in the 'decision-making' domain. PBA and NOTSS showed better construct validity than OSATS, the year of training and the number of recent index procedures performed being significant independent predictors of performance. There was little variation in scoring between different procedures or different designations of assessor. The results suggest that PBA is a reliable and acceptable method of assessing surgical skills, with good construct validity. Specialties that use OSATS may wish to consider changing the design or switching to PBA. Whatever workplace-based assessment method is used, the purpose, timing and frequency of assessment require detailed guidance. NOTSS is a promising tool for the assessment of non-technical skills, and surgical specialties may wish to consider its inclusion in their assessment framework. Further research is required into the use of health-care professionals other than consultant surgeons to assess trainees, the relationship between performance and experience, the educational impact of assessment and the additional value of video recording.
Quality management for space systems in ISRO
NASA Astrophysics Data System (ADS)
Satish, S.; Selva Raju, S.; Nanjunda Swamy, T. S.; Kulkarni, P. L.
2009-11-01
In a little over four decades, the Indian Space Program has carved a niche for itself with a unique, application-driven program oriented towards national development. The end-to-end capability approach of the country's space projects calls for innovative practices and procedures in assuring the quality and reliability of space systems. The System Reliability (SR) effort initiated at the start of a project continues during its entire life cycle, encompassing design, development, realisation, assembly, testing, integration and launch. Even after launch, SR groups participate in the on-orbit evaluation of transponders in communication satellites and camera systems in remote sensing satellites. SR groups play a major role in identifying, evaluating and inculcating quality practices in the work centres that fabricate the mechanical, electronic and propulsion systems required for Indian Space Research Organisation (ISRO) launch vehicle and spacecraft projects. Reliability analysis activities such as prediction, assessment and demonstration, as well as de-rating analysis, Failure Mode Effects and Criticality Analysis (FMECA) and worst-case analysis, are also carried out by SR groups during the various stages of project realisation. These activities give project management the basis for techno-managerial decisions that ensure the required reliability goals are met. Extensive test facilities catering to the needs of the space program have been set up. A system has also been established for consolidating the experience and expertise gained into standards, called product assurance specifications, for use in all ISRO centres.
NASA Astrophysics Data System (ADS)
Ha, Taesung
A probabilistic risk assessment (PRA) was conducted for a loss of coolant accident (LOCA) in the McMaster Nuclear Reactor (MNR). A level 1 PRA was completed, including event sequence modeling, system modeling, and quantification. To support the quantification of the accident sequences identified, data analysis using the Bayesian method and human reliability analysis (HRA) using the accident sequence evaluation procedure (ASEP) approach were performed. Since human performance in research reactors differs significantly from that in power reactors, a time-oriented HRA model (a reliability physics model) was applied to estimate the human error probability (HEP) of the core relocation. This model is based on two competing random variables: phenomenological time and performance time. Response surface and direct Monte Carlo simulation with Latin Hypercube sampling were applied to estimate the phenomenological time, whereas the performance time was obtained from interviews with operators. An appropriate probability distribution for the phenomenological time was assigned by statistical goodness-of-fit tests. The HEP for the core relocation was then estimated from these two competing quantities, and the sensitivity of each probability distribution in the human reliability estimate was investigated. To quantify the uncertainty in the predicted HEPs, a Bayesian approach was selected for its capability of incorporating uncertainties in both the model itself and its parameters. The HEP from the current time-oriented model was compared with that from the ASEP approach, and both results were used to evaluate the sensitivity of alternative human reliability modeling for the manual core relocation in the LOCA risk model. This exercise demonstrated the applicability of a reliability physics model supplemented with a Bayesian approach for modeling human reliability, and its usefulness for quantifying model uncertainty as a sensitivity analysis in the PRA model.
The Scorer Reliability of Self-Scored Interest Inventories.
ERIC Educational Resources Information Center
O'Shea, Arthur J.; Harrington, Thomas F.
1980-01-01
Describes the procedures the authors of the System for Career Decision-Making (CDM) followed in establishing client scoring reliability. Authors recommend that manuals of self-scored inventories provide data establishing scorer reliability, that scoring be supervised, and that APGA test standards deal directly with scorer reliability. (Author)
18 CFR 39.6 - Conflict of a Reliability Standard with a Commission Order.
Code of Federal Regulations, 2010 CFR
2010-04-01
... FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS UNDER THE FEDERAL POWER ACT RULES CONCERNING CERTIFICATION OF THE ELECTRIC RELIABILITY ORGANIZATION; AND PROCEDURES FOR THE ESTABLISHMENT, APPROVAL, AND ENFORCEMENT OF ELECTRIC RELIABILITY STANDARDS § 39.6 Conflict of a Reliability Standard with...
ERIC Educational Resources Information Center
Mashburn, Andrew J.; Meyer, J. Patrick; Allen, Joseph P.; Pianta, Robert C.
2014-01-01
Observational methods are increasingly being used in classrooms to evaluate the quality of teaching. Operational procedures for observing teachers are somewhat arbitrary in existing measures and vary across different instruments. To study the effect of different observation procedures on score reliability and validity, we conducted an experimental…
ERIC Educational Resources Information Center
Byars, Alvin Gregg
The objectives of this investigation are to develop, describe, assess, and demonstrate procedures for constructing mastery tests to minimize errors of classification and to maximize decision reliability. The guidelines are based on conditions where item exchangeability is a reasonable assumption and the test constructor can control the number of…
Davis, J.C.
2000-01-01
Geologists may feel that geological data are not amenable to statistical analysis, or at best require specialized approaches such as nonparametric statistics and geostatistics. However, there are many circumstances, particularly in systematic studies conducted for environmental or regulatory purposes, where traditional parametric statistical procedures can be beneficial. An example is the application of analysis of variance to data collected in an annual program of measuring groundwater levels in Kansas. Influences such as well conditions, operator effects, and use of the water can be assessed and wells that yield less reliable measurements can be identified. Such statistical studies have resulted in yearly improvements in the quality and reliability of the collected hydrologic data. Similar benefits may be achieved in other geological studies by the appropriate use of classical statistical tools.
Bossier, Han; Seurinck, Ruth; Kühn, Simone; Banaschewski, Tobias; Barker, Gareth J.; Bokde, Arun L. W.; Martinot, Jean-Luc; Lemaitre, Herve; Paus, Tomáš; Millenet, Sabina; Moerkerke, Beatrijs
2018-01-01
Given the increasing number of neuroimaging studies, there is a growing need to summarize published results. Coordinate-based meta-analyses use the locations of statistically significant local maxima, possibly together with the associated effect sizes, to aggregate studies. In this paper, we investigate the influence of key characteristics of a coordinate-based meta-analysis on (1) the balance between false and true positives and (2) the activation reliability of the outcome of a coordinate-based meta-analysis. More specifically, we consider the influence of the chosen group-level model at the study level [fixed effects, ordinary least squares (OLS), or mixed effects models], the type of coordinate-based meta-analysis [Activation Likelihood Estimation (ALE), which uses only peak locations, versus fixed effects and random effects meta-analysis, which take into account both peak location and height], and the number of studies included in the analysis (from 10 to 35). To do this, we apply a resampling scheme to a large dataset (N = 1,400) to create a test condition and compare this with an independent evaluation condition. The test condition corresponds to subsampling participants into studies and combining these using meta-analyses. The evaluation condition corresponds to a high-powered group analysis. We observe the best performance when using mixed effects models in individual studies combined with a random effects meta-analysis. Moreover, performance increases with the number of studies included in the meta-analysis. When peak height is not taken into consideration, we show that the popular ALE procedure is a good alternative in terms of the balance between type I and II errors, although it requires more studies than the other procedures to achieve comparable activation reliability. Finally, we discuss the differences, interpretations, and limitations of our results. PMID:29403344
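The random effects meta-analysis referred to above, which weights each study by its within- plus between-study variance, is commonly computed with the DerSimonian-Laird estimator. The sketch below is a generic illustration of that estimator applied to per-study effect sizes and variances; it is not the authors' neuroimaging pipeline:

```python
def dersimonian_laird(effects, variances):
    """Random-effects pooling of per-study effect sizes (DerSimonian-Laird)."""
    k = len(effects)
    w = [1 / v for v in variances]                      # fixed-effects weights
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)                  # between-study variance
    w_re = [1 / (v + tau2) for v in variances]          # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    return pooled, tau2
```

When the studies are homogeneous (Q below its degrees of freedom), tau-squared is zero and the estimate collapses to the fixed-effects result.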
Measurements of 42 Wide CPM Pairs with a CCD
NASA Astrophysics Data System (ADS)
Harshaw, Richard
2015-11-01
This paper addresses the use of a Skyris 618C color CCD camera as a means of obtaining data for the measurement of wide common proper motion (CPM) stars. The equipment setup is described and the data collection procedure outlined. Results of the measures of 42 CPM stars are presented, showing that the Skyris is a reliable device for the measurement of double stars.
NASA Astrophysics Data System (ADS)
Zolfaghari, M. R.; Ajamy, A.; Asgarian, B.
2015-12-01
The primary goal of seismic reassessment procedures in oil platform codes is to determine the reliability of a platform under extreme earthquake loading. In this paper, a simplified method is therefore proposed to assess the seismic performance of existing jacket-type offshore platforms (JTOPs) over the full range from near-elastic response to global collapse. The simplified method exploits the good agreement between the static pushover (SPO) curve and the entire summarized interaction incremental dynamic analysis (CI-IDA) curve of the platform. Although the CI-IDA method offers better understanding and better modelling of the phenomenon, it is a time-consuming and challenging task. To overcome these challenges, the simplified procedure, a fast and accurate approach based on SPO analysis, is introduced. An existing JTOP in the Persian Gulf is then presented to illustrate the procedure, and finally a comparison is made between the simplified method and the CI-IDA results. The simplified method is informative and practical for current engineering purposes: it predicts seismic performance from elasticity to global dynamic instability with reasonable accuracy and little computational effort.
A rainwater harvesting system reliability model based on nonparametric stochastic rainfall generator
NASA Astrophysics Data System (ADS)
Basinger, Matt; Montalto, Franco; Lall, Upmanu
2010-10-01
The reliability with which harvested rainwater can be used for flushing toilets, irrigating gardens, and topping off air-conditioning units serving multifamily residential buildings in New York City is assessed using a new rainwater harvesting (RWH) system reliability model. Although demonstrated with a specific case study, the model is portable because it is based on a nonparametric rainfall generation procedure utilizing a bootstrapped Markov chain. Precipitation occurrence is simulated using transition probabilities derived for each day of the year from the historical probability of wet- and dry-day state changes. Precipitation amounts are selected from a matrix of historical values within a moving 15-day window centered on the target day. RWH system reliability is determined for user-specified catchment area and tank volume ranges using precipitation ensembles generated with the described stochastic procedure. The reliability with which NYC backyard gardens can be irrigated and air-conditioning units supplied with water harvested from local roofs exceeds 80% and 90%, respectively, for the entire range of catchment areas and tank volumes considered in the analysis. For RWH systems installed on the most common rooftop catchment areas found in NYC (51-75 m²), toilet-flushing demand can be met with 7-40% reliability, the lower end of the range representing buildings with high-flow toilets and no storage elements, and the upper end representing buildings with low-flow fixtures and storage tanks of up to 5 m³. When the reliability curves developed are used to size RWH systems to flush the low-flow toilets of all multifamily buildings in a typical residential neighborhood in the Bronx, rooftop runoff inputs to the sewer system are reduced by approximately 28% over an average rainfall year, and potable water demand is reduced by approximately 53%.
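The occurrence/amount structure of such a rainfall generator can be sketched as follows. This is a simplified illustration of the described idea (day-specific wet/dry transition probabilities plus resampling of amounts from a moving 15-day historical window); the function and variable names, and the exact pooling of the window, are assumptions rather than the authors' implementation:

```python
import random

def simulate_precip(historical, n_days=365, window=7, seed=1):
    """Nonparametric daily rainfall generator (bootstrapped Markov-chain sketch).

    historical: list of years, each a list of 365 daily depths (mm).
    Occurrence follows the historical wet/dry transition frequency for the
    calendar day; wet-day amounts are resampled from the +/-window days around
    the target day.
    """
    rng = random.Random(seed)
    series, wet_prev = [], False
    for day in range(n_days):
        # pool historical (yesterday, today) pairs inside the moving window
        pool = [(yr[(day + off - 1) % 365], yr[(day + off) % 365])
                for yr in historical for off in range(-window, window + 1)]
        same_state = [today for prev, today in pool if (prev > 0) == wet_prev]
        if not same_state:                     # no matching history: assume dry
            series.append(0.0)
            wet_prev = False
            continue
        wet_frac = sum(t > 0 for t in same_state) / len(same_state)
        if rng.random() < wet_frac:            # wet day: bootstrap an amount
            series.append(rng.choice([t for t in same_state if t > 0]))
            wet_prev = True
        else:
            series.append(0.0)
            wet_prev = False
    return series
```

Because both the transition probabilities and the amounts are resampled from observations, the generator reproduces local seasonality without fitting a parametric distribution.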
NASA Astrophysics Data System (ADS)
Doležel, Jiří; Novák, Drahomír; Petrů, Jan
2017-09-01
Transportation routes for oversize and excessive loads are currently planned to ensure the transit of the vehicle through critical points on the road, such as level intersections and bridges. This article presents a comprehensive procedure for determining the reliability and load-bearing capacity of existing highway and road bridges using advanced reliability analysis methods based on Monte Carlo-type simulation techniques combined with nonlinear finite element analysis. The safety index, as described in current structural design standards such as ISO and the Eurocodes, is taken as the main criterion of the reliability level of existing structures. As an example, the load-bearing capacity of a 60-year-old single-span slab bridge made of precast prestressed concrete girders is determined for the ultimate and serviceability limit states. The structure's design load capacity was estimated by fully probabilistic nonlinear finite element analysis using the Latin Hypercube Sampling (LHS) simulation technique. Load-bearing capacity values based on the fully probabilistic analysis are compared with those estimated by deterministic methods for a critical section of the most heavily loaded girders.
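Latin Hypercube Sampling, used here to make the fully probabilistic nonlinear analysis affordable, stratifies each random variable so that far fewer simulations are needed than with plain Monte Carlo. A minimal generic sketch on the unit hypercube (uniform margins only; the mapping to the actual material and load distributions is omitted):

```python
import random

def latin_hypercube(n_samples, n_dims, seed=42):
    """Latin Hypercube sample on the unit hypercube: each dimension's [0,1)
    range is split into n_samples strata, and each stratum is sampled exactly
    once per dimension."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_dims):
        perm = list(range(n_samples))
        rng.shuffle(perm)                      # random pairing of strata
        columns.append([(p + rng.random()) / n_samples for p in perm])
    return [list(point) for point in zip(*columns)]   # n_samples x n_dims
```

Each marginal is guaranteed to cover its whole range evenly, which is why LHS estimates of response statistics converge with far fewer nonlinear FEM runs.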
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blaut, Arkadiusz; Babak, Stanislav; Krolak, Andrzej
We present data analysis methods used in the detection and estimation of parameters of gravitational-wave signals from white dwarf binaries in the mock LISA data challenge. Our main focus is on the analysis of challenge 3.1, in which the gravitational-wave signals from more than 6×10⁷ Galactic binaries were added to simulated Gaussian instrumental noise. The majority of the signals at low frequencies are not resolved individually; the confusion between signals is strongly reduced above 5 mHz. Our basic data analysis procedure is the maximum likelihood detection method: we filter the data through the template bank in the first step of the search, refine the parameters using the Nelder-Mead algorithm, remove the strongest signal found, and repeat the procedure. We reliably detect, and accurately estimate the parameters of, more than ten thousand signals from white dwarf binaries.
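The search loop described, detect the strongest signal, subtract it, repeat, can be illustrated with a toy one-dimensional analogue in which the "template bank" is a grid of integer frequencies and the signals are sinusoids. This is only a sketch of the iterative identify-and-remove idea, not the LISA analysis itself (which matched-filters the detector data streams and refines parameters with Nelder-Mead):

```python
import math

def strongest_tone(x, freqs):
    """Return (freq, amplitude, phase) of the strongest sinusoid in x,
    by projecting onto each candidate frequency (a crude template bank).
    Assumes integer cycle counts over the record length."""
    n = len(x)
    best = None
    for f in freqs:
        c = sum(x[t] * math.cos(2 * math.pi * f * t / n) for t in range(n)) * 2 / n
        s = sum(x[t] * math.sin(2 * math.pi * f * t / n) for t in range(n)) * 2 / n
        amp = math.hypot(c, s)
        if best is None or amp > best[1]:
            best = (f, amp, math.atan2(s, c))
    return best

def iterative_subtract(x, freqs, n_signals):
    """Detect the strongest signal, subtract it from the data, and repeat."""
    x = list(x)
    found = []
    for _ in range(n_signals):
        f, a, ph = strongest_tone(x, freqs)
        found.append(f)
        for t in range(len(x)):
            x[t] -= a * math.cos(2 * math.pi * f * t / len(x) - ph)
        # a real pipeline would refine (f, a, ph) here, e.g. with Nelder-Mead
    return found
```

Subtracting each detection before re-searching is what lets overlapping signals be resolved one at a time instead of confusing the next detection step.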
Amiri, Parisa; Eslamian, Ghazaleh; Mirmiran, Parvin; Shiva, Niloofar; Jafarabadi, Mohammad Asghari; Azizi, Fereidoun
2012-01-05
This study aimed to investigate the reliability and validity of the Iranian version of the Pediatric Quality of Life Inventory™ 4.0 (PedsQL™ 4.0) Generic Core Scales in children. A standard forward and backward translation procedure was used to translate the US English version of the PedsQL™ 4.0 Generic Core Scales for children into the Iranian language (Persian). The Iranian version of the PedsQL™ 4.0 Generic Core Scales was completed by 503 healthy and 22 chronically ill children aged 8-12 years and their parents. The reliability was evaluated using internal consistency. Known-groups discriminant comparisons were made, and exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) were conducted. The internal consistency, as measured by Cronbach's alpha coefficients, exceeded the minimum reliability standard of 0.70. All monotrait-multimethod correlations were higher than multitrait-multimethod correlations. The intraclass correlation coefficients (ICC) between the children self-report and parent proxy-reports showed moderate to high agreement. Exploratory factor analysis extracted six factors from the PedsQL™ 4.0 for both self and proxy reports, accounting for 47.9% and 54.8% of total variance, respectively. The results of the confirmatory factor analysis for 6-factor models for both self-report and proxy-report indicated acceptable fit for the proposed models. Regarding health status, as hypothesized from previous studies, healthy children reported significantly higher health-related quality of life than those with chronic illnesses. The findings support the initial reliability and validity of the Iranian version of the PedsQL™ 4.0 as a generic instrument to measure health-related quality of life of children in Iran. PMID:22221765
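Internal consistency as reported here is Cronbach's alpha, computed from the item variances and the variance of the total score: alpha = k/(k-1) · (1 - Σ var(item)/var(total)). A minimal generic sketch (one list of scores per item, respondents aligned across lists; not the authors' analysis code):

```python
def cronbach_alpha(items):
    """Cronbach's alpha from a list of item-score columns (one list per item)."""
    def var(x):  # sample variance (n - 1 denominator)
        m = sum(x) / len(x)
        return sum((v - m) ** 2 for v in x) / (len(x) - 1)

    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]   # per-respondent total
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))
```

Alpha approaches 1 when items covary strongly (the total's variance dwarfs the summed item variances), which is why a threshold such as 0.70 is used as a minimum reliability standard.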
Pérez-Castilla, Alejandro; García-Ramos, Amador
2018-07-01
Pérez-Castilla, A and García-Ramos, A. Evaluation of the most reliable procedure of determining jump height during the loaded countermovement jump exercise: Take-off velocity vs. flight time. J Strength Cond Res 32(7): 2025-2030, 2018. This study aimed to compare the reliability of jump height between the 2 standard procedures of analyzing force-time data (take-off velocity [TOV] and flight time [FT]) during the loaded countermovement jump (CMJ) exercise performed with a free-weight barbell and in a Smith machine. The jump height of 17 men (age: 22.2 ± 2.2 years, body mass: 75.2 ± 7.1 kg, and height: 177.0 ± 6.0 cm) was tested in 4 sessions (twice for each CMJ type) against external loads of 17, 30, 45, 60, and 75 kg. Jump height reliability was comparable between the TOV (coefficient of variation [CV]: 6.42 ± 2.41%) and FT (CV: 6.53 ± 2.17%) during the free-weight CMJ, but it was higher for the FT when the CMJ was performed in a Smith machine (CV: 11.34 ± 3.73% for TOV and 5.95 ± 1.12% for FT). Bland-Altman plots revealed trivial differences (≤0.27 cm) and no heteroscedasticity of the errors (R ≤ 0.09) for the jump height obtained by the TOV and FT procedures, whereas the random error between both procedures was higher for the CMJ performed in the Smith machine (2.02 cm) than with the free-weight barbell (1.26 cm). Based on these results, we recommend the FT procedure to determine jump height during the loaded CMJ performed in a Smith machine, whereas the TOV and FT procedures provide similar reliability during the free-weight CMJ.
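The two procedures compared reduce to two textbook projectile formulas: from take-off velocity, h = v²/(2g), and from flight time, h = g·t²/8 (the latter assuming the take-off and landing positions coincide). A minimal sketch:

```python
G = 9.81  # gravitational acceleration, m/s^2

def height_from_takeoff_velocity(v):
    """Jump height from take-off velocity: h = v^2 / (2g)."""
    return v ** 2 / (2 * G)

def height_from_flight_time(t):
    """Jump height from flight time, assuming take-off and landing positions
    coincide: rise time is t/2, so h = g * (t/2)^2 / 2 = g * t^2 / 8."""
    return G * t ** 2 / 8
```

For an ideal projectile the two agree exactly; in practice they diverge when the landing posture differs from the take-off posture, which is one reason the two procedures can show different reliability.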
Grigg, Josephine; Haakonssen, Eric; Rathbone, Evelyne; Orr, Robin; Keogh, Justin W L
2017-11-13
The aim of this study was to quantify the validity and intra-tester reliability of a novel method of kinematic measurement. The measurement target was the joint angles of an athlete performing a BMX Supercross (SX) gate start action through the first 1.2 s of movement in situ on a BMX SX ramp using a standard gate start procedure. The method employed GoPro® Hero 4 Silver (GoPro Inc., USA) cameras capturing data at 120 fps 720 p on a 'normal' lens setting. Kinovea 0.8.15 (Kinovea.org, France) was used for analysis. Tracking data was exported and angles computed in Matlab (Mathworks®, USA). The gold standard 3D method for joint angle measurement could not safely be employed in this environment, so a rigid angle was used. Validity was measured to be within 2°. Intra-tester reliability was measured by the same tester performing the analysis twice with an average of 55 days between analyses. Intra-tester reliability was high, with an absolute error <6° and <9 frames (0.075 s) across all angles and time points for key positions, respectively. The methodology is valid within 2° and reliable within 6° for the calculation of joint angles in the first ~1.25 s.
2014-01-01
Background: Multiple mini-interviews (MMIs) are a valuable tool in medical school selection due to their broad acceptance and promising psychometric properties. Given the high expenses associated with this procedure, the discussion of its feasibility should be extended to cost-effectiveness issues. Methods: Following a pilot test of MMIs for medical school admission at Hamburg University in 2009 (HAM-Int), we took several actions to improve reliability and to reduce the costs of the subsequent procedure in 2010. For both years, we assessed overall and inter-rater reliabilities based on multilevel analyses. Moreover, we provide a detailed specification of costs, as well as an extrapolation of the interrelation of costs, reliability, and the setup of the procedure. Results: The overall reliability of the initial 2009 HAM-Int procedure, with twelve stations and an average of 2.33 raters per station, was ICC = 0.75. Following the improvement actions, in 2010 the ICC remained stable at 0.76, despite the reduction of the process to nine stations and 2.17 raters per station. Moreover, costs were cut from $915 to $495 per candidate. With the 2010 modalities, we could have reached an ICC of 0.80 with 16 single-rater stations ($570 per candidate). Conclusions: With respect to reliability and cost-efficiency, it is generally worthwhile to invest in scoring, rater training, and scenario development. Moreover, it is more beneficial to increase the number of stations rather than the number of raters within stations. However, beyond roughly 80% reliability, minor improvements come at skyrocketing cost. PMID:24645665
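The trade-off between adding stations and chasing ever-higher reliability can be illustrated with the Spearman-Brown prophecy formula. This is a rough stand-in for the authors' multilevel generalisability analysis, not a reproduction of it:

```python
def spearman_brown(reliability, k_old, k_new):
    """Project test reliability when the number of parallel stations is scaled
    from k_old to k_new (Spearman-Brown prophecy formula)."""
    # back out the implied single-station reliability, then re-scale
    r1 = reliability / (k_old - (k_old - 1) * reliability)
    return k_new * r1 / (1 + (k_new - 1) * r1)
```

Projecting the 2010 figure (ICC = 0.76 with nine stations) to 16 stations gives roughly 0.85 under this simple model; the gap to the authors' reported 0.80 arises because the formula ignores the simultaneous change from 2.17 raters per station to single-rater stations. The formula's diminishing returns near 1.0 also show why the last few points of reliability are the most expensive.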
Landslide risk models for decision making.
Bonachea, Jaime; Remondo, Juan; de Terán, José Ramón Díaz; González-Díez, Alberto; Cendrero, Antonio
2009-11-01
This contribution presents a quantitative procedure for landslide risk analysis and zoning considering hazard, exposure (or value of elements at risk), and vulnerability. The method provides the means to obtain landslide risk models (expressing expected damage due to landslides on material elements and economic activities in monetary terms, according to different scenarios and periods) useful to identify areas where mitigation efforts will be most cost effective. It allows identifying priority areas for the implementation of actions to reduce vulnerability (elements) or hazard (processes). The procedure proposed can also be used as a preventive tool, through its application to strategic environmental impact analysis (SEIA) of land-use plans. The underlying hypothesis is that reliable predictions about hazard and risk can be made using models based on a detailed analysis of past landslide occurrences in connection with conditioning factors and data on past damage. The results show that the approach proposed and the hypothesis formulated are essentially correct, providing estimates of the order of magnitude of expected losses for a given time period. Uncertainties, strengths, and shortcomings of the procedure and results obtained are discussed and potential lines of research to improve the models are indicated. Finally, comments and suggestions are provided to generalize this type of analysis.
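The risk definition used here, expected damage as the product of hazard, exposure, and vulnerability aggregated over mapping units, can be sketched in a few lines (the field names and figures are illustrative assumptions, not the paper's data):

```python
def expected_annual_loss(cells):
    """Expected monetary loss: sum over mapping units of
    hazard (annual landslide probability) x exposure (value of elements at
    risk, monetary units) x vulnerability (0-1 expected damage fraction)."""
    return sum(c["hazard"] * c["exposure"] * c["vulnerability"] for c in cells)
```

Ranking units by their individual contribution to this sum is what identifies the areas where mitigation of hazard or vulnerability is most cost-effective.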
Dévier, Marie-Hélène; Le Menach, Karyn; Viglino, Liza; Di Gioia, Lodovico; Lachassagne, Patrick; Budzinski, Hélène
2013-01-15
The aim of this work was to investigate the potential presence of a broad range of organic compounds, such as hormones, alkylphenols, bisphenol A and phthalates, as well as pharmaceutical substances, in two brands of bottled natural mineral water (Evian and Volvic, Danone). The phthalates were determined by solid-phase microextraction coupled to gas chromatography-mass spectrometry (SPME-GC-MS) and the other compounds by liquid chromatography-tandem mass spectrometry (LC-MS/MS) or gas chromatography-mass spectrometry (GC-MS) after solid-phase extraction. The potential migration of alkylphenols, bisphenol A and phthalates from polyethylene terephthalate (PET) bottles was also investigated under standardized test conditions. Evian and Volvic natural mineral waters contained none of the approximately 120 targeted organic compounds. Traces of 3 pharmaceuticals (ketoprofen, salicylic acid, and caffeine), 3 alkylphenols (4-nonylphenol, 4-t-octylphenol, and 4-nonylphenol diethoxylate), and some phthalates including di(2-ethylhexyl)phthalate (DEHP) were detected in the samples, but they were also present in the procedural blanks at similar levels. Additional test procedures demonstrated that the few detected compounds originated from background laboratory contamination. Analytical procedures were designed both in the bottling factory and in the laboratory in order to investigate the sources of DEHP and to minimize this unavoidable laboratory contamination. It was shown that no migration of the targeted compounds from the bottles occurred under the test conditions. The results obtained in this study underline the difficulty of obtaining reliable measurements of sample contamination at ultra-trace levels in very pure matrices.
The analytical procedures, involving glassware, equipment, hoods, and rooms specifically dedicated to trace analysis, allowed us to reach reliable procedural limits of quantification at the ng/L level by lowering the background laboratory contamination. Copyright © 2012 Elsevier B.V. All rights reserved.
A posteriori noise estimation in variable data sets. With applications to spectra and light curves
NASA Astrophysics Data System (ADS)
Czesla, S.; Molle, T.; Schmitt, J. H. M. M.
2018-01-01
Most physical data sets contain a stochastic contribution produced by measurement noise or other random sources along with the signal. Usually, neither the signal nor the noise is accurately known prior to the measurement, so both have to be estimated a posteriori. We have studied a procedure to estimate the standard deviation of the stochastic contribution assuming normality and independence, requiring a sufficiently well-sampled data set to yield reliable results. The procedure is based on estimating the standard deviation in a sample of weighted sums of arbitrarily sampled data points and is identical to the so-called DER_SNR algorithm for specific parameter settings. To demonstrate its applicability, we present applications to synthetic data, high-resolution spectra, and a large sample of space-based light curves and, finally, give guidelines for applying the procedure in situations not explicitly considered here, to promote its adoption in data analysis.
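For the standard parameter settings the estimator described above is the DER_SNR algorithm, which cancels any slowly varying signal with a five-point second difference and converts the median absolute residual into a standard deviation. A compact sketch, assuming equidistant sampling and Gaussian, uncorrelated noise as the abstract states:

```python
import numpy as np

def der_snr_noise(flux):
    """Estimate the 1-sigma noise of an equidistantly sampled, smooth signal
    (DER_SNR estimator; assumes Gaussian, uncorrelated noise)."""
    f = np.asarray(flux, dtype=float)
    # The 5-point second difference 2*f[i] - f[i-2] - f[i+2] removes signal
    # components that vary slowly over the window; the constant rescales the
    # median absolute value to a Gaussian standard deviation.
    return 1.482602 / np.sqrt(6.0) * np.median(
        np.abs(2.0 * f[2:-2] - f[:-4] - f[4:]))

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 3, 5000))
noise = der_snr_noise(signal + rng.normal(0.0, 0.1, 5000))
# noise should recover the injected sigma of ~0.1
```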
Stepwise Iterative Fourier Transform: The SIFT
NASA Technical Reports Server (NTRS)
Benignus, V. A.; Benignus, G.
1975-01-01
A program, designed specifically to study the respective effects of some common data problems on results obtained through stepwise iterative Fourier transformation of synthetic data with known waveform composition, was outlined. Included in this group were the problems of gaps in the data, different time-series lengths, periodic but nonsinusoidal waveforms, and noisy (low signal-to-noise) data. Results on sinusoidal data were also compared with results obtained on narrow-band noise with similar characteristics. The findings showed that the analytic procedure under study can reliably reduce data consisting of (1) sinusoids in noise, (2) asymmetric but periodic waves in noise, and (3) sinusoids in noise with substantial gaps in the data. The program was also able to analyze narrow-band noise well, though with increased interpretational problems. The procedure was shown to be a powerful technique for the analysis of periodicities in comparison with classical spectrum analysis techniques. However, informed use of the stepwise procedure requires some background knowledge concerning the characteristics of the biological processes under study.
Reliability of anthropometric measurements in European preschool children: the ToyBox-study.
De Miguel-Etayo, P; Mesana, M I; Cardon, G; De Bourdeaudhuij, I; Góźdź, M; Socha, P; Lateva, M; Iotova, V; Koletzko, B V; Duvinage, K; Androutsos, O; Manios, Y; Moreno, L A
2014-08-01
The ToyBox-study aims to develop and test an innovative and evidence-based obesity prevention programme for preschoolers in six European countries: Belgium, Bulgaria, Germany, Greece, Poland and Spain. In multicentre studies, anthropometric measurements using standardized procedures that minimize errors in the data collection are essential to maximize the reliability of measurements. The aim of this paper is to describe the standardization process and reliability (intra- and inter-observer) of height, weight and waist circumference (WC) measurements in preschoolers. All technical procedures and devices were standardized and centralized training was given to the fieldworkers. At least seven children per country participated in the intra- and inter-observer reliability testing. Intra-observer technical error ranged from 0.00 to 0.03 kg for weight and from 0.07 to 0.20 cm for height, with the overall reliability being above 99%. A second training was organized for WC due to the low reliability observed in the first training. Intra-observer technical error for WC ranged from 0.12 to 0.71 cm during the first training and from 0.05 to 1.11 cm during the second training, and reliability above 92% was achieved. Epidemiological surveys need standardized procedures and training of researchers to reduce measurement error. In the ToyBox-study, very good intra- and inter-observer agreement was achieved for all anthropometric measurements performed. © 2014 World Obesity.
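The intra-observer technical error of measurement (TEM) quoted above is conventionally computed from duplicate measurements. A small sketch with made-up numbers; the TEM formula is the standard one, but the data and the coefficient-of-reliability definition shown are illustrative, not the study's:

```python
from math import sqrt
from statistics import pvariance

def technical_error(pairs):
    """Intra-observer TEM from duplicate measurements:
    sqrt(sum of squared within-pair differences / 2n)."""
    d2 = sum((a - b) ** 2 for a, b in pairs)
    return sqrt(d2 / (2 * len(pairs)))

def coefficient_of_reliability(pairs):
    """R = 1 - TEM^2 / SD^2, with SD^2 the variance of all measurements;
    values near 1 mean measurement error is negligible vs. between-subject spread."""
    values = [x for pair in pairs for x in pair]
    return 1 - technical_error(pairs) ** 2 / pvariance(values)

# duplicate height measurements (cm) on five children -- hypothetical numbers
pairs = [(101.2, 101.3), (95.0, 95.1), (110.4, 110.4), (98.7, 98.9), (104.0, 104.1)]
tem = technical_error(pairs)
```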
NASA Astrophysics Data System (ADS)
Clerici, Aldo; Perego, Susanna; Tellini, Claudio; Vescovi, Paolo
2006-08-01
Among the many GIS-based multivariate statistical methods for landslide susceptibility zonation, the so-called “Conditional Analysis method” holds a special place for its conceptual simplicity. In this method, landslide susceptibility is simply expressed as landslide density in correspondence with different combinations of instability-factor classes. To overcome the operational complexity connected to the long, tedious and error-prone sequence of commands required by the procedure, a shell script mainly based on the GRASS GIS was created. The script, starting from a landslide inventory map and a number of factor maps, automatically carries out the whole procedure, resulting in the construction of a map with five landslide susceptibility classes. A validation procedure makes it possible to assess the reliability of the resulting model, while the simple mean deviation of the density values in the factor-class combinations helps to evaluate the goodness of the landslide density distribution. The procedure was applied to a relatively small basin (167 km²) in the Italian Northern Apennines considering three landslide types, namely rotational slides, flows and complex landslides, for a total of 1,137 landslides, and five factors, namely lithology, slope angle and aspect, elevation and slope/bedding relations. The analysis of the resulting 31 different models obtained by combining the five factors confirms the role of lithology, slope angle and slope/bedding relations in influencing slope stability.
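The core of the Conditional Analysis method, landslide density per combination of factor classes, fits in a few lines. This is a toy stand-in for the GRASS shell script, with hypothetical factor classes:

```python
from collections import Counter

def susceptibility_by_class_combination(cells):
    """Landslide density per combination of instability-factor classes
    ('Conditional Analysis'): density = landslide cells / total cells
    within each combination."""
    total, slid = Counter(), Counter()
    for factors, has_landslide in cells:
        total[factors] += 1
        slid[factors] += int(has_landslide)
    return {combo: slid[combo] / n for combo, n in total.items()}

# toy raster cells: ((lithology class, slope class), landslide present?)
cells = [(("clay", "steep"), True), (("clay", "steep"), True),
         (("clay", "steep"), False), (("sand", "gentle"), False),
         (("sand", "gentle"), False)]
density = susceptibility_by_class_combination(cells)
# density[("clay", "steep")] == 2/3; density[("sand", "gentle")] == 0.0
```

Binning the resulting densities into five intervals would then yield the five susceptibility classes mentioned in the abstract.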
Novel composites for wing and fuselage applications
NASA Technical Reports Server (NTRS)
Sobel, L. H.; Buttitta, C.; Suarez, J. A.
1995-01-01
Probabilistic predictions based on the IPACS code are presented for the material and structural response of unnotched and notched, IM6/3501-6 Gr/Ep laminates. Comparisons of predicted and measured modulus and strength distributions are given for unnotched unidirectional, cross-ply and quasi-isotropic laminates. The predicted modulus distributions were found to correlate well with the test results for all three unnotched laminates. Correlations of strength distributions for the unnotched laminates are judged good for the unidirectional laminate and fair for the cross-ply laminate, whereas the strength correlation for the quasi-isotropic laminate is judged poor because IPACS did not have a progressive failure capability at the time this work was performed. The report also presents probabilistic and structural reliability analysis predictions for the strain concentration factor (SCF) for an open-hole, quasi-isotropic laminate subjected to longitudinal tension. A special procedure was developed to adapt IPACS for the structural reliability analysis. The reliability results show the importance of identifying the most significant random variables upon which the SCF depends, and of having accurate scatter values for these variables.
In Vivo Myeloperoxidase Imaging and Flow Cytometry Analysis of Intestinal Myeloid Cells.
Hülsdünker, Jan; Zeiser, Robert
2016-01-01
Myeloperoxidase (MPO) imaging is a non-invasive method to detect cells that produce the enzyme MPO, which is most abundant in neutrophils, macrophages, and inflammatory monocytes. While lacking specificity for any of these three cell types, MPO imaging can provide guidance for further flow cytometry-based analysis of tissues where these cell types reside. Isolation of leukocytes from the intestinal tract is an error-prone procedure. Here, we describe a protocol for intestinal leukocyte isolation that works reliably in our hands and allows for flow cytometry-based analysis, in particular of neutrophils.
The NASA Monographs on Shell Stability Design Recommendations: A Review and Suggested Improvements
NASA Technical Reports Server (NTRS)
Nemeth, Michael P.; Starnes, James H., Jr.
1998-01-01
A summary of existing NASA design criteria monographs for the design of buckling-resistant thin-shell structures is presented. Subsequent improvements in the analysis for nonlinear shell response are reviewed, and current issues in shell stability analysis are discussed. Examples of nonlinear shell responses that are not included in the existing shell design monographs are presented, and an approach for including reliability based analysis procedures in the shell design process is discussed. Suggestions for conducting future shell experiments are presented, and proposed improvements to the NASA shell design criteria monographs are discussed.
Inter-rater reliability of select physical examination procedures in patients with neck pain.
Hanney, William J; George, Steven Z; Kolber, Morey J; Young, Ian; Salamh, Paul A; Cleland, Joshua A
2014-07-01
This study evaluated the inter-rater reliability of select examination procedures in patients with neck pain (NP), conducted over a 24- to 48-h period. Twenty-two patients with mechanical NP participated in a standardized examination. One examiner performed the standardized examination procedures and a second, blinded examiner repeated them 24-48 h later, with no treatment administered between examinations. Inter-rater reliability was calculated with the Cohen kappa and weighted kappa for ordinal data, while continuous data were analyzed using an intraclass correlation coefficient model 2,1 (ICC2,1). Coefficients for categorical variables ranged from poor to moderate agreement (kappa -0.22 to 0.70), and coefficients for continuous data ranged from slight to moderate (ICC2,1 0.28-0.74). The standard error of measurement for cervical range of motion ranged from 5.3° to 9.9°, while the minimal detectable change ranged from 12.5° to 23.1°. This study is the first to report inter-rater reliability values for select components of the cervical examination in patients with NP performed 24-48 h after the initial examination. Reliability was considerably lower than in previous studies; clinicians should therefore consider how the passage of time may influence variability in examination findings over a 24- to 48-h period.
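Cohen's kappa, used above for the categorical findings, corrects the observed agreement for the agreement expected by chance alone. A minimal sketch with invented ratings (not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning nominal categories:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n   # observed
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[c] * cb[c] for c in ca) / n ** 2             # expected by chance
    return (po - pe) / (1 - pe)

# two examiners rating a hypothetical categorical test on 4 patients
kappa = cohens_kappa(["pos", "pos", "neg", "neg"],
                     ["pos", "neg", "neg", "neg"])
# po = 3/4, pe = (2*1 + 2*3)/16 = 0.5, so kappa = 0.5 (moderate agreement)
```

Note that kappa is undefined when both raters use a single category throughout (pe = 1); real implementations guard against that edge case.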
ERIC Educational Resources Information Center
Blagov, Pavel S.; Bi, Wu; Shedler, Jonathan; Westen, Drew
2012-01-01
The Shedler-Westen Assessment Procedure (SWAP) is a personality assessment instrument designed for use by expert clinical assessors. Critics have raised questions about its psychometrics, most notably its validity across observers and situations, the impact of its fixed score distribution on research findings, and its test-retest reliability. We…
Counting pollen grains using readily available, free image processing and analysis software.
Costa, Clayton M; Yang, Suann
2009-10-01
Although many methods exist for quantifying the number of pollen grains in a sample, there are few standard methods that are user-friendly, inexpensive and reliable. The present contribution describes a new method of counting pollen using readily available, free image processing and analysis software. Pollen was collected from anthers of two species, Carduus acanthoides and C. nutans (Asteraceae), then illuminated on slides and digitally photographed through a stereomicroscope. Using ImageJ (NIH), these digital images were processed to remove noise and sharpen individual pollen grains, then analysed to obtain a reliable total count of the number of grains present in the image. A macro was developed to analyse multiple images together. To assess the accuracy and consistency of pollen counting by ImageJ analysis, counts were compared with those made by the human eye. Image analysis produced pollen counts in 60 s or less per image, considerably faster than counting with the human eye (5-68 min). In addition, counts produced with the ImageJ procedure were similar to those obtained by eye. Because count parameters are adjustable, this image analysis protocol may be used for many other plant species. Thus, the method provides a quick, inexpensive and reliable solution to counting pollen from digital images, not only reducing the chance of error but also substantially lowering labour requirements.
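The ImageJ pipeline described (threshold the image, then count particles) amounts to connected-component labelling of a binary image. A pure-Python sketch on a toy array, illustrative only since the study used ImageJ itself:

```python
def count_grains(image, threshold):
    """Count distinct bright objects (e.g. pollen grains) in a 2-D grey-level
    array by thresholding and 4-connected component labelling -- a stand-in
    for ImageJ's threshold + 'Analyze Particles' steps."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r0 in range(rows):
        for c0 in range(cols):
            if image[r0][c0] >= threshold and not seen[r0][c0]:
                count += 1                   # new grain found
                stack = [(r0, c0)]           # flood-fill its full extent
                seen[r0][c0] = True
                while stack:
                    r, c = stack.pop()
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and image[nr][nc] >= threshold
                                and not seen[nr][nc]):
                            seen[nr][nc] = True
                            stack.append((nr, nc))
    return count

img = [[0, 9, 0, 0, 8],
       [0, 9, 0, 0, 0],
       [0, 0, 0, 7, 7]]
# three separate bright blobs -> count_grains(img, 5) == 3
```

Real micrographs additionally need the noise-removal and watershed steps the authors mention, to keep touching grains from merging into one component.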
Description of MSFC engineering photographic analysis
NASA Technical Reports Server (NTRS)
Earle, Jim; Williams, Frank
1988-01-01
Utilizing a background that includes the development of basic launch and test photographic coverage and analysis procedures, the MSFC Photographic Evaluation Group has built a body of experience that enables it to effectively satisfy MSFC's engineering photographic analysis needs. By combining the basic soundness of reliable, proven techniques of the past with newer technical advances in computers and computer-related devices, the MSFC Photo Evaluation Group is positioned to continue providing photo and video analysis services center-wide and NASA-wide, to supply an improving photo-analysis product that meets the photo-evaluation needs of the future, and to set new standards in the state of the art of photo analysis of dynamic events.
Bringing the CMS distributed computing system into scalable operations
NASA Astrophysics Data System (ADS)
Belforte, S.; Fanfani, A.; Fisk, I.; Flix, J.; Hernández, J. M.; Kress, T.; Letts, J.; Magini, N.; Miccio, V.; Sciabà, A.
2010-04-01
Establishing efficient and scalable operations of the CMS distributed computing system critically relies on the proper integration, commissioning and scale testing of the data and workload management tools, the various computing workflows and the underlying computing infrastructure, located at more than 50 computing centres worldwide and interconnected by the Worldwide LHC Computing Grid. Computing challenges periodically undertaken by CMS in the past years with increasing scale and complexity have revealed the need for a sustained effort on computing integration and commissioning activities. The Processing and Data Access (PADA) Task Force was established at the beginning of 2008 within the CMS Computing Program with the mandate of validating the infrastructure for organized processing and user analysis, including the sites and the workload and data management tools; validating the distributed production system by performing functionality, reliability and scale tests; helping sites to commission, configure and optimize their networking and storage through scale-testing data transfers and data processing; and improving the efficiency of accessing data across the CMS computing system, from global transfers to local access. This contribution reports on the tools and procedures developed by CMS for computing commissioning and scale testing, as well as the improvements accomplished towards efficient, reliable and scalable computing operations. The activities include the development and operation of load generators for job submission and data transfers with the aim of stressing the experiment and Grid data management and workload management systems, site commissioning procedures and tools to monitor and improve site availability and reliability, as well as activities targeted at the commissioning of the distributed production, user analysis and monitoring systems.
NASA Technical Reports Server (NTRS)
Ebeling, Charles E.
1996-01-01
This report documents the procedures for utilizing and maintaining the Reliability & Maintainability Model (RAM) developed by the University of Dayton for the National Aeronautics and Space Administration (NASA) Langley Research Center (LaRC). The purpose of the grant is to provide support to NASA in establishing operational and support parameters and costs of proposed space systems. As part of this research objective, the model described here was developed. This Manual updates and supersedes the 1995 RAM User and Maintenance Manual. Changes and enhancements from the 1995 version of the model are primarily a result of the addition of more recent aircraft and shuttle R&M data.
Stey, Anne M; Ko, Clifford Y; Hall, Bruce Lee; Louie, Rachel; Lawson, Elise H; Gibbons, Melinda M; Zingmond, David S; Russell, Marcia M
2014-08-01
Identifying iatrogenic injuries using existing data sources is important for improved transparency in the occurrence of intraoperative events. There is evidence that procedure codes are reliably recorded in claims data. The objective of this study was to assess whether concurrent splenic procedure codes in patients undergoing colectomy procedures are reliably coded in claims data as compared with clinical registry data. Patients who underwent colectomy procedures in the absence of neoplastic diagnosis codes were identified from American College of Surgeons (ACS) NSQIP data linked with Medicare inpatient claims data file (2005 to 2008). A κ statistic was used to assess coding concordance between ACS NSQIP and Medicare inpatient claims, with ACS NSQIP serving as the reference standard. A total of 11,367 colectomy patients were identified from 212 hospitals. There were 114 patients (1%) who had a concurrent splenic procedure code recorded in either ACS NSQIP or Medicare inpatient claims. There were 7 patients who had a splenic injury diagnosis code recorded in either data source. Agreement of splenic procedure codes between the data sources was substantial (κ statistic 0.72; 95% CI, 0.64-0.79). Medicare inpatient claims identified 81% of the splenic procedure codes recorded in ACS NSQIP, and 99% of the patients without a splenic procedure code. It is feasible to use Medicare claims data to identify splenic injuries occurring during colectomy procedures, as claims data have moderate sensitivity and excellent specificity for capturing concurrent splenic procedure codes compared with ACS NSQIP. Copyright © 2014 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Castellarin, A.; Montanari, A.; Brath, A.
2002-12-01
The study derives Regional Depth-Duration-Frequency (RDDF) equations for a wide region of northern-central Italy (37,200 km²) by following an adaptation of the approach originally proposed by Alila [WRR, 36(7), 2000]. The proposed RDDF equations have a rather simple structure and allow an estimation of the design storm, defined as the rainfall depth expected for a given storm duration and recurrence interval, in any location of the study area for storm durations from 1 to 24 hours and for recurrence intervals up to 100 years. The reliability of the proposed RDDF equations is the main concern of the study and is assessed at two different levels. The first level considers the gauged sites and compares estimates of the design storm obtained with the RDDF equations with at-site estimates based upon the observed annual maximum series of rainfall depth and with design storm estimates resulting from a regional estimator recently developed for the study area through a Hierarchical Regional Approach (HRA) [Gabriele and Arnell, WRR, 27(6), 1991]. The second level performs a reliability assessment of the RDDF equations for ungauged sites by means of a jack-knife procedure. Using the HRA estimator as a reference term, the jack-knife procedure assesses the reliability of design storm estimates provided by the RDDF equations for a given location when dealing with the complete absence of pluviometric information. The results of the analysis show that the proposed RDDF equations represent practical and effective computational means for producing a first guess of the design storm at the available raingauges and reliable design storm estimates for ungauged locations. The first author gratefully acknowledges D.H. Burn for sponsoring the submission of the present abstract.
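The jack-knife assessment described, where each gauged site is treated as ungauged in turn and predicted from the remaining ones, can be sketched generically. The regional estimator below is a placeholder mean, not an actual RDDF equation, and the depths are invented:

```python
from math import sqrt

def jackknife_rmse(site_values, regional_estimator):
    """Leave-one-out check of a regional estimator: each gauged site is
    treated as ungauged in turn and predicted from the other sites."""
    errors = []
    for i, observed in enumerate(site_values):
        others = site_values[:i] + site_values[i + 1:]
        errors.append(regional_estimator(others) - observed)
    return sqrt(sum(e * e for e in errors) / len(errors))

# toy 'design storm' depths (mm) at five gauges; the regional model here is
# simply the mean of the remaining sites -- a stand-in for an RDDF equation
depths = [42.0, 45.0, 44.0, 43.0, 46.0]
rmse = jackknife_rmse(depths, lambda xs: sum(xs) / len(xs))
```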
Kramp, Kelvin H; van Det, Marc J; Hoff, Christiaan; Lamme, Bas; Veeger, Nic J G M; Pierie, Jean-Pierre E N
2015-01-01
Global Operative Assessment of Laparoscopic Skills (GOALS) assessment has been designed to evaluate skills in laparoscopic surgery. A longitudinal blinded study of randomized video fragments was conducted to estimate the validity and reliability of GOALS in novice trainees. In total, 10 trainees each performed 6 consecutive laparoscopic cholecystectomies. Sixty procedures were recorded on video. Video fragments of (1) opening of the peritoneum; (2) dissection of Calot's triangle and achievement of critical view of safety; and (3) dissection of the gallbladder from the liver bed were blinded, randomized, and rated by 2 consultant surgeons using GOALS. Also, a grade was given for overall competence. The correlation of GOALS with live observation Objective Structured Assessment of Technical Skills (OSATS) scores was calculated. Construct validity was estimated using the Friedman 2-way analysis of variance by ranks and the Wilcoxon signed-rank test. The interrater reliability was calculated using the absolute and consistency agreement 2-way random-effects model intraclass correlation coefficient. A high correlation was found between mean GOALS score (r = 0.879, p = 0.021) and mean OSATS score. The GOALS score increased significantly across the 6 procedures (p = 0.002). The trainees performed significantly better on their sixth when compared with their first cholecystectomy (p = 0.004). The consistency agreement interrater reliability was 0.37 for the mean GOALS score (p = 0.002) and 0.55 for overall competence (p < 0.001) of the 3 video fragments. The validity observed in this randomized blinded longitudinal study supports the existing evidence that GOALS is a valid tool for assessment of novice trainees. A relatively low reliability was found in this study. Copyright © 2014 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
18 CFR 39.8 - Delegation to a Regional Entity.
Code of Federal Regulations, 2010 CFR
2010-04-01
... agreement promotes effective and efficient administration of Bulk-Power System reliability. (d) The... Interconnection-wide basis promotes effective and efficient administration of Bulk-Power System reliability and... THE ELECTRIC RELIABILITY ORGANIZATION; AND PROCEDURES FOR THE ESTABLISHMENT, APPROVAL, AND ENFORCEMENT...
Anderson, Ruth A.; Hsieh, Pi-Ching; Su, Hui Fang; Landerman, Lawrence R.; McDaniel, Reuben R.
2013-01-01
Objectives. To (1) describe participation in decision-making as a systems-level property of complex adaptive systems and (2) present empirical evidence of reliability and validity of a corresponding measure. Method. Study 1 was a mail survey of a single respondent (administrators or directors of nursing) in each of 197 nursing homes. Study 2 was a field study using random, proportionally stratified sampling procedure that included 195 organizations with 3,968 respondents. Analysis. In Study 1, we analyzed the data to reduce the number of scale items and establish initial reliability and validity. In Study 2, we strengthened the psychometric test using a large sample. Results. Results demonstrated validity and reliability of the participation in decision-making instrument (PDMI) while measuring participation of workers in two distinct job categories (RNs and CNAs). We established reliability at the organizational level aggregated items scores. We established validity of the multidimensional properties using convergent and discriminant validity and confirmatory factor analysis. Conclusions. Participation in decision making, when modeled as a systems-level property of organization, has multiple dimensions and is more complex than is being traditionally measured. Managers can use this model to form decision teams that maximize the depth and breadth of expertise needed and to foster connection among them. PMID:24349771
Statistical Models and Inference Procedures for Structural and Materials Reliability
1990-12-01
Some general stress-strength models were also developed and applied to the failure of systems subject to cyclic loading. The work also drew on process control ideas and sequential design and analysis methods. Finally, smooth nonparametric quantile function estimators were studied.
OPTICAL FIBRES AND FIBREOPTIC SENSORS: Polarisation reflectometry of anisotropic optical fibres
NASA Astrophysics Data System (ADS)
Konstantinov, Yurii A.; Kryukov, Igor'I.; Pervadchuk, Vladimir P.; Toroshin, Andrei Yu
2009-11-01
Anisotropic, polarisation-maintaining fibres have been studied using a reflectometer and an integrated-optic polariser. Linearly polarised pulses were launched into the fibre under test at different angles between their plane of polarisation and the main optical axis of the fibre. A special procedure for the correlation analysis of the resulting reflectograms is developed to enhance the reliability of the information about the longitudinal optical uniformity of anisotropic fibres.
NASA Astrophysics Data System (ADS)
Aviv, O.; Lipshtat, A.
2018-05-01
On-Site Inspection (OSI) activities under the Comprehensive Nuclear-Test-Ban Treaty (CTBT) allow limitations to be imposed on measurement equipment. Thus, certain detectors require modifications to be operated in a restricted mode, and the accuracy and reliability of results obtained by a restricted device may be impaired. We present here a method for limiting data acquisition during OSI. Limitations are applied to a high-resolution high-purity germanium detector system, where the vast majority of the acquired data that is not relevant to the inspection is filtered out. The limited spectrum is displayed to the user and allows analysis using standard gamma spectrometry procedures. The proposed method can be incorporated into commercial gamma-ray spectrometers, including both stationary and mobile-based systems. By applying this procedure to more than 1000 spectra, representing various scenarios, we show that partial data are sufficient for reaching reliable conclusions. A comprehensive survey of potential false-positive identifications of various radionuclides is presented as well. It is evident from the results that the analysis of a limited spectrum is practically identical to that of a standard spectrum in terms of detection and quantification of OSI-relevant radionuclides. A future limited system can be developed making use of the principles outlined by the suggested method.
Net, Sopheak; Delmont, Anne; Sempéré, Richard; Paluselli, Andrea; Ouddane, Baghdad
2015-05-15
Because of their widespread application, phthalates, or phthalic acid esters (PAEs), are ubiquitous in the environment. Their presence has attracted considerable attention due to their potential impacts on ecosystem functioning and on public health, so their quantification has become a necessity. Various extraction procedures as well as gas/liquid chromatography and mass spectrometry detection techniques are suitable for the reliable detection of such compounds. However, PAEs are also ubiquitous in the laboratory environment, including ambient air, reagents, sampling equipment, and various analytical devices, which makes the analysis of real samples with a low PAE background difficult. Therefore, accurate PAE analysis in environmental matrices is a challenging task. This paper reviews the extensive literature on techniques for PAE quantification in natural media. Sampling, sample extraction/pretreatment and detection methods for quantifying PAEs in different environmental matrices (air, water, sludge, sediment and soil) are reviewed and compared. The concept of "green analytical chemistry" for PAE determination is also discussed. Moreover, useful information about material preparation and quality control and quality assurance procedures is presented, to overcome the problems of sample contamination and matrix effects and thus avoid overestimating PAE concentrations in the environment. Copyright © 2015 Elsevier B.V. All rights reserved.
Analysis of an experiment aimed at improving the reliability of transmission centre shafts.
Davis, T P
1995-01-01
Smith (1991) proposed the use of Weibull regression models to establish the dependence of failure data (usually times) on covariates related to the design of the test specimens and test procedures. In his article Smith made the point that good experimental design is as important in reliability applications as elsewhere, and in view of the current interest in design inspired by Taguchi and others, we pay some attention in this article to that topic. A real case study from the Ford Motor Company is presented. Our main approach is to follow suggestions in the literature for applying standard least squares techniques of experimental analysis even when nonnormal errors and censoring are likely. This approach lacks theoretical justification, but its appeal lies in its simplicity and flexibility. For completeness we also include some analysis based on the proportional hazards model and, in an attempt to link back to Smith (1991), look at a Weibull regression model.
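As a rough illustration of the least-squares flavor of Weibull fitting the abstract alludes to, shape and scale parameters can be estimated from uncensored failure times by median-rank regression. The failure times below are invented for illustration and are not from the Ford study; Bernard's approximation is one common choice of plotting position.

```python
import math

def weibull_lsq(failure_times):
    """Estimate Weibull shape (beta) and scale (eta) by median-rank regression.

    Plot ln(t) against ln(-ln(1 - F)) and fit a straight line by least
    squares; the slope is the shape parameter beta.
    """
    t = sorted(failure_times)
    n = len(t)
    # Bernard's approximation to the median rank of the i-th failure
    F = [(i - 0.3) / (n + 0.4) for i in range(1, n + 1)]
    x = [math.log(ti) for ti in t]
    y = [math.log(-math.log(1.0 - Fi)) for Fi in F]
    xbar = sum(x) / n
    ybar = sum(y) / n
    beta = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
            / sum((xi - xbar) ** 2 for xi in x))
    # y = beta * (x - ln(eta))  =>  ln(eta) = xbar - ybar / beta
    eta = math.exp(xbar - ybar / beta)
    return beta, eta

# Hypothetical failure times (hours) of seven test specimens
beta, eta = weibull_lsq([82, 115, 133, 150, 174, 200, 240])
```

A shape parameter above 1 would indicate wear-out failures; censored observations, which the article handles separately, cannot be plotted this way.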
Vermeulen, Margit I; Tromp, Fred; Zuithoff, Nicolaas P A; Pieters, Ron H M; Damoiseaux, Roger A M J; Kuyvenhoven, Marijke M
2014-12-01
Abstract Background: Historically, semi-structured interviews (SSI) have been the core of the Dutch selection for postgraduate general practice (GP) training. This paper describes a pilot study of a newly designed competency-based selection procedure that assesses whether candidates have the competencies required to complete GP training. The objective was to explore reliability and validity aspects of the instruments developed. The new selection procedure, comprising the National GP Knowledge Test (LHK), a situational judgement test (SJT), a patterned behaviour descriptive interview (PBDI) and a simulated encounter (SIM), was piloted alongside the current procedure. Forty-seven candidates volunteered for both procedures. The admission decision was based on the results of the current procedure. Study participants hardly differed from the other candidates. The mean scores of the candidates on the LHK and SJT were 21.9% (SD 8.7) and 83.8% (SD 3.1), respectively. The mean self-reported competency scores (PBDI) were higher than the observed competencies (SIM): 3.7 (SD 0.5) and 2.9 (SD 0.6), respectively. Content-related competencies showed low correlations with one another when measured with different instruments, whereas more diverse competencies measured by a single instrument showed moderate to strong correlations. Moreover, a moderate correlation between LHK and SJT was found. The internal consistencies (intraclass correlation, ICC) of LHK and SJT were poor, while the ICCs of PBDI and SIM showed acceptable levels of reliability. Findings on the content validity and reliability of these new instruments are promising for realizing a competency-based procedure. Further development of the instruments and research on predictive validity should be pursued.
Surgical swab counting: a qualitative analysis from the perspective of the scrub nurse.
D'Lima, D; Sacks, M; Blackman, W; Benn, J
2014-05-01
The aim of the study was to conduct a qualitative exploration of the sociotechnical processes underlying retained surgical swabs, and to explore the fundamental reasons why the swab count procedure and related protocols fail in practice. Data was collected through a set of 27 semistructured qualitative interviews with scrub nurses from a large, multi-site teaching hospital. Interview transcripts were analysed using established constant comparative methods, moving between inductive and deductive reasoning. Key findings were associated with interprofessional perspectives, team processes and climate and responsibility for the swab count. The analysis of risk factors revealed that perceived social and interprofessional issues played a significant role in the reliability of measures to prevent retained swabs. This work highlights the human, psychological and organisational factors that impact upon the reliability of the process and gives rise to recommendations to address contextual factors and improve perioperative practice and training.
Can a bronchoscopist reliably assess a patient's experience of bronchoscopy?
Hadzri, HM; Azarisman, SMS; Fauzi, ARM; Roslan, H; Roslina, AM; Adina, ATN; Fauzi, MA
2010-01-01
Objectives Bronchoscopy is an essential investigative tool in many respiratory complaints. The procedure can be unpleasant for both bronchoscopists and patients. To the best of our knowledge, there are only a few studies that correlate the bronchoscopist's satisfaction with that of the patient's during bronchoscopy. The aim of our study is to assess whether or not a bronchoscopist could reliably assess a patient's satisfaction during bronchoscopy. Design Cross-sectional, observational study with convenience sampling. Setting Patients attending flexible fibreoptic bronchoscopy appointments at the bronchoscopy suite, Respiratory Unit, Universiti Kebangsaan Malaysia Medical Centre (UKMMC), Cheras, Kuala Lumpur, Malaysia between March and September 2006. Participants Sixty patients undergoing bronchoscopy over a 6-month period completed a questionnaire after the procedure. All patients received standard pre-medication with intravenous midazolam. Main outcome measures Bronchoscopists and patients rated the level of satisfaction of the procedure using a 10 cm visual analogue scale (VAS). Lower scores indicated better satisfaction or less discomfort. Patients and bronchoscopists also rated coughing, choking and vomiting perception using the same 10 cm VAS. Reliability analysis (intra-class correlation coefficient [ICC]) was used to analyse the correlation between patients' and bronchoscopists' VAS scores. Results All 60 patients answered the questionnaire. The median overall satisfaction scored by bronchoscopists was 2.2 (2.0) with a non-significant (p = 0.880) trend to a better median overall satisfaction of 1.9 (2.3) scored by patients. The VAS scores for cough sensation were 1.9 (2.7) and 1.5 (5.0), respectively. There was positive correlation between bronchoscopists' and patients' VAS scores for coughing sensation (p = 0.047, ICC = 0.233). No significant correlation for overall satisfaction, vomiting sensation and choking sensation was found. 
Conclusion Positive correlation for cough perception suggested that the bronchoscopist could reliably assess the degree of cough discomfort patients experience during bronchoscopy. PMID:21103127
Analytical procedures for determining the impacts of reliability mitigation strategies.
DOT National Transportation Integrated Search
2013-01-01
Reliability of transport, especially the ability to reach a destination within a certain amount of time, is a regular concern of travelers and shippers. The definition of reliability used in this research is how travel time varies over time. The vari...
Medication safety--reliability of preference cards.
Dawson, Anthony; Orsini, Michael J; Cooper, Mary R; Wollenburg, Karol
2005-09-01
A CLINICAL ANALYSIS of surgeons' preference cards was initiated in one hospital as part of a comprehensive analysis to reduce medication-error risks by standardizing and simplifying the intraoperative medication-use process specific to the sterile field. THE PREFERENCE CARD ANALYSIS involved two subanalyses: a review of the information as it appeared on the cards and a failure mode and effects analysis of the process involved in using and maintaining the cards. THE ANALYSIS FOUND that the preference card system in use at this hospital is outdated. Variations and inconsistencies within the preference card system indicate that the use of preference cards as guides for medication selection for surgical procedures presents an opportunity for medication errors to occur.
Gwynne, Craig R; Curran, Sarah A
2014-12-01
Clinical assessment of lower limb kinematics during dynamic tasks may identify individuals who demonstrate abnormal movement patterns that may contribute to the etiology or exacerbation of knee conditions such as patellofemoral joint (PFJt) pain. The purpose of this study was to determine the reliability, validity and associated measurement error of a clinically appropriate two-dimensional (2-D) procedure for quantifying frontal plane knee alignment during single limb squats. Nine female and nine male recreationally active subjects with no history of PFJt pain had frontal plane limb alignment assessed using three-dimensional (3-D) motion analysis and digital video cameras (2-D analysis) while performing single limb squats. The association between 2-D and 3-D measures was quantified using Pearson's product-moment correlation coefficients. Intraclass correlation coefficients (ICCs) were determined for within- and between-session reliability of 2-D data, and the standard error of measurement (SEM) was used to establish measurement error. Frontal plane limb alignment assessed with 2-D analysis demonstrated good correlation with 3-D methods (r = 0.64 to 0.78, p < 0.001). Within-session (0.86) and between-session ICCs (0.74) demonstrated good reliability for 2-D measures, and SEM scores ranged from 2° to 4°. 2-D measures have good consistency and may provide a valid measure of lower limb alignment when compared to existing 3-D methods. Assessment of lower limb kinematics using 2-D methods may be an accurate and clinically useful alternative to 3-D motion analysis when identifying individuals who demonstrate abnormal movement patterns associated with PFJt pain. 2b.
Spaceflight Ground Support Equipment Reliability & System Safety Data
NASA Technical Reports Server (NTRS)
Fernandez, Rene; Riddlebaugh, Jeffrey; Brinkman, John; Wilkinson, Myron
2012-01-01
Presented were the Reliability Analysis, consisting primarily of Failure Modes and Effects Analysis (FMEA), and the System Safety Analysis, consisting of Preliminary Hazards Analysis (PHA), performed to ensure that the CoNNeCT (Communications, Navigation, and Networking re-Configurable Testbed) Flight System was safely and reliably operated during its Assembly, Integration and Test (AI&T) phase. A tailored approach to the NASA Ground Support Equipment (GSE) standard, NASA-STD-5005C, involving the application of the appropriate requirements, S&MA discipline expertise, and a Configuration Management system (to retain a record of the analysis and documentation), was presented. System Block Diagrams of selected GSE and the corresponding FMEAs, as well as the PHAs, were presented. Also discussed are specific examples of the FMEAs and PHAs being used during the AI&T phase to drive modifications to the GSE (via "redlining" of test procedures and the placement of warning stickers to protect the flight hardware) before it was interfaced to the Flight System. These modifications were necessary because the analysis identified failure modes and hazards that had not been properly mitigated. Strict Configuration Management was applied to changes in the GSE (whether due to upgrades or expired calibrations) by revisiting the FMEAs and PHAs to reflect the latest System Block Diagrams and Bill of Material. The CoNNeCT flight system has been successfully assembled, integrated, tested, and shipped to the launch site without incident, demonstrating that the steps taken to safeguard the flight system when it was interfaced to the various GSE were successful.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ward, T.E.; Hartman, M.W.; Olin, R.C.
1989-06-01
Quality-assurance procedures are contained in this comprehensive document, intended to be used as an aid for wood-heater manufacturers and testing laboratories in performing particulate matter sampling of wood heaters according to EPA protocol, Method 5G. These procedures may be used in research and development, and as an aid in auditing and certification testing. A detailed, step-by-step quality assurance guide is provided to aid in the procurement and assembly of testing apparatus, to clearly describe the procedures, and to facilitate data collection and reporting. Suggested data sheets are supplied that can be used as an aid for both recordkeeping and certification applications. Throughout the document, activity matrices are provided to serve as a summary reference. Checklists are also supplied that can be used by testing personnel. Finally, for the purposes of ensuring data quality, procedures are outlined for apparatus operation, maintenance, and traceability. These procedures, combined with the detailed description of the sampling and analysis protocol, will help ensure the accuracy and reliability of Method 5G emission-testing results.
Reliability and validity of procedure-based assessments in otolaryngology training.
Awad, Zaid; Hayden, Lindsay; Robson, Andrew K; Muthuswamy, Keerthini; Tolley, Neil S
2015-06-01
To investigate the reliability and construct validity of procedure-based assessment (PBA) in assessing performance and progress in otolaryngology training. Retrospective database analysis using a national electronic database. We analyzed PBAs of otolaryngology trainees in North London from core trainees (CTs) to specialty trainees (STs). The tool contains six multi-item domains: consent, planning, preparation, exposure/closure, technique, and postoperative care, rated as "satisfactory" or "development required," in addition to an overall performance rating (pS) of 1 to 4. Individual domain score, overall calculated score (cS), and number of "development-required" items were calculated for each PBA. Receiver operating characteristic analysis helped determine sensitivity and specificity. There were 3,152 otolaryngology PBAs from 46 otolaryngology trainees analyzed. PBA reliability was high (Cronbach's α 0.899), and sensitivity approached 99%. cS correlated positively with pS and level in training (rs : +0.681 and +0.324, respectively). ST had higher cS and pS than CT (93% ± 0.6 and 3.2 ± 0.03 vs. 71% ± 3.1 and 2.3 ± 0.08, respectively; P < .001). cS and pS increased from CT1 to ST8 showing construct validity (rs : +0.348 and +0.354, respectively; P < .001). The technical skill domain had the highest utilization (98% of PBAs) and was the best predictor of cS and pS (rs : +0.96 and +0.66, respectively). PBA is reliable and valid for assessing otolaryngology trainees' performance and progress at all levels. It is highly sensitive in identifying competent trainees. The tool is used in a formative and feedback capacity. The technical domain is the best predictor and should be given close attention. NA. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.
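The sensitivity figure quoted above comes from standard confusion-matrix arithmetic on pass-fail decisions. A minimal sketch follows; the counts are invented for illustration and are not the study's data.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical confusion counts for one candidate pass-fail cutoff:
# 45 competent trainees flagged as passing, 1 missed,
# 10 not-yet-competent trainees flagged as failing, 4 passed in error.
sens, spec = sensitivity_specificity(tp=45, fn=1, tn=10, fp=4)
```

Receiver operating characteristic analysis, as used in the study, repeats this calculation over every cutoff and plots sensitivity against 1 − specificity.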
76 FR 47178 - Energy Efficiency Program: Test Procedure for Lighting Systems (Luminaires)
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-04
...: Test Procedure for Lighting Systems (Luminaires) AGENCY: Office of Energy Efficiency and Renewable... (``DOE'' or the ``Department'') is currently evaluating energy efficiency test procedures for luminaires... products. DOE recognizes that well-designed test procedures are important to produce reliable, repeatable...
Random analysis of bearing capacity of square footing using the LAS procedure
NASA Astrophysics Data System (ADS)
Kawa, Marek; Puła, Wojciech; Suska, Michał
2016-09-01
In the present paper, a three-dimensional problem of the bearing capacity of a square footing on a random soil medium is analyzed. The random fields of the strength parameters c and φ are generated using the LAS procedure (Local Average Subdivision, Fenton and Vanmarcke 1990). The procedure was re-implemented by the authors in the Mathematica environment in order to combine it with a commercial program. Since the procedure is still being tested, the random field has been assumed to be one-dimensional: the strength properties of the soil are random in the vertical direction only. Individual realizations of the bearing-capacity boundary problem, with the strength parameters of the medium defined by the above procedure, are solved using FLAC3D software. The analysis is performed for two qualitatively different cases, namely purely cohesive and cohesive-frictional soils. For the latter case the friction angle and cohesion have been assumed to be independent random variables. For these two cases the random square footing bearing capacity results have been obtained for fluctuation scales ranging from 0.5 m to 10 m. Each time 1000 Monte Carlo realizations have been performed. The obtained results allow not only the mean and variance but also the probability density function to be estimated. An example of the application of this function to a reliability calculation is presented in the final part of the paper.
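The reliability step of such a Monte Carlo study can be sketched as follows: count the realizations in which the sampled bearing capacity falls below the applied load. The lognormal capacity model and its mean/COV below are assumptions for illustration only, standing in for the paper's LAS-plus-FLAC3D realizations.

```python
import math
import random

def failure_probability(capacity_sampler, demand, n=1000, seed=0):
    """Monte Carlo estimate of P(failure) = P(bearing capacity < demand)."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n) if capacity_sampler(rng) < demand)
    return failures / n

def sample_capacity(rng, mean=500.0, cov=0.3):
    """Hypothetical lognormal bearing capacity (kPa); mean and COV are
    illustrative, not values from the study."""
    sigma = math.sqrt(math.log(1.0 + cov ** 2))
    mu = math.log(mean) - 0.5 * sigma ** 2
    return rng.lognormvariate(mu, sigma)

# 1000 realizations, mirroring the paper's sample size per fluctuation scale
pf = failure_probability(sample_capacity, demand=300.0, n=1000)
```

With only 1000 realizations, estimates of small failure probabilities carry substantial sampling error, which is one reason the paper reports full probability density functions rather than a single tail value.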
Meirte, J; Moortgat, P; Truijen, S; Maertens, K; Lafaire, C; De Cuyper, L; Hubens, G; Van Daele, U
2015-09-01
Burn scars are frequently accompanied by sensory deficits that often remain present months or even years after injury. The clinimetric properties of assessment tools remain understudied in the burn literature. The tactile sense of touch can be examined with the touch pressure threshold (TPT) method using the Semmes Weinstein monofilament test (SWMT). In recent research there is no consensus on the exact measurement procedure to use with the SWMT. The aim of this paper was to determine the interrater and intrarater reliability of TPT within burn scars and healthy controls using the 'ascending descending' measurement procedure. We used the newly developed guidelines for reporting reliability and agreement studies (GRRAS) as a basis for reporting this reliability study. In total 36 individuals were tested: a healthy control group and a scar group. The interrater reliability was excellent in the scar group (ICC=0.908/SEM=0.21) and fair to good in the control group (ICC=0.731/SEM=0.12). In the scar group the intrarater ICC value was excellent (ICC=0.822/SEM=0.33). An excellent intrarater reliability (ICC=0.807/SEM=0.27) was also found within the control group. In conclusion, this study shows that the SWMT with the 'ascending descending' measurement procedure is a feasible and reliable objective measure for evaluating TPT in (older) upper extremity burn scars as well as in healthy skin. Copyright © 2015 Elsevier Ltd and ISBI. All rights reserved.
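SEM values like those reported alongside each ICC above typically follow from the standard relation SEM = SD·√(1 − ICC). A minimal sketch, in which the between-subject SD is a made-up value for illustration:

```python
import math

def standard_error_of_measurement(sd, icc):
    """SEM = SD * sqrt(1 - ICC): the measurement error, in the units of
    the measure, implied by a reliability coefficient."""
    return sd * math.sqrt(1.0 - icc)

# Hypothetical between-subject SD (monofilament units), combined with the
# scar group's reported interrater ICC of 0.908
sem = standard_error_of_measurement(sd=0.7, icc=0.908)
```

The appeal of the SEM over the ICC alone is that it is expressed in the measure's own units, so clinicians can judge directly whether the error is small enough to matter.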
Reliability of drivers in urban intersections.
Gstalter, Herbert; Fastenmeier, Wolfgang
2010-01-01
The concept of human reliability has been widely used in industrial settings by human factors experts to optimise the person-task fit. Reliability is estimated by the probability that a task will successfully be completed by personnel in a given stage of system operation. Human Reliability Analysis (HRA) is a technique used to calculate human error probabilities as the ratio of errors committed to the number of opportunities for that error. To transfer this notion to the measurement of car driver reliability the following components are necessary: a taxonomy of driving tasks, a definition of correct behaviour in each of these tasks, a list of errors as deviations from the correct actions and an adequate observation method to register errors and opportunities for these errors. Use of the SAFE-task analysis procedure recently made it possible to derive driver errors directly from the normative analysis of behavioural requirements. Driver reliability estimates could be used to compare groups of tasks (e.g. different types of intersections with their respective regulations) as well as groups of drivers' or individual drivers' aptitudes. This approach was tested in a field study with 62 drivers of different age groups. The subjects drove an instrumented car and had to complete an urban test route, the main features of which were 18 intersections representing six different driving tasks. The subjects were accompanied by two trained observers who recorded driver errors using standardized observation sheets. Results indicate that error indices often vary between both the age group of drivers and the type of driving task. The highest error indices occurred in the non-signalised intersection tasks and the roundabout, which exactly equals the corresponding ratings of task complexity from the SAFE analysis. A comparison of age groups clearly shows the disadvantage of older drivers, whose error indices in nearly all tasks are significantly higher than those of the other groups. 
The vast majority of these errors could be explained by high task load in the intersections, as they represent difficult tasks. The discussion shows how reliability estimates can be used in a constructive way to propose changes in car design, intersection layout and regulation as well as driver training.
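The HRA ratio described above (errors committed divided by opportunities for error) reduces to a few lines of arithmetic. The counts below are hypothetical, chosen only to illustrate the calculation, and are not figures from the field study.

```python
def human_error_probability(errors, opportunities):
    """HEP = errors committed / opportunities for that error."""
    if opportunities <= 0:
        raise ValueError("need at least one opportunity")
    return errors / opportunities

def reliability(errors, opportunities):
    """Reliability = probability of completing the task without error."""
    return 1.0 - human_error_probability(errors, opportunities)

# Hypothetical counts: suppose 62 drivers each faced 18 intersections
# (1116 opportunities) and observers recorded 134 errors in total.
hep = human_error_probability(errors=134, opportunities=1116)
```

Computing the ratio separately per task type or per age group yields the error indices the study compares across intersections and driver groups.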
Gunaydin, Gurkan; Citaker, Seyit; Meray, Jale; Cobanoglu, Gamze; Gunaydin, Ozge Ece; Hazar Kanik, Zeynep
2016-11-01
Validation of a self-report questionnaire. The purpose of this study was to investigate the adaptation, validity, and reliability of the Turkish version of the Bournemouth Questionnaire. Low back pain is one of the most frequent disorders leading to activity limitation, and it affects most people at some point in their lives. Precise administration of assessment questionnaires is essential for evaluating patients' functional abilities and planning a successful therapy procedure. One hundred ten patients with chronic low back pain were included in the present study. To assess reliability, test-retest and internal consistency analyses were applied. The results of the test-retest analysis were assessed using the intraclass correlation coefficient method (95% confidence interval). For internal consistency, the Cronbach alpha value was calculated. Validity of the questionnaire was assessed in terms of construct validity, for which factor analysis and convergent validity were tested. For convergent validity, total scores on the Bournemouth Questionnaire were compared with total scores on the Quebec Back Pain Disability Scale and the Roland Morris Disability Questionnaire using Pearson correlation coefficient analysis. The Cronbach alpha value was 0.914, showing that this questionnaire has high internal consistency. The results of the test-retest analysis varied between 0.851 and 0.927, showing that test-retest results are highly correlated. The factor analysis indicated that this questionnaire has one factor. The Pearson correlation coefficient of the Bournemouth Questionnaire with the Roland Morris Disability Questionnaire was 0.703, and with the Quebec Back Pain Disability Scale 0.659. These results show that the Bournemouth Questionnaire correlates very well with the Roland Morris Disability Questionnaire and the Quebec Back Pain Disability Scale. The Turkish version of the Bournemouth Questionnaire is valid and reliable. 3.
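Cronbach's alpha, the internal-consistency statistic reported in several of the studies above, can be computed directly from item-level scores. The item scores below are invented for illustration; they are not data from the Bournemouth Questionnaire study.

```python
def cronbach_alpha(items):
    """Cronbach's alpha from a list of item-score columns.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores),
    where k is the number of items.
    """
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance; any consistent divisor works here
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Total score per respondent across all items
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1.0 - sum(var(col) for col in items) / var(totals))

# Three hypothetical questionnaire items scored by five respondents
alpha = cronbach_alpha([[3, 4, 5, 2, 4], [2, 4, 5, 3, 4], [3, 5, 4, 2, 5]])
```

Values above roughly 0.9, such as the 0.914 reported above, are conventionally read as high internal consistency, though very high values can also signal redundant items.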
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-01
...; Order No. 766] Delegation of Authority Regarding Electric Reliability Organization's Budget, Delegation... Electric Reliability Organization (ERO) filings. In particular, this Final Rule transfers delegated... delegation agreements, and ERO policies and procedures. DATES: This rule is effective October 1, 2012. FOR...
Portfolio Assessment: Increasing Reliability and Validity.
ERIC Educational Resources Information Center
Griffee, Dale
2002-01-01
Addresses the traditional understanding of reliability as it pertains to writing portfolio assessments. Offers a list of practical actions that can be taken to increase assessment reliability, including explicit definitions of what a portfolio holds, rater training, rater burnout, and consistent rating procedures. (Contains 26 references.) (NB)
10 CFR 712.12 - HRP implementation.
Code of Federal Regulations, 2012 CFR
2012-01-01
... DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability...) Report any observed or reported behavior or condition of another HRP-certified individual that could indicate a reliability concern, including those behaviors and conditions listed in § 712.13(c), to a...
One Iota Fills the Quota: A Paradox in Multifacet Reliability Coefficients.
ERIC Educational Resources Information Center
Conger, Anthony J.
1983-01-01
A paradoxical phenomenon of decreases in reliability as the number of elements averaged over increases is shown to be possible in multifacet reliability procedures (intraclass correlations or generalizability coefficients). Conditions governing this phenomenon are presented along with implications and cautions. (Author)
Tavakol, Mohsen; Dennick, Reg
2012-01-01
As great emphasis is rightly placed upon the importance of assessment in judging the quality of our future healthcare professionals, it is appropriate not only to choose the most appropriate assessment method, but also to continually monitor the quality of the tests themselves, in the hope that we may continually improve the process. This article stresses the importance of quality control mechanisms in the exam cycle and briefly outlines some of the key psychometric concepts, including reliability measures, factor analysis, generalisability theory and item response theory. The importance of such analyses for standard setting procedures is emphasised. This article also accompanies two new AMEE Guides in Medical Education (Tavakol M, Dennick R. Post-examination Analysis of Objective Tests: AMEE Guide No. 54 and Tavakol M, Dennick R. 2012. Post examination analysis of objective test data: Monitoring and improving the quality of high stakes examinations: AMEE Guide No. 66), which provide the reader with practical examples of analysis and interpretation in order to help develop valid and reliable tests.
FAST: a framework for simulation and analysis of large-scale protein-silicon biosensor circuits.
Gu, Ming; Chakrabartty, Shantanu
2013-08-01
This paper presents a computer aided design (CAD) framework for verification and reliability analysis of protein-silicon hybrid circuits used in biosensors. It is envisioned that similar to integrated circuit (IC) CAD design tools, the proposed framework will be useful for system level optimization of biosensors and for discovery of new sensing modalities without resorting to laborious fabrication and experimental procedures. The framework referred to as FAST analyzes protein-based circuits by solving inverse problems involving stochastic functional elements that admit non-linear relationships between different circuit variables. In this regard, FAST uses a factor-graph netlist as a user interface and solving the inverse problem entails passing messages/signals between the internal nodes of the netlist. Stochastic analysis techniques like density evolution are used to understand the dynamics of the circuit and estimate the reliability of the solution. As an example, we present a complete design flow using FAST for synthesis, analysis and verification of our previously reported conductometric immunoassay that uses antibody-based circuits to implement forward error-correction (FEC).
Gillespie, Alex; Reader, Tom W
2016-01-01
Background Letters of complaint written by patients and their advocates reporting poor healthcare experiences represent an under-used data source. The lack of a method for extracting reliable data from these heterogeneous letters hinders their use for monitoring and learning. To address this gap, we report on the development and reliability testing of the Healthcare Complaints Analysis Tool (HCAT). Methods HCAT was developed from a taxonomy of healthcare complaints reported in a previously published systematic review. It introduces the novel idea that complaints should be analysed in terms of severity. Recruiting three groups of educated lay participants (n=58, n=58, n=55), we refined the taxonomy through three iterations of discriminant content validity testing. We then supplemented this refined taxonomy with explicit coding procedures for seven problem categories (each with four levels of severity), stage of care and harm. These combined elements were further refined through iterative coding of a UK national sample of healthcare complaints (n=25, n=80, n=137, n=839). To assess reliability and accuracy for the resultant tool, 14 educated lay participants coded a referent sample of 125 healthcare complaints. Results The seven HCAT problem categories (quality, safety, environment, institutional processes, listening, communication, and respect and patient rights) were found to be conceptually distinct. On average, raters identified 1.94 problems (SD=0.26) per complaint letter. Coders exhibited substantial reliability in identifying problems at four levels of severity; moderate and substantial reliability in identifying stages of care (except for ‘discharge/transfer’, which was only fairly reliable); and substantial reliability in identifying overall harm. Conclusions HCAT is not only the first reliable tool for coding complaints, it is the first tool to measure the severity of complaints. 
It facilitates service monitoring and organisational learning and it enables future research examining whether healthcare complaints are a leading indicator of poor service outcomes. HCAT is freely available to download and use. PMID:26740496
Using the arthroscopic surgery skill evaluation tool as a pass-fail examination.
Koehler, Ryan J; Nicandri, Gregg T
2013-12-04
Examination of arthroscopic skill requires evaluation tools that are valid and reliable, with clear criteria for passing. The Arthroscopic Surgery Skill Evaluation Tool was developed as a video-based assessment of technical skill with criteria for passing established by a panel of experts. The purpose of this study was to test the validity and reliability of the Arthroscopic Surgery Skill Evaluation Tool as a pass-fail examination of arthroscopic skill. Twenty-eight residents and two sports medicine faculty members were recorded performing diagnostic knee arthroscopy on a left and a right cadaveric specimen in our arthroscopic skills laboratory. Procedure videos were evaluated with use of the Arthroscopic Surgery Skill Evaluation Tool by two raters blind to subject identity. Subjects were considered to pass the Arthroscopic Surgery Skill Evaluation Tool when they attained scores of ≥ 3 on all eight assessment domains. The raters agreed on a pass-fail rating for fifty-five of sixty videos rated, with an intraclass correlation coefficient of 0.83. Ten of thirty participants were assigned passing scores by both raters for both diagnostic arthroscopies performed in the laboratory. Receiver operating characteristic analysis demonstrated that logging more than eighty arthroscopic cases or performing more than thirty-five arthroscopic knee cases was predictive of attaining a passing Arthroscopic Surgery Skill Evaluation Tool score on both procedures performed in the laboratory. The Arthroscopic Surgery Skill Evaluation Tool is valid and reliable as a pass-fail examination of diagnostic arthroscopy of the knee in the simulation laboratory and may be a useful tool for that purpose. 
Further study is necessary to determine whether the Arthroscopic Surgery Skill Evaluation Tool can be used for the assessment of multiple arthroscopic procedures and whether it can be used to evaluate arthroscopic procedures performed in the operating room.
ERIC Educational Resources Information Center
Kim, Sooyeon; Livingston, Samuel A.
2017-01-01
The purpose of this simulation study was to assess the accuracy of a classical test theory (CTT)-based procedure for estimating the alternate-forms reliability of scores on a multistage test (MST) having 3 stages. We generated item difficulty and discrimination parameters for 10 parallel, nonoverlapping forms of the complete 3-stage test and…
Evans, Heather L; O'Shea, Dylan J; Morris, Amy E; Keys, Kari A; Wright, Andrew S; Schaad, Douglas C; Ilgen, Jonathan S
2016-02-01
This pilot study assessed the feasibility of using first person (1P) video recording with Google Glass (GG) to assess procedural skills, as compared with traditional third person (3P) video. We hypothesized that raters reviewing 1P videos would visualize more procedural steps, with greater inter-rater reliability, than raters using 3P vantages. Seven subjects performed simulated internal jugular catheter insertions. Procedures were recorded by both Google Glass and an observer's head-mounted camera. Videos were assessed by 3 expert raters using a task-specific checklist (CL) and both an additive and a summative global rating scale (GRS). Mean scores were compared by t-tests. Inter-rater reliabilities were calculated using intraclass correlation coefficients. The 1P vantage was associated with a significantly higher mean CL score than the 3P vantage (7.9 vs 6.9, P = .02). Mean GRS scores were not significantly different. Mean inter-rater reliabilities for the CL, additive-GRS, and summative-GRS were similar between vantages. 1P vantage recordings may improve visualization of tasks for behaviorally anchored instruments (eg, CLs) while maintaining global ratings and inter-rater reliability similar to conventional 3P vantage recordings.
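The intraclass correlation coefficients used above for inter-rater reliability come from a two-way ANOVA decomposition of a subjects-by-raters score matrix. A minimal sketch of ICC(2,1) in pure Python; the score matrix is illustrative, not study data:

```python
# Two-way random-effects intraclass correlation, ICC(2,1),
# from a subjects x raters score matrix (hypothetical data).

def icc_2_1(scores):
    n = len(scores)       # subjects (rows)
    k = len(scores[0])    # raters (columns)
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(scores[i][j] for i in range(n)) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((scores[i][j] - grand) ** 2
                   for i in range(n) for j in range(k))
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                 # between-subjects mean square
    msc = ss_cols / (k - 1)                 # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))      # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

scores = [[7, 8], [5, 5], [9, 8], [3, 4], [6, 6]]  # 5 subjects, 2 raters
print(round(icc_2_1(scores), 2))
```

Other ICC forms (one-way, consistency vs. agreement) change only which mean squares enter the ratio.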
Computation of Steady and Unsteady Laminar Flames: Theory
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas; Radhakrishnan, Krishnan; Zhou, Ruhai
1999-01-01
In this paper we describe the numerical analysis underlying our efforts to develop an accurate and reliable code for simulating flame propagation using complex physical and chemical models. We discuss our spatial and temporal discretization schemes, which in our current implementations range in order from two to six. In space we use staggered meshes to define discrete divergence and gradient operators, allowing us to approximate complex diffusion operators while maintaining ellipticity. Our temporal discretization is based on the use of preconditioning to produce a highly efficient linearly implicit method with good stability properties. High order for time accurate simulations is obtained through the use of extrapolation or deferred correction procedures. We also discuss our techniques for computing stationary flames. The primary issue here is the automatic generation of initial approximations for the application of Newton's method. We use a novel time-stepping procedure, which allows the dynamic updating of the flame speed and forces the flame front towards a specified location. Numerical experiments are presented, primarily for the stationary flame problem. These illustrate the reliability of our techniques, and the dependence of the results on various code parameters.
Scale Reliability Evaluation with Heterogeneous Populations
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.
2015-01-01
A latent variable modeling approach for scale reliability evaluation in heterogeneous populations is discussed. The method can be used for point and interval estimation of reliability of multicomponent measuring instruments in populations representing mixtures of an unknown number of latent classes or subpopulations. The procedure is helpful also…
Change in quality of malnutrition surveys between 1986 and 2015.
Grellety, Emmanuel; Golden, Michael H
2018-01-01
Representative surveys collecting weight, height and MUAC are used to estimate the prevalence of acute malnutrition. The results are then used to assess the scale of malnutrition in a population and the type of nutritional intervention required. There have been changes in methodology over recent decades; the objective of this study was to determine whether these have resulted in higher quality surveys. To examine the change in reliability of such surveys, we analysed the statistical distributions of the derived anthropometric parameters from 1843 surveys conducted by 19 agencies between 1986 and 2015. With the introduction of standardised guidelines and software by 2003, and their more general application from 2007, the mean standard deviation, kurtosis and skewness of the parameters used to assess nutritional status have each moved to approximate the distribution of the WHO standards when outliers are excluded using the SMART flagging procedure. Where WHO flags, which exclude only data incompatible with life, are used, the quality of anthropometric surveys has improved, and the results now approach those seen with SMART flags and the WHO standards distribution. Agencies vary in their uptake of, and adherence to, standard guidelines. Those agencies that fully implement the guidelines achieve the most consistently reliable results. Standard methods should be universally used to produce reliable data, and tests of data quality and SMART-type flagging procedures should be applied and reported to ensure that the data are credible and therefore inform appropriate intervention. Use of SMART guidelines has coincided with reliable anthropometric data since 2007.
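The two flagging procedures contrasted above differ only in the reference point and width of the exclusion window. A hedged sketch using the commonly cited conventions (WHO flags: a fixed plausibility range around the reference median; SMART flags: a window around the observed survey mean); the thresholds and sample z-scores are illustrative assumptions, not this study's exact settings:

```python
# WHO fixed-range flags vs SMART survey-relative flags for
# weight-for-height z-scores (WHZ). Illustrative data only.

def who_flags(whz):
    # WHO flags drop only biologically implausible values
    # (outside -5 .. +5 z from the reference median)
    return [z for z in whz if -5 <= z <= 5]

def smart_flags(whz, width=3.0):
    # SMART flags drop values more than `width` z-scores
    # from the survey's own observed mean
    mean = sum(whz) / len(whz)
    return [z for z in whz if abs(z - mean) <= width]

sample = [-1.2, -0.4, 0.3, -2.1, -6.5, 2.9, -0.8]
print(len(who_flags(sample)), len(smart_flags(sample)))
```

Because SMART flagging is anchored to the survey mean, it trims spread as well as impossibilities, which is why the abstract reports distributions converging toward the WHO standards under SMART flags.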
Zhao, Zhiyong; Liu, Na; Yang, Lingchen; Deng, Yifeng; Wang, Jianhua; Song, Suquan; Lin, Shanhai; Wu, Aibo; Zhou, Zhenlei; Hou, Jiafa
2015-09-01
Mycotoxins have the potential to enter the human food chain through carry-over of contaminants from feed into animal-derived products. The objective of the study was to develop a reliable and sensitive method for the analysis of 30 mycotoxins in animal feed and animal-derived food (meat, edible animal tissues, and milk) using liquid chromatography-tandem mass spectrometry (LC-MS/MS). In the study, three extraction procedures, as well as various cleanup procedures, were evaluated to select the most suitable sample preparation procedure for different sample matrices. In addition, timed and highly selective reaction monitoring on LC-MS/MS was used to filter out isobaric matrix interferences. The performance characteristics (linearity, sensitivity, recovery, precision, and specificity) of the method were determined according to Commission Decision 2002/657/EC and 401/2006/EC. The established method was successfully applied to screening of mycotoxins in animal feed and animal-derived food. The results indicated that mycotoxin contamination in feed directly influenced the presence of mycotoxin in animal-derived food. Graphical abstract Multi-mycotoxin analysis of animal feed and animal-derived food using LC-MS/MS.
Aeroservoelastic Model Validation and Test Data Analysis of the F/A-18 Active Aeroelastic Wing
NASA Technical Reports Server (NTRS)
Brenner, Martin J.; Prazenica, Richard J.
2003-01-01
Model validation and flight test data analysis require careful consideration of the effects of uncertainty, noise, and nonlinearity. Uncertainty prevails in the data analysis techniques and results in a composite model uncertainty from unmodeled dynamics, assumptions and mechanics of the estimation procedures, noise, and nonlinearity. A fundamental requirement for reliable and robust model development is an attempt to account for each of these sources of error, in particular, for model validation, robust stability prediction, and flight control system development. This paper is concerned with data processing procedures for uncertainty reduction in model validation for stability estimation and nonlinear identification. F/A-18 Active Aeroelastic Wing (AAW) aircraft data is used to demonstrate signal representation effects on uncertain model development, stability estimation, and nonlinear identification. Data is decomposed using adaptive orthonormal best-basis and wavelet-basis signal decompositions for signal denoising into linear and nonlinear identification algorithms. Nonlinear identification from a wavelet-based Volterra kernel procedure is used to extract nonlinear dynamics from aeroelastic responses, and to assist model development and uncertainty reduction for model validation and stability prediction by removing a class of nonlinearity from the uncertainty.
Statistical analysis of global horizontal solar irradiation GHI in Fez city, Morocco
NASA Astrophysics Data System (ADS)
Bounoua, Z.; Mechaqrane, A.
2018-05-01
An accurate knowledge of the solar energy reaching the ground is necessary for sizing and optimizing the performance of solar installations. This paper describes a statistical analysis of the global horizontal solar irradiation (GHI) at Fez city, Morocco. For better reliability, we first applied a set of check procedures to test the quality of hourly GHI measurements, eliminating erroneous values that are generally due to measurement or cosine-effect errors. The statistical analysis shows that the annual mean daily value of GHI is approximately 5 kWh/m²/day. Daily and monthly mean values and other parameters are also calculated.
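Quality-check procedures of the kind described typically test each hourly value against physical limits. A sketch of one plausible check; the 1.2 coefficient, night-time tolerance, and sample readings are assumptions for illustration, not the paper's exact criteria:

```python
# Plausibility check of hourly GHI readings: a value must be
# nonnegative and below a loose physical bound derived from the
# extraterrestrial irradiance on a horizontal surface.
import math

SOLAR_CONSTANT = 1361.0  # W/m^2

def passes_check(ghi, solar_zenith_deg):
    cos_z = math.cos(math.radians(solar_zenith_deg))
    if cos_z <= 0:            # sun below horizon: GHI should be ~0
        return ghi <= 5.0
    upper = 1.2 * SOLAR_CONSTANT * cos_z   # loose physical limit
    return 0.0 <= ghi <= upper

readings = [(850.0, 30.0), (-4.0, 45.0), (1500.0, 60.0), (2.0, 95.0)]
print([passes_check(g, z) for g, z in readings])
```

Negative values and values exceeding the zenith-dependent bound are rejected; small night-time offsets (sensor noise) are tolerated.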
Telescience testbed: operational support functions for biomedical experiments.
Yamashita, M; Watanabe, S; Shoji, T; Clarke, A H; Suzuki, H; Yanagihara, D
1992-07-01
A telescience testbed experiment was conducted to study the methodology of space biomedicine under the simulated constraints imposed on space experiments. The experimental subject selected for this testbedding was an elaborate surgery on animals with electrophysiological measurements conducted by an onboard operator. The standing potential in the ampulla of the pigeon's semicircular canal was measured during gravitational and caloric stimulation. A principal investigator, isolated from the operation site, participated in the experiment interactively by telecommunication links. Reliability analysis was applied across all layers of experimentation, including the design of experimental objectives and operational procedures. Engineering and technological aspects of telescience are discussed in terms of reliability to assure the quality of science. The feasibility of robotics was examined for supportive functions to reduce the workload of the onboard operator.
Automatic detection of sleep macrostructure based on a sensorized T-shirt.
Bianchi, Anna M; Mendez, Martin O
2010-01-01
In the present work we apply a fully automatic procedure to the analysis of signals coming from a sensorized T-shirt worn during the night, for sleep evaluation. The goodness and reliability of the signals recorded through the T-shirt were previously tested, while the employed algorithms for feature extraction and sleep classification were previously developed on standard ECG recordings, and the obtained classification was compared to the standard clinical practice based on polysomnography (PSG). In the present work we combined T-shirt recordings and automatic classification and could obtain reliable sleep profiles, i.e. the sleep classification into WAKE, REM (rapid eye movement) and NREM stages, based on heart rate variability (HRV), respiration and movement signals.
Computational methods for structural load and resistance modeling
NASA Technical Reports Server (NTRS)
Thacker, B. H.; Millwater, H. R.; Harren, S. V.
1991-01-01
An automated capability for computing structural reliability considering uncertainties in both load and resistance variables is presented. The computations are carried out using an automated Advanced Mean Value iteration algorithm (AMV+) with performance functions involving load and resistance variables obtained by both explicit and implicit methods. A complete description of the procedures used is given, as well as several illustrative examples verified by Monte Carlo analysis. In particular, the computational methods described in the paper are shown to be quite accurate and efficient for a material nonlinear structure considering material damage as a function of several primitive random variables. The results show clearly the effectiveness of the algorithms for computing the reliability of large-scale structural systems with a maximum number of resolutions.
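The Monte Carlo verification mentioned above amounts to sampling the limit-state function g = R - S and counting failures. A minimal sketch with assumed normal distributions and illustrative parameters (not the paper's problem data):

```python
# Monte Carlo estimate of a probability of failure for the
# limit state g = R - S: failure when resistance R falls below
# load S. Distribution parameters are illustrative.
import random

random.seed(42)

def monte_carlo_pof(n=100_000):
    failures = 0
    for _ in range(n):
        r = random.gauss(mu=100.0, sigma=10.0)   # resistance
        s = random.gauss(mu=60.0, sigma=15.0)    # load
        if r - s < 0:                            # limit state violated
            failures += 1
    return failures / n

print(round(monte_carlo_pof(), 2))
```

For these normal parameters the exact answer is Phi(-40 / sqrt(10^2 + 15^2)) ~ 0.013, so the sampled estimate rounds to 0.01; methods such as AMV+ aim to reach the same probability at a small fraction of the function evaluations.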
Proximal tibial osteotomy. A survivorship analysis.
Ritter, M A; Fechtman, R A
1988-01-01
Proximal tibial osteotomy is generally accepted as a treatment for the patient with unicompartmental arthritis. However, only a few reports of the long-term results of this procedure are available in the literature, and none have used the technique known as survivorship analysis. This technique has an advantage over conventional analysis because it does not exclude patients for inadequate follow-up, loss to follow-up, or patient death. In this study, survivorship analysis was applied to 78 proximal tibial osteotomies, performed exclusively by the senior author for the correction of a preoperative varus deformity, and a survival curve was constructed. It was concluded that the reliable longevity of the proximal tibial osteotomy is approximately 6 years.
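Survivorship analysis of this kind is usually carried out with the Kaplan-Meier product-limit estimator, which keeps censored patients (lost to follow-up, death) in the at-risk set up to their censoring time instead of excluding them. A minimal sketch with hypothetical (years, event) records:

```python
# Kaplan-Meier product-limit estimate. Event = 1 means failure
# (e.g. revision of the osteotomy); event = 0 means censored
# (lost to follow-up, death). Records are hypothetical.

def kaplan_meier(records):
    times = sorted({t for t, e in records if e == 1})
    survival, s = [], 1.0
    for t in times:
        at_risk = sum(1 for ti, _ in records if ti >= t)
        deaths = sum(1 for ti, e in records if ti == t and e == 1)
        s *= (1 - deaths / at_risk)          # product-limit update
        survival.append((t, round(s, 3)))
    return survival

records = [(2, 0), (3, 1), (4, 0), (5, 1), (6, 1), (7, 0), (8, 1), (9, 0)]
print(kaplan_meier(records))
```

The survival probability drops only at observed failure times, while censored records shrink the at-risk denominator between drops.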
Validation of a Dry Model for Assessing the Performance of Arthroscopic Hip Labral Repair.
Phillips, Lisa; Cheung, Jeffrey J H; Whelan, Daniel B; Murnaghan, Michael Lucas; Chahal, Jas; Theodoropoulos, John; Ogilvie-Harris, Darrell; Macniven, Ian; Dwyer, Tim
2017-07-01
Arthroscopic hip labral repair is a technically challenging and demanding surgical technique with a steep learning curve. Arthroscopic simulation allows trainees to develop these skills in a safe environment. The purpose of this study was to evaluate the use of a combination of assessment ratings for the performance of arthroscopic hip labral repair on a dry model. Cross-sectional study; Level of evidence, 3. A total of 47 participants including orthopaedic surgery residents (n = 37), sports medicine fellows (n = 5), and staff surgeons (n = 5) performed arthroscopic hip labral repair on a dry model. Prior arthroscopic experience was noted. Participants were evaluated by 2 orthopaedic surgeons using a task-specific checklist, the Arthroscopic Surgical Skill Evaluation Tool (ASSET), task completion time, and a final global rating scale. All procedures were video-recorded and scored by an orthopaedic fellow blinded to the level of training of each participant. The internal consistency/reliability (Cronbach alpha) using the total ASSET score for the procedure was high (intraclass correlation coefficient > 0.9). One-way analysis of variance for the total ASSET score demonstrated a difference between participants based on the level of training (F(3,43) = 27.8, P < .001). A good correlation was seen between the ASSET score and previous exposure to arthroscopic procedures (r = 0.52-0.73, P < .001). The interrater reliability for the ASSET score was excellent (>0.9). The results of this study demonstrate that the use of dry models to assess the performance of arthroscopic hip labral repair by trainees is both valid and reliable. Further research will be required to demonstrate a correlation with performance on cadaveric specimens or in the operating room.
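The internal-consistency statistic reported above (Cronbach alpha) is computed from the item variances and the variance of the total score. A minimal sketch on a hypothetical participants-by-items score matrix:

```python
# Cronbach's alpha over checklist item scores, one row per
# participant, one column per item. Illustrative data only.

def cronbach_alpha(items):
    k = len(items[0])    # number of items
    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [variance([row[j] for row in items]) for j in range(k)]
    total_var = variance([sum(row) for row in items])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

scores = [[4, 5, 4], [2, 3, 2], [5, 5, 4], [3, 3, 3], [4, 4, 5]]
print(round(cronbach_alpha(scores), 2))
```

Alpha approaches 1 when items co-vary strongly, i.e. when the total-score variance dwarfs the sum of individual item variances.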
A content validated questionnaire for assessment of self reported venous blood sampling practices.
Bölenius, Karin; Brulin, Christine; Grankvist, Kjell; Lindkvist, Marie; Söderberg, Johan
2012-01-19
Venous blood sampling is a common procedure in health care. It is strictly regulated by national and international guidelines. Deviations from guidelines due to human mistakes can cause patient harm. Validated questionnaires for health care personnel can be used to assess preventable "near misses"--i.e. potential errors and nonconformities during venous blood sampling practices that could transform into adverse events. However, no validated questionnaire that assesses nonconformities in venous blood sampling has previously been presented. The aim was to test a recently developed questionnaire in self reported venous blood sampling practices for validity and reliability. We developed a questionnaire to assess deviations from best practices during venous blood sampling. The questionnaire contained questions about patient identification, test request management, test tube labeling, test tube handling, information search procedures and frequencies of error reporting. For content validity, the questionnaire was confirmed by experts on questionnaires and venous blood sampling. For reliability, test-retest statistics were used on the questionnaire answered twice. The final venous blood sampling questionnaire included 19 questions out of which 9 had in total 34 underlying items. It was found to have content validity. The test-retest analysis demonstrated that the items were generally stable. In total, 82% of the items fulfilled the reliability acceptance criteria. The questionnaire could be used for assessment of "near miss" practices that could jeopardize patient safety and gives several benefits instead of assessing rare adverse events only. The higher frequencies of "near miss" practices allow for quantitative analysis of the effect of corrective interventions and for benchmarking preanalytical quality not only at the laboratory/hospital level but also at the health care unit/hospital ward level. PMID:22260505
Performance and Reliability of Bonded Interfaces for High-Temperature Packaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeVoto, Douglas
2016-06-08
This is a technical review of the DOE VTO EDT project EDT063, Performance and Reliability of Bonded Interfaces for High-Temperature Packaging. A procedure for analyzing the reliability of sintered-silver through experimental thermal cycling and crack propagation modeling has been outlined and results have been presented.
16 CFR 260.5 - Interpretation and substantiation of environmental marketing claims.
Code of Federal Regulations, 2011 CFR
2011-01-01
... reasonable basis substantiating the claim. A reasonable basis consists of competent and reliable evidence. In... reliable scientific evidence, defined as tests, analyses, research, studies or other evidence based on the... qualified to do so, using procedures generally accepted in the profession to yield accurate and reliable...
Gelau, Christhard; Henning, Matthias J; Krems, Josef F
2009-03-01
In recent years considerable efforts have been spent on the development of the occlusion technique as a procedure for the assessment of the human-machine interface of in-vehicle information and communication systems (IVIS) designed to be used by the driver while driving. The importance and significance of the findings resulting from the application of this procedure depends essentially on its reliability. Because there is a lack of evidence as to whether this basic criterion of measurement is met with this procedure, and because questionable reliability can lead to doubts about their validity, our project strove to clarify this issue. This paper reports on a statistical reanalysis of data obtained from previous experiments. To summarise, the characteristic values found for internal consistency were almost all in the range of .90 for the occlusion technique, which can be considered satisfactory.
NASA Astrophysics Data System (ADS)
Walaszek, Damian; Senn, Marianne; Wichser, Adrian; Faller, Markus; Wagner, Barbara; Bulska, Ewa; Ulrich, Andrea
2014-09-01
This work describes an evaluation of a strategy for multi-elemental analysis of typical ancient bronzes (copper, lead bronze and tin bronze) by means of laser ablation inductively coupled plasma mass spectrometry (LA-ICPMS). The samples, originating from archeological experiments on ancient metal smelting processes using direct reduction in a ‘bloomery’ furnace as well as historical casting techniques, were investigated with the use of the previously proposed analytical procedure, including metallurgical observation and preliminary visual estimation of the homogeneity of the samples. The results of LA-ICPMS analysis were compared to the results of bulk composition obtained by X-ray fluorescence spectrometry (XRF) and by inductively coupled plasma mass spectrometry (ICPMS) after acid digestion. These results were coherent for most of the elements, confirming the usefulness of the proposed analytical procedure; however, the reliability of the quantitative information about the content of the most heterogeneously distributed elements is also discussed in more detail.
Mondello, Luigi; Casilli, Alessandro; Tranchida, Peter Quinto; Lo Presti, Maria; Dugo, Paola; Dugo, Giovanni
2007-11-01
The present research is focused on the development of a comprehensive two-dimensional gas chromatography-rapid scanning quadrupole mass spectrometric (GC x GC-qMS) methodology for the analysis of trace-amount pesticides contained in a complex real-world sample. Reliable peak assignment was carried out by using a recently developed, dedicated pesticide MS library (for comprehensive GC analysis), characterized by a twin-filter search procedure, the first based on a minimum degree of spectral similarity and the second on the interactive use of linear retention indices (LRI). The library was constructed by subjecting mixtures of commonly used pesticides to GC x GC-qMS analysis and then deriving their pure mass spectra and LRI values. In order to verify the effectiveness of the approach, a pesticide-contaminated red grapefruit extract was analysed. The certainty of peak assignment was attained by exploiting both the enhanced separation power of dual-oven GC x GC and the highly effective search procedure.
Normative Data for an Instrumental Assessment of the Upper-Limb Functionality.
Caimmi, Marco; Guanziroli, Eleonora; Malosio, Matteo; Pedrocchi, Nicola; Vicentini, Federico; Molinari Tosatti, Lorenzo; Molteni, Franco
2015-01-01
Upper-limb movement analysis is important to objectively monitor rehabilitation interventions, contributing to improved overall treatment outcomes. Simple, fast, easy-to-use, and widely applicable methods are required to allow routine functional evaluation of patients with different pathologies and clinical conditions. This paper describes the Reaching and Hand-to-Mouth Evaluation Method, a fast procedure to assess upper-limb motor control and functional ability, providing a set of normative data from 42 healthy subjects of different ages, evaluated for both dominant and nondominant limb motor performance. Sixteen of them were reevaluated after two weeks for test-retest reliability analysis. Data were clustered into three subgroups of different ages to test the method's sensitivity to motor control differences. Experimental data show notable test-retest reliability in all tasks. Data from older and younger subjects show significant differences in the measures related to coordination ability, demonstrating the high sensitivity of the method to motor control differences. The presented method, provided with control data from healthy subjects, appears to be a suitable and reliable tool for upper-limb functional assessment in the clinical environment. PMID:26539500
The Arthroscopic Surgical Skill Evaluation Tool (ASSET).
Koehler, Ryan J; Amsdell, Simon; Arendt, Elizabeth A; Bisson, Leslie J; Braman, Jonathan P; Butler, Aaron; Cosgarea, Andrew J; Harner, Christopher D; Garrett, William E; Olson, Tyson; Warme, Winston J; Nicandri, Gregg T
2013-06-01
Surgeries employing arthroscopic techniques are among the most commonly performed in orthopaedic clinical practice; however, valid and reliable methods of assessing the arthroscopic skill of orthopaedic surgeons are lacking. The Arthroscopic Surgery Skill Evaluation Tool (ASSET) will demonstrate content validity, concurrent criterion-oriented validity, and reliability when used to assess the technical ability of surgeons performing diagnostic knee arthroscopic surgery on cadaveric specimens. Cross-sectional study; Level of evidence, 3. Content validity was determined by a group of 7 experts using the Delphi method. Intra-articular performance of a right and left diagnostic knee arthroscopic procedure was recorded for 28 residents and 2 sports medicine fellowship-trained attending surgeons. Surgeon performance was assessed by 2 blinded raters using the ASSET. Concurrent criterion-oriented validity, interrater reliability, and test-retest reliability were evaluated. Content validity: The content development group identified 8 arthroscopic skill domains to evaluate using the ASSET. Concurrent criterion-oriented validity: Significant differences in the total ASSET score (P < .05) between novice, intermediate, and advanced experience groups were identified. Interrater reliability: The ASSET scores assigned by each rater were strongly correlated (r = 0.91, P < .01), and the intraclass correlation coefficient between raters for the total ASSET score was 0.90. Test-retest reliability: There was a significant correlation between ASSET scores for both procedures attempted by each surgeon (r = 0.79, P < .01). The ASSET appears to be a useful, valid, and reliable method for assessing surgeon performance of diagnostic knee arthroscopic surgery in cadaveric specimens. Studies are ongoing to determine its generalizability to other procedures as well as to the live operating room and other simulated environments.
A Procedure for 3-D Contact Stress Analysis of Spiral Bevel Gears
NASA Technical Reports Server (NTRS)
Kumar, A.; Bibel, G.
1994-01-01
Contact stress distribution of spiral bevel gears using nonlinear finite element static analysis is presented. Procedures have been developed to solve the nonlinear equations that identify the gear and pinion surface coordinates based on the kinematics of the cutting process and to orient the pinion and the gear in space to mesh with each other. Contact is simulated by connecting GAP elements along the intersection of a line from each pinion point (parallel to the normal at the contact point) with the gear surface. A three-dimensional model with four gear teeth and three pinion teeth is used to determine the contact stresses at two different contact positions in a spiral bevel gearset. A summary of the elliptical contact stress distribution is given. This information will be helpful to helicopter and aircraft transmission designers who need to minimize the weight of the transmission and maximize reliability.
Vitse, J; Bekara, F; Bertheuil, N; Sinna, R; Chaput, B; Herlin, C
2017-02-01
Current data on upper extremity propeller flaps are sparse and do not allow assessment of the safety of this technique. A systematic literature review was conducted searching the PubMed, EMBASE, and Cochrane Library electronic databases, with a selection process adapted from the preferred reporting items for systematic reviews and meta-analyses (PRISMA) statement. The final analysis included ten relevant articles involving 117 flaps. The majority of flaps were used for the hand, distal wrist, and elbow. The radial artery perforator and ulnar artery perforator were the most frequently used flaps. Venous congestion occurred in 7% of flaps and complete necrosis in 3%. No difference in complication rates was found across flap sites. Perforator-based propeller flaps appear to be an interesting procedure for covering soft tissue defects involving the upper extremities, even for large defects, but the procedure requires experience and close monitoring. Level of evidence: II.
Ortenzi, Monica; Ghiselli, Roberto; Baldarelli, Maddalena; Cardinali, Luca; Guerrieri, Mario
2018-04-01
The latest robotic bipolar vessel-sealing tools have been described as effective, allowing procedures to be performed with reduced blood loss and shorter operative times. The aim of this study was to assess the efficacy and reliability of these devices applied in different robotic procedures. All robotic operations between 2014 and 2016 were performed using the EndoWrist One VesselSealer (EWO, Intuitive Surgical, Sunnyvale, CA), a fully wristed bipolar device. Data including age, gender, and body mass index (BMI) were collected. Robot docking time, intraoperative blood loss, robot malfunction and overall operative time were analyzed. A meta-analysis of the literature was carried out focusing on three parameters (mean blood loss, operating time and hospital stay) to identify how different coagulation devices may affect them. In 73 robotic procedures, the mean operative time was 118.2 minutes (75-125 minutes). Mean hospital stay was four days (2-10 days). There were two post-operative complications (2.74%). The bipolar vessel sealer offers the efficacy of bipolar diathermy and the advantages of a fully wristed instrument. It does not require any change of instruments for coagulation or involvement of the bedside assistant surgeon. These characteristics lead to a reduction in operative time.
Utrera, Mariana; Morcuende, David; Rodríguez-Carpena, Javier-Germán; Estévez, Mario
2011-12-01
Precise methodologies for the routine analysis of particular protein carbonyls are required in order to make progress on this topic of increasing interest. The present paper describes, for the first time, the application of an improved method for the detection of α-aminoadipic and γ-glutamic semialdehydes in a meat system using a derivatization procedure with p-aminobenzoic acid (ABA) followed by fluorescence high-performance liquid chromatography (HPLC). The method development comprises i) the description of a simple HPLC program which allows the efficient separation of ABA and the key standard compounds and ii) the optimization of the procedure for the preparation of a meat sample in order to maximize the fluorescence signal for both protein carbonyls. Furthermore, the suitability of this method is evaluated by applying the technique to porcine burger patties. The present procedure enables an accurate and relatively fast analysis of both semialdehydes in meat samples, in which they could play an interesting role as reliable indicators of protein oxidation. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA DOE POD NDE Capabilities Data Book
NASA Technical Reports Server (NTRS)
Generazio, Edward R.
2015-01-01
This data book contains the Directed Design of Experiments for Validating Probability of Detection (POD) Capability of NDE Systems (DOEPOD) analyses of the nondestructive inspection data presented in the NTIAC Nondestructive Evaluation (NDE) Capabilities Data Book, 3rd ed., NTIAC DB-97-02. DOEPOD is designed as a decision support system to validate that inspection systems, personnel, and protocols demonstrate 0.90 POD with 95% confidence at critical flaw sizes (a90/95). The test methodology used in DOEPOD is based on the field of statistical sequential analysis founded by Abraham Wald. Sequential analysis is a method of statistical inference whose characteristic feature is that the number of observations required by the procedure is not determined in advance of the experiment. The decision to terminate the experiment depends, at each stage, on the results of the observations previously made. A merit of the sequential method, as applied to testing statistical hypotheses, is that test procedures can be constructed which require, on average, a substantially smaller number of observations than equally reliable test procedures based on a predetermined number of observations.
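The sequential-testing idea behind DOEPOD can be illustrated with a minimal sketch of Wald's sequential probability ratio test for a Bernoulli detection probability. The specific alternative hypothesis (a hypothetical degraded POD of p1 = 0.70) and the 5% risk levels are illustrative assumptions for the sketch, not DOEPOD's actual decision rules:

```python
import math

def sprt_step(successes, trials, p0=0.90, p1=0.70, alpha=0.05, beta=0.05):
    """One decision check of Wald's SPRT for a Bernoulli detection rate.
    H0: p = p0 (required POD); H1: p = p1 (hypothetical degraded POD).
    Returns 'accept p0', 'accept p1', or 'continue' (collect more data)."""
    failures = trials - successes
    # log-likelihood ratio of H1 vs H0 after the observations so far
    llr = (successes * math.log(p1 / p0)
           + failures * math.log((1 - p1) / (1 - p0)))
    a = math.log(beta / (1 - alpha))   # lower boundary: accept H0
    b = math.log((1 - beta) / alpha)   # upper boundary: accept H1
    if llr <= a:
        return "accept p0"
    if llr >= b:
        return "accept p1"
    return "continue"

print(sprt_step(29, 29))  # 29/29 detections strongly favor p0 = 0.90
print(sprt_step(2, 3))    # too few observations: keep testing
```

The key property described in the abstract is visible here: the sample size is not fixed in advance, and a run of clean detections (or of misses) can end the experiment early.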
Quantitative architectural analysis: a new approach to cortical mapping.
Schleicher, A; Palomero-Gallagher, N; Morosan, P; Eickhoff, S B; Kowalski, T; de Vos, K; Amunts, K; Zilles, K
2005-12-01
Recent progress in anatomical and functional MRI has revived the demand for a reliable topographic map of the human cerebral cortex. To date, interpretations of specific activations found in functional imaging studies and their topographical analysis in a spatial reference system are often still based on classical architectonic maps. The most commonly used reference atlas is that of Brodmann and his successors, despite its severe inherent drawbacks. One obvious weakness of traditional architectural mapping is the subjective nature of localising borders between cortical areas by means of a purely visual, microscopic examination of histological specimens. To overcome this limitation, more objective, quantitative mapping procedures have been established in past years. The quantification of the neocortical laminar pattern by defining intensity line profiles across the cortical layers has a long tradition. In recent years, this method has been extended to enable reliable, reproducible mapping of the cortex based on image analysis and multivariate statistics. Methodological approaches to such algorithm-based cortical mapping have been published for various architectural modalities. In our contribution, principles of algorithm-based mapping are described for cyto- and receptorarchitecture. In a cytoarchitectural parcellation of the human auditory cortex using a sliding-window procedure, the classical areal pattern of the human superior temporal gyrus was modified by replacing Brodmann's areas 41, 42, 22, and parts of area 21 with a novel, more detailed map. An extension and optimisation of the sliding-window procedure to the specific requirements of receptorarchitectonic mapping is also described, using the macaque central sulcus and adjacent superior parietal lobule as a second, biologically independent example.
Algorithm-based mapping procedures, however, are not limited to these two architectural modalities, but can be applied to all images in which a laminar cortical pattern can be detected and quantified, e.g. myeloarchitecture and in vivo high-resolution MR imaging. Defining cortical borders based on changes in cortical lamination in high-resolution, in vivo structural MR images will result in a rapid increase in our knowledge of the structural parcellation of the human cerebral cortex.
Analysis Of Rearfoot Motion In Running Shoes
NASA Astrophysics Data System (ADS)
Cooper, Les
1986-12-01
In order to produce better shoes that cushion athletes from the high impact forces of running and still provide stability to the foot, it is essential to have a method of quickly and reliably evaluating the performance of prototype shoes. The analysis of rearfoot motion requires the use of film or video recordings of test subjects running on a treadmill. Specific points on the subject are tracked to give a measure of inversion or eversion of the heel. This paper describes the testing procedure and its application to running shoe design. A comparison of film and video systems is also discussed.
Performance of a Lexical and POS Tagger for Sanskrit
NASA Astrophysics Data System (ADS)
Hellwig, Oliver
Due to the phonetic, morphological, and lexical complexity of Sanskrit, the automatic analysis of this language is a real challenge in the area of natural language processing. The paper describes a series of tests that were performed to assess the accuracy of the tagging program SanskritTagger. To our knowledge, it offers the first reliable benchmark data for evaluating the quality of taggers for Sanskrit using an unrestricted dictionary and texts from different domains. Based on a detailed analysis of the test results, the paper points out possible directions for future improvements of statistical tagging procedures for Sanskrit.
Williams, Mark R; McKeown, Andrew; Dexter, Franklin; Miner, James R; Sessler, Daniel I; Vargo, John; Turk, Dennis C; Dworkin, Robert H
2016-01-01
Successful procedural sedation represents a spectrum of patient- and clinician-related goals. The absence of a gold-standard measure of the efficacy of procedural sedation has led to a variety of outcomes being used in clinical trials; the consequent lack of consistency among measures makes comparisons among trials and meta-analyses challenging. We evaluated which existing measures have undergone psychometric analysis in a procedural sedation setting and whether the validity of any of these measures supports their use across the range of procedures for which sedation is indicated. Numerous measures were found to have been used in clinical research on procedural sedation across a wide range of procedures. However, reliability and validity have been evaluated for only a limited number of sedation scales, observer-rated pain/discomfort scales, and satisfaction measures, in only a few categories of procedures. Typically, studies examined only one or two aspects of scale validity. The results are likely unique to the specific clinical settings in which they were tested. Certain scales, for example those requiring motor stimulation, are unsuitable for evaluating sedation during procedures where movement is prohibited (e.g., magnetic resonance imaging scans). Further work is required to evaluate existing measures for procedures for which they were not developed. Depending on the outcomes of these efforts, it might ultimately be necessary to consider measures of sedation efficacy to be procedure specific.
NASA Astrophysics Data System (ADS)
Kahveci, Ajda
2010-07-01
In this study, multiple thematically based and quantitative analysis procedures were utilized to explore the effectiveness of Turkish chemistry and science textbooks in terms of their reflection of reform. The themes of gender equity, questioning level, science vocabulary load, and readability level provided the conceptual framework for the analyses. An unobtrusive research method, content analysis, was used by coding the manifest content and counting the frequency of words, photographs, drawings, and questions by cognitive level. The context was an undergraduate chemistry teacher preparation program at a large public university in a metropolitan area in northwestern Turkey. Forty preservice chemistry teachers were guided to analyze 10 middle school science and 10 high school chemistry textbooks. Overall, the textbooks included unfair gender representations, a considerably higher number of input- and processing-level than output-level questions, and a high load of science terminology. The textbooks failed to provide sufficient empirical evidence to be considered gender equitable and inquiry-based. The quantitative approach employed for evaluation contrasts with a more interpretive approach and has the potential to depict textbook profiles in a more reliable way, complementing the commonly employed qualitative procedures. Implications suggest that further work in this line is needed on calibrating the analysis procedures with science textbooks used in different international settings. The procedures could be modified and improved to meet specific evaluation needs. In the Turkish context, a next step for research may concern the analysis of science textbooks being rewritten for the reform-based curricula, to make cross-comparisons and evaluate a possible progression.
Development of Officer Selection Battery Forms 3 and 4
1986-03-01
the development, standardization, and validation of two parallel forms of a test to be used for assessing young men and women applying to ROTC. Fairly...appropriate difficulty, high reliability, and state-of-the-art validity and fairness for minorities and women. EDGAR M. JOHNSON, Technical Director...administrable, test for use in assessing young men and women applying to Advanced Army ROTC. Procedure: Earlier research had performed an analysis of the
NASA Technical Reports Server (NTRS)
1982-01-01
The integrated application of active controls (IAAC) technology to an advanced subsonic transport is reported. Supplementary technical data on the following topics are included: (1) 1990's avionics technology assessment; (2) function criticality assessment; (3) flight deck system for total control and functional features list; (4) criticality and reliability assessment of units; (5) crew procedural function task analysis; and (6) recommendations for simulation mechanization.
NASA Technical Reports Server (NTRS)
Leveson, Nancy
1987-01-01
Software safety and its relationship to other qualities are discussed. It is shown that standard reliability and fault-tolerance techniques will not solve the safety problem for the present. A new attitude is required: looking at what you do NOT want software to do along with what you want it to do, and assuming things will go wrong. New procedures and changes to the entire software development process are necessary: special software safety analysis techniques are needed, and design techniques, especially eliminating complexity, can be very helpful.
Processes and Procedures for Estimating Score Reliability and Precision
ERIC Educational Resources Information Center
Bardhoshi, Gerta; Erford, Bradley T.
2017-01-01
Precision is a key facet of test development, with score reliability determined primarily according to the types of error one wants to approximate and demonstrate. This article identifies and discusses several primary forms of reliability estimation: internal consistency (i.e., split-half, KR-20, coefficient alpha), test-retest, alternate forms, interscorer, and…
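Two of the internal-consistency estimates named above can be sketched as follows; the item scores are simulated, so the exact values are illustrative rather than drawn from any real instrument:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha. items: (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    sum_item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - sum_item_vars / total_var)

def split_half(items: np.ndarray) -> float:
    """Odd-even split-half reliability with the Spearman-Brown correction."""
    odd = items[:, 0::2].sum(axis=1)
    even = items[:, 1::2].sum(axis=1)
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)

# simulate 200 respondents answering 6 items driven by one true score
rng = np.random.default_rng(0)
true_score = rng.normal(size=(200, 1))
scores = true_score + 0.8 * rng.normal(size=(200, 6))

print(round(cronbach_alpha(scores), 2))
print(round(split_half(scores), 2))
```

With this simulation both estimates come out near 0.9, consistent with six moderately noisy items measuring a single construct; KR-20 is the special case of alpha for dichotomous items.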
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeVoto, Douglas
2015-06-10
This is a technical review of the DOE VTO EDT project EDT063, Performance and Reliability of Bonded Interfaces for High-Temperature Packaging. A procedure for analyzing the reliability of sintered silver through experimental thermal cycling and crack-propagation modeling has been outlined, and results have been presented.
How to Measure the Onset of Babbling Reliably?
ERIC Educational Resources Information Center
Molemans, Inge; van den Berg, Renate; van Severen, Lieve; Gillis, Steven
2012-01-01
Various measures for identifying the onset of babbling have been proposed in the literature, but a formal definition of the exact procedure and a thorough validation of the sample size required for reliably establishing babbling onset is lacking. In this paper the reliability of five commonly used measures is assessed using a large longitudinal…
Approximation of reliabilities for multiple-trait model with maternal effects.
Strabel, T; Misztal, I; Bertrand, J K
2001-04-01
Reliabilities for a multiple-trait maternal model were obtained by combining reliabilities obtained from single-trait models. Single-trait reliabilities were obtained using an approximation that supported models with additive and permanent environmental effects. For the direct effect, the maternal and permanent environmental variances were assigned to the residual. For the maternal effect, variance of the direct effect was assigned to the residual. Data included 10,550 birth weight, 11,819 weaning weight, and 3,617 postweaning gain records of Senepol cattle. Reliabilities were obtained by generalized inversion and by using single-trait and multiple-trait approximation methods. Some reliabilities obtained by inversion were negative because inbreeding was ignored in calculating the inverse of the relationship matrix. The multiple-trait approximation method reduced the bias of approximation when compared with the single-trait method. The correlations between reliabilities obtained by inversion and by multiple-trait procedures for the direct effect were 0.85 for birth weight, 0.94 for weaning weight, and 0.96 for postweaning gain. Correlations for maternal effects for birth weight and weaning weight were 0.96 to 0.98 for both approximations. Further improvements can be achieved by refining the single-trait procedures.
Mandillo, Silvia; Tucci, Valter; Hölter, Sabine M.; Meziane, Hamid; Banchaabouchi, Mumna Al; Kallnik, Magdalena; Lad, Heena V.; Nolan, Patrick M.; Ouagazzal, Abdel-Mouttalib; Coghill, Emma L.; Gale, Karin; Golini, Elisabetta; Jacquot, Sylvie; Krezel, Wojtek; Parker, Andy; Riet, Fabrice; Schneider, Ilka; Marazziti, Daniela; Auwerx, Johan; Brown, Steve D. M.; Chambon, Pierre; Rosenthal, Nadia; Tocchini-Valentini, Glauco; Wurst, Wolfgang
2008-01-01
Establishing standard operating procedures (SOPs) as tools for the analysis of behavioral phenotypes is fundamental to mouse functional genomics. It is essential that the tests designed provide reliable measures of the process under investigation but, most importantly, that these are reproducible across both time and laboratories. For this reason, we devised and tested a set of SOPs to investigate mouse behavior. Five research centers were involved across France, Germany, Italy, and the UK in this study, as part of the EUMORPHIA program. All the procedures underwent a cross-validation experimental study to investigate the robustness of the designed protocols. Four inbred reference strains (C57BL/6J, C3HeB/FeJ, BALB/cByJ, 129S2/SvPas), reflecting their use as common background strains in mutagenesis programs, were analyzed to validate these tests. We demonstrate that the operating procedures employed, which include open field, SHIRPA, grip-strength, rotarod, Y-maze, prepulse inhibition of acoustic startle response, and tail flick tests, generated reproducible results between laboratories for a number of the test output parameters. However, we also identified several uncontrolled variables that constitute confounding factors in behavioral phenotyping. The EUMORPHIA SOPs described here are an important starting point for the ongoing development of increasingly robust phenotyping platforms and their application in large-scale, multicentre mouse phenotyping programs. PMID:18505770
Solving the problem of comparing whole bacterial genomes across different sequencing platforms.
Kaas, Rolf S; Leekitcharoenphon, Pimlapas; Aarestrup, Frank M; Lund, Ole
2014-01-01
Whole genome sequencing (WGS) shows great potential for real-time monitoring and identification of infectious disease outbreaks. However, rapid and reliable comparison of data generated in multiple laboratories and using multiple technologies is essential. So far, studies have focused on using one technology, because each technology has a systematic bias that makes integration of data generated from different platforms difficult. We developed two different procedures for identifying variable sites and inferring phylogenies in WGS data across multiple platforms. The methods were evaluated on three bacterial data sets sequenced on three different platforms (Illumina, 454, Ion Torrent). We show that the methods are able to overcome the systematic biases caused by the sequencers and infer the expected phylogenies. It is concluded that the success of these new procedures is due to the validation of all informative sites included in the analysis. The procedures are available as web tools.
NASA Astrophysics Data System (ADS)
Saleh, Joseph Homer; Geng, Fan; Ku, Michelle; Walker, Mitchell L. R.
2017-10-01
With a few hundred spacecraft launched to date with electric propulsion (EP), it is possible to conduct an epidemiological study of EP's on orbit reliability. The first objective of the present work was to undertake such a study and analyze EP's track record of on orbit anomalies and failures by different covariates. The second objective was to provide a comparative analysis of EP's failure rates with those of chemical propulsion. Satellite operators, manufacturers, and insurers will make reliability- and risk-informed decisions regarding the adoption and promotion of EP on board spacecraft. This work provides evidence-based support for such decisions. After a thorough data collection, 162 EP-equipped satellites launched between January 1997 and December 2015 were included in our dataset for analysis. Several statistical analyses were conducted, at the aggregate level and then with the data stratified by severity of the anomaly, by orbit type, and by EP technology. Mean Time To Anomaly (MTTA) and the distribution of the time to (minor/major) anomaly were investigated, as well as anomaly rates. The important findings in this work include the following: (1) Post-2005, EP's reliability has outperformed that of chemical propulsion; (2) Hall thrusters have robustly outperformed chemical propulsion, and they maintain a small but shrinking reliability advantage over gridded ion engines. Other results were also provided, for example the differentials in MTTA of minor and major anomalies for gridded ion engines and Hall thrusters. 
It was shown that: (3) Hall thrusters exhibit minor anomalies very early on orbit, which might be indicative of infant anomalies, and thus would benefit from better ground testing and acceptance procedures; (4) Strong evidence exists that EP anomalies (onset and likelihood) and orbit type are dependent, a dependence likely mediated by either the space environment or differences in thruster duty cycles; (5) Gridded ion thrusters exhibit both infant and wear-out failures, and thus would benefit from a reliability growth program that addresses both types of problems.
Bizzarri, Anna Rita; Cannistraro, Salvatore
2014-08-22
Atomic force spectroscopy is able to extract kinetic and thermodynamic parameters of biomolecular complexes provided that the registered unbinding force curves can be reliably attributed to the rupture of the specific complex interactions. To this aim, a commonly used strategy is based on the analysis of the stretching features of polymeric linkers which are suitably introduced in the biomolecule-substrate immobilization procedure. Alternatively, we present a method to select force curves corresponding to specific biorecognition events, which relies on a careful analysis of the force fluctuations of the biomolecule-functionalized cantilever tip during its approach to the partner molecules immobilized on a substrate. In the low frequency region, a characteristic 1/f^α noise with α equal to one (flickering noise) is found to replace white noise in the cantilever fluctuation power spectrum when, and only when, a specific biorecognition process between the partners occurs. The method, which has been validated on a well-characterized antigen-antibody complex, represents a fast, yet reliable alternative to the use of linkers, which may involve additional surface chemistry and reproducibility concerns.
Van Spall, Harriette; Kassam, Alisha; Tollefson, Travis T
2015-08-01
Near-miss investigations in high reliability organizations (HROs) aim to mitigate risk and improve system safety. Healthcare settings have a higher rate of near-misses and subsequent adverse events than most high-risk industries, but near-misses are not systematically reported or analyzed. In this review, we will describe the strategies for near-miss analysis that have facilitated a culture of safety and continuous quality improvement in HROs. Near-miss analysis is routine and systematic in HROs such as aviation. Strategies implemented in aviation include the Commercial Aviation Safety Team, which undertakes systematic analyses of near-misses, so that findings can be incorporated into Standard Operating Procedures (SOPs). Other strategies resulting from incident analyses include Crew Resource Management (CRM) for enhanced communication, situational awareness training, adoption of checklists during operations, and built-in redundancy within systems. Health care organizations should consider near-misses as opportunities for quality improvement. The systematic reporting and analysis of near-misses, commonplace in HROs, can be adapted to health care settings to prevent adverse events and improve clinical outcomes.
Villanger, Gro Dehli; Learner, Emily; Longnecker, Matthew P; Ask, Helga; Aase, Heidi; Zoeller, R Thomas; Knudsen, Gun P; Reichborn-Kjennerud, Ted; Zeiner, Pål; Engel, Stephanie M
2017-05-01
Maternal thyroid function is a critical mediator of fetal brain development. Pregnancy-related physiologic changes and handling conditions of blood samples may influence thyroid hormone biomarkers. We investigated the reliability of thyroid hormone biomarkers in plasma of pregnant women under various handling conditions. We enrolled 17 pregnant women; collected serum and plasma were immediately frozen. Additional plasma aliquots were subjected to different handling conditions before the analysis of thyroid biomarkers: storage at room temperature for 24 or 48 hours before freezing, and an extra freeze-thaw cycle. We estimated free thyroid hormone indices in plasma based on T3 uptake. High correlations between plasma and serum (>0.94) and intraclass correlation coefficients across plasma handling conditions (0.96 to 1.00) indicated excellent reliability for all thyroid hormone biomarkers. Delayed freezing and freeze-thaw cycles did not affect the reliability of biomarkers of thyroid function in plasma during pregnancy. See video abstract at http://links.lww.com/EDE/B180.
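An intraclass correlation coefficient of the kind reported above can be computed, for example, as a two-way random-effects, absolute-agreement, single-measure ICC(2,1) in the Shrout-Fleiss convention; this is a generic sketch of that formula, not the authors' actual analysis code:

```python
import numpy as np

def icc2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    ratings: (n_subjects, k_conditions_or_raters) matrix."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater/condition means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    ms_r = ss_rows / (n - 1)                 # between-subjects mean square
    ms_c = ss_cols / (k - 1)                 # between-raters mean square
    ms_e = ss_err / ((n - 1) * (k - 1))      # residual mean square
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# classic Shrout & Fleiss (1979) example: 6 targets rated by 4 judges
ratings = np.array([[9., 2., 5., 8.],
                    [6., 1., 3., 2.],
                    [8., 4., 6., 8.],
                    [7., 1., 2., 6.],
                    [10., 5., 6., 9.],
                    [6., 2., 4., 7.]])
print(round(icc2_1(ratings), 2))  # → 0.29
```

Values near 1, as in the abstract above, indicate that almost all variance is between subjects rather than between handling conditions.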
Technical analysis of the Slosson Written Expression Test.
Erford, Bradley T; Hofler, Donald B
2004-06-01
The Slosson Written Expression Test was designed to assess students ages 8-17 years at risk for difficulties in written expression. Scores from three independent samples were used to evaluate the test's reliability and validity for measuring students' written expression. Test-retest reliability of the SWET subscales ranged from .80 to .94 (n = 151), and .95 for the Written Expression Total Standard Scores. The median alternate-form reliability for students' Written Expression Total Standard Scores was .81 across the three forms. Scores on the Slosson test yielded concurrent validity coefficients (n = 143) of .60 with scores from the Woodcock-Johnson: Tests of Achievement-Third Edition Broad Written Language Domain and .49 with scores on the Test of Written Language-Third Edition Spontaneous Writing Quotient. Exploratory factor analytic procedures suggested the Slosson test is comprised of two dimensions, Writing Mechanics and Writing Maturity (47.1% and 20.1% variance accounted for, respectively). In general, the Slosson Written Expression Test presents with sufficient technical characteristics to be considered a useful written expression screening test.
A psychometric evaluation of the Pediatric Anesthesia Emergence Delirium scale.
Ringblom, Jenny; Wåhlin, Ingrid; Proczkowska, Marie
2018-04-01
Emergence delirium and emergence agitation have been a subject of interest since the early 1960s. This behavior has been associated with increased risk of injury in children and dissatisfaction with anesthesia care in their parents. The Pediatric Anesthesia Emergence Delirium scale is a commonly used instrument for codifying and recording this behavior. The aim of this study was to psychometrically evaluate the Pediatric Anesthesia Emergence Delirium scale, focusing on the factor structure, in a sample of children recovering from anesthesia after surgery or diagnostic procedures. The reliability of the Pediatric Anesthesia Emergence Delirium scale was also tested. One hundred and twenty-two children younger than seven years were observed at postoperative care units during recovery from anesthesia. Two or three observers independently assessed the children using the Pediatric Anesthesia Emergence Delirium scale. The factor analysis clearly revealed a one-factor solution, which accounted for 82% of the variation in the data. Internal consistency, calculated with Cronbach's alpha, was good (0.96). The intraclass correlation coefficient, which was used to assess interrater reliability for the Pediatric Anesthesia Emergence Delirium scale sum score, was 0.97 (P < .001). The weighted kappa statistics were almost perfect in four of five items, with substantial agreement in the fifth (P < .001). The one-factor solution and the satisfactory reliability in terms of internal consistency and stability support the use of the Pediatric Anesthesia Emergence Delirium scale for assessing emergence delirium in children recovering from anesthesia after surgery or diagnostic procedures. The kappa statistics for the Pediatric Anesthesia Emergence Delirium scale items essentially indicated good agreement between independent raters, supporting interrater reliability. © 2018 John Wiley & Sons Ltd.
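The item-level weighted kappa statistic used above can be sketched as a generic Cohen's weighted kappa for two raters; the ratings below are hypothetical ordinal scores invented for illustration, not the study's data:

```python
import numpy as np

def weighted_kappa(r1, r2, n_cat, weights="linear"):
    """Cohen's weighted kappa for two raters scoring the same items.
    r1, r2: integer category codes in 0..n_cat-1."""
    obs = np.zeros((n_cat, n_cat))
    for a, b in zip(r1, r2):
        obs[a, b] += 1
    obs /= obs.sum()                      # observed joint proportions
    # expected proportions under independent rater marginals
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))
    i, j = np.indices((n_cat, n_cat))
    d = np.abs(i - j) / (n_cat - 1)       # normalized category distance
    w = d if weights == "linear" else d ** 2
    return 1 - (w * obs).sum() / (w * exp).sum()

# hypothetical 0-4 item scores for 10 children from two observers
rater1 = [0, 1, 2, 4, 3, 0, 1, 2, 3, 4]
rater2 = [0, 1, 2, 4, 3, 0, 2, 2, 3, 4]
print(round(weighted_kappa(rater1, rater2, 5), 2))  # → 0.94
```

Because disagreements are weighted by their distance, an off-by-one rating is penalized far less than an extreme disagreement, which suits ordinal behavioral items like those of the scale discussed above.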
Development of the Assessment of Belief Conflict in Relationship-14 (ABCR-14).
Kyougoku, Makoto; Teraoka, Mutsumi; Masuda, Noriko; Ooura, Mariko; Abe, Yasushi
2015-01-01
Nurses and other healthcare workers frequently experience belief conflict, one of the most important new stress-related problems in both academic and clinical fields. In this study, using a sample of 1,683 nursing practitioners, we developed the Assessment of Belief Conflict in Relationship-14 (ABCR-14), a new scale that assesses belief conflict in the healthcare field. Standard psychometric procedures were used to develop and test the scale, including qualitative framework-concept and item-pool development, item reduction, and scale development. We analyzed the psychometric properties of the ABCR-14 according to entropy, polyserial correlation coefficients, exploratory factor analysis, confirmatory factor analysis, average variance extracted, Cronbach's alpha, Pearson product-moment correlation coefficients, and multidimensional item response theory (MIRT). The results of the analysis supported a three-factor model consisting of 14 items. The validity and reliability of the ABCR-14 were supported by evidence of high construct validity, structural validity, hypothesis testing, internal consistency reliability, and concurrent validity. The results of the MIRT analysis offered strong support for good item response in terms of item slope and difficulty parameters. However, the ABCR-14 Likert scale might need to be explored further from the MIRT point of view. Yet, as mentioned above, there is sufficient evidence that the ABCR-14 has high validity and reliability. The ABCR-14 demonstrates good psychometric properties for nursing belief conflict. Further studies are recommended to confirm its application in clinical practice.
The Significance of Breakdown Voltages for Quality Assurance of Low-Voltage BME Ceramic Capacitors
NASA Technical Reports Server (NTRS)
Teverovsky, Alexander A.
2014-01-01
Application of thin-dielectric, base metal electrode (BME) ceramic capacitors in high-reliability applications requires the development of testing procedures that can assure high quality and reliability of the parts. In this work, distributions of breakdown voltages (VBR) in a variety of low-voltage BME multilayer ceramic capacitors (MLCCs) have been measured and analyzed. It has been shown that analysis of the distributions can indicate the proportion of defective parts in the lot and the significance of the defects. Variations of the distributions after solder dip testing allow for an assessment of the robustness of capacitors to soldering-related stresses. The drawbacks of the existing screening and qualification methods in revealing defects in high-value, low-voltage MLCCs and the importance of VBR measurements are discussed. Analysis has shown that, due to a larger concentration of oxygen vacancies, defect-related degradation of the insulation resistance (IR) and failures are more likely in BME than in precious metal electrode (PME) capacitors.
Page, Mark; Taylor, Jane; Blenkin, Matt
2011-07-01
Many studies regarding the legal status of forensic science have relied on the U.S. Supreme Court's mandate in Daubert v. Merrell Dow Pharmaceuticals Inc., and its progeny, in order to make subsequent recommendations or rebuttals. This paper takes a more pragmatic approach to analyzing forensic science's immediate deficiencies by considering a qualitative analysis of actual judicial reasoning where forensic identification evidence has been excluded on reliability grounds since the Daubert precedent. Reliance on general acceptance is becoming insufficient as proof of the admissibility of forensic evidence. The citation of unfounded statistics, error rates, and certainties, a failure to document the analytical process or follow standardized procedures, and the existence of observer bias represent some of the concerns that have led to the exclusion or limitation of forensic identification evidence. Analysis of these reasons may serve to refocus forensic practitioners' testimony, resources, and research toward rectifying shortfalls in these areas. © 2011 American Academy of Forensic Sciences.
Interval Estimation of Revision Effect on Scale Reliability via Covariance Structure Modeling
ERIC Educational Resources Information Center
Raykov, Tenko
2009-01-01
A didactic discussion of a procedure for interval estimation of change in scale reliability due to revision is provided, which is developed within the framework of covariance structure modeling. The method yields ranges of plausible values for the population gain or loss in reliability of unidimensional composites, which results from deletion or…
Analysis of field usage failure rate data for plastic encapsulated solid state devices
NASA Technical Reports Server (NTRS)
1981-01-01
Survey and questionnaire techniques were used to gather data from users and manufacturers on the failure rates in the field of plastic encapsulated semiconductors. It was found that such solid state devices are being successfully used by commercial companies which impose certain screening and qualification procedures. The reliability of these semiconductors is now adequate to support their consideration in NASA systems, particularly in low cost systems. The cost of performing necessary screening for NASA applications was assessed.
[Automated procedure for volumetric measurement of metastases: estimation of tumor burden].
Fabel, M; Bolte, H
2008-09-01
Cancer is a common and increasing disease worldwide. Therapy monitoring in oncologic patient care requires accurate and reliable measurement methods for evaluation of the tumor burden. RECIST (response evaluation criteria in solid tumors) and WHO criteria are still the current standards for therapy response evaluation, with inherent disadvantages due to considerable interobserver variation in the manual diameter estimations. Volumetric analysis of, e.g., lung, liver, and lymph node metastases promises to be a more accurate, precise, and objective method for tumor burden estimation.
Reliability and Maintainability Analysis of Fluidic Back-Up Flight Control System and Components.
1981-09-01
[OCR fragment from report NADC-80227-60; only partial text is recoverable.] Review of FMEA worksheets indicates that the standard hydraulic components of the servoactuator will [text illegible] achieved. Procedures for conducting the FMEA and evaluating the severity of each failure mode are included as Appendix A.
Biomechanical analysis using Kinovea for sports application
NASA Astrophysics Data System (ADS)
Muaza Nor Adnan, Nor; Patar, Mohd Nor Azmi Ab; Lee, Hokyoo; Yamamoto, Shin-Ichiroh; Jong-Young, Lee; Mahmud, Jamaluddin
2018-04-01
This paper assesses the reliability of HD VideoCam–Kinovea as an alternative tool for conducting motion analysis and measuring the knee relative angle during a drop jump movement. The motion capture and analysis procedure was conducted in the Biomechanics Lab, Shibaura Institute of Technology, Omiya Campus, Japan. A healthy subject without any gait disorder (BMI of 28.60 ± 1.40) was recruited. The volunteer was asked to perform the drop jump movement on a preset platform, and the motion was simultaneously recorded, in the sagittal plane only, using an established infrared motion capture system (Hawk–Cortex) and an HD VideoCam. The capture was repeated five times. The outputs (video recordings) from the HD VideoCam were input into Kinovea (an open-source software package), and the drop jump pattern was tracked and analysed. These data were compared with the drop jump pattern tracked and analysed earlier using the Hawk–Cortex system. In general, the results (drop jump pattern) obtained using the HD VideoCam–Kinovea are close to those obtained using the established motion capture system. Basic statistical analyses show that most average variances are less than 10%, demonstrating the repeatability of the protocol and the reliability of the results. It can be concluded that the HD VideoCam–Kinovea integration has the potential to become a reliable motion capture and analysis system; moreover, it is low cost, portable, and easy to use. The current study and its findings contribute useful knowledge pertaining to motion capture and analysis, the drop jump movement, and HD VideoCam–Kinovea integration.
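The knee relative angle that both systems measure can be recovered from three sagittal-plane marker positions. A minimal sketch, with hypothetical hip/knee/ankle coordinates rather than the study's actual marker set:

```python
import numpy as np

def joint_angle(proximal, joint, distal):
    """Included (relative) angle at `joint`, in degrees, from 2D marker
    coordinates in the sagittal plane. Markers are hypothetical examples."""
    u = np.asarray(proximal, dtype=float) - np.asarray(joint, dtype=float)
    v = np.asarray(distal, dtype=float) - np.asarray(joint, dtype=float)
    cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # clip guards against tiny floating-point overshoot outside [-1, 1]
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

# Fully extended leg: hip, knee, ankle collinear -> 180 degrees
print(joint_angle((0.0, 1.0), (0.0, 0.5), (0.0, 0.0)))  # → 180.0
```

Applying this frame by frame to tracked marker trajectories yields the angle-time curve that the two systems' outputs are compared on.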
RELIABILITY OF THE ONE REPETITION-MAXIMUM POWER CLEAN TEST IN ADOLESCENT ATHLETES
Faigenbaum, Avery D.; McFarland, James E.; Herman, Robert; Naclerio, Fernando; Ratamess, Nicholas A.; Kang, Jie; Myer, Gregory D.
2013-01-01
Although the power clean test is routinely used to assess strength and power performance in adult athletes, the reliability of this measure in younger populations has not been examined. Therefore, the purpose of this study was to determine the reliability of the one repetition maximum (1 RM) power clean in adolescent athletes. Thirty-six male athletes (age 15.9 ± 1.1 yrs, body mass 79.1 ± 20.3 kg, height 175.1 ± 7.4 cm) who had more than 1 year of training experience with weightlifting exercises performed a 1 RM power clean on two nonconsecutive days in the afternoon following standardized procedures. All test procedures were supervised by a senior level weightlifting coach and consisted of a systematic progression in test load until the maximum resistance that could be lifted for one repetition using proper exercise technique was determined. Data were analyzed using an intraclass correlation coefficient (ICC [2,k]), Pearson correlation coefficient (r), repeated measures ANOVA, Bland-Altman plot, and typical error analyses. Analysis of the data revealed that the test measures were highly reliable, demonstrating a test-retest ICC of 0.98 (95% CI = 0.96–0.99). Testing also demonstrated a strong relationship between 1 RM measures on trial 1 and trial 2 (r=0.98, p<0.0001) with no significant difference in power clean performance between trials (70.6 ± 19.8 vs. 69.8 ± 19.8 kg). Bland-Altman plots confirmed no systematic shift in 1 RM between trial 1 and trial 2. The typical error to be expected between 1 RM power clean trials is 2.9 kg, and a change of at least 8.0 kg is required to indicate a real change in lifting performance between tests in young lifters. No injuries occurred during the study period and the testing protocol was well-tolerated by all subjects.
These findings indicate that 1 RM power clean testing has a high degree of reproducibility in trained male adolescent athletes when standardized testing procedures are followed and qualified instruction is present. PMID:22233786
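The test-retest statistics reported above (an ICC(2,k) and the typical error) can be computed directly from raw trial data. A minimal sketch under the standard two-way random-effects ANOVA definitions, not the authors' actual analysis code:

```python
import numpy as np

def icc_2k(data):
    """ICC(2,k): two-way random effects, absolute agreement, average of
    k trials. `data` is an (n_subjects, k_trials) array of scores."""
    x = np.asarray(data, dtype=float)
    n, k = x.shape
    grand = x.mean()
    msr = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)  # subjects
    msc = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)  # trials
    sse = np.sum((x - grand) ** 2) - msr * (n - 1) - msc * (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (msc - mse) / n)

def typical_error(trial1, trial2):
    """Typical error of measurement: SD of difference scores / sqrt(2)."""
    d = np.asarray(trial1, dtype=float) - np.asarray(trial2, dtype=float)
    return d.std(ddof=1) / np.sqrt(2)
```

For example, `icc_2k` over the n x 2 matrix of trial-1 and trial-2 loads gives the 0.98 figure, and multiplying the typical error by roughly 2.77 (the "smallest real difference" convention) yields a change threshold like the 8.0 kg reported.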
Malaei, Reyhane; Ramezani, Amir M; Absalan, Ghodratollah
2018-05-04
A sensitive and reliable ultrasound-assisted dispersive liquid-liquid microextraction (UA-DLLME) procedure was developed and validated for the extraction and analysis of malondialdehyde (MDA), an important lipid-peroxidation biomarker, in human plasma. To obtain an applicable extraction procedure, the entire optimization process was performed in human plasma. To convert MDA into a readily extractable species, it was derivatized to its hydrazone with 2,4-dinitrophenylhydrazine (DNPH) at 40 °C within 60 min. The influences of experimental variables on the extraction process, including the type and volume of the extraction and disperser solvents, the amount of derivatization agent, temperature, pH, ionic strength, and sonication and centrifugation times, were evaluated. Under the optimal experimental conditions, the enhancement factor and extraction recovery were 79.8 and 95.8%, respectively. The analytical signal responded linearly (R² = 0.9988) over a concentration range of 5.00-4000 ng mL⁻¹ with a limit of detection of 0.75 ng mL⁻¹ (S/N = 3) in the plasma sample. To validate the developed procedure, the Food and Drug Administration's recommended guidelines for bioanalytical method validation were employed. Copyright © 2018. Published by Elsevier B.V.
A quick and reliable procedure for assessing foot alignment in athletes.
De Michelis Mendonça, Luciana; Bittencourt, Natália Franco Netto; Amaral, Giovanna Mendes; Diniz, Lívia Santos; Souza, Thales Rezende; da Fonseca, Sérgio Teixeira
2013-01-01
Quick procedures with proper psychometric properties that can capture the combined alignment of the foot-ankle complex in a position that may be more representative of the status of the lower limb during ground contact are essential for assessing a large group of athletes. The assessed lower limb was positioned with the calcaneus surface facing upward in a way that all of the marks could be seen at the center of the camera display. After guaranteeing maintenance of the foot at 90° of dorsiflexion actively sustained by the athlete, the examiner took the picture of the foot-ankle alignment. Intraclass correlation coefficients ranging from 0.82 to 0.93 demonstrated excellent intratester and intertester reliability for the proposed measurements of forefoot, rearfoot, and shank-forefoot alignments. The intraclass correlation coefficient between the shank-forefoot measures and the sum of the rearfoot and forefoot measures was 0.98, suggesting that the shank-forefoot alignment measures can represent the combined rearfoot and forefoot alignments. This study describes a reliable and practical measurement procedure for rearfoot, forefoot, and shank-forefoot alignments that can be applied to clinical and research situations as a screening procedure for risk factors for lower-limb injuries in athletes.
Self-motion perception: assessment by computer-generated animations
NASA Technical Reports Server (NTRS)
Parker, D. E.; Harm, D. L.; Sandoz, G. R.; Skinner, N. C.
1998-01-01
The goal of this research is more precise description of adaptation to sensory rearrangements, including microgravity, by development of improved procedures for assessing spatial orientation perception. Thirty-six subjects reported perceived self-motion following exposure to complex inertial-visual motion. Twelve subjects were assigned to each of 3 perceptual reporting procedures: (a) animation movie selection, (b) written report selection and (c) verbal report generation. The question addressed was: do reports produced by these procedures differ with respect to complexity and reliability? Following repeated (within-day and across-day) exposures to 4 different "motion profiles," subjects either (a) selected movies presented on a laptop computer, or (b) selected written descriptions from a booklet, or (c) generated self-motion verbal descriptions that corresponded most closely with their motion experience. One "complexity" score and two reliability scores were calculated. Contrary to expectations, reliability and complexity scores were essentially equivalent for the animation movie selection and written report selection procedures. Verbal report generation subjects exhibited less complexity than did subjects in the other conditions and their reports were often ambiguous. The results suggest that, when selecting from carefully written descriptions and following appropriate training, people may be better able to describe their self-motion experience with words than is usually believed.
An Assessment of the Cloze Procedure as an Advertising Copy Test.
ERIC Educational Resources Information Center
Zinkhan, George; Blair, Edward
1984-01-01
Discusses the effectiveness and reliability of the cloze procedure and its usefulness as an advertising copy-testing technique. Concludes that it can distinguish more memorable from less memorable messages. (FL)
Markin, Abraham; Barbero, Roxana; Leow, Jeffrey J; Groen, Reinou S; Perlman, Greg; Habermann, Elizabeth B; Apelgren, Keith N; Kushner, Adam L; Nwomeh, Benedict C
2014-09-01
In response to the need for simple, rapid means of quantifying surgical capacity in low resource settings, Surgeons OverSeas (SOS) developed the personnel, infrastructure, procedures, equipment and supplies (PIPES) tool. The present investigation assessed the inter-rater reliability of the PIPES tool. As part of a government assessment of surgical services in Santa Cruz, Bolivia, the PIPES tool was translated into Spanish and applied in interviews with physicians at 31 public hospitals. An additional interview was conducted with nurses at a convenience sample of 25 of these hospitals. Physician and nurse responses were then compared to generate an estimate of reliability. For dichotomous survey items, inter-rater reliability between physicians and nurses was assessed using Cohen's kappa statistic and percent agreement. The Pearson correlation coefficient was used to assess agreement for continuous items. Cohen's kappa was 0.46 for the infrastructure section, 0.43 for procedures, 0.26 for equipment, and 0 for supplies. The median correlation coefficient was 0.91 for continuous items. Correlation was 0.79 for the PIPES index, and ranged from 0.32 to 0.98 for continuous response items. Reliability of the PIPES tool was moderate for the infrastructure and procedures sections, fair for the equipment section, and poor for the supplies section when comparing physicians' responses to nurses' responses-an extremely rigorous test of reliability. These results indicate that the PIPES tool is an effective measure of surgical capacity but that the equipment and supplies sections may need to be revised.
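For dichotomous items like those in the PIPES tool, Cohen's kappa corrects raw percent agreement for the agreement two raters would reach by chance. A minimal sketch with illustrative ratings (not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same categorical items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    # chance agreement from each rater's marginal category frequencies
    p_e = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

A kappa of 0 (as reported for the supplies section) means the physicians and nurses agreed no more often than chance, even if their raw percent agreement looked respectable.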
Some ideas and opportunities concerning three-dimensional wind-tunnel wall corrections
NASA Technical Reports Server (NTRS)
Rubbert, P. E.
1982-01-01
Opportunities for improving the accuracy and reliability of wall corrections in conventional ventilated test sections are presented. The approach encompasses state-of-the-art technology in transonic computational methods combined with the measurement of tunnel-wall pressures. The objective is to arrive at correction procedures of known, verifiable accuracy that are practical within a production testing environment. It is concluded that: accurate and reliable correction procedures can be developed for cruise-type aerodynamic testing for any wall configuration; passive walls can be optimized for minimal interference for cruise-type aerodynamic testing (tailored slots, variable open area ratio, etc.); monitoring and assessment of noncorrectable interference (buoyancy and curvature in a transonic stream) can be an integral part of a correction procedure; and reasonably good correction procedures can probably be developed for complex flows involving extensive separation and other unpredictable phenomena.
Structural Reliability Analysis and Optimization: Use of Approximations
NASA Technical Reports Server (NTRS)
Grandhi, Ramana V.; Wang, Liping
1999-01-01
This report is intended for the demonstration of function approximation concepts and their applicability in reliability analysis and design. Particularly, approximations in the calculation of the safety index, failure probability and structural optimization (modification of design variables) are developed. With this scope in mind, extensive details on probability theory are avoided. Definitions relevant to the stated objectives have been taken from standard text books. The idea of function approximations is to minimize the repetitive use of computationally intensive calculations by replacing them with simpler closed-form equations, which could be nonlinear. Typically, the approximations provide good accuracy around the points where they are constructed, and they need to be periodically updated to extend their utility. There are approximations in calculating the failure probability of a limit state function. The first one, which is most commonly discussed, is how the limit state is approximated at the design point. Most of the time this could be a first-order Taylor series expansion, also known as the First Order Reliability Method (FORM), or a second-order Taylor series expansion (paraboloid), also known as the Second Order Reliability Method (SORM). From the computational procedure point of view, this step comes after the design point identification; however, the order of approximation for the probability of failure calculation is discussed first, and it is denoted by either FORM or SORM. The other approximation of interest is how the design point, or the most probable failure point (MPP), is identified. For iteratively finding this point, again the limit state is approximated. 
The accuracy and efficiency of the approximations make the search process quite practical for analysis-intensive approaches such as finite element methods; therefore, the crux of this research is to develop excellent approximations for MPP identification and also different approximations, including the higher-order reliability methods (HORM), for representing the failure surface. This report is divided into several parts to emphasize different segments of structural reliability analysis and design. Broadly, it consists of mathematical foundations, methods and applications. Chapter 1 discusses the fundamental definitions of probability theory, which are mostly available in standard textbooks. Probability density function descriptions relevant to this work are addressed. In Chapter 2, the concept and utility of function approximation are discussed for general application in engineering analysis. Various forms of function representations and the latest developments in nonlinear adaptive approximations are presented with comparison studies. Research work accomplished in reliability analysis is presented in Chapter 3. First, the definitions of the safety index and the most probable point of failure are introduced. Efficient ways of computing the safety index with fewer iterations are emphasized. In Chapter 4, the prediction of the probability of failure is presented using first-order, second-order and higher-order methods. System reliability methods are discussed in Chapter 5. Chapter 6 presents optimization techniques for the modification and redistribution of structural sizes to improve structural reliability. The report also contains several appendices on probability parameters.
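For the special case of a linear limit state with independent normal variables, the safety index and failure probability described above reduce to closed form, with no iterative MPP search needed. A minimal FORM sketch with hypothetical resistance/load inputs (nonlinear limit states require the iterative approximation the report develops):

```python
import math

def form_linear(a0, coeffs, means, stds):
    """FORM for a linear limit state g(X) = a0 + sum(a_i * X_i) with
    independent normal X_i: returns the safety index beta and the
    failure probability Pf = Phi(-beta)."""
    mu_g = a0 + sum(a * m for a, m in zip(coeffs, means))
    sigma_g = math.sqrt(sum((a * s) ** 2 for a, s in zip(coeffs, stds)))
    beta = mu_g / sigma_g
    pf = 0.5 * math.erfc(beta / math.sqrt(2))  # standard normal Phi(-beta)
    return beta, pf

# Hypothetical resistance R ~ N(3, 0.3) vs. load S ~ N(1, 0.4), g = R - S
beta, pf = form_linear(0.0, [1.0, -1.0], [3.0, 1.0], [0.3, 0.4])
```

Here beta = 2 / 0.5 = 4, so failure (g < 0) is roughly a four-sigma event. For a nonlinear g, FORM linearizes at the MPP and SORM adds a paraboloid correction, as the report explains.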
Who is thought to be a "reliable dentist"? - Lithuanian dentists' opinion.
Puriene, Alina; Balciuniene, Irena; Drobnys, Povilas
2008-01-01
To find out which attributes, according to Lithuanian dentists, are the most important for a "reliable dentist", all 140 participants of a republican dentists' conference were given a questionnaire. The response rate was 64.3%. Answers about the importance of dentists' attributes were given on a 5-point Likert scale, and statistical analysis using the chi2 criterion was carried out. The importance of behaviour during painful and unpleasant procedures, painless treatment, and the ability to control stressful situations was emphasized by 87%, 83% and 76% of respondents. In addition, qualification, communication skills, the ability to answer patients' questions clearly, and respect for patient confidentiality were accentuated by 78%, 82%, 84% and 74% of dentists, respectively. Although gender was not an essential quality for 78% of respondents, 62% reported that a dentist's age was very important. Respondents over 30 emphasized the value of erudition (chi2=0.464; p<0.01), punctuality (chi2=25.467; p=0.001), specialization (chi2=15.808; p<0.05), and low treatment cost (chi2=17.393; p<0.05) more than their younger colleagues. No need to wait for a dentist's appointment was appreciated more by respondents with over 30 years of work experience (chi2=20.601; p<0.05). Most Lithuanian dentists emphasized the importance of pain management, painless treatment, behaviour during painful and unpleasant procedures, communication skills, and the ability to answer patients' questions clearly, all of which are vital for a "reliable dentist".
Willmes, K
1985-08-01
Methods for the analysis of a single subject's test profile(s) proposed by Huber (1973) are applied to the Aachen Aphasia Test (AAT). The procedures are based on the classical test theory model (Lord & Novick, 1968) and are suited for any (achievement) test with standard norms from a large standardization sample and satisfactory reliability estimates. Two test profiles of a Wernicke's aphasic, obtained before and after a 3-month period of speech therapy, are analyzed using inferential comparisons between (groups of) subtest scores on one test application and between two test administrations for single (groups of) subtests. For each of these comparisons, the two aspects of (i) significant (reliable) differences in performance beyond measurement error and (ii) the diagnostic validity of that difference in the reference population of aphasic patients are assessed. Significant differences between standardized subtest scores and a remarkably better preserved reading and writing ability could be found for both test administrations using the multiple test procedure of Holm (1979). Comparison of both profiles revealed an overall increase in performance for each subtest as well as changes in level of performance relations between pairs of subtests.
Maller, S; Singleton, J; Supalla, S; Wix, T
1999-01-01
We describe the procedures for constructing an instrument designed to evaluate children's proficiency in American Sign Language (ASL). The American Sign Language Proficiency Assessment (ASL-PA) is a much-needed tool that potentially could be used by researchers, language specialists, and qualified school personnel. A half-hour ASL sample is collected on video from a target child (between ages 6 and 12) across three separate discourse settings and is later analyzed and scored by an assessor who is highly proficient in ASL. After the child's language sample is scored, he or she can be assigned an ASL proficiency rating of Level 1, 2, or 3. At this phase in its development, substantial evidence of reliability and validity has been obtained for the ASL-PA using a sample of 80 profoundly deaf children (ages 6-12) of varying ASL skill levels. The article first explains the item development and administration of the ASL-PA instrument, then describes the empirical item analysis, standard setting procedures, and evidence of reliability and validity. The ASL-PA is a promising instrument for assessing elementary school-age children's ASL proficiency. Plans for further development are also discussed.
NASA Technical Reports Server (NTRS)
Jacklin, Stephen; Schumann, Johann; Gupta, Pramod; Richard, Michael; Guenther, Kurt; Soares, Fola
2005-01-01
Adaptive control technologies that incorporate learning algorithms have been proposed to enable automatic flight control and vehicle recovery, autonomous flight, and to maintain vehicle performance in the face of unknown, changing, or poorly defined operating environments. In order for adaptive control systems to be used in safety-critical aerospace applications, they must be proven to be highly safe and reliable. Rigorous methods for adaptive software verification and validation must be developed to ensure that control system software failures will not occur. Of central importance in this regard is the need to establish reliable methods that guarantee convergent learning, rapid convergence (learning) rate, and algorithm stability. This paper presents the major problems of adaptive control systems that use learning to improve performance. The paper then presents the major procedures and tools presently developed or currently being developed to enable the verification, validation, and ultimate certification of these adaptive control systems. These technologies include the application of automated program analysis methods, techniques to improve the learning process, analytical methods to verify stability, methods to automatically synthesize code, simulation and test methods, and tools to provide on-line software assurance.
Reliability and Maintainability model (RAM) user and maintenance manual. Part 2
NASA Technical Reports Server (NTRS)
Ebeling, Charles E.
1995-01-01
This report documents the procedures for utilizing and maintaining the Reliability and Maintainability Model (RAM) developed by the University of Dayton for the NASA Langley Research Center (LaRC). The RAM model predicts reliability and maintainability (R&M) parameters for conceptual space vehicles using parametric relationships between vehicle design and performance characteristics and subsystem mean time between maintenance actions (MTBM) and manhours per maintenance action (MH/MA). These parametric relationships were developed using aircraft R&M data from over thirty different military aircraft of all types. This report describes the general methodology used within the model, the execution and computational sequence, the input screens and data, the output displays and reports, and study analyses and procedures. A source listing is provided.
The Effect of SSM Grading on Reliability When Residual Items Have No Discriminating Power.
ERIC Educational Resources Information Center
Kane, Michael T.; Moloney, James M.
Gilman and Ferry have shown that when the student's score on a multiple choice test is the total number of responses necessary to get all items correct, substantial increases in reliability can occur. In contrast, similar procedures giving partial credit on multiple choice items have resulted in relatively small gains in reliability. The analysis…
Item Reliabilities for a Family of Answer-Until-Correct (AUC) Scoring Rules.
ERIC Educational Resources Information Center
Kane, Michael T.; Moloney, James M.
The Answer-Until-Correct (AUC) procedure has been proposed in order to increase the reliability of multiple-choice items. A model for examinees' behavior when they must respond to each item until they answer it correctly is presented. An expression for the reliability of AUC items, as a function of the characteristics of the item and the scoring…
NASA Astrophysics Data System (ADS)
Parker, Gary D.
1986-03-01
Galileo's earliest telescopic measurements are of sufficient quality that their detailed analysis yields scientifically interesting and pedagogically useful results. An optical illusion strongly influences Galileo's observations of Jupiter's moons, as published in the Starry Messenger. A simple procedure identifies individual satellites with sufficient reliability to demonstrate that Galileo regularly underestimated satellite brightness and overestimated elongation when a satellite was very close to Jupiter. The probability of underestimation is a monotonically decreasing function of separation angle, both for Galileo and for viewers of a laboratory simulation of the Jupiter ``starfield'' viewed by Galileo. Analysis of Galileo's records and a simple simulation experiment appropriate to undergraduate courses clarify the scientific problems facing Galileo in interpreting his observations.
High-Performance Liquid Chromatography (HPLC)-Based Detection and Quantitation of Cellular c-di-GMP.
Petrova, Olga E; Sauer, Karin
2017-01-01
The modulation of c-di-GMP levels plays a vital role in the regulation of various processes in a wide array of bacterial species. Thus, investigation of c-di-GMP regulation requires reliable methods for the assessment of c-di-GMP levels and turnover. Reversed-phase high-performance liquid chromatography (RP-HPLC) analysis has become a commonly used approach to accomplish these goals. The following describes the extraction and HPLC-based detection and quantification of c-di-GMP from Pseudomonas aeruginosa samples, a procedure that is amenable to modifications for the analysis of c-di-GMP in other bacterial species.
Random-effects meta-analysis: the number of studies matters.
Guolo, Annamaria; Varin, Cristiano
2017-06-01
This paper investigates the impact of the number of studies on meta-analysis and meta-regression within the random-effects model framework. It is frequently neglected that inference in random-effects models requires a substantial number of studies included in meta-analysis to guarantee reliable conclusions. Several authors warn about the risk of inaccurate results of the traditional DerSimonian and Laird approach especially in the common case of meta-analysis involving a limited number of studies. This paper presents a selection of likelihood and non-likelihood methods for inference in meta-analysis proposed to overcome the limitations of the DerSimonian and Laird procedure, with a focus on the effect of the number of studies. The applicability and the performance of the methods are investigated in terms of Type I error rates and empirical power to detect effects, according to scenarios of practical interest. Simulation studies and applications to real meta-analyses highlight that it is not possible to identify an approach uniformly superior to alternatives. The overall recommendation is to avoid the DerSimonian and Laird method when the number of meta-analysis studies is modest and prefer a more comprehensive procedure that compares alternative inferential approaches. R code for meta-analysis according to all of the inferential methods examined in the paper is provided.
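The DerSimonian and Laird procedure the paper critiques is itself short: a method-of-moments estimate of the between-study variance tau², followed by inverse-variance pooling. A minimal sketch with illustrative study data (its small-k inaccuracy is exactly what motivates the alternatives the paper surveys):

```python
import math

def dersimonian_laird(effects, variances):
    """DerSimonian-Laird random-effects pooling. `effects` are per-study
    estimates, `variances` their within-study variances. Returns the
    pooled effect, its standard error, and the tau^2 estimate."""
    w = [1.0 / v for v in variances]
    k = len(effects)
    y_fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q heterogeneity statistic around the fixed-effect mean
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)  # moment estimator, truncated at 0
    w_star = [1.0 / (v + tau2) for v in variances]
    mu = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return mu, se, tau2
```

The standard error above ignores the uncertainty in tau², which is the main source of the anticonservative inference with few studies that the paper warns about.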
NASA Technical Reports Server (NTRS)
Wier, C. E.; Wobber, F. J.; Russell, O. R.; Martin, K. R. (Principal Investigator)
1973-01-01
The author has identified the following significant results. Mined land reclamation analysis procedures developed within the Indiana portion of the Illinois Coal Basin were independently tested in Ohio utilizing 1:80,000 scale enlargements of ERTS-1 image 1029-15361-7 (dated August 21, 1972). An area in Belmont County was selected for analysis due to the extensive surface mining and the different degrees of reclamation occurring in this area. Contour mining in this area provided the opportunity to extend techniques developed for analysis of relatively flat mining areas in Indiana to areas of rolling topography in Ohio. The analysts had no previous experience in the area. Field investigations largely confirmed office analysis results although in a few areas estimates of vegetation percentages were found to be too high. In one area this error approximated 25%. These results suggest that systematic ERTS-1 analysis in combination with selective field sampling can provide reliable vegetation percentage estimates in excess of 25% accuracy with minimum equipment investment and training. The utility of ERTS-1 for practical and reasonably reliable update of mined lands information for groups with budget limitations is suggested. Many states can benefit from low cost updates using ERTS-1 imagery from public sources.
18 CFR 39.5 - Reliability Standards.
Code of Federal Regulations, 2010 CFR
2010-04-01
.... 39.5 Section 39.5 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS UNDER THE FEDERAL POWER ACT RULES CONCERNING CERTIFICATION OF THE ELECTRIC RELIABILITY ORGANIZATION; AND PROCEDURES FOR THE ESTABLISHMENT, APPROVAL, AND ENFORCEMENT OF ELECTRIC...
Validation of urban freeway models. [supporting datasets
DOT National Transportation Integrated Search
2015-01-01
The goal of the SHRP 2 Project L33 Validation of Urban Freeway Models was to assess and enhance the predictive travel time reliability models developed in the SHRP 2 Project L03, Analytic Procedures for Determining the Impacts of Reliability Mitigati...
18 CFR 39.2 - Jurisdiction and applicability.
Code of Federal Regulations, 2010 CFR
2010-04-01
..., DEPARTMENT OF ENERGY REGULATIONS UNDER THE FEDERAL POWER ACT RULES CONCERNING CERTIFICATION OF THE ELECTRIC RELIABILITY ORGANIZATION; AND PROCEDURES FOR THE ESTABLISHMENT, APPROVAL, AND ENFORCEMENT OF ELECTRIC... and Hawaii), the Electric Reliability Organization, any Regional Entities, and all users, owners and...
Premixed Digestion Salts for Kjeldahl Determination of Total Nitrogen in Selected Forest Soils
B. G. Blackmon
1971-01-01
Estimates of total soil nitrogen by a standard Kjeldahl procedure and a modified procedure employing packets of premixed digestion salts were closely correlated (r² = 0.983). The modified procedure appears to be as reliable as the standard method for determining total nitrogen in southern alluvial forest soils.
Pourahmadi, Mohammad Reza; Ebrahimi Takamjani, Ismail; Jaberzadeh, Shapour; Sarrafzadeh, Javad; Sanjari, Mohammad Ali; Bagheri, Rasool; Jannati, Elham
2018-06-01
Sit-to-stand (STD) and stand-to-sit (SIT) analysis can provide information on functional independence in daily activities in patients with low back pain (LBP). However, in order for measurements to be clinically useful, data on psychometric properties should be available. The main purpose was to investigate intra-rater reliability of STD and SIT tasks in participants with and without chronic non-specific LBP (CNLBP). The second purpose was to detect any differences in lumbar spine and hips sagittal plane kinematics and coordination between asymptomatic individuals and CNLBP patients during STD and SIT. Cross-sectional study. Twenty-three CNLBP patients and 23 demographically-matched controls were recruited. Ten markers were placed on specific anatomical landmarks. Participants were asked to perform STD and SIT at a preferred speed. Peak flexion angles, mean angular velocities, lumbar to hip movement ratios, and relative phase angles were measured. The procedure was repeated after 2 h and 6-8 days. Differences between two groups were analyzed using independent t-test. Intraclass correlation coefficient (ICC 3,k), standard error of measurement (SEM), and limits of agreement (LOAs) were also estimated. The ICC values showed moderate to excellent intra-rater reliability, with relatively low SEM values (≤10.17°). The 95% LOAs demonstrated that there were no differences between the measured parameters. Furthermore, CNLBP patients had limited sagittal plane angles, smaller angular velocities, and lumbar-hip dis-coordination compared to asymptomatic participants. The results indicated moderate to excellent test-retest reliability of STD and SIT analysis. Moreover, CNLBP patients had altered kinematics during STD and its reverse. Copyright © 2017 Elsevier Ltd. All rights reserved.
An automated procedure for detection of IDP's dwellings using VHR satellite imagery
NASA Astrophysics Data System (ADS)
Jenerowicz, Malgorzata; Kemper, Thomas; Soille, Pierre
2011-11-01
This paper presents results for the estimation of dwelling structures in the Al Salam IDP camp, Southern Darfur, derived from Very High Resolution multispectral satellite images through Mathematical Morphology analysis. A series of image processing procedures, feature extraction methods, and textural analyses were applied in order to provide reliable information about dwelling structures. One issue in this context is the similarity of the spectral response of thatched dwelling roofs and their surroundings in IDP camps, which makes the exploitation of multispectral information crucial. This study shows the advantage of an automatic extraction approach and highlights the importance of detailed spatial and spectral information analysis based on a multi-temporal dataset. The additional fusion of the high-resolution panchromatic band with the lower-resolution multispectral bands of the WorldView-2 satellite has a positive influence on the results and can thereby be useful for humanitarian aid agencies, supporting decisions and population estimates, especially in situations where frequent revisits by space imaging systems are the only possibility for continued monitoring.
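As an illustration of the morphological idea, the sketch below applies a white top-hat transform, a standard Mathematical Morphology operation for extracting small bright structures, to a synthetic single-band image. The image, structure sizes, and threshold are all invented for the example and do not reproduce the paper's actual workflow or data.

```python
import numpy as np
from scipy import ndimage

# Synthetic single-band image: small bright blobs on a noisy background,
# a stand-in for dwelling-like structures in a pan-sharpened VHR band
rng = np.random.default_rng(0)
img = rng.normal(100.0, 2.0, (64, 64))
for r, c in [(15, 15), (30, 40), (50, 20)]:
    img[r - 2:r + 3, c - 2:c + 3] += 40.0   # three 5x5 bright "roofs"

# White top-hat: image minus its morphological opening. Structures smaller
# than the structuring element survive; the slowly varying background is removed.
tophat = ndimage.white_tophat(img, size=9)
mask = tophat > 20.0                         # simple global threshold
labels, n_detected = ndimage.label(mask)     # connected-component count
print(n_detected)
```

Because the structuring element (9x9) is larger than the blobs (5x5), the opening removes them and the top-hat isolates exactly the three synthetic structures.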
Use of an Objective Structured Assessment of Technical Skill After a Sports Medicine Rotation.
Dwyer, Tim; Slade Shantz, Jesse; Kulasegaram, Kulamakan Mahan; Chahal, Jaskarndip; Wasserstein, David; Schachar, Rachel; Devitt, Brian; Theodoropoulos, John; Hodges, Brian; Ogilvie-Harris, Darrell
2016-12-01
The purpose of this study was to determine if the use of an Objective Structured Assessment of Technical Skill (OSATS), using dry models, would be a valid method of assessing residents' ability to perform sports medicine procedures after training in a competency-based model. Over 18 months, 27 residents (19 junior [postgraduate year (PGY) 1-3] and 8 senior [PGY 4-5]) sat the OSATS after their rotation, in addition to 14 sports medicine staff and fellows. Each resident was provided a list of 10 procedures in which they were expected to show competence. At the end of the rotation, each resident undertook an OSATS composed of 6 stations sampled from the 10 procedures using dry models; faculty used the Arthroscopic Surgical Skill Evaluation Tool (ASSET), task-specific checklists, and an overall 5-point global rating scale (GRS) to score each resident. Each procedure was videotaped for blinded review. The overall reliability of the OSATS (0.9) and the inter-rater reliability (0.9) were both high. A significant difference by year in training was seen for the overall GRS, the total ASSET score, and the total checklist score, as well as for each technical procedure (P < .001). Further analysis revealed a significant difference in the total ASSET score between junior (mean 18.4, 95% confidence interval [CI] 16.8 to 19.9) and senior residents (24.2, 95% CI 22.7 to 25.6), between senior residents and fellows (30.1, 95% CI 28.2 to 31.9), and between fellows and faculty (37, 95% CI 36.1 to 27.8) (P < .05). The results of this study show that an OSATS using dry models shows evidence of validity when used to assess performance of technical procedures after a sports medicine rotation. However, junior residents were not able to perform as well as senior residents, suggesting that overall surgical experience is as important as intensive teaching.
As postgraduate medical training shifts to a competency-based model, methods of assessing performance of technical procedures become necessary. Copyright © 2016 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
Vertical jumping tests in volleyball: reliability, validity, and playing-position specifics.
Sattler, Tine; Sekulic, Damir; Hadzic, Vedran; Uljevic, Ognjen; Dervisevic, Edvin
2012-06-01
Vertical jumping is known to be important in volleyball, and jumping performance tests are frequently studied for their reliability and validity. However, most studies concerning jumping in volleyball have dealt with standard rather than sport-specific jumping procedures and tests. The aims of this study, therefore, were (a) to determine the reliability and factorial validity of 2 volleyball-specific jumping tests, the block jump (BJ) test and the attack jump (AJ) test, relative to 2 frequently used and systematically validated jumping tests, the countermovement jump test and the squat jump test and (b) to establish volleyball position-specific differences in the jumping tests and simple anthropometric indices (body height [BH], body weight, and body mass index [BMI]). The BJ was performed from a defensive volleyball position, with the hands positioned in front of the chest. During an AJ, the players used a 2- to 3-step approach and performed a drop jump with an arm swing followed by a quick vertical jump. A total of 95 high-level volleyball players (all men) participated in this study. The reliability of the jumping tests ranged from 0.97 to 0.99 for Cronbach's alpha coefficients, from 0.93 to 0.97 for interitem correlation coefficients and from 2.1 to 2.8 for coefficients of variation. The highest reliability was found for the specific jumping tests. The factor analysis extracted one significant component, and all of the tests were highly intercorrelated. The analysis of variance with post hoc analysis showed significant differences between 5 playing positions in some of the jumping tests. In general, receivers had a greater jumping capacity, followed by libero players. The differences in jumping capacities should be emphasized vis-a-vis differences in the anthropometric measures of players, where middle hitters had higher BH and body weight, followed by opposite hitters and receivers, with no differences in the BMI between positions.
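To make the reported reliability statistics concrete, here is a small sketch computing Cronbach's alpha and a mean within-subject coefficient of variation from a trial matrix. The jump heights are invented for illustration, not the study's data.

```python
import numpy as np

def cronbach_alpha(trials):
    """Cronbach's alpha for a (n_subjects, k_trials) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)."""
    n, k = trials.shape
    item_var = trials.var(axis=0, ddof=1).sum()
    total_var = trials.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

# Hypothetical block-jump heights (cm) for 6 players over 3 trials
bj = np.array([[52.1, 51.8, 52.5],
               [60.4, 61.0, 60.2],
               [45.9, 46.3, 45.5],
               [55.0, 54.2, 55.4],
               [49.7, 50.1, 49.4],
               [63.2, 62.8, 63.5]])
alpha = cronbach_alpha(bj)
cv = 100.0 * (bj.std(axis=1, ddof=1) / bj.mean(axis=1)).mean()  # within-subject CV%
print(round(alpha, 3), round(cv, 2))
```

Consistent trial-to-trial scores against large between-player differences produce an alpha near 1 and a CV of a few percent, the same pattern the abstract reports.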
Slootweg, Irene A.; Lombarts, Kiki M. J. M. H.; Boerebach, Benjamin C. M.; Heineman, Maas Jan; Scherpbier, Albert J. J. A.; van der Vleuten, Cees P. M.
2014-01-01
Background Teamwork between clinical teachers is a challenge in postgraduate medical training. Although there are several instruments available for measuring teamwork in health care, none of them are appropriate for teaching teams. The aim of this study is to develop an instrument (TeamQ) for measuring teamwork, to investigate its psychometric properties and to explore how clinical teachers assess their teamwork. Method To select the items to be included in the TeamQ questionnaire, we conducted a content validation in 2011, using a Delphi procedure in which 40 experts were invited. Next, for pilot testing the preliminary tool, 1446 clinical teachers from 116 teaching teams were requested to complete the TeamQ questionnaire. For data analyses we used statistical strategies: principal component analysis, internal consistency reliability coefficient, and the number of evaluations needed to obtain reliable estimates. Lastly, the median TeamQ scores were calculated for teams to explore the levels of teamwork. Results In total, 31 experts participated in the Delphi study. In total, 114 teams participated in the TeamQ pilot. The median team response was 7 evaluations per team. The principal component analysis revealed 11 factors; 8 were included. The reliability coefficients of the TeamQ scales ranged from 0.75 to 0.93. The generalizability analysis revealed that 5 to 7 evaluations were needed to obtain internal reliability coefficients of 0.70. In terms of teamwork, the clinical teachers scored residents' empowerment as the highest TeamQ scale and feedback culture as the area that would most benefit from improvement. Conclusions This study provides initial evidence of the validity of an instrument for measuring teamwork in teaching teams. The high response rates and the low number of evaluations needed for reliably measuring teamwork indicate that TeamQ is feasible for use by teaching teams. 
Future research could explore the effectiveness of feedback on teamwork in follow-up measurements. PMID:25393006
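The relationship between the number of averaged evaluations and scale reliability used in such generalizability analyses follows the Spearman-Brown prophecy formula. The sketch below shows the idea with an assumed single-evaluation reliability of 0.30, a made-up value, not one reported by the study.

```python
import math

def evaluations_needed(single_rel, target_rel):
    """Spearman-Brown prophecy: how many evaluations must be averaged to
    lift a single-evaluation reliability up to a target reliability."""
    return (target_rel * (1.0 - single_rel)) / (single_rel * (1.0 - target_rel))

# Assumed single-evaluation reliability of 0.30, target of 0.70
m = evaluations_needed(0.30, 0.70)
print(math.ceil(m))  # → 6
```

Under this assumption, about five and a half evaluations are needed, so six in practice, which is in the same range as the 5 to 7 evaluations reported by the study.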
Walsh, Sinead; Horgan, Jennifer; May, Richard J; Dymond, Simon; Whelan, Robert
2014-01-01
The Relational Completion Procedure is effective for establishing same, opposite and comparative derived relations in verbally able adults, but to date it has not been used to establish relational frames in young children or those with developmental delay. In Experiment 1, the Relational Completion Procedure was used with the goal of establishing two 3-member sameness networks in nine individuals with Autism Spectrum Disorder (eight with language delay). A multiple exemplar intervention was employed to facilitate derived relational responding when required. Seven of nine participants in Experiment 1 passed tests for derived relations. In Experiment 2, eight participants (all of whom, except one, had a verbal repertoire) were given training with the aim of establishing two 4-member sameness networks. Three of these participants were typically developing young children aged between 5 and 6 years old, all of whom demonstrated derived relations, as did four of the five participants with developmental delay. These data demonstrate that it is possible to reliably establish derived relations in young children and those with developmental delay using an automated procedure. © Society for the Experimental Analysis of Behavior.
Constales, Denis; Yablonsky, Gregory S.; Wang, Lucun; ...
2017-04-25
This paper presents a straightforward and user-friendly procedure for extracting a reactivity characterization of catalytic reactions on solid materials under non-steady-state conditions, particularly in temporal analysis of products (TAP) experiments. The kinetic parameters derived by this procedure can help with the development of detailed mechanistic understanding. The procedure consists of two major steps: 1) three “Laplace reactivities” are first determined based on the moments of the exit flow pulse response data; 2) depending on the selected kinetic model, kinetic constants of elementary reaction steps can then be expressed as functions of the reactivities and determined accordingly. In particular, we distinguish two calculation methods based on the availability and reliability of reactant and product data. The theoretical results are illustrated using a reverse example with given parameters as well as an experimental example of CO oxidation over a supported Au/SiO2 catalyst. The procedure presented here provides an efficient tool for kinetic characterization of many complex chemical reactions.
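The moment analysis underlying step 1 can be sketched numerically. The pulse below is a simulated single-exponential exit flow, a stand-in for measured TAP data, and the moment ratio simply recovers its time constant; the full procedure in the paper goes further and maps moments to Laplace reactivities.

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal-rule integral, written out explicitly for portability."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Simulated exit-flow pulse response: exponential decay with time constant tau
tau = 0.5
t = np.linspace(0.0, 10.0, 2001)
flow = np.exp(-t / tau) / tau               # normalized so the 0th moment is ~1

m0 = trapezoid(flow, t)                     # 0th moment: pulse area
m1 = trapezoid(t * flow, t)                 # 1st moment
mean_residence_time = m1 / m0               # recovers tau for this pulse shape
print(round(m0, 3), round(mean_residence_time, 3))
```

In real analyses the moments are computed from the measured exit-flow curves of reactants and products, which is why the paper distinguishes calculation methods by data availability and reliability.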
Baczyńska, Anna K; Rowiński, Tomasz; Cybis, Natalia
2016-01-01
Competency models provide insight into key skills which are common to many positions in an organization. Moreover, there is a range of competencies that is used by many companies. Researchers have developed core competency terminology to underline their cross-organizational value. The article presents a theoretical model of core competencies consisting of two main higher-order competencies called performance and entrepreneurship. Each of them consists of three elements: the performance competency includes cooperation, organization of work and goal orientation, while entrepreneurship includes innovativeness, calculated risk-taking and pro-activeness. However, there is a lack of empirical validation of competency concepts in organizations, and this would seem crucial for obtaining reliable results from organizational research. We propose a two-step empirical validation procedure: (1) confirmatory factor analysis, and (2) classification of employees. The sample consisted of 636 respondents (M = 44.5; SD = 15.1). Participants were administered a questionnaire developed for the study purpose. The reliability, measured by Cronbach's alpha, ranged from 0.60 to 0.83 for the six scales. Next, we tested the model using a confirmatory factor analysis. The two separate, single models of performance and entrepreneurial orientations fit quite well to the data, while a complex model based on the two single concepts needs further research. In the classification of employees based on the two higher-order competencies we obtained four main groups of employees. Their profiles relate to those found in the literature, including so-called niche finders and top performers. Some proposals for organizations are discussed.
Sutureless laparoscopic heminephrectomy using laser tissue soldering.
Ogan, Kenneth; Jacomides, Lucas; Saboorian, Hossein; Koeneman, Kenneth; Li, Yingming; Napper, Cheryl; Hoopman, John; Pearle, Margaret S; Cadeddu, Jeffrey A
2003-06-01
Widespread application of laparoscopic partial nephrectomy has been limited by the lack of a reliable means of attaining hemostasis. We describe laser tissue welding using human albumin as a solder to control bleeding and seal the collecting system during laparoscopic heminephrectomy in a porcine model. Laparoscopic left lower-pole heminephrectomy was performed in five female domestic pigs after occluding the hilar vessels. Using an 810-nm pulsed diode laser (20 W), a 50% liquid albumin-indocyanine green solder was welded to the cut edge of the renal parenchyma to seal the collecting system and achieve hemostasis. Two weeks later, an identical procedure was performed on the right kidney, after which, the animals were sacrificed and both kidneys were harvested for ex vivo retrograde pyelograms and histopathologic analysis. All 10 heminephrectomies were performed without complication. The mean operative time was 82 minutes, with an average blood loss of 43.5 mL per procedure. The mean warm ischemia time was 11.7 minutes. For each heminephrectomy, a mean of 4.2 mL of solder was welded to the cut parenchymal surface. In three of the five acute kidneys and all five 2-week kidneys, ex vivo retrograde pyelograms demonstrated no extravasation. In addition, no animal had clinical evidence of urinoma or delayed hemorrhage. Histopathologic analysis showed preservation of the renal parenchyma immediately beneath the solder. Laser tissue welding provided reliable hemostasis and closure of the collecting system while protecting the underlying parenchyma from the deleterious effect of the laser during porcine laparoscopic heminephrectomy.
Ex post damage assessment: an Italian experience
NASA Astrophysics Data System (ADS)
Molinari, D.; Menoni, S.; Aronica, G. T.; Ballio, F.; Berni, N.; Pandolfo, C.; Stelluti, M.; Minucci, G.
2014-04-01
In recent years, awareness of a need for more effective disaster data collection, storage, and sharing of analyses has developed in many parts of the world. In line with this advance, Italian local authorities have expressed the need for enhanced methods and procedures for post-event damage assessment in order to obtain data that can serve numerous purposes: to create a reliable and consistent database on the basis of which damage models can be defined or validated; and to supply a comprehensive scenario of flooding impacts according to which priorities can be identified during the emergency and recovery phase, and the compensation due to citizens from insurers or local authorities can be established. This paper studies this context, and describes ongoing activities in the Umbria and Sicily regions of Italy intended to identify new tools and procedures for flood damage data surveys and storage in the aftermath of floods. In the first part of the paper, the current procedures for data gathering in Italy are analysed. The analysis shows that the available knowledge does not enable the definition or validation of damage curves, as information is poor, fragmented, and inconsistent. A new procedure for data collection and storage is therefore proposed. The entire analysis was carried out at a local level for the residential and commercial sectors only. The objective of the next steps for the research in the short term will be (i) to extend the procedure to other types of damage, and (ii) to make the procedure operational within the Italian Civil Protection system. The long-term aim is to develop specific depth-damage curves for Italian contexts.
Wind wave analysis in depth limited water using OCEANLYZ, A MATLAB toolbox
NASA Astrophysics Data System (ADS)
Karimpour, Arash; Chen, Qin
2017-09-01
There are a number of well-established methods in the literature describing how to assess and analyze measured wind wave data. However, obtaining reliable results from these methods requires adequate knowledge of their behavior, strengths and weaknesses. A proper implementation of these methods requires a series of procedures, including a pretreatment of the raw measurements and adjustment and refinement of the processed data, to provide quality assurance of the outcomes; otherwise, the analysis can lead to untrustworthy results. This paper discusses potential issues in these procedures, explains which parameters are influential for the outcomes, and suggests practical solutions to avoid and minimize errors in the wave results. The procedures of converting water pressure data into water surface elevation data, treating high-frequency data with a low signal-to-noise ratio, partitioning swell energy from wind sea, and estimating the peak wave frequency from the weighted integral of the wave power spectrum are described. Conversion and recovery of the data acquired by a pressure transducer, particularly in depth-limited water like estuaries and lakes, are explained in detail. To provide researchers with tools for a reliable estimation of wind wave parameters, the Ocean Wave Analyzing toolbox, OCEANLYZ, is introduced. The toolbox contains a number of MATLAB functions for estimation of wave properties in the time and frequency domains. The toolbox has been developed and examined during a number of field study projects in Louisiana's estuaries.
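The weighted-integral estimate of peak frequency mentioned above can be sketched as follows. The spectrum is synthetic, and the exponent q = 5 is a commonly used weighting choice (following Young's method); neither is taken from the OCEANLYZ implementation itself.

```python
import numpy as np

def weighted_peak_frequency(f, S, q=5):
    """Peak frequency as a weighted integral of the spectrum:
    fp = int(f * S^q df) / int(S^q df), assuming a uniform frequency grid."""
    w = S**q
    return float(np.sum(f * w) / np.sum(w))

# Synthetic single-peaked wave spectrum centered near 0.2 Hz
f = np.linspace(0.05, 1.0, 400)
S = np.exp(-0.5 * ((f - 0.2) / 0.03) ** 2)   # Gaussian-shaped spectral peak
fp = weighted_peak_frequency(f, S)
print(round(fp, 3))
```

Raising the spectrum to a high power concentrates the weight around the spectral peak, making the estimate far less sensitive to sampling noise than simply taking the frequency of the single largest spectral value.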
Oliver, Jeremie D; Menapace, Deanna; Younes, Ahmed; Recker, Chelsey; Hamilton, Grant; Friedman, Oren
2018-02-01
Although periorbital edema and ecchymosis are commonly encountered after facial plastic and reconstructive surgery procedures, there is currently no validated grading scale to qualify these findings. In this study, the modified "Surgeon Periorbital Rating of Edema and Ecchymosis (SPREE)" questionnaire is used as a grading scale for patients undergoing facial plastic surgery procedures. This article aims to validate a uniform grading scale for periorbital edema and ecchymosis using the modified SPREE questionnaire in the postoperative period. This is a prospective study including 82 patients at two different routine postoperative visits (second and seventh postoperative days), wherein the staff and resident physicians, physician assistants (PAs), patients, and any accompanying adults were asked to use the modified SPREE questionnaire to score edema and ecchymosis of each eye of the patient who had undergone a plastic surgery procedure. Interrater and intrarater agreements were then examined. Cohen's kappa coefficient was calculated to measure intrarater and interrater agreement between health care professionals (staff physicians and resident physicians); staff physicians and PAs; and staff physicians, patients, and accompanying adults. Good to excellent agreement was identified between staff physicians and resident physicians, as well as between staff physicians and PAs. There was, however, poor agreement between staff physicians, patients, and accompanying adults. In addition, excellent agreement was found for intraobserver reliability during same-day visits. The modified SPREE questionnaire is a validated grading system for use by health care professionals to reliably rate periorbital edema and ecchymosis in the postoperative period. Validation of the modified SPREE questionnaire may improve consistency in medical literature reporting and related outcomes reporting in the future.
Limiting excessive postoperative blood transfusion after cardiac procedures. A review.
Ferraris, V A; Ferraris, S P
1995-01-01
Analysis of blood product use after cardiac operations reveals that a few patients (< or = 20%) consume the majority of blood products (> 80%). The risk factors that predispose a minority of patients to excessive blood use include patient-related factors, transfusion practices, drug-related causes, and procedure-related factors. Multivariate studies suggest that patient age and red blood cell volume are independent patient-related variables that predict excessive blood product transfusion after cardiac procedures. Other factors include preoperative aspirin ingestion, type of operation, over- or underutilization of heparin during cardiopulmonary bypass, failure to correct hypothermia after cardiopulmonary bypass, and physician overtransfusion. A survey of the currently available blood conservation techniques reveals 5 that stand out as reliable methods: 1) high-dose aprotinin therapy, 2) preoperative erythropoietin therapy when time permits adequate dosage before operation, 3) hemodilution by harvest of whole blood immediately before cardiopulmonary bypass, 4) autologous predonation of blood, and 5) salvage of oxygenator blood after cardiopulmonary bypass. Other methods, such as the use of epsilon-aminocaproic acid or desmopressin, cell saving devices, reinfusion of shed mediastinal blood, and hemofiltration have been reported to be less reliable and may even be harmful in some high-risk patients. 
Consideration of the available data allows formulation of a 4-pronged plan for limiting excessive blood transfusion after surgery: 1) recognize the causes of excessive transfusion, including the importance of red blood cell volume, type of procedure being performed, preoperative aspirin ingestion, etc.; 2) establish a quality management program, including a survey of transfusion practices that emphasizes physician education and availability of real-time laboratory testing to guide transfusion therapy; 3) adopt a multimodal approach using institution-proven techniques; and 4) continually reassess blood product use and analyze the cost-benefits of blood conservation interventions. PMID:7580359
Bulk transmission system component outage data base. Research project 1283-1. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albrecht, P.F.; Heising, C.R.; Patton, A.D.
1981-04-01
This project is responsive to the premise that the successful analysis of equipment reliability and system adequacy in bulk transmission system planning and system operations requires data on equipment failure rates, maintenance outage rates, and repair times. The objective of the project is to develop a system of consistent definitions, formats and procedures which can be used in the collection of such data in a well designed outage data bank. The project consisted of four interrelated phases, beginning with a review of related work and problem definition and ending with a discussion of data base organization and management. The review of related work quickly pointed out that two schools of thought exist on data collection. One group contends that data should be collected on bulk transmission system physical equipments, such as transformers, circuit breakers, etc., and the other group supports data collection on functional transmission lines, including the terminal equipment, which have been defined as transmission units in this report. A compromise between these two approaches was imperative for successful completion of the work. The second phase investigated the data needed for reliability evaluation. The applications of the data bank were enumerated, leading to a list of basic data needed when recording an incident. Phase 3 concentrated on developing procedures for data collection using forms to collect data both on outages and on the equipment design. Finally, the aspects of data base organization and management were explored and general recommendations made appropriate to this specific application. The project did not succeed in completely defining the procedures, particularly for multiple outages, but the ground work has been laid for a pilot data collection effort to refine the procedures before wide-scale implementation by the utility industry.
The Behavior Pain Assessment Tool for critically ill adults: a validation study in 28 countries.
Gélinas, Céline; Puntillo, Kathleen A; Levin, Pavel; Azoulay, Elie
2017-05-01
Many critically ill adults are unable to communicate their pain through self-report. The study purpose was to validate the use of the 8-item Behavior Pain Assessment Tool (BPAT) in patients hospitalized in 192 intensive care units from 28 countries. A total of 4812 procedures in 3851 patients were included in data analysis. Patients were assessed with the BPAT before and during procedures by 2 different raters (mostly nurses and physicians). Those who were able to self-report were asked to rate their pain intensity and pain distress on 0 to 10 numeric rating scales. Interrater reliability of behavioral observations was supported by moderate (0.43-0.60) to excellent (>0.60) kappa coefficients. Mixed effects multilevel logistic regression models showed that most behaviors were more likely to be present during the procedure than before and in less sedated patients, demonstrating discriminant validation of the tool use. Regarding criterion validation, moderate positive correlations were found during procedures between the mean BPAT scores and the mean pain intensity (r = 0.54) and pain distress (r = 0.49) scores (P < 0.001). Regression models showed that all behaviors were significant predictors of pain intensity and pain distress, accounting for 35% and 29% of their total variance, respectively. A BPAT cut-point score >3.5 could classify patients with or without severe levels (≥8) of pain intensity and distress with sensitivity and specificity findings ranging from 61.8% to 75.1%. The BPAT was found to be reliable and valid. Its feasibility for use in practice and the effect of its clinical implementation on patient pain and intensive care unit outcomes need further research.
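Kappa coefficients like those reported for the BPAT raters can be computed directly from paired categorical ratings. Below is a minimal sketch with invented presence/absence codes for a single behavior; the codes are illustrative, not study data.

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical codes:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.union1d(r1, r2)
    po = np.mean(r1 == r2)                          # observed agreement
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c)    # chance agreement
             for c in cats)
    return float((po - pe) / (1.0 - pe))

# Hypothetical presence/absence codes for one behavior scored by two raters
# across 10 procedures
rater1 = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
rater2 = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
print(round(cohens_kappa(rater1, rater2), 2))
```

For these codes the observed agreement is 0.8 against a chance agreement of 0.52, giving a kappa of about 0.58, i.e. in the "moderate" band cited by the abstract.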
Water-quality sampling by the U.S. Geological Survey-Standard protocols and procedures
Wilde, Franceska D.
2010-01-01
The U.S. Geological Survey (USGS) develops the sampling procedures and collects the data necessary for the accurate assessment and wise management of our Nation's surface-water and groundwater resources. Federal and State agencies, water-resource regulators and managers, and many organizations and interested parties in the public and private sectors depend on the reliability, timeliness, and integrity of the data we collect and the scientific soundness and impartiality of our data assessments and analysis. The standard data-collection methods uniformly used by USGS water-quality personnel are peer reviewed, kept up-to-date, and published in the National Field Manual for the Collection of Water-Quality Data (http://pubs.water.usgs.gov/twri9A/).
Albrecht, A; Levenson, B; Göhring, S; Haerer, W; Reifart, N; Ringwald, G; Troger, B
2009-10-01
QuIK is the German acronym for Quality Assurance in Invasive Cardiology. It describes a continuous project of electronic data collection in cardiac catheterization laboratories all over Germany. Mainly members of the German Society of Cardiologists in Private Practice (BNK) participate in this computer-based project. Since 1996, data on diagnostic and interventional procedures have been collected and sent to a registry center, where a regular benchmarking analysis of the results is performed. Part of the project is a yearly auditing process, including an on-site visit to the cath lab, to guarantee the reliability of the information collected. Since 1996, about one million procedures have been documented.
Pérez-Castilla, Alejandro; McMahon, John J; Comfort, Paul; García-Ramos, Amador
2017-07-31
The aims of this study were to compare the reliability and magnitude of jump height between the two standard procedures of analysing force platform data to estimate jump height (take-off velocity [TOV] and flight time [FT]) in the loaded squat jump (SJ) exercise performed with a free-weight barbell and in a Smith machine. Twenty-three collegiate men (age 23.1 ± 3.2 years, body mass 74.7 ± 7.3 kg, height 177.1 ± 7.0 cm) were tested twice for each SJ type (free-weight barbell and Smith machine) with 17, 30, 45, 60, and 75 kg loads. No substantial differences in reliability were observed between the TOV (coefficient of variation [CV]: 9.88%; intraclass correlation coefficient [ICC]: 0.82) and FT (CV: 8.68%; ICC: 0.88) procedures (CV ratio: 1.14), while the Smith machine SJ (CV: 7.74%; ICC: 0.87) revealed a higher reliability than the free-weight SJ (CV: 9.88%; ICC: 0.81) (CV ratio: 1.28). The TOV procedure provided higher magnitudes of jump height than the FT procedure for the loaded Smith machine SJ (systematic bias: 2.64 cm; P < 0.05), while no significant differences between the TOV and FT procedures were observed in the free-weight SJ exercise (systematic bias: 0.26 cm; P > 0.05). Heteroscedasticity of the errors was observed for the Smith machine SJ (r: 0.177), with increasing differences in favour of the TOV procedure for the trials with lower jump heights (i.e. higher external loads). Based on these results, the use of a Smith machine in conjunction with the FT procedure is recommended to more accurately determine jump height during the loaded SJ.
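The two force-platform procedures rest on simple projectile kinematics: take-off velocity gives h = v²/(2g), while flight time gives h = g·t²/8 under the assumption of equal take-off and landing positions. The readings below are invented to illustrate the two formulas side by side.

```python
G = 9.81  # gravitational acceleration, m/s^2

def height_from_takeoff_velocity(v_to):
    """Jump height from take-off velocity: h = v^2 / (2g)."""
    return v_to**2 / (2.0 * G)

def height_from_flight_time(t_flight):
    """Jump height from flight time, assuming the jumper takes off and
    lands at the same height: h = g * t^2 / 8."""
    return G * t_flight**2 / 8.0

# Hypothetical force-platform readings for one squat jump
v_to = 2.2        # m/s, from integrating vertical force over time
t_flight = 0.45   # s, time between take-off and landing
print(round(height_from_takeoff_velocity(v_to), 3),
      round(height_from_flight_time(t_flight), 3))  # → 0.247 0.248
```

If the jumper lands with more flexed joints than at take-off, flight time is inflated and the FT estimate biases upward, which is one reason the two procedures can diverge systematically.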
NASA Astrophysics Data System (ADS)
Pohl, L.; Kaiser, M.; Ketelhut, S.; Pereira, S.; Goycoolea, F.; Kemper, Björn
2016-03-01
Digital holographic microscopy (DHM) enables high-resolution non-destructive inspection of technical surfaces and minimally invasive label-free live cell imaging. However, the analysis of confluent cell layers represents a challenge, as quantitative DHM phase images in this case do not provide sufficient information for image segmentation, determination of the cellular dry mass, or calculation of the cell thickness. We present novel strategies for the analysis of confluent cell layers with quantitative DHM phase contrast utilizing a histogram-based evaluation procedure. The applicability of our approach is illustrated by quantification of drug-induced cell morphology changes, and it is shown that the method is capable of reliably quantifying global morphology changes of confluent cell layers.
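A toy version of a histogram-based evaluation might look like the sketch below: instead of segmenting individual cells, the location of the dominant peak of the phase histogram characterizes the confluent layer as a whole. All values are synthetic and the procedure is a simplified stand-in for the authors' method.

```python
import numpy as np

# Synthetic DHM-like phase map (rad): a confluent layer produces a broad,
# elevated phase distribution; a small cell-free margin sits near zero
rng = np.random.default_rng(1)
phase = rng.normal(1.2, 0.15, (128, 128))           # confluent cell layer
phase[:10, :] = rng.normal(0.05, 0.05, (10, 128))   # cell-free margin

# Histogram-based evaluation: locate the dominant histogram peak rather
# than attempting per-cell segmentation
counts, edges = np.histogram(phase, bins=200)
centers = 0.5 * (edges[:-1] + edges[1:])
peak_phase = float(centers[np.argmax(counts)])
print(peak_phase)
```

A drug-induced global morphology change (e.g. cell flattening) would shift this peak, so tracking the peak position over time yields a segmentation-free readout.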
Mutch, Sarah A.; Gadd, Jennifer C.; Fujimoto, Bryant S.; Kensel-Hammes, Patricia; Schiro, Perry G.; Bajjalieh, Sandra M.; Chiu, Daniel T.
2013-01-01
This protocol describes a method to determine both the average number and variance of proteins in the few to tens of copies in isolated cellular compartments, such as organelles and protein complexes. Other currently available protein quantification techniques either provide an average number but lack information on the variance or are not suitable for reliably counting proteins present in the few to tens of copies. This protocol entails labeling the cellular compartment with fluorescent primary-secondary antibody complexes, TIRF (total internal reflection fluorescence) microscopy imaging of the cellular compartment, digital image analysis, and deconvolution of the fluorescence intensity data. A minimum of 2.5 days is required to complete the labeling, imaging, and analysis of a set of samples. As an illustrative example, we describe in detail the procedure used to determine the copy number of proteins in synaptic vesicles. The same procedure can be applied to other organelles or signaling complexes. PMID:22094731
Accounting for Proof Test Data in a Reliability Based Design Optimization Framework
NASA Technical Reports Server (NTRS)
Venter, Gerhard; Scotti, Stephen J.
2012-01-01
This paper investigates the use of proof (or acceptance) test data during the reliability based design optimization of structural components. It is assumed that every component will be proof tested and that the component will only enter into service if it passes the proof test. The goal is to reduce the component weight, while maintaining high reliability, by exploiting the proof test results during the design process. The proposed procedure results in the simultaneous design of the structural component and the proof test itself and provides the designer with direct control over the probability of failing the proof test. The procedure is illustrated using two analytical example problems and the results indicate that significant weight savings are possible when exploiting the proof test results during the design process.
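The effect exploited in this design approach, that passing a proof test screens out the weak tail of the strength distribution, can be illustrated with a small Monte Carlo sketch. The samplers and function name below are hypothetical, for illustration only; the paper's actual optimization procedure is not reproduced here.

```python
import random

def post_proof_failure_prob(strength_sampler, load_sampler, proof_load,
                            n_trials=200_000, seed=0):
    """Monte Carlo sketch: components whose strength falls below the proof
    load are screened out; among survivors, count failures where the service
    load exceeds the strength."""
    rng = random.Random(seed)
    survivors = failures = 0
    for _ in range(n_trials):
        strength = strength_sampler(rng)
        if strength < proof_load:
            continue  # failed the proof test, never enters service
        survivors += 1
        if load_sampler(rng) > strength:
            failures += 1
    return failures / survivors if survivors else float("nan")
```

With uniform strength and load on (0, 1) and a proof load of 0.5, the conditional failure probability drops from 0.5 (no proof test) to 0.25, which is the kind of reliability gain the optimization can trade against component weight.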
Measuring competence in endoscopic sinus surgery.
Syme-Grant, J; White, P S; McAleer, J P G
2008-02-01
Competence-based education is currently being introduced into higher surgical training in the UK. Valid and reliable performance assessment tools are essential to ensure competencies are achieved. No such tools have yet been reported in the UK literature. We sought to develop and pilot test an Endoscopic Sinus Surgery Competence Assessment Tool (ESSCAT). The ESSCAT was designed for in-theatre assessment of higher surgical trainees in the UK. The ESSCAT rating matrix was developed through task analysis of ESS procedures. All otolaryngology consultants and specialist registrars in Scotland were given the opportunity to contribute to its refinement. Two cycles of in-theatre testing were used to ensure utility and gather quantitative data on validity and reliability. Videos of trainees performing surgery were used in establishing inter-rater reliability. National consultation, the consensus-derived minimum standard of performance, Cronbach's alpha = 0.89, and demonstration of trainee learning (p = 0.027) during the in vivo application of the ESSCAT suggest a high level of validity. Inter-rater reliability was moderate for competence decisions (Cohen's kappa = 0.5) and good for total scores (intraclass correlation coefficient = 0.63). Intra-rater reliability was good for both competence decisions (kappa = 0.67) and total scores (Kendall's tau-b = 0.73). The ESSCAT generates a valid and reliable assessment of trainees' in-theatre performance of endoscopic sinus surgery. In conjunction with ongoing evaluation of the instrument, we recommend the use of the ESSCAT in higher specialist training in otolaryngology in the UK.
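The agreement statistic reported above for competence decisions, Cohen's kappa, corrects the raw proportion of agreement for the agreement expected by chance. A minimal illustrative implementation (not the authors' analysis code):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters judging the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    # Chance agreement: product of each rater's marginal category proportions.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)
```

For example, two raters agreeing on 3 of 4 binary competence decisions, with marginals of 2/2 and 1/3, give an observed agreement of 0.75 against a chance agreement of 0.5, hence kappa = 0.5, the "moderate" level reported above.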
Human Reliability and the Cost of Doing Business
NASA Technical Reports Server (NTRS)
DeMott, Diana
2014-01-01
Most businesses recognize that people will make mistakes and assume errors are just part of the cost of doing business, but does it need to be? Companies with high risk, or major consequences, should consider the effect of human error. In a variety of industries, human errors have caused costly failures and workplace injuries: airline mishaps, medical malpractice, medication administration errors, and major oil spills have all been blamed on human error. A technique to mitigate or even eliminate some of these costly human errors is the use of Human Reliability Analysis (HRA). Various methodologies are available to perform human reliability assessments, ranging from identifying the most likely areas of concern to detailed assessments with calculated human error failure probabilities. Which methodology to use would be based on a variety of factors, including: 1) how people react and act in different industries, and differing expectations based on industry standards; 2) factors that influence how the human errors could occur, such as tasks, tools, environment, workplace, support, training and procedures; 3) type and availability of data; and 4) how the industry views risk and reliability influences (types of emergencies, contingencies and routine tasks versus cost-based concerns). A human reliability assessment should be the first step to reduce, mitigate or eliminate costly mistakes or catastrophic failures. Using human reliability techniques to identify and classify human error risks gives a company more opportunities to mitigate or eliminate these risks and prevent costly failures.
Scholma, Jetse; Fuhler, Gwenny M.; Joore, Jos; Hulsman, Marc; Schivo, Stefano; List, Alan F.; Reinders, Marcel J. T.; Peppelenbosch, Maikel P.; Post, Janine N.
2016-01-01
Massive parallel analysis using array technology has become the mainstay for analysis of genomes and transcriptomes. Analogously, the predominance of phosphorylation as a regulator of cellular metabolism has fostered the development of peptide arrays of kinase consensus substrates that allow the charting of cellular phosphorylation events (often called kinome profiling). However, whereas the bioinformatical framework for expression array analysis is well developed, no advanced analysis tools are yet available for kinome profiling. Especially intra-array and inter-array normalization of peptide array phosphorylation remain problematic, due to the absence of “housekeeping” kinases and the obvious fallacy of the assumption that different experimental conditions should exhibit equal amounts of kinase activity. Here we describe the development of analysis tools that reliably quantify phosphorylation of peptide arrays and that allow normalization of the signals obtained. We provide a method for intra-slide gradient correction and spot quality control. We describe a novel inter-array normalization procedure, named repetitive signal enhancement (RSE), which provides a mathematical approach to limit the false negative results occurring with the use of other normalization procedures. Using in silico and biological experiments we show that employing such protocols yields superior insight into cellular physiology as compared to classical analysis tools for kinome profiling. PMID:27225531
Fault trees for decision making in systems analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lambert, Howard E.
1975-10-09
The application of fault tree analysis (FTA) to system safety and reliability is presented within the framework of system safety analysis. The concepts and techniques involved in manual and automated fault tree construction are described and their differences noted. The theory of mathematical reliability pertinent to FTA is presented with emphasis on engineering applications. An outline of the quantitative reliability techniques of the Reactor Safety Study is given. Concepts of probabilistic importance are presented within the fault tree framework and applied to the areas of system design, diagnosis and simulation. The computer code IMPORTANCE ranks basic events and cut sets according to a sensitivity analysis. A useful feature of the IMPORTANCE code is that it can accept relative failure data as input. The output of the IMPORTANCE code can assist an analyst in finding weaknesses in system design and operation, suggest the optimal course of system upgrade, and determine the optimal location of sensors within a system. A general simulation model of system failure in terms of fault tree logic is described. The model is intended for efficient diagnosis of the causes of system failure in the event of a system breakdown. It can also be used to assist an operator in making decisions under a time constraint regarding the future course of operations. The model is well suited for computer implementation. New results incorporated in the simulation model include an algorithm to generate repair checklists on the basis of fault tree logic and a one-step-ahead optimization procedure that minimizes the expected time to diagnose system failure.
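The quantitative step that importance-ranking codes build on can be sketched in a few lines: a minimal cut set fails only when all of its basic events occur, and the classical min-cut upper bound combines the cut sets as if they were independent. The following is an illustrative sketch of that standard calculation, not the IMPORTANCE code itself:

```python
def cut_set_probability(basic_event_probs):
    """A minimal cut set fails only if all its basic events occur;
    assuming independence, its probability is the product of the event probabilities."""
    p = 1.0
    for q in basic_event_probs:
        p *= q
    return p

def top_event_upper_bound(cut_sets):
    """Min-cut upper bound on the top-event probability:
    P(top) <= 1 - prod_i (1 - P(C_i))."""
    prod = 1.0
    for cs in cut_sets:
        prod *= 1.0 - cut_set_probability(cs)
    return 1.0 - prod
```

For two cut sets with probabilities 0.02 and 0.3, the bound is 1 - 0.98 x 0.7 = 0.314; ranking basic events by how much this figure changes when each event probability is perturbed is the essence of the sensitivity analysis described above.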
Estimation of the behavior factor of existing RC-MRF buildings
NASA Astrophysics Data System (ADS)
Vona, Marco; Mastroberti, Monica
2018-01-01
In recent years, several research groups have studied a new generation of analysis methods for the seismic response assessment of existing buildings. Nevertheless, many important developments are still needed to define more reliable and effective assessment procedures. Moreover, for existing buildings it should be highlighted that, due to the low knowledge level, linear elastic analysis is the only analysis method allowed. The same codes (such as NTC2008 and EC8) consider linear dynamic analysis with a behavior factor as the reference method for the evaluation of seismic demand. This type of analysis is based on a linear-elastic structural model subject to a design spectrum, obtained by reducing the elastic spectrum through a behavior factor. The behavior factor (reduction factor, or q factor in some codes) is used to reduce the elastic spectrum ordinates or the forces obtained from a linear analysis in order to take into account the nonlinear structural capacities. Behavior factors should be defined based on the several parameters that influence the seismic nonlinear capacity, such as mechanical material characteristics, structural system, irregularity and design procedures. In practical applications, there is still an evident lack of detailed rules and accurate behavior factor values adequate for existing buildings. In this work, investigations of the seismic capacity of the main existing RC-MRF building types have been carried out. In order to make a correct evaluation of the seismic force demand, actual behavior factor values coherent with a force-based seismic safety assessment procedure have been proposed and compared with the values reported in the Italian seismic code, NTC08.
Pre-analytical issues in the haemostasis laboratory: guidance for the clinical laboratories.
Magnette, A; Chatelain, M; Chatelain, B; Ten Cate, H; Mullier, F
2016-01-01
Ensuring quality has become a daily requirement in laboratories. In haemostasis, even more than in other disciplines of biology, quality is determined by a pre-analytical step that encompasses all procedures, starting with the formulation of the medical question, and includes patient preparation, sample collection, handling, transportation, processing, and storage until time of analysis. This step, based on a variety of manual activities, is the most vulnerable part of the total testing process and is a major component of the reliability and validity of results in haemostasis and constitutes the most important source of erroneous or un-interpretable results. Pre-analytical errors may occur throughout the testing process and arise from unsuitable, inappropriate or wrongly handled procedures. Problems may arise during the collection of blood specimens such as misidentification of the sample, use of inadequate devices or needles, incorrect order of draw, prolonged tourniquet placing, unsuccessful attempts to locate the vein, incorrect use of additive tubes, collection of unsuitable samples for quality or quantity, inappropriate mixing of a sample, etc. Some factors can alter the result of a sample constituent after collection during transportation, preparation and storage. Laboratory errors can often have serious adverse consequences. Lack of standardized procedures for sample collection accounts for most of the errors encountered within the total testing process. They can also have clinical consequences as well as a significant impact on patient care, especially those related to specialized tests as these are often considered as "diagnostic". Controlling pre-analytical variables is critical since this has a direct influence on the quality of results and on their clinical reliability. 
The accurate standardization of the pre-analytical phase is of pivotal importance for achieving reliable results of coagulation tests and should reduce the side effects of the influence factors. This review is a summary of the most important recommendations regarding the importance of pre-analytical factors for coagulation testing and should be a tool to increase awareness about the importance of pre-analytical factors for coagulation testing.
Analysis of the factors that impact the reliability of high level waste canister materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boyd, W.K.; Hall, A.M.
1977-09-19
The analysis encompassed identification and analysis of potential threats to canister integrity arising in the course of waste solidification, interim storage at the fuels reprocessing plant, wet and dry shipment, and geologic storage. Fabrication techniques and quality assurance requirements necessary to ensure optimum canister reliability were considered, taking into account such factors as welding procedure, surface preparation, stress relief, remote weld closure, and inspection methods. Alternative canister materials and canister systems were also considered in terms of optimum reliability in the face of threats to the canister's integrity, ease of fabrication, inspection, handling and cost. If interim storage in air is admissible, the sequence suggested comprises producing a glass-type waste product in a continuous ceramic melter, pouring into a carbon steel or low-alloy steel canister of moderately heavy wall thickness, storing in air upright on a pad and surrounded by a concrete radiation shield, and thereafter placing in geologic storage without overpacking. Should the decision be to store in water during the interim period, then use of either a 304 L stainless steel canister overpacked with a solution-annealed and fast-cooled 304 L container, or a single high-alloy canister, is suggested. The high alloy may be Inconel 600, Incoloy Alloy 800, or Incoloy Alloy 825. In either case, it is suggested that the container be overpacked with a moderately heavy wall carbon steel or low-alloy steel cask for geologic storage to ensure ready retrievability. 19 figs., 5 tables.
Development of the Assessment of Belief Conflict in Relationship-14 (ABCR-14)
Kyougoku, Makoto; Teraoka, Mutsumi; Masuda, Noriko; Ooura, Mariko; Abe, Yasushi
2015-01-01
Purpose Nurses and other healthcare workers frequently experience belief conflict, one of the most important new stress-related problems in both academic and clinical fields. Methods In this study, using a sample of 1,683 nursing practitioners, we developed the Assessment of Belief Conflict in Relationship-14 (ABCR-14), a new scale that assesses belief conflict in the healthcare field. Standard psychometric procedures were used to develop and test the scale, including qualitative framework concept and item-pool development, item reduction, and scale development. We analyzed the psychometric properties of the ABCR-14 according to entropy, polyserial correlation coefficient, exploratory factor analysis, confirmatory factor analysis, average variance extracted, Cronbach's alpha, Pearson product-moment correlation coefficient, and multidimensional item response theory (MIRT). Results The results of the analysis supported a three-factor model consisting of 14 items. The validity and reliability of the ABCR-14 were supported by evidence of high construct validity, structural validity, hypothesis testing, internal consistency reliability, and concurrent validity. The MIRT results offered strong support for good item response in terms of item slope and difficulty parameters, although the ABCR-14 Likert scale might need to be explored further from the MIRT point of view. Nevertheless, as mentioned above, there is sufficient evidence to support that the ABCR-14 has high validity and reliability. Conclusion The ABCR-14 demonstrates good psychometric properties for nursing belief conflict. Further studies are recommended to confirm its application in clinical practice. PMID:26247356
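Cronbach's alpha, used here as the internal-consistency index, compares the sum of the item variances to the variance of the total score: alpha = k/(k-1) * (1 - sum of item variances / variance of totals). A compact illustrative implementation (not the authors' analysis code):

```python
def cronbach_alpha(item_scores):
    """item_scores: k lists, each holding n respondents' scores for one item."""
    k = len(item_scores)
    n = len(item_scores[0])

    def variance(xs):
        # Sample variance with n-1 in the denominator.
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(variance(item) for item in item_scores)
    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    return k / (k - 1) * (1 - sum_item_vars / variance(totals))
```

When items are perfectly correlated the total-score variance dominates and alpha approaches 1; uncorrelated items drive it toward 0, which is why a value such as 0.89 is read as high internal consistency.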
Rozen, Warren Matthew; Spychal, Robert T.; Hunter-Smith, David J.
2016-01-01
Background Accurate volumetric analysis is an essential component of preoperative planning in both reconstructive and aesthetic breast procedures towards achieving symmetrization and patient-satisfactory outcome. Numerous comparative studies and reviews of individual techniques have been reported. However, a unifying review of all techniques comparing their accuracy, reliability, and practicality has been lacking. Methods A review of the published English literature dating from 1950 to 2015 using databases, such as PubMed, Medline, Web of Science, and EMBASE, was undertaken. Results Since Bouman’s first description of water displacement method, a range of volumetric assessment techniques have been described: thermoplastic casting, direct anthropomorphic measurement, two-dimensional (2D) imaging, and computed tomography (CT)/magnetic resonance imaging (MRI) scans. However, most have been unreliable, difficult to execute and demonstrate limited practicability. Introduction of 3D surface imaging has revolutionized the field due to its ease of use, fast speed, accuracy, and reliability. However, its widespread use has been limited by its high cost and lack of high level of evidence. Recent developments have unveiled the first web-based 3D surface imaging program, 4D imaging, and 3D printing. Conclusions Despite its importance, an accurate, reliable, and simple breast volumetric analysis tool has been elusive until the introduction of 3D surface imaging technology. However, its high cost has limited its wide usage. Novel adjunct technologies, such as web-based 3D surface imaging program, 4D imaging, and 3D printing, appear promising. PMID:27047788
Concordance of DSM-IV Axis I and II diagnoses by personal and informant's interview.
Schneider, Barbara; Maurer, Konrad; Sargk, Dieter; Heiskel, Harald; Weber, Bernhard; Frölich, Lutz; Georgi, Klaus; Fritze, Jürgen; Seidler, Andreas
2004-06-30
The validity and reliability of using psychological autopsies to diagnose a psychiatric disorder is a critical issue. Therefore, the interrater and test-retest reliability of the Structured Clinical Interview for DSM-IV Axis I and Personality Disorders and the usefulness of these instruments for the psychological autopsy method were investigated. Diagnoses by informant's interview were compared with diagnoses generated by a personal interview of 35 persons. Interrater reliability and test-retest reliability were assessed in 33 and 29 persons, respectively. Chi-square analysis, kappa and intraclass correlation coefficients, and Kendall's tau were used to determine agreement of diagnoses. Kappa coefficients were above 0.84 for substance-related disorders, mood disorders, and anxiety and adjustment disorders, and above 0.65 for Axis II disorders for interrater and test-retest reliability. Agreement by personal and relative's interview generated kappa coefficients above 0.79 for most Axis I and above 0.65 for most personality disorder diagnoses; Kendall's tau for dimensional individual personality disorder scores ranged from 0.22 to 0.72. Despite the small number of psychiatric disorders in the selected population, the present results provide support for the validity of most diagnoses obtained through the best-estimate method using the Structured Clinical Interview for DSM-IV Axis I and Personality Disorders. This instrument can be recommended as a tool for the psychological autopsy procedure in post-mortem research. Copyright 2004 Elsevier Ireland Ltd.
34 CFR 462.11 - What must an application contain?
Code of Federal Regulations, 2010 CFR
2010-07-01
... the methodology and procedures used to measure the reliability of the test. (h) Construct validity... previous test, and results from validity, reliability, and equating or standard-setting studies undertaken... NRS educational functioning levels (content validity). Documentation of the extent to which the items...
ERIC Educational Resources Information Center
Esch, Barbara E.; Carr, James E.; Grow, Laura L.
2009-01-01
Evidence to support stimulus-stimulus pairing (SSP) in speech acquisition is less than robust, calling into question the ability of SSP to reliably establish automatically reinforcing properties of speech and limiting the procedure's clinical utility for increasing vocalizations. We evaluated the effects of a modified SSP procedure on…
The purpose of this SOP is to establish a uniform procedure for the collection of yard composite soil samples in the field. This procedure was followed to ensure consistent and reliable collection of outdoor soil samples during the Arizona NHEXAS project and the "Border" study. ...
The purpose of this SOP is to establish a uniform procedure for the collection of residential foundation soil samples in the field. This procedure was followed to ensure consistent and reliable collection of outdoor soil samples during the Arizona NHEXAS project and the "Border"...
ERIC Educational Resources Information Center
Guess, Doug; And Others
Ten replication studies based on quantitative procedures developed to measure motor and sensory/motor skill acquisition among handicapped and nonhandicapped infants and children are presented. Each study follows the original assessment procedures, and emphasizes the stability of interobserver reliability across time, consistency in the response…
Knowing the operative game plan: a novel tool for the assessment of surgical procedural knowledge.
Balayla, Jacques; Bergman, Simon; Ghitulescu, Gabriela; Feldman, Liane S; Fraser, Shannon A
2012-08-01
What is the source of inadequate performance in the operating room? Is it a lack of technical skills, poor judgment or a lack of procedural knowledge? We created a surgical procedural knowledge (SPK) assessment tool and evaluated its use. We interviewed medical students, residents and training program staff on SPK assessment tools developed for 3 different common general surgery procedures: inguinal hernia repair with mesh in men, laparoscopic cholecystectomy and right hemicolectomy. The tools were developed as a step-wise assessment of specific surgical procedures based on techniques described in a current surgical text. We compared novice (medical student to postgraduate year [PGY]-2) and expert group (PGY-3 to program staff) scores using the Mann-Whitney U test. We calculated the total SPK score and defined a cut-off score using receiver operating characteristic analysis. In all, 5 participants in each of 7 different training groups (n = 35) underwent an interview. Median scores for each procedure and overall SPK scores increased with experience. The median SPK score for novices was 54.9 (95% confidence interval [CI] 21.6-58.8) compared with 98.05 (95% CI 94.1-100.0) for experts (p = 0.012). An SPK cut-off score of 93.1 discriminates between novice and expert surgeons. Surgical procedural knowledge can reliably be assessed using our SPK assessment tool. It can discriminate between novice and expert surgeons for common general surgical procedures. Future studies are planned to evaluate its use for more complex procedures.
Barnabei, Agnese; Strigari, Lidia; Marchetti, Paolo; Sini, Valentina; De Vecchis, Liana; Corsello, Salvatore Maria; Torino, Francesco
2015-10-01
The assessment of ovarian reserve in premenopausal women requiring anticancer gonadotoxic therapy can help clinicians address some challenging issues, including the probability of future pregnancies after the end of treatment. Anti-Müllerian hormone (AMH) and age can reliably estimate ovarian reserve. A limited number of studies have evaluated AMH and age as predictors of residual ovarian reserve following cytotoxic chemotherapy in breast cancer patients. To conduct a meta-analysis of published data on this topic, we searched the medical literature using the key MeSH terms "amenorrhea/chemically induced," "ovarian reserve," "anti-Mullerian hormone/blood," and "breast neoplasms/drug therapy." Preferred Reporting Items for Systematic Reviews and Meta-Analyses statements guided the search strategy. U.K. National Health Service guidelines were used in abstracting data and assessing data quality and validity. Area under the receiver operating characteristic curve (ROC/AUC) analysis was used to evaluate the predictive utility of baseline AMH and age model. The meta-analysis of data pooled from the selected studies showed that both age and serum AMH are reliable predictors of post-treatment ovarian activity in breast cancer patients. Importantly, ROC/AUC analysis indicated AMH was a more reliable predictor of post-treatment ovarian activity in patients aged younger than 40 years (0.753; 95% confidence interval [CI]: 0.602-0.904) compared with those older than 40 years (0.678; 95% CI: 0.491-0.866). We generated a nomogram describing the correlations among age, pretreatment AMH serum levels, and ovarian activity at 1 year from the end of chemotherapy. After the ongoing validation process, the proposed nomogram may help clinicians discern premenopausal women requiring cytotoxic chemotherapy who should be considered high priority for fertility preservation counseling and procedures. ©AlphaMed Press.
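The ROC/AUC figures quoted above have a direct probabilistic reading: the AUC is the probability that a randomly chosen positive case (here, a patient with preserved post-treatment ovarian activity) receives a higher predictor score than a randomly chosen negative case, with ties counted as one half. A brute-force illustrative computation of that quantity:

```python
def roc_auc(scores_pos, scores_neg):
    """AUC as the probability that a random positive outscores a random
    negative; equivalent to the Mann-Whitney U statistic divided by n1*n2."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5  # ties count as half a win
    return wins / (len(scores_pos) * len(scores_neg))
```

Under this reading, the reported AUC of 0.753 for women under 40 means AMH correctly ranks a random positive above a random negative about three times out of four, versus roughly two out of three (0.678) in the older group.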
Probabilistic Risk Assessment (PRA): A Practical and Cost Effective Approach
NASA Technical Reports Server (NTRS)
Lee, Lydia L.; Ingegneri, Antonino J.; Djam, Melody
2006-01-01
The Lunar Reconnaissance Orbiter (LRO) is the first mission of the Robotic Lunar Exploration Program (RLEP), a space exploration venture to the Moon, Mars and beyond. The LRO mission includes a spacecraft developed by NASA Goddard Space Flight Center (GSFC) and seven instruments built by GSFC, Russia, and contractors across the nation. LRO is defined as a measurement mission, not a science mission. It emphasizes the overall objectives of obtaining data to facilitate returning mankind safely to the Moon in preparation for an eventual manned mission to Mars. As the first mission in response to the President's commitment to the journey of exploring the solar system and beyond (returning to the Moon in the next decade, then venturing further into the solar system, ultimately sending humans to Mars and beyond), LRO has high visibility to the public but limited resources and a tight schedule. This paper demonstrates how NASA's Lunar Reconnaissance Orbiter project office incorporated reliability analyses in assessing risks and performing design tradeoffs to ensure mission success. Risk assessment is performed using NASA Procedural Requirements (NPR) 8705.5, Probabilistic Risk Assessment (PRA) Procedures for NASA Programs and Projects, to formulate the probabilistic risk assessment (PRA). As required, a limited-scope PRA is being performed for the LRO project. The PRA is used to optimize the mission design within mandated budget, manpower, and schedule constraints. The technique that the LRO project office uses to perform the PRA relies on the application of a component failure database to quantify the potential mission success risks.
To ensure mission success in an efficient manner, low cost and tight schedule, the traditional reliability analyses, such as reliability predictions, Failure Modes and Effects Analysis (FMEA), and Fault Tree Analysis (FTA), are used to perform PRA for the large system of LRO with more than 14,000 piece parts and over 120 purchased or contractor built components.
Sim, Joong Hiong; Tong, Wen Ting; Hong, Wei-Han; Vadivelu, Jamuna; Hassan, Hamimah
2015-01-01
Assessment environment, synonymous with climate or atmosphere, is multifaceted. Although there are valid and reliable instruments for measuring the educational environment, there is no validated instrument for measuring the assessment environment in medical programs. This study aimed to develop an instrument for measuring students' perceptions of the assessment environment in an undergraduate medical program and to examine the psychometric properties of the new instrument. The Assessment Environment Questionnaire (AEQ), a 40-item, four-point (1=Strongly Disagree to 4=Strongly Agree) Likert scale instrument designed by the authors, was administered to medical undergraduates from the authors' institution. The response rate was 626/794 (78.84%). To establish construct validity, exploratory factor analysis (EFA) with principal component analysis and varimax rotation was conducted. To examine the internal consistency reliability of the instrument, Cronbach's α was computed. Mean scores for the entire AEQ and for each factor/subscale were calculated. Mean AEQ scores of students from different academic years and sex were examined. Six hundred and eleven completed questionnaires were analysed. EFA extracted four factors: feedback mechanism (seven items), learning and performance (five items), information on assessment (five items), and assessment system/procedure (three items), which together explained 56.72% of the variance. Based on the four extracted factors/subscales, the AEQ was reduced to 20 items. Cronbach's α for the 20-item AEQ was 0.89, whereas Cronbach's α for the four factors/subscales ranged from 0.71 to 0.87. Mean score for the AEQ was 2.68/4.00. The factor/subscale of 'feedback mechanism' recorded the lowest mean (2.39/4.00), whereas the factor/subscale of 'assessment system/procedure' scored the highest mean (2.92/4.00). Significant differences were found among the AEQ scores of students from different academic years. 
The AEQ is a valid and reliable instrument. Initial validation supports its use to measure students' perceptions of the assessment environment in an undergraduate medical program.
Lee, M R F; Tweed, J K S; Kim, E J; Scollan, N D
2012-12-01
When fractionation of meat lipids is not required, procedures such as saponification can be used to extract total fatty acids, reducing reliance on toxic organic compounds. However, saponification of muscle fatty acids is laborious, and requires extended heating times, and a second methylation step to convert the extracted fatty acids to fatty acid methyl esters prior to gas chromatography. Therefore the development of a more rapid direct methylation procedure would be of merit. The use of freeze-dried material for analysis is common and allows for greater homogenisation of the sample. The present study investigated the potential of using freeze-dried muscle samples and a direct bimethylation to analyse total fatty acids of meat (beef, chicken and lamb) in comparison with a saponification procedure followed by bimethylation. Both methods compared favourably for all major fatty acids measured. There was a minor difference in relation to the C18:1 trans 10 isomer with a greater (P<0.05) recovery with saponification. However, numerically the difference was small and likely as a result of approaching the limits of isomer identification by single column gas chromatography. Differences (P<0.001) between species were found for all fatty acids measured with no interaction effects. The described technique offers a simplified, quick and reliable alternative to saponification to analyse total fatty acids from muscle samples. Copyright © 2012 Elsevier Ltd. All rights reserved.
Frosini, Francesco; Miniati, Roberto; Grillone, Saverio; Dori, Fabrizio; Gentili, Guido Biffi; Belardinelli, Andrea
2016-11-14
This study proposes and tests an integrated methodology involving Health Technology Assessment (HTA) and Failure Modes, Effects and Criticality Analysis (FMECA) for the assessment of specific aspects of robotic surgery involving safety, process and technology. The integrated methodology consists of the application of specific techniques from HTA joined with the most typical models from reliability engineering, such as FMEA/FMECA. The study also included on-site data collection and interviews with medical personnel. The total number of robotic procedures included in the analysis was 44: 28 for urology and 16 for general surgery. The main outcomes refer to the comparative evaluation of robotic, laparoscopic and open surgery. Risk analysis and mitigation interventions come from the FMECA application. The small sample size available for the study represents an important bias, especially for the reliability of the clinical outcomes. Despite this, the study seems to confirm the better trend for robotic surgical times in comparison with the open technique, as well as confirming the clinical benefits of robotics in urology. The situation is more complex for general surgery, where the only directly measured clinical benefit of robotics is the lowest blood transfusion rate.
Attachment techniques for high temperature strain
NASA Astrophysics Data System (ADS)
Wnuk, Steve P., Jr.
1993-01-01
Attachment methods for making resistive strain measurements to 2500 F were studied. A survey of available strain gages and attachment techniques was made, and the results are compiled for metal and carbon composite test materials. A theoretical analysis of strain transfer into a bonded strain gage was made, and the important physical parameters of the strain transfer medium, the ceramic matrix, were identified. A pull tester was used to perform pull-out tests on commonly used strain gage cements; all cements tested displayed adequate strength for good strain transfer. Rokide flame sprayed coatings produced significantly stronger bonds than ceramic cements. An in-depth study of the flame spray process produced simplified installation procedures, which also resulted in greater reliability and durability. Application procedures incorporating improvements made during this program are appended to the report. Strain gages installed on carbon composites, Rene' 41, 316 stainless steel, and TZM using attachment techniques developed during this program were successfully tested to 2500 F. Photographs of installation techniques, test procedures, and graphs of the test data are included in this report.
NASA Astrophysics Data System (ADS)
Kanjilal, Oindrila; Manohar, C. S.
2017-07-01
The study considers the problem of simulation-based time-variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations.
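The variance-reduction idea behind such importance sampling can be illustrated on a toy static problem: estimating a small normal tail probability by shifting the sampling density toward the failure region, as a stand-in for the Girsanov control shift. An illustrative sketch, not the paper's scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

# Limit state: failure when a standard normal load effect X exceeds b.
b = 3.5
exact = 2.326e-4  # ~ 1 - Phi(3.5)

n = 20_000
# Crude Monte Carlo: very few samples land in the failure region.
x = rng.standard_normal(n)
p_mc = np.mean(x > b)

# Importance sampling: draw from a density shifted to the failure
# region (mean b, a FORM-like design-point shift) and reweight each
# sample by the likelihood ratio phi(y) / phi(y - b) = exp(-b*y + b^2/2).
y = rng.standard_normal(n) + b
w = np.exp(-b * y + 0.5 * b**2)
p_is = np.mean((y > b) * w)
```

With the same 20 000 samples, the crude estimator sees only a handful of failures, while the shifted estimator's relative error drops to the percent level.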
Reliability Prediction Approaches For Domestic Intelligent Electric Energy Meter Based on IEC62380
NASA Astrophysics Data System (ADS)
Li, Ning; Tong, Guanghua; Yang, Jincheng; Sun, Guodong; Han, Dongjun; Wang, Guixian
2018-01-01
The reliability of the intelligent electric energy meter is a crucial issue considering its large-scale application and the safety of the national intelligent grid. This paper develops a reliability prediction procedure for the domestic intelligent electric energy meter according to IEC62380, with particular attention to determining model parameters under domestic working conditions. A case study is provided to demonstrate the effectiveness and validity of the procedure.
Osorio, Victoria; Schriks, Merijn; Vughs, Dennis; de Voogt, Pim; Kolkman, Annemieke
2018-08-15
A novel sample preparation procedure relying on Solid Phase Extraction (SPE), combining different sorbent materials on a sequential-based cartridge, was optimized and validated for the enrichment of 117 widely diverse contaminants of emerging concern (CECs) from surface waters (SW) and further combined chemical and biological analysis on the subsequent extracts. A liquid chromatography coupled to high-resolution tandem mass spectrometry (LC-(HR)MS/MS) protocol was optimized and validated for the quantitative analysis of organic CECs in SW extracts. A battery of in vitro CALUX bioassays for the assessment of endocrine, metabolic and genotoxic interference and oxidative stress was performed on the same SW extracts. Satisfactory recoveries (70-130%) and precision (< 30%) were obtained for the majority of compounds tested. Internal standard calibration curves used for quantification of CECs achieved the linearity criterion (r² > 0.99) over three orders of magnitude. Instrumental limits of detection and method limits of quantification were 1-96 pg injected and 0.1-58 ng/L, respectively, while the corresponding intra-day and inter-day precision did not exceed 11% and 20%. The developed procedure was successfully applied for the combined chemical and toxicological assessment of SW intended for drinking water supply. Levels of compounds varied from < 10 ng/L to < 500 ng/L. Endocrine (i.e. estrogenic and anti-androgenic) and metabolic interference responses were observed. Given the demonstrated reliability of the validated sample preparation method, the authors propose its integration in an effect-directed analysis procedure for a proper evaluation of SW quality and hazard assessment of CECs. Copyright © 2018 Elsevier B.V. All rights reserved.
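The recovery and precision criteria quoted above reduce to a simple replicate calculation. A sketch with invented spike data (not from the study):

```python
import numpy as np

# Hypothetical spike-recovery replicates for one analyte: measured
# concentrations (ng/L) from water spiked at a nominal 100 ng/L.
nominal = 100.0
measured = np.array([92.0, 97.0, 88.0, 105.0, 95.0, 101.0])

recovery = measured.mean() / nominal * 100.0          # mean recovery, %
rsd = measured.std(ddof=1) / measured.mean() * 100.0  # precision, %RSD

# acceptance window from the abstract: 70-130 % recovery, precision < 30 %
ok = 70.0 <= recovery <= 130.0 and rsd < 30.0
```

Each of the 117 CECs would be screened this way, with compounds falling outside the window flagged rather than quantified.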
Prevalidation in pharmaceutical analysis. Part I. Fundamentals and critical discussion.
Grdinić, Vladimir; Vuković, Jadranka
2004-05-28
A complete prevalidation, as a basic strategy for quality control and standardization of analytical procedures, was introduced. A fast and simple prevalidation methodology based on mathematical/statistical evaluation of a reduced number of experiments (N ≤ 24) was elaborated, and guidelines as well as algorithms were given in detail. This strategy was produced for pharmaceutical applications and dedicated to the preliminary evaluation of analytical methods where a linear calibration model, which very often occurs in practice, could be the most appropriate to fit experimental data. The requirements presented in this paper should therefore help the analyst to design and perform the minimum number of prevalidation experiments needed to obtain all the information required to evaluate and demonstrate the reliability of the analytical procedure. The complete prevalidation process included characterization of analytical groups, checking of two limiting groups, testing of data homogeneity, establishment of analytical functions, recognition of outliers, evaluation of limiting values and extraction of prevalidation parameters. Moreover, a system of diagnosis for each particular prevalidation step was suggested. As an illustrative example of the feasibility of the prevalidation methodology, a Vis-spectrophotometric procedure for the determination of tannins with Folin-Ciocalteu's phenol reagent was selected from among a great number of analytical procedures. The favourable metrological characteristics of this analytical procedure, expressed as prevalidation figures of merit, establish prevalidation as a valuable concept in the preliminary evaluation of the quality of analytical procedures.
NASA Astrophysics Data System (ADS)
Nair, Nirmal-Kumar
As open access market principles are applied to power systems, significant changes are happening in their planning, operation and control. In the emerging marketplace, systems are operating under higher loading conditions as markets focus greater attention on operating costs than on stability and security margins. Since operating stability is a basic requirement for any power system, there is a need for newer tools to ensure stability and security margins are strictly enforced in the competitive marketplace. This dissertation investigates issues associated with incorporating voltage security into the unbundled operating environment of electricity markets. It includes addressing voltage security in the monitoring, operational and planning horizons of the restructured power system. This dissertation presents a new decomposition procedure to estimate voltage security usage by transactions. The procedure follows physical law and uses an index that can be monitored knowing the state of the system. The expression derived is based on composite market coordination models that have both PoolCo and OpCo transactions in a shared stressed transmission grid. Our procedure is able to equitably distinguish the impacts of individual transactions on voltage stability, at load buses, in a simple and fast manner. This dissertation formulates a new voltage stability constrained optimal power flow (VSCOPF) using a simple voltage security index. In modern planning, composite power system reliability analysis that encompasses both adequacy and security issues is being developed. We have illustrated the applicability of our VSCOPF in composite reliability analysis. This dissertation also delves into the various applications of the voltage security index. Increasingly, FACTS devices are being used in restructured markets to mitigate a variety of operational problems. Their control effects on voltage security are demonstrated using our VSCOPF procedure.
Further, this dissertation investigates the application of steady state voltage stability index to detect potential dynamic voltage collapse. Finally, this dissertation examines developments in representation, standardization, communication and exchange of power system data. Power system data is the key input to all analytical engines for system operation, monitoring and control. Data exchange and dissemination could impact voltage security evaluation and therefore needs to be critically examined.
[Pressure distribution measurements during use of wheelchairs].
Meiners, T; Friedrich, G; Krüger, A; Böhm, V
2001-04-01
There is a growing number of mobility-impaired and wheelchair-dependent patients caused by diseases and injuries of the central nervous system. The risk is high for pressure sores to develop due to disturbances of the motor, sensory, and autonomic nervous system. Numerous seating systems for prophylaxis and treatment of decubitus ulcer are available. To identify risk parameters, the literature on animal experiments regarding pressure ulcers was reviewed. A study on the reproducibility of the analysis method with capacitive sensors tested in ten paraplegics with 470 measurements is presented. It shows the reliability of the procedure.
NASA Astrophysics Data System (ADS)
Sessa, Francesco; D'Angelo, Paola; Migliorati, Valentina
2018-01-01
In this work we have developed an analytical procedure to identify metal ion coordination geometries in liquid media based on the calculation of Combined Distribution Functions (CDFs) starting from Molecular Dynamics (MD) simulations. CDFs provide a fingerprint which can be easily and unambiguously assigned to a reference polyhedron. The CDF analysis has been tested on five systems and has proven to reliably identify the correct geometries of several ion coordination complexes. This tool is simple and general and can be efficiently applied to different MD simulations of liquid systems.
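A CDF in this sense is a joint histogram of two structural coordinates whose peak pattern serves as the geometric fingerprint. A minimal sketch with synthetic distance/angle data mimicking an octahedral-like complex (illustrative only; the authors' actual coordinates and reference polyhedra are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic per-frame coordination coordinates: ion-oxygen distances
# (angstrom) and O-ion-O angles (degrees), faking an octahedral-like
# signature (cis angles near 90, trans angles near 180).
n = 5000
dist = rng.normal(2.1, 0.05, n)
ang = np.concatenate([rng.normal(90.0, 5.0, 4 * n // 5),
                      rng.normal(180.0, 5.0, n // 5)])

# Combined distribution function: normalised 2D histogram of the pair.
H, d_edges, a_edges = np.histogram2d(dist, ang, bins=[40, 60],
                                     range=[[1.8, 2.4], [60.0, 200.0]])
H /= H.sum()

# The dominant peak locates the characteristic distance/angle pair,
# which would then be matched against a reference polyhedron.
i, j = np.unravel_index(H.argmax(), H.shape)
peak_dist = 0.5 * (d_edges[i] + d_edges[i + 1])
peak_angle = 0.5 * (a_edges[j] + a_edges[j + 1])
```

In an actual MD workflow, `dist` and `ang` would be extracted frame by frame from the trajectory rather than sampled from Gaussians.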
NASA Technical Reports Server (NTRS)
Spanos, P. D.; Cao, T. T.; Hamilton, D. A.; Nelson, D. A. R.
1989-01-01
An efficient method for the load analysis of Shuttle-payload systems with linear or nonlinear attachment interfaces is presented which allows the kinematics of the interface degrees of freedom at a given time to be evaluated without calculating the combined system modal representation of the Space Shuttle and its payload. For the case of a nonlinear dynamic model, an iterative procedure is employed to converge the nonlinear terms of the equations of motion to reliable values. Results are presented for a Shuttle abort landing event.
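The iterative convergence of nonlinear interface terms can be sketched, in a static analogue, as a fixed-point loop in which the nonlinear force is re-evaluated until the response stops changing. This is an illustrative sketch of the idea only, not the paper's Shuttle-payload formulation:

```python
import numpy as np

def solve_step(K, f_ext, f_nl, x0, tol=1e-10, max_iter=100):
    """Iterate x = K^{-1}(f_ext + f_nl(x)) until the update is below tol."""
    x = x0.copy()
    for _ in range(max_iter):
        x_new = np.linalg.solve(K, f_ext + f_nl(x))
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("nonlinear interface iteration did not converge")

# Toy 2-dof system with a mild cubic softening interface force.
K = np.array([[4.0, -1.0], [-1.0, 3.0]])
f_ext = np.array([1.0, 0.5])
f_nl = lambda x: -0.1 * x**3
x = solve_step(K, f_ext, f_nl, np.zeros(2))
```

Each pass replaces the nonlinear term with its value at the previous iterate; for the mild cubic force here the map is a contraction and the loop converges in a few iterations.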
NASA Astrophysics Data System (ADS)
Schmitt, Kara Anne
This research aims to show that strict adherence to procedures and rigid compliance to process in the US nuclear industry may not prevent incidents or increase safety. According to the Institute of Nuclear Power Operations, the nuclear power industry has seen a recent rise in events, and this research claims that a contributing factor to this rise is organizational and cultural, and based on people's overreliance on procedures and policy. Understanding the proper balance of function allocation, automation and human decision-making is imperative to creating a nuclear power plant that is safe, efficient, and reliable. This research claims that new generations of operators are less engaged in thinking because they have been instructed to follow procedures to a fault. According to operators, they were once expected to know the plant and its interrelations, but organizationally more importance is now put on following procedure and policy. Literature reviews were performed, experts were questioned, and a model for context analysis was developed. The Context Analysis Method for Identifying Design Solutions (CAMIDS) Model was created, verified and validated through both peer review and application in real-world scenarios in active nuclear power plant simulators. These experiments supported the claim that strict adherence and rigid compliance to procedures may not increase safety, by studying the industry's propensity for following incorrect procedures and the circumstances in which this directly affects the safety or security of the plant. The findings of this research indicate that the younger generations of operators rely heavily on procedures, and the organizational pressure of required compliance may lead to incidents within the plant because operators feel pressured into following the rules and policy above performing the correct actions in a timely manner. The findings support computer-based procedures, efficient alarm systems, and skill-of-the-craft matrices.
The solution to the problems facing the industry includes in-depth, multiple-fault failure training that tests the operator's knowledge of the situation. This builds operator collaboration, competence and confidence to know what to do, and when to do it, in response to an emergency situation. Strict adherence to procedures and rigid compliance to process may not prevent incidents or increase safety; building operators' fundamental skills of collaboration, competence and confidence will.
10 CFR 712.15 - Management evaluation.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 4 2014-01-01 2014-01-01 false Management evaluation. 712.15 Section 712.15 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability... workplace substance abuse program for DOE contractor employees, and DOE Order 3792.3, “Drug-Free Federal...
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Applicability. 712.2 Section 712.2 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability Program General Provisions § 712.2 Applicability. The HRP applies to all applicants for, or current employees of...
10 CFR 712.16 - DOE security review.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 4 2011-01-01 2011-01-01 false DOE security review. 712.16 Section 712.16 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability... part. (c) Any mental/personality disorder or behavioral issues found in a personnel security file...
ERIC Educational Resources Information Center
Lyons, Kevin J.; Greening, Shirley; Robeson, Mary
2000-01-01
A modified Delphi procedure assessed the content validity of accreditation standards for cardiovascular technologists, cytotechnologists, medical sonographers, electroneurodiagnostic technologists, medical assistants, perfusionists, physician assistants, and surgical technologists. Although validity and reliability were extremely high, some…
DOT National Transportation Integrated Search
2016-06-01
Load and Resistance Factor Rating (LRFR) is a reliability-based rating procedure complementary to Load and Resistance Factor Design (LRFD). The intent of LRFR is to provide consistent reliability for all bridges regardless of in-situ condition. The p...
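The entry is truncated, but the LRFR check it refers to is a rating factor comparing factored capacity with factored demands. A hedged sketch using the common AASHTO-style form, with illustrative load factors and moments (not values from this report):

```python
def rating_factor(C, DC, DW, LL_IM, g_DC=1.25, g_DW=1.50, g_LL=1.75):
    """RF = (C - g_DC*DC - g_DW*DW) / (g_LL * LL_IM).

    C      factored member capacity
    DC     dead load of structural components
    DW     dead load of wearing surface/utilities
    LL_IM  live load effect including dynamic allowance
    """
    return (C - g_DC * DC - g_DW * DW) / (g_LL * LL_IM)

# Illustrative girder moments in kip-ft.
rf = rating_factor(C=3200.0, DC=900.0, DW=150.0, LL_IM=800.0)
passes = rf >= 1.0
```

RF ≥ 1.0 means the member rates adequately for the posed live load; the reliability-based calibration of the load factors is what gives the "consistent reliability" the abstract mentions.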
10 CFR 712.10 - Designation of HRP positions.
Code of Federal Regulations, 2012 CFR
2012-01-01
... duties or has responsibility for working with, protecting, or transporting nuclear explosives, nuclear... 10 Energy 4 2012-01-01 2012-01-01 false Designation of HRP positions. 712.10 Section 712.10 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability...
10 CFR 712.17 - Instructional requirements.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 4 2011-01-01 2011-01-01 false Instructional requirements. 712.17 Section 712.17 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability... responding to behavioral change and aberrant or unusual behavior that may result in a risk to national...
32 CFR 34.42 - Retention and access requirements for records.
Code of Federal Regulations, 2014 CFR
2014-07-01
... procedures shall maintain the integrity, reliability, and security of the original computer data. Recipients... (such as documents related to computer usage chargeback rates), along with their supporting records... this section is maintained on a computer, recipients shall retain the computer data on a reliable...
A reliability-based cost effective fail-safe design procedure
NASA Technical Reports Server (NTRS)
Hanagud, S.; Uppaluri, B.
1976-01-01
The authors have developed a methodology for cost-effective fatigue design of structures subject to random fatigue loading. A stochastic model for fatigue crack propagation under random loading is discussed. Fracture mechanics is then used to estimate the parameters of the model and the residual strength of structures with cracks. The stochastic model and residual strength variations have been used to develop procedures for estimating the probability of failure and its changes with inspection frequency. This information on reliability is then used to construct an objective function in terms of either a total weight function or a cost function. A procedure for selecting the design variables, subject to constraints, by optimizing the objective function is illustrated with examples. In particular, the optimum design of a stiffened panel is discussed.
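For the special case of independent normal load and residual strength, the probability-of-failure calculation such procedures build on has a closed form. A minimal sketch with invented numbers (the paper's stochastic crack-growth model is more elaborate):

```python
from math import erf, sqrt

# Assumed normal load effect S and residual strength R: the safety
# margin M = R - S is normal, so P_f = P(M < 0) = Phi(-beta) with
# beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2).
mu_R, sd_R = 50.0, 5.0   # residual strength (ksi), illustrative
mu_S, sd_S = 30.0, 4.0   # load effect (ksi), illustrative

beta = (mu_R - mu_S) / sqrt(sd_R**2 + sd_S**2)   # reliability index
Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0))) # standard normal CDF
p_f = Phi(-beta)
```

In the fail-safe setting, `mu_R` would degrade as the crack grows between inspections, so `p_f` becomes a function of inspection frequency, which is what enters the cost objective.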
Stoffels, I; Dissemond, J; Körber, A; Hillen, U; Poeppel, T; Schadendorf, D; Klode, J
2011-03-01
Sentinel lymph node excision (SLNE) for the detection of regional nodal metastases and staging of malignant melanoma has resulted in some controversies in international discussions, as it is a cost-intensive surgical intervention with potentially significant morbidity. The present retrospective study seeks to clarify the effectiveness and reliability of SLNE performed under tumescent local anaesthesia (TLA) and whether SLNE performed under TLA can reduce costs and morbidity. Therefore, our study is a comparison of SLNE performed under TLA and general anaesthesia (GA). We retrospectively analysed data from 300 patients with primary malignant melanoma with a Breslow index of ≥1.0 mm. Altogether, 211 (70.3%) patients underwent SLNE under TLA and 89 (29.7%) patients underwent SLNE under GA. A total of 637 sentinel lymph nodes (SLN) were removed. In the TLA group 1.98 SLN/patient and in the GA group 2.46 SLN/patient were removed (median value). Seventy patients (23.3%) had a positive SLN. No major complications occurred. The costs for SLNE were significantly less for the SLNE in a procedures room performed under TLA (mean € 30.64) compared with SLNE in an operating room under GA (mean € 326.14, P<0.0001). In conclusion, SLNE performed under TLA is safe, reliable, and cost-efficient and could become the new gold standard in sentinel lymph node diagnostic procedures. © 2010 The Authors. Journal of the European Academy of Dermatology and Venereology © 2010 European Academy of Dermatology and Venereology.
System reliability approaches for advanced propulsion system structures
NASA Technical Reports Server (NTRS)
Cruse, T. A.; Mahadevan, S.
1991-01-01
This paper identifies significant issues that pertain to the estimation and use of system reliability in the design of advanced propulsion system structures. Linkages between the reliabilities of individual components and their effect on system design issues such as performance, cost, availability, and certification are examined. The need for system reliability computation to address the continuum nature of propulsion system structures and synergistic progressive damage modes has been highlighted. Available system reliability models are observed to apply only to discrete systems. Therefore a sequential structural reanalysis procedure is formulated to rigorously compute the conditional dependencies between various failure modes. The method is developed in a manner that supports both top-down and bottom-up analyses in system reliability.
Self-Motion Perception: Assessment by Real-Time Computer Generated Animations
NASA Technical Reports Server (NTRS)
Parker, Donald E.
1999-01-01
Our overall goal is to develop materials and procedures for assessing vestibular contributions to spatial cognition. The specific objective of the research described in this paper is to evaluate computer-generated animations as potential tools for studying self-orientation and self-motion perception. Specific questions addressed in this study included the following. First, does a non-verbal perceptual reporting procedure using real-time animations improve assessment of spatial orientation? Are reports reliable? Second, do reports confirm expectations based on stimuli to the vestibular apparatus? Third, can reliable reports be obtained when self-motion description vocabulary training is omitted?
Direct labeling of serum proteins by fluorescent dye for antibody microarray.
Klimushina, M V; Gumanova, N G; Metelskaya, V A
2017-05-06
Analysis of serum proteome by antibody microarray is used to identify novel biomarkers and to study signaling pathways including protein phosphorylation and protein-protein interactions. Labeling of serum proteins is important for optimal performance of the antibody microarray. Proper choice of fluorescent label and optimal concentration of protein loaded on the microarray ensure good quality of imaging that can be reliably scanned and processed by the software. We have optimized direct serum protein labeling using fluorescent dye Arrayit Green 540 (Arrayit Corporation, USA) for antibody microarray. Optimized procedure produces high quality images that can be readily scanned and used for statistical analysis of protein composition of the serum. Copyright © 2017 Elsevier Inc. All rights reserved.
Sampling and analysis techniques for monitoring serum for trace elements.
Ericson, S P; McHalsky, M L; Rabinow, B E; Kronholm, K G; Arceo, C S; Weltzer, J A; Ayd, S W
1986-07-01
We describe techniques for controlling contamination in the sampling and analysis of human serum for trace metals. The relatively simple procedures do not require clean-room conditions. The atomic absorption and atomic emission methods used have been applied in studying zinc, copper, chromium, manganese, molybdenum, selenium, and aluminum concentrations. Values obtained for a group of 16 normal subjects agree with the most reliable values reported in the literature, obtained by much more elaborate techniques. All of these metals can be measured in 3 to 4 mL of serum. The methods may prove especially useful in monitoring concentrations of essential trace elements in blood of patients being maintained on total parenteral nutrition.
Chericoni, Silvio; Stefanelli, Fabio; Da Valle, Ylenia; Giusiani, Mario
2015-09-01
A sensitive and reliable method for the extraction and quantification of benzoylecgonine (BZE) and cocaine (COC) in urine is presented. Propyl chloroformate was used as the derivatizing agent and was added directly to the urine sample; the propyl derivative and COC were then recovered by a liquid-liquid extraction procedure. Gas chromatography-mass spectrometry was used to detect the analytes in selected ion monitoring mode. The method proved to be precise for BZE and COC in terms of both intraday and interday analysis, with a coefficient of variation (CV) <6%. Limits of detection (LOD) were 2.7 ng/mL for BZE and 1.4 ng/mL for COC. The calibration curve showed a linear relationship for BZE and COC (r2>0.999 and >0.997, respectively) within the range investigated. The method, applied to thirty authentic samples, proved to be very simple, fast, and reliable, so it can be easily applied in routine analysis for the quantification of BZE and COC in urine samples. © 2015 American Academy of Forensic Sciences.
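The linearity check and back-calculation behind such a method are a straight least-squares fit. A sketch with invented calibration points (not the study's data):

```python
import numpy as np

# Hypothetical calibration data: spiked urine concentrations (ng/mL)
# versus instrument response (peak-area ratio to internal standard).
conc = np.array([5.0, 25.0, 50.0, 100.0, 250.0, 500.0])
resp = np.array([0.021, 0.104, 0.209, 0.418, 1.040, 2.090])

slope, intercept = np.polyfit(conc, resp, 1)
pred = slope * conc + intercept
ss_res = np.sum((resp - pred) ** 2)
ss_tot = np.sum((resp - resp.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

# acceptance check analogous to the abstract: r^2 > 0.997 over the range
assert r2 > 0.997
# back-calculate an unknown sample from its measured response
unknown = (0.350 - intercept) / slope
```

Real validation would add replicate curves across days to derive the intraday/interday CV figures, and blank-based noise measurements for the LOD.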
Chatterji, Madhabi
2002-01-01
This study examines validity of data generated by the School Readiness for Reforms: Leader Questionnaire (SRR-LQ) using an iterative procedure that combines classical and Rasch rating scale analysis. Following content-validation and pilot-testing, principal axis factor extraction and promax rotation of factors yielded a five factor structure consistent with the content-validated subscales of the original instrument. Factors were identified based on inspection of pattern and structure coefficients. The rotated factor pattern, inter-factor correlations, convergent validity coefficients, and Cronbach's alpha reliability estimates supported the hypothesized construct properties. To further examine unidimensionality and efficacy of the rating scale structures, item-level data from each factor-defined subscale were subjected to analysis with the Rasch rating scale model. Data-to-model fit statistics and separation reliability for items and persons met acceptable criteria. Rating scale results suggested consistency of expected and observed step difficulties in rating categories, and correspondence of step calibrations with increases in the underlying variables. The combined approach yielded more comprehensive diagnostic information on the quality of the five SRR-LQ subscales; further research is continuing.
NASA reliability preferred practices for design and test
NASA Technical Reports Server (NTRS)
1991-01-01
This manual was produced to communicate, within the aerospace community, design practices that have contributed to NASA mission success. The information represents the best technical advice that NASA has to offer on reliability design and test practices. Topics covered include reliability practices, comprising design criteria, test procedures, and analytical techniques that have been applied to previous space flight programs; and reliability guidelines, comprising techniques currently applied to space flight projects where sufficient information exists to certify that the technique will contribute to mission success.
Crespo-Eguilaz, N; Magallon, S; Sanchez-Carpintero, R; Narbona, J
2016-01-01
The Children's Communication Checklist (CCC) by Bishop is a useful scale for evaluation of pragmatic verbal abilities in school children. The aim of the study is to ascertain the validity and reliability of the CCC in Spanish. Answers to the CCC items by parents of 360 children with normal intelligence were analyzed. There were five groups: 160 control children; 68 children with attention deficit hyperactivity disorder, 77 with procedural non-verbal disorder, 25 children with social communication disorder and 30 with autism spectrum disorder. Investigations included: factorial analysis in order to cluster checklist items, reliability analyses of the proposed scales and discriminant analysis to check whether the scale correctly classifies children with pragmatic verbal abilities. Seven factors were obtained (Kaiser-Meyer-Olkin: 0.852) with moderate similarity with those of the original scale: social relationships, interests, and five more that can be grouped into pragmatic verbal ability (conversational abilities, coherence-comprehension, empathy, nonverbal communication and appropriateness). All factors are significantly correlated with each other in the control group, and the five that compose pragmatic verbal ability correlate with each other in the clinical groups (Pearson r). The scales have good reliability (Cronbach's alpha: 0.914). The questionnaire correctly classifies 98.9% of grouped cases with and without pragmatic disorder and 78% of subjects in their appropriate clinical group. In addition, the questionnaire differentiates the pathologies according to the presence and intensity of the symptoms. This Spanish version of the CCC is highly valid and reliable. The proposed statistics can be used as normative-reference values.
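Cronbach's alpha, reported above as 0.914, is computed from the item variances and the variance of the total score; a minimal sketch with invented responses (not CCC data):

```python
import statistics

def cronbach_alpha(item_scores):
    """item_scores: one list per item, each holding one score per respondent.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(item_scores)
    n = len(item_scores[0])
    totals = [sum(item[j] for item in item_scores) for j in range(n)]
    sum_item_var = sum(statistics.variance(item) for item in item_scores)
    return k / (k - 1) * (1 - sum_item_var / statistics.variance(totals))

# Hypothetical 3-item scale answered by 5 respondents
items = [
    [2, 3, 4, 4, 5],
    [1, 3, 4, 5, 5],
    [2, 2, 4, 4, 5],
]
alpha = cronbach_alpha(items)
```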
Psychometric Properties of the Persian Translation of the Sexual Quality of Life–Male Questionnaire
Maasoumi, Raziyeh; Mokarami, Hamidreza; Nazifi, Morteza; Stallones, Lorann; Taban, Abrahim; Yazdani Aval, Mohsen; Samimi, Kazem
2016-01-01
Sexual dysfunction has been demonstrated to be related to a poor quality of life. These dysfunctions are especially prevalent among men. This cross-sectional study aimed to investigate the psychometric properties of the Persian translation of the Sexual Quality of Life–Male (SQOL-M), translated and adapted to measure sexual quality of life among Iranian men. Forward–backward procedures were applied in translating the original SQOL-M into Persian, and then the psychometric properties of the Persian translation of the SQOL-M were studied. A total of 181 participants (23-60 years old) were included in the study. Validity was assessed by construct validity using confirmatory factor analysis, convergent validity, and content validity. The international index of erectile function (IIEF) and the work ability index were used to study the convergent validity. Reliability was evaluated through internal consistency and test–retest reliability analyses. The results from confirmatory factor analysis confirmed a one-factor solution for the Persian version of the SQOL-M. Content validity of the translated measure was endorsed by 10 specialists. Pearson correlations indicated that work ability index score, dimensions of the IIEF, and the IIEF total score were positively correlated with the Persian version of the SQOL-M (p < .001). Reliability evaluation indicated a high internal consistency and test–retest reliability. The Cronbach’s alpha coefficient and intraclass correlation coefficients were .96 and .95, respectively. Results indicated that the Persian version of the SQOL-M has good to excellent psychometric properties and can be used to assess the sexual quality of life among Iranian men. PMID:26856758
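The test-retest figure above is an intraclass correlation coefficient (ICC); one common one-way random-effects form, ICC(1,1), can be sketched as follows (the score pairs are invented, not SQOL-M data):

```python
import statistics

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1). ratings: one list per subject,
    each holding that subject's k repeated measurements."""
    n, k = len(ratings), len(ratings[0])
    grand = statistics.mean(v for row in ratings for v in row)
    ms_between = k * sum((statistics.mean(row) - grand) ** 2
                         for row in ratings) / (n - 1)
    ms_within = sum((v - statistics.mean(row)) ** 2
                    for row in ratings for v in row) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical test/retest scores for five subjects
icc = icc_oneway([[10, 11], [20, 19], [15, 16], [30, 29], [25, 26]])
```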
Nedjat, Saharnaz; Montazeri, Ali; Holakouie, Kourosh; Mohammad, Kazem; Majdzadeh, Reza
2008-03-21
The objective of the current study was to translate and validate the Iranian version of the WHOQOL-BREF. A forward-backward translation procedure was followed to develop the Iranian version of the questionnaire. A stratified random sample of individuals aged 18 and over completed the questionnaire in Tehran, Iran. Psychometric properties of the instrument including reliability (internal consistency and test-retest analysis), validity (known groups' comparison and convergent validity), and items' correlation with their hypothesized domains were assessed. In all, 1164 individuals entered the study. The mean age of the participants was 36.6 (SD = 13.2) years, and the mean years of their formal education was 10.7 (SD = 4.4). In general, the questionnaire was well received, and all domains met the minimum reliability standards (Cronbach's alpha and intra-class correlation > 0.7), except for social relationships (alpha = 0.55). Performing known groups' comparison analysis, the results indicated that the questionnaire discriminated well between subgroups of the study samples differing in their health status. Since the WHOQOL-BREF demonstrated statistically significant correlation with the Iranian version of the SF-36 as expected, the convergent validity of the questionnaire was found to be desirable. The correlation matrix also showed satisfactory results in all domains except for social relationships. This study has provided some preliminary evidence of the reliability and validity of the WHOQOL-BREF to be used in Iran, though further research is required to address the reliability problems in one of the dimensions and the instrument's factor structure.
Bartolazzi, Armando; Bellotti, Carlo; Sciacchitano, Salvatore
2012-01-01
In the last decade, the β-galactosyl binding protein galectin-3 has been the object of extensive molecular, structural, and functional studies aimed to clarify its biological role in cancer. Multicenter studies also helped to discover the potential clinical value of galectin-3 expression analysis in distinguishing, preoperatively, benign from malignant thyroid nodules. As a consequence, galectin-3 is receiving significant attention as a tumor marker for thyroid cancer diagnosis, but some conflicting results, mostly owing to methodological problems, have been published. The possibility to apply preoperatively a reliable galectin-3 test method on fine needle aspiration biopsy (FNA)-derived thyroid cells represents an important achievement. When correctly applied, the method consistently reduces the gray area of thyroid FNA cytology, helping to avoid unnecessary thyroid surgery. Although the efficacy and reliability of the galectin-3 test method have been extensively proved in several studies, its translation in the clinical setting requires well-standardized reagents and procedures. After a decade of experimental work on galectin-3-related basic and translational research projects, the major methodological problems that may potentially impair the diagnostic performance of galectin-3 immunotargeting are highlighted and discussed in detail. A standardized protocol for a reliable galectin-3 expression analysis is finally provided. The aim of this contribution is to improve the clinical management of patients with thyroid nodules, promoting the preoperative use of a reliable galectin-3 test method as an ancillary technique to conventional thyroid FNA cytology. The final goal is to decrease unnecessary thyroid surgery and its related social costs.
Lin, Shike; Chaiear, Naesinee; Khiewyoo, Jiraporn; Wu, Bin; Johns, Nutjaree Pratheepawanit
2013-03-01
As quality of work-life (QWL) among nurses affects both patient care and institutional standards, assessment of QWL for the profession is important. The Work-related Quality of Life Scale (WRQOLS) is a reliable QWL assessment tool for the nursing profession. The aim was to develop a Chinese version of the WRQOLS-2 and to examine its psychometric properties as an instrument to assess QWL for the nursing profession in China. Forward and back translating procedures were used to develop the Chinese version of the WRQOLS-2. Six nursing experts participated in content validity evaluation and 352 registered nurses (RNs) participated in the tests. After a two-week interval, 70 of the RNs were retested. Structural validity was examined by principal components analysis and the Cronbach's alphas calculated. The independent sample t-test and intra-class correlation coefficient were used to analyze known-group validity and test-retest reliability, respectively. One item was rephrased for adaptation to Chinese organizational cultures. The content validity index of the scale was 0.98. Principal components analysis resulted in a seven-factor model, accounting for 62% of total variance, with Cronbach's alphas for subscales ranging from 0.71 to 0.88. Known-group validity was established by comparing participants in permanent employment vs. contract employment (t = 2.895, p < 0.01). Good test-retest reliability was observed (r = 0.88, p < 0.01). The translated Chinese version of the WRQOLS-2 has sufficient validity and reliability to be used to evaluate QWL among nurses in mainland China.
Electronic device for endosurgical skills training (EDEST): study of reliability.
Pagador, J B; Uson, J; Sánchez, M A; Moyano, J L; Moreno, J; Bustos, P; Mateos, J; Sánchez-Margallo, F M
2011-05-01
Minimally Invasive Surgery procedures are commonly used in many surgical practices, but surgeons need specific training models and devices due to their difficulty and complexity. In this paper, an innovative electronic device for endosurgical skills training (EDEST) is presented, together with a study of its reliability. Different electronic components were used to compose this new training device. The EDEST focused on two basic laparoscopic tasks: triangulation and coordination manoeuvres. Configuration and statistical software was developed to complement the functionality of the device, and a calibration method was used to assure its proper working. A total of 35 subjects (8 experts and 27 novices) took part in checking the reliability of the system using mean-time-between-failures (MTBF) analysis. Configuration values for the triangulation and coordination exercises were calculated as a 0.5 s limit threshold and an 800-11,000 lux range of light intensity, respectively. Zero errors in 1,050 executions (0%) for triangulation and 21 errors in 5,670 executions (0.37%) for coordination were obtained. An MTBF of 2.97 h was obtained. The results show that the reliability of the EDEST device is acceptable when used under previously defined light conditions. These results, along with previous work, suggest that the EDEST device can help surgeons during the first training stages.
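The reliability figures quoted above (21 errors in 5,670 executions, MTBF of 2.97 h) follow from simple ratios; in the sketch below the total operating time is back-computed for illustration and is an assumption, not a value given in the abstract:

```python
def error_rate_percent(errors, executions):
    """Observed failure proportion, in percent."""
    return errors / executions * 100.0

def mtbf_hours(operating_hours, failures):
    """Mean time between failures = operating time / number of failures."""
    if failures == 0:
        raise ValueError("MTBF is unbounded when no failures are observed")
    return operating_hours / failures

coord_rate = error_rate_percent(21, 5670)   # coordination exercises
mtbf = mtbf_hours(62.4, 21)                 # 62.4 h is a hypothetical total
```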
Siu, B W M; Au-Yeung, C C Y; Chan, A W L; Chan, L S Y; Yuen, K K; Leung, H W; Yan, C K; Ng, K K; Lai, A C H; Davies, S; Collins, M
Mapping forensic psychiatric services with the security needs of patients is a salient step in service planning, audit and review. A valid and reliable instrument for measuring the security needs of Chinese forensic psychiatric inpatients was not yet available. This study aimed to develop and validate the Chinese version of the Security Needs Assessment Profile for measuring the profiles of security needs of Chinese forensic psychiatric inpatients. The Security Needs Assessment Profile by Davis was translated into Chinese. Its face validity, content validity, construct validity and internal consistency reliability were assessed by measuring the security needs of 98 Chinese forensic psychiatric inpatients. Principal factor analysis for construct validity provided a six-factor security needs model explaining 68.7% of the variance. Based on the Cronbach's alpha coefficient, the internal consistency reliability was rated as acceptable for procedural security (0.73), and fair for both physical security (0.62) and relational security (0.58). A significant sex difference (p=0.002) in total security score was found. The Chinese version of the Security Needs Assessment Profile is a valid and reliable instrument for assessing the security needs of Chinese forensic psychiatric inpatients. Copyright © 2017 Elsevier Ltd. All rights reserved.
The purpose of this SOP is to establish a uniform procedure for the collection of yard composite soil samples in the field. This procedure was followed to ensure consistent and reliable collection of outdoor soil samples during the Arizona NHEXAS project and the Border study. Ke...
Preparation of titanium oxide ceramic membranes
Anderson, Marc A.; Xu, Qunyin
1992-01-01
A procedure is disclosed for the reliable production of either particulate or polymeric titanium ceramic membranes by a highly constrained sol-gel procedure. The critical constraints in the procedure include the choice of alkyl alcohol solvent, the amount of water and its rate of addition, the pH of the solution during hydrolysis, and the limit of sintering temperature applied to the resulting gels.
ERIC Educational Resources Information Center
Harrison, Justin; McKay, Ryan
2012-01-01
Temporal discounting rates have become a popular dependent variable in social science research. While choice procedures are commonly employed to measure discounting rates, equivalent present value (EPV) procedures may be more sensitive to experimental manipulation. However, their use has been impeded by the absence of test-retest reliability data.…
Code of Federal Regulations, 2010 CFR
2010-10-01
... and/or distribution service, quality assurance, system reliability, system operation and maintenance... CONTRACTING ACQUISITION OF UTILITY SERVICES Acquiring Utility Services 41.202 Procedures. (a) Prior to executing a utility service contract, the contracting officer shall comply with parts 6 and 7 and 41.201 (d...
Effectiveness of Visual Methods in Information Procedures for Stem Cell Recipients and Donors
Sarıtürk, Çağla; Gereklioğlu, Çiğdem; Korur, Aslı; Asma, Süheyl; Yeral, Mahmut; Solmaz, Soner; Büyükkurt, Nurhilal; Tepebaşı, Songül; Kozanoğlu, İlknur; Boğa, Can; Özdoğu, Hakan
2017-01-01
Objective: Obtaining informed consent from hematopoietic stem cell recipients and donors is a critical step in the transplantation process. Anxiety may affect their understanding of the provided information. However, use of audiovisual methods may facilitate understanding. In this prospective randomized study, we investigated the effectiveness of using an audiovisual method of providing information to patients and donors in combination with the standard model. Materials and Methods: A 10-min informational animation was prepared for this purpose. In total, 82 participants were randomly assigned to two groups: group 1 received the additional audiovisual information and group 2 received standard information. A 20-item questionnaire was administered to participants at the end of the informational session. Results: A reliability test and factor analysis showed that the questionnaire was reliable and valid. For all participants, the mean overall satisfaction score was 184.8±19.8 (maximum possible score of 200). However, for satisfaction with information about written informed consent, group 1 scored significantly higher than group 2 (p=0.039). Satisfaction level was not affected by age, education level, or differences between the physicians conducting the informative session. Conclusion: This study shows that using audiovisual tools may contribute to a better understanding of the informed consent procedure and potential risks of stem cell transplantation. PMID:27476890
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muir, G.K.P., E-mail: Graham.Muir@glasgow.ac.uk; Hayward, S.; Tripney, B.G.
2015-01-15
Highlights: • Compares industry standard and ¹⁴C methods for determining bioenergy content of MSW. • Differences quantified through study at an operational energy from waste plant. • Manual sort and selective dissolution are unreliable measures of feedstock bioenergy. • ¹⁴C methods (esp. AMS) improve precision and reliability of bioenergy determination. • Implications for electricity generators and regulators for award of bio-incentives. - Abstract: ¹⁴C analysis of flue gas by accelerator mass spectrometry (AMS) and liquid scintillation counting (LSC) were used to determine the biomass fraction of mixed waste at an operational energy-from-waste (EfW) plant. Results were converted to bioenergy (% total) using mathematical algorithms and assessed against existing industry methodologies which involve manual sorting and selective dissolution (SD) of feedstock. Simultaneous determinations using flue gas showed excellent agreement: 44.8 ± 2.7% for AMS and 44.6 ± 12.3% for LSC. Comparable bioenergy results were obtained using a feedstock manual sort procedure (41.4%), whilst a procedure based on selective dissolution of representative waste material is reported as 75.5% (no errors quoted). ¹⁴C techniques present significant advantages in data acquisition, precision and reliability for both electricity generator and industry regulator.
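The conversion from a flue-gas ¹⁴C measurement to a biomass fraction rests on fossil carbon containing essentially no ¹⁴C; a minimal sketch (the measured value and the contemporary-biomass reference of 102 pMC are assumptions, not the plant's calibration):

```python
def biogenic_fraction(sample_pmc, biomass_reference_pmc=102.0):
    """Fraction of carbon from biogenic sources, given the percent modern
    carbon (pMC) of mixed flue gas. Fossil carbon contributes ~0 pMC, so
    the fraction is the measured pMC over the pMC of contemporary biomass."""
    return sample_pmc / biomass_reference_pmc

frac = biogenic_fraction(45.7)  # hypothetical flue-gas measurement
```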
Lluveras-Tenorio, Anna; Mazurek, Joy; Restivo, Annalaura; Colombini, Maria Perla; Bonaduce, Ilaria
2012-10-10
Saccharide materials have been used for centuries as binding media, to paint, write and illuminate manuscripts and to apply metallic leaf decorations. Although the technical literature often reports on the use of plant gums as binders, actually several other saccharide materials can be encountered in paint samples, not only as major binders, but also as additives. In the literature, there are a variety of analytical procedures that utilize GC-MS to characterize saccharide materials in paint samples, however the chromatographic profiles are often extremely different and it is impossible to compare them and reliably identify the paint binder. This paper presents a comparison between two different analytical procedures based on GC-MS for the analysis of saccharide materials in works-of-art. The research presented here evaluates the influence of the analytical procedure used, and how it impacts the sugar profiles obtained from the analysis of paint samples that contain saccharide materials. The procedures have been developed, optimised and systematically used to characterise plant gums at the Getty Conservation Institute in Los Angeles, USA (GCI) and the Department of Chemistry and Industrial Chemistry of the University of Pisa, Italy (DCCI). The main steps of the analytical procedures and their optimisation are discussed. The results presented highlight that the two methods give comparable sugar profiles, whether the samples analysed are simple raw materials, pigmented and unpigmented paint replicas, or paint samples collected from hundreds of centuries old polychrome art objects. A common database of sugar profiles of reference materials commonly found in paint samples was thus compiled. The database presents data also from those materials that only contain a minor saccharide fraction. 
This database highlights how many sources of saccharides can be found in a paint sample, representing an important step forward in the problem of identifying polysaccharide binders in paint samples.
Reliability and Validity of 10 Different Standard Setting Procedures.
ERIC Educational Resources Information Center
Halpin, Glennelle; Halpin, Gerald
Research indicating that different cut-off points result from the use of different standard-setting techniques leaves decision makers with a disturbing dilemma: Which standard-setting method is best? This investigation of the reliability and validity of 10 different standard-setting approaches was designed to provide information that might help…
47 CFR 101.75 - Involuntary relocation procedures.
Code of Federal Regulations, 2010 CFR
2010-10-01
... engineering, equipment, site and FCC fees, as well as any legitimate and prudent transaction expenses incurred... reliability of their system. For digital data systems, reliability is measured by the percent of time the bit error rate (BER) exceeds a desired value, and for analog or digital voice transmissions, it is measured...
40 CFR 799.6756 - TSCA partition coefficient (n-octanol/water), generator column method.
Code of Federal Regulations, 2013 CFR
2013-07-01
... method, or any other reliable quantitative procedure must be used for those compounds that do not absorb... any other reliable quantitative method, aqueous solutions from the generator column enter a collecting... Solubilities and Octanol-Water Partition Coefficients of Hydrophobic Substances,” Journal of Research of the...
40 CFR 799.6756 - TSCA partition coefficient (n-octanol/water), generator column method.
Code of Federal Regulations, 2014 CFR
2014-07-01
... method, or any other reliable quantitative procedure must be used for those compounds that do not absorb... any other reliable quantitative method, aqueous solutions from the generator column enter a collecting... Solubilities and Octanol-Water Partition Coefficients of Hydrophobic Substances,” Journal of Research of the...
34 CFR 668.144 - Application for test approval.
Code of Federal Regulations, 2010 CFR
2010-07-01
... the comparability of scores on the current test to scores on the previous test, and data from validity... explanation of the methodology and procedures for measuring the reliability of the test; (ii) Evidence that different forms of the test, including, if applicable, short forms, are comparable in reliability; (iii...
Reliability and Validity of Curriculum-Based Informal Reading Inventories.
ERIC Educational Resources Information Center
Fuchs, Lynn; And Others
A study was conducted to explore the reliability and validity of three prominent procedures used in informal reading inventories (IRIs): (1) choosing a 95% word recognition accuracy standard for determining student instructional level, (2) arbitrarily selecting a passage to represent the difficulty level of a basal reader, and (3) employing…
Reliability in fiber optic cable harness manufacturing
NASA Astrophysics Data System (ADS)
McCoy, Bruce M.
Key aspects of manufacturing cable harnesses for aircraft and spacecraft that incorporate optical fiber/cables along with traditional wiring are discussed. Issues regarding feasibility of automation of assembly processes, manual assembly, testing, installation, quality assurance, reliability and maintainability are addressed. Training procedures, formal training programs, and their results are reviewed.
Evaluation of Three Pain Assessment Scales Used for Ventilated Neonates.
Huang, Xiao-Zhi; Li, Li; Zhou, Jun; He, Fang; Zhong, Chun-Xia; Wang, Bin
2018-06-26
To compare and evaluate the reliability, validity, feasibility, clinical utility, and nurses' preference of the Premature Infant Pain Profile-Revised (PIPP-R), the Neonatal Pain, Agitation, and Sedation Scale (N-PASS), and the Neonatal Infant Acute Pain Assessment Scale (NIAPAS) used for procedural pain in ventilated neonates. Procedural pain is a common phenomenon but is under-assessed and under-managed in hospitalized neonates, and information to help clinicians select pain measures that improve neonatal care and outcomes is still limited. This was a prospective observational study that adheres to the relevant EQUATOR guidelines. A total of 1080 pain assessments were made on 90 neonates by two nurses independently, using the three scales to view three phases of videotaped painful (arterial blood sampling) and non-painful (diaper change) procedures. Internal consistency, inter-rater reliability, discriminant validity, concurrent validity, and convergent validity of the scales were analyzed; feasibility, clinical utility, and nurses' preference of the scales were also investigated. All three scales showed excellent inter-rater coefficients (from 0.991 to 0.992) and good internal consistency (0.733 for the PIPP-R, 0.837 for the N-PASS, and 0.836 for the NIAPAS, respectively). Scores for painful and non-painful procedures on the three scales changed significantly across the phases. There was a strong correlation between the three scales, with adequate limits of agreement. The mean scores of the N-PASS for feasibility and utility were significantly higher than those of the NIAPAS, but not significantly higher than those of the PIPP-R. The N-PASS was most preferred, by 55.9% of the nurses, followed by the NIAPAS (23.5%) and the PIPP-R (20.6%). The three scales are all reliable and valid, but the N-PASS and the NIAPAS perform better in reliability.
The N-PASS appears to be a better choice for frontline nurses to assess procedural pain in ventilated neonates based on its good feasibility, utility, and nurses' preference. This article is protected by copyright. All rights reserved.
Woodham, W.M.
1982-01-01
This report provides results of reliability and cost-effectiveness studies of the GOES satellite data-collection system used to operate a small hydrologic data network in west-central Florida. The GOES system, in its present state of development, was found to be about as reliable as conventional methods of data collection. Benefits of using the GOES system include some cost and manpower reduction, improved data accuracy, near real-time data availability, and direct computer storage and analysis of data. The GOES system could allow annual manpower reductions of 19 to 23 percent, with reduced costs for some single-parameter sites and increased costs for others, such as streamflow, rainfall, and ground-water monitoring stations. Manpower reductions of 46 percent or more appear possible for multiple-parameter sites. Implementation of expected improvements in instrumentation and data-handling procedures should further reduce costs. (USGS)
A third-order approximation method for three-dimensional wheel-rail contact
NASA Astrophysics Data System (ADS)
Negretti, Daniele
2012-03-01
Multibody train analysis is used increasingly by railway operators whenever a reliable and time-efficient method to evaluate the contact between wheel and rail is needed; in particular, wheel-rail contact is one of the aspects that most affects a reliable and time-efficient vehicle dynamics computation. The approach proposed here carries out such tasks by means of online wheel-rail elastic contact detection. To improve efficiency and save time, a mainly analytical approach is used for the definition of wheel and rail surfaces as well as for contact detection; a final numerical evaluation is then used to locate the contact. The final numerical procedure consists of finding the zeros of a nonlinear function in a single variable. The overall method is based on an approximation of the wheel surface that does not significantly influence the contact location, as shown in the paper.
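The final step described above, finding the zero of a nonlinear function of a single variable, can be carried out with any bracketing root-finder; a generic bisection sketch on a toy function (not the paper's actual contact function):

```python
def bisect(f, a, b, tol=1e-10, max_iter=200):
    """Bisection: shrink a sign-changing bracket [a, b] until a root of f
    is located to within tol."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must differ in sign")
    for _ in range(max_iter):
        mid = 0.5 * (a + b)
        fm = f(mid)
        if abs(fm) < tol or (b - a) < tol:
            return mid
        if fa * fm < 0:
            b = mid
        else:
            a, fa = mid, fm
    return 0.5 * (a + b)

# Toy separation function with a zero at x = 2
root = bisect(lambda x: x * x - 4.0, 0.0, 5.0)
```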
NASA Technical Reports Server (NTRS)
Bloomquist, C. E.; Kallmeyer, R. H.
1972-01-01
Field failure rates and confidence factors are presented for 88 identifiable components of the ground support equipment at the John F. Kennedy Space Center. For most of these, supplementary information regarding failure mode and cause is tabulated. Complete reliability assessments are included for three systems, eight subsystems, and nine generic piece-part classifications. Procedures for updating or augmenting the reliability results are also included.
Automatized set-up procedure for transcranial magnetic stimulation protocols.
Harquel, S; Diard, J; Raffin, E; Passera, B; Dall'Igna, G; Marendaz, C; David, O; Chauvin, A
2017-06-01
Transcranial Magnetic Stimulation (TMS) has established itself as a powerful technique for probing and treating the human brain. Major technological advances, such as neuronavigation and robotized systems, have continuously increased the spatial reliability and reproducibility of TMS by minimizing the influence of human and experimental factors. However, an efficient set-up procedure is still lacking, which prevents the automation of TMS protocols. For example, the set-up step of defining the stimulation intensity specific to each subject is classically performed manually by experienced practitioners, who assess the motor cortical excitability level over the motor hotspot (HS) of a targeted muscle. This is time-consuming and introduces experimental variability. We therefore developed a probabilistic Bayesian model (AutoHS) that automatically identifies the HS position. Using virtual and real experiments, we compared the efficacy of the manual and automated procedures. AutoHS appeared to be more reproducible, faster, and at least as reliable as classical manual procedures. By combining AutoHS with robotized TMS and automated motor threshold estimation methods, our approach constitutes the first fully automated set-up procedure for TMS protocols. This procedure decreases inter-experimenter variability while facilitating the handling of TMS protocols in research and clinical routine. Copyright © 2017 Elsevier Inc. All rights reserved.
Deib, Gerard; Johnson, Alex; Unberath, Mathias; Yu, Kevin; Andress, Sebastian; Qian, Long; Osgood, Gregory; Navab, Nassir; Hui, Ferdinand; Gailloud, Philippe
2018-05-30
Optical see-through head mounted displays (OST-HMDs) offer a mixed reality (MixR) experience that allows unhindered visualization of the procedural site during procedures guided by high-resolution radiographic imaging. This technical note describes our preliminary experience with percutaneous spine procedures utilizing an OST-HMD as an alternative to traditional angiography suite monitors. MixR visualization was achieved using the Microsoft HoloLens system. Various spine procedures (vertebroplasty, kyphoplasty, and percutaneous discectomy) were performed on a lumbar spine phantom with commercially available devices. The HMD created a real-time MixR environment by superimposing virtual posteroanterior and lateral views onto the interventionalist's field of view. The procedures were filmed from the operator's perspective. Videos were reviewed to assess whether key anatomic landmarks and materials were reliably visualized. Dosimetry and procedural times were recorded. The operator completed a questionnaire following each procedure, detailing benefits, limitations, and visualization mode preferences. Percutaneous vertebroplasty, kyphoplasty, and discectomy procedures were successfully performed using OST-HMD image guidance on a lumbar spine phantom. Dosimetry and procedural times compared favorably with typical values. Conventional and MixR visualization modes were equally effective in providing image guidance, with key anatomic landmarks and materials reliably visualized. This preliminary study demonstrates the feasibility of utilizing OST-HMDs for image guidance in interventional spine procedures. This novel visualization approach may serve as a valuable adjunct tool during minimally invasive percutaneous spine treatment. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
A method to establish stimulus control and compliance with instructions.
Borgen, John G; Charles Mace, F; Cavanaugh, Brenna M; Shamlian, Kenneth; Lit, Keith R; Wilson, Jillian B; Trauschke, Stephanie L
2017-10-01
We evaluated a unique procedure to establish compliance with instructions in four young children diagnosed with autism spectrum disorder (ASD) who had low levels of compliance. Our procedure included methods to establish a novel therapist as a source of positive reinforcement, reliably evoke orienting responses to the therapist, increase the number of exposures to instruction-compliance-reinforcer contingencies, and minimize the number of exposures to instruction-noncompliance-no reinforcer contingencies. We further alternated between instructions with a high probability of compliance (high-p instructions) and instructions that had a prior low probability of compliance (low-p instructions) as soon as low-p instructions lost stimulus control. The intervention is discussed in relation to the conditions necessary for the development of stimulus control and as an example of a variation of translational research. © 2017 Society for the Experimental Analysis of Behavior.
Coran, Silvia A; Giannellini, Valerio; Bambagiotti-Alberti, Massimo
2004-08-06
An HPTLC-densitometric method, based on an external standard approach, was developed to provide a novel procedure for routine analysis of secoisolariciresinol diglucoside (SDG) in flaxseed with a minimum of sample pre-treatment. TLC conditions for densitometric scanning were optimized by eluting HPTLC silica gel plates in a horizontal developing chamber. Quantitation of SDG was performed in single-beam reflectance mode using a computer-controlled densitometric scanner and a five-point calibration in the 1.00-10.00 microg/spot range. As no sample preparation was required, the proposed HPTLC-densitometric procedure proved to be reliable even while relying on an external standard approach. The proposed method is precise, reproducible, and accurate, and can be employed profitably in place of HPLC for the determination of SDG in complex matrices.
Coracoid process x-ray investigation before Latarjet procedure: a radioanatomic study.
Bachy, Manon; Lapner, Peter L C; Goutallier, Daniel; Allain, Jérôme; Hernigou, Phillipe; Bénichou, Jacques; Zilber, Sébastien
2013-12-01
The purpose of this study was to determine whether a preoperative radiologic assessment of the coracoid process is predictive of the amount of bone available for coracoid transfer by the Latarjet procedure. Thirty-five patients with anterior instability undergoing a Latarjet procedure were included. A preoperative radiologic assessment was performed with the Bernageau and true anteroposterior (true AP) views. The length of the coracoid process was measured on both radiographic views and the values were compared with the length of the bone block during surgery. Statistical analysis was carried out by ANOVA and Wilcoxon tests (P < .05). On radiologic examination, the mean coracoid process length was 29 ± 4 and 33 ± 4 mm on the Bernageau and true AP views, respectively. The mean bone block length during surgery was 21.6 ± 2.7 mm. A significant correlation was found (P = .032) between the coracoid process length on the true AP view and the intraoperative bone block length. Preoperative planning for the Latarjet procedure, including graft orientation and screw placement, requires knowledge of the length of coracoid bone available for transfer. This can be facilitated with the use of preoperative standard radiographs, thus avoiding computed tomography. This planning allows the detection of coracoid process anatomic variations or the analysis of the remaining part of the coracoid process after failure of a first Latarjet procedure to avoid an iliac bone graft. Radiologic preoperative coracoid process measurement is an easy, reliable method to aid preoperative planning of the Latarjet procedure in primary surgery and reoperations. Copyright © 2013 Journal of Shoulder and Elbow Surgery Board of Trustees. All rights reserved.
Development and validation of the Chinese version of dry eye related quality of life scale.
Zheng, Bang; Liu, Xiao-Jing; Sun, Yue-Qian Fiona; Su, Jia-Zeng; Zhao, Yang; Xie, Zheng; Yu, Guang-Yan
2017-07-17
To develop the Chinese version of a quality of life scale for dry eye patients based on the Impact of Dry Eye on Everyday Life (IDEEL) questionnaire and to assess the reliability and validity of the developed scale. The original IDEEL was cross-culturally adapted to the Chinese language and further developed following standard procedures. A total of 100 Chinese patients diagnosed with dry eye syndrome were included to investigate the psychometric properties of the Chinese version of the scale. Psychometric tests included internal consistency (Cronbach's α coefficients), construct validity (exploratory factor analysis), and known-groups validity (analysis of variance). The Chinese version of the Dry Eye Related Quality of Life (CDERQOL) Scale contains 45 items classified into 5 domains. Good to excellent internal consistency reliability was demonstrated for all 5 domains (Cronbach's α coefficients ranging from 0.716 to 0.913). Construct validity assessment indicated a factorial structure of the CDERQOL scale consistent with the hypothesized construct, with the exception of the "Dry Eye Symptom-Bother" domain. All domain scores differed significantly across the three severity groups of dry eye patients (P < 0.05) except for the "Satisfaction with Treatment" domain, indicating good known-groups validity. These results indicate that the CDERQOL scale is a reliable and valid instrument for patients with dry eye syndrome in the Chinese population and could be used as a supplementary diagnostic and treatment-effectiveness measure.
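The internal-consistency index used throughout this record, Cronbach's α, has a simple closed form: one minus the ratio of summed item variances to total-score variance, scaled by the number of items. A minimal stand-alone sketch with invented scores (not the CDERQOL data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns of equal length.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(items)            # number of items
    n = len(items[0])         # number of respondents
    def var(xs):              # unbiased sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = sum(var(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_vars / var(totals))

# Two perfectly correlated toy items over four respondents
alpha = cronbach_alpha([[1, 2, 3, 4], [2, 4, 6, 8]])  # ≈ 0.889
```

Values such as the 0.716-0.913 range reported above are typically read against the conventional 0.7 threshold for acceptable internal consistency.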
Rizvi, Sakina J; Quilty, Lena C; Sproule, Beth A; Cyriac, Anna; Michael Bagby, R; Kennedy, Sidney H
2015-09-30
Anhedonia, a core symptom of Major Depressive Disorder (MDD), is predictive of antidepressant non-response. In contrast to the definition of anhedonia as a "loss of pleasure", neuropsychological studies provide evidence for multiple facets of hedonic function. The aim of the current study was to develop and validate the Dimensional Anhedonia Rating Scale (DARS), a dynamic scale that measures desire, motivation, effort and consummatory pleasure across hedonic domains. Following item selection procedures and reliability testing using data from community participants (N=229) (Study 1), the 17-item scale was validated in an online study with community participants (N=150) (Study 2). The DARS was also validated in unipolar or bipolar depressed patients (n=52) and controls (n=50) (Study 3). Principal components analysis of the 17-item DARS revealed a 4-component structure mapping onto the domains of anhedonia: hobbies, food/drink, social activities, and sensory experience. Reliability of the DARS subscales was high across studies (Cronbach's α=0.75-0.92). The DARS also demonstrated good convergent and divergent validity. Hierarchical regression analysis revealed the DARS showed additional utility over the Snaith-Hamilton Pleasure Scale (SHAPS) in predicting reward function and distinguishing MDD subgroups. These studies provide support for the reliability and validity of the DARS. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
A real-time monitoring system for the facial nerve.
Prell, Julian; Rachinger, Jens; Scheller, Christian; Alfieri, Alex; Strauss, Christian; Rampp, Stefan
2010-06-01
Damage to the facial nerve during surgery in the cerebellopontine angle is indicated by A-trains, a specific electromyogram pattern. These A-trains can be quantified by the parameter "traintime," which is reliably correlated with postoperative functional outcome. The system presented was designed to monitor traintime in real time. A dedicated hardware and software platform for automated continuous analysis of the intraoperative facial nerve electromyogram was specifically designed. The automatic detection of A-trains is performed by a software algorithm for real-time analysis of nonstationary biosignals. The system was evaluated in a series of 30 patients operated on for vestibular schwannoma. A-trains can be detected and measured automatically by the described method for real-time analysis. Traintime is monitored continuously via a graphic display and is shown as an absolute numeric value during the operation. It is an expression of the overall, cumulated length of A-trains in a given channel; a high correlation between traintime as measured by real-time analysis and functional outcome immediately after the operation (Spearman correlation coefficient [rho] = 0.664, P < .001) and in long-term outcome (rho = 0.631, P < .001) was observed. Automated real-time analysis of the intraoperative facial nerve electromyogram is the first technique capable of reliable continuous real-time monitoring. It can critically contribute to the estimation of functional outcome during the course of the operative procedure.
Gimmon, Yoav; Jacob, Grinshpon; Lenoble-Hoskovec, Constanze; Büla, Christophe; Melzer, Itshak
2013-01-01
Decline in gait stability has been associated with increased fall risk in older adults. Reliable and clinically feasible methods of gait instability assessment are needed. This study evaluated the relative and absolute reliability and concurrent validity of the testing procedure of the clinical version of the Narrow Path Walking Test (NPWT) under single task (ST) and dual task (DT) conditions. Thirty independent community-dwelling older adults (65-87 years) were tested twice. Participants were instructed to walk within the 6-m narrow path without stepping out. Trial time, number of steps, trial velocity, number of step errors, and number of cognitive task errors were determined. Intraclass correlation coefficients (ICCs) were calculated as indices of agreement, and a graphic approach called "mountain plot" was applied to help interpret the direction and magnitude of disagreements between testing procedures. Smallest detectable change and smallest real difference (SRD) were computed to determine clinically relevant improvement at group and individual levels, respectively. Concurrent validity was assessed using the Performance Oriented Mobility Assessment Tool (POMA) and the Short Physical Performance Battery (SPPB). Test-retest agreement (ICC1,2) varied from 0.77 to 0.92 in ST and from 0.78 to 0.92 in DT conditions, with no apparent systematic differences between testing procedures demonstrated by the mountain plot graphs. Smallest detectable change and smallest real difference were small for motor task performance and larger for cognitive errors. Significant correlations were observed for trial velocity and trial time with POMA and SPPB. The present results indicate that the NPWT testing procedure is highly reliable and reproducible. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
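The agreement indices in this record are connected by standard formulas: the standard error of measurement (SEM) follows from a test-retest ICC and the sample SD, and the smallest detectable change scales the SEM to a 95% confidence band for an individual's change score. A sketch with illustrative numbers, not the NPWT data:

```python
import math

def sem_and_sdc(sd, icc):
    """Standard error of measurement and smallest detectable change (95%)
    from a test-retest reliability coefficient (ICC) and the sample SD.

    SEM = SD * sqrt(1 - ICC);  SDC95 = 1.96 * sqrt(2) * SEM
    The sqrt(2) accounts for measurement error in both test and retest.
    """
    sem = sd * math.sqrt(1.0 - icc)
    sdc = 1.96 * math.sqrt(2.0) * sem
    return sem, sdc

# Illustrative values only (not from the study): SD = 2.0 s, ICC = 0.9
sem, sdc = sem_and_sdc(2.0, 0.9)
```

An observed individual change smaller than the SDC cannot be distinguished from measurement noise, which is why the abstract reports these quantities alongside the ICCs.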
NASA Astrophysics Data System (ADS)
Campbell, J.; Dean, J.; Clyne, T. W.
2017-02-01
This study concerns a commonly-used procedure for evaluating the steady state creep stress exponent, n, from indentation data. The procedure involves monitoring the indenter displacement history under constant load and making the assumption that, once its velocity has stabilised, the system is in a quasi-steady state, with stage II creep dominating the behaviour. The stress and strain fields under the indenter are represented by "equivalent stress" and "equivalent strain rate" values. The estimate of n is then obtained as the gradient of a plot of the logarithm of the equivalent strain rate against the logarithm of the equivalent stress. Concerns have, however, been expressed about the reliability of this procedure, and indeed it has already been shown to be fundamentally flawed. In the present paper, it is demonstrated, using a very simple analysis, that, for a genuinely stable velocity, the procedure always leads to the same, constant value for n (either 1.0 or 0.5, depending on whether the tip shape is spherical or self-similar). This occurs irrespective of the value of the measured velocity, or indeed of any creep characteristic of the material. It is now clear that previously-measured values of n, obtained using this procedure, have varied in a more or less random fashion, depending on the functional form chosen to represent the displacement-time history and the experimental variables (tip shape and size, penetration depth, etc.), with little or no sensitivity to the true value of n.
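The paper's central point can be reproduced numerically under the commonly assumed scalings (stated here as assumptions, not the paper's exact derivation): equivalent strain rate ~ v/h, and equivalent stress ~ P/h² for a self-similar tip or ~ P/h for a shallow spherical tip (contact area ~ h). With a genuinely constant velocity, the log-log gradient then collapses to a constant fixed by geometry, independent of the material:

```python
import math

def apparent_n(stress_exponent_of_h):
    """Slope of ln(strain rate) vs ln(stress) for h(t) = h0 + v*t at constant v.

    stress_exponent_of_h = 2 for a self-similar tip (sigma ~ P/h^2),
    1 for a shallow spherical tip (sigma ~ P/h). Load P is constant.
    """
    h0, v, P = 1.0, 0.1, 5.0          # arbitrary illustrative values
    pts = []
    for i in range(1, 40):
        h = h0 + v * (0.5 * i)
        rate = v / h                   # equivalent strain rate ~ v/h
        sigma = P / h ** stress_exponent_of_h
        pts.append((math.log(sigma), math.log(rate)))
    # least-squares slope of the log-log plot
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    return sum((x - mx) * (y - my) for x, y in pts) / sum((x - mx) ** 2 for x, _ in pts)

n_self_similar = apparent_n(2)   # -> 0.5, regardless of v, P, or the material
n_spherical = apparent_n(1)      # -> 1.0
```

Since both logarithms are linear in ln h, the fitted slope is exactly 1 over the stress exponent of h, which is the geometric constant the paper identifies.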
Liu, Charles; Kayima, Peter; Riesel, Johanna; Situma, Martin; Chang, David; Firth, Paul
2017-11-01
The lack of a classification system for surgical procedures in resource-limited settings hinders outcomes measurement and reporting. Existing procedure coding systems are prohibitively large and expensive to implement. We describe the creation and prospective validation of 3 brief procedure code lists applicable in low-resource settings, based on analysis of surgical procedures performed at Mbarara Regional Referral Hospital, Uganda's second largest public hospital. We reviewed operating room logbooks to identify all surgical operations performed at Mbarara Regional Referral Hospital during 2014. Based on the documented indication for surgery and procedure(s) performed, we assigned each operation up to 4 procedure codes from the International Classification of Diseases, 9th Revision, Clinical Modification. Coding of procedures was performed by 2 investigators, and a random 20% of procedures were coded by both investigators. These codes were aggregated to generate procedure code lists. During 2014, 6,464 surgical procedures were performed at Mbarara Regional Referral Hospital, to which we assigned 435 unique procedure codes. Substantial inter-rater reliability was achieved (κ = 0.7037). The 111 most common procedure codes accounted for 90% of all codes assigned, 180 accounted for 95%, and 278 accounted for 98%. We considered these sets of codes as 3 procedure code lists. In a prospective validation, we found that these lists described 83.2%, 89.2%, and 92.6% of surgical procedures performed at Mbarara Regional Referral Hospital during August to September of 2015, respectively. Empirically generated brief procedure code lists based on International Classification of Diseases, 9th Revision, Clinical Modification can be used to classify almost all surgical procedures performed at a Ugandan referral hospital. 
Such a standardized procedure coding system may enable better surgical data collection for administration, research, and quality improvement in resource-limited settings. Copyright © 2017 Elsevier Inc. All rights reserved.
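The inter-rater agreement statistic reported in this record (κ) is observed agreement corrected for the agreement expected by chance given each rater's marginal code frequencies. A minimal sketch with invented procedure labels, not the Mbarara coding data:

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two raters' categorical assignments.

    kappa = (p_observed - p_expected) / (1 - p_expected)
    """
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    p_obs = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    # chance agreement from the product of marginal frequencies per category
    p_exp = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical double-coded cases (labels invented for illustration)
raters_a = ["hernia repair", "appendectomy", "hernia repair", "c-section"]
raters_b = ["hernia repair", "appendectomy", "c-section", "c-section"]
k = cohens_kappa(raters_a, raters_b)   # ≈ 0.64
```

By the usual Landis-Koch benchmarks, values above 0.6 are read as "substantial" agreement, which is how the study's κ = 0.7037 is characterized.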
Human Reliability Assessments: Using the Past (Shuttle) to Predict the Future (Orion)
NASA Technical Reports Server (NTRS)
DeMott, Diana L.; Bigler, Mark A.
2017-01-01
NASA (National Aeronautics and Space Administration) Johnson Space Center (JSC) Safety and Mission Assurance (S&MA) uses two human reliability analysis (HRA) methodologies. The first is a simplified method based on how much time is available to complete the action, with consideration of environmental and personal factors that could influence the human's reliability. This method is expected to provide a conservative value, or placeholder, as a preliminary estimate. This preliminary estimate or screening value is used to determine which placeholders need a more detailed assessment. The second methodology is used to develop a more detailed human reliability assessment of the performance of critical human actions. This assessment needs to consider more than the time available; it includes factors such as the importance of the action, the context, environmental factors, potential human stresses, previous experience, training, physical design interfaces, available procedures/checklists, and internal human stresses. The more detailed assessment is expected to be more realistic than one based primarily on time available. When performing an HRA on a system or process that has an operational history, we have information specific to the task based on this history and experience. In the case of a Probabilistic Risk Assessment (PRA) that is based on a new design and has no operational history, providing a "reasonable" assessment of potential crew actions becomes more challenging. To determine what is expected of future operational parameters, input from individuals who had relevant experience and were familiar with the systems and processes previously implemented by NASA was used to provide the "best" available data. Personnel from Flight Operations, Flight Directors, Launch Test Directors, Control Room Console Operators, and Astronauts were all interviewed to provide a comprehensive picture of previous NASA operations.
Verification of the assumptions and expectations expressed in the assessments will be needed when the procedures, flight rules, and operational requirements are developed and then finalized.
Moore, L; Tapper, K; Dennehy, A; Cooper, A
2005-07-01
To evaluate the validity, reliability, and sensitivity of a computerised single-day 24-h recall questionnaire designed for the comparison of children's fruit and snack consumption at the group (school) level. Relative validity and reliability were assessed in relation to (i) intake at school and (ii) intake throughout the whole day, using diary-assisted 24-h recall interviews and a 7-day test-retest procedure. Sensitivity was assessed in relation to intake by comparing results from schools with differing food policies, and by sex. Eight schools took part in the validity and reliability assessments, with 78 children completing the 24-h recall interviews and 195 children completing the test-retest procedure. A total of 43 schools (1890 children) took part in the sensitivity analysis. All children were aged 9-11 y. All schools were in South Wales and South-west England. For fruit intake at school, the questionnaire showed fair levels of validity at the individual level (kappa = 0.29). At the group level, there were little or no differences in fruit intake at school between the two measures and two occasions. The questionnaire was sufficiently sensitive to identify statistically significant differences between girls and boys, and between schools with different food policies. For snack intake at school, validity at the individual level was slightly lower (kappa = 0.22-0.25), but the data remained of value in analyses at the group level. For fruit and snack intake throughout the whole day there was little agreement at the individual level (kappa = 0.00-0.06), and at the group level there tended to be substantial differences between the two measures and two occasions. The computerised questionnaire is a quick and cost-effective means of assessing children's consumption of fruit at school. 
While further development is required to improve validity and reliability, it has the potential to be particularly useful in randomised controlled trials of school-based dietary interventions.
Human Reliability Assessments: Using the Past (Shuttle) to Predict the Future (Orion)
NASA Technical Reports Server (NTRS)
DeMott, Diana; Bigler, Mark
2016-01-01
NASA (National Aeronautics and Space Administration) Johnson Space Center (JSC) Safety and Mission Assurance (S&MA) uses two human reliability analysis (HRA) methodologies. The first is a simplified method based on how much time is available to complete the action, with consideration of environmental and personal factors that could influence the human's reliability. This method is expected to provide a conservative value, or placeholder, as a preliminary estimate. This preliminary estimate or screening value is used to determine which placeholders need a more detailed assessment. The second methodology is used to develop a more detailed human reliability assessment of the performance of critical human actions. This assessment needs to consider more than the time available; it includes factors such as the importance of the action, the context, environmental factors, potential human stresses, previous experience, training, physical design interfaces, available procedures/checklists, and internal human stresses. The more detailed assessment is expected to be more realistic than one based primarily on time available. When performing an HRA on a system or process that has an operational history, we have information specific to the task based on this history and experience. In the case of a Probabilistic Risk Assessment (PRA) that is based on a new design and has no operational history, providing a "reasonable" assessment of potential crew actions becomes more challenging. In order to determine what is expected of future operational parameters, input from individuals who had relevant experience and were familiar with the systems and processes previously implemented by NASA was used to provide the "best" available data. Personnel from Flight Operations, Flight Directors, Launch Test Directors, Control Room Console Operators, and Astronauts were all interviewed to provide a comprehensive picture of previous NASA operations. 
Verification of the assumptions and expectations expressed in the assessments will be needed when the procedures, flight rules and operational requirements are developed and then finalized.
A Systems Modeling Approach for Risk Management of Command File Errors
NASA Technical Reports Server (NTRS)
Meshkat, Leila
2012-01-01
The main cause of commanding errors is often (but not always) procedural: lack of maturity in the processes, incompleteness of requirements, or lack of compliance with these procedures. Other causes of commanding errors include a lack of understanding of system states, inadequate communication, and hasty changes to standard procedures in response to an unexpected event. In general, it is important to look at the big picture prior to making corrective actions. In the case of errors traced back to procedures, considering the reliability of the process as a metric during its design may help to reduce risk. This metric is obtained by using data from the nuclear industry regarding human reliability. A structured method for the collection of anomaly data will help the operator think systematically about the anomaly and facilitate risk management. Formal models can be used for risk-based design and risk management. A generic set of models can be customized for a broad range of missions.
Reliability of risk-adjusted outcomes for profiling hospital surgical quality.
Krell, Robert W; Hozain, Ahmed; Kao, Lillian S; Dimick, Justin B
2014-05-01
Quality improvement platforms commonly use risk-adjusted morbidity and mortality to profile hospital performance. However, given small hospital caseloads and low event rates for some procedures, it is unclear whether these outcomes reliably reflect hospital performance. To determine the reliability of risk-adjusted morbidity and mortality for hospital performance profiling using clinical registry data. A retrospective cohort study was conducted using data from the American College of Surgeons National Surgical Quality Improvement Program, 2009. Participants included all patients (N = 55,466) who underwent colon resection, pancreatic resection, laparoscopic gastric bypass, ventral hernia repair, abdominal aortic aneurysm repair, and lower extremity bypass. Outcomes included risk-adjusted overall morbidity, severe morbidity, and mortality. We assessed reliability (0-1 scale: 0, completely unreliable; and 1, perfectly reliable) for all 3 outcomes. We also quantified the number of hospitals meeting minimum acceptable reliability thresholds (>0.70, good reliability; and >0.50, fair reliability) for each outcome. For overall morbidity, the most common outcome studied, the mean reliability depended on sample size (ie, how high the hospital caseload was) and the event rate (ie, how frequently the outcome occurred). For example, mean reliability for overall morbidity was low for abdominal aortic aneurysm repair (reliability, 0.29; sample size, 25 cases per year; and event rate, 18.3%). In contrast, mean reliability for overall morbidity was higher for colon resection (reliability, 0.61; sample size, 114 cases per year; and event rate, 26.8%). Colon resection (37.7% of hospitals), pancreatic resection (7.1% of hospitals), and laparoscopic gastric bypass (11.5% of hospitals) were the only procedures for which any hospitals met a reliability threshold of 0.70 for overall morbidity. 
Because severe morbidity and mortality are less frequent outcomes, their mean reliability was lower, and even fewer hospitals met the thresholds for minimum reliability. Most commonly reported outcome measures have low reliability for differentiating hospital performance. This is especially important for clinical registries that sample rather than collect 100% of cases, which can limit hospital case accrual. Eliminating sampling to achieve the highest possible caseloads, adjusting for reliability, and using advanced modeling strategies (eg, hierarchical modeling) are necessary for clinical registries to increase their benchmarking reliability.
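The reliability notion used in this record, how well an observed hospital rate reflects true performance, is commonly modeled as a variance ratio in which binomial sampling noise shrinks with caseload. A generic sketch with illustrative numbers, not the NSQIP estimates or the study's actual hierarchical model:

```python
def outcome_reliability(between_var, event_rate, caseload):
    """Signal-to-noise reliability of a hospital's observed event rate.

    reliability = between-hospital variance / (between-hospital variance
                  + within-hospital sampling variance), where the sampling
    variance of a proportion is p*(1-p)/n and so shrinks with caseload.
    This is a generic sketch, not the ACS-NSQIP risk-adjustment model.
    """
    within_var = event_rate * (1 - event_rate) / caseload
    return between_var / (between_var + within_var)

# Illustrative only: rarer outcomes and smaller caseloads lower reliability,
# mirroring the abstract's contrast between low-volume and high-volume procedures
r_small = outcome_reliability(0.001, 0.18, 25)    # low caseload, AAA-repair-like
r_large = outcome_reliability(0.001, 0.27, 114)   # higher caseload, colon-resection-like
```

This is why the abstract ties mean reliability to both sample size and event rate, and why sampling designs that cap case accrual depress benchmarking reliability.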
Kevern, Mark A.; Beecher, Michael; Rao, Smita
2014-01-01
Context: Athletes who participate in throwing and racket sports consistently demonstrate adaptive changes in glenohumeral-joint internal and external rotation in the dominant arm. Measurements of these motions have demonstrated excellent intrarater and poor interrater reliability. Objective: To determine intrarater reliability, interrater reliability, and standard error of measurement for shoulder internal rotation, external rotation, and total arc of motion using an inclinometer in 3 testing procedures in National Collegiate Athletic Association Division I baseball and softball athletes. Design: Cross-sectional study. Setting: Athletic department. Patients or Other Participants: Thirty-eight players participated in the study. Shoulder internal rotation, external rotation, and total arc of motion were measured by 2 investigators in 3 test positions. The standard supine position was compared with a side-lying test position, as well as a supine test position without examiner overpressure. Results: Excellent intrarater reliability was noted for all 3 test positions and ranges of motion, with intraclass correlation coefficient values ranging from 0.93 to 0.99. Results for interrater reliability were less favorable. Reliability for internal rotation was highest in the side-lying position (0.68), and reliability for external rotation and total arc was highest in the supine-without-overpressure position (0.774 and 0.713, respectively). The supine-with-overpressure position yielded the lowest interrater reliability of the 3 test positions. The side-lying position had the most consistent results, with very little variation among intraclass correlation coefficient values for the various ranges of motion. Conclusions: The results of our study clearly indicate that the side-lying test procedure is of equal or greater value than the traditional supine-with-overpressure method. PMID:25188316
Baczyńska, Anna K.; Rowiński, Tomasz; Cybis, Natalia
2016-01-01
Competency models provide insight into key skills which are common to many positions in an organization. Moreover, there is a range of competencies that is used by many companies. Researchers have developed core competency terminology to underline their cross-organizational value. The article presents a theoretical model of core competencies consisting of two main higher-order competencies called performance and entrepreneurship. Each of them consists of three elements: the performance competency includes cooperation, organization of work, and goal orientation, while entrepreneurship includes innovativeness, calculated risk-taking, and pro-activeness. However, there is a lack of empirical validation of competency concepts in organizations, and this would seem crucial for obtaining reliable results from organizational research. We propose a two-step empirical validation procedure: (1) confirmatory factor analysis, and (2) classification of employees. The sample consisted of 636 respondents (M = 44.5; SD = 15.1). Participants were administered a questionnaire developed for the study purpose. The reliability, measured by Cronbach's alpha, ranged from 0.60 to 0.83 for the six scales. Next, we tested the model using a confirmatory factor analysis. The two separate, single models of performance and entrepreneurial orientations fit quite well to the data, while a complex model based on the two single concepts needs further research. In the classification of employees based on the two higher-order competencies we obtained four main groups of employees. Their profiles relate to those found in the literature, including so-called niche finders and top performers. Some proposals for organizations are discussed. PMID:27014111
Experimental design for evaluating WWTP data by linear mass balances.
Le, Quan H; Verheijen, Peter J T; van Loosdrecht, Mark C M; Volcke, Eveline I P
2018-05-15
A stepwise experimental design procedure to obtain reliable data from wastewater treatment plants (WWTPs) was developed. The proposed procedure aims at determining sets of additional measurements (besides available ones) that guarantee the identifiability of key process variables, which means that their value can be calculated from other, measured variables, based on available constraints in the form of linear mass balances. Among all solutions, i.e. all possible sets of additional measurements allowing the identifiability of all key process variables, the optimal solutions were found taking into account two objectives, namely the accuracy of the identified key variables and the cost of additional measurements. The results of this multi-objective optimization problem were represented in a Pareto-optimal front. The presented procedure was applied to a full-scale WWTP. Detailed analysis of the relation between measurements allowed the determination of groups of overlapping mass balances. Adding measured variables could only serve in identifying key variables that appear in the same group of mass balances. Besides, the application of the experimental design procedure to these individual groups significantly reduced the computational effort in evaluating available measurements and planning additional monitoring campaigns. The proposed procedure is straightforward and can be applied to other WWTPs with or without prior data collection. Copyright © 2018 Elsevier Ltd. All rights reserved.
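The multi-objective selection described above, trading the accuracy of the identified key variables against the cost of additional measurements, amounts to keeping the non-dominated candidate measurement sets. A minimal Pareto-front sketch under assumed (error, cost) tuples; the paper's actual objective functions and solver are not reproduced here:

```python
def dominates(a, b):
    """a dominates b if a is no worse in both objectives and strictly better in one."""
    return a[0] <= b[0] and a[1] <= b[1] and (a[0] < b[0] or a[1] < b[1])

def pareto_front(candidates):
    """Candidates are (error, cost) tuples; lower is better for both objectives."""
    return [c for c in candidates if not any(dominates(o, c) for o in candidates)]
```

Each surviving tuple represents one set of additional measurements no other set beats on both accuracy and cost, which is exactly what a Pareto-optimal front presents to the plant operator.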
Buti, Jacopo; Baccini, Michela; Nieri, Michele; La Marca, Michele; Pini-Prato, Giovan P
2013-04-01
The aim of this work was to conduct a Bayesian network meta-analysis (NM) of randomized controlled trials (RCTs) to establish a ranking in efficacy and the best technique for coronally advanced flap (CAF)-based root coverage procedures. A literature search on PubMed, Cochrane libraries, EMBASE, and hand-searched journals until June 2012 was conducted to identify RCTs on treatments of Miller Class I and II gingival recessions with at least 6 months of follow-up. The treatment outcomes were recession reduction (RecRed), clinical attachment gain (CALgain), keratinized tissue gain (KTgain), and complete root coverage (CRC). Twenty-nine studies met the inclusion criteria, 20 of which were classified as at high risk of bias. The CAF+connective tissue graft (CTG) combination ranked highest in effectiveness for RecRed (Probability of being the best = 40%) and CALgain (Pr = 33%); CAF+enamel matrix derivative (EMD) was slightly better for CRC; CAF+Collagen Matrix (CM) appeared effective for KTgain (Pr = 69%). Network inconsistency was low for all outcomes excluding CALgain. CAF+CTG might be considered the gold standard in root coverage procedures. The low amount of inconsistency gives support to the reliability of the present findings. © 2012 John Wiley & Sons A/S.
Behboudi, S; Morein, B; Rönnberg, B
1995-12-01
In the iscom, multiple copies of antigen are attached by hydrophobic interaction to a matrix which is built up by Quillaja triterpenoid saponins and lipids. Thus, the iscom presents antigen in multimeric form in a small particle with a built-in adjuvant, resulting in a highly immunogenic antigen formulation. We have designed a chloroform-methanol-water extraction procedure to isolate the triterpenoid saponins and lipids incorporated into iscom-matrix and iscoms. The triterpenoids in the triterpenoid phase were quantitated by HPLC and with orcinol sulfuric acid, which detects their carbohydrate chains. The cholesterol and phosphatidylcholine in the lipid phase were quantitated by HPLC and, for the cholesterol, by a commercial colorimetric method. The quantitative methods showed an almost total separation and recovery of triterpenoids and lipids in their respective phases, while protein was detected in all phases after extraction. The protein content was determined by the method of Lowry and by amino acid analysis. Amino acid analysis was shown to be the more reliable of the two methods for quantitating proteins in iscoms. In conclusion, simple, reproducible and efficient procedures have been designed to isolate and quantitate the triterpenoids and lipids added for preparation of iscom-matrix and iscoms. The procedures described should also be useful to adequately define constituents in prospective vaccines.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, Chris, E-mail: cyuan@uwm.edu; Wang, Endong; Zhai, Qiang
Temporal homogeneity of inventory data is one of the major problems in life cycle assessment (LCA). Addressing temporal homogeneity of life cycle inventory data is important in reducing the uncertainties and improving the reliability of LCA results. This paper presents a critical review and discussion of the fundamental issues of temporal homogeneity in conventional LCA and proposes a theoretical framework for temporal discounting in LCA. Theoretical perspectives for temporal discounting in life cycle inventory analysis are discussed first, based on the key elements of a scientific mechanism for temporal discounting. Then generic procedures for performing temporal discounting in LCA are derived and proposed based on the nature of the LCA method and the identified key elements of a scientific temporal discounting method. A five-step framework is proposed and reported in detail based on the technical methods and procedures needed to perform temporal discounting in life cycle inventory analysis. Challenges and possible solutions are also identified and discussed for the technical procedure and scientific accomplishment of each step within the framework. - Highlights: • A critical review for the temporal homogeneity problem of life cycle inventory data • A theoretical framework for performing temporal discounting on inventory data • Methods provided to accomplish each step of the temporal discounting framework
Selenide isotope generator for the Galileo mission. Reliability program plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1978-10-01
The reliability program plan for the Selenide Isotope Generator (SIG) program is presented. It delineates the specific tasks that will be accomplished by Teledyne Energy Systems and its suppliers during design, development, fabrication and test of deliverable Radioisotopic Thermoelectric Generators (RTG), Electrical Heated Thermoelectric Generators (ETG) and associated Ground Support Equipment (GSE). The Plan is formulated in general accordance with procedures specified in DOE Reliability Engineering Program Requirements Publication No. SNS-2, dated June 17, 1974. The Reliability Program Plan presented herein defines the total reliability effort without further reference to Government Specifications. The reliability tasks to be accomplished are delineated herein and become the basis for contract compliance to the extent specified in the SIG contract Statement of Work.
Robotic equipment malfunction during robotic prostatectomy: a multi-institutional study.
Lavery, Hugh J; Thaly, Rahul; Albala, David; Ahlering, Thomas; Shalhav, Arieh; Lee, David; Fagin, Randy; Wiklund, Peter; Dasgupta, Prokar; Costello, Anthony J; Tewari, Ashutosh; Coughlin, Geoff; Patel, Vipul R
2008-09-01
Robotic-assisted laparoscopic prostatectomy (RALP) is growing in popularity as a treatment option for prostate cancer. As a new technology, little is known regarding the reliability of the da Vinci robotic system. Intraoperative robotic equipment malfunction may force the surgeon to convert the procedure to an open or pure laparoscopic procedure, or possibly even abort the procedure. We report the first large-scale, multi-institutional review of robotic equipment malfunction. A questionnaire was designed to evaluate the rate of perioperative robotic malfunction during RALP. High-volume, experienced surgeons were asked to complete this evaluation based on the analysis of their data. Questions included the overall number of RALPs performed, the number of equipment malfunctions, the number of procedures that had to be converted or aborted, and the part of the robotic system that malfunctioned. Eleven institutions participated in the study with a median surgeon volume of 700 cases, accounting for a total case volume of 8240. Critical failure occurred in 34 cases (0.4%) leading to the cancellation of 24 cases prior to the procedure, and the conversion to two laparoscopic and eight open procedures. The most common components of the robot to malfunction were the arms and optical system. Critical robotic equipment malfunction is extremely rare in institutions that perform high volumes of RALPs, with a nonrecoverable malfunction rate of only 0.4%.
Caciagli, P; Verderio, A
2003-06-30
Several aspects of enzyme-linked immunosorbent assay (ELISA) procedures and data analysis have been examined in an attempt to find a rapid and reliable method for discriminating between 'positive' and 'negative' results when testing a large number of samples. A layout of ELISA plates was designed to reduce uncontrolled variation and to optimize the number of negative and positive controls. A transformation using the fourth root (A(1/4)) of the optical density readings corrected for the blank (A) stabilized the variance of most ELISA data examined. Transformed A values were used to calculate the true limits, at a set protection level, for false positive (C) and false negative (D). Methods are discussed to reduce the number of undifferentiated samples, i.e. the samples with response falling between C and D. The whole procedure was set up for use with an electronic spreadsheet. With the addition of a few instructions of the type 'if … then … else' in the spreadsheet, the ELISA results were obtained in the simple trichotomous form 'negative/undefined/positive'. This allowed rapid analysis of more than 1100 maize samples tested for the presence of seven aphid-borne viruses, in fact almost 8000 ELISA samples.
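The spreadsheet logic described above, a fourth-root transformation of the blank-corrected absorbance followed by comparison against the limits C and D, can be sketched as follows. The limit values below are hypothetical placeholders, and the assignment of "negative" below C and "positive" above D is an assumption; in the paper, C and D are derived from the plate controls at a set protection level:

```python
def classify_elisa(a_values, c_limit, d_limit):
    """Label blank-corrected absorbances A as negative/undefined/positive.

    Applies the variance-stabilizing fourth-root transform, then compares
    the transformed value with the limits C and D (hypothetical here):
    below C -> negative, above D -> positive, in between -> undefined.
    """
    labels = []
    for a in a_values:
        t = max(a, 0.0) ** 0.25  # A**(1/4), clipped at zero
        if t < c_limit:
            labels.append("negative")
        elif t > d_limit:
            labels.append("positive")
        else:
            labels.append("undefined")
    return labels
```

The middle band reproduces the trichotomous 'negative/undefined/positive' output, with the undifferentiated samples flagged for retesting.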
Computational approaches to standard-compliant biofilm data for reliable analysis and integration.
Sousa, Ana Margarida; Ferreira, Andreia; Azevedo, Nuno F; Pereira, Maria Olivia; Lourenço, Anália
2012-12-01
The study of microorganism consortia, also known as biofilms, is associated with a number of applications in biotechnology, ecotechnology and clinical domains. Nowadays, biofilm studies are heterogeneous and data-intensive, encompassing different levels of analysis. Computational modelling of biofilm studies has thus become a requirement to make sense of these vast and ever-expanding biofilm data volumes. The rationale of the present work is a machine-readable format for representing biofilm studies and supporting biofilm data interchange and data integration. This format is supported by the Biofilm Science Ontology (BSO), the first ontology on biofilms information. The ontology is decomposed into a number of areas of interest, namely: the Experimental Procedure Ontology (EPO), which describes biofilm experimental procedures; the Colony Morphology Ontology (CMO), which characterises microorganism colonies morphologically; and other modules concerning biofilm phenotype, antimicrobial susceptibility and virulence traits. The overall objective behind BSO is to develop semantic resources to capture, represent and share data on biofilms and related experiments in a regularized manner. Furthermore, the present work also introduces a framework to assist biofilm data interchange and analysis - BiofOmics (http://biofomics.org) - and a public repository on colony morphology signatures - MorphoCol (http://stardust.deb.uminho.pt/morphocol).
ERIC Educational Resources Information Center
Harris, Larry P.; Wolf, Steven R.
1979-01-01
The article focuses on the controversy over norm-referenced versus criterion-referenced measures (CRM) in the assessment of learning disorders. The authors contend that while the reliability of CRMs is generally indisputable, the validity of measures designed from local curricula is still dependent on the intuitive judgments of teachers. (Author/SBH)
ERIC Educational Resources Information Center
Williams, Lunetta M.; Hall, Katrina W.; Hedrick, Wanda B.; Lamkin, Marcia; Abendroth, Jennifer
2013-01-01
The purpose of the present study was to develop an instrument to measure reading during in-school independent reading (ISIR). Procedures to establish validity and reliability of the instrument included videotaping and observing students during ISIR, gathering feedback from literacy experts, establishing interrater reliability, crosschecking…
ERIC Educational Resources Information Center
Romer, Natalie; Merrell, Kenneth W.
2013-01-01
This study focused on evaluating the temporal stability of self-reported and teacher-reported perceptions of students' social and emotional skills and assets. We used a test-retest reliability procedure over repeated administrations of the child, adolescent, and teacher versions of the "Social-Emotional Assets and Resilience Scales".…
Evaluation of Scale Reliability with Binary Measures Using Latent Variable Modeling
ERIC Educational Resources Information Center
Raykov, Tenko; Dimitrov, Dimiter M.; Asparouhov, Tihomir
2010-01-01
A method for interval estimation of scale reliability with discrete data is outlined. The approach is applicable with multi-item instruments consisting of binary measures, and is developed within the latent variable modeling methodology. The procedure is useful for evaluation of consistency of single measures and of sum scores from item sets…
Rönspies, Jelena; Schmidt, Alexander F; Melnikova, Anna; Krumova, Rosina; Zolfagari, Asadeh; Banse, Rainer
2015-07-01
The present study was conducted to validate an adaptation of the Implicit Relational Assessment Procedure (IRAP) as an indirect latency-based measure of sexual orientation. Furthermore, reliability and criterion validity of the IRAP were compared to two established indirect measures of sexual orientation: a Choice Reaction Time task (CRT) and a Viewing Time (VT) task. A sample of 87 heterosexual and 35 gay men completed all three indirect measures in an online study. The IRAP and the VT predicted sexual orientation nearly perfectly. Both measures also showed a considerable amount of convergent validity. Reliabilities (internal consistencies) reached satisfactory levels. In contrast, the CRT did not tap into sexual orientation in the present study. In sum, the VT measure performed best, with the IRAP showing only slightly lower reliability and criterion validity, whereas the CRT did not yield any evidence of reliability or criterion validity in the present research. The results were discussed in the light of specific task properties of the indirect latency-based measures (task-relevance vs. task-irrelevance).
CTEPP STANDARD OPERATING PROCEDURE FOR TRANSLATING VIDEOTAPES OF CHILD ACTIVITIES (SOP-4.13)
The EPA will conduct a two-day video translation workshop to demonstrate to coders the procedures for translating the activity patterns of preschool children on videotape. The coders will be required to pass reliability tests to successfully complete the training requirements of ...
Lee, Yi-Hui; Salman, Ali
2016-11-01
Spirituality and spiritual well-being have emerged as important indicators of one's quality of life and health outcomes. Nursing as a profession is concerned with a holistic approach to improving health and overall well-being. To evaluate the outcomes of holistic nursing interventions, using valid and reliable instruments to assess spiritual well-being becomes necessary. There is a lack of instruments for measuring spiritual well-being in Chinese populations, and little has been known about the feasibility of using the Spirituality Index of Well-Being (SIWB) in Taiwanese elders. The purpose of this cross-sectional study was to evaluate the use of the translated Chinese version of the Spirituality Index of Well-Being (SIWB-C) with Taiwanese elders. A total of 150 individuals who were 65 years old or older and living in southern Taiwan were recruited from a public community center. A four-step procedure was used to translate the English version of the SIWB to the traditional Chinese language. Internal consistency, factor analysis, and correlation coefficients were used to evaluate the reliability and validity of the SIWB-C. The SIWB-C demonstrated high internal consistency, with a Cronbach's alpha of .95. The construct validity of the SIWB-C was supported by factor analysis and by significant correlations with its subscales and the CES-D scale. The psychometric analysis indicates that the SIWB-C is a valid and reliable instrument for measuring spiritual well-being, and it provides a feasible approach for assessing Taiwanese elders' spiritual well-being in the future. Copyright © 2016 Elsevier Inc. All rights reserved.
Factor structure of the Childhood Autism Rating Scale as per DSM-5.
Park, Eun-Young; Kim, Joungmin
2016-02-01
The DSM-5 recently proposed new diagnostic criteria for autism spectrum disorder (ASD). Although many new or updated tools have been developed since the DSM-IV was published in 1994, the Childhood Autism Rating Scale (CARS) has been used consistently in ASD diagnosis and research due to its technical adequacy, cost-effectiveness, and practicality. Additionally, items in the CARS were not altered following the release of the revised DSM-IV because the CARS factor structure was found to be consistent with the revised criteria after factor analysis. For that reason, in this study confirmatory factor analysis was used to identify the factor structure of the CARS. Participants (n = 150) consisted of children with an ASD diagnosis or who met the criteria for broader autism or emotional/behavior disorder with comorbid disorders such as attention-deficit hyperactivity disorder, bipolar disorder, and intellectual or developmental disabilities. Previous studies used one-, two-, and four-factor models, all of which we examined to confirm the best-fit model on confirmatory factor analysis. Appropriate comparative fit indices and root mean square errors were obtained for all of the models. The two-factor model, based on DSM-5 criteria, was the most valid and reliable. The inter-item consistency of the CARS was 0.926 and demonstrated adequate reliability, thereby supporting the validity and reliability of the two-factor model of the CARS. Although the CARS was developed prior to the introduction of the DSM-5, its psychometric properties, conceptual relevance, and flexible administration procedures support its continued role as a screening device in the diagnostic decision-making process. © 2015 Japan Pediatric Society.
Reliable samples of quasars and hot stars from a spectrophotometric survey of the U.S. catalogs
NASA Technical Reports Server (NTRS)
Mitchell, Kenneth J.
1987-01-01
The U.S. survey for blue- and ultraviolet-excess starlike objects is reviewed, focusing on the features which have contributed to its accuracy. The spectrophotometric survey is described in terms of the observational setup and procedures. It is suggested that the survey has produced reliably classified samples of quasars and hot evolved stars and that the procedures used in the study provide a means of deriving distance and luminosity information about these objects. Several cumulative number counts and spectra of a DA white dwarf and a quasar with prominent C IV and C III emission are given as examples.
Reliability of assessment of critical thinking.
Allen, George D; Rubenfeld, M Gaie; Scheffer, Barbara K
2004-01-01
Although clinical critical thinking skills and behaviors are among the most highly sought characteristics of BSN graduates, they remain among the most difficult to teach and assess. Three reasons for this difficulty have been (1) lack of agreement among nurse educators as to the definition of critical thinking, (2) low correlation between clinical critical thinking and existing standardized tests of critical thinking, and (3) poor reliability in scoring other evidence of critical thinking, such as essays. This article first describes a procedure for teaching critical thinking that is based on a consensus definition of 17 dimensions of critical thinking in clinical nursing practice. This procedure is easily taught to nurse educators and can be flexibly and inexpensively incorporated into any undergraduate nursing curriculum. We then show that students' understanding and use of these dimensions can be assessed with high reliability (coefficient alpha between 0.7 and 0.8) and with great time efficiency for both teachers and students. By using this procedure iteratively across semesters, students can develop portfolios demonstrating attainment of competence in clinical critical thinking, and educators can obtain important summary evaluations of the degree to which their graduates have succeeded in this important area of their education.
Intradiscal Pressure Changes during Manual Cervical Distraction: A Cadaveric Study
Gudavalli, M. R.; Potluri, T.; Carandang, G.; Havey, R. M.; Voronov, L. I.; Cox, J. M.; Rowell, R. M.; Kruse, R. A.; Joachim, G. C.; Patwardhan, A. G.; Henderson, C. N. R.; Goertz, C.
2013-01-01
The objective of this study was to measure intradiscal pressure (IDP) changes in the lower cervical spine during a manual cervical distraction (MCD) procedure. Incisions were made anteriorly, and pressure transducers were inserted into each nucleus at the lower cervical discs. Four skilled doctors of chiropractic (DCs) performed the MCD procedure on nine specimens in the prone position with contacts at the C5 or C6 vertebrae with the headpiece in different positions. IDP changes, traction forces, and manually applied posterior-to-anterior forces were analyzed using descriptive statistics. IDP decreases were observed during the MCD procedure at all lower cervical levels: C4-C5, C5-C6, and C6-C7. The mean IDP decreases were as high as 168.7 kPa. Mean traction forces were as high as 119.2 N. Posterior-to-anterior forces applied during manual traction were as high as 82.6 N. Intraclinician reliability for IDP decrease was high for all four DCs. While two DCs had high intraclinician reliability for applied traction force, the other two DCs demonstrated only moderate reliability. IDP decreases were greatest during moving flexion and traction. They were progressively less pronounced with neutral traction, fixed flexion and traction, and generalized traction. PMID:24023587