Sample records for human reliability analysis

  1. 75 FR 5633 - Notice of Extension of Comment Period for NUREG-1921, EPRI/NRC-RES Fire Human Reliability...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-03

    ..., EPRI/NRC-RES Fire Human Reliability Analysis Guidelines, Draft Report for Comment AGENCY: Nuclear... Human Reliability Analysis Guidelines, Draft Report for Comment'' (December 11, 2009; 74 FR 65810). This... Human Reliability Analysis Guidelines'' is available electronically under ADAMS Accession Number...

  2. Culture Representation in Human Reliability Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David Gertman; Julie Marble; Steven Novack

    Understanding human-system response is critical to being able to plan and predict mission success in the modern battlespace. Commonly, human reliability analysis has been used to predict failures of human performance in complex, critical systems. However, most human reliability methods fail to take culture into account. This paper takes an easily understood, state-of-the-art human reliability analysis method and extends it to account for the influence of culture, including acceptance of new technology, upon performance. The cultural parameters used to modify the human reliability analysis were determined from two standard industry approaches to cultural assessment: Hofstede's (1991) cultural factors and Davis' (1989) technology acceptance model (TAM). The result is called the Culture Adjustment Method (CAM). An example is presented that (1) reviews human reliability assessment with and without cultural attributes for a Supervisory Control and Data Acquisition (SCADA) system attack, (2) demonstrates how country-specific information can be used to increase the realism of HRA modeling, and (3) discusses the differences in human error probability estimates arising from cultural differences.

  3. Human Reliability Analysis in Support of Risk Assessment for Positive Train Control

    DOT National Transportation Integrated Search

    2003-06-01

    This report describes an approach to evaluating the reliability of human actions that are modeled in a probabilistic risk assessment : (PRA) of train control operations. This approach to human reliability analysis (HRA) has been applied in the case o...

  4. Method of Testing and Predicting Failures of Electronic Mechanical Systems

    NASA Technical Reports Server (NTRS)

    Iverson, David L.; Patterson-Hine, Frances A.

    1996-01-01

    A method employing a knowledge base of human expertise, built on a reliability model analysis and implemented as diagnostic routines, is disclosed. The reliability analysis comprises digraph models that determine target events created by hardware failures, human actions, and other factors affecting system operation. The reliability analysis contains a wealth of human-expertise information that is used to build automatic diagnostic routines and provides a knowledge base that can be used to solve other artificial intelligence problems.
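
    The digraph evaluation described above amounts to reachability: a target event occurs if some basic failure (hardware fault or human action) can propagate to it along the digraph's edges. A minimal sketch with invented event names, not the patent's actual model:

```python
from collections import deque

def reachable_events(edges, basic_failures):
    """Breadth-first propagation through a failure digraph: return every
    event reachable from the given set of basic failures."""
    adj = {}
    for cause, effect in edges:
        adj.setdefault(cause, []).append(effect)
    seen = set(basic_failures)
    queue = deque(basic_failures)
    while queue:
        node = queue.popleft()
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Illustrative digraph: causes point to effects
edges = [
    ("pump_failure", "loss_of_coolant_flow"),
    ("operator_misses_alarm", "loss_of_coolant_flow"),
    ("loss_of_coolant_flow", "overtemperature_trip"),
]
events = reachable_events(edges, {"pump_failure"})
# the target event "overtemperature_trip" is reachable from "pump_failure"
```

A diagnostic routine can run the same search in reverse, from an observed target event back to its candidate causes.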

  5. An Evidential Reasoning-Based CREAM to Human Reliability Analysis in Maritime Accident Process.

    PubMed

    Wu, Bing; Yan, Xinping; Wang, Yang; Soares, C Guedes

    2017-10-01

    This article proposes a modified cognitive reliability and error analysis method (CREAM) for estimating the human error probability in the maritime accident process on the basis of an evidential reasoning approach. This modified CREAM is developed to precisely quantify the linguistic variables of the common performance conditions and to overcome the problem of ignoring the uncertainty caused by incomplete information in the existing CREAM models. Moreover, this article views maritime accident development from the sequential perspective, where a scenario- and barrier-based framework is proposed to describe the maritime accident process. This evidential reasoning-based CREAM approach, together with the proposed accident development framework, is applied to human reliability analysis of a ship capsizing accident. It will facilitate subjective human reliability analysis in different engineering systems where uncertainty exists in practice. © 2017 Society for Risk Analysis.
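
    For context, the basic CREAM quantification that the article modifies maps ratings of nine common performance conditions (CPCs) to a control mode, each with a generic HEP interval. The interval bounds below follow Hollnagel's published tables; the rating-to-mode mapping is a simplified linear approximation of his diagram, not the exact region boundaries:

```python
# Generic HEP intervals per control mode (Hollnagel's basic CREAM)
CONTROL_MODE_HEP = {          # (lower, upper) bounds on error probability
    "strategic":     (0.5e-5, 1e-2),
    "tactical":      (1e-3, 1e-1),
    "opportunistic": (1e-2, 0.5),
    "scrambled":     (1e-1, 1.0),
}

def control_mode(improved: int, reduced: int) -> str:
    """Map counts of reliability-improving vs. reliability-reducing CPC
    ratings (out of the nine CPCs) to a control mode. Simplified linear
    approximation of Hollnagel's diagram."""
    balance = improved - reduced
    if balance > 3:
        return "strategic"
    if balance >= 0:
        return "tactical"
    if balance >= -4:
        return "opportunistic"
    return "scrambled"

# e.g. two CPCs rated "improved", six rated "reduced" (illustrative):
mode = control_mode(improved=2, reduced=6)     # "opportunistic"
low, high = CONTROL_MODE_HEP[mode]
```

The article's evidential-reasoning extension replaces the crisp CPC ratings with belief degrees over the linguistic variables, which this sketch does not attempt.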

  6. Probabilistic risk assessment for a loss of coolant accident in McMaster Nuclear Reactor and application of reliability physics model for modeling human reliability

    NASA Astrophysics Data System (ADS)

    Ha, Taesung

    A probabilistic risk assessment (PRA) was conducted for a loss of coolant accident (LOCA) in the McMaster Nuclear Reactor (MNR). A level 1 PRA was completed, including event sequence modeling, system modeling, and quantification. To support the quantification of the accident sequences identified, data analysis using the Bayesian method and human reliability analysis (HRA) using the accident sequence evaluation procedure (ASEP) approach were performed. Since human performance in research reactors differs significantly from that in power reactors, a time-oriented HRA model (reliability physics model) was applied to estimate the human error probability (HEP) for core relocation. This model is based on two competing random variables: phenomenological time and performance time. Response surface and direct Monte Carlo simulation with Latin hypercube sampling were applied to estimate the phenomenological time, whereas the performance time was obtained from interviews with operators. An appropriate probability distribution for the phenomenological time was assigned by statistical goodness-of-fit tests. The HEP for core relocation was then estimated from these two competing quantities, and the sensitivity of each probability distribution in the human reliability estimate was investigated. To quantify the uncertainty in the predicted HEPs, a Bayesian approach was selected for its ability to incorporate uncertainties in the model itself and in its parameters. The HEP from the time-oriented model was compared with that from the ASEP approach, and both results were used to evaluate the sensitivity of alternative human reliability modeling for the manual core relocation in the LOCA risk model. This exercise demonstrated the applicability of a reliability physics model supplemented with a Bayesian approach for modeling human reliability, and its potential usefulness for quantifying model uncertainty as a sensitivity analysis in the PRA model.
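
    The competing-variables formulation lends itself to a small Monte Carlo sketch: the HEP is the probability that the operator's performance time exceeds the phenomenological time available. The lognormal distributions and parameters below are illustrative assumptions, not those fitted in the thesis:

```python
import random

def hep_monte_carlo(sample_phenom, sample_perform, n=100_000, seed=0):
    """Estimate HEP = P(performance time > phenomenological time) by
    sampling the two competing random variables."""
    rng = random.Random(seed)
    failures = sum(
        1 for _ in range(n) if sample_perform(rng) > sample_phenom(rng)
    )
    return failures / n

# Illustrative lognormal fits (assumed): ~20 min available, ~10 min needed
hep = hep_monte_carlo(
    lambda rng: rng.lognormvariate(3.0, 0.3),  # phenomenological time (min)
    lambda rng: rng.lognormvariate(2.3, 0.5),  # operator performance time (min)
)
```

Swapping the sampling lambdas is all that is needed to probe the sensitivity of the HEP to the assigned distributions, which mirrors the sensitivity study the abstract describes.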

  7. Fifty Years of THERP and Human Reliability Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronald L. Boring

    2012-06-01

    In 1962 at a Human Factors Society symposium, Alan Swain presented a paper introducing a Technique for Human Error Rate Prediction (THERP). This was followed in 1963 by a Sandia Laboratories monograph outlining basic human error quantification using THERP and, in 1964, by a special journal edition of Human Factors on quantification of human performance. Throughout the 1960s, Swain and his colleagues focused on collecting human performance data for the Sandia Human Error Rate Bank (SHERB), primarily in connection with supporting the reliability of nuclear weapons assembly in the US. In 1969, Swain met with Jens Rasmussen of Risø National Laboratory and discussed the applicability of THERP to nuclear power applications. By 1975, in WASH-1400, Swain had articulated the use of THERP for nuclear power applications, and the approach was finalized in the watershed publication of NUREG/CR-1278 in 1983. THERP is now 50 years old, and remains the best known and most widely used HRA method. In this paper, the author discusses the history of THERP, based on published reports and personal communication and interviews with Swain. The author also outlines the significance of THERP. The foundations of human reliability analysis are found in THERP: human failure events, task analysis, performance shaping factors, human error probabilities, dependence, event trees, recovery, and pre- and post-initiating events were all introduced in THERP. While THERP is not without its detractors, and it is showing signs of its age in the face of newer technological applications, the longevity of THERP is a testament to its tremendous significance. THERP started the field of human reliability analysis. This paper concludes with a discussion of THERP in the context of newer methods, which can be seen as extensions of or departures from Swain's pioneering work.

  8. Tackling reliability and construct validity: the systematic development of a qualitative protocol for skill and incident analysis.

    PubMed

    Savage, Trevor Nicholas; McIntosh, Andrew Stuart

    2017-03-01

    It is important to understand factors contributing to and directly causing sports injuries in order to improve the effectiveness and safety of sports skills. The characteristics of injury events must be evaluated and described meaningfully and reliably. However, many complex skills cannot be effectively investigated quantitatively because of ethical, technological and validity considerations. Increasingly, qualitative methods are being used to investigate human movement for research purposes, but there are concerns about the reliability and measurement bias of such methods. Using the tackle in Rugby union as an example, we outline a systematic approach for developing a skill analysis protocol with a focus on improving objectivity, validity and reliability. Characteristics for analysis were selected using qualitative analysis, biomechanical theoretical models, and epidemiological and coaching literature. An expert panel comprising subject matter experts provided feedback, and the inter-rater reliability of the protocol was assessed using ten trained raters. The inter-rater reliability results were reviewed by the expert panel and the protocol was revised and assessed in a second inter-rater reliability study. Mean agreement in the second study improved and was comparable with other studies that have reported inter-rater reliability of qualitative analysis of human movement (52-90% agreement; ICC between 0.6 and 0.9).

  9. IDHEAS – A NEW APPROACH FOR HUMAN RELIABILITY ANALYSIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    G. W. Parry; J.A Forester; V.N. Dang

    2013-09-01

    This paper describes a method, IDHEAS (Integrated Decision-Tree Human Event Analysis System), that has been developed jointly by the US NRC and EPRI as an improved approach to Human Reliability Analysis (HRA), based on an understanding of the cognitive mechanisms and performance influencing factors (PIFs) that affect operator responses. The paper describes the various elements of the method, namely the performance of a detailed cognitive task analysis documented in a crew response tree (CRT), the development of the associated time-line to identify the critical tasks, i.e., those whose failure results in a human failure event (HFE), and an approach to quantification based on explanations of why the HFE might occur.

  10. Stochastic Models of Human Errors

    NASA Technical Reports Server (NTRS)

    Elshamy, Maged; Elliott, Dawn M. (Technical Monitor)

    2002-01-01

    Humans play an important role in the overall reliability of engineering systems. More often than not, accidents and system failures are traced to human errors. Therefore, in order to have meaningful system risk analysis, the reliability of the human element must be taken into consideration. Describing the human error process with mathematical models is key to analyzing contributing factors. The objective of this research effort is therefore to establish stochastic models, substantiated by a sound theoretical foundation, to address the occurrence of human errors in the processing of the Space Shuttle.
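
    One of the simplest stochastic models of this kind treats error occurrence as a Poisson process with a constant per-task error rate; the rate and task count below are purely illustrative, not values from the study:

```python
import math

def prob_at_least_one_error(rate_per_task: float, n_tasks: int) -> float:
    """P(at least one error) under a homogeneous Poisson model:
    1 - exp(-lambda), with lambda = rate * number of tasks."""
    lam = rate_per_task * n_tasks
    return 1.0 - math.exp(-lam)

# e.g. an assumed 1-in-1000 error rate over 500 processing tasks
p = prob_at_least_one_error(1e-3, 500)   # ≈ 0.39
```

Richer models (time-varying rates, error clustering) depart from this baseline, but the constant-rate case gives a useful sanity check on observed error counts.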

  11. The Importance of Human Reliability Analysis in Human Space Flight: Understanding the Risks

    NASA Technical Reports Server (NTRS)

    Hamlin, Teri L.

    2010-01-01

    HRA is a method used to describe, qualitatively and quantitatively, the occurrence of human failures in the operation of complex systems that affect availability and reliability. Modeling human actions with their corresponding failure in a PRA (Probabilistic Risk Assessment) provides a more complete picture of the risk and risk contributions. A high quality HRA can provide valuable information on potential areas for improvement, including training, procedural, equipment design and need for automation.

  12. Human Reliability and the Cost of Doing Business

    NASA Technical Reports Server (NTRS)

    DeMott, Diana

    2014-01-01

    Most businesses recognize that people will make mistakes and assume errors are just part of the cost of doing business, but do they need to be? Companies with high risks, or major consequences, should consider the effects of human error. Human errors have caused costly failures and workplace injuries in a variety of industries: airline mishaps, medical malpractice, medication administration errors, and major oil spills have all been blamed on human error. A technique to mitigate or even eliminate some of these costly human errors is Human Reliability Analysis (HRA). Various methodologies are available to perform Human Reliability Assessments, ranging from identifying the most likely areas of concern to detailed assessments in which human error failure probabilities are calculated. Which methodology to use depends on a variety of factors, including: 1) how people act and react in different industries, and differing expectations based on industry standards; 2) factors that influence how human errors could occur, such as tasks, tools, environment, workplace, support, training and procedures; 3) the type and availability of data; and 4) how the industry views risk and reliability influences (types of emergencies, contingencies and routine tasks versus cost-based concerns). A Human Reliability Assessment should be the first step toward reducing, mitigating or eliminating costly mistakes or catastrophic failures. Using Human Reliability techniques to identify and classify human error risks gives a company more opportunities to mitigate or eliminate these risks and prevent costly failures.

  13. The Challenges of Credible Thermal Protection System Reliability Quantification

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.

    2013-01-01

    The paper discusses several of the challenges associated with developing a credible reliability estimate for a human-rated crew capsule thermal protection system. The process of developing such a credible estimate is subject to the quantification, modeling and propagation of numerous uncertainties within a probabilistic analysis. The development of specific investment recommendations, to improve the reliability prediction, among various potential testing and programmatic options is then accomplished through Bayesian analysis.
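
    The Bayesian machinery for sharpening a reliability estimate as test evidence arrives can be sketched with a conjugate Beta-Binomial update; the prior and test counts below are invented for illustration, not the paper's actual model:

```python
def beta_update(alpha: float, beta: float, failures: int, trials: int):
    """Conjugate Bayesian update of a Beta(alpha, beta) prior on the
    failure probability after observing `failures` in `trials` tests."""
    return alpha + failures, beta + (trials - failures)

def posterior_mean(alpha: float, beta: float) -> float:
    """Posterior mean of the failure probability."""
    return alpha / (alpha + beta)

# Jeffreys prior Beta(0.5, 0.5), then 0 failures in 20 tests (hypothetical)
a, b = beta_update(0.5, 0.5, failures=0, trials=20)
mean_p_fail = posterior_mean(a, b)   # ≈ 0.024
```

Running the update for each candidate test campaign and comparing the resulting posterior spread is one simple way to rank investment options by expected reduction in reliability uncertainty.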

  14. Issues in benchmarking human reliability analysis methods : a literature review.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lois, Erasmia; Forester, John Alan; Tran, Tuan Q.

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessment (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  15. Issues in Benchmarking Human Reliability Analysis Methods: A Literature Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronald L. Boring; Stacey M. L. Hendrickson; John A. Forester

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessments (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study comparing and evaluating HRA methods in assessing operator performance in simulator experiments is currently underway. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  16. Advancing Usability Evaluation through Human Reliability Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronald L. Boring; David I. Gertman

    2005-07-01

    This paper introduces a novel augmentation to the current heuristic usability evaluation methodology. The SPAR-H human reliability analysis method was developed for categorizing human performance in nuclear power plants. Despite the specialized use of SPAR-H for safety-critical scenarios, the method also holds promise for use in commercial off-the-shelf software usability evaluations. The SPAR-H method shares task analysis underpinnings with human-computer interaction, and it can be easily adapted to incorporate usability heuristics as performance shaping factors. By assigning probabilistic modifiers to heuristics, it is possible to arrive at the usability error probability (UEP). This UEP is not a literal probability of error but nonetheless provides a quantitative basis for heuristic evaluation. When combined with a consequence matrix for usability errors, this method affords ready prioritization of usability issues.
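
    Under SPAR-H's quantification scheme, "assigning probabilistic modifiers to heuristics" amounts to scaling a nominal error probability by PSF multipliers, with the method's adjustment factor keeping the result a valid probability. A minimal sketch: the nominal HEPs are the published SPAR-H values, the heuristic multipliers are invented, and applying the adjustment uniformly (rather than only when three or more PSFs are negative, as SPAR-H specifies) is a simplification:

```python
from math import prod

NOMINAL_HEP = {"diagnosis": 1e-2, "action": 1e-3}   # SPAR-H nominal values

def usability_error_probability(task_type, heuristic_multipliers):
    """UEP sketch: nominal HEP scaled by the product of multipliers
    assigned to violated usability heuristics (treated as PSFs), with the
    SPAR-H adjustment factor applied so the result stays in [0, 1]."""
    nhep = NOMINAL_HEP[task_type]
    composite = prod(heuristic_multipliers)
    return (nhep * composite) / (nhep * (composite - 1) + 1)

# e.g. two violated heuristics on a diagnosis-like task (multipliers assumed)
uep = usability_error_probability("diagnosis", [10.0, 5.0])   # ≈ 0.336
```

With no violated heuristics (all multipliers 1.0) the expression collapses to the nominal HEP, matching the SPAR-H baseline case.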

  17. Human reliability in petrochemical industry: an action research.

    PubMed

    Silva, João Alexandre Pinheiro; Camarotto, João Alberto

    2012-01-01

    This paper aims to identify conflicts and gaps between the operators' strategies and actions and the organizational managerial approach to human reliability. In order to achieve these goals, the research approach adopted encompasses a literature review, mixing action research methodology and Ergonomic Workplace Analysis in field research. The results suggest that the studied company has a classical and mechanistic point of view, focusing on error identification and building barriers through procedures, checklists and other prescription alternatives to improve performance in the reliability area. However, the action research cycle made evident the fundamental role of the worker as an agent in the maintenance and construction of system reliability.

  18. Task Decomposition in Human Reliability Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boring, Ronald Laurids; Joe, Jeffrey Clark

    2014-06-01

    In the probabilistic safety assessments (PSAs) used in the nuclear industry, human failure events (HFEs) are determined as a subset of hardware failures, namely those hardware failures that could be triggered by human action or inaction. This approach is top-down, starting with hardware faults and deducing human contributions to those faults. Elsewhere, more traditionally human factors driven approaches would tend to look at opportunities for human errors first in a task analysis and then identify which of those errors is risk significant. The intersection of top-down and bottom-up approaches to defining HFEs has not been carefully studied. Ideally, both approaches should arrive at the same set of HFEs. This question remains central as human reliability analysis (HRA) methods are generalized to new domains like oil and gas. The HFEs used in nuclear PSAs tend to be top-down—defined as a subset of the PSA—whereas the HFEs used in petroleum quantitative risk assessments (QRAs) are more likely to be bottom-up—derived from a task analysis conducted by human factors experts. The marriage of these approaches is necessary in order to ensure that HRA methods developed for top-down HFEs are also sufficient for bottom-up applications.

  19. Tailoring a Human Reliability Analysis to Your Industry Needs

    NASA Technical Reports Server (NTRS)

    DeMott, D. L.

    2016-01-01

    Companies at risk of accidents caused by human error that result in catastrophic consequences span many industries: airlines, medicine, aerospace, oil, transportation, power production and manufacturing. Human Reliability Assessment (HRA) is used to analyze the inherent risk of human behavior or actions introducing errors into the operation of a system or process. These assessments can identify where errors are most likely to arise and the potential risks involved if they do occur. Using the basic concepts of HRA, an evolving group of methodologies is used to meet various industry needs. Determining which methodology or combination of techniques will provide a quality human reliability assessment is a key element in developing effective strategies for understanding and dealing with risks caused by human error. There are a number of concerns and difficulties in "tailoring" an HRA for different industries. Although a variety of HRA methodologies are available to analyze human error events, determining the most appropriate tools for the most useful results can depend on industry-specific cultures and requirements. Methodology selection may be based on a variety of factors, including: 1) how people act and react in different industries; 2) expectations based on industry standards; 3) factors that influence how human errors could occur, such as tasks, tools, environment, workplace, support, training and procedures; 4) the type and availability of data; 5) how the industry views risk and reliability; and 6) the types of emergencies, contingencies and routine tasks. Other considerations for methodology selection should be based on what information is needed from the assessment.
    If the principal concern is determining the primary risk factors contributing to potential human error, a more detailed analysis method may be employed; if the requirement is a numerical value for a probabilistic risk assessment, a quantification-oriented method is needed. Industries in which humans operate large equipment or transport systems (e.g., railroads or airlines) have more need to address the man-machine interface than medical workers administering medications. Human error occurs in every industry; in most cases the consequences are relatively benign, and occasionally even beneficial. In cases where the results can have disastrous consequences, using Human Reliability techniques to identify and classify the risk of human errors gives a company more opportunities to mitigate or eliminate these types of risks and prevent costly tragedies.

  20. Human Factors in Financial Trading: An Analysis of Trading Incidents.

    PubMed

    Leaver, Meghan; Reader, Tom W

    2016-09-01

    This study tests the reliability of a system (FINANS) to collect and analyze incident reports in the financial trading domain and is guided by a human factors taxonomy used to describe error in the trading domain. Research indicates the utility of applying human factors theory to understand error in finance, yet empirical research is lacking. We report on the development of the first system for capturing and analyzing human factors-related issues in operational trading incidents. In the first study, 20 incidents are analyzed by an expert user group against a referent standard to establish the reliability of FINANS. In the second study, 750 incidents are analyzed using distribution, mean, pathway, and associative analysis to describe the data. Kappa scores indicate that categories within FINANS can be reliably used to identify and extract data on human factors-related problems underlying trading incidents. Approximately 1% of trades (n = 750) lead to an incident. Slip/lapse (61%), situation awareness (51%), and teamwork (40%) were found to be the most common problems underlying incidents. For the most serious incidents, problems in situation awareness and teamwork were most common. We show that (a) experts in the trading domain can reliably and accurately code human factors in incidents, (b) 1% of trades incur error, and (c) poor teamwork skills and situation awareness underpin the most critical incidents. This research provides data crucial for ameliorating risk within financial trading organizations, with implications for regulation and policy. © 2016, Human Factors and Ergonomics Society.
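
    The kappa-based reliability check reported above can be computed directly. A minimal sketch of Cohen's kappa for two raters' categorical codings (the study's multi-rater analysis may use a different variant, and the incident categories below are invented):

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: chance-corrected agreement between two raters'
    categorical codings of the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    cats = set(labels_a) | set(labels_b)
    p_obs = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    p_exp = sum(
        (labels_a.count(c) / n) * (labels_b.count(c) / n) for c in cats
    )
    return (p_obs - p_exp) / (1 - p_exp)

# Two coders' labels for five hypothetical trading incidents
coder_1 = ["slip", "slip", "teamwork", "sa", "sa"]
coder_2 = ["slip", "lapse", "teamwork", "sa", "sa"]
k = cohens_kappa(coder_1, coder_2)   # ≈ 0.72
```

Kappa near 1 indicates agreement well beyond chance; values near 0 indicate agreement no better than chance, which is why raw percent agreement alone is not reported as evidence of reliability.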

  1. Reliability of drivers in urban intersections.

    PubMed

    Gstalter, Herbert; Fastenmeier, Wolfgang

    2010-01-01

    The concept of human reliability has been widely used in industrial settings by human factors experts to optimise the person-task fit. Reliability is estimated by the probability that a task will successfully be completed by personnel in a given stage of system operation. Human Reliability Analysis (HRA) is a technique used to calculate human error probabilities as the ratio of errors committed to the number of opportunities for that error. To transfer this notion to the measurement of car driver reliability, the following components are necessary: a taxonomy of driving tasks, a definition of correct behaviour in each of these tasks, a list of errors as deviations from the correct actions, and an adequate observation method to register errors and opportunities for these errors. Use of the SAFE task analysis procedure recently made it possible to derive driver errors directly from the normative analysis of behavioural requirements. Driver reliability estimates could be used to compare groups of tasks (e.g. different types of intersections with their respective regulations) as well as groups of drivers or individual drivers' aptitudes. This approach was tested in a field study with 62 drivers of different age groups. The subjects drove an instrumented car and had to complete an urban test route, the main features of which were 18 intersections representing six different driving tasks. The subjects were accompanied by two trained observers who recorded driver errors using standardized observation sheets. Results indicate that error indices often vary with both the age group of drivers and the type of driving task. The highest error indices occurred in the non-signalised intersection tasks and the roundabout, which matches the corresponding ratings of task complexity from the SAFE analysis.
A comparison of age groups clearly shows the disadvantage of older drivers, whose error indices in nearly all tasks are significantly higher than those of the other groups. The vast majority of these errors could be explained by high task load in the intersections, as they represent difficult tasks. The discussion shows how reliability estimates can be used in a constructive way to propose changes in car design, intersection layout and regulation as well as driver training.
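
    The arithmetic behind these error indices is the ratio definition given above; a sketch with invented counts (not the study's data):

```python
def human_error_probability(errors: int, opportunities: int) -> float:
    """HEP as the ratio of errors committed to opportunities for error."""
    if opportunities == 0:
        raise ValueError("no opportunities observed")
    return errors / opportunities

def reliability(errors: int, opportunities: int) -> float:
    """Reliability = probability of successful task completion."""
    return 1.0 - human_error_probability(errors, opportunities)

# e.g. 3 errors observed over 18 intersection passes (illustrative)
r = reliability(3, 18)   # ≈ 0.833
```

Computing the same ratio per task type or per age group, as the study does, is just a matter of partitioning the error and opportunity counts before dividing.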

  2. The Development of Dynamic Human Reliability Analysis Simulations for Inclusion in Risk Informed Safety Margin Characterization Frameworks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeffrey C. Joe; Diego Mandelli; Ronald L. Boring

    2015-07-01

    The United States Department of Energy is sponsoring the Light Water Reactor Sustainability program, which has the overall objective of supporting the near-term and the extended operation of commercial nuclear power plants. One key research and development (R&D) area in this program is the Risk-Informed Safety Margin Characterization pathway, which combines probabilistic risk simulation with thermohydraulic simulation codes to define and manage safety margins. The R&D efforts to date, however, have not included robust simulations of human operators, and how the reliability of human performance or lack thereof (i.e., human errors) can affect risk-margins and plant performance. This paper describes current and planned research efforts to address the absence of robust human reliability simulations and thereby increase the fidelity of simulated accident scenarios.

  3. Towards automatic Markov reliability modeling of computer architectures

    NASA Technical Reports Server (NTRS)

    Liceaga, C. A.; Siewiorek, D. P.

    1986-01-01

    The analysis and evaluation of reliability measures using time-varying Markov models is required for Processor-Memory-Switch (PMS) structures that have competing processes such as standby redundancy and repair, or renewal processes such as transient or intermittent faults. The task of generating these models is tedious and prone to human error due to the large number of states and transitions involved in any reasonable system. Therefore model formulation is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model formulation. This paper presents an overview of the Automated Reliability Modeling (ARM) program, under development at NASA Langley Research Center. ARM will accept as input a description of the PMS interconnection graph, the behavior of the PMS components, the fault-tolerant strategies, and the operational requirements. The output of ARM will be the reliability or availability Markov model formulated for direct use by evaluation programs. The advantages of such an approach are (a) utility to a large class of users, not necessarily expert in reliability analysis, and (b) a lower probability of human error in the computation.
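
    The kind of model ARM formulates can be illustrated with a small discrete-time chain; a real PMS model would have far more states, and ARM targets time-varying, continuous-time models. All states and rates here are invented for illustration:

```python
# 3-state reliability chain: healthy -> degraded (spare consumed) -> failed.
# Reliability at step t is the probability of not yet being absorbed in
# the failed state.

def step(dist, P):
    """One discrete time step: new distribution = dist · P."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# States: 0 = healthy, 1 = degraded, 2 = failed (absorbing). Per-step
# transition probabilities are illustrative, not from any real system.
P = [
    [0.995, 0.004, 0.001],
    [0.0,   0.990, 0.010],
    [0.0,   0.0,   1.0],
]

dist = [1.0, 0.0, 0.0]        # start healthy
for _ in range(100):          # evolve for 100 time steps
    dist = step(dist, P)

reliability_100 = 1.0 - dist[2]
```

Writing out even this 3-state `P` by hand shows why model formulation is the bottleneck ARM automates: state counts grow combinatorially with components, spares, and fault types.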

  4. Reliability Analysis and Standardization of Spacecraft Command Generation Processes

    NASA Technical Reports Server (NTRS)

    Meshkat, Leila; Grenander, Sven; Evensen, Ken

    2011-01-01

    - In order to reduce commanding errors that are caused by humans, we create an approach and corresponding artifacts for standardizing the command generation process and conducting risk management during the design and assurance of such processes.
    - The literature review conducted during the standardization process revealed that very few atomic-level human activities are associated with even a broad set of missions.
    - Applicable human reliability metrics for performing these atomic-level tasks are available.
    - The process for building a "Periodic Table" of Command and Control Functions as well as Probabilistic Risk Assessment (PRA) models is demonstrated.
    - The PRA models are executed using data from human reliability data banks.
    - The Periodic Table is related to the PRA models via Fault Links.

  5. Top-down and bottom-up definitions of human failure events in human reliability analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boring, Ronald Laurids

    2014-10-01

    In the probabilistic risk assessments (PRAs) used in the nuclear industry, human failure events (HFEs) are determined as a subset of hardware failures, namely those hardware failures that could be triggered by human action or inaction. This approach is top-down, starting with hardware faults and deducing human contributions to those faults. Elsewhere, more traditionally human factors driven approaches would tend to look at opportunities for human errors first in a task analysis and then identify which of those errors is risk significant. The intersection of top-down and bottom-up approaches to defining HFEs has not been carefully studied. Ideally, both approaches should arrive at the same set of HFEs. This question is crucial, however, as human reliability analysis (HRA) methods are generalized to new domains like oil and gas. The HFEs used in nuclear PRAs tend to be top-down—defined as a subset of the PRA—whereas the HFEs used in petroleum quantitative risk assessments (QRAs) often tend to be bottom-up—derived from a task analysis conducted by human factors experts. The marriage of these approaches is necessary in order to ensure that HRA methods developed for top-down HFEs are also sufficient for bottom-up applications.

  6. PROOF OF CONCEPT FOR A HUMAN RELIABILITY ANALYSIS METHOD FOR HEURISTIC USABILITY EVALUATION OF SOFTWARE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronald L. Boring; David I. Gertman; Jeffrey C. Joe

    2005-09-01

    An ongoing issue within human-computer interaction (HCI) is the need for simplified or “discount” methods. The current economic slowdown has necessitated innovative methods that are results driven and cost effective. The myriad methods of design and usability are currently being cost-justified, and new techniques are actively being explored that meet current budgets and needs. Recent efforts in human reliability analysis (HRA) are highlighted by the ten-year development of the Standardized Plant Analysis Risk HRA (SPAR-H) method. The SPAR-H method has been used primarily for determining human-centered risk at nuclear power plants. The SPAR-H method, however, shares task analysis underpinnings with HCI. Despite this methodological overlap, there is currently no HRA approach deployed in heuristic usability evaluation. This paper presents an extension of the existing SPAR-H method to be used as part of heuristic usability evaluation in HCI.
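
The SPAR-H quantification this entry builds on can be sketched briefly: a nominal human error probability (NHEP; 0.01 for diagnosis tasks, 0.001 for action tasks) is combined with performance shaping factor (PSF) multipliers, with an adjustment that keeps the result a valid probability when the composite PSF is large. The specific multiplier values in the example below are hypothetical, not taken from the SPAR-H tables:

```python
# Sketch of a SPAR-H-style HEP calculation. Nominal HEPs follow the
# published method; the example PSF multipliers are hypothetical.

NOMINAL_HEP = {"diagnosis": 0.01, "action": 0.001}

def spar_h_hep(task_type, psf_multipliers):
    """Combine a nominal HEP with PSF multipliers.

    SPAR-H's adjustment keeps the result a valid probability:
        HEP = NHEP * PSF / (NHEP * (PSF - 1) + 1)
    """
    nhep = NOMINAL_HEP[task_type]
    composite = 1.0
    for m in psf_multipliers:
        composite *= m
    hep = nhep * composite / (nhep * (composite - 1.0) + 1.0)
    return min(hep, 1.0)

# Example: diagnosis task with high stress (x2) and a poor HMI (x10).
print(round(spar_h_hep("diagnosis", [2.0, 10.0]), 3))  # → 0.168
```

Note that without the adjustment in the denominator, a large composite PSF could push the product above 1.0, which is why the formula is used rather than plain multiplication.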

  7. On Space Exploration and Human Error: A Paper on Reliability and Safety

    NASA Technical Reports Server (NTRS)

    Bell, David G.; Maluf, David A.; Gawdiak, Yuri

    2005-01-01

    NASA space exploration should largely address a problem class in reliability and risk management stemming primarily from human error, system risk, and multi-objective trade-off analysis, by conducting research into system complexity, risk characterization and modeling, and system reasoning. In general, in every mission we can distinguish risk in three possible ways: a) known-known, b) known-unknown, and c) unknown-unknown. It is almost certain that space exploration will experience known or unknown risks similar to those embedded in the Apollo, Shuttle, or Station missions unless NASA alters how it perceives and manages safety and reliability.

  8. A Research Roadmap for Computation-Based Human Reliability Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boring, Ronald; Mandelli, Diego; Joe, Jeffrey

    2015-08-01

    The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermal-hydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodels for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing modeling uncertainty found in current plant risk models.

  9. The Use Of Computational Human Performance Modeling As Task Analysis Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacques Hugo; David Gertman

    2012-07-01

    During a review of the Advanced Test Reactor safety basis at the Idaho National Laboratory, human factors engineers identified ergonomic and human reliability risks involving the inadvertent exposure of a fuel element to the air during manual fuel movement and inspection in the canal. There were clear indications that these risks increased the probability of human error and possible severe physical outcomes to the operator. In response to this concern, a detailed study was conducted to determine the probability of the inadvertent exposure of a fuel element. Due to practical and safety constraints, the task network analysis technique was employed to study the work procedures at the canal. Discrete-event simulation software was used to model the entire procedure as well as the salient physical attributes of the task environment, such as distances walked, the effect of dropped tools, the effect of hazardous body postures, and physical exertion due to strenuous tool handling. The model also allowed analysis of the effect of cognitive processes such as visual perception demands, auditory information and verbal communication. The model made it possible to obtain reliable predictions of operator performance and workload estimates. It was also found that operator workload as well as the probability of human error in the fuel inspection and transfer task were influenced by the concurrent nature of certain phases of the task and the associated demand on cognitive and physical resources. More importantly, it was possible to determine with reasonable accuracy the stages as well as physical locations in the fuel handling task where operators would be most at risk of losing their balance and falling into the canal. The model also provided sufficient information for a human reliability analysis that indicated that the postulated fuel exposure accident was less than credible.
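
The task-network simulation idea described in this entry can be illustrated with a minimal Monte Carlo sketch: each step in a serial task network gets an assumed completion-time distribution and error probability, and repeated runs yield workload (time) and error estimates. All task names, times, and probabilities below are hypothetical stand-ins, not the study's data:

```python
# Minimal task-network Monte Carlo sketch (hypothetical parameters).
import random

TASKS = [  # (name, mean_s, sd_s, p_error) -- all values illustrative
    ("walk_to_canal", 40.0,  5.0, 0.000),
    ("position_tool", 25.0,  8.0, 0.002),
    ("grip_element",  15.0,  4.0, 0.005),
    ("transfer",      60.0, 12.0, 0.003),
    ("inspect",       90.0, 20.0, 0.001),
]

def simulate(trials=100_000, seed=7):
    """Return (mean completion time, probability of at least one error)."""
    rng = random.Random(seed)
    total_time, errors = 0.0, 0
    for _ in range(trials):
        t, failed = 0.0, False
        for _name, mu, sd, pe in TASKS:
            t += max(0.0, rng.gauss(mu, sd))   # sampled task duration
            failed = failed or rng.random() < pe
        total_time += t
        errors += failed
    return total_time / trials, errors / trials

mean_time, p_any_error = simulate()
print(round(mean_time, 1), round(p_any_error, 4))
```

A real task-network model would also represent branching, concurrency, and workload channels (visual, auditory, cognitive, motor), which is what dedicated discrete-event simulation tools add over this serial sketch.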

  10. The development and psychometric analysis of the Chinese HIV-Related Fatigue Scale.

    PubMed

    Li, Su-Yin; Wu, Hua-Shan; Barroso, Julie

    2016-04-01

    To develop a Chinese version of the human immunodeficiency virus-related Fatigue Scale and examine its reliability and validity. Fatigue is found in more than 70% of people infected with human immunodeficiency virus. However, a scale to assess fatigue in human immunodeficiency virus-positive people has not yet been developed for use in Chinese-speaking countries. A methodologic study involving instrument development and psychometric evaluation was used. The human immunodeficiency virus-related Fatigue Scale was examined through a two-step procedure: (1) translation and back translation and (2) psychometric analysis. A sample of 142 human immunodeficiency virus-positive patients was recruited from the Infectious Disease Outpatient Clinic in central Taiwan. Their fatigue data were analysed with Cronbach's α for internal consistency. Two weeks later, the data of a random sample of 28 patients from the original 142 were analysed for test-retest reliability. The correlation between the World Health Organization Quality of Life Assessment-Human Immunodeficiency Virus and the Chinese version of the human immunodeficiency virus-related Fatigue Scale was analysed for concurrent validity. The Chinese version of the human immunodeficiency virus-related Fatigue Scale scores of human immunodeficiency virus-positive patients with highly active antiretroviral therapy and those without were compared to demonstrate construct validity. The internal consistency and test-retest reliability of the Chinese version of the human immunodeficiency virus-related Fatigue Scale were 0·97 and 0·686, respectively. In regard to concurrent validity, a negative correlation was found between the scores of the Chinese version of the human immunodeficiency virus-related Fatigue Scale and the World Health Organization Quality of Life Assessment-Human Immunodeficiency Virus. 
Additionally, the Chinese version of the human immunodeficiency virus-related Fatigue Scale could be used to effectively distinguish fatigue differences between the human immunodeficiency virus-positive patients with highly active antiretroviral therapy and those without. The Chinese version of the human immunodeficiency virus-related Fatigue Scale presents good reliability and validity through a robust psychometric analysis. This scale can be appropriately applied to human immunodeficiency virus-positive patients by clinical staff and case managers in Chinese-speaking countries. The Chinese version of the human immunodeficiency virus-related Fatigue Scale is an effective and comprehensive tool that can help clinical professionals measure the frequency, strength and impact on the quality of life of fatigue in Chinese human immunodeficiency virus-positive patients. © 2016 John Wiley & Sons Ltd.
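
The internal-consistency figure reported above (0.97) is a Cronbach's alpha. As a minimal illustration of how such a coefficient is computed from item-level scores, consider the invented data below (not the study's data):

```python
# Cronbach's alpha from an item-by-respondent score matrix (made-up data).

def cronbach_alpha(items):
    """items[i][j] = score of respondent j on item i."""
    k = len(items)                    # number of items
    n = len(items[0])                 # number of respondents

    def var(xs):                      # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_var_sum = sum(var(item) for item in items)
    totals = [sum(items[i][j] for i in range(k)) for j in range(n)]
    return (k / (k - 1)) * (1 - item_var_sum / var(totals))

scores = [
    [3, 4, 2, 5, 4],   # item 1 across five respondents
    [3, 5, 2, 4, 4],   # item 2
    [2, 4, 3, 5, 3],   # item 3
]
print(round(cronbach_alpha(scores), 3))  # → 0.871
```

Alpha near 1 indicates that the items move together across respondents, which is the property the scale's 0.97 figure reports.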

  11. Lessons Learned from Dependency Usage in HERA: Implications for THERP-Related HRA Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    April M. Whaley; Ronald L. Boring; Harold S. Blackman

    Dependency occurs when the probability of success or failure on one action changes the probability of success or failure on a subsequent action. Dependency may serve as a modifier on the human error probabilities (HEPs) for successive actions in human reliability analysis (HRA) models. Discretion should be employed when determining whether or not a dependency calculation is warranted: dependency should not be assigned without strongly grounded reasons. Human reliability analysts may sometimes assign dependency in cases where it is unwarranted. This inappropriate assignment is attributed to a lack of clear guidance to encompass the range of scenarios human reliability analysts are addressing. Inappropriate assignment of dependency produces inappropriately elevated HEP values. Lessons learned about dependency usage in the Human Event Repository and Analysis (HERA) system may provide clarification and guidance for analysts using first-generation HRA methods. This paper presents the HERA approach to dependency assessment and discusses considerations for dependency usage in HRA, including the cognitive basis for dependency, direction for determining when dependency should be assessed, considerations for determining the dependency level, temporal issues to consider when assessing dependency (e.g., considering task sequence versus overall event sequence, and dependency over long periods of time), and diagnosis and action influences on dependency.
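
In THERP-related methods, the dependency modifier referred to above is applied through the THERP (NUREG/CR-1278) conditional-probability formulas, which raise the HEP of a subsequent action according to the assessed dependency level. A minimal sketch of those formulas, showing why an unwarranted dependency assignment inflates HEPs so sharply:

```python
# THERP conditional HEP formulas by dependency level (NUREG/CR-1278).

DEPENDENCY_FORMULAS = {
    "zero":     lambda p: p,                   # no adjustment
    "low":      lambda p: (1 + 19 * p) / 20,
    "moderate": lambda p: (1 + 6 * p) / 7,
    "high":     lambda p: (1 + p) / 2,
    "complete": lambda p: 1.0,                 # failure is certain
}

def conditional_hep(basic_hep, level):
    """HEP of a subsequent action given the assessed dependency level."""
    return DEPENDENCY_FORMULAS[level](basic_hep)

# A basic HEP of 0.001 rises sharply as assessed dependency increases:
for level in ("zero", "low", "moderate", "high", "complete"):
    print(level, round(conditional_hep(0.001, level), 4))
```

Even "low" dependency lifts a 0.001 HEP to about 0.051, a fifty-fold increase, which is why the paper stresses that dependency should not be assigned without strongly grounded reasons.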

  12. Integration of Human Reliability Analysis Models into the Simulation-Based Framework for the Risk-Informed Safety Margin Characterization Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boring, Ronald; Mandelli, Diego; Rasmussen, Martin

    2016-06-01

    This report presents an application of a computation-based human reliability analysis (HRA) framework called the Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER). HUNTER has been developed not as a standalone HRA method but rather as a framework that ties together different HRA methods to model dynamic risk of human activities as part of an overall probabilistic risk assessment (PRA). While we have adopted particular methods to build an initial model, the HUNTER framework is meant to be intrinsically flexible to new pieces that achieve particular modeling goals. In the present report, the HUNTER implementation has the following goals:

    • Integration with a high fidelity thermal-hydraulic model capable of modeling nuclear power plant behaviors and transients
    • Consideration of a PRA context
    • Incorporation of a solid psychological basis for operator performance
    • Demonstration of a functional dynamic model of a plant upset condition and appropriate operator response

    This report outlines these efforts and presents the case study of a station blackout scenario to demonstrate the various modules developed to date under the HUNTER research umbrella.

  13. Reliability analysis of a wastewater treatment plant using fault tree analysis and Monte Carlo simulation.

    PubMed

    Taheriyoun, Masoud; Moradinejad, Saber

    2015-01-01

    The reliability of a wastewater treatment plant is a critical issue when the effluent is reused or discharged to water resources. The main factors affecting the performance of a wastewater treatment plant are the variation of the influent, inherent variability in the treatment processes, deficiencies in design, mechanical equipment, and operational failures. Thus, meeting the established reuse/discharge criteria requires assessment of plant reliability. Among the many techniques developed in system reliability analysis, fault tree analysis (FTA) is one of the most popular and efficient methods. FTA is a top-down, deductive failure analysis in which an undesired state of a system is analyzed. In this study, the problem of reliability was studied at the Tehran West Town wastewater treatment plant. This plant is a conventional activated sludge process, and the effluent is reused in landscape irrigation. The fault tree diagram was established with the violation of the allowable effluent BOD as the top event, and the deficiencies of the system were identified based on the developed model. Some basic events are operator's mistakes, physical damage, and design problems. The analytical methods are minimal cut sets (based on numerical probability) and Monte Carlo simulation. Basic event probabilities were calculated according to available data and experts' opinions. The results showed that human factors, especially human error, had a great effect on top event occurrence. The mechanical, climate, and sewer system factors were in the subsequent tier. The literature shows that FTA has seldom been applied in past wastewater treatment plant (WWTP) risk analysis studies. Thus, the FTA model developed in this study considerably improves the insight into causal failure analysis of a WWTP. It provides an efficient tool for WWTP operators and decision makers to achieve the standard limits in wastewater reuse and discharge to the environment.
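
The two quantification routes this study combines, minimal cut sets and Monte Carlo simulation, can be sketched as follows. The basic events, probabilities, and cut sets below are hypothetical, not the plant's values:

```python
# Top-event probability from minimal cut sets vs. Monte Carlo estimate.
# Basic events and cut sets are hypothetical illustrations.
import math
import random

basic_events = {"operator_error": 0.02, "pump_failure": 0.005,
                "sensor_fault": 0.01, "design_flaw": 0.001}

# Each minimal cut set is a smallest set of basic events whose joint
# occurrence causes the top event (e.g., effluent BOD violation).
cut_sets = [{"operator_error", "sensor_fault"},
            {"pump_failure"},
            {"design_flaw", "operator_error"}]

def top_event_rare_event(p, cuts):
    """Rare-event approximation: sum of independent cut-set probabilities."""
    return sum(math.prod(p[e] for e in c) for c in cuts)

def top_event_monte_carlo(p, cuts, trials=200_000, seed=1):
    """Estimate by sampling which basic events fail on each trial."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        failed = {e for e, pe in p.items() if rng.random() < pe}
        if any(c <= failed for c in cuts):
            hits += 1
    return hits / trials

print(round(top_event_rare_event(basic_events, cut_sets), 5))  # → 0.00522
print(round(top_event_monte_carlo(basic_events, cut_sets), 5))
```

The single-event cut set (pump failure) dominates the top-event probability here, which mirrors how cut-set ranking identifies the most risk-significant deficiencies in a real FTA.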

  14. Defining Human Failure Events for Petroleum Risk Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronald L. Boring; Knut Øien

    2014-06-01

    In this paper, an identification and description of barriers and human failure events (HFEs) for human reliability analysis (HRA) is performed. The barriers, called target systems, are identified from risk significant accident scenarios represented as defined situations of hazard and accident (DSHAs). This report serves as the foundation for further work to develop petroleum HFEs compatible with the SPAR-H method and intended for reuse in future HRAs.

  15. Application of objective clinical human reliability analysis (OCHRA) in assessment of technical performance in laparoscopic rectal cancer surgery.

    PubMed

    Foster, J D; Miskovic, D; Allison, A S; Conti, J A; Ockrim, J; Cooper, E J; Hanna, G B; Francis, N K

    2016-06-01

    Laparoscopic rectal resection is technically challenging, with outcomes dependent upon technical performance. No robust objective assessment tool exists for laparoscopic rectal resection surgery. This study aimed to investigate the application of the objective clinical human reliability analysis (OCHRA) technique for assessing technical performance of laparoscopic rectal surgery and explore the validity and reliability of this technique. Laparoscopic rectal cancer resection operations were described in the format of a hierarchical task analysis. Potential technical errors were defined. The OCHRA technique was used to identify technical errors enacted in videos of twenty consecutive laparoscopic rectal cancer resection operations from a single site. The procedural task, spatial location, and circumstances of all identified errors were logged. Clinical validity was assessed through correlation with clinical outcomes; reliability was assessed by test-retest. A total of 335 execution errors were identified, with a median of 15 per operation. More errors were observed during pelvic tasks than during abdominal tasks (p < 0.001). Within the pelvis, more errors were observed during dissection on the right side than on the left (p = 0.03). Test-retest confirmed reliability (r = 0.97, p < 0.001). A significant correlation was observed between error frequency and mesorectal specimen quality (rs = 0.52, p = 0.02) and with blood loss (rs = 0.609, p = 0.004). OCHRA offers a valid and reliable method for evaluating technical performance of laparoscopic rectal surgery.

  16. A Passive System Reliability Analysis for a Station Blackout

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunett, Acacia; Bucknor, Matthew; Grabaskas, David

    2015-05-03

    The latest iterations of advanced reactor designs have included increased reliance on passive safety systems to maintain plant integrity during unplanned sequences. While these systems are advantageous in reducing the reliance on human intervention and availability of power, the phenomenological foundations on which these systems are built require a novel approach to a reliability assessment. Passive systems possess the unique ability to fail functionally without failing physically, a result of their explicit dependency on existing boundary conditions that drive their operating mode and capacity. Argonne National Laboratory is performing ongoing analyses that demonstrate various methodologies for the characterization of passive system reliability within a probabilistic framework. Two reliability analysis techniques are utilized in this work. The first approach, the Reliability Method for Passive Systems, provides a mechanistic technique employing deterministic models and conventional static event trees. The second approach, a simulation-based technique, utilizes discrete dynamic event trees to treat time-dependent phenomena during scenario evolution. For this demonstration analysis, both reliability assessment techniques are used to analyze an extended station blackout in a pool-type sodium fast reactor (SFR) coupled with a reactor cavity cooling system (RCCS). This work demonstrates the entire process of a passive system reliability analysis, including identification of important parameters and failure metrics, treatment of uncertainties and analysis of results.

  17. Development of an Integrated Human Factors Toolkit

    NASA Technical Reports Server (NTRS)

    Resnick, Marc L.

    2003-01-01

    An effective integration of human abilities and limitations is crucial to the success of all NASA missions. The Integrated Human Factors Toolkit facilitates this integration by assisting system designers and analysts to select the human factors tools that are most appropriate for the needs of each project. The HF Toolkit contains information about a broad variety of human factors tools addressing human requirements in the physical, information processing and human reliability domains. Analysis of each tool includes consideration of the most appropriate design stage, the amount of expertise in human factors that is required, the amount of experience with the tool and the target job tasks that are needed, and other factors that are critical for successful use of the tool. The benefits of the Toolkit include improved safety, reliability and effectiveness of NASA systems throughout the agency. This report outlines the initial stages of development for the Integrated Human Factors Toolkit.

  18. Analysis Testing of Sociocultural Factors Influence on Human Reliability within Sociotechnical Systems: The Algerian Oil Companies.

    PubMed

    Laidoune, Abdelbaki; Rahal Gharbi, Med El Hadi

    2016-09-01

    The influence of sociocultural factors on human reliability within open sociotechnical systems is highlighted. The design of such systems is enhanced by experience feedback. The study was focused on a survey related to the observation of working cases, and on the processing of incident/accident statistics and semistructured interviews in the qualitative part. In order to consolidate the study approach, we considered a schedule for the purpose of standard statistical measurements. We tried to be unbiased by supporting an exhaustive list of all worker categories, including age, sex, educational level, prescribed task, accountability level, etc. The survey was reinforced by a schedule distributed to 300 workers belonging to two oil companies. This schedule comprises 30 items related to six main factors that influence human reliability. Qualitative observations and schedule data processing showed that sociocultural factors can influence operator behaviors both negatively and positively. The explored sociocultural factors influence human reliability in both qualitative and quantitative manners. The proposed model shows how reliability can be enhanced by measures such as experience feedback based on, for example, safety improvements, training, and information. Added to these are continuous system improvements that improve the sociocultural reality and reduce negative behaviors.

  19. Philosophy of ATHEANA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bley, D.C.; Cooper, S.E.; Forester, J.A.

    ATHEANA, a second-generation Human Reliability Analysis (HRA) method, integrates advances in psychology with engineering, human factors, and Probabilistic Risk Analysis (PRA) disciplines to provide an HRA quantification process and PRA modeling interface that can accommodate and represent human performance in real nuclear power plant events. The method uses the characteristics of serious accidents identified through retrospective analysis of serious operational events to set priorities in a search process for significant human failure events, unsafe acts, and error-forcing context (unfavorable plant conditions combined with negative performance-shaping factors). ATHEANA has been tested in a demonstration project at an operating pressurized water reactor.

  20. Twenty-fifth water reactor safety information meeting: Proceedings. Volume 2: Human reliability analysis and human performance evaluation; Technical issues related to rulemakings; Risk-informed, performance-based initiatives; High burn-up fuel research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Monteleone, S.

    1998-03-01

    This three-volume report contains papers presented at the conference. The papers are printed in the order of their presentation in each session and describe progress and results of programs in nuclear safety research conducted in this country and abroad. Foreign participation in the meeting included papers presented by researchers from France, Japan, Norway, and Russia. The titles of the papers and the names of the authors have been updated and may differ from those that appeared in the final program of the meeting. This volume contains the following: (1) human reliability analysis and human performance evaluation; (2) technical issues related to rulemakings; (3) risk-informed, performance-based initiatives; and (4) high burn-up fuel research. Selected papers have been indexed separately for inclusion in the Energy Science and Technology Database.

  1. Temporal uncertainty analysis of human errors based on interrelationships among multiple factors: a case of Minuteman III missile accident.

    PubMed

    Rong, Hao; Tian, Jin; Zhao, Tingdi

    2016-01-01

    In traditional approaches to human reliability assessment (HRA), the definition of the error producing conditions (EPCs) and the supporting guidance are such that some of the conditions (especially organizational or managerial conditions) can hardly be included; the analysis is thus incomplete and does not reflect the temporal trend of human reliability. A method based on system dynamics (SD), which highlights interrelationships among technical and organizational aspects that may contribute to human errors, is presented to facilitate quantitatively estimating the human error probability (HEP) and its related variables as they change over a long period of time. Taking the Minuteman III missile accident of 2008 as a case, the proposed HRA method is applied to assess HEP during missile operations over 50 years by analyzing the interactions among the variables involved in human-related risks; the critical factors are also determined in terms of the impact that the variables have on risks in different time periods. It is indicated that both technical and organizational aspects should be focused on to minimize human errors in the long run. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  2. Attribute Ratings and Profiles of the Job Elements of the Position Analysis Questionnaire (PAQ).

    ERIC Educational Resources Information Center

    Marquardt, Lloyd D.; McCormick, Ernest J.

    The primary purpose of this study was to obtain estimates of the human attribute requirements of the job elements of the Position Analysis Questionnaire (PAQ). A secondary purpose was to explore the reliability of job-related ratings as a function of the number of raters. A taxonomy of 76 human attributes was used and ratings of the relevance of…

  3. Sex determination of human remains from peptides in tooth enamel.

    PubMed

    Stewart, Nicolas Andre; Gerlach, Raquel Fernanda; Gowland, Rebecca L; Gron, Kurt J; Montgomery, Janet

    2017-12-26

    The assignment of biological sex to archaeological human skeletons is a fundamental requirement for the reconstruction of the human past. It is conventionally and routinely performed on adults using metric analysis and morphological traits arising from postpubertal sexual dimorphism. A maximum accuracy of ∼95% is possible if both the cranium and os coxae are present and intact, but this is seldom achievable for all skeletons. Furthermore, for infants and juveniles, there are no reliable morphological methods for sex determination without resorting to DNA analysis, which requires good DNA survival and is time-consuming. Consequently, sex determination of juvenile remains is rarely undertaken, and a dependable and expedient method that can correctly assign biological sex to human remains of any age is highly desirable. Here we present a method for sex determination of human remains by means of a minimally destructive surface acid etching of tooth enamel and subsequent identification of sex chromosome-linked isoforms of amelogenin, an enamel-forming protein, by nanoflow liquid chromatography mass spectrometry. Tooth enamel is the hardest tissue in the human body and survives burial exceptionally well, even when the rest of the skeleton or DNA in the organic fraction has decayed. Our method can reliably determine the biological sex of humans of any age using a body tissue that is difficult to cross-contaminate and is most likely to survive. The application of this method will make sex determination of adults and, for the first time, juveniles a reliable and routine activity in future bioarcheological and medico-legal science contexts. Copyright © 2017 the Author(s). Published by PNAS.

  4. Evaluation of a Human Factors Analysis and Classification System as used by simulated mishap boards.

    PubMed

    O'Connor, Paul; Walker, Peter

    2011-01-01

    The reliability of the Department of Defense Human Factors Analysis and Classification System (DOD-HFACS) has been examined when used by individuals working alone to classify the causes of summary, or partial, information about a mishap. However, following an actual mishap a team of investigators would work together to gather and analyze a large amount of information before identifying the causal factors and coding them with DOD-HFACS. There were 204 military Aviation Safety Officer students who were divided into 30 groups. Each group was provided with evidence collected from one of two military aviation mishaps. DOD-HFACS was used to classify the mishap causal factors. Averaged across the two mishaps, acceptable levels of reliability were only achieved for 56.9% of nanocodes. There were high levels of agreement regarding the factors that did not contribute to the incident (a mean agreement of 50% or greater between groups for 91.0% of unselected nanocodes); the level of agreement on the factors that did cause the incident as classified using DOD-HFACS were low (a mean agreement of 50% or greater between the groups for 14.6% of selected nanocodes). Despite using teams to carry out the classification, the findings from this study are consistent with other studies of DOD-HFACS reliability with individuals. It is suggested that in addition to simplifying DOD-HFACS itself, consideration should be given to involving a human factors/organizational psychologist in mishap investigations to ensure the human factors issues are identified and classified in a consistent and reliable manner.
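
The agreement statistic this study relies on (the share of nanocodes reaching at least 50% agreement across analysis groups) can be illustrated in the spirit of the method with a small sketch. The nanocode labels and group selections below are invented:

```python
# Between-group agreement on selected causal codes (invented data).

def agreement_summary(selections):
    """selections[g] = set of nanocodes chosen by analysis group g.

    Returns (per-code selection fraction, share of codes with >= 50%
    agreement among the groups that analyzed the mishap).
    """
    all_codes = set().union(*selections)
    n = len(selections)
    share = {c: sum(c in s for s in selections) / n for c in all_codes}
    reliable = [c for c, f in share.items() if f >= 0.5]
    return share, len(reliable) / len(all_codes)

groups = [
    {"AE101", "PC204", "OR301"},   # codes chosen by group 1
    {"AE101", "PC204"},            # group 2
    {"AE101", "SV402"},            # group 3
    {"AE101", "PC204", "SV402"},   # group 4
]
share, frac = agreement_summary(groups)
print(share["AE101"], round(frac, 2))  # → 1.0 0.75
```

As the study notes, agreement on *unselected* codes is typically much higher than on selected ones, since most of a large taxonomy goes unused in any single mishap; a full analysis would report both figures separately.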

  5. Bridging Human Reliability Analysis and Psychology, Part 2: A Cognitive Framework to Support HRA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    April M. Whaley; Stacey M. L. Hendrickson; Ronald L. Boring

    This is the second of two papers that discuss the literature review conducted as part of the U.S. Nuclear Regulatory Commission (NRC) effort to develop a hybrid human reliability analysis (HRA) method in response to Staff Requirements Memorandum (SRM) SRM-M061020. This review was conducted with the goal of strengthening the technical basis within psychology, cognitive science and human factors for the hybrid HRA method being proposed. An overview of the literature review approach and high-level structure is provided in the first paper, whereas this paper presents the results of the review. The psychological literature review encompassed research spanning the entirety of human cognition and performance, and consequently produced an extensive list of psychological processes, mechanisms, and factors that contribute to human performance. To make sense of this large amount of information, the results of the literature review were organized into a cognitive framework that identifies causes of failure of macrocognition in humans, and connects those proximate causes to psychological mechanisms and performance influencing factors (PIFs) that can lead to the failure. This cognitive framework can serve as a tool to inform HRA. Beyond this, however, the cognitive framework has the potential to also support addressing human performance issues identified in Human Factors applications.

  6. A Mid-Layer Model for Human Reliability Analysis: Understanding the Cognitive Causes of Human Failure Events

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stacey M. L. Hendrickson; April M. Whaley; Ronald L. Boring

    The Office of Nuclear Regulatory Research (RES) is sponsoring work in response to a Staff Requirements Memorandum (SRM) directing an effort to establish a single human reliability analysis (HRA) method for the agency or guidance for the use of multiple methods. As part of this effort an attempt to develop a comprehensive HRA qualitative approach is being pursued. This paper presents a draft of the method’s middle layer, a part of the qualitative analysis phase that links failure mechanisms to performance shaping factors. Starting with a Crew Response Tree (CRT) that has identified human failure events, analysts identify potential failure mechanisms using the mid-layer model. The mid-layer model presented in this paper traces the identification of the failure mechanisms using the Information-Diagnosis/Decision-Action (IDA) model and cognitive models from the psychological literature. Each failure mechanism is grouped according to a phase of IDA. Under each phase of IDA, the cognitive models help identify the relevant performance shaping factors for the failure mechanism. The use of IDA and cognitive models can be traced through fault trees, which provide a detailed complement to the CRT.

  7. A mid-layer model for human reliability analysis: understanding the cognitive causes of human failure events.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Song-Hua; Chang, James Y. H.; Boring, Ronald L.

    2010-03-01

    The Office of Nuclear Regulatory Research (RES) at the US Nuclear Regulatory Commission (USNRC) is sponsoring work in response to a Staff Requirements Memorandum (SRM) directing an effort to establish a single human reliability analysis (HRA) method for the agency or guidance for the use of multiple methods. As part of this effort an attempt to develop a comprehensive HRA qualitative approach is being pursued. This paper presents a draft of the method's middle layer, a part of the qualitative analysis phase that links failure mechanisms to performance shaping factors. Starting with a Crew Response Tree (CRT) that has identified human failure events, analysts identify potential failure mechanisms using the mid-layer model. The mid-layer model presented in this paper traces the identification of the failure mechanisms using the Information-Diagnosis/Decision-Action (IDA) model and cognitive models from the psychological literature. Each failure mechanism is grouped according to a phase of IDA. Under each phase of IDA, the cognitive models help identify the relevant performance shaping factors for the failure mechanism. The use of IDA and cognitive models can be traced through fault trees, which provide a detailed complement to the CRT.

  8. Human Factors in Financial Trading

    PubMed Central

    Leaver, Meghan; Reader, Tom W.

    2016-01-01

    Objective This study tests the reliability of a system (FINANS) to collect and analyze incident reports in the financial trading domain and is guided by a human factors taxonomy used to describe error in the trading domain. Background Research indicates the utility of applying human factors theory to understand error in finance, yet empirical research is lacking. We report on the development of the first system for capturing and analyzing human factors–related issues in operational trading incidents. Method In the first study, 20 incidents are analyzed by an expert user group against a referent standard to establish the reliability of FINANS. In the second study, 750 incidents are analyzed using distribution, mean, pathway, and associative analysis to describe the data. Results Kappa scores indicate that categories within FINANS can be reliably used to identify and extract data on human factors–related problems underlying trading incidents. Approximately 1% of trades (n = 750) lead to an incident. Slip/lapse (61%), situation awareness (51%), and teamwork (40%) were found to be the most common problems underlying incidents. For the most serious incidents, problems in situation awareness and teamwork were most common. Conclusion We show that (a) experts in the trading domain can reliably and accurately code human factors in incidents, (b) 1% of trades incur error, and (c) poor teamwork skills and situation awareness underpin the most critical incidents. Application This research provides data crucial for ameliorating risk within financial trading organizations, with implications for regulation and policy. PMID:27142394
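The kappa-based coding agreement reported above can be illustrated with a small Cohen's kappa computation; the incident codes below are invented for illustration, not FINANS data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    # Chance agreement from each rater's marginal category frequencies
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Two raters coding ten hypothetical incidents into human-factors categories
a = ["slip", "slip", "team", "sa", "slip", "team", "sa", "slip", "team", "sa"]
b = ["slip", "slip", "team", "sa", "slip", "team", "slip", "slip", "team", "sa"]
kappa = cohens_kappa(a, b)
```

With nine of ten codes matching, kappa lands near 0.85, i.e. "almost perfect" agreement on the usual benchmarks.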

  9. Modeling of human operator dynamics in simple manual control utilizing time series analysis [tracking (position)]

    NASA Technical Reports Server (NTRS)

    Agarwal, G. C.; Osafo-Charles, F.; Oneill, W. D.; Gottlieb, G. L.

    1982-01-01

    Time series analysis is applied to model human operator dynamics in pursuit and compensatory tracking modes. The normalized residual criterion is used as a one-step analytical tool to encompass the processes of identification, estimation, and diagnostic checking. A parameter constraining technique is introduced to develop more reliable models of human operator dynamics. The human operator is adequately modeled by a second order dynamic system both in pursuit and compensatory tracking modes. In comparing the data sampling rates, 100 msec between samples is adequate and is shown to provide better results than 200 msec sampling. The residual power spectrum and eigenvalue analysis show that the human operator is not a generator of periodic characteristics.
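The second-order identification described above can be sketched as an ordinary least-squares ARX fit; the simulated operator dynamics and parameter values below are illustrative assumptions, not the paper's data:

```python
import numpy as np

def fit_arx2(u, y):
    """Least-squares fit of y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2]."""
    rows, targets = [], []
    for k in range(2, len(y)):
        rows.append([y[k - 1], y[k - 2], u[k - 1], u[k - 2]])
        targets.append(y[k])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta  # [a1, a2, b1, b2]

# Simulate a known, stable second-order system and recover its parameters
rng = np.random.default_rng(0)
u = rng.standard_normal(500)                  # tracking input
y = np.zeros(500)
for k in range(2, 500):
    y[k] = 1.2 * y[k - 1] - 0.5 * y[k - 2] + 0.8 * u[k - 1] + 0.3 * u[k - 2]
a1, a2, b1, b2 = fit_arx2(u, y)
```

In this noise-free case the fit recovers the true coefficients; with real operator data, residual diagnostics of the kind the paper describes decide whether the second-order structure is adequate.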

  10. Noncontact measurement of heart rate using facial video illuminated under natural light and signal weighted analysis.

    PubMed

    Yan, Yonggang; Ma, Xiang; Yao, Lifeng; Ouyang, Jianfei

    2015-01-01

    Non-contact, remote measurement of vital physical signals is important for reliable and comfortable physiological self-assessment. We present a novel optical imaging-based method to measure such signals. Using a digital camera and ambient light, cardiovascular pulse waves were extracted from color video of the human face, and vital physiological parameters such as heart rate were measured using a proposed signal-weighted analysis method. The measured heart rates were consistent with those measured simultaneously by reference technologies (r=0.94, p<0.001 for HR). The results show that the imaging-based method is suitable for measuring physiological parameters and provides a reliable and comfortable measurement mode. The study lays a physical foundation for measuring multiple physiological parameters of humans noninvasively.
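A minimal sketch of the frequency-domain heart-rate estimation idea, assuming a per-frame facial color trace has already been extracted (the paper's signal-weighted analysis step is not reproduced here, and the trace below is synthetic):

```python
import numpy as np

def estimate_heart_rate(signal, fps, lo=0.7, hi=3.0):
    """Estimate HR (beats/min) as the dominant FFT frequency in the cardiac band."""
    x = signal - np.mean(signal)                      # remove DC component
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    power = np.abs(np.fft.rfft(x))**2
    band = (freqs >= lo) & (freqs <= hi)              # plausible cardiac band
    peak = freqs[band][np.argmax(power[band])]
    return peak * 60.0

# Synthetic "mean green-channel" trace: 72 bpm pulse plus noise, 30 fps, 20 s
fps, dur, bpm = 30.0, 20.0, 72.0
t = np.arange(0, dur, 1.0 / fps)
trace = (0.02 * np.sin(2 * np.pi * (bpm / 60.0) * t)
         + 0.005 * np.random.default_rng(1).standard_normal(len(t)))
hr = estimate_heart_rate(trace, fps)
```

The frequency resolution here is 1/20 s = 0.05 Hz, i.e. 3 bpm; longer video windows tighten the estimate.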

  11. A Synthetic Vision Preliminary Integrated Safety Analysis

    NASA Technical Reports Server (NTRS)

    Hemm, Robert; Houser, Scott

    2001-01-01

    This report documents efforts to analyze a sample of aviation safety programs, using the LMI-developed integrated safety analysis tool to determine the change in system risk resulting from Aviation Safety Program (AvSP) technology implementation. Specifically, we have worked to modify existing system safety tools to address the safety impact of synthetic vision (SV) technology. Safety metrics include reliability, availability, and resultant hazard. This analysis of SV technology is intended to be part of a larger effort to develop a model that is capable of "providing further support to the product design and development team as additional information becomes available". The reliability analysis portion of the effort is complete and is fully documented in this report. The simulation analysis is still underway; it will be documented in a subsequent report. The specific goal of this effort is to apply the integrated safety analysis to SV technology. This report also contains a brief discussion of data necessary to expand the human performance capability of the model, as well as a discussion of human behavior and its implications for system risk assessment in this modeling environment.

  12. Counting pollen grains using readily available, free image processing and analysis software.

    PubMed

    Costa, Clayton M; Yang, Suann

    2009-10-01

    Although many methods exist for quantifying the number of pollen grains in a sample, there are few standard methods that are user-friendly, inexpensive and reliable. The present contribution describes a new method of counting pollen using readily available, free image processing and analysis software. Pollen was collected from anthers of two species, Carduus acanthoides and C. nutans (Asteraceae), then illuminated on slides and digitally photographed through a stereomicroscope. Using ImageJ (NIH), these digital images were processed to remove noise and sharpen individual pollen grains, then analysed to obtain a reliable total count of the number of grains present in the image. A macro was developed to analyse multiple images together. To assess the accuracy and consistency of pollen counting by ImageJ analysis, counts were compared with those made by the human eye. Image analysis produced pollen counts in 60 s or less per image, considerably faster than counting with the human eye (5-68 min). In addition, counts produced with the ImageJ procedure were similar to those obtained by eye. Because count parameters are adjustable, this image analysis protocol may be used for many other plant species. Thus, the method provides a quick, inexpensive and reliable solution to counting pollen from digital images, not only reducing the chance of error but also substantially lowering labour requirements.
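The threshold-and-count procedure can be sketched in Python without ImageJ; the synthetic image and threshold value below are illustrative, and a real analysis would also filter by particle size, as ImageJ's Analyze Particles step does:

```python
import numpy as np

def count_blobs(binary):
    """Count 4-connected foreground blobs (a stand-in for pollen grains)."""
    visited = np.zeros_like(binary, dtype=bool)
    h, w = binary.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if binary[i, j] and not visited[i, j]:
                count += 1                       # new blob found
                stack = [(i, j)]
                visited[i, j] = True
                while stack:                     # flood-fill the blob
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            stack.append((ny, nx))
    return count

# Synthetic grayscale "slide": three bright grains on a dark background
img = np.zeros((20, 20))
img[2:5, 2:5] = 200
img[10:13, 4:7] = 220
img[15:18, 14:18] = 180
mask = img > 100            # global threshold separates grains from background
n_grains = count_blobs(mask)
```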

  13. [Study of the relationship between human quality and reliability].

    PubMed

    Long, S; Wang, C; Wang, Li; Yuan, J; Liu, H; Jiao, X

    1997-02-01

    To clarify the relationship between human quality and reliability, 1925 experiments were carried out in 20 subjects to study the relationship between disposition character, digital memory, graphic memory, multi-reaction time and education level, and simulated aircraft operation. Meanwhile, the effects of task difficulty and environmental factors on human reliability were also studied. The results showed that human quality can be predicted and evaluated through experimental methods: the better the human quality, the higher the human reliability.

  14. Validation of the Turkish Cervical Cancer and Human Papilloma Virus Awareness Questionnaire.

    PubMed

    Özdemir, E; Kısa, S

    2016-09-01

    The aim of this study was to determine the validity and reliability of the 'Cervical Cancer and Human Papilloma Virus Awareness Questionnaire' among women of fertile age by adapting the scale into Turkish. Cervical cancer is the fourth most common cancer among women. Death from cervical cancer ranks third among cancer-related causes of death in women, yet cervical cancer is one of the most preventable forms of cancer. This cross-sectional study included 360 women from three family health centres between January 5 and June 25, 2014. Internal consistency analysis showed that the Kuder-Richardson 21 reliability coefficient for the first part was 0.60, and Cronbach's alpha reliability coefficient was 0.61 for the second part. The Kaiser-Meyer-Olkin value of the items on the scale was 0.712, and Bartlett's test was significant. The confirmatory factor analysis indicated that the model matched the data adequately. This study shows that the Turkish version of the instrument is a valid and reliable tool to evaluate knowledge, perceptions and preventive behaviours of women regarding human papilloma virus and cervical cancer. Nurses who work in clinical and primary care settings need to screen, detect and refer women who may be at risk of cervical cancer. © 2016 International Council of Nurses.
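The two internal-consistency coefficients reported above can be computed as follows; the score data are invented for illustration:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

def kr21(total_scores, k):
    """Kuder-Richardson 21 from total scores on k dichotomous items."""
    t = np.asarray(total_scores, dtype=float)
    m, var = t.mean(), t.var(ddof=1)
    return (k / (k - 1)) * (1.0 - m * (k - m) / (k * var))

# Perfectly consistent items give alpha = 1; the KR-21 totals are hypothetical
alpha = cronbach_alpha([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]])
rho = kr21([2, 9, 5, 8, 3, 10, 4, 7], k=10)
```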

  15. Principle of maximum entropy for reliability analysis in the design of machine components

    NASA Astrophysics Data System (ADS)

    Zhang, Yimin

    2018-03-01

    We studied the reliability of machine components with parameters that follow an arbitrary statistical distribution using the principle of maximum entropy (PME). We used PME to select the statistical distribution that best fits the available information. We also established a probability density function (PDF) and a failure probability model for the parameters of mechanical components using the concept of entropy and the PME. We obtained the first four moments of the state function for reliability analysis and design. Furthermore, we attained an estimate of the PDF with the fewest human bias factors using the PME. This function was used to calculate the reliability of the machine components, including a connecting rod, a vehicle half-shaft, a front axle, a rear axle housing, and a leaf spring, which have parameters that typically follow a non-normal distribution. Simulations were conducted for comparison. This study provides a design methodology for the reliability of mechanical components for practical engineering projects.

  16. 10 CFR 712.1 - Purpose.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability Program General Provisions § 712.1 Purpose. This part establishes the policies and procedures for a Human Reliability Program... judgment and reliability may be impaired by physical or mental/personality disorders, alcohol abuse, use of...

  17. Interobserver Reliability of the Total Body Score System for Quantifying Human Decomposition.

    PubMed

    Dabbs, Gretchen R; Connor, Melissa; Bytheway, Joan A

    2016-03-01

    Several authors have tested the accuracy of the Total Body Score (TBS) method for quantifying decomposition, but none have examined the reliability of the method as a scoring system by testing interobserver error rates. Sixteen participants used the TBS system to score 59 observation packets including photographs and written descriptions of 13 human cadavers in different stages of decomposition (postmortem interval: 2-186 days). Data analysis used a two-way random model intraclass correlation in SPSS (v. 17.0). The TBS method showed "almost perfect" agreement between observers, with average absolute correlation coefficients of 0.990 and average consistency correlation coefficients of 0.991. While the TBS method may have sources of error, scoring reliability is not one of them. Individual component scores were examined, and the influences of education and experience levels were investigated. Overall, the trunk component scores were the least concordant. Suggestions are made to improve the reliability of the TBS method. © 2016 American Academy of Forensic Sciences.
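A two-way random-model intraclass correlation of the kind used above can be sketched directly from its ANOVA decomposition; the scores below are hypothetical, and the ICC(2,1) "absolute agreement, single rater" definition is assumed:

```python
import numpy as np

def icc_2way_random(x):
    """ICC(2,1): two-way random effects, absolute agreement, single rater."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape                       # n targets (cadavers), k raters
    grand = x.mean()
    row_means = x.mean(axis=1)
    col_means = x.mean(axis=0)
    msr = k * ((row_means - grand)**2).sum() / (n - 1)          # rows (targets)
    msc = n * ((col_means - grand)**2).sum() / (k - 1)          # columns (raters)
    sse = ((x - row_means[:, None] - col_means[None, :] + grand)**2).sum()
    mse = sse / ((n - 1) * (k - 1))                             # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two raters scoring five targets, the second with a small constant bias
scores = np.array([[1.0, 1.1], [2.0, 2.1], [3.0, 3.1], [4.0, 4.1], [5.0, 5.1]])
icc = icc_2way_random(scores)
```

The constant 0.1 offset barely dents absolute agreement here (ICC just below 1), which mirrors the "almost perfect" coefficients reported above.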

  18. Space Mission Human Reliability Analysis (HRA) Project

    NASA Technical Reports Server (NTRS)

    Boyer, Roger

    2014-01-01

    The purpose of the Space Mission Human Reliability Analysis (HRA) Project is to extend current ground-based HRA risk prediction techniques to a long-duration, space-based tool. Ground-based HRA methodology has been shown to be a reasonable tool for short-duration space missions, such as Space Shuttle and lunar fly-bys. However, longer-duration deep-space missions, such as asteroid and Mars missions, will require the crew to be in space for as long as 400 to 900 days, with periods of extended autonomy and self-sufficiency. Current indications are that higher risk due to fatigue, physiological effects of extended low-gravity environments, and other factors may impact HRA predictions. For this project, Safety & Mission Assurance (S&MA) will work with Human Health & Performance (HH&P) to establish what is currently used to assess human reliability for human space programs, identify human performance factors that may be sensitive to long-duration space flight, collect available historical data, and update current tools to account for performance shaping factors believed to be important to such missions. This effort will also contribute data to the Human Performance Data Repository and influence the Space Human Factors Engineering research risks and gaps (part of the HRP Program). An accurate risk predictor mitigates Loss of Crew (LOC) and Loss of Mission (LOM). The end result will be an updated HRA model that can effectively predict risk on long-duration missions.

  19. Development of the Human Factors Skills for Healthcare Instrument: a valid and reliable tool for assessing interprofessional learning across healthcare practice settings.

    PubMed

    Reedy, Gabriel B; Lavelle, Mary; Simpson, Thomas; Anderson, Janet E

    2017-10-01

    A central feature of clinical simulation training is human factors skills, providing staff with the social and cognitive skills to cope with demanding clinical situations. Although these skills are critical to safe patient care, assessing their learning is challenging. This study aimed to develop, pilot and evaluate a valid and reliable structured instrument to assess human factors skills, which can be used pre- and post-simulation training, and is relevant across a range of healthcare professions. Through consultation with a multi-professional expert group, we developed and piloted a 39-item survey with 272 healthcare professionals attending training courses across two large simulation centres in London, one specialising in acute care and one in mental health, both serving healthcare professionals working across acute and community settings. Following psychometric evaluation, the final 12-item instrument was evaluated with a second sample of 711 trainees. Exploratory factor analysis revealed a 12-item, one-factor solution with good internal consistency (α=0.92). The instrument had discriminant validity, with newly qualified trainees scoring significantly lower than experienced trainees (t(98)=4.88, p<0.001), and was sensitive to change following training in acute and mental health settings, across professional groups (p<0.001). Confirmatory factor analysis revealed an adequate model fit (RMSEA=0.066). The Human Factors Skills for Healthcare Instrument provides a reliable and valid method of assessing trainees' human factors skills self-efficacy across acute and mental health settings. This instrument has the potential to improve the assessment and evaluation of human factors skills learning in both uniprofessional and interprofessional clinical simulation training.

  20. Multi-Unit Considerations for Human Reliability Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    St. Germain, S.; Boring, R.; Banaseanu, G.

    This paper uses the insights from the Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H) methodology to help identify human actions currently modeled in the single unit PSA that may need to be modified to account for additional challenges imposed by a multi-unit accident as well as identify possible new human actions that might be modeled to more accurately characterize multi-unit risk. In identifying these potential human action impacts, the use of the SPAR-H strategy to include both errors in diagnosis and errors in action is considered as well as identifying characteristics of a multi-unit accident scenario that may impact the selection of the performance shaping factors (PSFs) used in SPAR-H. The lessons learned from the Fukushima Daiichi reactor accident will be addressed to further help identify areas where improved modeling may be required. While these multi-unit impacts may require modifications to a Level 1 PSA model, it is expected to have much more importance for Level 2 modeling. There is little currently written specifically about multi-unit HRA issues. A review of related published research will be presented. While this paper cannot answer all issues related to multi-unit HRA, it will hopefully serve as a starting point to generate discussion and spark additional ideas towards the proper treatment of HRA in a multi-unit PSA.
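The SPAR-H quantification strategy referred to above can be sketched as a nominal error probability scaled by PSF multipliers. The nominal values below are the SPAR-H ones (1E-2 for diagnosis, 1E-3 for action), but the multi-unit scenario and multiplier choices are hypothetical, and SPAR-H's adjustment factor for multiple negative PSFs and its dependency treatment are omitted:

```python
# SPAR-H nominal human error probabilities by task type
NOMINAL_HEP = {"diagnosis": 1e-2, "action": 1e-3}

def spar_h_hep(task_type, psf_multipliers):
    """Adjusted HEP = nominal HEP x product of PSF multipliers, capped at 1.0."""
    hep = NOMINAL_HEP[task_type]
    for multiplier in psf_multipliers.values():
        hep *= multiplier
    return min(hep, 1.0)

# Hypothetical multi-unit accident context: extreme stress, degraded ergonomics
psfs = {"stress": 5.0, "ergonomics": 10.0, "experience": 1.0}
hep = spar_h_hep("diagnosis", psfs)
```

The point for multi-unit PSA is visible in the arithmetic: a scenario that pushes two PSFs to their degraded levels moves a 1E-2 diagnosis error toward near-certain failure.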

  1. Toward reliable characterization of functional homogeneity in the human brain: Preprocessing, scan duration, imaging resolution and computational space

    PubMed Central

    Zuo, Xi-Nian; Xu, Ting; Jiang, Lili; Yang, Zhi; Cao, Xiao-Yan; He, Yong; Zang, Yu-Feng; Castellanos, F. Xavier; Milham, Michael P.

    2013-01-01

    While researchers have extensively characterized functional connectivity between brain regions, the characterization of functional homogeneity within a region of the brain connectome is in early stages of development. Several functional homogeneity measures were proposed previously, among which regional homogeneity (ReHo) was most widely used as a measure to characterize functional homogeneity of resting state fMRI (R-fMRI) signals within a small region (Zang et al., 2004). Despite a burgeoning literature on ReHo in the field of neuroimaging brain disorders, its test–retest (TRT) reliability remains unestablished. Using two sets of public R-fMRI TRT data, we systematically evaluated the ReHo’s TRT reliability and further investigated the various factors influencing its reliability and found: 1) nuisance (head motion, white matter, and cerebrospinal fluid) correction of R-fMRI time series can significantly improve the TRT reliability of ReHo while additional removal of global brain signal reduces its reliability, 2) spatial smoothing of R-fMRI time series artificially enhances ReHo intensity and influences its reliability, 3) surface-based R-fMRI computation largely improves the TRT reliability of ReHo, 4) a scan duration of 5 min can achieve reliable estimates of ReHo, and 5) fast sampling rates of R-fMRI dramatically increase the reliability of ReHo. Inspired by these findings and seeking a highly reliable approach to exploratory analysis of the human functional connectome, we established an R-fMRI pipeline to conduct ReHo computations in both 3-dimensions (volume) and 2-dimensions (surface). PMID:23085497
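ReHo is conventionally computed as Kendall's coefficient of concordance (KCC) over the time series of a voxel and its neighbors (Zang et al., 2004); a minimal sketch, assuming continuous-valued series without rank ties:

```python
import numpy as np

def reho_kcc(timeseries):
    """Kendall's coefficient of concordance across voxel time series.

    timeseries: (n_voxels, n_timepoints) array; W lies in [0, 1],
    with higher W indicating more homogeneous local activity.
    """
    m, n = timeseries.shape
    # Rank each voxel's time series (no ties assumed for continuous data)
    ranks = np.argsort(np.argsort(timeseries, axis=1), axis=1) + 1
    rank_sums = ranks.sum(axis=0)                 # summed ranks per time point
    s = ((rank_sums - rank_sums.mean())**2).sum()
    return 12.0 * s / (m**2 * (n**3 - n))

# Three perfectly synchronized "voxels" give maximal homogeneity (W = 1)
block = np.tile(np.array([0.1, 0.5, 0.3, 0.9]), (3, 1))
w_perfect = reho_kcc(block)
w_random = reho_kcc(np.random.default_rng(0).standard_normal((27, 100)))
```

The smoothing caveat in the abstract follows directly from this definition: spatial smoothing mixes neighboring time series, inflating their rank concordance and hence W.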

  2. Optimal Measurement Conditions for Spatiotemporal EEG/MEG Source Analysis.

    ERIC Educational Resources Information Center

    Huizenga, Hilde M.; Heslenfeld, Dirk J.; Molenaar, Peter C. M.

    2002-01-01

    Developed a method to determine the required number and position of sensors for human brain electromagnetic source analysis. Studied the method through a simulation study and an empirical study on visual evoked potentials in one adult male. Results indicate the method is fast and reliable and improves source precision. (SLD)

  3. NDE reliability and probability of detection (POD) evolution and paradigm shift

    NASA Astrophysics Data System (ADS)

    Singh, Surendra

    2014-02-01

    The subject of NDE Reliability and POD has gone through multiple phases since its humble beginning in the late 1960s. This was followed by several programs including the important one nicknamed "Have Cracks - Will Travel," or in short "Have Cracks," by Lockheed Georgia Company for the US Air Force during 1974-1978. This and other studies ultimately led to a series of developments in the field of reliability and POD, starting from the introduction of fracture mechanics and Damage Tolerant Design (DTD), to the statistical framework by Berens and Hovey in 1981 for POD estimation, to MIL-HDBK-1823 (1999) and 1823A (2009). During the last decade, various groups and researchers have further studied reliability and POD using Model Assisted POD (MAPOD) and Simulation Assisted POD (SAPOD), and by applying Bayesian statistics. Each of these developments had one objective, i.e., improving the accuracy of life prediction in components, which to a large extent depends on the reliability and capability of NDE methods. Therefore, it is essential to have reliable detection and sizing of large flaws in components. Currently, POD is used for studying the reliability and capability of NDE methods, though POD data offers no absolute truth regarding NDE reliability, i.e., system capability, effects of flaw morphology, and quantifying the human factors. Furthermore, reliability and POD have often been reported as if alike in meaning, but POD is not NDE reliability. POD is a subset of NDE reliability, which consists of six phases: 1) sample selection using DOE, 2) NDE equipment setup and calibration, 3) System Measurement Evaluation (SME) including Gage Repeatability & Reproducibility (Gage R&R) and Analysis Of Variance (ANOVA), 4) NDE system capability and electronic and physical saturation, 5) acquiring and fitting data to a model, and data analysis, and 6) POD estimation. This paper provides an overview of all major POD milestones over the last several decades and discusses the rationale for using Integrated Computational Materials Engineering (ICME), MAPOD, SAPOD, and Bayesian statistics for studying controllable and non-controllable variables, including human factors, for estimating POD. Another objective is to list the gaps between "hoped for" capability and what has been validated or observed in fielded failed hardware.
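A minimal sketch of a parametric POD curve and the a90 figure of merit commonly derived from it; MIL-HDBK-1823A typically fits hit/miss or signal-response data on log flaw size, which this illustration simplifies to a logistic curve in flaw size, with hypothetical parameters:

```python
import math

def pod(a, mu, sigma):
    """Logistic POD curve: probability of detecting a flaw of size a."""
    return 1.0 / (1.0 + math.exp(-(a - mu) / sigma))

def a90(mu, sigma):
    """Flaw size detected with 90% probability, solved from pod(a) = 0.9."""
    return mu + sigma * math.log(9.0)

# Hypothetical curve: 50% detection at 1.0 mm, slope scale 0.25 mm
size_90 = a90(mu=1.0, sigma=0.25)
check = pod(size_90, mu=1.0, sigma=0.25)
```

In practice the handbook statistic of record is a90/95, the upper 95% confidence bound on a90, which additionally requires the covariance of the fitted parameters.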

  4. Human Rating the Orion Parachute System

    NASA Technical Reports Server (NTRS)

    Machin, Ricardo A.; Fisher, Timothy E.; Evans, Carol T.; Stewart, Christine E.

    2011-01-01

    Human rating begins with design. Converging on the requirements and identifying the risks as early as possible in the design process is essential. Understanding of the interaction between the recovery system and the spacecraft will in large part dictate the achievable reliability of the final design. Component and complete system full-scale flight testing is critical to assure a realistic evaluation of the performance and reliability of the parachute system. However, because testing is so often difficult and expensive, comprehensive analysis of test results and correlation to accurate modeling completes the human rating process. The National Aeronautics and Space Administration (NASA) Orion program uses parachutes to stabilize and decelerate the Crew Exploration Vehicle (CEV) spacecraft during subsonic flight in order to deliver a safe water landing. This paper describes the approach that CEV Parachute Assembly System (CPAS) will take to human rate the parachute recovery system for the CEV.

  5. Quantifying Engagement: Measuring Player Involvement in Human-Avatar Interactions

    PubMed Central

    Norris, Anne E.; Weger, Harry; Bullinger, Cory; Bowers, Alyssa

    2014-01-01

    This research investigated the merits of using an established system for rating behavioral cues of involvement in human dyadic interactions (i.e., face-to-face conversation) to measure involvement in human-avatar interactions. Gameplay audio-video and self-report data from a Feasibility Trial and Free Choice study of an effective peer resistance skill building simulation game (DRAMA-RAMA™) were used to evaluate reliability and validity of the rating system when applied to human-avatar interactions. The Free Choice study used a revised game prototype that was altered to be more engaging. Both studies involved girls enrolled in a public middle school in Central Florida that served a predominantly Hispanic (greater than 80%), low-income student population. Audio-video data were coded by two raters, trained in the rating system. Self-report data were generated using measures of perceived realism, predictability and flow administered immediately after game play. Hypotheses for reliability and validity were supported: Reliability values mirrored those found in the human dyadic interaction literature. Validity was supported by factor analysis, significantly higher levels of involvement in Free Choice as compared to Feasibility Trial players, and correlations between involvement dimension sub scores and self-report measures. Results have implications for the science of both skill-training intervention research and game design. PMID:24748718

  6. Adapting Human Reliability Analysis from Nuclear Power to Oil and Gas Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boring, Ronald Laurids

    2015-09-01

    Human reliability analysis (HRA), as currently used in risk assessments, largely derives its methods and guidance from application in the nuclear energy domain. While there are many similarities between nuclear energy and other safety critical domains such as oil and gas, there remain clear differences. This paper provides an overview of HRA state of the practice in nuclear energy and then describes areas where refinements to the methods may be necessary to capture the operational context of oil and gas. Many key distinctions important to nuclear energy HRA, such as Level 1 vs. Level 2 analysis, may prove insignificant for oil and gas applications. On the other hand, existing HRA methods may not be sensitive enough to factors like the extensive use of digital controls in oil and gas. This paper provides an overview of these considerations to assist in the adaptation of existing nuclear-centered HRA methods to the petroleum sector.

  7. On modeling human reliability in space flights - Redundancy and recovery operations

    NASA Astrophysics Data System (ADS)

    Aarset, M.; Wright, J. F.

    The reliability of humans is of paramount importance to the safety of space flight systems. This paper describes why 'back-up' operators might not be the best solution, and in some cases, might even degrade system reliability. The problem associated with human redundancy calls for special treatment in reliability analyses. The concept of Standby Redundancy is adopted, and psychological and mathematical models are introduced to improve the way such problems can be estimated and handled. In the past, human reliability has practically been neglected in most reliability analyses, and, when included, the humans have been modeled as a component and treated numerically the way technical components are. This approach is not wrong in itself, but it may lead to systematic errors if too simple analogies from the technical domain are used in the modeling of human behavior. In this paper redundancy in a man-machine system will be addressed. It will be shown how simplification from the technical domain, when applied to human components of a system, may give non-conservative estimates of system reliability.
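The standby-redundancy point can be illustrated with the standard formulas for technical components; applying these directly to a human "back-up" operator is exactly the kind of simplification the paper warns may yield non-conservative estimates. The failure rate and mission time below are illustrative:

```python
import math

def active_parallel(lam, t):
    """Two active redundant units, each with exponential failure rate lam."""
    r = math.exp(-lam * t)
    return 1.0 - (1.0 - r)**2          # system fails only if both fail

def cold_standby(lam, t):
    """Cold standby pair with perfect switching (Erlang-2 survival function)."""
    return math.exp(-lam * t) * (1.0 + lam * t)

lam, t = 0.01, 100.0                   # failures per hour, 100-hour mission
r_active = active_parallel(lam, t)
r_standby = cold_standby(lam, t)
```

Cold standby beats active parallel here because the spare does not age while idle; for a human "back-up," vigilance decay and imperfect takeover violate both assumptions, which is the paper's core caution.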

  8. Sociotechnical attributes of safe and unsafe work systems.

    PubMed

    Kleiner, Brian M; Hettinger, Lawrence J; DeJoy, David M; Huang, Yuang-Hsiang; Love, Peter E D

    2015-01-01

    Theoretical and practical approaches to safety based on sociotechnical systems principles place heavy emphasis on the intersections between social-organisational and technical-work process factors. Within this perspective, work system design emphasises factors such as the joint optimisation of social and technical processes, a focus on reliable human-system performance and safety metrics as design and analysis criteria, the maintenance of a realistic and consistent set of safety objectives and policies, and regular access to the expertise and input of workers. We discuss three current approaches to the analysis and design of complex sociotechnical systems: human-systems integration, macroergonomics and safety climate. Each approach emphasises key sociotechnical systems themes, and each prescribes a more holistic perspective on work systems than do traditional theories and methods. We contrast these perspectives with historical precedents such as system safety and traditional human factors and ergonomics, and describe potential future directions for their application in research and practice. The identification of factors that can reliably distinguish between safe and unsafe work systems is an important concern for ergonomists and other safety professionals. This paper presents a variety of sociotechnical systems perspectives on intersections between social-organisational and technical-work process factors as they impact work system analysis, design and operation.

  9. Quantitative assessment of human motion using video motion analysis

    NASA Technical Reports Server (NTRS)

    Probe, John D.

    1990-01-01

    In the study of the dynamics and kinematics of the human body, a wide variety of technologies was developed. Photogrammetric techniques are well documented and are known to provide reliable positional data from recorded images. Often these techniques are used in conjunction with cinematography and videography for analysis of planar motion, and to a lesser degree three-dimensional motion. Cinematography has been the most widely used medium for movement analysis. Excessive operating costs and the lag time required for film development coupled with recent advances in video technology have allowed video based motion analysis systems to emerge as a cost effective method of collecting and analyzing human movement. The Anthropometric and Biomechanics Lab at Johnson Space Center utilizes the video based Ariel Performance Analysis System to develop data on shirt-sleeved and space-suited human performance in order to plan efficient on orbit intravehicular and extravehicular activities. The system is described.

  10. Optimization of life support systems and their systems reliability

    NASA Technical Reports Server (NTRS)

    Fan, L. T.; Hwang, C. L.; Erickson, L. E.

    1971-01-01

    The identification, analysis, and optimization of life support systems and subsystems have been investigated. For each system or subsystem considered, the procedure involves the establishment of a set of system equations (or mathematical model) based on theory and experimental evidence; the analysis and simulation of the model; the optimization of operation, control, and reliability; analysis of the sensitivity of the system based on the model; and, if possible, experimental verification of the theoretical and computational results. Research activities include: (1) modeling of air flow in a confined space; (2) review of several different gas-liquid contactors utilizing centrifugal force; (3) review of carbon dioxide reduction contactors in space vehicles and other enclosed structures; (4) application of modern optimal control theory to environmental control of confined spaces; (5) optimal control of a class of nonlinear diffusional distributed parameter systems; (6) optimization of system reliability of life support systems and subsystems; (7) modeling, simulation, and optimal control of the human thermal system; and (8) analysis and optimization of the water-vapor electrolysis cell.

  11. A human reliability based usability evaluation method for safety-critical software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boring, R. L.; Tran, T. Q.; Gertman, D. I.

    2006-07-01

    Boring and Gertman (2005) introduced a novel method that augments heuristic usability evaluation methods with the human reliability analysis method of SPAR-H. By assigning probabilistic modifiers to individual heuristics, it is possible to arrive at a usability error probability (UEP). Although this UEP is not a literal probability of error, it nonetheless provides a quantitative basis for heuristic evaluation. This method allows one to seamlessly prioritize and identify usability issues (i.e., a higher UEP requires more immediate fixes). However, the original version of this method required the usability evaluator to assign priority weights to the final UEP, thus allowing the priority of a usability issue to differ among usability evaluators. The purpose of this paper is to explore an alternative approach to standardizing the priority weighting of the UEP in an effort to improve the method's reliability.
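The quantification idea described above can be sketched as follows. This is an illustrative reconstruction, not the published SPAR-H mapping; the nominal probability and the heuristic modifiers are hypothetical values.

```python
# Illustrative SPAR-H-style quantification of heuristic evaluation:
# each violated heuristic contributes a multiplicative modifier to a
# nominal usability error probability. All numbers are hypothetical.
NOMINAL_UEP = 0.01  # hypothetical baseline usability error probability

HEURISTIC_MODIFIERS = {  # hypothetical modifiers (>1 degrades performance)
    "visibility_of_system_status": 2.0,
    "error_prevention": 5.0,
    "consistency_and_standards": 1.5,
}

def usability_error_probability(violated):
    """Combine modifiers of violated heuristics into a single UEP."""
    p = NOMINAL_UEP
    for h in violated:
        p *= HEURISTIC_MODIFIERS[h]
    return min(p, 1.0)  # a probability cannot exceed 1

issues = {"login": ["error_prevention"],
          "search": ["error_prevention", "visibility_of_system_status"]}
# Rank issues by UEP: higher values indicate more immediate fixes.
ranked = sorted(issues, key=lambda k: usability_error_probability(issues[k]),
                reverse=True)
```

The ranking step is what gives the method its prioritization property: issues violating more (or more severe) heuristics surface first.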

  12. Occupational Analysis for Human Resource Development: A Review of Utility of the Task Inventory. Research Report No. 25.

    ERIC Educational Resources Information Center

    Moore, Brian E.

    A review of the issues concerning the field of occupational analysis was undertaken in order to indicate the comparative strengths and weaknesses of the task inventory (TI). Specifically, the significance of the TI was assessed for reliability and validity, job analysis and evaluation, occupational restructuring and career ladder development, and…

  13. Resting-state test-retest reliability of a priori defined canonical networks over different preprocessing steps.

    PubMed

    Varikuti, Deepthi P; Hoffstaedter, Felix; Genon, Sarah; Schwender, Holger; Reid, Andrew T; Eickhoff, Simon B

    2017-04-01

    Resting-state functional connectivity analysis has become a widely used method for the investigation of human brain connectivity and pathology. The measurement of neuronal activity by functional MRI, however, is impeded by various nuisance signals that reduce the stability of functional connectivity. Several methods exist to address this predicament, but little consensus has yet been reached on the most appropriate approach. Given the crucial importance of reliability for the development of clinical applications, we here investigated the effect of various confound removal approaches on the test-retest reliability of functional-connectivity estimates in two previously defined functional brain networks. Our results showed that gray matter masking improved the reliability of connectivity estimates, whereas denoising based on principal components analysis reduced it. We additionally observed that refraining from using any correction for global signals provided the best test-retest reliability, but failed to reproduce anti-correlations between what have been previously described as antagonistic networks. This suggests that improved reliability can come at the expense of potentially poorer biological validity. Consistent with this, we observed that reliability was proportional to the retained variance, which presumably included structured noise, such as reliable nuisance signals (for instance, noise induced by cardiac processes). We conclude that compromises are necessary between maximizing test-retest reliability and removing variance that may be attributable to non-neuronal sources.

  14. Resting-state test-retest reliability of a priori defined canonical networks over different preprocessing steps

    PubMed Central

    Varikuti, Deepthi P.; Hoffstaedter, Felix; Genon, Sarah; Schwender, Holger; Reid, Andrew T.; Eickhoff, Simon B.

    2016-01-01

    Resting-state functional connectivity analysis has become a widely used method for the investigation of human brain connectivity and pathology. The measurement of neuronal activity by functional MRI, however, is impeded by various nuisance signals that reduce the stability of functional connectivity. Several methods exist to address this predicament, but little consensus has yet been reached on the most appropriate approach. Given the crucial importance of reliability for the development of clinical applications, we here investigated the effect of various confound removal approaches on the test-retest reliability of functional-connectivity estimates in two previously defined functional brain networks. Our results showed that grey matter masking improved the reliability of connectivity estimates, whereas de-noising based on principal components analysis reduced it. We additionally observed that refraining from using any correction for global signals provided the best test-retest reliability, but failed to reproduce anti-correlations between what have been previously described as antagonistic networks. This suggests that improved reliability can come at the expense of potentially poorer biological validity. Consistent with this, we observed that reliability was proportional to the retained variance, which presumably included structured noise, such as reliable nuisance signals (for instance, noise induced by cardiac processes). We conclude that compromises are necessary between maximizing test-retest reliability and removing variance that may be attributable to non-neuronal sources. PMID:27550015

  15. Some computational techniques for estimating human operator describing functions

    NASA Technical Reports Server (NTRS)

    Levison, W. H.

    1986-01-01

    Computational procedures for improving the reliability of human operator describing functions are described. Special attention is given to the estimation of standard errors associated with mean operator gain and phase shift as computed from an ensemble of experimental trials. This analysis pertains to experiments using sum-of-sines forcing functions. Both open-loop and closed-loop measurement environments are considered.
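The ensemble statistics described above (a mean operator gain together with its standard error across experimental trials) amount to the standard sample-mean calculation; the per-trial gains below are hypothetical values for illustration.

```python
import math

def mean_and_standard_error(values):
    """Sample mean and standard error of the mean across trials."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    return mean, math.sqrt(var / n)

# Hypothetical per-trial operator gain estimates (dB)
trial_gains_db = [6.1, 5.8, 6.4, 6.0, 5.9]
mean_gain, se_gain = mean_and_standard_error(trial_gains_db)
```

The same calculation applies to phase-shift estimates, trial by trial, at each forcing-function frequency.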

  16. Individual Differences in Human Reliability Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeffrey C. Joe; Ronald L. Boring

    2014-06-01

    While human reliability analysis (HRA) methods include uncertainty in quantification, the nominal model of human error in HRA typically assumes that operator performance does not vary significantly when operators are given the same initiating event, indicators, procedures, and training, and that any differences in operator performance are simply aleatory (i.e., random). While this assumption generally holds true when performing routine actions, variability in operator response has been observed in multiple studies, especially in complex situations that go beyond training and procedures. As such, complexity can lead to differences in operator performance (e.g., in operator understanding and decision-making). Furthermore, psychological research has shown that there are a number of known antecedents (i.e., attributable causes) that consistently contribute to observable and systematically measurable (i.e., not random) differences in behavior. This paper reviews examples of individual differences taken from operational experience and the psychological literature. The impact of these differences in human behavior and their implications for HRA are then discussed. We propose that individual differences should not be treated as aleatory, but rather as epistemic. Ultimately, by understanding the sources of individual differences, it is possible to remove some epistemic uncertainty from analyses.

  17. Human figure drawings in the evaluation of severe adolescent suicidal behavior.

    PubMed

    Zalsman, G; Netanel, R; Fischel, T; Freudenstein, O; Landau, E; Orbach, I; Weizman, A; Pfeffer, C R; Apter, A

    2000-08-01

    To evaluate the reliability of using certain indicators derived from human figure drawings to distinguish between suicidal and nonsuicidal adolescents. Ninety consecutive admissions to an adolescent inpatient unit were assessed. Thirty-nine patients were admitted because of suicidal behavior and 51 for other reasons. All subjects were given the Human Figure Drawing (HFD) test. HFD was evaluated according to the method of Pfeffer and Richman, and the degree of suicidal behavior was rated by the Child Suicide Potential Scale. The internal reliability was satisfactory. HFD indicators correlated significantly with quantitative measures of suicidal behavior; of these indicators, the evaluator's overall impression specifically enabled the prediction of suicidal behavior and the distinction between suicidal and nonsuicidal inpatients (p < .001). A group of graphic indicators derived from a discriminant analysis formed a function that correctly identified 84.6% of the suicidal and 76.6% of the nonsuicidal adolescents. Many of the items had a regressive quality. The HFD is an example of a simple projective test that may have empirical reliability. It may be useful for the assessment of severe suicidal behavior in adolescents.

  18. Toward reliable characterization of functional homogeneity in the human brain: preprocessing, scan duration, imaging resolution and computational space.

    PubMed

    Zuo, Xi-Nian; Xu, Ting; Jiang, Lili; Yang, Zhi; Cao, Xiao-Yan; He, Yong; Zang, Yu-Feng; Castellanos, F Xavier; Milham, Michael P

    2013-01-15

    While researchers have extensively characterized functional connectivity between brain regions, the characterization of functional homogeneity within a region of the brain connectome is in early stages of development. Several functional homogeneity measures have been proposed, among which regional homogeneity (ReHo) is the most widely used measure for characterizing the functional homogeneity of resting state fMRI (R-fMRI) signals within a small region (Zang et al., 2004). Despite a burgeoning literature on ReHo in the field of neuroimaging brain disorders, its test-retest (TRT) reliability remains unestablished. Using two sets of public R-fMRI TRT data, we systematically evaluated ReHo's TRT reliability, investigated the various factors influencing its reliability, and found: 1) nuisance (head motion, white matter, and cerebrospinal fluid) correction of R-fMRI time series can significantly improve the TRT reliability of ReHo, while additional removal of the global brain signal reduces its reliability; 2) spatial smoothing of R-fMRI time series artificially enhances ReHo intensity and influences its reliability; 3) surface-based R-fMRI computation largely improves the TRT reliability of ReHo; 4) a scan duration of 5 min can achieve reliable estimates of ReHo; and 5) fast sampling rates of R-fMRI dramatically increase the reliability of ReHo. Inspired by these findings and seeking a highly reliable approach to exploratory analysis of the human functional connectome, we established an R-fMRI pipeline to conduct ReHo computations in both 3 dimensions (volume) and 2 dimensions (surface). Copyright © 2012 Elsevier Inc. All rights reserved.
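ReHo as defined by Zang et al. (2004) is Kendall's coefficient of concordance (W) computed over the time series of a voxel and its neighbours. A minimal pure-Python sketch of W (using the standard formula, without tie-correction refinements) might look like:

```python
def kendalls_w(series):
    """Kendall's W for m time series of length n (average ranks for ties)."""
    m, n = len(series), len(series[0])
    ranks = []
    for s in series:
        order = sorted(range(n), key=lambda i: s[i])
        r = [0.0] * n
        i = 0
        while i < n:
            j = i
            while j + 1 < n and s[order[j + 1]] == s[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average 1-based rank of tied values
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        ranks.append(r)
    # Sum the ranks across series at each time point
    totals = [sum(r[t] for r in ranks) for t in range(n)]
    mean_total = m * (n + 1) / 2
    s = sum((t - mean_total) ** 2 for t in totals)
    return 12 * s / (m * m * (n ** 3 - n))

# A perfectly homogeneous cluster (e.g., a 3x3x3 voxel neighbourhood in
# which every time series ranks identically) yields W = 1.
identical = [[1, 3, 2, 5, 4]] * 27
w = kendalls_w(identical)
```

Higher W means the regional time series rise and fall together, which is the sense in which ReHo measures local functional homogeneity.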

  19. Ensemble variant interpretation methods to predict enzyme activity and assign pathogenicity in the CAGI4 NAGLU (Human N-acetyl-glucosaminidase) and UBE2I (Human SUMO-ligase) challenges.

    PubMed

    Yin, Yizhou; Kundu, Kunal; Pal, Lipika R; Moult, John

    2017-09-01

    CAGI (Critical Assessment of Genome Interpretation) conducts community experiments to determine the state of the art in relating genotype to phenotype. Here, we report results obtained using newly developed ensemble methods to address two CAGI4 challenges: enzyme activity for population missense variants found in NAGLU (Human N-acetyl-glucosaminidase) and random missense mutations in Human UBE2I (Human SUMO E2 ligase), assayed in a high-throughput competitive yeast complementation procedure. The ensemble methods are effective, ranked second for SUMO-ligase and third for NAGLU, according to the CAGI independent assessors. However, in common with other methods used in CAGI, there are large discrepancies between predicted and experimental activities for a subset of variants. Analysis of the structural context provides some insight into these. Post-challenge analysis shows that the ensemble methods are also effective at assigning pathogenicity for the NAGLU variants. In the clinic, providing an estimate of the reliability of pathogenic assignments is the key. We have also used the NAGLU dataset to show that ensemble methods have considerable potential for this task, and are already reliable enough for use with a subset of mutations. © 2017 Wiley Periodicals, Inc.
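The ensemble idea above can be illustrated with a simple weighted combination of per-method scores; the component scores and weights below are hypothetical, not the actual CAGI4 submission details.

```python
# Hedged sketch of an ensemble predictor: per-method scores for a variant
# are combined by a weighted mean. The actual ensemble methods submitted
# to CAGI4 are not reproduced here; all values are hypothetical.
def ensemble_score(method_scores, weights=None):
    """Combine per-method scores (e.g., predicted enzyme activity in [0, 1])."""
    if weights is None:
        weights = [1.0] * len(method_scores)
    total = sum(weights)
    return sum(s * w for s, w in zip(method_scores, weights)) / total

# Hypothetical scores from three component predictors for one NAGLU variant
variant_scores = [0.82, 0.74, 0.90]
activity = ensemble_score(variant_scores)
```

A weighted mean also makes the reliability question above concrete: the spread of the component scores around the ensemble value is one natural estimate of confidence in a pathogenicity assignment.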

  20. FRamework Assessing Notorious Contributing Influences for Error (FRANCIE): Perspective on Taxonomy Development to Support Error Reporting and Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lon N. Haney; David I. Gertman

    2003-04-01

    Beginning in the 1980s, a primary focus of human reliability analysis was estimation of human error probabilities. However, detailed qualitative modeling with comprehensive representation of contextual variables was often lacking. This was likely due to the lack of comprehensive error and performance shaping factor taxonomies, and the limited data available on observed error rates and their relationship to specific contextual variables. In the mid-1990s, Boeing, America West Airlines, NASA Ames Research Center, and INEEL partnered in a NASA-sponsored Advanced Concepts grant to assess the state of the art in human error analysis, identify future needs for human error analysis, and develop an approach addressing these needs. Identified needs included a method to identify and prioritize task and contextual characteristics affecting human reliability, as well as comprehensive taxonomies to support detailed qualitative modeling and to structure meaningful data collection efforts across domains. A result was the development of the FRamework Assessing Notorious Contributing Influences for Error (FRANCIE), with a taxonomy for airline maintenance tasks. The assignment of performance shaping factors to generic errors by experts proved valuable to qualitative modeling. Performance shaping factors and error types from such detailed approaches can be used to structure error reporting schemes. In a recent NASA Advanced Human Support Technology grant, FRANCIE was refined, and two new taxonomies for use on space missions were developed. The development, sharing, and use of error taxonomies, and the refinement of approaches for increased fidelity of qualitative modeling, are offered as a means to help direct useful data collection strategies.

  1. Human reliability assessment: tools for law enforcement

    NASA Astrophysics Data System (ADS)

    Ryan, Thomas G.; Overlin, Trudy K.

    1997-01-01

    This paper suggests ways in which human reliability analysis (HRA) can assist the United States Justice System, and more specifically law enforcement, in enhancing the reliability of the process from evidence gathering through adjudication. HRA is an analytic process of identifying, describing, quantifying, and interpreting the state of human performance, and of developing and recommending enhancements based on the results of individual HRAs. It also draws on lessons learned from compilations of several HRAs. Given the high legal standards the Justice System is bound to, human errors that might appear trivial in other venues can make the difference between a successful and unsuccessful prosecution. HRA has made a major contribution to the efficiency, favorable cost-benefit ratio, and overall success of many enterprises where humans interface with sophisticated technologies, such as the military, ground transportation, chemical and oil production, nuclear power generation, commercial aviation, and space flight. Each of these enterprises presents similar challenges to the humans responsible for executing actions and action sequences, especially where problem solving and decision making are concerned. Nowhere are humans confronted with problem solving and decision making to a greater degree than are the diverse individuals and teams responsible for arrest and adjudication in criminal proceedings. This paper concludes that, because of the parallels between the aforementioned technologies and the adjudication process, especially crime scene evidence gathering, there is reason to believe that HRA technology, developed and enhanced in other applications, can be transferred to the Justice System with minimal cost and significant payoff.

  2. Automating annotation of information-giving for analysis of clinical conversation.

    PubMed

    Mayfield, Elijah; Laws, M Barton; Wilson, Ira B; Penstein Rosé, Carolyn

    2014-02-01

    Coding of clinical communication for fine-grained features such as speech acts has produced a substantial literature. However, annotation by humans is laborious and expensive, limiting application of these methods. We aimed to show that through machine learning, computers could code certain categories of speech acts with sufficient reliability to make useful distinctions among clinical encounters. The data were transcripts of 415 routine outpatient visits of HIV patients which had previously been coded for speech acts using the Generalized Medical Interaction Analysis System (GMIAS); 50 had also been coded for larger scale features using the Comprehensive Analysis of the Structure of Encounters System (CASES). We aggregated selected speech acts into information-giving and requesting, then trained the machine to automatically annotate using logistic regression classification. We evaluated reliability by per-speech act accuracy. We used multiple regression to predict patient reports of communication quality from post-visit surveys using the patient and provider information-giving to information-requesting ratio (briefly, information-giving ratio) and patient gender. Automated coding produces moderate reliability with human coding (accuracy 71.2%, κ=0.57), with high correlation between machine and human prediction of the information-giving ratio (r=0.96). The regression significantly predicted four of five patient-reported measures of communication quality (r=0.263-0.344). The information-giving ratio is a useful and intuitive measure for predicting patient perception of provider-patient communication quality. These predictions can be made with automated annotation, which is a practical option for studying large collections of clinical encounters with objectivity, consistency, and low cost, providing greater opportunity for training and reflection for care providers.
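The information-giving ratio described above is straightforward to compute once speech acts are coded; a minimal sketch follows, with illustrative act labels in place of the actual GMIAS codes.

```python
# Minimal sketch of the information-giving ratio: information-giving acts
# divided by information-requesting acts within a visit. Labels are
# illustrative, not the GMIAS coding scheme.
def information_giving_ratio(speech_acts):
    """Ratio of information-giving acts to information-requesting acts."""
    giving = sum(1 for a in speech_acts if a == "give")
    requesting = sum(1 for a in speech_acts if a == "request")
    return giving / requesting if requesting else float("inf")

# One hypothetical visit: three giving acts, two requesting acts
visit = ["give", "give", "request", "give", "request"]
ratio = information_giving_ratio(visit)
```

Because the ratio is a simple aggregate, moderate per-act annotation accuracy can still yield a very stable visit-level measure, which is consistent with the high machine-human correlation (r=0.96) reported above.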

  3. Computed tomography-based finite element analysis to assess fracture risk and osteoporosis treatment

    PubMed Central

    Imai, Kazuhiro

    2015-01-01

    Finite element analysis (FEA) is a computational technique for structural stress analysis developed in engineering mechanics. Over the past 40 years, FEA has been applied to investigate the structural behavior of human bones. As faster computers became available, improved FEA using 3-dimensional computed tomography (CT) was developed. This CT-based finite element analysis (CT/FEA) has provided clinicians with useful data. In this review, the mechanism of CT/FEA, validation studies of CT/FEA evaluating its accuracy and reliability in human bones, and clinical application studies assessing fracture risk and the effects of osteoporosis medication are overviewed. PMID:26309819

  4. Reliability Analysis of Large Commercial Vessel Engine Room Automation Systems. Volume 1. Results

    DTIC Science & Technology

    1982-11-01

    analyzing the engine room automation systems on two steam vessels and one diesel vessel, conducting a criticality evaluation, preparing...of automated engine room systems, the effect of maintenance was also to be considered, as was the human interface and backup. Besides being...designed to replace the human element, the systems perform more efficiently than the human watchstander. But as with any system, there is no such thing as

  5. An Empirical Research on the Correlation between Human Capital and Career Success of Knowledge Workers in Enterprise

    NASA Astrophysics Data System (ADS)

    Guo, Wenchen; Xiao, Hongjun; Yang, Xi

    Human capital plays an important part in the employability of knowledge workers and is an important intangible asset of a company. This paper explores the correlation between human capital and the career success of knowledge workers. Based on a literature review, we identified a measuring tool for career success and modified it further, measuring human capital with a self-developed scale of high reliability and validity. After exploratory factor analysis, we suggest that human capital comprises four dimensions: education, work experience, learning ability, and training; career success comprises three dimensions: perceived internal competitiveness within the organization, perceived external competitiveness of the organization, and career satisfaction. The result of the empirical analysis indicates that there is a positive correlation between human capital and career success, and that human capital is an excellent predictor of career success beyond demographic variables.

  6. Reliability of the Language ENvironment Analysis system (LENA™) in European French.

    PubMed

    Canault, Mélanie; Le Normand, Marie-Thérèse; Foudil, Samy; Loundon, Natalie; Thai-Van, Hung

    2016-09-01

    In this study, we examined the accuracy of the Language ENvironment Analysis (LENA) system in European French. LENA is a digital recording device with software that facilitates the collection and analysis of audio recordings from young children, providing automated measures of the speech overheard and produced by the child. Eighteen native French-speaking children, who were divided into six age groups ranging from 3 to 48 months old, were recorded about 10-16 h per day, three days a week. A total of 324 samples (six 10-min chunks of recordings) were selected and then transcribed according to the CHAT format. Simple and mixed linear models between the LENA and human adult word count (AWC) and child vocalization count (CVC) estimates were performed, to determine to what extent the automatic and the human methods agreed. Both the AWC and CVC estimates were very reliable (r = .64 and .71, respectively) for the 324 samples. When controlling the random factors of participants and recordings, 1 h was sufficient to obtain a reliable sample. It was, however, found that two age groups (7-12 months and 13-18 months) had a significant effect on the AWC data and that the second day of recording had a significant effect on the CVC data. When noise-related factors were added to the model, only a significant effect of signal-to-noise ratio was found on the AWC data. All of these findings and their clinical implications are discussed, providing strong support for the reliability of LENA in French.
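The agreement between LENA and human estimates reported above is a correlation over paired counts; a minimal Pearson-correlation sketch with hypothetical per-sample values follows.

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length count series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-sample adult word counts: LENA vs. human transcription
lena_awc = [1200, 950, 1800, 640, 1500]
human_awc = [1150, 1010, 1750, 700, 1420]
r = pearson_r(lena_awc, human_awc)
```

In the study itself the models were fitted over 324 transcribed 10-min samples, with mixed models additionally controlling for participant and recording as random factors.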

  7. Reliability of functional and predictive methods to estimate the hip joint centre in human motion analysis in healthy adults.

    PubMed

    Kainz, Hans; Hajek, Martin; Modenese, Luca; Saxby, David J; Lloyd, David G; Carty, Christopher P

    2017-03-01

    In human motion analysis, predictive or functional methods are used to estimate the location of the hip joint centre (HJC). It has been shown that the Harrington regression equations (HRE) and the geometric sphere fit (GSF) method are the most accurate predictive and functional methods, respectively. To date, the comparative reliability of both approaches has not been assessed. The aims of this study were to (1) compare the reliability of the HRE and the GSF methods, (2) analyse the impact of the number of thigh markers used in the GSF method on reliability, (3) evaluate how alterations to the movements that comprise the functional trials impact HJC estimations using the GSF method, and (4) assess the influence of the initial guess in the GSF method on the HJC estimation. Fourteen healthy adults were tested on two occasions using a three-dimensional motion capturing system. Skin surface marker positions were acquired while participants performed quiet stance, perturbed and non-perturbed functional trials, and walking trials. Results showed that the HRE were more reliable in locating the HJC than the GSF method. However, comparison of inter-session hip kinematics during gait did not show any significant difference between the approaches. Different initial guesses in the GSF method did not result in significant differences in the final HJC location. The GSF method was sensitive to the functional trial performance, and it is therefore important to standardize the functional trial performance to ensure a repeatable estimate of the HJC when using the GSF method. Copyright © 2017 Elsevier B.V. All rights reserved.
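A GSF-style method treats thigh-marker positions (expressed relative to the pelvis) as lying on a sphere centred at the HJC. The algebraic least-squares sphere fit below is one plausible sketch of that step; it is not the paper's exact implementation, and the synthetic marker data are fabricated for the check.

```python
import math

def fit_sphere(points):
    """Algebraic least-squares sphere fit; returns (centre, radius).

    Uses |p|^2 = 2 p.c + d with d = r^2 - |c|^2, linear in (cx, cy, cz, d).
    """
    # Accumulate normal equations A^T A x = A^T b for rows [2x, 2y, 2z, 1]
    ata = [[0.0] * 4 for _ in range(4)]
    atb = [0.0] * 4
    for x, y, z in points:
        row = (2 * x, 2 * y, 2 * z, 1.0)
        rhs = x * x + y * y + z * z
        for i in range(4):
            atb[i] += row[i] * rhs
            for j in range(4):
                ata[i][j] += row[i] * row[j]
    # Gaussian elimination with partial pivoting
    for col in range(4):
        piv = max(range(col, 4), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        atb[col], atb[piv] = atb[piv], atb[col]
        for r in range(col + 1, 4):
            f = ata[r][col] / ata[col][col]
            for c in range(col, 4):
                ata[r][c] -= f * ata[col][c]
            atb[r] -= f * atb[col]
    x = [0.0] * 4
    for r in range(3, -1, -1):
        s = atb[r] - sum(ata[r][c] * x[c] for c in range(r + 1, 4))
        x[r] = s / ata[r][r]
    cx, cy, cz, d = x
    radius = math.sqrt(d + cx * cx + cy * cy + cz * cz)
    return (cx, cy, cz), radius

# Synthetic markers on two circles of a sphere of radius 0.25 m
# centred at (0.1, -0.05, 0.9) m
pts = [(0.1 + 0.25 * math.cos(t), -0.05 + 0.25 * math.sin(t), 0.9)
       for t in (0.1 * k for k in range(20))]
pts += [(0.1, -0.05 + 0.25 * math.cos(t), 0.9 + 0.25 * math.sin(t))
        for t in (0.1 * k for k in range(20))]
centre, radius = fit_sphere(pts)
```

The fit's sensitivity to marker coverage is visible here: a single planar arc leaves the centre underdetermined along the arc's axis, which echoes the paper's finding that functional trial performance must be standardized.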

  8. Assessing the Quality of Academic Libraries on the Web: The Development and Testing of Criteria.

    ERIC Educational Resources Information Center

    Chao, Hungyune

    2002-01-01

    This study develops and tests an instrument useful for evaluating the quality of academic library Web sites. Discusses criteria for print materials and human-computer interfaces; user-based perspectives; the use of factor analysis; a survey of library experts; testing reliability through analysis of variance; and regression models. (Contains 53…

  9. The SACADA database for human reliability and human performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Y. James Chang; Dennis Bley; Lawrence Criscione

    2014-05-01

    Lack of appropriate and sufficient human performance data has been identified as a key factor affecting human reliability analysis (HRA) quality, especially in the estimation of human error probability (HEP). The Scenario Authoring, Characterization, and Debriefing Application (SACADA) database was developed by the U.S. Nuclear Regulatory Commission (NRC) to address this data need. An agreement between the NRC and the South Texas Project Nuclear Operating Company (STPNOC) was established to support the SACADA development, with the aim of making the SACADA tool suitable for implementation in nuclear power plants' operator training programs to collect operator performance information. The collected data would support the STPNOC's operator training program and be shared with the NRC for improving HRA quality. This paper discusses the SACADA data taxonomy, the theoretical foundation, the prospective data to be generated from the SACADA raw data to inform human reliability and human performance, and considerations on the use of simulator data for HRA. Each SACADA data point consists of two information segments: context and performance results. Context is a characterization of the performance challenges to task success. The performance results are the results of performing the task. The data taxonomy uses a macrocognitive functions model for its framework. At a high level, information is classified according to the macrocognitive functions of detecting the plant abnormality, understanding the abnormality, deciding the response plan, executing the response plan, and team-related aspects (i.e., communication, teamwork, and supervision). The data are expected to be useful for analyzing the relations between context, error modes, and error causes in human performance.
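A data point with the two information segments described (context and performance results, organized by macrocognitive function) might be represented as follows; the field names and example values are illustrative assumptions, not the NRC's actual SACADA schema.

```python
# Illustrative record structure following the taxonomy described above.
# Field names and values are hypothetical, not the NRC schema.
from dataclasses import dataclass

MACROCOGNITIVE_FUNCTIONS = ("detecting", "understanding", "deciding",
                            "executing", "team")

@dataclass
class SacadaRecord:
    """One data point: a context segment paired with performance results."""
    scenario: str
    context: dict   # performance challenges, keyed by macrocognitive function
    results: dict   # outcome per macrocognitive function

record = SacadaRecord(
    scenario="loss of feedwater",
    context={"detecting": "subtle indicator drift", "deciding": "time pressure"},
    results={"detecting": "delayed", "deciding": "success"},
)
```

Pairing challenges with outcomes per function is what would let analysts relate context to error modes across many collected records.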

  10. Predicting QT prolongation in humans during early drug development using hERG inhibition and an anaesthetized guinea-pig model

    PubMed Central

    Yao, X; Anderson, D L; Ross, S A; Lang, D G; Desai, B Z; Cooper, D C; Wheelan, P; McIntyre, M S; Bergquist, M L; MacKenzie, K I; Becherer, J D; Hashim, M A

    2008-01-01

    Background and purpose: Drug-induced prolongation of the QT interval can lead to torsade de pointes, a life-threatening ventricular arrhythmia. Finding appropriate assays from among the plethora of options available to predict reliably this serious adverse effect in humans remains a challenging issue for the discovery and development of drugs. The purpose of the present study was to develop and verify a reliable and relatively simple approach for assessing, during preclinical development, the propensity of drugs to prolong the QT interval in humans. Experimental approach: Sixteen marketed drugs from various pharmacological classes with a known incidence—or lack thereof—of QT prolongation in humans were examined in hERG (human ether a-go-go-related gene) patch-clamp assay and an anaesthetized guinea-pig assay for QT prolongation using specific protocols. Drug concentrations in perfusates from hERG assays and plasma samples from guinea-pigs were determined using liquid chromatography-mass spectrometry. Key results: Various pharmacological agents that inhibit hERG currents prolong the QT interval in anaesthetized guinea-pigs in a manner similar to that seen in humans and at comparable drug exposures. Several compounds not associated with QT prolongation in humans failed to prolong the QT interval in this model. Conclusions and implications: Analysis of hERG inhibitory potency in conjunction with drug exposures and QT interval measurements in anaesthetized guinea-pigs can reliably predict, during preclinical drug development, the risk of human QT prolongation. A strategy is proposed for mitigating the risk of QT prolongation of new chemical entities during early lead optimization. PMID:18587422

  11. Interim reliability-evaluation program: analysis of the Browns Ferry, Unit 1, nuclear plant. Appendix B - system descriptions and fault trees

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mays, S.E.; Poloski, J.P.; Sullivan, W.H.

    1982-07-01

    This report describes a risk study of the Browns Ferry, Unit 1, nuclear plant. The study is one of four such studies sponsored by the NRC Office of Research, Division of Risk Assessment, as part of its Interim Reliability Evaluation Program (IREP), Phase II. This report is contained in four volumes: a main report and three appendixes. Appendix B provides a description of Browns Ferry, Unit 1, plant systems and the failure evaluation of those systems as they apply to accidents at Browns Ferry. Information is presented concerning front-line system fault analysis; support system fault analysis; human error models and probabilities; and generic control circuit analyses.

  12. 10 CFR 712.19 - Removal from HRP.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability Program... immediately remove that individual from HRP duties pending a determination of the individual's reliability. A... HRP duties pending a determination of the individual's reliability is an interim, precautionary action...

  13. Quantitative assessment of human motion using video motion analysis

    NASA Technical Reports Server (NTRS)

    Probe, John D.

    1993-01-01

    In the study of the dynamics and kinematics of the human body a wide variety of technologies has been developed. Photogrammetric techniques are well documented and are known to provide reliable positional data from recorded images. Often these techniques are used in conjunction with cinematography and videography for analysis of planar motion, and to a lesser degree three-dimensional motion. Cinematography has been the most widely used medium for movement analysis. Excessive operating costs and the lag time required for film development, coupled with recent advances in video technology, have allowed video based motion analysis systems to emerge as a cost effective method of collecting and analyzing human movement. The Anthropometric and Biomechanics Lab at Johnson Space Center utilizes the video based Ariel Performance Analysis System (APAS) to develop data on shirtsleeved and space-suited human performance in order to plan efficient on-orbit intravehicular and extravehicular activities. APAS is a fully integrated system of hardware and software for biomechanics and the analysis of human performance and generalized motion measurement. Major components of the complete system include the video system, the AT compatible computer, and the proprietary software.

  14. A Comparison of Probabilistic and Deterministic Campaign Analysis for Human Space Exploration

    NASA Technical Reports Server (NTRS)

    Merrill, R. Gabe; Andraschko, Mark; Stromgren, Chel; Cirillo, Bill; Earle, Kevin; Goodliff, Kandyce

    2008-01-01

    Human space exploration is by its very nature an uncertain endeavor. Vehicle reliability, technology development risk, budgetary uncertainty, and launch uncertainty all contribute to stochasticity in an exploration scenario. However, traditional strategic analysis has been done in a deterministic manner, analyzing and optimizing the performance of a series of planned missions. History has shown that exploration scenarios rarely follow such a planned schedule. This paper describes a methodology to integrate deterministic and probabilistic analysis of scenarios in support of human space exploration. Probabilistic strategic analysis is used to simulate "possible" scenario outcomes, based upon the likelihood of occurrence of certain events and a set of pre-determined contingency rules. The results of the probabilistic analysis are compared to the nominal results from the deterministic analysis to evaluate the robustness of the scenario to adverse events and to test and optimize contingency planning.
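
    The contrast the authors draw between a deterministic plan and simulated "possible" outcomes can be sketched with a toy Monte Carlo model. Everything below (mission count, launch reliability, retry rule) is an illustrative assumption, not a value from the paper:

```python
import random

def simulate_campaign(n_missions, p_launch_success, max_attempts=3, seed=None):
    """Count the total launch attempts needed to fly a campaign of
    n_missions, retrying each failed launch up to max_attempts.
    Returns None if any mission exhausts its retries (campaign lost)."""
    rng = random.Random(seed)
    attempts = 0
    for _ in range(n_missions):
        for _ in range(max_attempts):
            attempts += 1
            if rng.random() < p_launch_success:
                break  # this mission launched successfully
        else:
            return None  # mission lost after exhausting retries
    return attempts

# A deterministic analysis assumes every launch succeeds: exactly
# n_missions attempts. The probabilistic view yields a distribution.
results = [simulate_campaign(5, 0.95, seed=s) for s in range(10_000)]
completed = [r for r in results if r is not None]
print(f"campaign success rate: {len(completed) / len(results):.3f}")
print(f"mean attempts when completed: {sum(completed) / len(completed):.2f}")
```

    Comparing the simulated distribution against the nominal (deterministic) five attempts is the kind of robustness check the paper describes, just at toy scale.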

  15. A Framework for Reliability and Safety Analysis of Complex Space Missions

    NASA Technical Reports Server (NTRS)

    Evans, John W.; Groen, Frank; Wang, Lui; Austin, Rebekah; Witulski, Art; Mahadevan, Nagabhushan; Cornford, Steven L.; Feather, Martin S.; Lindsey, Nancy

    2017-01-01

    Long duration and complex mission scenarios are characteristics of NASA's human exploration of Mars, and will provide unprecedented challenges. Systems reliability and safety will become increasingly demanding and management of uncertainty will be increasingly important. NASA's current pioneering strategy recognizes and relies upon assurance of crew and asset safety. In this regard, flexibility to develop and innovate in the emergence of new design environments and methodologies, encompassing modeling of complex systems, is essential to meet the challenges.

  16. Medicine is not science: guessing the future, predicting the past.

    PubMed

    Miller, Clifford

    2014-12-01

    Irregularity limits human ability to know, understand and predict. A better understanding of irregularity may improve the reliability of knowledge. Irregularity and its consequences for knowledge are considered. Reliable predictive empirical knowledge of the physical world has always been obtained by observation of regularities, without needing science or theory. Prediction from observational knowledge can remain reliable despite some theories based on it proving false. A naïve theory of irregularity is outlined. Reducing irregularity and/or increasing regularity can increase the reliability of knowledge. Beyond long experience and specialization, improvements include implementing supporting knowledge systems of libraries of appropriately classified prior cases and clinical histories and education about expertise, intuition and professional judgement. A consequence of irregularity and complexity is that classical reductionist science cannot provide reliable predictions of the behaviour of complex systems found in nature, including of the human body. Expertise, expert judgement and their exercise appear overarching. Diagnosis involves predicting the past will recur in the current patient applying expertise and intuition from knowledge and experience of previous cases and probabilistic medical theory. Treatment decisions are an educated guess about the future (prognosis). Benefits of the improvements suggested here are likely in fields where paucity of feedback for practitioners limits development of reliable expert diagnostic intuition. Further analysis, definition and classification of irregularity is appropriate. Observing and recording irregularities are initial steps in developing irregularity theory to improve the reliability and extent of knowledge, albeit some forms of irregularity present inherent difficulties. © 2014 John Wiley & Sons, Ltd.

  17. NDE reliability and probability of detection (POD) evolution and paradigm shift

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Surendra

    2014-02-18

    The subject of NDE reliability and POD has gone through multiple phases since its humble beginnings in the late 1960s. These were followed by several programs, including the important one nicknamed “Have Cracks – Will Travel” (“Have Cracks” for short), conducted by Lockheed Georgia Company for the US Air Force during 1974–1978. This and other studies ultimately led to a series of developments in the field of reliability and POD, from the introduction of fracture mechanics and Damage Tolerant Design (DTD), to the statistical framework for POD estimation by Berens and Hovey in 1981, to MIL-HDBK-1823 (1999) and 1823A (2009). During the last decade, various groups and researchers have further studied reliability and POD using Model Assisted POD (MAPOD), Simulation Assisted POD (SAPOD), and Bayesian statistics. Each of these developments had one objective: improving the accuracy of life prediction in components, which to a large extent depends on the reliability and capability of NDE methods. Reliable detection and sizing of large flaws in components is therefore essential. Currently, POD is used to study the reliability and capability of NDE methods, although POD data offer no absolute truth regarding NDE reliability, i.e., system capability, the effects of flaw morphology, and the quantification of human factors. Furthermore, reliability and POD have often been treated as synonymous, but POD is not NDE reliability. POD is a subset of reliability, which consists of six phases: 1) sample selection using DOE; 2) NDE equipment setup and calibration; 3) System Measurement Evaluation (SME), including Gage Repeatability and Reproducibility (Gage R and R) and Analysis of Variance (ANOVA); 4) NDE system capability and electronic and physical saturation; 5) acquiring and fitting data to a model, and data analysis; and 6) POD estimation.
This paper provides an overview of the major POD milestones of the last several decades and discusses the rationale for using Integrated Computational Materials Engineering (ICME), MAPOD, SAPOD, and Bayesian statistics to study controllable and non-controllable variables, including human factors, for estimating POD. Another objective is to list the gaps between “hoped for” performance and validated or fielded hardware results.
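
    The hit/miss POD curve underlying these milestones is conventionally modeled as a logistic function of flaw size. The sketch below fits one by plain gradient ascent on synthetic data; the data, learning rate, and parameterization are illustrative assumptions, not the MIL-HDBK-1823A procedure (which uses maximum likelihood with confidence bounds):

```python
import math
import random

def fit_pod(sizes, hits, lr=0.1, steps=4000):
    """Fit POD(a) = 1 / (1 + exp(-(b0 + b1*a))) to binary hit/miss data
    by gradient ascent on the log-likelihood. A sketch, not a validated
    hit/miss analysis."""
    b0, b1 = 0.0, 0.0
    n = len(sizes)
    for _ in range(steps):
        g0 = g1 = 0.0
        for a, y in zip(sizes, hits):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * a)))
            g0 += y - p
            g1 += (y - p) * a
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

def pod(a, b0, b1):
    """Probability of detection for flaw size a under the fitted model."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * a)))

# Synthetic inspection data: larger flaws are detected more often.
rng = random.Random(0)
sizes = [rng.uniform(0.5, 5.0) for _ in range(400)]
hits = [1 if rng.random() < 1 / (1 + math.exp(-(-3 + 1.5 * a))) else 0
        for a in sizes]
b0, b1 = fit_pod(sizes, hits)
```

    The fitted curve rises with flaw size, which is the qualitative behavior a POD study verifies before quoting figures such as a90/95.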

  18. Uncertainty characterization approaches for risk assessment of DBPs in drinking water: a review.

    PubMed

    Chowdhury, Shakhawat; Champagne, Pascale; McLellan, P James

    2009-04-01

    The management of risk from disinfection by-products (DBPs) in drinking water has become a critical issue over the last three decades. The areas of concern for risk management studies include (i) human health risk from DBPs, (ii) disinfection performance, (iii) technical feasibility (maintenance, management and operation) of treatment and disinfection approaches, and (iv) cost. Human health risk assessment is typically considered to be the most important phase of the risk-based decision-making or risk management studies. The factors associated with health risk assessment and other attributes are generally prone to considerable uncertainty. Probabilistic and non-probabilistic approaches have both been employed to characterize uncertainties associated with risk assessment. The probabilistic approaches include sampling-based methods (typically Monte Carlo simulation and stratified sampling) and asymptotic (approximate) reliability analysis (first- and second-order reliability methods). Non-probabilistic approaches include interval analysis, fuzzy set theory and possibility theory. However, it is generally accepted that no single method is suitable for the entire spectrum of problems encountered in uncertainty analyses for risk assessment. Each method has its own set of advantages and limitations. In this paper, the feasibility and limitations of different uncertainty analysis approaches are outlined for risk management studies of drinking water supply systems. The findings assist in the selection of suitable approaches for uncertainty analysis in risk management studies associated with DBPs and human health risk.
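
    The sampling-based approach the review describes can be illustrated with a minimal Monte Carlo sketch. The exposure equation is the standard chronic-daily-intake form; every distribution and parameter value below is an illustrative placeholder, not a value from the paper or any regulation:

```python
import math
import random

def mc_cancer_risk(n=50_000, seed=1):
    """Sampling-based (Monte Carlo) uncertainty sketch for lifetime excess
    cancer risk from a DBP in drinking water.
    Risk = (C * IR * EF * ED) / (BW * AT) * SF, with illustrative inputs."""
    rng = random.Random(seed)
    risks = []
    for _ in range(n):
        C = rng.lognormvariate(math.log(0.04), 0.5)   # mg/L, concentration
        IR = rng.normalvariate(2.0, 0.4)              # L/day, intake rate
        BW = rng.normalvariate(70.0, 10.0)            # kg, body weight
        EF, ED, AT = 350.0, 30.0, 70.0 * 365.0        # days/yr, yr, days
        SF = 0.0061                                   # (mg/kg/day)^-1, placeholder slope factor
        cdi = (C * max(IR, 0.1) * EF * ED) / (max(BW, 30.0) * AT)
        risks.append(cdi * SF)
    risks.sort()
    return risks[int(0.5 * n)], risks[int(0.95 * n)]  # median, 95th percentile

median, p95 = mc_cancer_risk()
print(f"median risk: {median:.2e}, 95th percentile: {p95:.2e}")
```

    Reporting a percentile spread rather than a point estimate is what distinguishes this sampling-based characterization from a deterministic risk calculation.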

  19. Human alteration of the rural landscape: Variations in visual perception

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cloquell-Ballester, Vicente-Agustin, E-mail: cloquell@dpi.upv.es; Carmen Torres-Sibille, Ana del; Cloquell-Ballester, Victor-Andres

    2012-01-15

    The objective of this investigation is to evaluate how visual perception varies as the rural landscape is altered by human interventions of varying character. An experiment is carried out using Semantic Differential Analysis to analyse the effect of the character and the type of the intervention on perception. Interventions are divided into 'elements of permanent industrial character', 'elements of permanent rural character' and 'elements of temporary character', and these categories are sub-divided into smaller groups according to the type of development. To increase the reliability of the results, the Intraclass Correlation Coefficient tool is applied to validate the semantic space of the perceptual responses and to determine the number of subjects required for a reliable evaluation of the scenes.

  20. Human Reliability Assessments: Using the Past (Shuttle) to Predict the Future (Orion)

    NASA Technical Reports Server (NTRS)

    DeMott, Diana L.; Bigler, Mark A.

    2017-01-01

    NASA (National Aeronautics and Space Administration) Johnson Space Center (JSC) Safety and Mission Assurance (S&MA) uses two human reliability analysis (HRA) methodologies. The first is a simplified method based on how much time is available to complete the action, with consideration of environmental and personal factors that could influence the human's reliability. This method is expected to provide a conservative value, or placeholder, as a preliminary estimate. This preliminary estimate, or screening value, is used to determine which placeholders need a more detailed assessment. The second methodology is used to develop a more detailed human reliability assessment of the performance of critical human actions. This assessment needs to consider more than the time available, including factors such as the importance of the action, the context, environmental factors, potential human stresses, previous experience, training, physical design interfaces, available procedures/checklists, and internal human stresses. The more detailed assessment is expected to be more realistic than one based primarily on time available. When performing an HRA on a system or process that has an operational history, information specific to the task is available from that history and experience. In the case of a Probabilistic Risk Assessment (PRA) that is based on a new design and has no operational history, providing a "reasonable" assessment of potential crew actions becomes more challenging. To determine what is expected of future operational parameters, the judgment of individuals who were familiar with the systems and processes previously implemented by NASA was used to provide the "best" available data. Personnel from Flight Operations, Flight Directors, Launch Test Directors, Control Room Console Operators, and Astronauts were all interviewed to provide a comprehensive picture of previous NASA operations. 
Verification of the assumptions and expectations expressed in the assessments will be needed when the procedures, flight rules, and operational requirements are developed and then finalized.

  1. Human Reliability Assessments: Using the Past (Shuttle) to Predict the Future (Orion)

    NASA Technical Reports Server (NTRS)

    DeMott, Diana; Bigler, Mark

    2016-01-01

    NASA (National Aeronautics and Space Administration) Johnson Space Center (JSC) Safety and Mission Assurance (S&MA) uses two human reliability analysis (HRA) methodologies. The first is a simplified method based on how much time is available to complete the action, with consideration of environmental and personal factors that could influence the human's reliability. This method is expected to provide a conservative value, or placeholder, as a preliminary estimate. This preliminary estimate, or screening value, is used to determine which placeholders need a more detailed assessment. The second methodology is used to develop a more detailed human reliability assessment of the performance of critical human actions. This assessment needs to consider more than the time available, including factors such as the importance of the action, the context, environmental factors, potential human stresses, previous experience, training, physical design interfaces, available procedures/checklists, and internal human stresses. The more detailed assessment is expected to be more realistic than one based primarily on time available. When performing an HRA on a system or process that has an operational history, information specific to the task is available from that history and experience. In the case of a Probabilistic Risk Assessment (PRA) that is based on a new design and has no operational history, providing a "reasonable" assessment of potential crew actions becomes more challenging. To determine what is expected of future operational parameters, the judgment of individuals who were familiar with the systems and processes previously implemented by NASA was used to provide the "best" available data. Personnel from Flight Operations, Flight Directors, Launch Test Directors, Control Room Console Operators, and Astronauts were all interviewed to provide a comprehensive picture of previous NASA operations. 
Verification of the assumptions and expectations expressed in the assessments will be needed when the procedures, flight rules and operational requirements are developed and then finalized.

  2. Effects of imperfect automation on decision making in a simulated command and control task.

    PubMed

    Rovira, Ericka; McGarry, Kathleen; Parasuraman, Raja

    2007-02-01

    Effects of four types of automation support and two levels of automation reliability were examined. The objective was to examine the differential impact of information and decision automation and to investigate the costs of automation unreliability. Research has shown that imperfect automation can lead to differential effects of stages and levels of automation on human performance. Eighteen participants performed a "sensor to shooter" targeting simulation of command and control. Dependent variables included accuracy and response time of target engagement decisions, secondary task performance, and subjective ratings of mental work-load, trust, and self-confidence. Compared with manual performance, reliable automation significantly reduced decision times. Unreliable automation led to greater cost in decision-making accuracy under the higher automation reliability condition for three different forms of decision automation relative to information automation. At low automation reliability, however, there was a cost in performance for both information and decision automation. The results are consistent with a model of human-automation interaction that requires evaluation of the different stages of information processing to which automation support can be applied. If fully reliable decision automation cannot be guaranteed, designers should provide users with information automation support or other tools that allow for inspection and analysis of raw data.

  3. MR signal-fat-fraction analysis and T2* weighted imaging measure BAT reliably on humans without cold exposure.

    PubMed

    Holstila, Milja; Pesola, Marko; Saari, Teemu; Koskensalo, Kalle; Raiko, Juho; Borra, Ronald J H; Nuutila, Pirjo; Parkkola, Riitta; Virtanen, Kirsi A

    2017-05-01

    Brown adipose tissue (BAT) is compositionally distinct from white adipose tissue (WAT) in terms of triglyceride and water content. In adult humans, the most significant BAT depot is localized in the supraclavicular area. Our aim is to differentiate brown adipose tissue from white adipose tissue using fat T2* relaxation time mapping and signal-fat-fraction (SFF) analysis based on a commercially available modified 2-point Dixon (mDixon) water-fat separation method. We hypothesize that magnetic resonance (MR) imaging can reliably measure BAT regardless of cold-induced metabolic activation, with BAT having a significantly higher water and iron content compared to WAT. The supraclavicular area of 13 volunteers was studied on a 3T PET-MRI scanner using T2* relaxation time and SFF mapping, both during cold exposure and at ambient temperature, and 18F-FDG PET during cold exposure. Volumes of interest (VOIs) were defined semiautomatically in the supraclavicular fat depot, subcutaneous WAT, and muscle. The supraclavicular fat depot (assumed to contain BAT) had a significantly lower SFF and fat T2* relaxation time compared to subcutaneous WAT. Cold exposure did not significantly affect the MR-based measurements. SFF and T2* values measured during cold exposure and at ambient temperature correlated inversely with the glucose uptake measured by 18F-FDG PET. Human BAT can be reliably and safely assessed using MRI without cold activation and PET-related radiation exposure. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. [Development of a measurement of intellectual capital for hospital nursing organizations].

    PubMed

    Kim, Eun A; Jang, Keum Seong

    2011-02-01

    This study was done to develop an instrument for measuring intellectual capital and to assess its validity and reliability in identifying the components of intellectual capital (human capital, structure capital, and customer capital) in hospital nursing organizations. The participants were 950 regular clinical nurses who had worked for over 13 months in 7 medical hospitals, including 4 national university hospitals and 3 private university hospitals. The data were collected through a questionnaire survey done from July 2 to August 25, 2009. Data from 906 nurses were used for the final analysis. Data were analyzed using descriptive statistics, Cronbach's alpha coefficients, item analysis, and factor analysis (principal component analysis, Varimax rotation) with the SPSS PC+ 17.0 for Windows program. Developing the instrument for measuring intellectual capital in hospital nursing organizations involved a literature review, development of preliminary items, and verification of validity and reliability. The final instrument was in a self-report form on a 5-point Likert scale. There were 29 items on human capital (5 domains), 21 items on customer capital (4 domains), and 26 items on structure capital (4 domains). The results of this study may be useful to assess the levels of intellectual capital of hospital nursing organizations.

  5. Reliability of conditioned pain modulation: a systematic review

    PubMed Central

    Kennedy, Donna L.; Kemp, Harriet I.; Ridout, Deborah; Yarnitsky, David; Rice, Andrew S.C.

    2016-01-01

    A systematic literature review was undertaken to determine whether conditioned pain modulation (CPM) is reliable. Longitudinal, English-language observational studies of the repeatability of a CPM test paradigm in adult humans were included. Two independent reviewers assessed the risk of bias in six domains (study participation, study attrition, prognostic factor measurement, outcome measurement, confounding, and analysis) using the Quality in Prognosis Studies (QUIPS) critical assessment tool. Intraclass correlation coefficients (ICCs) below 0.4 were considered poor; 0.4 to 0.59 fair; 0.6 to 0.75 good; and above 0.75 excellent. Ten studies were included in the final review. Meta-analysis was not appropriate because of differences between studies. The intersession reliability of the CPM effect was investigated in 8 studies and reported as good (ICC = 0.6-0.75) in 3 studies and excellent (ICC > 0.75) in subgroups in 2 of those 3. The assessment of risk of bias demonstrated that reporting is not comprehensive for the description of sample demographics, recruitment strategy, and study attrition. The absence of blinding, a lack of control for confounding factors, and a lack of standardisation in statistical analysis are common. Conditioned pain modulation is a reliable measure; however, the degree of reliability is heavily dependent on stimulation parameters and study methodology, and this warrants consideration by investigators. The validation of CPM as a robust prognostic factor in experimental and clinical pain studies may be facilitated by improvements in the reporting of CPM reliability studies. PMID:27559835
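
    The ICC interpretation bands the review applies translate directly into a small helper; this is a sketch of the banding exactly as stated in the abstract:

```python
def icc_band(icc):
    """Classify an intraclass correlation coefficient using the review's
    bands: <0.4 poor, 0.4-0.59 fair, 0.6-0.75 good, >0.75 excellent."""
    if icc < 0.4:
        return "poor"
    if icc < 0.6:
        return "fair"
    if icc <= 0.75:
        return "good"
    return "excellent"

# The intersession ICCs reported in the review map onto the bands, e.g.:
print(icc_band(0.89))  # an ICC above 0.75 falls in the "excellent" band
```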

  6. Bridging Human Reliability Analysis and Psychology, Part 1: The Psychological Literature Review for the IDHEAS Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    April M. Whaley; Stacey M. L. Hendrickson; Ronald L. Boring

    In response to Staff Requirements Memorandum (SRM) SRM-M061020, the U.S. Nuclear Regulatory Commission (NRC) is sponsoring work to update the technical basis underlying human reliability analysis (HRA) in an effort to improve the robustness of HRA. The ultimate goal of this work is to develop a hybrid of existing methods addressing limitations of current HRA models and in particular issues related to intra- and inter-method variabilities and results. This hybrid method is now known as the Integrated Decision-tree Human Event Analysis System (IDHEAS). Existing HRA methods have looked at elements of the psychological literature, but there has not previously been a systematic attempt to translate the complete span of cognition from perception to action into mechanisms that can inform HRA. Therefore, a first step of this effort was to perform a literature search of psychology, cognition, behavioral science, teamwork, and operating performance to incorporate current understanding of human performance in operating environments, thus affording an improved technical foundation for HRA. However, this literature review went one step further by mining the literature findings to establish causal relationships and explicit links between the different types of human failures, performance drivers and associated performance measures ultimately used for quantification. This is the first of two papers that detail the literature review (paper 1) and its product (paper 2). This paper describes the literature review and the high-level architecture used to organize the literature review, and the second paper (Whaley, Hendrickson, Boring, & Xing, these proceedings) describes the resultant cognitive framework.

  7. A Case Study on Improving Intensive Care Unit (ICU) Services Reliability: By Using Process Failure Mode and Effects Analysis (PFMEA)

    PubMed Central

    Yousefinezhadi, Taraneh; Jannesar Nobari, Farnaz Attar; Goodari, Faranak Behzadi; Arab, Mohammad

    2016-01-01

    Introduction: In any complex human system, human error is inevitable and cannot be eliminated simply by blaming wrongdoers. With the aim of improving the reliability of Intensive Care Units (ICUs) in hospitals, this research identifies and analyzes ICU process failure modes from the standpoint of a systems approach to errors. Methods: In this descriptive study, data were gathered qualitatively through observations, document reviews, and Focus Group Discussions (FGDs) with the process owners in two selected ICUs in Tehran in 2014. Data analysis, however, was quantitative, based on each failure's Risk Priority Number (RPN) under the Failure Mode and Effects Analysis (FMEA) method. In addition, some causes of failures were analyzed qualitatively with the Eindhoven Classification Model (ECM). Results: Through the FMEA methodology, 378 potential failure modes from 180 ICU activities in hospital A and 184 potential failures from 99 ICU activities in hospital B were identified and evaluated. At the 90% reliability cut-off (RPN ≥ 100), 18 failures in hospital A and 42 in hospital B were identified as non-acceptable risks, and their causes were then analyzed with ECM. Conclusions: Applying the modified PFMEA to improve process reliability in two different kinds of hospitals shows that this method empowers staff to identify, evaluate, prioritize, and analyze all potential failure modes, and also encourages them to identify causes, recommend corrective actions, and participate in process improvement without feeling blamed by top management. Moreover, by combining FMEA and ECM, team members can easily identify failure causes from a health care perspective. PMID:27157162
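
    The RPN ranking at the core of FMEA is conventionally the product of severity, occurrence, and detection ratings. A minimal sketch of that scoring and the study's RPN ≥ 100 cut-off follows; the failure-mode names and ratings are illustrative, not data from the study:

```python
def rpn(severity, occurrence, detection):
    """Risk Priority Number as conventionally computed in FMEA:
    the product of severity, occurrence, and detection ratings (1-10 each)."""
    for rating in (severity, occurrence, detection):
        if not 1 <= rating <= 10:
            raise ValueError("FMEA ratings are conventionally scored 1-10")
    return severity * occurrence * detection

def non_acceptable(failure_modes, threshold=100):
    """Flag failure modes at or above an RPN cut-off (the study used 100)."""
    return [name for name, (s, o, d) in failure_modes.items()
            if rpn(s, o, d) >= threshold]

# Illustrative failure modes with (severity, occurrence, detection) ratings:
modes = {
    "wrong drug dose": (9, 4, 5),           # RPN 180 -> non-acceptable
    "ventilator alarm missed": (8, 2, 3),   # RPN 48  -> acceptable
}
print(non_acceptable(modes))
```

    Prioritizing by RPN is what lets a team direct corrective actions at the highest-risk failure modes first.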

  8. The temporal change in the cortical activations due to salty and sweet tastes in humans: fMRI and time-intensity sensory evaluation.

    PubMed

    Nakamura, Yuko; Goto, Tazuko K; Tokumori, Kenji; Yoshiura, Takashi; Kobayashi, Koji; Nakamura, Yasuhiko; Honda, Hiroshi; Ninomiya, Yuzo; Yoshiura, Kazunori

    2012-04-18

    It remains unclear how the cerebral cortex of humans perceives taste temporally, and whether or not such objective data about the brain show a correlation with the current widely used conventional methods of taste-intensity sensory evaluation. The aim of this study was to investigate the difference in the time-intensity profile between salty and sweet tastes in the human brain. The time-intensity profiles of functional MRI (fMRI) data of the human taste cortex were analyzed using finite impulse response analysis for a direct interpretation in terms of the peristimulus time signal. Also, time-intensity sensory evaluations for tastes were performed under the same condition as fMRI to confirm the reliability of the temporal profile in the fMRI data. The time-intensity profile for the brain activations due to a salty taste changed more rapidly than those due to a sweet taste in the human brain cortex and was also similar to the time-intensity sensory evaluation, confirming the reliability of the temporal profile of the fMRI data. In conclusion, the time-intensity profile using finite impulse response analysis for fMRI data showed that there was a temporal difference in the neural responses between salty and sweet tastes over a given period of time. This indicates that there might be taste-specific temporal profiles of activations in the human brain.

  9. A Canonical Correlation Analysis of AIDS Restriction Genes and Metabolic Pathways Identifies Purine Metabolism as a Key Cooperator.

    PubMed

    Ye, Hanhui; Yuan, Jinjin; Wang, Zhengwu; Huang, Aiqiong; Liu, Xiaolong; Han, Xiao; Chen, Yahong

    2016-01-01

    Human immunodeficiency virus causes a severe disease in humans, referred to as immune deficiency syndrome. Studies on the interaction between host genetic factors and the virus have revealed dozens of genes that impact diverse processes in the AIDS disease. To resolve more genetic factors related to AIDS, a canonical correlation analysis was used to determine the correlation between AIDS restriction and metabolic pathway gene expression. The results show that HIV-1 postentry cellular viral cofactors from AIDS restriction genes are coexpressed in human transcriptome microarray datasets. Further, the purine metabolism pathway comprises novel host factors that are coexpressed with AIDS restriction genes. Using a canonical correlation analysis for expression is a reliable approach to exploring the mechanism underlying AIDS.

  10. Launch and Assembly Reliability Analysis for Human Space Exploration Missions

    NASA Technical Reports Server (NTRS)

    Cates, Grant; Gelito, Justin; Stromgren, Chel; Cirillo, William; Goodliff, Kandyce

    2012-01-01

    NASA's future human space exploration strategy includes single and multi-launch missions to various destinations including cis-lunar space, near Earth objects such as asteroids, and ultimately Mars. Each campaign is being defined by Design Reference Missions (DRMs). Many of these missions are complex, requiring multiple launches and assembly of vehicles in orbit. Certain missions also have constrained departure windows to the destination. These factors raise concerns regarding the reliability of launching and assembling all required elements in time to support planned departure. This paper describes an integrated methodology for analyzing launch and assembly reliability in any single DRM or set of DRMs starting with flight hardware manufacturing and ending with final departure to the destination. A discrete event simulation is built for each DRM that includes the pertinent risk factors including, but not limited to: manufacturing completion; ground transportation; ground processing; launch countdown; ascent; rendezvous and docking, assembly, and orbital operations leading up to trans-destination-injection. Each reliability factor can be selectively activated or deactivated so that the most critical risk factors can be identified. This enables NASA to prioritize mitigation actions so as to improve mission success.
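
    The paper's idea of chaining per-phase reliabilities across multiple launches, and toggling risk factors to find the most critical ones, can be sketched with a simple simulation. The phase names and probabilities below are illustrative assumptions, not values from the analysis:

```python
import random

# Per-phase success probabilities for one launch (illustrative placeholders).
PHASES = {
    "manufacturing": 0.99,
    "ground_processing": 0.98,
    "launch_countdown": 0.97,
    "ascent": 0.98,
    "rendezvous_docking": 0.99,
}

def campaign_success_prob(n_launches, active_phases, trials=20_000, seed=7):
    """Estimate, by simulation, the probability that every launch in a
    multi-launch assembly campaign clears every active phase."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        if all(rng.random() < PHASES[ph]
               for _ in range(n_launches) for ph in active_phases):
            ok += 1
    return ok / trials

# Deactivating a risk factor (as the paper describes) exposes its impact:
p_all = campaign_success_prob(3, list(PHASES))
p_no_ascent = campaign_success_prob(3, [p for p in PHASES if p != "ascent"])
print(f"all factors: {p_all:.3f}, ascent risk deactivated: {p_no_ascent:.3f}")
```

    Running the model with each factor deactivated in turn ranks the factors by how much they depress campaign success, which is the prioritization the paper aims at.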

  11. rpb2 is a reliable reference gene for quantitative gene expression analysis in the dermatophyte Trichophyton rubrum.

    PubMed

    Jacob, Tiago R; Peres, Nalu T A; Persinoti, Gabriela F; Silva, Larissa G; Mazucato, Mendelson; Rossi, Antonio; Martinez-Rossi, Nilce M

    2012-05-01

    The selection of reference genes used for data normalization to quantify gene expression by real-time PCR amplifications (qRT-PCR) is crucial for the accuracy of this technique. In spite of this, little information regarding such genes for qRT-PCR is available for gene expression analyses in pathogenic fungi. Thus, we investigated the suitability of eight candidate reference genes in isolates of the human dermatophyte Trichophyton rubrum subjected to several environmental challenges, such as drug exposure, interaction with human nail and skin, and heat stress. The stability of these genes was determined by geNorm, NormFinder and Best-Keeper programs. The gene with the most stable expression in the majority of the conditions tested was rpb2 (DNA-dependent RNA polymerase II), which was validated in three T. rubrum strains. Moreover, the combination of rpb2 and chs1 (chitin synthase) genes provided for the most reliable qRT-PCR data normalization in T. rubrum under a broad range of biological conditions. To the best of our knowledge this is the first report on the selection of reference genes for qRT-PCR data normalization in dermatophytes and the results of these studies should permit further analysis of gene expression under several experimental conditions, with improved accuracy and reliability.
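
    The gene-stability measure computed by programs like geNorm can be sketched in a few lines. This follows the published geNorm definition (Vandesompele et al., 2002) of M as the average standard deviation of pairwise log2 expression ratios; the expression values are made up for illustration and are not T. rubrum data:

```python
import math
import statistics

def stability_m(expr):
    """geNorm-style gene-stability measure M: for each candidate reference
    gene, the mean, over all other genes, of the standard deviation of the
    log2 expression ratios across samples. Lower M = more stable.
    `expr` maps gene name -> list of expression values, one per condition."""
    genes = list(expr)
    m = {}
    for j in genes:
        sds = []
        for k in genes:
            if k == j:
                continue
            ratios = [math.log2(a / b) for a, b in zip(expr[j], expr[k])]
            sds.append(statistics.stdev(ratios))
        m[j] = sum(sds) / len(sds)
    return m

# Illustrative expression values across four conditions:
expr = {
    "rpb2": [1.00, 1.05, 0.98, 1.02],   # nearly constant -> low M
    "chs1": [1.10, 1.00, 1.08, 1.04],
    "geneX": [1.0, 3.2, 0.4, 2.5],      # highly variable -> high M
}
m = stability_m(expr)
print(min(m, key=m.get))  # the most stable candidate reference gene
```

    Ranking candidates by M and normalizing against the most stable one (or a pair, as the study did with rpb2 and chs1) is the standard workflow this measure supports.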

  12. Inter-rater reliability for movement pattern analysis (MPA): measuring patterning of behaviors versus discrete behavior counts as indicators of decision-making style

    PubMed Central

    Connors, Brenda L.; Rende, Richard; Colton, Timothy J.

    2014-01-01

    The unique yield of collecting observational data on human movement has received increasing attention in a number of domains, including the study of decision-making style. As such, interest has grown in the nuances of core methodological issues, including the best ways of assessing inter-rater reliability. In this paper we focus on one key topic – the distinction between establishing reliability for the patterning of behaviors as opposed to the computation of raw counts – and suggest that reliability for each be compared empirically rather than determined a priori. We illustrate by assessing inter-rater reliability for key outcome measures derived from movement pattern analysis (MPA), an observational methodology that records body movements as indicators of decision-making style with demonstrated predictive validity. While reliability ranged from moderate to good for raw counts of behaviors reflecting each of two Overall Factors generated within MPA (Assertion and Perspective), inter-rater reliability for patterning (proportional indicators of each factor) was significantly higher and excellent (ICC = 0.89). Furthermore, patterning, as compared to raw counts, provided better prediction of observable decision-making process assessed in the laboratory. These analyses support the utility of using an empirical approach to inform the consideration of measuring patterning versus discrete behavioral counts of behaviors when determining inter-rater reliability of observable behavior. They also speak to the substantial reliability that may be achieved via application of theoretically grounded observational systems such as MPA that reveal thinking and action motivations via visible movement patterns. PMID:24999336

  14. Nonlinear analysis of human physical activity patterns in health and disease.

    PubMed

    Paraschiv-Ionescu, A; Buchser, E; Rutschmann, B; Aminian, K

    2008-02-01

    The reliable and objective assessment of chronic disease state has been and still is a very significant challenge in clinical medicine. An essential feature of human behavior related to the health status, the functional capacity, and the quality of life is the physical activity during daily life. A common way to assess physical activity is to measure the quantity of body movement. Since human activity is controlled by various factors both extrinsic and intrinsic to the body, quantitative parameters only provide a partial assessment and do not allow for a clear distinction between normal and abnormal activity. In this paper, we propose a methodology for the analysis of human activity pattern based on the definition of different physical activity time series with the appropriate analysis methods. The temporal pattern of postures, movements, and transitions between postures was quantified using fractal analysis and symbolic dynamics statistics. The derived nonlinear metrics were able to discriminate patterns of daily activity generated from healthy and chronic pain states.
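    The symbolic dynamics part of such an analysis can be sketched with a toy example: encode each epoch of daily activity as a symbol (here S = sitting, W = walking, L = lying, an assumed coding) and compare the normalized Shannon entropy of fixed-length "words" between a stereotyped and a varied pattern. This is a minimal illustration of the idea, not the authors' exact metrics.

```python
import numpy as np
from collections import Counter

def word_entropy(states, word_len=3):
    """Normalized Shannon entropy of fixed-length 'words' in a symbol
    sequence; values near 0 indicate a rigid, stereotyped activity
    pattern, values near 1 a maximally varied one."""
    n_sym = len(set(states))
    if n_sym < 2:
        return 0.0
    words = Counter(tuple(states[i:i + word_len])
                    for i in range(len(states) - word_len + 1))
    total = sum(words.values())
    p = np.array([c / total for c in words.values()])
    h = -(p * np.log2(p)).sum()
    return h / (word_len * np.log2(n_sym))  # divide by max possible entropy

periodic = "SW" * 50          # strict sit/walk alternation
varied = "SSWLSWWLSLWS" * 10  # richer mix of sit/walk/lie
print(word_entropy(periodic), word_entropy(varied))
```

    A rigid alternation scores well below a richer mix, which is the kind of contrast the paper exploits to separate healthy from chronic-pain activity patterns.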

  15. An object-oriented approach to risk and reliability analysis : methodology and aviation safety applications.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dandini, Vincent John; Duran, Felicia Angelica; Wyss, Gregory Dane

    2003-09-01

    This article describes how features of event tree analysis and Monte Carlo-based discrete event simulation can be combined with concepts from object-oriented analysis to develop a new risk assessment methodology, with some of the best features of each. The resultant object-based event scenario tree (OBEST) methodology enables an analyst to rapidly construct realistic models for scenarios for which an a priori discovery of event ordering is either cumbersome or impossible. Each scenario produced by OBEST is automatically associated with a likelihood estimate because probabilistic branching is integral to the object model definition. The OBEST methodology is then applied to an aviation safety problem that considers mechanisms by which an aircraft might become involved in a runway incursion incident. The resulting OBEST model demonstrates how a close link between human reliability analysis and probabilistic risk assessment methods can provide important insights into aviation safety phenomenology.
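    The probabilistic-branching idea can be sketched in a few lines: each scenario walks through branch points whose probabilities are part of the model definition, and Monte Carlo replication tallies outcome likelihoods. The runway incursion branch probabilities below are invented for illustration; they are not from the OBEST case study.

```python
import random
from collections import Counter

# Hypothetical branch probabilities, for illustration only.
P_READBACK_ERROR = 0.02    # crew mis-reads back a hold-short clearance
P_CONTROLLER_CATCH = 0.80  # controller detects the bad readback
P_CREW_SELF_CATCH = 0.30   # crew notices before crossing the hold line

def run_scenario(rng):
    """One scenario walk: probabilistic branching decides the event path."""
    if rng.random() >= P_READBACK_ERROR:
        return "nominal taxi"
    if rng.random() < P_CONTROLLER_CATCH:
        return "error caught by controller"
    if rng.random() < P_CREW_SELF_CATCH:
        return "error caught by crew"
    return "runway incursion"

rng = random.Random(42)
N = 100_000
tallies = Counter(run_scenario(rng) for _ in range(N))
for outcome, count in tallies.most_common():
    print(f"{outcome}: {count / N:.4f}")
```

    Because every branch carries a probability, the relative frequency of each outcome is itself the scenario likelihood estimate that OBEST attaches to its scenario tree leaves.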

  16. Tutorial on use of intraclass correlation coefficients for assessing intertest reliability and its application in functional near-infrared spectroscopy-based brain imaging

    NASA Astrophysics Data System (ADS)

    Li, Lin; Zeng, Li; Lin, Zi-Jing; Cazzell, Mary; Liu, Hanli

    2015-05-01

    Test-retest reliability of neuroimaging measurements is an important concern in the investigation of cognitive functions in the human brain. To date, intraclass correlation coefficients (ICCs), originally used in inter-rater reliability studies in the behavioral sciences, have become commonly used metrics in reliability studies on neuroimaging and functional near-infrared spectroscopy (fNIRS). However, as there are six popular forms of ICC, how comprehensively one understands them affects how appropriately one selects, uses, and interprets ICCs in a reliability study. We first offer a brief review and tutorial on the statistical rationale of ICCs, including their underlying analysis of variance models and technical definitions, in the context of assessing intertest reliability. Second, we provide general guidelines on the selection and interpretation of ICCs. Third, we illustrate the proposed approach by using an actual research study to assess intertest reliability of fNIRS-based, volumetric diffuse optical tomography of brain activities stimulated by a risk decision-making protocol. Last, special issues that may arise in reliability assessment using ICCs are discussed and solutions are suggested.
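    Two of the ICC forms discussed here follow directly from the two-way ANOVA mean squares. A minimal numpy sketch, assuming an n_subjects × n_raters ratings matrix with no missing data (ICC(2,1) measures absolute agreement, ICC(3,1) consistency):

```python
import numpy as np

def icc_2_1_and_3_1(ratings):
    """ICC(2,1) and ICC(3,1) from an n_subjects x n_raters matrix,
    computed from the two-way ANOVA mean squares."""
    Y = np.asarray(ratings, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    ss_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum()    # between subjects
    ss_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum()    # between raters
    ss_err = ((Y - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))
    icc21 = (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
    icc31 = (ms_r - ms_e) / (ms_r + (k - 1) * ms_e)
    return icc21, icc31

# Two raters who differ by a constant offset: perfectly consistent
# (ICC(3,1) = 1) but not in absolute agreement (ICC(2,1) < 1).
icc21, icc31 = icc_2_1_and_3_1([[1, 2], [3, 4], [5, 6]])
print(f"ICC(2,1) = {icc21:.3f}, ICC(3,1) = {icc31:.3f}")
```

    The gap between the two values in this toy case is exactly why the tutorial stresses selecting the ICC form that matches the question being asked.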

  17. Analysis of Light Emitting Diode Technology for Aerospace Suitability in Human Space Flight Applications

    NASA Astrophysics Data System (ADS)

    Treichel, Todd H.

    Commercial space designers must manage space flight designs using parts selected from qualified parts listings approved by Department of Defense (DOD) and NASA agencies for reliability and safety. The research problem, shared by government and the private aerospace industry, is that LEDs cannot replace existing fluorescent lighting in manned space flight vehicles until the technology meets DOD and NASA requirements for reliability, safety, and effects on astronaut cognition and health. The purpose of this quantitative experimental study was to determine to what extent commercial LEDs can meet NASA requirements for manufacturer reliability, color reliability, robustness to environmental test requirements, and degradation effects from operational power, while providing comfortable ambient light free of eyestrain to astronauts in lieu of current fluorescent lighting. A fractional factorial experiment tested white and blue LEDs under NASA-required space flight environmental stress testing and applied operating current. The second phase of the study used a randomized block design to test human factor effects of LEDs and a qualified ISS fluorescent light for retinal fatigue and eye strain. Eighteen human subjects were recruited from university student members of the American Institute of Aeronautics and Astronautics. Phase 1 testing showed that commercial LEDs met all DOD and NASA requirements for manufacturer reliability, color reliability, robustness to environmental requirements, and degradation effects from operational power. Findings showed statistical significance for the LED color and operational power variables, but degraded light output did not fall below the industry-recognized 70% threshold.
    Phase 2 human factors testing showed no statistically significant evidence that the NASA-approved ISS fluorescent lights or the blue or white LEDs caused fatigue, eye strain, or headache when study participants performed detailed tasks of reading and assembling mechanical parts for two uninterrupted hours. However, subjects self-reported that the blue LEDs provided the whitest light and were the favored sole artificial light source for space travel, over the white LED and the ISS fluorescent. According to NASA standards, these findings indicate that LEDs meet the criteria for a NASA TRL 7 rating, as commercial LED manufacturers passed rigorous testing for suitability in space flight environments and human factor effects. Recommendations for future research include replicating this study for space flight while reducing its limitations by 1) testing human subjects' exposure to LEDs in a simulated space capsule environment over several days, and 2) installing and testing LEDs in space modules being evaluated for human spaceflight.

  18. RELIABLE ASSAYS FOR DETERMINING ENDOGENOUS COMPONENTS OF HUMAN MILK

    EPA Science Inventory

    Healthy women from 18-38 years old (N=25) fasted for several hours and twice donated blood and milk (postpartum 2-7 weeks and 3-4 months) for the EPA's Methods Advancement for Milk Analysis study, a pilot for the National Children's Study (NCS). Endogenous components were chosen...

  19. Computer Simulation of Human Behavior: Assessment of Creativity.

    ERIC Educational Resources Information Center

    Greene, John F.

    The major purpose of this study is to further the development of procedures which minimize current limitations of creativity instruments, thus yielding a reliable and functional means for assessing creativity. Computerized content analysis and multiple regression are employed to simulate the creativity ratings of trained judges. The computerized…

  20. Break-even Analysis: Tool for Budget Planning

    ERIC Educational Resources Information Center

    Lohmann, Roger A.

    1976-01-01

    Multiple funding creates special management problems for the administrator of a human service agency. This article presents a useful analytic technique adapted from business practice that can help the administrator draw up and balance a unified budget. Such a budget also affords a reliable overview of the agency's financial status. (Author)

  1. Integrated Safety Risk Reduction Approach to Enhancing Human-Rated Spaceflight Safety

    NASA Astrophysics Data System (ADS)

    Mikula, J. F. Kip

    2005-12-01

    This paper explores and defines the currently accepted concept and philosophy of safety improvement based on reliability enhancement (called here Reliability Enhancement Based Safety Theory [REBST]). In this theory a reliability calculation is used as a measure of the safety achieved on the program. This calculation may be based on a math model or a Fault Tree Analysis (FTA) of the system, or on an Event Tree Analysis (ETA) of the system's operational mission sequence. In each case, the numbers used in this calculation are hardware failure rates gleaned from past similar programs. As part of this paper, a fictional but representative case study is provided that helps to illustrate the problems and inaccuracies of this approach to safety determination. Then a safety determination and enhancement approach based on hazard analysis, worst-case analysis, and safety risk determination (called here Worst Case Based Safety Theory [WCBST]) is defined and detailed using the same example case study. In the end it is concluded that an approach combining the two theories works best to reduce safety risk.
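    The REBST-style reliability number described here is typically rolled up from component failure probabilities through the fault tree's series/parallel structure. A minimal sketch, with invented component reliabilities rather than the case study's numbers:

```python
def series(*r):
    """All components must work: reliabilities multiply."""
    p = 1.0
    for x in r:
        p *= x
    return p

def parallel(*r):
    """At least one redundant component must work: failure
    probabilities multiply."""
    q = 1.0
    for x in r:
        q *= 1.0 - x
    return 1.0 - q

# Hypothetical subsystem reliabilities for one mission phase.
engine, guidance, thruster = 0.995, 0.990, 0.98
system_r = series(engine, guidance, parallel(thruster, thruster))
print(f"system reliability: {system_r:.5f}")
```

    The paper's point is precisely that a single number like this, built from historical hardware failure rates, can be a misleading stand-in for safety, which is why it pairs REBST with the hazard-driven WCBST view.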

  2. Probabilistic simulation of the human factor in structural reliability

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Chamis, Christos C.

    1991-01-01

    Structural failures have often been attributed to human factors in engineering design, analysis, maintenance, and fabrication processes. Every facet of the engineering process is heavily governed by human factors and the degree of uncertainty associated with them. Societal, physical, professional, psychological, and many other factors introduce uncertainties that significantly influence the reliability of human performance. Quantifying human factors and their associated uncertainties in structural reliability requires: (1) identification of the fundamental factors that influence human performance, and (2) models to describe the interaction of these factors. An approach is being developed to quantify the uncertainties associated with human performance. This approach consists of a multifactor model in conjunction with direct Monte Carlo simulation.
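    The multifactor-plus-Monte-Carlo approach can be sketched as follows; the factor names, distributions, and the nominal error probability are illustrative assumptions, not the model in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Hypothetical multiplicative performance-shaping factors, mean 1.0.
fatigue = rng.normal(1.0, 0.10, N)   # values > 1 degrade performance
stress = rng.normal(1.0, 0.15, N)    # values > 1 degrade performance
training = rng.normal(1.0, 0.05, N)  # values > 1 improve performance

base_hep = 1e-3  # assumed nominal human error probability
hep = np.clip(base_hep * fatigue * stress / training, 0.0, 1.0)

print(f"mean HEP = {hep.mean():.2e}, "
      f"95th percentile = {np.percentile(hep, 95):.2e}")
```

    The simulated spread, rather than a single point estimate, is what such a model feeds into the structural reliability calculation.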

  3. Reliable LC3 and p62 autophagy marker detection in formalin fixed paraffin embedded human tissue by immunohistochemistry.

    PubMed

    Schläfli, A M; Berezowska, S; Adams, O; Langer, R; Tschan, M P

    2015-05-05

    Autophagy assures cellular homeostasis, and gains increasing importance in cancer, where it impacts on carcinogenesis, propagation of the malignant phenotype and development of resistance. To date, its tissue-based analysis by immunohistochemistry remains poorly standardized. Here we show the feasibility of specifically and reliably assessing the autophagy markers LC3B and p62 (SQSTM1) in formalin fixed and paraffin embedded human tissue by immunohistochemistry. Preceding functional experiments consisted of depleting LC3B and p62 in H1299 lung cancer cells with subsequent induction of autophagy. Western blot and immunofluorescence validated antibody specificity, knockdown efficiency and autophagy induction prior to fixation in formalin and embedding in paraffin. LC3B and p62 antibodies were validated on formalin fixed and paraffin embedded cell pellets of treated and control cells and finally applied on a tissue microarray with 80 human malignant and non-neoplastic lung and stomach formalin fixed and paraffin embedded tissue samples. Dot-like staining of various degrees was observed in cell pellets and in 18/40 (LC3B) and 22/40 (p62) tumors, respectively. Seventeen tumors were double positive for LC3B and p62. p62 displayed additional significant cytoplasmic and nuclear staining of unknown significance. Interobserver agreement for grading of staining intensities and patterns was substantial to excellent (kappa values 0.60-0.83). In summary, we present a specific and reliable IHC staining of LC3B and p62 on formalin fixed and paraffin embedded human tissue. Our protocol is designed to aid reliable investigation of dysregulated autophagy in solid tumors and may be used on large tissue collectives.

  4. 10 CFR 712.12 - HRP implementation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability...) Report any observed or reported behavior or condition of another HRP-certified individual that could indicate a reliability concern, including those behaviors and conditions listed in § 712.13(c), to a...

  5. Wavelet analysis of epileptic spikes

    NASA Astrophysics Data System (ADS)

    Latka, Miroslaw; Was, Ziemowit; Kozik, Andrzej; West, Bruce J.

    2003-05-01

    Interictal spikes and sharp waves in human EEG are characteristic signatures of epilepsy. These potentials originate as a result of synchronous pathological discharge of many neurons. The reliable detection of such potentials has been a long-standing problem in EEG analysis, especially since long-term monitoring became common in the investigation of epileptic patients. The traditional definition of a spike is based on its amplitude, duration, sharpness, and emergence from its background. However, spike detection systems built solely around this definition are not reliable due to the presence of numerous transients and artifacts. We use the wavelet transform to analyze the properties of EEG manifestations of epilepsy. We demonstrate that the behavior of the wavelet transform of epileptic spikes across scales can constitute the foundation of a relatively simple yet effective detection algorithm.
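    The across-scale idea can be sketched with a numpy-only continuous wavelet transform using the Ricker (Mexican hat) wavelet: a genuine spike stays prominent across neighboring scales, while many transients do not. The synthetic signal, widths, and persistence score below are illustrative, not the authors' algorithm:

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican hat) wavelet of width a, unit L2 norm."""
    t = np.arange(points) - (points - 1) / 2.0
    A = 2.0 / (np.sqrt(3.0 * a) * np.pi ** 0.25)
    return A * (1 - (t / a) ** 2) * np.exp(-(t / a) ** 2 / 2)

def cwt(signal, widths):
    """Continuous wavelet transform: one convolution per scale."""
    return np.array([np.convolve(signal, ricker(10 * w + 1, w), mode="same")
                     for w in widths])

# Synthetic EEG-like trace: unit-variance background plus one sharp spike.
rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 1000)
x[500:505] += np.array([2.0, 6.0, 9.0, 6.0, 2.0])

coeffs = cwt(x, widths=[2, 4, 8])
score = np.abs(coeffs).min(axis=0)  # persistence across scales
print("candidate spike index:", int(score.argmax()))
```

    Taking the minimum of the coefficient magnitudes across scales rewards events that are strong at every scale, which is one simple way to turn the paper's across-scale observation into a detector.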

  6. Reliability Evaluation and Improvement Approach of Chemical Production Man - Machine - Environment System

    NASA Astrophysics Data System (ADS)

    Miao, Yongchun; Kang, Rongxue; Chen, Xuefeng

    2017-12-01

    In recent years, with the gradual extension of reliability research, the study of production system reliability has become a hot topic in various industries. A man-machine-environment system is a complex system composed of human factors, machinery equipment, and environment. The reliability of each individual factor must be analyzed in order to gradually transition to the study of three-factor reliability. Meanwhile, the dynamic relationships among man, machine, and environment should be considered to establish an effective fuzzy evaluation mechanism that truly and effectively analyzes the reliability of such systems. In this paper, based on systems engineering, fuzzy theory, reliability theory, and the theories of human error, environmental impact, and machinery equipment failure, the reliabilities of the human factor, machinery equipment, and environment of a chemical production system were studied by the method of fuzzy evaluation. Finally, the reliability of the man-machine-environment system was calculated to obtain the weighted result, which indicated that the reliability value of this chemical production system was 86.29. From the given evaluation domain it can be seen that the reliability of the integrated man-machine-environment system is in good status, and effective measures for further improvement were proposed according to the fuzzy calculation results.
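    The final aggregation step reduces to weighting the three factor scores. The scores and weights below are assumptions for illustration; the abstract reports only the aggregate value 86.29:

```python
import numpy as np

# Assumed factor scores (0-100) and weights; weights must sum to 1.
scores = np.array([88.0, 90.0, 78.0])   # human, machine, environment
weights = np.array([0.40, 0.35, 0.25])  # assumed relative importance

system_reliability = float(weights @ scores)
print(f"man-machine-environment reliability: {system_reliability:.2f}")
```

    In the full fuzzy method, each score would itself come from membership-weighted evaluations against the evaluation domain rather than being supplied directly.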

  7. Human error analysis of commercial aviation accidents: application of the Human Factors Analysis and Classification system (HFACS).

    PubMed

    Wiegmann, D A; Shappell, S A

    2001-11-01

    The Human Factors Analysis and Classification System (HFACS) is a general human error framework originally developed and tested within the U.S. military as a tool for investigating and analyzing the human causes of aviation accidents. Based on Reason's (1990) model of latent and active failures, HFACS addresses human error at all levels of the system, including the condition of aircrew and organizational factors. The purpose of the present study was to assess the utility of the HFACS framework as an error analysis and classification tool outside the military. The HFACS framework was used to analyze human error data associated with aircrew-related commercial aviation accidents that occurred between January 1990 and December 1996 using database records maintained by the NTSB and the FAA. Investigators were able to reliably accommodate all the human causal factors associated with the commercial aviation accidents examined in this study using the HFACS system. In addition, the classification of data using HFACS highlighted several critical safety issues in need of intervention research. These results demonstrate that the HFACS framework can be a viable tool for use within the civil aviation arena. However, additional research is needed to examine its applicability to areas outside the flight deck, such as aircraft maintenance and air traffic control domains.

  8. Muscle synergies during bench press are reliable across days.

    PubMed

    Kristiansen, Mathias; Samani, Afshin; Madeleine, Pascal; Hansen, Ernst Albin

    2016-10-01

    Muscle synergies have been investigated during different types of human movement using nonnegative matrix factorization. However, no reports are available on the reliability of the method. To evaluate between-day reliability, 21 subjects performed bench press in two test sessions separated by approximately 7 days. The movement consisted of 3 sets of 8 repetitions at 60% of the three-repetition maximum in bench press. Muscle synergies were extracted from electromyography data of 13 muscles, using nonnegative matrix factorization. To evaluate between-day reliability, we performed a cross-correlation analysis and a cross-validation analysis, in which the synergy components extracted in the first test session were recomputed using the fixed synergy components from the second test session. Two muscle synergies accounted for >90% of the total variance, and reflected the concentric and eccentric phase, respectively. The cross-correlation values were strong to very strong (r-values between 0.58 and 0.89), while the cross-validation values ranged from substantial to almost perfect (ICC(3,1) values between 0.70 and 0.95). The present findings revealed that the same general structure of the muscle synergies was present across days; the extraction of muscle synergies is thus deemed reliable. Copyright © 2016 Elsevier Ltd. All rights reserved.
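    The factorization step can be sketched without EMG hardware: build a nonnegative matrix from two known "synergies", then recover a rank-2 approximation with Lee-Seung multiplicative updates (a common NMF algorithm; the paper does not specify its solver, so this is an assumption) and check the variance accounted for:

```python
import numpy as np

def nmf(V, k, iters=500, seed=0):
    """Multiplicative-update NMF: factor V ~= W @ H with W, H >= 0."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], k)) + 1e-3
    H = rng.random((k, V.shape[1])) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# Synthetic EMG-envelope matrix: 13 muscles x 200 samples, rank 2.
rng = np.random.default_rng(1)
emg = rng.random((13, 2)) @ rng.random((2, 200))

W, H = nmf(emg, k=2)  # W: muscle weightings, H: activation over time
vaf = 1 - np.linalg.norm(emg - W @ H) ** 2 / np.linalg.norm(emg) ** 2
print(f"variance accounted for: {vaf:.3f}")
```

    The ">90% of total variance" criterion in the abstract corresponds to choosing the smallest k whose VAF clears that threshold.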

  9. The quality of evidence of psychometric properties of three-dimensional spinal posture-measuring instruments

    PubMed Central

    2011-01-01

    Background Psychometric properties include validity, reliability and sensitivity to change. Establishing the psychometric properties of an instrument which measures three-dimensional human posture is essential prior to applying it in clinical practice or research. Methods This paper reports the findings of a systematic literature review which aimed to 1) identify non-invasive three-dimensional (3D) human posture-measuring instruments; and 2) assess the quality of reporting of the methodological procedures undertaken to establish their psychometric properties, using a purpose-built critical appraisal tool. Results Seventeen instruments were identified, of which nine were supported by research into psychometric properties. Eleven and six papers, respectively, reported on validity and reliability testing. Rater qualification and reference standards were generally poorly addressed, and there was variable quality reporting of rater blinding and statistical analysis. Conclusions There is a lack of current research to establish the psychometric properties of non-invasive 3D human posture-measuring instruments. PMID:21569486

  10. Technology-assisted risk of bias assessment in systematic reviews: a prospective cross-sectional evaluation of the RobotReviewer machine learning tool.

    PubMed

    Gates, Allison; Vandermeer, Ben; Hartling, Lisa

    2018-04-01

    To evaluate the reliability of RobotReviewer's risk of bias judgments. In this prospective cross-sectional evaluation, we used RobotReviewer to assess risk of bias among 1,180 trials. We computed reliability with human reviewers using Cohen's kappa coefficient and calculated sensitivity and specificity. We investigated differences in reliability by risk of bias domain, topic, and outcome type using the chi-square test in meta-analysis. Reliability (95% CI) was moderate for random sequence generation (0.48 [0.43, 0.53]), allocation concealment (0.45 [0.40, 0.51]), and blinding of participants and personnel (0.42 [0.36, 0.47]); fair for overall risk of bias (0.34 [0.25, 0.44]); and slight for blinding of outcome assessors (0.10 [0.06, 0.14]), incomplete outcome data (0.14 [0.08, 0.19]), and selective reporting (0.02 [-0.02, 0.05]). Reliability for blinding of participants and personnel (P < 0.001), blinding of outcome assessors (P = 0.005), selective reporting (P < 0.001), and overall risk of bias (P < 0.001) differed by topic. Sensitivity and specificity (95% CI) ranged from 0.20 (0.18, 0.23) to 0.76 (0.72, 0.80) and from 0.61 (0.56, 0.65) to 0.95 (0.93, 0.96), respectively. Risk of bias appraisal is subjective. Compared with reliability between author groups, RobotReviewer's reliability with human reviewers was similar for most domains and better for allocation concealment, blinding of participants and personnel, and overall risk of bias. Copyright © 2018 Elsevier Inc. All rights reserved.
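    Cohen's kappa, sensitivity, and specificity can all be read off a 2x2 machine-vs-human table; a minimal sketch with invented counts (not the study's data):

```python
def kappa_sens_spec(tp, fp, fn, tn):
    """Cohen's kappa plus sensitivity/specificity, treating the human
    reviewers' 'high risk' judgments as the reference standard."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n  # observed agreement
    # Chance agreement: product of marginals for each category.
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (po - pe) / (1 - pe)  # chance-corrected agreement
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return kappa, sensitivity, specificity

k, se, sp = kappa_sens_spec(tp=40, fp=10, fn=20, tn=30)
print(f"kappa = {k:.2f}, sensitivity = {se:.2f}, specificity = {sp:.2f}")
```

    The chance-correction term is why kappa can be modest even when raw agreement looks high, which matters when interpreting the slight-to-moderate values reported above.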

  11. Launch and Assembly Reliability Analysis for Mars Human Space Exploration Missions

    NASA Technical Reports Server (NTRS)

    Cates, Grant R.; Stromgren, Chel; Cirillo, William M.; Goodliff, Kandyce E.

    2013-01-01

    NASA's long-range goal is focused upon human exploration of Mars. Missions to Mars will require campaigns of multiple launches to assemble Mars Transfer Vehicles in Earth orbit. Launch campaigns are subject to delays, launch vehicles can fail to place their payloads into the required orbit, and spacecraft may fail during the assembly process or while loitering prior to the Trans-Mars Injection (TMI) burn. Additionally, missions to Mars have constrained departure windows lasting approximately sixty days that repeat approximately every two years. Ensuring high reliability of launching and assembling all required elements in time to support the TMI window will be a key enabler of mission success. This paper describes an integrated methodology for analyzing and improving the reliability of the launch and assembly campaign phase. A discrete event simulation incorporates several pertinent risk factors including, but not limited to: manufacturing completion; transportation; ground processing; launch countdown; ascent; rendezvous and docking; assembly; and orbital operations leading up to TMI. The model accommodates varying numbers of launches, including the potential for spare launches. Having a spare launch capability provides significant improvement to mission success.
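    The launch-and-assembly question lends itself to a compact Monte Carlo sketch: each element needs a successful launch and a successful docking/assembly, and spare launches cover failures. The per-event probabilities and campaign size below are assumptions, not the paper's inputs:

```python
import random

def campaign_succeeds(rng, n_required=4, n_spares=1,
                      p_launch=0.97, p_assemble=0.99):
    """One trial: deliver n_required elements within the allotted attempts."""
    delivered = 0
    for _ in range(n_required + n_spares):
        if rng.random() < p_launch and rng.random() < p_assemble:
            delivered += 1
        if delivered == n_required:
            return True
    return False

rng = random.Random(7)
N = 50_000
p_with_spare = sum(campaign_succeeds(rng) for _ in range(N)) / N
p_no_spare = sum(campaign_succeeds(rng, n_spares=0) for _ in range(N)) / N
print(f"success with 1 spare: {p_with_spare:.3f}, without: {p_no_spare:.3f}")
```

    Analytically, with these assumed numbers, a campaign without spares succeeds with probability (0.97 × 0.99)^4 ≈ 0.85, and a single spare launch lifts that to roughly 0.985, which is the kind of "significant improvement" the abstract describes.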

  12. Blood Groups in the Species Survival Plan®, European Endangered Species Program, and Managed in situ Populations of Bonobo (Pan paniscus), Common Chimpanzee (Pan troglodytes), Gorilla (Gorilla ssp.), and Orangutan (Pongo pygmaeus ssp.)

    PubMed Central

    Gamble, Kathryn C.; Moyse, Jill A.; Lovstad, Jessica N.; Ober, Carole B.; Thompson, Emma E.

    2014-01-01

    The blood groups of humans and great apes have long been considered similar, although they are not interchangeable between species. In this study, human monoclonal antibody technology was used to assign human ABO blood groups to whole blood samples from great apes housed in North American and European zoos and in situ managed populations, as a practical means to assist blood transfusion situations for these species. From a subset of each of the species (bonobo, common chimpanzee, gorilla, and orangutan), DNA sequence analysis was performed to determine blood group genotype. Bonobo and common chimpanzee populations were predominantly group A, which concurred with the historic literature and was confirmed by genotyping. In agreement with the historic literature, a smaller number of the common chimpanzees sampled were group O, although this O blood group was more often present in wild-origin animals as compared to zoo-born animals. Gorilla blood groups were inconclusive by monoclonal antibody techniques and, by genetic studies, were inconsistent with any known human blood group. Orangutans, as a genus and specifically the Bornean species, were identified with all human blood groups, including O, which had not been reported previously. Following this study, it was concluded that the blood groups of bonobos, common chimpanzees, and some orangutans can be reliably assessed by human monoclonal antibody technology. However, this technique was not reliable for gorillas or for orangutans other than those with blood group A. Even in those species with reliable blood group detection, blood transfusion preparation must include cross-matching to minimize adverse reactions for the patient. PMID:20853409

  13. Design Development Test and Evaluation (DDT and E) Considerations for Safe and Reliable Human Rated Spacecraft Systems

    NASA Technical Reports Server (NTRS)

    Miller, James; Leggett, Jay; Kramer-White, Julie

    2008-01-01

    A team directed by the NASA Engineering and Safety Center (NESC) collected methodologies for how best to develop safe and reliable human rated systems and how to identify the drivers that provide the basis for assessing safety and reliability. The team also identified techniques, methodologies, and best practices to assure that NASA can develop safe and reliable human rated systems. The results are drawn from a wide variety of resources, from experts involved with the space program since its inception to the best-practices espoused in contemporary engineering doctrine. This report focuses on safety and reliability considerations and does not duplicate or update any existing references. Neither does it intend to replace existing standards and policy.

  14. Theoretical relationship between vibration transmissibility and driving-point response functions of the human body.

    PubMed

    Dong, Ren G; Welcome, Daniel E; McDowell, Thomas W; Wu, John Z

    2013-11-25

    The relationship between the vibration transmissibility and driving-point response functions (DPRFs) of the human body is important for understanding vibration exposures of the system and for developing valid models. This study identified their theoretical relationship and demonstrated that the sum of the DPRFs can be expressed as a linear combination of the transmissibility functions of the individual mass elements distributed throughout the system. The relationship is verified using several human vibration models. This study also clarified the requirements for reliably quantifying transmissibility values used as references for calibrating the system models. As an example application, this study used the developed theory to perform a preliminary analysis of the method for calibrating models using both vibration transmissibility and DPRFs. The results of the analysis show that the combined method can theoretically result in a unique and valid solution of the model parameters, at least for linear systems. However, the validation of the method itself does not guarantee the validation of the calibrated model, because the validation of the calibration also depends on the model structure and the reliability and appropriate representation of the reference functions. The basic theory developed in this study is also applicable to the vibration analyses of other structures.

  15. CREATING A DECISION CONTEXT FOR COMPARATIVE ANALYSIS AND CONSISTENT APPLICATION OF INHALATION DOSIMETRY MODELS IN CHILDREN'S RISK ASSESSMENT

    EPA Science Inventory

    Estimation of risks to children from exposure to airborne pollutants is often complicated by the lack of reliable epidemiological data specific to this age group. As a result, risks are generally estimated from extrapolations based on data obtained in other human age groups (e.g....

  16. Analysis of Readability and Interest of Marketing Education Textbooks: Implications for Special Needs Learners.

    ERIC Educational Resources Information Center

    Jones, Karen H.; And Others

    1993-01-01

    The readability, reading ease, interest level, and writing style of 20 current textbooks in secondary marketing education were evaluated. Readability formulas consistently identified lower reading levels for special needs education, human interest scores were not very reliable information sources, and writing style was also a weak variable. (JOW)

  17. Direct dating of human fossils.

    PubMed

    Grün, Rainer

    2006-01-01

    The methods that can be used for the direct dating of human remains comprise radiocarbon, U-series, electron spin resonance (ESR), and amino acid racemization (AAR). This review gives an introduction to these methods in the context of dating human bones and teeth. Recent advances in ultrafiltration techniques have expanded the dating range of radiocarbon. It now seems feasible to reliably date bones up to 55,000 years old. New developments in laser ablation mass spectrometry permit the in situ analysis of U-series isotopes, thus providing a rapid and virtually non-destructive dating method back to about 300,000 years. This is of particular importance when used in conjunction with non-destructive ESR analysis. New approaches in AAR analysis may lead to a renaissance of this method. The potential and present limitations of these direct dating techniques are discussed for sites relevant to the reconstruction of modern human evolution, including Florisbad, Border Cave, Tabun, Skhul, Qafzeh, Vindija, Banyoles, and Lake Mungo. (c) 2006 Wiley-Liss, Inc.

  18. In silico analysis of protein toxin and bacteriocins from Lactobacillus paracasei SD1 genome and available online databases

    PubMed Central

    Surachat, Komwit; Sangket, Unitsa; Deachamag, Panchalika; Chotigeat, Wilaiwan

    2017-01-01

    Lactobacillus paracasei SD1 is a potential probiotic strain due to its ability to survive several conditions in human dental cavities. To ascertain its safety for human use, we therefore performed a comprehensive bioinformatics analysis and characterization of the bacterial protein toxins produced by this strain. We report the complete genome of Lactobacillus paracasei SD1 and its comparison to other Lactobacillus genomes. Additionally, we identify and analyze its protein toxins and antimicrobial proteins using reliable online database resources and establish its phylogenetic relationship with other bacterial genomes. Our investigation suggests that this strain is safe for human use and contains several bacteriocins that confer health benefits to the host. An in silico analysis of protein-protein interactions between the target bacteriocins and the microbial proteins gtfB and luxS of Streptococcus mutans was performed and is discussed here. PMID:28837656

  19. Are We Hoping for a Bounce? A Study on Resilience and Human Relations in a High Reliability Organization

    DTIC Science & Technology

    2016-03-01

    Thesis by Robert D. Johns, March 2016. This study analyzes the various resilience factors associated with a military high reliability organization (HRO).

  20. Specimen preparation for NanoSIMS analysis of biological materials

    NASA Astrophysics Data System (ADS)

    Grovenor, C. R. M.; Smart, K. E.; Kilburn, M. R.; Shore, B.; Dilworth, J. R.; Martin, B.; Hawes, C.; Rickaby, R. E. M.

    2006-07-01

    In order to achieve reliable and reproducible analysis of biological materials by SIMS, it is critical both that the chosen specimen preparation method does not modify substantially the in vivo chemistry that is the focus of the study and that any chemical information obtained can be calibrated accurately by selection of appropriate standards. In Oxford, we have been working with our new Cameca NanoSIMS50 on two very distinct classes of biological materials; the first where the sample preparation problems are relatively undemanding - human hair - but calibration for trace metal analysis is a critical issue and, the second, marine coccoliths and hyperaccumulator plants where reliable specimen preparation by rapid freezing and controlled drying to preserve the distribution of diffusible species is the first and most demanding requirement, but worthwhile experiments on tracking key elements can still be undertaken even when it is clear that some redistribution of the most diffusible ions has occurred.

  1. Reliability of human-supervised formant-trajectory measurement for forensic voice comparison.

    PubMed

    Zhang, Cuiling; Morrison, Geoffrey Stewart; Ochoa, Felipe; Enzinger, Ewald

    2013-01-01

    Acoustic-phonetic approaches to forensic voice comparison often include human-supervised measurement of vowel formants, but the reliability of such measurements is a matter of concern. This study assesses the within- and between-supervisor variability of three sets of formant-trajectory measurements made by each of four human supervisors. It also assesses the validity and reliability of forensic-voice-comparison systems based on these measurements. Each supervisor's formant-trajectory system was fused with a baseline mel-frequency cepstral-coefficient system, and performance was assessed relative to the baseline system. Substantial improvements in validity were found for all supervisors' systems, but some supervisors' systems were more reliable than others.

  2. Neural Signatures of Trust During Human-Automation Interactions

    DTIC Science & Technology

    2016-04-01

    This study examined human-automation trust (HAT) and human-human trust (HHT) using a behavioral X-ray luggage-screening task combined with functional magnetic resonance imaging (fMRI), manipulating the reliability of advice from a human or automated luggage inspector framed as experts. Keywords: human-human trust, human-automation trust, brain, functional magnetic resonance imaging.

  3. Comparison of methods for profiling O-glycosylation: Human Proteome Organisation Human Disease Glycomics/Proteome Initiative multi-institutional study of IgA1.

    PubMed

    Wada, Yoshinao; Dell, Anne; Haslam, Stuart M; Tissot, Bérangère; Canis, Kévin; Azadi, Parastoo; Bäckström, Malin; Costello, Catherine E; Hansson, Gunnar C; Hiki, Yoshiyuki; Ishihara, Mayumi; Ito, Hiromi; Kakehi, Kazuaki; Karlsson, Niclas; Hayes, Catherine E; Kato, Koichi; Kawasaki, Nana; Khoo, Kay-Hooi; Kobayashi, Kunihiko; Kolarich, Daniel; Kondo, Akihiro; Lebrilla, Carlito; Nakano, Miyako; Narimatsu, Hisashi; Novak, Jan; Novotny, Milos V; Ohno, Erina; Packer, Nicolle H; Palaima, Elizabeth; Renfrow, Matthew B; Tajiri, Michiko; Thomsson, Kristina A; Yagi, Hirokazu; Yu, Shin-Yi; Taniguchi, Naoyuki

    2010-04-01

    The Human Proteome Organisation Human Disease Glycomics/Proteome Initiative recently coordinated a multi-institutional study that evaluated methodologies that are widely used for defining the N-glycan content in glycoproteins. The study convincingly endorsed mass spectrometry as the technique of choice for glycomic profiling in the discovery phase of diagnostic research. The present study reports the extension of the Human Disease Glycomics/Proteome Initiative's activities to an assessment of the methodologies currently used for O-glycan analysis. Three samples of IgA1 isolated from the serum of patients with multiple myeloma were distributed to 15 laboratories worldwide for O-glycomics analysis. A variety of mass spectrometric and chromatographic procedures representative of current methodologies were used. Similar to the previous N-glycan study, the results convincingly confirmed the pre-eminent performance of MS for O-glycan profiling. Two general strategies were found to give the most reliable data, namely direct MS analysis of mixtures of permethylated reduced glycans in the positive ion mode and analysis of native reduced glycans in the negative ion mode using LC-MS approaches. In addition, mass spectrometric methodologies to analyze O-glycopeptides were also successful.

  4. First trimester size charts of embryonic brain structures.

    PubMed

    Gijtenbeek, M; Bogers, H; Groenenberg, I A L; Exalto, N; Willemsen, S P; Steegers, E A P; Eilers, P H C; Steegers-Theunissen, R P M

    2014-02-01

    Can reliable size charts of human embryonic brain structures be created from three-dimensional ultrasound (3D-US) visualizations? Reliable size charts of human embryonic brain structures can be created from high-quality images. Previous studies on the visualization of both the cavities and the walls of the brain compartments were performed using 2D-US, 3D-US or invasive intrauterine sonography. However, the walls of the diencephalon, mesencephalon and telencephalon have not been measured non-invasively before. Last-decade improvements in transvaginal ultrasound techniques allow a better visualization and offer the tools to measure these human embryonic brain structures with precision. This study is embedded in a prospective periconceptional cohort study. A total of 141 pregnancies were included before the sixth week of gestation and were monitored until delivery to assess complications and adverse outcomes. For the analysis of embryonic growth, 596 3D-US scans encompassing the entire embryo were obtained from 106 singleton non-malformed live birth pregnancies between 7(+0) and 12(+6) weeks' gestational age (GA). Using 4D View (3D software) the measured embryonic brain structures comprised thickness of the diencephalon, mesencephalon and telencephalon, and the total diameter of the diencephalon and mesencephalon. Of 596 3D scans, 161 (27%) high-quality scans of 79 pregnancies were eligible for analysis. The reliability of all embryonic brain structure measurements, based on the intra-class correlation coefficients (ICCs) (all above 0.98), was excellent. Bland-Altman plots showed moderate agreement for measurements of the telencephalon, but for all other measurements the agreement was good. Size charts were constructed according to crown-rump length (CRL). The percentage of high-quality scans suitable for analysis of these brain structures was low (27%).  
The size charts of human embryonic brain structures can be used to study normal and abnormal brain development in the future. Also, the effects of periconceptional maternal exposures, such as folic acid supplement use and smoking, on human embryonic brain development can be a topic of future research. This study was supported by the Department of Obstetrics and Gynaecology of the Erasmus University Medical Center. M.G. was supported by an additional grant from the Sophia Foundation for Medical Research (SSWO grant number 644). No competing interests are declared.

  5. Using a dry electrode EEG device during balance tasks in healthy young-adult males: Test-retest reliability analysis.

    PubMed

    Collado-Mateo, Daniel; Adsuar, Jose C; Olivares, Pedro R; Cano-Plasencia, Ricardo; Gusi, Narcis

    2015-01-01

    The analysis of brain activity during balance is an important topic in different fields of science. Given that all measurements involve error caused by different agents, such as the instrument, the researcher, or natural human variability, a test-retest reliability evaluation of the electroencephalographic assessment is a necessary starting point. However, there is a lack of information about the reliability of electroencephalographic measurements, especially with a new wireless device with dry electrodes. The current study aims to analyze the reliability of electroencephalographic measurements from a wireless device using dry electrodes during two different balance tests. Seventeen healthy male volunteers performed two different static balance tasks on a Biodex Balance Platform: (a) with two feet on the platform and (b) with one foot on the platform. Electroencephalographic data were recorded using Enobio (Neuroelectrics). The mean power spectrum of the alpha band of the central and frontal channels was calculated. Relative and absolute indices of reliability were also calculated. In general terms, the intraclass correlation coefficient (ICC) values of all the assessed channels can be classified as excellent (>0.90). The percentage standard error of measurement oscillated from 0.54% to 1.02%, and the percentage smallest real difference ranged from 1.50% to 2.82%. Electroencephalographic assessment through an Enobio device during balance tasks has excellent reliability. However, its utility was not demonstrated because responsiveness was not assessed.
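
    The absolute reliability indices reported above are related in a standard way: the standard error of measurement (SEM) follows from the between-subject SD and the ICC, and the smallest real difference (SRD) follows from the SEM. A minimal sketch of that relationship (the input values below are illustrative, not data from the study):

```python
import math

def absolute_reliability(sd, icc, mean):
    """Return (SEM%, SRD%): standard error of measurement and smallest
    real difference, expressed as percentages of the mean.
    Hypothetical helper; the standard formulas are shown in comments."""
    sem = sd * math.sqrt(1.0 - icc)       # SEM = SD * sqrt(1 - ICC)
    srd = 1.96 * math.sqrt(2.0) * sem     # SRD = 1.96 * sqrt(2) * SEM
    return 100.0 * sem / mean, 100.0 * srd / mean

# Illustrative values: SD = 0.05, ICC = 0.95, mean = 1.0.
sem_pct, srd_pct = absolute_reliability(sd=0.05, icc=0.95, mean=1.0)
```

Note that SRD% is always about 2.77 times SEM%, which matches the ranges reported in the abstract (0.54-1.02% vs. 1.50-2.82%).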

  6. Development and preliminary evidence for the validity of an instrument assessing implementation of human-factors principles in medication-related decision-support systems—I-MeDeSA

    PubMed Central

    Zachariah, Marianne; Seidling, Hanna M; Neri, Pamela M; Cresswell, Kathrin M; Duke, Jon; Bloomrosen, Meryl; Volk, Lynn A; Bates, David W

    2011-01-01

    Background Medication-related decision support can reduce the frequency of preventable adverse drug events. However, the design of current medication alerts often results in alert fatigue and high over-ride rates, thus reducing any potential benefits. Methods The authors previously reviewed human-factors principles for relevance to medication-related decision support alerts. In this study, instrument items were developed for assessing the appropriate implementation of these human-factors principles in drug–drug interaction (DDI) alerts. User feedback regarding nine electronic medical records was considered during the development process. Content validity, construct validity through correlation analysis, and inter-rater reliability were assessed. Results The final version of the instrument included 26 items associated with nine human-factors principles. Content validation on three systems resulted in the addition of one principle (Corrective Actions) to the instrument and the elimination of eight items. Additionally, the wording of eight items was altered. Correlation analysis suggests a direct relationship between system age and performance of DDI alerts (p=0.0016). Inter-rater reliability indicated substantial agreement between raters (κ=0.764). Conclusion The authors developed and gathered preliminary evidence for the validity of an instrument that measures the appropriate use of human-factors principles in the design and display of DDI alerts. Designers of DDI alerts may use the instrument to improve usability and increase user acceptance of medication alerts, and organizations selecting an electronic medical record may find the instrument helpful in meeting their clinicians' usability needs. PMID:21946241

  7. Human- and computer-accessible 2D correlation data for a more reliable structure determination of organic compounds. Future roles of researchers, software developers, spectrometer managers, journal editors, reviewers, publisher and database managers toward artificial-intelligence analysis of NMR spectra.

    PubMed

    Jeannerat, Damien

    2017-01-01

    The introduction of a universal data format to report the correlation data of 2D NMR spectra such as COSY, HSQC and HMBC spectra will have a large impact on the reliability of structure determination of small organic molecules. These lists of assigned cross peaks will bridge signals found in NMR 1D and 2D spectra and the assigned chemical structure. The record could be very compact, human and computer readable so that it can be included in the supplementary material of publications and easily transferred into databases of scientific literature and chemical compounds. The records will allow authors, reviewers and future users to test the consistency and, in favorable situations, the uniqueness of the assignment of the correlation data to the associated chemical structures. Ideally, the data format of the correlation data should include direct links to the NMR spectra to make it possible to validate their reliability and allow direct comparison of spectra. In order to take the full benefits of their potential, the correlation data and the NMR spectra should therefore follow any manuscript in the review process and be stored in open-access databases after publication. Keeping all NMR spectra, correlation data and assigned structures together at all times will allow the future development of validation tools increasing the reliability of past and future NMR data. This will facilitate the development of artificial intelligence analysis of NMR spectra by providing a source of data that can be used efficiently because they have been validated or can be validated by future users. Copyright © 2016 John Wiley & Sons, Ltd.

  8. Human Error Analysis in a Permit to Work System: A Case Study in a Chemical Plant

    PubMed Central

    Jahangiri, Mehdi; Hoboubi, Naser; Rostamabadi, Akbar; Keshavarzi, Sareh; Hosseini, Ali Akbar

    2015-01-01

    Background A permit to work (PTW) is a formal written system to control certain types of work which are identified as potentially hazardous. However, human error in PTW processes can lead to an accident. Methods This cross-sectional, descriptive study was conducted to estimate the probability of human errors in PTW processes in a chemical plant in Iran. In the first stage, through interviewing the personnel and studying the procedure in the plant, the PTW process was analyzed using the hierarchical task analysis technique. In doing so, PTW was considered as a goal and detailed tasks to achieve the goal were analyzed. In the next step, the standardized plant analysis risk-human (SPAR-H) reliability analysis method was applied for estimation of human error probability. Results The mean probability of human error in the PTW system was estimated to be 0.11. The highest probability of human error in the PTW process was related to flammable gas testing (50.7%). Conclusion The SPAR-H method applied in this study could analyze and quantify the potential human errors and extract the required measures for reducing the error probabilities in the PTW system. Some suggestions for reducing the likelihood of errors, especially by modifying the performance shaping factors and dependencies among tasks, are provided. PMID:27014485
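
    SPAR-H quantification, as used in this study, multiplies a nominal human error probability (HEP) by performance shaping factor (PSF) multipliers, with an adjustment factor that keeps the result a valid probability when the composite multiplier is large. A minimal sketch (the multiplier values below are illustrative; the actual SPAR-H worksheets define the nominal HEPs and PSF levels):

```python
def spar_h_hep(nominal_hep, psf_multipliers):
    """Scale a nominal HEP by the composite PSF multiplier, applying
    the SPAR-H adjustment HEP = NHEP*PSF / (NHEP*(PSF - 1) + 1),
    which bounds the result below 1.0."""
    composite = 1.0
    for m in psf_multipliers:
        composite *= m                     # PSFs combine multiplicatively
    return (nominal_hep * composite
            / (nominal_hep * (composite - 1.0) + 1.0))

# All PSFs nominal (multiplier 1) leaves the nominal HEP unchanged;
# several degraded PSFs raise it but never past 1.0.
baseline = spar_h_hep(0.01, [1.0] * 8)
degraded = spar_h_hep(0.01, [10.0, 10.0, 10.0])
```

The adjustment matters precisely in cases like the flammable gas testing task above, where several strongly negative PSFs would otherwise push the raw product past 1.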

  9. Test-Retest Reliability of Graph Metrics in Functional Brain Networks: A Resting-State fNIRS Study

    PubMed Central

    Niu, Haijing; Li, Zhen; Liao, Xuhong; Wang, Jinhui; Zhao, Tengda; Shu, Ni; Zhao, Xiaohu; He, Yong

    2013-01-01

    Recent research has demonstrated the feasibility of combining functional near-infrared spectroscopy (fNIRS) and graph theory approaches to explore the topological attributes of human brain networks. However, the test-retest (TRT) reliability of the application of graph metrics to these networks remains to be elucidated. Here, we used resting-state fNIRS and a graph-theoretical approach to systematically address TRT reliability as it applies to various features of human brain networks, including functional connectivity, global network metrics and regional nodal centrality metrics. Eighteen subjects participated in two resting-state fNIRS scan sessions held ∼20 min apart. Functional brain networks were constructed for each subject by computing temporal correlations on three types of hemoglobin concentration information (HbO, HbR, and HbT). This was followed by a graph-theoretical analysis, and then an intraclass correlation coefficient (ICC) was further applied to quantify the TRT reliability of each network metric. We observed that a large proportion of resting-state functional connections (∼90%) exhibited good reliability (0.6< ICC <0.74). For global and nodal measures, reliability was generally threshold-sensitive and varied among both network metrics and hemoglobin concentration signals. Specifically, the majority of global metrics exhibited fair to excellent reliability, with notably higher ICC values for the clustering coefficient (HbO: 0.76; HbR: 0.78; HbT: 0.53) and global efficiency (HbO: 0.76; HbR: 0.70; HbT: 0.78). Similarly, both nodal degree and efficiency measures also showed fair to excellent reliability across nodes (degree: 0.52∼0.84; efficiency: 0.50∼0.84); reliability was concordant across HbO, HbR and HbT and was significantly higher than that of nodal betweenness (0.28∼0.68). Together, our results suggest that most graph-theoretical network metrics derived from fNIRS are TRT reliable and can be used effectively for brain network research. 
This study also provides important guidance on the choice of network metrics of interest for future applied research in developmental and clinical neuroscience. PMID:24039763
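
    The ICC used throughout this record measures how much of the total variance lies between subjects rather than between sessions. One common variant, the one-way random-effects ICC(1,1), can be sketched as follows (the study may have used a different ICC form; `icc_one_way` is an illustrative helper):

```python
def icc_one_way(data):
    """One-way random-effects ICC(1,1) for test-retest data.
    `data` is a list of per-subject session measurements, e.g.
    [[s1_t1, s1_t2], [s2_t1, s2_t2], ...]."""
    n = len(data)                         # subjects
    k = len(data[0])                      # sessions per subject
    grand = sum(sum(row) for row in data) / (n * k)
    means = [sum(row) / k for row in data]
    # Between-subject and within-subject mean squares from one-way ANOVA.
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(data, means)
              for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Identical scores across sessions give ICC = 1; session-to-session noise relative to subject differences pulls it toward 0, which is why threshold choices shift the reliability of the derived graph metrics.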

  10. Human error and commercial aviation accidents: an analysis using the human factors analysis and classification system.

    PubMed

    Shappell, Scott; Detwiler, Cristy; Holcomb, Kali; Hackworth, Carla; Boquet, Albert; Wiegmann, Douglas A

    2007-04-01

    The aim of this study was to extend previous examinations of aviation accidents to include specific aircrew, environmental, supervisory, and organizational factors associated with two types of commercial aviation (air carrier and commuter/on-demand) accidents using the Human Factors Analysis and Classification System (HFACS). HFACS is a theoretically based tool for investigating and analyzing human error associated with accidents and incidents. Previous research has shown that HFACS can be reliably used to identify human factors trends associated with military and general aviation accidents. Using data obtained from both the National Transportation Safety Board and the Federal Aviation Administration, 6 pilot-raters classified aircrew, supervisory, organizational, and environmental causal factors associated with 1020 commercial aviation accidents that occurred over a 13-year period. The majority of accident causal factors were attributed to aircrew and the environment, with decidedly fewer associated with supervisory and organizational causes. Comparisons were made between HFACS causal categories and traditional situational variables such as visual conditions, injury severity, and regional differences. These data will provide support for the continuation, modification, and/or development of interventions aimed at commercial aviation safety. HFACS provides a tool for assessing human factors associated with accidents and incidents.

  11. Attribute Ratings and Profiles of the Job Elements of the Position Analysis Questionnaire (PAQ).

    DTIC Science & Technology

    Questionnaire (PAQ). A secondary purpose was to explore the reliability of job-related ratings as a function of the number of raters. A taxonomy of 76...human attributes was used, and ratings of the relevance of these attributes to each of the PAQ job elements were obtained. A minimum of 8 raters and

  12. Near-Earth Phase Risk Comparison of Human Mars Campaign Architectures

    NASA Technical Reports Server (NTRS)

    Manning, Ted A.; Nejad, Hamed S.; Mattenberger, Chris

    2013-01-01

    A risk analysis of the launch, orbital assembly, and Earth-departure phases of human Mars exploration campaign architectures was completed as an extension of a probabilistic risk assessment (PRA) originally carried out under the NASA Constellation Program Ares V Project. The objective of the updated analysis was to study the sensitivity of loss-of-campaign risk to such architectural factors as composition of the propellant delivery portion of the launch vehicle fleet (Ares V heavy-lift launch vehicle vs. smaller/cheaper commercial launchers) and the degree of launcher or Mars-bound spacecraft element sparing. Both a static PRA analysis and a dynamic, event-based Monte Carlo simulation were developed and used to evaluate the probability of loss of campaign under different sparing options. Results showed that with no sparing, loss-of-campaign risk is strongly driven by launcher count and on-orbit loiter duration, favoring an all-Ares V launch approach. Further, the reliability of the all-Ares V architecture showed significant improvement with the addition of a single spare launcher/payload. Among architectures utilizing a mix of Ares V and commercial launchers, those that minimized the on-orbit loiter duration of Mars-bound elements were found to exceed the reliability of the no-spare all-Ares V campaign if unlimited commercial vehicle sparing was assumed.

  13. Psychometric Properties of the Serbian Version of the Maslach Burnout Inventory-Human Services Survey: A Validation Study among Anesthesiologists from Belgrade Teaching Hospitals

    PubMed Central

    Matejić, Bojana; Milenović, Miodrag; Kisić Tepavčević, Darija; Simić, Dušica; Pekmezović, Tatjana; Worley, Jody A.

    2015-01-01

    We report findings from a validation study of the translated and culturally adapted Serbian version of Maslach Burnout Inventory-Human Services Survey (MBI-HSS), for a sample of anesthesiologists working in the tertiary healthcare. The results showed the sufficient overall reliability (Cronbach's α = 0.72) of the scores (items 1–22). The results of Bartlett's test of sphericity (χ 2 = 1983.75, df = 231, p < 0.001) and Kaiser-Meyer-Olkin measure of sampling adequacy (0.866) provided solid justification for factor analysis. In order to increase sensitivity of this questionnaire, we performed unfitted factor analysis model (eigenvalue greater than 1) which enabled us to extract the most suitable factor structure for our study instrument. The exploratory factor analysis model revealed five factors with eigenvalues greater than 1.0, explaining 62.0% of cumulative variance. Velicer's MAP test has supported five-factor model with the smallest average squared correlation of 0,184. This study indicated that Serbian version of the MBI-HSS is a reliable and valid instrument to measure burnout among a population of anesthesiologists. Results confirmed strong psychometric characteristics of the study instrument, with recommendations for interpretation of two new factors that may be unique to the Serbian version of the MBI-HSS. PMID:26090517

  15. A comparison of computer-assisted and manual wound size measurement.

    PubMed

    Thawer, Habiba A; Houghton, Pamela E; Woodbury, M Gail; Keast, David; Campbell, Karen

    2002-10-01

    Accurate and precise wound measurements are a critical component of every wound assessment. To examine the reliability and validity of a new computerized technique for measuring human and animal wounds, chronic human wounds (N = 45) and surgical animal wounds (N = 38) were assessed using manual and computerized techniques. Using intraclass correlation coefficients, intrarater and interrater reliability of surface area measurements obtained using the computerized technique were compared to those obtained using acetate tracings and planimetry. A single measurement of surface area using either technique produced excellent intrarater and interrater reliability for both human and animal wounds, but the computerized technique was more precise than the manual technique for measuring the surface area of animal wounds. For both types of wounds and measurement techniques, intrarater and interrater reliability improved when the average of three repeated measurements was obtained. The precision of each technique with human wounds and the precision of the manual technique with animal wounds also improved when three repeated measurement results were averaged. Concurrent validity between the two techniques was excellent for human wounds but poor for the smaller animal wounds, regardless of whether a single measurement or the average of three repeated surface area measurements was used. The computerized technique permits reliable and valid assessment of the surface area of both human and animal wounds.

  16. Challenges in leveraging existing human performance data for quantifying the IDHEAS HRA method

    DOE PAGES

    Liao, Huafei N.; Groth, Katrina; Stevens-Adams, Susan

    2015-07-29

    Our article documents an exploratory study for collecting and using human performance data to inform human error probability (HEP) estimates for a new human reliability analysis (HRA) method, the IntegrateD Human Event Analysis System (IDHEAS). The method was based on cognitive models and mechanisms underlying human behaviour and employs a framework of 14 crew failure modes (CFMs) to represent human failures typical for human performance in nuclear power plant (NPP) internal, at-power events [1]. A decision tree (DT) was constructed for each CFM to assess the probability of the CFM occurring in different contexts. Data needs for IDHEAS quantification are discussed. Then, the data collection framework and process is described and how the collected data were used to inform HEP estimation is illustrated with two examples. Next, five major technical challenges are identified for leveraging human performance data for IDHEAS quantification. Furthermore, these challenges reflect the data needs specific to IDHEAS. More importantly, they also represent the general issues with current human performance data and can provide insight for a path forward to support HRA data collection, use, and exchange for HRA method development, implementation, and validation.
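
    A decision tree of the kind described maps context factors to an HEP estimate by branching until a leaf probability is reached. A hedged sketch (the branch names and probabilities below are invented for illustration and are not taken from IDHEAS):

```python
# Hypothetical decision tree for one crew failure mode (CFM):
# each interior node branches on a context factor; leaves are HEPs.
TREE = {
    "time_pressure": {
        "high": {"training": {"good": 1e-2, "poor": 1e-1}},
        "low":  {"training": {"good": 1e-3, "poor": 1e-2}},
    }
}

def hep_for_context(tree, context):
    """Walk the nested branches using the context dict until a leaf
    (a float HEP) is reached."""
    node = tree
    while isinstance(node, dict):
        factor = next(iter(node))            # the next branching factor
        node = node[factor][context[factor]]
    return node

hep = hep_for_context(TREE, {"time_pressure": "high", "training": "poor"})
```

Quantifying such a tree from data is exactly where the article's challenges arise: each leaf needs enough observed human performance in the matching context to support its probability.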

  17. Comparison of Urban Human Movements Inferring from Multi-Source Spatial-Temporal Data

    NASA Astrophysics Data System (ADS)

    Cao, Rui; Tu, Wei; Cao, Jinzhou; Li, Qingquan

    2016-06-01

    The quantification of human movements is very hard because of the sparsity of traditional data and the labour-intensive nature of the data collecting process. Recently, abundant spatial-temporal data have given us an opportunity to observe human movement. This research investigates the relationship between city-wide human movements inferred from two types of spatial-temporal data at the traffic analysis zone (TAZ) level. The first type of human movement is inferred from long-term smart card transaction data recording boarding actions. The second type of human movement is extracted from citywide time-sequenced mobile phone data with a 30-minute interval. Travel volume, travel distance and travel time are used to measure aggregated human movements in the city. To further examine the relationship between the two types of inferred movements, a linear correlation analysis is conducted on the hourly travel volume. The obtained results show that human movements inferred from smart card data and mobile phone data have a correlation of 0.635. However, there are still some non-ignorable differences in some special areas. This research not only reveals the citywide spatial-temporal human dynamics but also benefits the understanding of the reliability of inferring human movements from big spatial-temporal data.
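
    The reported correlation of 0.635 between the two inferred movement series is an ordinary Pearson correlation over hourly travel volumes. A minimal sketch (`pearson_r` is an illustrative helper, not code from the study):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equally long series, e.g.
    hourly travel volumes inferred from smart card vs. phone data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Perfectly proportional series give r = 1; the study's 0.635 indicates broad agreement while leaving room for the zone-level discrepancies it notes.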

  18. Retest reliability of individual alpha ERD topography assessed by human electroencephalography.

    PubMed

    Vázquez-Marrufo, Manuel; Galvao-Carmona, Alejandro; Benítez Lugo, María Luisa; Ruíz-Peña, Juan Luis; Borges Guerra, Mónica; Izquierdo Ayuso, Guillermo

    2017-01-01

    Despite the immense literature related to diverse human electroencephalographic (EEG) parameters, very few studies have focused on the reliability of these measures. Some of the most studied components (i.e., P3 or MMN) have received more attention regarding the stability of their main parameters, such as latency, amplitude or topography. However, spectral modulations have not been as extensively evaluated considering that different analysis methods are available. The main aim of the present study is to assess the reliability of the latency, amplitude and topography of event-related desynchronization (ERD) for the alpha band (10-14 Hz) observed in a cognitive task (visual oddball). Topography reliability was analysed at different levels (for the group, within-subjects individually and between-subjects individually). The latency for alpha ERD showed stable behaviour between two sessions, and the amplitude exhibited an increment (more negative) in the second session. Alpha ERD topography exhibited a high correlation score between sessions at the group level (r = 0.903, p<0.001). The mean value for within-subject correlations was 0.750 (with a range from 0.391 to 0.954). Regarding between-subject topography comparisons, some subjects showed a highly specific topography, whereas other subjects showed topographies that were more similar to those of other subjects. ERD was mainly stable between the two sessions with the exception of amplitude, which exhibited an increment in the second session. Topography exhibits excellent reliability at the group level; however, it exhibits highly heterogeneous behaviour at the individual level. Considering that the P3 was previously evaluated for this group of subjects, a direct comparison of the correlation scores was possible, and it showed that the ERD component is less reliable in individual topography than in the ERP component (P3).

  19. Retest reliability of individual alpha ERD topography assessed by human electroencephalography

    PubMed Central

    Vázquez-Marrufo, Manuel; Benítez Lugo, María Luisa; Ruíz-Peña, Juan Luis; Borges Guerra, Mónica; Izquierdo Ayuso, Guillermo

    2017-01-01

    Background Despite the immense literature related to diverse human electroencephalographic (EEG) parameters, very few studies have focused on the reliability of these measures. Some of the most studied components (i.e., P3 or MMN) have received more attention regarding the stability of their main parameters, such as latency, amplitude or topography. However, spectral modulations have not been as extensively evaluated considering that different analysis methods are available. The main aim of the present study is to assess the reliability of the latency, amplitude and topography of event-related desynchronization (ERD) for the alpha band (10–14 Hz) observed in a cognitive task (visual oddball). Topography reliability was analysed at different levels (for the group, within-subjects individually and between-subjects individually). Results The latency for alpha ERD showed stable behaviour between two sessions, and the amplitude exhibited an increment (more negative) in the second session. Alpha ERD topography exhibited a high correlation score between sessions at the group level (r = 0.903, p<0.001). The mean value for within-subject correlations was 0.750 (with a range from 0.391 to 0.954). Regarding between-subject topography comparisons, some subjects showed a highly specific topography, whereas other subjects showed topographies that were more similar to those of other subjects. Conclusion ERD was mainly stable between the two sessions with the exception of amplitude, which exhibited an increment in the second session. Topography exhibits excellent reliability at the group level; however, it exhibits highly heterogeneous behaviour at the individual level. Considering that the P3 was previously evaluated for this group of subjects, a direct comparison of the correlation scores was possible, and it showed that the ERD component is less reliable in individual topography than in the ERP component (P3). PMID:29088307
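The group-level reliability reported above is a Pearson correlation between the session-1 and session-2 ERD amplitude maps across electrodes. A minimal sketch of that computation, using hypothetical per-electrode values rather than the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-electrode alpha-ERD amplitudes for two sessions
session1 = [-2.1, -1.4, -3.0, -0.8, -2.6]
session2 = [-2.4, -1.1, -3.2, -0.9, -2.9]
r = pearson_r(session1, session2)
```

The same function applied within subjects (one map pair per subject) or between subjects (pairing different subjects' maps) yields the within- and between-subject comparisons described in the abstract.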

  20. Identifying homologous anatomical landmarks on reconstructed magnetic resonance images of the human cerebral cortical surface

    PubMed Central

    MAUDGIL, D. D.; FREE, S. L.; SISODIYA, S. M.; LEMIEUX, L.; WOERMANN, F. G.; FISH, D. R.; SHORVON, S. D.

    1998-01-01

Guided by a review of the anatomical literature, 36 sulci on the human cerebral cortical surface were designated as homologous. These sulci were assessed for visibility on 3-dimensional images reconstructed from magnetic resonance imaging scans of the brains of 20 normal volunteers by 2 independent observers. Those sulci that were found to be reproducibly identifiable were used to define 24 landmarks around the cortical surface. The interobserver and intraobserver variabilities of measurement of the 24 landmarks were calculated. These reliably reproducible landmarks can be used for detailed morphometric analysis, and may prove helpful in the analysis of suspected cerebral cortical structural abnormalities in patients with conditions such as epilepsy. PMID:10029189

  1. Determination of exposure multiples of human metabolites for MIST assessment in preclinical safety species without using reference standards or radiolabeled compounds.

    PubMed

    Ma, Shuguang; Li, Zhiling; Lee, Keun-Joong; Chowdhury, Swapan K

    2010-12-20

A simple, reliable, and accurate method was developed for quantitative assessment of metabolite coverage in preclinical safety species by mixing equal volumes of human plasma with blank plasma of animal species and vice versa, followed by analysis using high-resolution full-scan accurate mass spectrometry. This approach provided results comparable (within ±15%) to those obtained from regulated bioanalysis and did not require synthetic standards or radiolabeled compounds. In addition, both qualitative and quantitative data were obtained on all metabolites from a single LC-MS analysis, and therefore the coverage of any metabolite of interest can be determined.

  2. Initial description of a quantitative, cross-species (chimpanzee-human) social responsiveness measure

    PubMed Central

    Marrus, Natasha; Faughn, Carley; Shuman, Jeremy; Petersen, Steve; Constantino, John; Povinelli, Daniel; Pruett, John R.

    2011-01-01

    Objective Comparative studies of social responsiveness, an ability that is impaired in autistic spectrum disorders, can inform our understanding of both autism and the cognitive architecture of social behavior. Because there is no existing quantitative measure of social responsiveness in chimpanzees, we generated a quantitative, cross-species (human-chimpanzee) social responsiveness measure. Method We translated the Social Responsiveness Scale (SRS), an instrument that quantifies human social responsiveness, into an analogous instrument for chimpanzees. We then retranslated this "Chimp SRS" into a human "Cross-Species SRS" (XSRS). We evaluated three groups of chimpanzees (n=29) with the Chimp SRS and typical and autistic spectrum disorder (ASD) human children (n=20) with the XSRS. Results The Chimp SRS demonstrated strong inter-rater reliability at the three sites (ranges for individual ICCs: .534–.866 and mean ICCs: .851–.970). As has been observed in humans, exploratory principal components analysis of Chimp SRS scores supports a single factor underlying chimpanzee social responsiveness. Human subjects' XSRS scores were fully concordant with their SRS scores (r=.976, p=.001) and distinguished appropriately between typical and ASD subjects. One chimpanzee known for inappropriate social behavior displayed a significantly higher score than all other chimpanzees at its site, demonstrating the scale's ability to detect impaired social responsiveness in chimpanzees. Conclusion Our initial cross-species social responsiveness scale proved reliable and discriminated differences in social responsiveness across (in a relative sense) and within (in a more objectively quantifiable manner) humans and chimpanzees. PMID:21515200

  3. In-Vivo Human Skin to Textiles Friction Measurements

    NASA Astrophysics Data System (ADS)

    Pfarr, Lukas; Zagar, Bernhard

    2017-10-01

We report on a measurement system to determine highly reliable and accurate friction properties of textiles, as needed for example as input to garment simulation software. Our investigations led to a set-up that allows characterization not just of textile-to-textile but also of textile-to-skin (in vivo) tribological properties, and thus yields fundamental knowledge about genuine wearer interaction with garments. The test method conveyed in this paper measures, concurrently and with high time resolution, the normal force as well as the resulting shear force as the friction partner slides out of the static friction regime and into the dynamic regime on a test bench. Deeper analysis of various influences is enabled by extending the simple Coulomb model for rigid-body friction to include further essential parameters such as contact force, the predominant yarn orientation, and skin hydration. This easy-to-use system enables reliable and reproducible measurement of both static and dynamic friction for a variety of friction partners, including human skin with all its variability.
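In the Coulomb model the friction coefficient is the ratio of shear to normal force; the static coefficient comes from the peak shear before slip and the dynamic coefficient from the ratio during sliding. A minimal sketch of that extraction, with hypothetical force traces rather than the paper's measurements:

```python
def friction_coefficients(shear, normal):
    """Estimate static and dynamic friction coefficients from
    concurrently sampled shear- and normal-force traces (Coulomb model).
    Static: peak shear/normal ratio before slip; dynamic: mean ratio
    after the peak."""
    ratios = [s / n for s, n in zip(shear, normal)]
    peak = max(range(len(ratios)), key=lambda i: ratios[i])
    mu_static = ratios[peak]
    after = ratios[peak + 1:] or [mu_static]
    mu_dynamic = sum(after) / len(after)
    return mu_static, mu_dynamic

# Hypothetical force samples (N): shear rises to a static peak, then drops
shear  = [0.5, 1.0, 1.6, 1.2, 1.1, 1.15]
normal = [2.0, 2.0, 2.0, 2.0, 2.0, 2.0]
mu_s, mu_d = friction_coefficients(shear, normal)
```

The extended model described in the abstract would add contact force, yarn orientation, and skin hydration as further regressors; this sketch covers only the basic Coulomb ratio.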

  4. Psychometric evaluation of the Shared Decision-Making Instrument--Revised.

    PubMed

    Bartlett, Jacqueline A; Peterson, Jane A

    2013-02-01

The purpose of this study was to evaluate the psychometric properties of the Shared Decision-Making Inventory-Revised (SDMI-R) to measure four constructs (knowledge, attitudes, self-efficacy, and intent) theoretically defined as vital in discussing the human papillomavirus (HPV) disease and vaccine with clients. The SDMI-R was distributed to a sample (N = 1,525) of school nurses. Correlational matrices showed moderate to strong correlations, indicating adequate internal reliability. Reliability for the total instrument was satisfactory (α = .874), as was reliability for the Attitude, Self-Efficacy, and Intent subscales (.828, .917, and .891, respectively). Exploratory factor analysis revealed five components that explained 75.96% of the variance.

  5. NASA human factors programmatic overview

    NASA Technical Reports Server (NTRS)

    Connors, Mary M.

    1992-01-01

    Human factors addresses humans in their active and interactive capacities, i.e., in the mental and physical activities that they perform and in the contributions they make to achieving the goals of the mission. The overall goal of space human factors in NASA is to support the safety, productivity, and reliability of both the on-board crew and the ground support staff. Safety and reliability are fundamental requirements that human factors shares with other disciplines, while productivity represents the defining contribution of the human factors discipline.

  6. Validity and reliability of the Persian version of the Templer Death Anxiety Scale in family caregivers of cancer patients.

    PubMed

    Soleimani, Mohammad Ali; Bahrami, Nasim; Yaghoobzadeh, Ameneh; Banihashemi, Hedieh; Nia, Hamid Sharif; Haghdoost, Ali Akbar

    2016-01-01

    Due to increasing recognition of the importance of death anxiety for understanding human nature, it is important that researchers who investigate death anxiety have reliable and valid methodology to measure it. The purpose of this study was to evaluate the validity and reliability of the Persian version of the Templer Death Anxiety Scale (TDAS) in family caregivers of cancer patients. A sample of 326 caregivers of cancer patients completed a 15-item questionnaire. Principal components analysis (PCA) followed by a varimax rotation was used to assess the factor structure of the TDAS. The construct validity of the scale was assessed using exploratory and confirmatory factor analyses. Convergent and discriminant validity were also examined. Reliability was assessed with Cronbach's alpha coefficients and construct reliability. Based on the results of the PCA and consideration of the meaning of our items, a three-factor solution, explaining 60.38% of the variance, was identified. A confirmatory factor analysis (CFA) then supported the adequacy of the three-domain structure of the TDAS. Goodness-of-fit indices showed an acceptable fit overall for the full model: χ²(61) = 262.32, χ²/df (CMIN/DF) = 2.048; adjusted goodness of fit index (AGFI) = 0.922, parsimonious comparative fit index (PCFI) = 0.703, normed fit index (NFI) = 0.912, root mean square error of approximation (RMSEA) = 0.055. Convergent and discriminant validity criteria were fulfilled. The Cronbach's alpha and construct reliability were greater than 0.70. The findings show that the Persian version of the TDAS has a three-factor structure and acceptable validity and reliability.
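Cronbach's alpha, the internal-consistency statistic used here and in several of the records above, is α = k/(k-1) · (1 - Σ item variances / variance of totals). A minimal sketch with hypothetical item scores (a 3-item, 4-respondent toy example, not the 15-item study data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns
    (each column: one item's scores across respondents).
    Uses population variance; illustrative only."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Total score per respondent across all items
    totals = [sum(col[i] for col in items) for i in range(n)]
    item_var = sum(var(col) for col in items)
    return k / (k - 1) * (1 - item_var / var(totals))

# Hypothetical Likert-style responses: 3 items x 4 respondents
items = [
    [3, 4, 2, 5],
    [3, 5, 2, 4],
    [2, 4, 3, 5],
]
alpha = cronbach_alpha(items)
```

Values above roughly 0.70, as in the abstract, are conventionally read as acceptable internal consistency for group-level comparisons.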

  7. A 17-month time course study of human RNA and DNA degradation in body fluids under dry and humid environmental conditions.

    PubMed

    Sirker, Miriam; Schneider, Peter M; Gomes, Iva

    2016-11-01

    Blood, saliva, and semen are some of the forensically most relevant biological stains commonly found at crime scenes, which can often be of small size or challenging due to advanced decay. In this context, it is of great importance to possess reliable knowledge about the effects of degradation under different environmental conditions and to use appropriate methods for retrieving maximal information from a limited sample amount. In the last decade, RNA analysis has been demonstrated to be a reliable approach for identifying the cell or tissue type of an evidentiary body fluid trace. Hence, messenger RNA (mRNA) profiling is going to be implemented into forensic casework to supplement the routinely performed short tandem repeat (STR) analysis, and therefore, the ability to co-isolate RNA and DNA from the same sample is a prerequisite. The objective of this work was to monitor and compare the degradation process of both nucleic acids for human blood, saliva, and semen stains at three different concentrations, exposed to dry and humid conditions during a 17-month time period. This study also addressed the question whether there are relevant differences in the efficiency of automated, magnetic bead-based single DNA or RNA extraction methods compared to a manually performed co-extraction method using silica columns. Our data show that mRNA, especially from blood and semen, can be recovered over the entire time period surveyed without compromising the success of DNA profiling; mRNA analysis appears to be a robust and reliable technique to identify the biological source of aged stain material. The co-extraction method appears to provide mRNA and DNA of sufficient quantity and quality for all different forensic investigation procedures. Humidity and the accompanying mold formation are detrimental to both nucleic acids.

  8. Quasi-targeted analysis of hydroxylation-related metabolites of polycyclic aromatic hydrocarbons in human urine by liquid chromatography-mass spectrometry.

    PubMed

    Tang, Caiming; Tan, Jianhua; Fan, Ruifang; Zhao, Bo; Tang, Caixing; Ou, Weihui; Jin, Jiabin; Peng, Xianzhi

    2016-08-26

    Metabolite identification is crucial for revealing metabolic pathways and comprehensive potential toxicities of polycyclic aromatic hydrocarbons (PAHs) in the human body. In this work, a quasi-targeted analysis strategy was proposed for metabolite identification of monohydroxylated polycyclic aromatic hydrocarbons (OH-PAHs) in human urine using liquid chromatography triple quadrupole mass spectrometry (LC-QqQ-MS/MS) combined with liquid chromatography high resolution mass spectrometry (LC-HRMS). Potential metabolites of OH-PAHs were preliminarily screened out by LC-QqQ-MS/MS in association with filtering against a self-constructed information list of possible metabolites, followed by further identification and confirmation with LC-HRMS. The developed method can provide more reliable and systematic results compared with traditional untargeted analysis using LC-HRMS. In addition, data processing for LC-HRMS analysis was greatly simplified. This quasi-targeted analysis method was successfully applied to identifying phase I and phase II metabolites of OH-PAHs in human urine. Five metabolites of hydroxynaphthalene, seven of hydroxyfluorene, four of hydroxyphenanthrene, and three of hydroxypyrene were tentatively identified. Metabolic pathways of PAHs in the human body were putatively revealed based on the identified metabolites. The experimental results will be valuable for investigating the metabolic processes of PAHs in the human body, and the quasi-targeted analysis strategy can be expanded to the metabolite identification and profiling of other compounds in vivo. Copyright © 2016 Elsevier B.V. All rights reserved.
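The screening step described above amounts to matching measured accurate masses against a candidate list within a mass tolerance. A minimal sketch of that filter; the candidate names, masses, and the 5 ppm tolerance are all illustrative assumptions, not the study's actual list:

```python
def screen_metabolites(measured_mz, candidates, ppm_tol=5.0):
    """Match measured accurate m/z values against a self-constructed
    list of candidate metabolite masses within a ppm tolerance.
    Candidate names and masses below are hypothetical."""
    hits = []
    for mz in measured_mz:
        for name, theo in candidates.items():
            # ppm deviation between measured and theoretical mass
            if abs(mz - theo) / theo * 1e6 <= ppm_tol:
                hits.append((mz, name))
    return hits

# Hypothetical candidate masses (illustrative values only)
candidates = {"candidate sulfate conjugate [M-H]-": 296.9964,
              "candidate glucuronide conjugate [M-H]-": 319.0822}
measured = [296.9966, 310.1234]
hits = screen_metabolites(measured, candidates)
```

Matches surviving this filter would then go to LC-HRMS for confirmation, as the abstract describes.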

  9. Comparison of manual sleep staging with automated neural network-based analysis in clinical practice.

    PubMed

    Caffarel, Jennifer; Gibson, G John; Harrison, J Phil; Griffiths, Clive J; Drinnan, Michael J

    2006-03-01

    We have compared sleep staging by an automated neural network (ANN) system, BioSleep (Oxford BioSignals), and a human scorer using the Rechtschaffen and Kales scoring system. Sleep study recordings from 114 patients with suspected obstructive sleep apnoea syndrome (OSA) were analysed by ANN and by a blinded human scorer. We also examined human scorer reliability by calculating the agreement between the index scorer and a second independent blinded scorer for 28 of the 114 studies. For each study, we built contingency tables on an epoch-by-epoch (30 s epochs) comparison basis. From these, we derived kappa (κ) coefficients for different combinations of sleep stages. The overall agreement of automatic and manual scoring for the 114 studies for the classification {wake / light-sleep / deep-sleep / REM} was poor (median κ = 0.305) and only a little better (κ = 0.449) for the crude {wake / sleep} distinction. For the subgroup of 28 randomly selected studies, the overall agreement of automatic and manual scoring was again relatively low (κ = 0.331 for {wake / light-sleep / deep-sleep / REM} and κ = 0.505 for {wake / sleep}), whereas inter-scorer reliability was higher (κ = 0.641 for {wake / light-sleep / deep-sleep / REM} and κ = 0.737 for {wake / sleep}). We conclude that such an ANN-based analysis system is not sufficiently accurate for sleep study analyses using the R&K classification system.
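The kappa coefficients above follow from the epoch-by-epoch contingency tables via κ = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and p_e is chance agreement. A minimal sketch with a hypothetical {wake / sleep} table, not the study's data:

```python
def cohen_kappa(table):
    """Cohen's kappa from an epoch-by-epoch contingency table
    (rows: scorer A's categories, columns: scorer B's categories)."""
    n = len(table)
    total = sum(sum(row) for row in table)
    # Observed agreement: diagonal mass
    p_o = sum(table[i][i] for i in range(n)) / total
    # Chance agreement: product of marginal proportions
    row_m = [sum(row) / total for row in table]
    col_m = [sum(table[i][j] for i in range(n)) / total for j in range(n)]
    p_e = sum(r * c for r, c in zip(row_m, col_m))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical table of 100 thirty-second epochs, two scorers
table = [[40, 10],   # scorer A: wake
         [ 5, 45]]   # scorer A: sleep
kappa = cohen_kappa(table)
```

The four-stage comparisons in the abstract use the same formula on a 4x4 table.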

  10. HRA Aerospace Challenges

    NASA Technical Reports Server (NTRS)

    DeMott, Diana

    2013-01-01

    Compared to equipment designed to perform the same function over and over, humans are just not as reliable. Computers and machines perform the same action in the same way repeatedly, getting the same result, unless equipment fails or a human interferes. Humans who are supposed to perform the same actions repeatedly often perform them incorrectly due to a variety of issues including: stress, fatigue, illness, lack of training, distraction, acting at the wrong time, not acting when they should, not following procedures, misinterpreting information, or inattention to detail. Why not use robots and automatic controls exclusively if human error is so common? In an emergency or off-normal situation that the computer, robotic element, or automatic control system is not designed to respond to, the result is failure unless a human can intervene. The human in the loop may be more likely to cause an error, but is also more likely to catch the error and correct it. When it comes to unexpected situations, or performing multiple tasks outside the defined mission parameters, humans are the only viable alternative. Human Reliability Assessment (HRA) identifies ways to improve human performance and reliability and can lead to improvements in systems designed to interact with humans. Understanding the context of situations that can lead to human errors (taking the wrong action, taking no action, or making bad decisions) provides additional information to mitigate risks. With improved human reliability comes reduced risk for the overall operation or project.

  11. Just Culture: A Foundation for Balanced Accountability and Patient Safety

    PubMed Central

    Boysen, Philip G.

    2013-01-01

    Background The framework of a just culture ensures balanced accountability for both individuals and the organization responsible for designing and improving systems in the workplace. Engineering principles and human factors analysis influence the design of these systems so they are safe and reliable. Methods Approaches for improving patient safety introduced here are (1) analysis of error, (2) specific tools to enhance safety, and (3) outcome engineering. Conclusion The just culture is a learning culture that is constantly improving and oriented toward patient safety. PMID:24052772

  12. Robust Selectivity for Faces in the Human Amygdala in the Absence of Expressions

    PubMed Central

    Mende-Siedlecki, Peter; Verosky, Sara C.; Turk-Browne, Nicholas B.; Todorov, Alexander

    2014-01-01

    There is a well-established posterior network of cortical regions that plays a central role in face processing and that has been investigated extensively. In contrast, although responsive to faces, the amygdala is not considered a core face-selective region, and its face selectivity has never been a topic of systematic research in human neuroimaging studies. Here, we conducted a large-scale group analysis of fMRI data from 215 participants. We replicated the posterior network observed in prior studies but found equally robust and reliable responses to faces in the amygdala. These responses were detectable in most individual participants, but they were also highly sensitive to the initial statistical threshold and habituated more rapidly than the responses in posterior face-selective regions. A multivariate analysis showed that the pattern of responses to faces across voxels in the amygdala had high reliability over time. Finally, functional connectivity analyses showed stronger coupling between the amygdala and posterior face-selective regions during the perception of faces than during the perception of control visual categories. These findings suggest that the amygdala should be considered a core face-selective region. PMID:23984945

  13. Second-Order Conditioning of Human Causal Learning

    ERIC Educational Resources Information Center

    Jara, Elvia; Vila, Javier; Maldonado, Antonio

    2006-01-01

    This article provides the first demonstration of a reliable second-order conditioning (SOC) effect in human causal learning tasks. It demonstrates the human ability to infer relationships between a cause and an effect that were never paired together during training. Experiments 1a and 1b showed a clear and reliable SOC effect, while Experiments 2a…

  14. [The estimation of possibilities for the application of the laser capture microdissection technology for the molecular-genetic expert analysis (genotyping) of human chromosomal DNA].

    PubMed

    Ivanov, P L; Leonov, S N; Zemskova, E Iu

    2012-01-01

    The present study was designed to estimate the possibilities of application of the laser capture microdissection (LCM) technology for the molecular-genetic expert analysis (genotyping) of human chromosomal DNA. The experimental method employed for the purpose was the multiplex multilocus analysis of autosomal DNA polymorphism in the preparations of buccal epitheliocytes obtained by LCM. The key principles of the study were the application of physical methods for contrast enhancement of the micropreparations (such as phase-contrast microscopy and dark-field microscopy) and PCR-compatible cell lysis. Genotyping was carried out with the use of AmpFlSTR® MiniFiler™ PCR Amplification Kits (Applied Biosystems, USA). It was shown that the technique employed in the present study ensures reliable genotyping of human chromosomal DNA in the pooled preparations containing 10-20 dissected diploid cells each. This result fairly well agrees with the calculated sensitivity of the method. A few practical recommendations are offered.

  15. Analysis of short-chain fatty acids in human feces: A scoping review.

    PubMed

    Primec, Maša; Mičetić-Turk, Dušanka; Langerholc, Tomaž

    2017-06-01

    Short-chain fatty acids (SCFAs) play a crucial role in maintaining homeostasis in humans; the importance of good, reliable analytical detection of SCFAs has therefore grown considerably in the past few years. The aim of this scoping review is to show the trends in the development of different methods of SCFAs analysis in feces, based on the literature published in the last eleven years in all major indexing databases. The search criteria included analytical quantification techniques of SCFAs in different human clinical and in vivo studies. SCFAs analysis is still predominantly performed using gas chromatography (GC), followed by high performance liquid chromatography (HPLC), nuclear magnetic resonance (NMR) and capillary electrophoresis (CE). Performances, drawbacks and advantages of these methods are discussed, especially in the light of choosing a proper pretreatment, as feces is a complex biological material. Further optimization to develop a simple, cost effective and robust method for routine use is needed. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Identification of cardiomyocyte nuclei and assessment of ploidy for the analysis of cell turnover.

    PubMed

    Bergmann, Olaf; Zdunek, Sofia; Alkass, Kanar; Druid, Henrik; Bernard, Samuel; Frisén, Jonas

    2011-01-15

    Assays to quantify myocardial renewal rely on the accurate identification of cardiomyocyte nuclei. We previously ¹⁴C birth dated human cardiomyocytes based on the nuclear localization of cTroponins T and I. A recent report by Kajstura et al. suggested that cTroponin I is only localized to the nucleus in a senescent subpopulation of cardiomyocytes, implying that ¹⁴C birth dating of cTroponin T and I positive cell populations underestimates cardiomyocyte renewal in humans. We show here that the isolation of cell nuclei from the heart by flow cytometry with antibodies against cardiac Troponins T and I, as well as pericentriolar material 1 (PCM-1), allows for isolation of close to all cardiomyocyte nuclei, based on ploidy and marker expression. We also present a reassessment of cardiomyocyte ploidy, which has important implications for the analysis of cell turnover, and iododeoxyuridine (IdU) incorporation data. These data provide the foundation for reliable analysis of cardiomyocyte turnover in humans. Copyright © 2010 Elsevier Inc. All rights reserved.

  17. A stochastic evolutionary model generating a mixture of exponential distributions

    NASA Astrophysics Data System (ADS)

    Fenner, Trevor; Levene, Mark; Loizou, George

    2016-02-01

    Recent interest in human dynamics has stimulated the investigation of the stochastic processes that explain human behaviour in various contexts, such as mobile phone networks and social media. In this paper, we extend the stochastic urn-based model proposed in [T. Fenner, M. Levene, G. Loizou, J. Stat. Mech. 2015, P08015 (2015)] so that it can generate mixture models, in particular, a mixture of exponential distributions. The model is designed to capture the dynamics of survival analysis, traditionally employed in clinical trials, reliability analysis in engineering, and more recently in the analysis of large data sets recording human dynamics. The mixture modelling approach, which is relatively simple and well understood, is very effective in capturing heterogeneity in data. We provide empirical evidence for the validity of the model, using a data set of popular search engine queries collected over a period of 114 months. We show that the survival function of these queries is closely matched by the exponential mixture solution for our model.
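The survival function of the exponential-mixture model described above has the closed form S(t) = Σᵢ wᵢ · exp(-λᵢ t), with mixture weights summing to 1. A minimal sketch of evaluating it; the two-component weights and rates are illustrative assumptions, not parameters fitted to the search-query data:

```python
import math

def mixture_survival(t, weights, rates):
    """Survival function of a mixture of exponentials:
    S(t) = sum_i w_i * exp(-lambda_i * t), weights summing to 1."""
    return sum(w * math.exp(-lam * t) for w, lam in zip(weights, rates))

# Hypothetical two-component mixture: short-lived and long-lived queries
weights = [0.7, 0.3]
rates = [1.0, 0.1]    # decay rates per month (illustrative)
s6 = mixture_survival(6.0, weights, rates)
```

Fitting such a model to empirical survival data (e.g., the 114 months of query observations) would estimate the weights and rates, typically by maximum likelihood.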

  18. Psychometric Assessment of the Injection Pen Assessment Questionnaire (IPAQ): measuring ease of use and preference with injection pens for human growth hormone

    PubMed Central

    2012-01-01

    Purpose To examine the psychometric properties of the Injection Pen Assessment Questionnaire (IPAQ) including the following: 1) item and scale characteristics (e.g., frequencies, item distributions, and factor structure), 2) reliability, and 3) validity. Methods Focus groups and one-on-one dyad interviews guided the development of the IPAQ. The IPAQ was subsequently tested in 136 parent–child dyads in a Phase 3, 2-month, open-label, multicenter trial for a new Genotropin® disposable pen. Factor analysis was performed to inform the development of a scoring algorithm, and reliability and validity of the IPAQ were evaluated using the data from this two-month study. Psychometric analyses were conducted separately for each injection pen. Results Confirmatory factor analysis provides evidence supporting a second-order factor solution for four subscales and a total IPAQ score. These factor analysis results support the conceptual framework developed from previous qualitative research in patient dyads using the reusable pen. However, the IPAQ subscales did not consistently meet acceptable internal consistency reliability for some group level comparisons. Cronbach’s alphas for the total IPAQ score for both pens were 0.85, exceeding acceptable levels of reliability for group comparisons. Conclusions The total IPAQ score is a useful measure for evaluating ease of use and preference for injection pens in clinical trials among patient dyads receiving hGH. The psychometric properties of the individual subscales, mainly the lower internal consistency reliability of some of the subscales and the predictive validity findings, do not support the use of subscale scores alone as a primary endpoint. PMID:23046797

  19. Working Up a Good Sweat – The Challenges of Standardising Sweat Collection for Metabolomics Analysis

    PubMed Central

    Hussain, Joy N; Mantri, Nitin; Cohen, Marc M

    2017-01-01

    Introduction Human sweat is a complex biofluid of interest to diverse scientific fields. Metabolomics analysis of sweat promises to improve screening, diagnosis and self-monitoring of numerous conditions through new applications and greater personalisation of medical interventions. Before these applications can be fully developed, existing methods for the collection, handling, processing and storage of human sweat need to be revised. This review presents a cross-disciplinary overview of the origins, composition, physical characteristics and functional roles of human sweat, and explores the factors involved in standardising sweat collection for metabolomics analysis. Methods A literature review of human sweat analysis over the past 10 years (2006–2016) was performed to identify studies with metabolomics or similarly applicable ‘omics’ analysis. These studies were reviewed with attention to sweat induction and sampling techniques, timing of sweat collection, sweat storage conditions, laboratory derivation, processing and analytical platforms. Results Comparative analysis of 20 studies revealed numerous factors that can significantly impact the validity, reliability and reproducibility of sweat analysis including: anatomical site of sweat sampling, skin integrity and preparation; temperature and humidity at the sweat collection sites; timing and nature of sweat collection; metabolic quenching; transport and storage; qualitative and quantitative measurements of the skin microbiota at sweat collection sites; and individual variables such as diet, emotional state, metabolic conditions, pharmaceutical, recreational drug and supplement use. Conclusion Further development of standard operating protocols for human sweat collection can open the way for sweat metabolomics to significantly add to our understanding of human physiology in health and disease. PMID:28798503

  20. You Look Human, But Act Like a Machine: Agent Appearance and Behavior Modulate Different Aspects of Human-Robot Interaction.

    PubMed

    Abubshait, Abdulaziz; Wiese, Eva

    2017-01-01

    Gaze following occurs automatically in social interactions, but the degree to which gaze is followed depends on whether an agent is perceived to have a mind, making its behavior socially more relevant for the interaction. Mind perception also modulates the attitudes we have toward others, and determines the degree of empathy, prosociality, and morality invested in social interactions. Seeing mind in others is not exclusive to human agents, but mind can also be ascribed to non-human agents like robots, as long as their appearance and/or behavior allows them to be perceived as intentional beings. Previous studies have shown that human appearance and reliable behavior induce mind perception to robot agents, and positively affect attitudes and performance in human-robot interaction. What has not been investigated so far is whether different triggers of mind perception have an independent or interactive effect on attitudes and performance in human-robot interaction. We examine this question by manipulating agent appearance (human vs. robot) and behavior (reliable vs. random) within the same paradigm and examine how congruent (human/reliable vs. robot/random) versus incongruent (human/random vs. robot/reliable) combinations of these triggers affect performance (i.e., gaze following) and attitudes (i.e., agent ratings) in human-robot interaction. The results show that both appearance and behavior affect human-robot interaction but that the two triggers seem to operate in isolation, with appearance more strongly impacting attitudes, and behavior more strongly affecting performance. The implications of these findings for human-robot interaction are discussed.

  1. Fuzzy risk analysis of a modern γ-ray industrial irradiator.

    PubMed

    Castiglia, F; Giardina, M

    2011-06-01

    Fuzzy fault tree analyses were used to investigate accident scenarios that involve radiological exposure to operators working in industrial γ-ray irradiation facilities. The HEART method, a first-generation human reliability analysis method, was used to evaluate the probability of adverse human error in these analyses. This technique was modified on the basis of fuzzy set theory to more directly take into account the uncertainties in the error-promoting factors on which the methodology is based. Moreover, for some identified accident scenarios, fuzzy radiological exposure risk, expressed in terms of potential annual deaths, was evaluated. The calculated fuzzy risks for the examined plant were determined to be well below the reference risk suggested by the International Commission on Radiological Protection.
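
    The HEART quantification that the authors extend with fuzzy arithmetic is, in its crisp form, a simple multiplicative model: a generic task error probability is scaled by each error-producing condition (EPC) according to an assessed proportion of affect. A minimal sketch of that crisp calculation (function name and example numbers are illustrative, not taken from the paper):

```python
def heart_hep(generic_ep, epcs):
    """Crisp HEART human error probability.

    generic_ep: generic task unreliability for the task category.
    epcs: list of (max_effect, assessed_proportion) pairs, where
          max_effect is the EPC's maximum multiplier and the assessed
          proportion of affect lies in [0, 1].
    """
    hep = generic_ep
    for max_effect, proportion in epcs:
        # each EPC contributes a factor of ((EPC - 1) x APOA) + 1
        hep *= (max_effect - 1.0) * proportion + 1.0
    return min(hep, 1.0)  # a probability cannot exceed 1

# Hypothetical example: generic EP 0.003 with two EPCs
hep = heart_hep(0.003, [(11, 0.4), (3, 0.5)])
```

    The fuzzy extension described in the abstract would replace the crisp proportions with fuzzy numbers; the sketch shows only the underlying HEART arithmetic.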

  2. Agreement Between Face-to-Face and Free Software Video Analysis for Assessing Hamstring Flexibility in Adolescents.

    PubMed

    Moral-Muñoz, José A; Esteban-Moreno, Bernabé; Arroyo-Morales, Manuel; Cobo, Manuel J; Herrera-Viedma, Enrique

    2015-09-01

    The objective of this study was to determine the level of agreement between face-to-face hamstring flexibility measurements and free software video analysis in adolescents. Reduced hamstring flexibility is common in adolescents (75% of boys and 35% of girls aged 10). The length of the hamstring muscle has an important role in both the effectiveness and the efficiency of basic human movements, and reduced hamstring flexibility is related to various musculoskeletal conditions. There are various approaches to measuring hamstring flexibility with high reliability; the most commonly used approaches in the scientific literature are the sit-and-reach test, hip joint angle (HJA), and active knee extension. The assessment of hamstring flexibility using video analysis could help with adolescent flexibility follow-up. Fifty-four adolescents from a local school participated in a descriptive study of repeated measures using a crossover design. Active knee extension and HJA were measured with an inclinometer and were simultaneously recorded with a video camera. Each video was downloaded to a computer and subsequently analyzed using Kinovea 0.8.15, a free software application for movement analysis. All outcome measures showed reliability estimates with α > 0.90. The lowest reliability was obtained for HJA (α = 0.91). The preliminary findings support the use of a free software tool for assessing hamstring flexibility, offering health professionals a useful tool for adolescent flexibility follow-up.

  3. Ultra Reliable Closed Loop Life Support for Long Space Missions

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.; Ewert, Michael K.

    2010-01-01

    Spacecraft human life support systems can achieve ultra reliability by providing sufficient spares to replace all failed components. The additional mass of spares for ultra reliability is approximately equal to the original system mass, provided that the original system reliability is not too low. Acceptable reliability can be achieved for the Space Shuttle and Space Station by preventive maintenance and by replacing failed units. However, on-demand maintenance and repair requires a logistics supply chain in place to provide the needed spares. In contrast, a Mars or other long space mission must take along all the needed spares, since resupply is not possible. Long missions must achieve ultra reliability, a very low failure rate per hour, since they will take years rather than weeks and cannot be cut short if a failure occurs. Also, distant missions have a much higher mass launch cost per kilogram than near-Earth missions. Achieving ultra reliable spacecraft life support systems with acceptable mass will require a well-planned and extensive development effort. Analysis must determine the reliability requirement and allocate it to subsystems and components. Ultra reliability requires reducing the intrinsic failure causes, providing spares to replace failed components and having "graceful" failure modes. Technologies, components, and materials must be selected and designed for high reliability. Long duration testing is needed to confirm very low failure rates. Systems design should segregate the failure causes in the smallest, most easily replaceable parts. The system must be designed, developed, integrated, and tested with system reliability in mind. Maintenance and reparability of failed units must not add to the probability of failure. The overall system must be tested sufficiently to identify any design errors. A program to develop ultra reliable space life support systems with acceptable mass should start soon since it must be a long term effort.
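
    The spares-based route to ultra reliability described above is commonly analyzed with a Poisson failure model: the probability that a component with a constant failure rate fails no more times than the number of spares carried. A hypothetical sketch (function name and numbers are illustrative, not from the paper):

```python
from math import exp, factorial

def prob_spares_suffice(failure_rate, mission_hours, spares):
    """P(at most `spares` failures) for a Poisson failure process.

    failure_rate: constant failure rate per hour.
    mission_hours: mission duration in hours.
    spares: number of replacement units carried.
    """
    lam = failure_rate * mission_hours  # expected number of failures
    return sum(exp(-lam) * lam ** k / factorial(k)
               for k in range(spares + 1))
```

    For a long mission the expected failure count grows with duration, so either the intrinsic failure rate must drop (ultra reliability) or the spares count, and hence mass, must rise; this is the trade the abstract describes.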

  4. Reference values of elements in human hair: a systematic review.

    PubMed

    Mikulewicz, Marcin; Chojnacka, Katarzyna; Gedrange, Thomas; Górecki, Henryk

    2013-11-01

    No systematic review of reference values of elements in human hair has considered the methodological approach. In the absence of universally accepted and implemented reference ranges, hair mineral analysis has not yet become a reliable and useful method of assessing the nutritional status and exposure of individuals. Systematic review of reference values of elements in human hair. PubMed, ISI Web of Knowledge, Scopus. Humans, hair mineral analysis, elements or minerals, reference values, original studies. The number of studies screened and assessed for eligibility was 52; five papers were ultimately included in the review. The studies report reference ranges for the content of elements in hair: macroelements, microelements, toxic elements and other elements. Reference ranges were elaborated for different populations in the years 2000-2012. The analytical methodology differed, in particular in sample preparation, digestion and analysis (ICP-AES, ICP-MS); consequently, the levels of hair minerals reported as reference values varied. Standard procedures need to be elaborated, hair mineral analysis further validated, and detailed methodology reported. Only then will it be possible to provide meaningful reference ranges and exploit the potential of hair mineral analysis as a medical diagnostic technique. Copyright © 2013 Elsevier B.V. All rights reserved.

  5. Managing unexpected events in the manufacturing of biologic medicines.

    PubMed

    Grampp, Gustavo; Ramanan, Sundar

    2013-08-01

    The manufacturing of biologic medicines (biologics) requires robust process and facility design, rigorous regulatory compliance, and a well-trained workforce. Because of the complex attributes of biologics and their sensitivity to production and handling conditions, manufacturing of these medicines also requires a high-reliability manufacturing organization. As required by regulators, such an organization must monitor the state-of-control for the manufacturing process. A high-reliability organization also invests in an experienced and fully engaged technical support staff and fosters a management culture that rewards in-depth analysis of unexpected results, robust risk assessments, and timely and effective implementation of mitigation measures. Such a combination of infrastructure, technology, human capital, management, and a science-based operations culture does not occur without a strong organizational and financial commitment. These attributes of a high-reliability biologics manufacturer are difficult to achieve and may be differentiating factors as the supply of biologics diversifies in future years.

  6. Initial description of a quantitative, cross-species (chimpanzee-human) social responsiveness measure.

    PubMed

    Marrus, Natasha; Faughn, Carley; Shuman, Jeremy; Petersen, Steve E; Constantino, John N; Povinelli, Daniel J; Pruett, John R

    2011-05-01

    Comparative studies of social responsiveness, an ability that is impaired in autism spectrum disorders, can inform our understanding of both autism and the cognitive architecture of social behavior. Because there is no existing quantitative measure of social responsiveness in chimpanzees, we generated a quantitative, cross-species (human-chimpanzee) social responsiveness measure. We translated the Social Responsiveness Scale (SRS), an instrument that quantifies human social responsiveness, into an analogous instrument for chimpanzees. We then retranslated this "Chimpanzee SRS" into a human "Cross-Species SRS" (XSRS). We evaluated three groups of chimpanzees (n = 29) with the Chimpanzee SRS, and typical human children and children with autism spectrum disorder (ASD; n = 20) with the XSRS. The Chimpanzee SRS demonstrated strong interrater reliability at the three sites (ranges for individual ICCs: 0.534 to 0.866; mean ICCs: 0.851 to 0.970). As has been observed in human beings, exploratory principal components analysis of Chimpanzee SRS scores supports a single factor underlying chimpanzee social responsiveness. Human subjects' XSRS scores were fully concordant with their SRS scores (r = 0.976, p = .001) and distinguished appropriately between typical and ASD subjects. One chimpanzee known for inappropriate social behavior displayed a significantly higher score than all other chimpanzees at its site, demonstrating the scale's ability to detect impaired social responsiveness in chimpanzees. Our initial cross-species social responsiveness scale proved reliable and discriminated differences in social responsiveness across (in a relative sense) and within (in a more objectively quantifiable manner) human beings and chimpanzees. Copyright © 2011 American Academy of Child and Adolescent Psychiatry. Published by Elsevier Inc. All rights reserved.

  7. [Performance and safety at work].

    PubMed

    Bentivegna, M

    2010-01-01

    The occupational therapy approach to evaluation, centred on the person, on an analysis of performance and on an assessment of the work environment, can provide important information for planning interventions to increase safety at work. The reliability of work performance is influenced by many factors, some of which do not depend directly on humans, such as those related to the environment, materials, spaces, places and the organization of work; others are closely related to human behaviour. For this reason, to ensure the prevention of all harmful events, the risk evaluation process must also include an analysis of the role of human behaviour and functional capacity. In daily clinical practice, we occupational therapists use work to promote people's wellbeing and health by involving them in activities, in the knowledge that every occupation is perceived by the individual as something particularly personal and significant.

  8. Health Belief Model Scale for Human Papilloma Virus and its Vaccination: Adaptation and Psychometric Testing.

    PubMed

    Guvenc, Gulten; Seven, Memnun; Akyuz, Aygul

    2016-06-01

    To adapt and psychometrically test the Health Belief Model Scale for Human Papilloma Virus (HPV) and Its Vaccination (HBMS-HPVV) for use in a Turkish population and to assess the Human Papilloma Virus Knowledge score (HPV-KS) among female college students. Instrument adaptation and psychometric testing study. The sample consisted of 302 nursing students at a nursing school in Turkey between April and May 2013. Data on HBMS-HPVV responses, HPV knowledge and participants' descriptive characteristics were collected using the translated HBMS-HPVV and the HPV-KS. Test-retest reliability was evaluated, Cronbach α was used to assess internal consistency reliability, and exploratory factor analysis was used to assess construct validity of the HBMS-HPVV. The scale consists of 4 subscales that measure 4 constructs of the Health Belief Model covering the perceived susceptibility and severity of HPV and the benefits and barriers. The final 14-item scale had satisfactory validity and internal consistency. Cronbach α values for the 4 subscales ranged from 0.71 to 0.78. Total HPV-KS ranged from 0 to 8 (scale range, 0-10; 3.80 ± 2.12). The HBMS-HPVV is a valid and reliable instrument for measuring young Turkish women's beliefs and attitudes about HPV and its vaccination. Copyright © 2015 North American Society for Pediatric and Adolescent Gynecology. Published by Elsevier Inc. All rights reserved.
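
    The internal-consistency statistic reported in this record, Cronbach's α, is computed from the item variances and the variance of the summed scale. A minimal sketch of the raw-α calculation (function name and example data are illustrative, not from the study):

```python
def cronbach_alpha(item_scores):
    """Raw Cronbach's alpha.

    item_scores: list of equal-length lists, one list of respondent
    scores per item.
    """
    k = len(item_scores)
    n = len(item_scores[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # each respondent's total score across the k items
    totals = [sum(col[i] for col in item_scores) for i in range(n)]
    item_var_sum = sum(var(col) for col in item_scores)
    return k / (k - 1) * (1 - item_var_sum / var(totals))
```

    A subscale whose items covary strongly yields α near 1; values of 0.71 to 0.78, as reported above, indicate acceptable internal consistency for a short subscale.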

  9. Multisensory decisions provide support for probabilistic number representations.

    PubMed

    Kanitscheider, Ingmar; Brown, Amanda; Pouget, Alexandre; Churchland, Anne K

    2015-06-01

    A large body of evidence suggests that an approximate number sense allows humans to estimate numerosity in sensory scenes. This ability is widely observed in humans, including those without formal mathematical training. Despite this, many outstanding questions remain about the nature of the numerosity representation in the brain. Specifically, it is not known whether approximate numbers are represented as scalar estimates of numerosity or, alternatively, as probability distributions over numerosity. In the present study, we used a multisensory decision task to distinguish these possibilities. We trained human subjects to decide whether a test stimulus had a larger or smaller numerosity compared with a fixed reference. Depending on the trial, the numerosity was presented as either a sequence of visual flashes or a sequence of auditory tones, or both. To test for a probabilistic representation, we varied the reliability of the stimulus by adding noise to the visual stimuli. In accordance with a probabilistic representation, we observed a significant improvement in multisensory compared with unisensory trials. Furthermore, a trial-by-trial analysis revealed that although individual subjects showed strategic differences in how they leveraged auditory and visual information, all subjects exploited the reliability of unisensory cues. An alternative, nonprobabilistic model, in which subjects combined cues without regard for reliability, was not able to account for these trial-by-trial choices. These findings provide evidence that the brain relies on a probabilistic representation for numerosity decisions. Copyright © 2015 the American Physiological Society.
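
    The probabilistic account tested in this record is usually formalized as reliability-weighted (inverse-variance) cue combination, which predicts the observed multisensory improvement: the fused estimate has lower variance than either unisensory cue alone. A minimal sketch under that standard model (symbols and numbers are illustrative, not from the paper):

```python
def combine_cues(mu_v, var_v, mu_a, var_a):
    """Optimal fusion of a visual and an auditory estimate.

    Each cue is weighted by its reliability (inverse variance);
    returns the combined mean and combined variance.
    """
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_a)
    mu = w_v * mu_v + (1.0 - w_v) * mu_a
    var = 1.0 / (1.0 / var_v + 1.0 / var_a)  # always below min(var_v, var_a)
    return mu, var
```

    Adding noise to the visual stimulus, as the authors did, raises var_v and shifts weight toward the auditory cue; a nonprobabilistic observer that ignores reliability would keep the weights fixed, which is the alternative model the trial-by-trial analysis ruled out.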

  10. An fMRI and effective connectivity study investigating miss errors during advice utilization from human and machine agents.

    PubMed

    Goodyear, Kimberly; Parasuraman, Raja; Chernyak, Sergey; de Visser, Ewart; Madhavan, Poornima; Deshpande, Gopikrishna; Krueger, Frank

    2017-10-01

    As society becomes more reliant on machines and automation, understanding how people utilize advice is a necessary endeavor. Our objective was to reveal the underlying neural associations during advice utilization from expert human and machine agents with fMRI and multivariate Granger causality analysis. During an X-ray luggage-screening task, participants accepted or rejected good or bad advice from either the human or machine agent framed as experts with manipulated reliability (high miss rate). We showed that the machine-agent group decreased their advice utilization compared to the human-agent group and these differences in behaviors during advice utilization could be accounted for by high expectations of reliable advice and changes in attention allocation due to miss errors. Brain areas involved with the salience and mentalizing networks, as well as sensory processing involved with attention, were recruited during the task and the advice utilization network consisted of attentional modulation of sensory information with the lingual gyrus as the driver during the decision phase and the fusiform gyrus as the driver during the feedback phase. Our findings expand on the existing literature by showing that misses degrade advice utilization, which is represented in a neural network involving salience detection and self-processing with perceptual integration.

  11. Calbindin-D28k is a more reliable marker of human Purkinje cells than standard Nissl stains: a stereological experiment.

    PubMed

    Whitney, Elizabeth R; Kemper, Thomas L; Rosene, Douglas L; Bauman, Margaret L; Blatt, Gene J

    2008-02-15

    In a study of human Purkinje cell (PC) number, a striking mismatch between the number of PCs observed with the Nissl stain and the number of PCs immunopositive for calbindin-D28k (CB) was identified in 2 of the 10 brains examined. In the remaining eight brains this mismatch was not observed. Further, in these eight brains, analysis of CB immunostained sections counterstained with the Nissl stain revealed that more than 99% Nissl stained PCs were also immunopositive for CB. In contrast, in the two discordant brains, only 10-20% of CB immunopositive PCs were also identified with the Nissl stain. Although this finding was unexpected, a historical survey of the literature revealed that Spielmeyer [Spielmeyer W. Histopathologie des nervensystems. Julius Springer: Berlin; 1922. p. 56-79] described human cases with PCs that lacked the expected Nissl staining intensity, an important historical finding and critical issue when studying postmortem human brains. The reason for this failure in Nissl staining is not entirely clear, but it may result from premortem circumstances since it is not accounted for by postmortem delay or processing variables. Regardless of the exact cause, these observations suggest that Nissl staining may not be a reliable marker for PCs and that CB is an excellent alternative marker.

  12. SPAR-H Step-by-Step Guidance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    W. J. Galyean; A. M. Whaley; D. L. Kelly

    This guide provides step-by-step guidance on the use of the SPAR-H method for quantifying Human Failure Events (HFEs). It is intended to be used with the worksheets provided in "The SPAR-H Human Reliability Analysis Method," NUREG/CR-6883, dated August 2005. Each step in the process of producing a Human Error Probability (HEP) is discussed. These steps are: Step 1, categorizing the HFE as Diagnosis and/or Action; Step 2, rating the Performance Shaping Factors; Step 3, calculating the PSF-modified HEP; Step 4, accounting for dependence; and Step 5, applying the minimum value cutoff. The discussions on dependence are extensive and include an appendix that describes insights obtained from the psychology literature.
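
    The Step 2 and Step 3 arithmetic described above can be sketched as follows. The nominal HEPs (1E-2 for diagnosis, 1E-3 for action) and the adjustment factor applied when three or more negative PSFs are present follow the SPAR-H method as documented in NUREG/CR-6883; the function name and argument layout are illustrative:

```python
def spar_h_hep(task_type, psf_multipliers):
    """PSF-modified HEP per the SPAR-H worksheet flow (sketch).

    task_type: 'diagnosis' or 'action'.
    psf_multipliers: the eight PSF multipliers assigned in Step 2
    (1.0 = nominal; > 1.0 = a negative, error-increasing PSF).
    """
    nhep = {'diagnosis': 1e-2, 'action': 1e-3}[task_type]
    composite = 1.0
    for m in psf_multipliers:
        composite *= m
    negative = sum(1 for m in psf_multipliers if m > 1.0)
    if negative >= 3:
        # adjustment factor keeps the modified HEP below 1.0
        return nhep * composite / (nhep * (composite - 1.0) + 1.0)
    return min(nhep * composite, 1.0)
```

    With all PSFs nominal the HEP is simply the nominal value; with several strongly negative PSFs the adjustment factor prevents the product from exceeding 1.0. Dependence (Step 4) and the minimum value cutoff (Step 5) are applied afterward and are not shown here.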

  13. Personality and subjective well-being in orangutans (Pongo pygmaeus and Pongo abelii).

    PubMed

    Weiss, Alexander; King, James E; Perkins, Lori

    2006-03-01

    Orangutans (Pongo pygmaeus and Pongo abelii) are semisolitary apes and, among the great apes, the most distantly related to humans. Raters assessed 152 orangutans on 48 personality descriptors; 140 of these orangutans were also rated on a subjective well-being questionnaire. Principal-components analysis yielded 5 reliable personality factors: Extraversion, Dominance, Neuroticism, Agreeableness, and Intellect. The authors found no factor analogous to human Conscientiousness. Among the orangutans rated on all 48 personality descriptors and the subjective well-being questionnaire, Extraversion, Agreeableness, and low Neuroticism were related to subjective well-being. These findings suggest that analogues of human, chimpanzee, and orangutan personality domains existed in a common ape ancestor. Copyright (c) 2006 APA, all rights reserved.

  14. One Size Does Not Fit All: Human Failure Event Decomposition and Task Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronald Laurids Boring, PhD

    2014-09-01

    In the probabilistic safety assessments (PSAs) used in the nuclear industry, human failure events (HFEs) are determined as a subset of hardware failures, namely those hardware failures that could be triggered or exacerbated by human action or inaction. This approach is top-down, starting with hardware faults and deducing human contributions to those faults. Elsewhere, more traditionally human factors driven approaches would tend to look at opportunities for human errors first in a task analysis and then identify which of those errors is risk significant. The intersection of top-down and bottom-up approaches to defining HFEs has not been carefully studied. Ideally, both approaches should arrive at the same set of HFEs. This question remains central as human reliability analysis (HRA) methods are generalized to new domains like oil and gas. The HFEs used in nuclear PSAs tend to be top-down (defined as a subset of the PSA), whereas the HFEs used in petroleum quantitative risk assessments (QRAs) are more likely to be bottom-up (derived from a task analysis conducted by human factors experts). The marriage of these approaches is necessary in order to ensure that HRA methods developed for top-down HFEs are also sufficient for bottom-up applications. In this paper, I first review top-down and bottom-up approaches for defining HFEs and then present a seven-step guideline to ensure a task analysis completed as part of human error identification decomposes to a level suitable for use as HFEs. This guideline illustrates an effective way to bridge the bottom-up approach with top-down requirements.

  15. Best Practices for Reliable and Robust Spacecraft Structures

    NASA Technical Reports Server (NTRS)

    Raju, Ivatury S.; Murthy, P. L. N.; Patel, Naresh R.; Bonacuse, Peter J.; Elliott, Kenny B.; Gordon, S. A.; Gyekenyesi, J. P.; Daso, E. O.; Aggarwal, P.; Tillman, R. F.

    2007-01-01

    A study was undertaken to capture the best practices for the development of reliable and robust spacecraft structures for NASA's next-generation cargo and crewed launch vehicles. In this study, the NASA heritage programs such as Mercury, Gemini, Apollo, and the Space Shuttle program were examined. A series of lessons learned during the NASA and DoD heritage programs are captured. The processes that "make the right structural system" are examined along with the processes to "make the structural system right". The impact of technology advancements in materials and analysis and testing methods on the reliability and robustness of spacecraft structures is studied. The best practices and lessons learned are extracted from these studies. Since the first human space flight, the best practices for reliable and robust spacecraft structures appear to be well established, understood, and articulated by each generation of designers and engineers. However, these best practices apparently have not always been followed. When the best practices are ignored or short cuts are taken, risks accumulate, and reliability suffers. Thus program managers need to be vigilant of circumstances and situations that tend to violate best practices. Adherence to the best practices may help develop spacecraft systems with high reliability and robustness against certain anomalies and unforeseen events.

  16. Parts Quality Management: Direct Part Marking via Data Matrix Symbols for Mission Assurance

    NASA Technical Reports Server (NTRS)

    Moss, Chantrice

    2013-01-01

    A United States Government Accountability Office (GAO) review of twelve NASA programs found widespread parts quality problems contributing to significant cost overruns, schedule delays, and reduced system reliability. Direct part-marking with Data Matrix symbols could significantly improve the quality of inventory control and parts lifecycle management. This paper examines the feasibility of using 15 marking technologies for use in future NASA programs. A structural analysis is based on marked material type, operational environment (e.g., ground, suborbital, orbital), durability of marks, ease of operation, reliability, and affordability. A cost-benefits analysis considers marking technology (data plates, label printing, direct part marking) and marking types (two-dimensional machine-readable, human-readable). Previous NASA parts marking efforts and historical cost data are accounted for, including in-house vs. outsourced marking. Some marking methods are still under development. While this paper focuses on NASA programs, results may be applicable to a variety of industrial environments.

  17. Surgical swab counting: a qualitative analysis from the perspective of the scrub nurse.

    PubMed

    D'Lima, D; Sacks, M; Blackman, W; Benn, J

    2014-05-01

    The aim of the study was to conduct a qualitative exploration of the sociotechnical processes underlying retained surgical swabs, and to explore the fundamental reasons why the swab count procedure and related protocols fail in practice. Data was collected through a set of 27 semistructured qualitative interviews with scrub nurses from a large, multi-site teaching hospital. Interview transcripts were analysed using established constant comparative methods, moving between inductive and deductive reasoning. Key findings were associated with interprofessional perspectives, team processes and climate and responsibility for the swab count. The analysis of risk factors revealed that perceived social and interprofessional issues played a significant role in the reliability of measures to prevent retained swabs. This work highlights the human, psychological and organisational factors that impact upon the reliability of the process and gives rise to recommendations to address contextual factors and improve perioperative practice and training.

  18. Analysis of human factors effects on the safety of transporting radioactive waste materials: Technical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abkowitz, M.D.; Abkowitz, S.B.; Lepofsky, M.

    1989-04-01

    This report examines the extent of human factors effects on the safety of transporting radioactive waste materials. It is seen principally as a scoping effort, to establish whether there is a need for DOE to undertake a more formal approach to studying human factors in radioactive waste transport, and if so, logical directions for that program to follow. Human factors effects are evaluated on driving and loading/transfer operations only. Particular emphasis is placed on the driving function, examining the relationship between human error and safety as it relates to the impairment of driver performance. Although multi-modal in focus, the widespread availability of data and previous literature on truck operations resulted in a primary study focus on the trucking mode from the standpoint of policy development. In addition to the analysis of human factors accident statistics, the report provides relevant background material on several policies that have been instituted or are under consideration, directed at improving human reliability in the transport sector. On the basis of reported findings, preliminary policy areas are identified. 71 refs., 26 figs., 5 tabs.

  19. Space Station man-machine automation trade-off analysis

    NASA Technical Reports Server (NTRS)

    Zimmerman, W. F.; Bard, J.; Feinberg, A.

    1985-01-01

    The man-machine automation tradeoff methodology presented here is one of four research tasks comprising the Autonomous Spacecraft System Technology (ASST) project. ASST was established to identify and study system-level design problems for autonomous spacecraft. Using the Space Station as an example of a spacecraft system requiring a certain level of autonomous control, a system-level man-machine automation tradeoff methodology is presented that: (1) optimizes man-machine mixes for different ground and on-orbit crew functions subject to cost, safety, weight, power, and reliability constraints, and (2) plots the best incorporation plan for new, emerging technologies by weighing cost, relative availability, reliability, safety, importance to out-year missions, and ease of retrofit. Although the methodology takes a fairly straightforward approach to valuing human productivity, it remains sensitive to the important subtleties associated with designing a well-integrated man-machine system. These subtleties include considerations such as crew preference to retain certain spacecraft control functions, or valuing human integration and decision capabilities over equivalent hardware and software where appropriate.

  20. A LEAN approach toward automated analysis and data processing of polymers using proton NMR spectroscopy.

    PubMed

    de Brouwer, Hans; Stegeman, Gerrit

    2011-02-01

    To maximize utilization of expensive laboratory instruments and to make most effective use of skilled human resources, the entire chain of data processing, calculation, and reporting that is needed to transform raw NMR data into meaningful results was automated. The LEAN process improvement tools were used to identify non-value-added steps in the existing process. These steps were eliminated using an in-house developed software package, which allowed us to meet the key requirement of improving quality and reliability compared with the existing process while freeing up valuable human resources and increasing productivity. Reliability and quality were improved by the consistent data treatment performed by the software and the uniform administration of results. Automating a single NMR spectrometer led to a reduction in operator time of 35%, a doubling of the annual sample throughput from 1400 to 2800, and a reduction in turnaround time from six days to less than two. Copyright © 2011 Society for Laboratory Automation and Screening. Published by Elsevier Inc. All rights reserved.

  1. APPRIS: annotation of principal and alternative splice isoforms

    PubMed Central

    Rodriguez, Jose Manuel; Maietta, Paolo; Ezkurdia, Iakes; Pietrelli, Alessandro; Wesselink, Jan-Jaap; Lopez, Gonzalo; Valencia, Alfonso; Tress, Michael L.

    2013-01-01

    Here, we present APPRIS (http://appris.bioinfo.cnio.es), a database that houses annotations of human splice isoforms. APPRIS has been designed to provide value to manual annotations of the human genome by adding reliable protein structural and functional data and information from cross-species conservation. The visual representation of the annotations provided by APPRIS for each gene allows annotators and researchers alike to easily identify functional changes brought about by splicing events. In addition to collecting, integrating and analyzing reliable predictions of the effect of splicing events, APPRIS also selects a single reference sequence for each gene, here termed the principal isoform, based on the annotations of structure, function and conservation for each transcript. APPRIS identifies a principal isoform for 85% of the protein-coding genes in the GENCODE 7 release for ENSEMBL. Analysis of the APPRIS data shows that at least 70% of the alternative (non-principal) variants would lose important functional or structural information relative to the principal isoform. PMID:23161672

  2. Human Support Issues and Systems for the Space Exploration Initiative: Results from Project Outreach

    DTIC Science & Technology

    1991-01-01

    that human factors were responsible for mission failure more often than equipment factors. Spacecraft habitability and ergonomics also require more...substantial challenges for designing reliable, flexible joints and dexterous, reliable gloves. Submission #100701 dealt with the ergonomics of work...perception that human factors deals primarily with cockpit displays and ergonomics . The success of long-duration missions will be highly dependent on

  3. Understanding human management of automation errors

    PubMed Central

    McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.

    2013-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042

  4. Understanding human management of automation errors.

    PubMed

    McBride, Sara E; Rogers, Wendy A; Fisk, Arthur D

    2014-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance.

  5. Evaluation of Human Reliability in Selected Activities in the Railway Industry

    NASA Astrophysics Data System (ADS)

    Sujová, Erika; Čierna, Helena; Molenda, Michał

    2016-09-01

    The article focuses on the evaluation of human reliability in the human-machine system in the railway industry. Based on a survey of train dispatchers and of selected activities, the authors identified risk factors affecting the dispatcher's work and evaluated the seriousness of their influence on the reliability and safety of performed activities. The research took place at the authors' workplace between 2012 and 2013, using a survey method. Among the most important findings are reports of unclear and complicated internal regulations and work processes, a feeling of being overworked, and fear for one's safety at small, insufficiently protected stations.

  6. MaRS Project

    NASA Technical Reports Server (NTRS)

    Aruljothi, Arunvenkatesh

    2016-01-01

    The Space Exploration Division of the Safety and Mission Assurance Directorate is responsible for reducing the risk to Human Space Flight Programs by providing system safety, reliability, and risk analysis. The Risk & Reliability Analysis branch plays a part in this by utilizing Probabilistic Risk Assessment (PRA) and Reliability and Maintainability (R&M) tools to identify possible types of failure and effective solutions. A continuous effort of this branch is MaRS, or Mass and Reliability System, a tool that was the focus of this internship. Future long-duration space missions will have to find a balance between the mass and reliability of their spare parts. They will be unable to take spares of everything and will have to determine what is most likely to require maintenance and spares. Currently there is no database that combines mass and reliability data of low-level space-grade components; MaRS aims to be the first database to do this. The data in MaRS will be based on the hardware flown on the International Space Station (ISS). The components on the ISS have a long history and are well documented, making them the perfect source. Currently, MaRS is a functioning Excel workbook database; the backend is complete and only requires optimization. MaRS has been populated with all the assemblies and their components that are used on the ISS; the failures of these components are updated regularly. This project was a continuation of the efforts of previous intern groups. Once complete, R&M engineers working on future space flight missions will be able to quickly access failure and mass data on assemblies and components, allowing them to make important decisions and tradeoffs.
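The mass-versus-reliability tradeoff that MaRS is meant to support can be sketched as a small optimization: choose the spares kit that maximizes the probability of covering all failures without exceeding a mass budget. All component names, masses, and failure rates below are invented for illustration; they are not MaRS or ISS data.

```python
from itertools import combinations_with_replacement
from math import exp, prod

# (name, mass_kg, expected failures per mission) -- hypothetical values
components = [
    ("pump", 12.0, 0.30),
    ("valve", 2.5, 0.10),
    ("controller", 4.0, 0.20),
    ("filter", 1.0, 0.50),
]
mass_of = {n: m for n, m, _ in components}
rate_of = {n: r for n, _, r in components}
MASS_BUDGET_KG = 8.0

def prob_covered(rate, spares):
    """Poisson probability that failures do not exceed the spares carried."""
    p, term = 0.0, exp(-rate)
    for k in range(spares + 1):
        p += term
        term *= rate / (k + 1)
    return p

def mission_success(kit):
    """Probability that no component runs out of spares."""
    return prod(prob_covered(rate_of[n], kit.count(n)) for n in mass_of)

# Enumerate small kits within the mass budget and keep the best one
best_kit, best_p = (), mission_success(())
for r in range(1, 4):  # kits of up to three spares
    for kit in combinations_with_replacement(mass_of, r):
        if sum(mass_of[n] for n in kit) <= MASS_BUDGET_KG:
            p = mission_success(kit)
            if p > best_p:
                best_kit, best_p = kit, p

print(best_kit, round(best_p, 4))
```

Real spares planning also weighs repair level, commonality, and crew time; the sketch only shows the core mass/reliability tension.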

  7. 10 CFR 712.15 - Management evaluation.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 4 2014-01-01 2014-01-01 false Management evaluation. 712.15 Section 712.15 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability... workplace substance abuse program for DOE contractor employees, and DOE Order 3792.3, “Drug-Free Federal...

  8. 10 CFR 712.15 - Management evaluation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 4 2012-01-01 2012-01-01 false Management evaluation. 712.15 Section 712.15 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability... workplace substance abuse program for DOE contractor employees, and DOE Order 3792.3, “Drug-Free Federal...

  9. 10 CFR 712.15 - Management evaluation.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 4 2011-01-01 2011-01-01 false Management evaluation. 712.15 Section 712.15 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability... workplace substance abuse program for DOE contractor employees, and DOE Order 3792.3, “Drug-Free Federal...

  10. 10 CFR 712.15 - Management evaluation.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 4 2013-01-01 2013-01-01 false Management evaluation. 712.15 Section 712.15 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability... workplace substance abuse program for DOE contractor employees, and DOE Order 3792.3, “Drug-Free Federal...

  11. 10 CFR 712.18 - Transferring HRP certification.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Transferring HRP certification. 712.18 Section 712.18 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability Program Procedures § 712.18 Transferring HRP certification. (a) For HRP certification to be...

  12. 10 CFR 712.2 - Applicability.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Applicability. 712.2 Section 712.2 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability Program General Provisions § 712.2 Applicability. The HRP applies to all applicants for, or current employees of...

  13. 10 CFR 712.22 - Hearing officer's report and recommendation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Hearing officer's report and recommendation. 712.22 Section 712.22 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability Program Procedures § 712.22 Hearing officer's report and recommendation. Within...

  14. 10 CFR 712.16 - DOE security review.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 4 2011-01-01 2011-01-01 false DOE security review. 712.16 Section 712.16 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability... part. (c) Any mental/personality disorder or behavioral issues found in a personnel security file...

  15. 10 CFR 712.10 - Designation of HRP positions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... duties or has responsibility for working with, protecting, or transporting nuclear explosives, nuclear... 10 Energy 4 2012-01-01 2012-01-01 false Designation of HRP positions. 712.10 Section 712.10 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability...

  16. 10 CFR 712.10 - Designation of HRP positions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... duties or has responsibility for working with, protecting, or transporting nuclear explosives, nuclear... 10 Energy 4 2013-01-01 2013-01-01 false Designation of HRP positions. 712.10 Section 712.10 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability...

  17. 10 CFR 712.10 - Designation of HRP positions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... duties or has responsibility for working with, protecting, or transporting nuclear explosives, nuclear... 10 Energy 4 2010-01-01 2010-01-01 false Designation of HRP positions. 712.10 Section 712.10 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability...

  18. 10 CFR 712.10 - Designation of HRP positions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... duties or has responsibility for working with, protecting, or transporting nuclear explosives, nuclear... 10 Energy 4 2011-01-01 2011-01-01 false Designation of HRP positions. 712.10 Section 712.10 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability...

  19. 10 CFR 712.10 - Designation of HRP positions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... duties or has responsibility for working with, protecting, or transporting nuclear explosives, nuclear... 10 Energy 4 2014-01-01 2014-01-01 false Designation of HRP positions. 712.10 Section 712.10 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability...

  20. 10 CFR 712.17 - Instructional requirements.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 4 2011-01-01 2011-01-01 false Instructional requirements. 712.17 Section 712.17 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability... responding to behavioral change and aberrant or unusual behavior that may result in a risk to national...

  1. 10 CFR 712.17 - Instructional requirements.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 4 2012-01-01 2012-01-01 false Instructional requirements. 712.17 Section 712.17 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability... responding to behavioral change and aberrant or unusual behavior that may result in a risk to national...

  2. 10 CFR 712.17 - Instructional requirements.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 4 2013-01-01 2013-01-01 false Instructional requirements. 712.17 Section 712.17 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability... responding to behavioral change and aberrant or unusual behavior that may result in a risk to national...

  3. 10 CFR 712.17 - Instructional requirements.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 4 2014-01-01 2014-01-01 false Instructional requirements. 712.17 Section 712.17 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability... responding to behavioral change and aberrant or unusual behavior that may result in a risk to national...

  4. HSI top-down requirements analysis for ship manpower reduction

    NASA Astrophysics Data System (ADS)

    Malone, Thomas B.; Bost, J. R.

    2000-11-01

    U.S. Navy ship acquisition programs such as DD 21 and CVNX are increasingly relying on top down requirements analysis (TDRA) to define and assess design approaches for workload and manpower reduction, and for ensuring required levels of human performance, reliability, safety, and quality of life at sea. The human systems integration (HSI) approach to TDRA begins with a function analysis which identifies the functions derived from the requirements in the Operational Requirements Document (ORD). The function analysis serves as the function baseline for the ship, and also supports the definition of RDT&E and Total Ownership Cost requirements. A mission analysis is then conducted to identify mission scenarios, again based on requirements in the ORD, and the Design Reference Mission (DRM). This is followed by a mission/function analysis which establishes the function requirements to successfully perform the ship's missions. Function requirements of major importance for HSI are information, performance, decision, and support requirements associated with each function. An allocation of functions defines the roles of humans and automation in performing the functions associated with a mission. Alternate design concepts, based on function allocation strategies, are then described, and task networks associated with the concepts are developed. Task network simulations are conducted to assess workloads and human performance capabilities associated with alternate concepts. An assessment of the affordability and risk associated with alternate concepts is performed, and manning estimates are developed for feasible design concepts.

  5. STAMP-Based HRA Considering Causality Within a Sociotechnical System: A Case of Minuteman III Missile Accident.

    PubMed

    Rong, Hao; Tian, Jin

    2015-05-01

    The study contributes to human reliability analysis (HRA) by proposing a method that focuses more on human error causality within a sociotechnical system, illustrating its rationality and feasibility with a case study of the Minuteman (MM) III missile accident. Due to the complexity and dynamics within a sociotechnical system, previous analyses of accidents involving human and organizational factors clearly demonstrated that methods using a sequential accident model are inadequate to analyze human error within a sociotechnical system. The system-theoretic accident model and processes (STAMP) was used to develop a universal framework of human error causal analysis. To elaborate the causal relationships and demonstrate the dynamics of human error, system dynamics (SD) modeling was conducted based on the framework. A total of 41 contributing factors, categorized into four types of human error, were identified through the STAMP-based analysis. All factors relate to a broad view of sociotechnical systems and are more comprehensive than the causation presented in the officially issued accident investigation report. Recommendations for both technical and managerial improvements to lower the risk of the accident are proposed. The interdisciplinary approach provides complementary support between system safety and human factors. The integrated method based on STAMP and the SD model contributes to HRA effectively. The proposed method will be beneficial to HRA, risk assessment, and control of the MM III operating process, as well as other sociotechnical systems. © 2014, Human Factors and Ergonomics Society.
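One feedback loop of the kind an SD model captures can be sketched with simple Euler integration of a stock-and-flow structure. The loop, variables, and parameters below are purely illustrative; they are not the 41-factor model from the paper.

```python
# Illustrative system-dynamics loop: production pressure erodes a
# safety-oversight stock, and human error likelihood rises as it drops.
DT = 0.1          # integration step (months)
STEPS = 600       # 60 months total

oversight = 1.0   # stock: normalized safety-oversight level
pressure = 0.5    # exogenous production pressure (0..1)
error_rate_log = []

for _ in range(STEPS):
    erosion = 0.05 * pressure * oversight        # pressure erodes oversight
    recovery = 0.02 * (1.0 - oversight)          # audits/training rebuild it
    oversight += DT * (recovery - erosion)
    error_rate = 0.01 + 0.05 * (1.0 - oversight) # errors rise as oversight drops
    error_rate_log.append(error_rate)

print(round(oversight, 3), round(error_rate_log[-1], 4))
```

The point of the technique is that error probability emerges from the system's dynamics rather than being a fixed per-task number.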

  6. Feasibility of surveillance of changes in human fertility and semen quality.

    PubMed

    Stewart, T M; Brown, E H; Venn, A; Mbizvo, M T; Farley, T M; Garrett, C; Baker, H W

    2001-01-01

    There is concern that male fertility is declining, but this is difficult to study because few men volunteer for studies of semen quality, and recruitment bias may over-represent the subfertile. The Human Reproduction Programme of the World Health Organization developed a protocol for multicentre studies of fertility involving a questionnaire for pregnant women to obtain time to pregnancy (TTP): the number of menstrual cycles taken to conceive. Male characteristics and semen quality will be determined in a subset of the partners. Our aim was to validate the TTP questionnaire, and to examine potential recruitment bias and the feasibility of conducting large-scale surveillance of fertility. The questionnaire was administered to 120 pregnant women (16-32 weeks). Validation included internal reliability by consistency of responses, test-retest reliability by repeat administration (20 women) and accuracy by comparison of gestational age from first antenatal ultrasound and menstrual dates. Internal reliability was high. Agreement between categorical responses on re-testing was very good (κ > 0.8). In both the re-test and gestational age analyses, differences in TTP of 1 cycle were found (standard deviation <0.25 cycles). In this small pilot study there was no evidence of recruitment bias. Response rates indicate the feasibility of surveillance of fertility in large maternity centres.
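The test-retest agreement reported above (κ > 0.8) is Cohen's kappa for paired categorical responses: observed agreement corrected for chance agreement. A minimal computation on made-up responses (not the study's data):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two paired sequences of categorical responses."""
    assert len(a) == len(b)
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n        # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb[k] for k in ca) / (n * n)     # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical test/retest answers to one yes/no questionnaire item
test1  = ["yes", "yes", "no", "yes", "no", "yes", "no", "no", "yes", "yes"]
retest = ["yes", "yes", "no", "yes", "no", "yes", "no", "yes", "yes", "yes"]
print(round(cohens_kappa(test1, retest), 3))  # → 0.783
```

Nine of ten answers match (0.9 observed), but chance agreement is 0.54, so kappa lands well below the raw agreement rate.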

  7. Ultrasound measurement of transcranial distance during head-down tilt

    NASA Technical Reports Server (NTRS)

    Torikoshi, S.; Wilson, M. H.; Ballard, R. E.; Watenpaugh, D. E.; Murthy, G.; Yost, W. T.; Cantrell, J. H.; Chang, D. S.; Hargens, A. R.

    1995-01-01

    Exposure to microgravity elevates blood pressure and flow in the head, which may increase intracranial volume (ICV) and intracranial pressure (ICP). Rhesus monkeys exposed to simulated microgravity in the form of 6 degree head-down tilt (HDT) experience elevated ICP. In humans, twenty-four hours of 6 degree HDT bed rest increases cerebral blood flow velocity relative to pre-HDT upright posture. In acute 6 degree HDT experiments, humans exhibited increased ICP, measured with the tympanic membrane displacement (TMD) technique. Other studies suggest that increased ICP in humans and cats causes measurable cranial bone movement across the sagittal suture. Due to the slightly compliant nature of the cranium, elevation of ICP will increase ICV and transcranial distance. Currently, several non-invasive approaches to monitor ICP are being investigated. Such techniques include TMD and modal analysis of the skull. TMD may not be reliable over a large range of ICP, and neither method is capable of measuring small changes in pressure. Ultrasound, however, may reliably measure the small distance changes that accompany ICP fluctuations. The purpose of our study was to develop and evaluate an ultrasound technique to measure transcranial distance changes during HDT.

  8. Modeling and Quantification of Team Performance in Human Reliability Analysis for Probabilistic Risk Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeffrey C. Joe; Ronald L. Boring

    Probabilistic Risk Assessment (PRA) and Human Reliability Assessment (HRA) are important technical contributors to the United States (U.S.) Nuclear Regulatory Commission’s (NRC) risk-informed and performance-based approach to regulating U.S. commercial nuclear activities. Furthermore, all currently operating commercial NPPs in the U.S. are required by federal regulation to be staffed with crews of operators. Yet, aspects of team performance are underspecified in most HRA methods that are widely used in the nuclear industry. There are a variety of "emergent" team cognition and teamwork errors (e.g., communication errors) that are 1) distinct from individual human errors, and 2) important to understand from a PRA perspective. The lack of robust models or quantification of team performance is an issue that affects the accuracy and validity of HRA methods and models, leading to significant uncertainty in estimating human error probabilities (HEPs). This paper describes research whose objective is to model and quantify team dynamics and teamwork within NPP control room crews for risk-informed applications, thereby improving the technical basis of HRA, which in turn improves the risk-informed approach the NRC uses to regulate the U.S. commercial nuclear industry.

  9. A reliability analysis tool for SpaceWire network

    NASA Astrophysics Data System (ADS)

    Zhou, Qiang; Zhu, Longjiang; Fei, Haidong; Wang, Xingyou

    2017-04-01

    SpaceWire is a standard for on-board satellite networks and the basis for future data-handling architectures. It is becoming more and more popular in space applications due to its technical advantages, including reliability, low power and fault protection. High reliability is a vital issue for spacecraft; therefore, it is very important to analyze and improve the reliability performance of the SpaceWire network. This paper deals with the problem of reliability modeling and analysis of SpaceWire networks. According to the function division of the distributed network, a task-based reliability analysis method is proposed: the reliability analysis of each task yields a system reliability matrix, and the reliability of the network system is deduced by integrating the reliability indexes in this matrix. With this method, we developed a reliability analysis tool for SpaceWire networks based on VC, which also implements the computation schemes for the reliability matrix and multi-path task reliability. Using this tool, we analyzed several cases on typical architectures. The analytic results indicate that a redundant architecture has better reliability performance than a basic one; in practice, a dual-redundancy scheme has been adopted for some key units to improve the reliability index of the system or task. Finally, this reliability analysis tool will have a direct influence on both task division and topology selection in the SpaceWire network system design phase.
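In its simplest case, the task-based computation described above reduces to series reliability along a path and parallel reliability across redundant paths. A minimal sketch with invented link/node reliabilities (not values from the paper's tool) shows why the dual-redundant architecture scores higher:

```python
from math import prod

def path_reliability(unit_reliabilities):
    """Series: every unit (link/router) on the path must work."""
    return prod(unit_reliabilities)

def task_reliability(paths):
    """Parallel: the task fails only if all redundant paths fail."""
    p_fail = prod(1.0 - path_reliability(p) for p in paths)
    return 1.0 - p_fail

# Hypothetical three-hop path: source link, router, destination link
basic = task_reliability([[0.99, 0.98, 0.99]])
dual  = task_reliability([[0.99, 0.98, 0.99],
                          [0.99, 0.98, 0.99]])  # identical redundant path
print(round(basic, 4), round(dual, 4))  # 0.9605 0.9984
```

Per-task values like these would populate one row of the reliability matrix; the paper's tool additionally handles shared units between paths, which this independent-path sketch ignores.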

  10. Accuracy of epidemiological inferences based on publicly available information: retrospective comparative analysis of line lists of human cases infected with influenza A(H7N9) in China

    PubMed Central

    2014-01-01

    Background Appropriate public health responses to infectious disease threats should be based on best-available evidence, which requires timely reliable data for appropriate analysis. During the early stages of epidemics, analysis of ‘line lists’ with detailed information on laboratory-confirmed cases can provide important insights into the epidemiology of a specific disease. The objective of the present study was to investigate the extent to which reliable epidemiologic inferences could be made from publicly-available epidemiologic data of human infection with influenza A(H7N9) virus. Methods We collated and compared six different line lists of laboratory-confirmed human cases of influenza A(H7N9) virus infection in the 2013 outbreak in China, including the official line list constructed by the Chinese Center for Disease Control and Prevention plus five other line lists by HealthMap, Virginia Tech, Bloomberg News, the University of Hong Kong and FluTrackers, based on publicly-available information. We characterized clinical severity and transmissibility of the outbreak, using line lists available at specific dates to estimate epidemiologic parameters, to replicate real-time inferences on the hospitalization fatality risk, and the impact of live poultry market closure. Results Demographic information was mostly complete (less than 10% missing for all variables) in different line lists, but there were more missing data on dates of hospitalization, discharge and health status (more than 10% missing for each variable). The estimated onset to hospitalization distributions were similar (median ranged from 4.6 to 5.6 days) for all line lists. Hospital fatality risk was consistently around 20% in the early phase of the epidemic for all line lists and approached the final estimate of 35% afterwards for the official line list only. Most of the line lists estimated >90% reduction in incidence rates after live poultry market closures in Shanghai, Nanjing and Hangzhou. 
Conclusions We demonstrated that analysis of publicly-available data on H7N9 permitted reliable assessment of transmissibility and geographical dispersion, while assessment of clinical severity was less straightforward. Our results highlight the potential value in constructing a minimum dataset with standardized format and definition, and regular updates of patient status. Such an approach could be particularly useful for diseases that spread across multiple countries. PMID:24885692
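The early-versus-final gap in the hospitalization fatality risk above arises because pending outcomes dilute a naive estimate. A toy illustration on a synthetic line list (invented counts, not the real H7N9 data):

```python
# Synthetic line-list outcome column partway through an outbreak
records = ["died"] * 7 + ["recovered"] * 13 + ["pending"] * 20

naive = records.count("died") / len(records)   # deaths / all reported cases
resolved = records.count("died") + records.count("recovered")
known = records.count("died") / resolved       # deaths / resolved cases only
print(round(naive, 3), round(known, 2))        # 0.175 0.35
```

With half the outcomes still pending, the naive estimator sits near the early ~20% figure while conditioning on resolved cases already approaches the final 35%; survival-analysis corrections refine this further.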

  11. Accuracy of epidemiological inferences based on publicly available information: retrospective comparative analysis of line lists of human cases infected with influenza A(H7N9) in China.

    PubMed

    Lau, Eric H Y; Zheng, Jiandong; Tsang, Tim K; Liao, Qiaohong; Lewis, Bryan; Brownstein, John S; Sanders, Sharon; Wong, Jessica Y; Mekaru, Sumiko R; Rivers, Caitlin; Wu, Peng; Jiang, Hui; Li, Yu; Yu, Jianxing; Zhang, Qian; Chang, Zhaorui; Liu, Fengfeng; Peng, Zhibin; Leung, Gabriel M; Feng, Luzhao; Cowling, Benjamin J; Yu, Hongjie

    2014-05-28

    Appropriate public health responses to infectious disease threats should be based on best-available evidence, which requires timely reliable data for appropriate analysis. During the early stages of epidemics, analysis of 'line lists' with detailed information on laboratory-confirmed cases can provide important insights into the epidemiology of a specific disease. The objective of the present study was to investigate the extent to which reliable epidemiologic inferences could be made from publicly-available epidemiologic data of human infection with influenza A(H7N9) virus. We collated and compared six different line lists of laboratory-confirmed human cases of influenza A(H7N9) virus infection in the 2013 outbreak in China, including the official line list constructed by the Chinese Center for Disease Control and Prevention plus five other line lists by HealthMap, Virginia Tech, Bloomberg News, the University of Hong Kong and FluTrackers, based on publicly-available information. We characterized clinical severity and transmissibility of the outbreak, using line lists available at specific dates to estimate epidemiologic parameters, to replicate real-time inferences on the hospitalization fatality risk, and the impact of live poultry market closure. Demographic information was mostly complete (less than 10% missing for all variables) in different line lists, but there were more missing data on dates of hospitalization, discharge and health status (more than 10% missing for each variable). The estimated onset to hospitalization distributions were similar (median ranged from 4.6 to 5.6 days) for all line lists. Hospital fatality risk was consistently around 20% in the early phase of the epidemic for all line lists and approached the final estimate of 35% afterwards for the official line list only. Most of the line lists estimated >90% reduction in incidence rates after live poultry market closures in Shanghai, Nanjing and Hangzhou. 
We demonstrated that analysis of publicly-available data on H7N9 permitted reliable assessment of transmissibility and geographical dispersion, while assessment of clinical severity was less straightforward. Our results highlight the potential value in constructing a minimum dataset with standardized format and definition, and regular updates of patient status. Such an approach could be particularly useful for diseases that spread across multiple countries.

  12. Development and testing of the questionnaire CEC-61: Knowledge about cervical cancer in Chilean adolescents.

    PubMed

    Urrutia, María Teresa; Gajardo, Macarena; Padilla, Oslando

    2017-05-22

    Despite a clear association between human papillomavirus and cervical cancer, knowledge in adolescent populations regarding the disease and methods for its detection and prevention is deficient. The aim of this study was to develop and test a new questionnaire concerning knowledge on cervical cancer. An instrument was developed and validated to measure knowledge in 226 Chilean adolescents between April and June 2011. Content validity, construct validity, and reliability analysis of the instrument were performed. The new, validated instrument, called CEC-61 (Conocimientos en Cancer Cérvicouterino-61 items/Knowledge in Cervical Cancer-61 items), contains nine factors and 61 items. The new questionnaire explained 81% of the variance with a reliability of 0.96. The assessment of knowledge with a valid and reliable instrument is the first step in creating interventions for a population and to encourage appropriate preventive behavior. CEC-61 is highly reliable and has a clear factorial structure to evaluate knowledge in nine domains related to cervical cancer disease, cervical cancer risk, papilloma virus infection, the Papanicolaou test, and the papilloma virus vaccine.

  13. Automatic specification of reliability models for fault-tolerant computers

    NASA Technical Reports Server (NTRS)

    Liceaga, Carlos A.; Siewiorek, Daniel P.

    1993-01-01

    The calculation of reliability measures using Markov models is required for life-critical processor-memory-switch structures that have standby redundancy or that are subject to transient or intermittent faults or repair. The task of specifying these models is tedious and prone to human error because of the large number of states and transitions required in any reasonable system. Therefore, model specification is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model specification. Automation requires a general system description language (SDL). For practicality, this SDL should also provide a high level of abstraction and be easy to learn and use. The first attempt to define and implement an SDL with those characteristics is presented. A program named Automated Reliability Modeling (ARM) was constructed as a research vehicle. The ARM program uses a graphical interface as its SDL, and it outputs a Markov reliability model specification formulated for direct use by programs that generate and evaluate the model.
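The kind of Markov reliability model that ARM specifies can be sketched for a single standby-redundant pair with an imperfect switchover. States, rates, and the switch coverage below are illustrative assumptions, not ARM output; the chain is integrated with a simple discrete-time step.

```python
# States: 0 = primary up, 1 = standby up (after switchover), 2 = failed (absorbing)
LAMBDA = 1e-3   # unit failure rate (per hour), assumed
C = 0.95        # probability the switch to the standby succeeds, assumed
DT = 1.0        # time step (hours)
T = 1000        # mission time (hours)

p = [1.0, 0.0, 0.0]   # state probabilities [primary, standby, failed]
for _ in range(int(T / DT)):
    f0 = p[0] * LAMBDA * DT          # primary failures this step
    f1 = p[1] * LAMBDA * DT          # standby failures this step
    p = [p[0] - f0,
         p[1] + C * f0 - f1,         # successful switchovers enter state 1
         p[2] + (1 - C) * f0 + f1]   # failed switchovers and standby failures absorb

reliability = p[0] + p[1]            # probability the system is still up
print(round(reliability, 4))
```

Even this three-state chain shows why specification is tedious by hand: adding repair, transient faults, or a second standby multiplies the states and transitions, which is exactly what ARM automates.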

  14. Automated MRI Cerebellar Size Measurements Using Active Appearance Modeling

    PubMed Central

    Price, Mathew; Cardenas, Valerie A.; Fein, George

    2014-01-01

    Although the human cerebellum has been increasingly identified as an important hub that shows potential for helping in the diagnosis of a large spectrum of disorders, such as alcoholism, autism, and fetal alcohol spectrum disorder, the high costs associated with manual segmentation and the low availability of reliable automated cerebellar segmentation tools have resulted in a limited focus on cerebellar measurement in human neuroimaging studies. We present here the CATK (Cerebellar Analysis Toolkit), which is based on the Bayesian framework implemented in FMRIB’s FIRST. This approach involves training Active Appearance Models (AAM) using hand-delineated examples. CATK can currently delineate the cerebellar hemispheres and three vermal groups (lobules I–V, VI–VII, and VIII–X). Linear registration with the low-resolution MNI152 template is used to provide initial alignment, and Point Distribution Models (PDM) are parameterized using stellar sampling. The Bayesian approach models the relationship between shape and texture through computation of conditionals in the training set. Our method varies from the FIRST framework in that initial fitting is driven by 1D intensity profile matching, and the conditional likelihood function is subsequently used to refine fitting. The method was developed using T1-weighted images from 63 subjects that were imaged and manually labeled: 43 subjects were scanned once and were used for training models, and 20 subjects were imaged twice (with manual labeling applied to both runs) and used to assess reliability and validity. Intraclass correlation analysis shows that CATK is highly reliable (average test-retest ICCs of 0.96), and offers excellent agreement with the gold standard (average validity ICC of 0.87 against manual labels). 
    Comparisons against an alternative atlas-based approach, SUIT (Spatially Unbiased Infratentorial Template), which registers images with a high-resolution template of the cerebellum, show that our AAM approach offers superior reliability and validity. Extensions of CATK to cerebellar hemisphere parcels are envisioned. PMID:25192657
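The test-retest reliability above is an intraclass correlation. A minimal one-way random-effects ICC(1,1) (in the Shrout-Fleiss taxonomy) for two scanning sessions per subject, on made-up volumes rather than the study's measurements:

```python
def icc_1_1(sessions):
    """One-way random-effects ICC(1,1) for two sessions per subject."""
    n, k = len(sessions), 2
    grand = sum(sum(s) for s in sessions) / (n * k)
    means = [sum(s) / k for s in sessions]
    # Between-subject and within-subject mean squares
    bms = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    wms = sum((x - m) ** 2
              for s, m in zip(sessions, means) for x in s) / (n * (k - 1))
    return (bms - wms) / (bms + (k - 1) * wms)

# Hypothetical paired cerebellar volumes (cm^3) from two sessions
volumes = [(10.1, 10.3), (12.0, 11.8), (9.5, 9.6), (11.2, 11.1), (13.4, 13.1)]
print(round(icc_1_1(volumes), 3))
```

An ICC near 1 means the between-subject variance dwarfs the session-to-session noise, which is what "test-retest ICCs of 0.96" conveys.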

  15. 10 CFR 712.21 - Office of Hearings and Appeals.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Office of Hearings and Appeals. 712.21 Section 712.21 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability Program Procedures § 712.21 Office of Hearings and Appeals. (a) The certification review hearing...

  16. Applying Failure Modes, Effects, And Criticality Analysis And Human Reliability Analysis Techniques To Improve Safety Design Of Work Process In Singapore Armed Forces

    DTIC Science & Technology

    2016-09-01

    an instituted safety program that utilizes a generic risk assessment method involving the 5-M (Mission, Man, Machine, Medium and Management) factor...the Safety core value is hinged upon three key principles: (1) each soldier has a crucial part to play, by adopting safety as a core value and making...it a way of life in his unit; (2) safety is an integral part of training, operations and mission success; and (3) safety is an individual, team and

  17. Pitfalls and Precautions When Using Predicted Failure Data for Quantitative Analysis of Safety Risk for Human Rated Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Hatfield, Glen S.; Hark, Frank; Stott, James

    2016-01-01

    Launch vehicle reliability analysis is largely dependent upon predicted failure rates from data sources such as MIL-HDBK-217F. Reliability prediction methodologies based on component data do not take into account risks attributable to manufacturing, assembly, and process controls, yet these factors often dominate the component-level reliability, or probability of failure. While the consequences of failure are often understood in assessing risk, using predicted values in a risk model to estimate the probability of occurrence will likely underestimate the risk. Managers and decision makers often use the probability of occurrence in determining whether to accept the risk or require a design modification. Due to the absence of system-level test and operational data inherent in aerospace applications, the actual risk threshold for acceptance may not be appropriately characterized for decision-making purposes. This paper will establish a method and approach to identify the pitfalls and precautions of accepting risk based solely upon predicted failure data. This approach will provide a set of guidelines that may be useful for arriving at a more realistic quantification of risk prior to acceptance by a program.
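
The underestimation described above is easiest to see in how a handbook failure rate becomes a probability of occurrence. A sketch under a constant-failure-rate (exponential) assumption; the rates, mission time, and 10x process factor below are illustrative, not from the paper:

```python
import math

def prob_of_failure(lam_per_hour, mission_hours):
    """Probability of at least one failure over the mission, assuming the
    constant (exponential) failure rate implied by handbook predictions."""
    return 1.0 - math.exp(-lam_per_hour * mission_hours)

# Illustrative handbook-style rate: 2 failures per million hours.
predicted = prob_of_failure(2e-6, 500)        # component-level prediction
# If manufacturing, assembly, and process escapes raise the true rate 10x,
# the real probability of occurrence is an order of magnitude higher:
with_process_risk = prob_of_failure(2e-5, 500)
print(predicted, with_process_risk)
```

A decision maker comparing only `predicted` against a risk threshold would accept a design whose actual failure probability is roughly ten times larger.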

  18. Simulation study of melanoma detection in human skin tissues by laser-generated surface acoustic waves

    NASA Astrophysics Data System (ADS)

    Chen, Kun; Fu, Xing; Dorantes-Gonzalez, Dante J.; Lu, Zimo; Li, Tingting; Li, Yanning; Wu, Sen; Hu, Xiaotang

    2014-07-01

    Air pollution has been correlated to an increasing number of cases of human skin diseases in recent years. However, the investigation of human skin tissues has received only limited attention, to the point that there are not yet satisfactory modern detection technologies to accurately, noninvasively, and rapidly diagnose human skin at epidermis and dermis levels. In order to detect and analyze severe skin diseases such as melanoma, a finite element method (FEM) simulation study of the application of the laser-generated surface acoustic wave (LSAW) technique is developed. A three-layer human skin model is built, where LSAWs are generated and propagated, and their effects in the skin medium with melanoma are analyzed. Frequency domain analysis is used as a main tool to investigate such issues as the minimum detectable size of melanoma, filtering spectra from noise and from computational irregularities, as well as how the FEM model meshing size and computational capabilities influence the accuracy of the results. Based on the aforementioned aspects, the analysis of the signals under the scrutiny of the phase velocity dispersion curve is verified to be a reliable, sensitive, and promising approach for detecting and characterizing melanoma in human skin.
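
The phase-velocity dispersion analysis mentioned above can be illustrated numerically: sample the surface wave at two points, estimate the phase lag at the frequency of interest, and convert it to a velocity. A simplified sketch on synthetic single-frequency signals (not the paper's FEM data; the sensor spacing is kept under one wavelength so no phase unwrapping is needed):

```python
import cmath, math

def phase_velocity(sig_near, sig_far, dx, freq, fs):
    """Phase velocity at `freq`: v = 2*pi*f*dx / delta_phi, where delta_phi
    is the phase lag accumulated over the sensor spacing dx (assumed to be
    less than one wavelength, so no unwrapping is needed)."""
    def phase(sig):
        # Project the signal onto exp(-2j*pi*f*t) to extract its phase at freq.
        z = sum(s * cmath.exp(-2j * math.pi * freq * n / fs) for n, s in enumerate(sig))
        return cmath.phase(z)
    dphi = (phase(sig_near) - phase(sig_far)) % (2 * math.pi)
    return 2 * math.pi * freq * dx / dphi

# Synthetic SAW: 1 MHz tone, true velocity 3000 m/s, sensors 1.0 mm apart.
fs, f0, v_true, dx = 50e6, 1e6, 3000.0, 1.0e-3
delay = dx / v_true
n_samples = 500
s1 = [math.sin(2 * math.pi * f0 * t / fs) for t in range(n_samples)]
s2 = [math.sin(2 * math.pi * f0 * (t / fs - delay)) for t in range(n_samples)]
print(phase_velocity(s1, s2, dx, f0, fs))
```

Repeating the estimate across the frequencies of a broadband laser-generated pulse yields the dispersion curve that the study uses to characterize melanoma.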

  19. Simulation study of melanoma detection in human skin tissues by laser-generated surface acoustic waves.

    PubMed

    Chen, Kun; Fu, Xing; Dorantes-Gonzalez, Dante J; Lu, Zimo; Li, Tingting; Li, Yanning; Wu, Sen; Hu, Xiaotang

    2014-01-01

    Air pollution has been correlated to an increasing number of cases of human skin diseases in recent years. However, the investigation of human skin tissues has received only limited attention, to the point that there are not yet satisfactory modern detection technologies to accurately, noninvasively, and rapidly diagnose human skin at epidermis and dermis levels. In order to detect and analyze severe skin diseases such as melanoma, a finite element method (FEM) simulation study of the application of the laser-generated surface acoustic wave (LSAW) technique is developed. A three-layer human skin model is built, where LSAWs are generated and propagated, and their effects in the skin medium with melanoma are analyzed. Frequency domain analysis is used as a main tool to investigate such issues as the minimum detectable size of melanoma, filtering spectra from noise and from computational irregularities, as well as how the FEM model meshing size and computational capabilities influence the accuracy of the results. Based on the aforementioned aspects, the analysis of the signals under the scrutiny of the phase velocity dispersion curve is verified to be a reliable, sensitive, and promising approach for detecting and characterizing melanoma in human skin.

  20. Effect of steady magnetic field on human lymphocytes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mileva, M.; Ivanov, B.; Bulanova, M.

    1983-01-01

    Exposure to a steady magnetic field (SMF) for different periods of time did not elicit a statistically reliable increase in chromosome aberrations in human peripheral blood lymphocytes. Metaphase analysis of Crepis capillaris cells revealed that SMF (9 kOe, 200 Oe/cm) for 2 days did not induce chromosome aberrations. Nor were any changes demonstrated in roots of beans, onions and L-fibroblasts of subcutaneous tissue of mice and Chinese hamsters. The obtained data are indicative of an absence of cytogenetic effect of SMF. The level and spectrum of chromosome aberrations did not exceed the values for spontaneous chromatid fragments in cultures. Cytogenetic analysis of DEDE cells of the Chinese hamster revealed a mild mutagenic effect of SMF. Chromosomal aberrations were also demonstrated after exposure (5 min) of garlic roots.

  1. Liquid chromatography tandem mass spectrometry method for the quantitative analysis of ceritinib in human plasma and its application to pharmacokinetic studies.

    PubMed

    Heudi, Olivier; Vogel, Denise; Lau, Yvonne Y; Picard, Franck; Kretz, Olivier

    2014-11-01

    Ceritinib is a highly selective inhibitor of an important cancer target, anaplastic lymphoma kinase (ALK). Because it is an investigational compound, there is a need to develop a robust and reliable analytical method for its quantitative determination in human plasma. Here, we report the validation of a liquid chromatography tandem mass spectrometry (LC-MS/MS) method for the rapid quantification of ceritinib in human plasma. The method consists of protein precipitation with acetonitrile and salting-out assisted liquid-liquid extraction (SALLE) using a saturated solution of sodium chloride, prior to analysis by LC-MS/MS with electrospray ionization (ESI) in positive mode. Samples were eluted at 0.800 mL min(-1) on an Ascentis Express® C18 column (50 mm × 2.1 mm, 2.7 μm) with a mobile phase made of 0.1% formic acid in water (A) and 0.1% formic acid in acetonitrile (B). The method run time was 3.6 min and the lower limit of quantification (LLOQ) was estimated at 1.00 ng mL(-1) when using 0.100 mL of human plasma. The assay was fully validated and the method exhibited sufficient specificity, accuracy, precision, and sensitivity. In addition, recovery data and the matrix factor (MF) in normal and in hemolyzed plasma were assessed, while incurred sample stability (ISS) for ceritinib was demonstrated for at least 21 months at a storage temperature of -65 °C or below. The method was successfully applied to the measurement of ceritinib in clinical samples and the data obtained on incurred sample reanalysis (ISR) showed that our method was reliable and suitable to support the analysis of samples from clinical studies.
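
Quantification in such an assay ultimately rests on back-calculating concentrations from a linear calibration curve. A generic ordinary-least-squares sketch; the standard concentrations and detector responses below are hypothetical, not the paper's data:

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept for a calibration curve."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

def back_calculate(response, slope, intercept):
    """Concentration of an unknown sample from its detector response."""
    return (response - intercept) / slope

# Hypothetical calibration standards (ng/mL vs. peak-area ratio to the IS):
conc = [1, 5, 25, 100, 500, 1500]
resp = [0.021, 0.101, 0.502, 2.03, 10.1, 30.2]
m, b = fit_line(conc, resp)
print(back_calculate(5.05, m, b))   # an unknown sample's response
```

In practice, bioanalytical methods spanning wide ranges often use weighted regression (e.g. 1/x or 1/x^2); plain OLS is used here only to keep the sketch short.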

  2. Review: domestic animal forensic genetics - biological evidence, genetic markers, analytical approaches and challenges.

    PubMed

    Kanthaswamy, S

    2015-10-01

    This review highlights the importance of domestic animal genetic evidence sources, genetic testing, markers and analytical approaches as well as the challenges this field is facing in view of the de facto 'gold standard' of human DNA identification. Because of the genetic similarity between humans and domestic animals, genetic analysis of domestic animal hair, saliva, urine, blood and other biological material has generated vital investigative leads that have been admitted into a variety of court proceedings, including criminal and civil litigation. Information on validated short tandem repeat, single nucleotide polymorphism and mitochondrial DNA markers and public access to genetic databases for forensic DNA analysis is becoming readily available. Although the fundamental aspects of animal forensic genetic testing may be reliable and acceptable, animal forensic testing still lacks the standardized testing protocols that human genetic profiling requires, probably because of the absence of monetary support from government agencies and the difficulty in promoting cooperation among competing laboratories. Moreover, there is a lack of consensus about how best to present the results and expert opinion to comply with court standards and bear judicial scrutiny. This has been the single most persistent challenge ever since the earliest use of domestic animal forensic genetic testing in a criminal case in the mid-1990s. Crime laboratory accreditation ensures that genetic test results have the courts' confidence. Because accreditation requires significant commitments of effort, time and resources, the vast majority of animal forensic genetic laboratories are not accredited nor are their analysts certified forensic examiners. The relevance of domestic animal forensic genetics in the criminal justice system is undeniable.
However, further improvements are needed in a wide range of supporting resources, including standardized quality assurance and control protocols for sample handling, evidence testing, statistical analysis and reporting that meet the rules of scientific acceptance, reliability and human forensic identification standards. © 2015 Stichting International Foundation for Animal Genetics.

  3. Bona fide colour: DNA prediction of human eye and hair colour from ancient and contemporary skeletal remains

    PubMed Central

    2013-01-01

    Background DNA analysis of ancient skeletal remains is invaluable in evolutionary biology for exploring the history of species, including humans. Contemporary human bones and teeth, however, are relevant in forensic DNA analyses that deal with the identification of perpetrators, missing persons, disaster victims or family relationships. They may also provide useful information towards unravelling controversies that surround famous historical individuals. Retrieving information about a deceased person's externally visible characteristics can be informative in both types of DNA analyses. Recently, we demonstrated that human eye and hair colour can be reliably predicted from DNA using the HIrisPlex system. Here we test the feasibility of the novel HIrisPlex system for establishing the eye and hair colour of deceased individuals from skeletal remains of various post-mortem time ranges and storage conditions. Methods Twenty-one teeth between 1 and approximately 800 years of age and 5 contemporary bones were subjected to DNA extraction using a standard organic protocol, followed by analysis using the HIrisPlex system. Results Twenty-three out of 26 bone DNA extracts yielded the full 24 SNP HIrisPlex profile, therefore successfully allowing model-based eye and hair colour prediction. HIrisPlex analysis of a tooth from the Polish general Władysław Sikorski (1881 to 1943) revealed blue eye colour and blond hair colour, which was positively verified from reliable documentation. The partial profiles collected in the remaining three cases (two contemporary samples and a 14th century sample) were sufficient for eye colour prediction. Conclusions Overall, we demonstrate that the HIrisPlex system is suitable, sufficiently sensitive and robust to successfully predict eye and hair colour from ancient and contemporary skeletal remains.
Our findings, therefore, highlight the HIrisPlex system as a promising tool in future routine forensic casework involving skeletal remains, including ancient DNA studies, for the prediction of eye and hair colour of deceased individuals. PMID:23317428

  4. Bona fide colour: DNA prediction of human eye and hair colour from ancient and contemporary skeletal remains.

    PubMed

    Draus-Barini, Jolanta; Walsh, Susan; Pośpiech, Ewelina; Kupiec, Tomasz; Głąb, Henryk; Branicki, Wojciech; Kayser, Manfred

    2013-01-14

    DNA analysis of ancient skeletal remains is invaluable in evolutionary biology for exploring the history of species, including humans. Contemporary human bones and teeth, however, are relevant in forensic DNA analyses that deal with the identification of perpetrators, missing persons, disaster victims or family relationships. They may also provide useful information towards unravelling controversies that surround famous historical individuals. Retrieving information about a deceased person's externally visible characteristics can be informative in both types of DNA analyses. Recently, we demonstrated that human eye and hair colour can be reliably predicted from DNA using the HIrisPlex system. Here we test the feasibility of the novel HIrisPlex system for establishing the eye and hair colour of deceased individuals from skeletal remains of various post-mortem time ranges and storage conditions. Twenty-one teeth between 1 and approximately 800 years of age and 5 contemporary bones were subjected to DNA extraction using a standard organic protocol, followed by analysis using the HIrisPlex system. Twenty-three out of 26 bone DNA extracts yielded the full 24 SNP HIrisPlex profile, therefore successfully allowing model-based eye and hair colour prediction. HIrisPlex analysis of a tooth from the Polish general Władysław Sikorski (1881 to 1943) revealed blue eye colour and blond hair colour, which was positively verified from reliable documentation. The partial profiles collected in the remaining three cases (two contemporary samples and a 14th century sample) were sufficient for eye colour prediction. Overall, we demonstrate that the HIrisPlex system is suitable, sufficiently sensitive and robust to successfully predict eye and hair colour from ancient and contemporary skeletal remains.
Our findings, therefore, highlight the HIrisPlex system as a promising tool in future routine forensic casework involving skeletal remains, including ancient DNA studies, for the prediction of eye and hair colour of deceased individuals.

  5. Lunar Regenerative Fuel Cell (RFC) Reliability Testing for Assured Mission Success

    NASA Technical Reports Server (NTRS)

    Bents, David J.

    2009-01-01

    NASA's Constellation program has selected the closed cycle hydrogen oxygen Polymer Electrolyte Membrane (PEM) Regenerative Fuel Cell (RFC) as its baseline solar energy storage system for the lunar outpost and manned rover vehicles. Since the outpost and manned rovers are "human-rated," these energy storage systems will have to be of proven reliability exceeding 99 percent over the length of the mission. Because of the low (TRL=5) development state of the closed cycle hydrogen oxygen PEM RFC at present, and because there is no equivalent technology base in the commercial sector from which to draw or infer reliability information, NASA will have to spend significant resources developing this technology from TRL 5 to TRL 9, and will have to embark upon an ambitious reliability development program to make this technology ready for a manned mission. Because NASA would be the first user of this new technology, NASA will likely have to bear all the costs associated with its development. When well-known reliability estimation techniques are applied to the hydrogen oxygen RFC to determine the amount of testing that will be required to assure RFC unit reliability over the life of the mission, the analysis indicates the reliability testing phase by itself will take at least 2 yr, and could take up to 6 yr depending on the number of QA units that are built and tested and the individual unit reliability that is desired. The cost and schedule impacts of reliability development need to be considered in NASA's Exploration Technology Development Program (ETDP) plans, since life cycle testing to build meaningful reliability data is the only way to assure "return to the moon, this time to stay, then on to Mars" mission success.
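
The multi-year testing estimate follows from standard zero-failure demonstration arithmetic: assuming exponential failures, demonstrating reliability R over a mission time t at confidence C with no observed failures requires total test time T = t * ln(1 - C) / ln(R). A sketch with illustrative inputs (not NASA's actual figures):

```python
import math

def required_test_time(reliability, confidence, mission_hours, units):
    """Failure-free test time per unit needed to demonstrate `reliability`
    over `mission_hours` at `confidence`, assuming exponential failures:
    total time T = mission * ln(1 - C) / ln(R), split across `units` articles."""
    total = mission_hours * math.log(1.0 - confidence) / math.log(reliability)
    return total / units

# Illustrative: 99% reliability over a 1-year mission, 95% confidence,
# 100 test articles running in parallel.
year = 8760.0
print(required_test_time(0.99, 0.95, year, 100) / year, "years per article")
```

Even with 100 articles this comes to roughly three failure-free years each, which is why the reliability testing phase stretches over years and why the number of QA units built drives the schedule.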

  6. EPRI/NRC-RES fire human reliability analysis guidelines.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, Stuart R.; Cooper, Susan E.; Najafi, Bijan

    2010-03-01

    During the 1990s, the Electric Power Research Institute (EPRI) developed methods for fire risk analysis to support its utility members in the preparation of responses to Generic Letter 88-20, Supplement 4, 'Individual Plant Examination - External Events' (IPEEE). This effort produced a Fire Risk Assessment methodology for operations at power that was used by the majority of U.S. nuclear power plants (NPPs) in support of the IPEEE program and several NPPs overseas. Although these methods were acceptable for accomplishing the objectives of the IPEEE, EPRI and the U.S. Nuclear Regulatory Commission (NRC) recognized that they required upgrades to support current requirements for risk-informed, performance-based (RI/PB) applications. In 2001, EPRI and the USNRC's Office of Nuclear Regulatory Research (RES) embarked on a cooperative project to improve the state-of-the-art in fire risk assessment to support a new risk-informed environment in fire protection. This project produced a consensus document, NUREG/CR-6850 (EPRI 1011989), entitled 'Fire PRA Methodology for Nuclear Power Facilities,' which addressed fire risk for at-power operations. NUREG/CR-6850 developed high level guidance on the process for identification and inclusion of human failure events (HFEs) into the fire PRA (FPRA), and a methodology for assigning quantitative screening values to these HFEs. It outlined the initial considerations of performance shaping factors (PSFs) and related fire effects that may need to be addressed in developing best-estimate human error probabilities (HEPs). However, NUREG/CR-6850 did not describe a methodology to develop best-estimate HEPs given the PSFs and the fire-related effects. In 2007, EPRI and RES embarked on another cooperative project to develop explicit guidance for estimating HEPs for human failure events under fire generated conditions, building upon existing human reliability analysis (HRA) methods.
This document provides a methodology and guidance for conducting a fire HRA. This process includes identification and definition of post-fire human failure events, qualitative analysis, quantification, recovery, dependency, and uncertainty. This document provides three approaches to quantification: screening, scoping, and detailed HRA. Screening is based on the guidance in NUREG/CR-6850, with some additional guidance for scenarios with long time windows. Scoping is a new approach to quantification developed specifically to support the iterative nature of fire PRA quantification. Scoping is intended to provide less conservative HEPs than screening, but requires fewer resources than a detailed HRA. For detailed HRA quantification, guidance has been developed on how to apply existing methods to assess post-fire HEPs.
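
The detailed-HRA step builds on existing methods that scale a nominal HEP by performance shaping factors. One such scheme is the SPAR-H style adjustment, sketched below; the nominal HEP and PSF multipliers are hypothetical illustrations, not values from the guidance:

```python
def adjusted_hep(nominal_hep, psf_multipliers):
    """Nominal HEP scaled by performance shaping factors, with the SPAR-H
    style adjustment that keeps the result below 1.0 when the composite
    multiplier is large:  HEP = N*P / (N*(P - 1) + 1)."""
    composite = 1.0
    for m in psf_multipliers:
        composite *= m
    return nominal_hep * composite / (nominal_hep * (composite - 1.0) + 1.0)

# Illustrative fire-scenario PSFs (hypothetical): degraded ergonomics (x10)
# and high stress (x2) applied to a nominal HEP of 0.01.
print(adjusted_hep(0.01, [10.0, 2.0]))
```

The adjustment term in the denominator is what keeps the product from exceeding 1.0 when several strong PSFs stack up in a severe fire scenario.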

  7. Quantitation of total homocysteine in human plasma by derivatization to its N(O,S)-propoxycarbonyl propyl ester and gas chromatography-mass spectrometry analysis.

    PubMed

    Sass, J O; Endres, W

    1997-08-01

    Much evidence supports the hypothesis that mild or moderate hyperhomocysteinaemia represents an important and independent risk factor for occlusive vascular diseases. Therefore, the accurate and reliable determination of total plasma homocysteine has gained major importance for risk assessment. Furthermore, it can help in the detection of folate and vitamin B12 deficiency. This has prompted us to develop a sensitive gas chromatography-mass spectrometry (GC-MS) method in order to quantify total homocysteine in human plasma. Prior to chromatography, reduced homocysteine was released from disulfide bonds by incubation with excess dithiothreitol and converted into its N(O,S)-propoxycarbonyl propyl ester by derivatization with n-propyl chloroformate. Aminoethylcysteine served as internal standard. The method proved to be highly linear over the entire concentration range examined (corresponding to 0-266 microM homocysteine) and showed intra-assay and inter-assay variation (relative standard deviations) of approximately 5 and 5-10%, respectively. External quality control by comparison with duplicate analysis performed on a HPLC-based system revealed satisfactory correlation. The newly developed GC-MS based method provides simple, reliable and fast quantification of total homocysteine and requires only inexpensive chemicals, which are easy to obtain.
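
The intra- and inter-assay variation quoted above is the relative standard deviation (RSD) of replicate measurements. A minimal sketch of the computation; the replicate values below are hypothetical, not the paper's data:

```python
def rsd_percent(values):
    """Relative standard deviation (coefficient of variation) in percent,
    the metric behind intra- and inter-assay precision figures."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / (n - 1)) ** 0.5  # sample SD
    return 100.0 * sd / mean

# Hypothetical replicate measurements of one plasma pool (micromolar):
replicates = [10.2, 10.8, 9.9, 10.5, 10.1, 10.6]
print(round(rsd_percent(replicates), 1))
```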

  8. Radiocarbon dating uncertainty and the reliability of the PEWMA method of time-series analysis for research on long-term human-environment interaction

    PubMed Central

    Carleton, W. Christopher; Campbell, David

    2018-01-01

    Statistical time-series analysis has the potential to improve our understanding of human-environment interaction in deep time. However, radiocarbon dating, the most common chronometric technique in archaeological and palaeoenvironmental research, creates challenges for established statistical methods. The methods assume that observations in a time-series are precisely dated, but this assumption is often violated when calibrated radiocarbon dates are used because they usually have highly irregular uncertainties. As a result, it is unclear whether the methods can be reliably used on radiocarbon-dated time-series. With this in mind, we conducted a large simulation study to investigate the impact of chronological uncertainty on a potentially useful time-series method. The method is a type of regression involving a prediction algorithm called the Poisson Exponentially Weighted Moving Average (PEWMA). It is designed for use with count time-series data, which makes it applicable to a wide range of questions about human-environment interaction in deep time. Our simulations suggest that the PEWMA method can often correctly identify relationships between time-series despite chronological uncertainty. When two time-series are correlated with a coefficient of 0.25, the method is able to identify that relationship correctly 20–30% of the time, provided the time-series contain low noise levels. With correlations of around 0.5, it is capable of correctly identifying correlations despite chronological uncertainty more than 90% of the time. While further testing is desirable, these findings indicate that the method can be used to test hypotheses about long-term human-environment interaction with a reasonable degree of confidence. PMID:29351329
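
The published PEWMA model is a Poisson state-space regression; as a simplified stand-in (not the published algorithm), the exponentially weighted moving average at its core can be sketched as a one-step-ahead forecaster for count data, using hypothetical counts:

```python
def ewma_forecast(counts, weight):
    """One-step-ahead forecasts for a count series using an exponentially
    weighted moving average of past observations. This is a simplified
    stand-in for the Poisson-based PEWMA model, not the published algorithm."""
    mean = float(counts[0])          # initialize at the first observation
    forecasts = [mean]
    for y in counts[:-1]:
        # Blend the running mean with the latest observed count.
        mean = weight * mean + (1.0 - weight) * y
        forecasts.append(mean)
    return forecasts

# Hypothetical radiocarbon-dated event counts per century:
counts = [3, 4, 2, 6, 8, 7, 9, 12, 10, 14]
print(ewma_forecast(counts, 0.8))
```

The residuals between such forecasts and a covariate series are what a PEWMA-style regression examines; the simulation study asks how robust that comparison is when each count's date is itself uncertain.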

  9. Radiocarbon dating uncertainty and the reliability of the PEWMA method of time-series analysis for research on long-term human-environment interaction.

    PubMed

    Carleton, W Christopher; Campbell, David; Collard, Mark

    2018-01-01

    Statistical time-series analysis has the potential to improve our understanding of human-environment interaction in deep time. However, radiocarbon dating, the most common chronometric technique in archaeological and palaeoenvironmental research, creates challenges for established statistical methods. The methods assume that observations in a time-series are precisely dated, but this assumption is often violated when calibrated radiocarbon dates are used because they usually have highly irregular uncertainties. As a result, it is unclear whether the methods can be reliably used on radiocarbon-dated time-series. With this in mind, we conducted a large simulation study to investigate the impact of chronological uncertainty on a potentially useful time-series method. The method is a type of regression involving a prediction algorithm called the Poisson Exponentially Weighted Moving Average (PEWMA). It is designed for use with count time-series data, which makes it applicable to a wide range of questions about human-environment interaction in deep time. Our simulations suggest that the PEWMA method can often correctly identify relationships between time-series despite chronological uncertainty. When two time-series are correlated with a coefficient of 0.25, the method is able to identify that relationship correctly 20-30% of the time, provided the time-series contain low noise levels. With correlations of around 0.5, it is capable of correctly identifying correlations despite chronological uncertainty more than 90% of the time. While further testing is desirable, these findings indicate that the method can be used to test hypotheses about long-term human-environment interaction with a reasonable degree of confidence.

  10. Risk-based maintenance of ethylene oxide production facilities.

    PubMed

    Khan, Faisal I; Haddara, Mahmoud R

    2004-05-20

    This paper discusses a methodology for the design of an optimum inspection and maintenance program. The methodology, called risk-based maintenance (RBM), is based on integrating a reliability approach and a risk assessment strategy to obtain an optimum maintenance schedule. First, the likely equipment failure scenarios are formulated. Out of the many likely failure scenarios, the most probable ones are subjected to a detailed study. Detailed consequence analysis is done for the selected scenarios. Subsequently, these failure scenarios are subjected to a fault tree analysis to determine their probabilities. Finally, risk is computed by combining the results of the consequence and the probability analyses. The calculated risk is compared against known acceptable criteria. The frequencies of the maintenance tasks are obtained by minimizing the estimated risk. A case study involving an ethylene oxide production facility is presented. Of the five most hazardous units considered, the pipeline used for the transportation of ethylene is found to have the highest risk. Using available failure data and a lognormal reliability distribution function, human health risk factors are calculated. Both societal risk factors and individual risk factors exceeded the acceptable risk criteria. To determine an optimal maintenance interval, a reverse fault tree analysis was used. The maintenance interval was determined such that the original high risk is brought down to an acceptable level. A sensitivity analysis was also undertaken to study the impact of changing the distribution of the reliability model, as well as the error in the distribution parameters, on the maintenance interval.
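
The core RBM step, choosing the longest maintenance interval whose risk stays below the acceptance criterion, can be sketched with an exponential failure model (a simplification of the paper's lognormal model; all numbers below are illustrative, not the case-study data):

```python
import math

def max_interval(mtbf_hours, consequence_cost, acceptable_risk):
    """Longest inspection interval keeping risk = P(failure) * consequence
    below the acceptable level, assuming exponential failures between
    inspections:  P_f(tau) = 1 - exp(-tau / MTBF)  =>
    tau = -MTBF * ln(1 - acceptable_risk / consequence_cost)."""
    p_acceptable = acceptable_risk / consequence_cost
    return -mtbf_hours * math.log(1.0 - p_acceptable)

# Hypothetical pipeline figures (illustrative only): MTBF 100,000 h,
# failure consequence $5M, acceptable risk $10,000 per inspection interval.
print(max_interval(1e5, 5e6, 1e4), "hours between inspections")
```

Tightening the acceptable-risk criterion or raising the consequence estimate shortens the interval, which is the trade-off the paper's sensitivity analysis explores.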

  11. A reduced factor structure for the PROQOL-HIV questionnaire provided reliable indicators of health-related quality of life.

    PubMed

    Lalanne, Christophe; Chassany, Olivier; Carrieri, Patrizia; Marcellin, Fabienne; Armstrong, Andrew R; Lert, France; Spire, Bruno; Dray-Spira, Rosemary; Duracinsky, Martin

    2016-04-01

    To identify a simplified factor structure for the PROQOL-human immunodeficiency virus (HIV) questionnaire to improve the measurement of the health-related quality of life (HRQL) of HIV-positive patients in clinical care and research settings. HRQL data were collected using the eight-dimension PROQOL-HIV questionnaire from 2,537 patients (VESPA2 study). Exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) validated a simpler four-factor structure and assessed measurement invariance (MI). Multigroup analysis assessed the effect of sex, age, and antiretroviral therapy (ART) on the resulting factor scores. Correlations with symptom and Short Form (SF)-12 self-reports assessed convergent validity. Item analysis, EFA, and CFAs confirmed the validity [comparative fit index (CFI), 0.948; root mean square error of approximation, 0.064] and reliability (α's ≥ 0.8) of four dimensions: physical health and symptoms, health concerns and mental distress, social and intimate relationships, and treatment-related impact. Strong MI was demonstrated across sex and age (decrease in CFI <0.01). A multiple-cause multiple-indicator model indicated that HRQL correlated as expected with sex, age, and ART status. Correlations of HRQL, symptom reports, and SF-12 scores satisfied the convergent validity criterion. The simplified factor structure and scoring scheme for PROQOL-HIV will allow clinicians to monitor with greater reliability the HRQL of patients in clinical care and research settings. Copyright © 2016 Elsevier Inc. All rights reserved.
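
The reliability figures quoted above (α's ≥ 0.8) are Cronbach's alpha values. A minimal sketch of the computation on hypothetical item scores (not VESPA2 data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns (one list per item):
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1.0 - sum(var(col) for col in items) / var(totals))

# Hypothetical 4-item dimension scored by 6 respondents:
items = [[3, 4, 2, 5, 4, 3],
         [3, 5, 2, 4, 4, 3],
         [4, 4, 1, 5, 3, 3],
         [3, 4, 2, 5, 5, 2]]
print(round(cronbach_alpha(items), 2))
```

Items that rise and fall together inflate the total-score variance relative to the item variances, pushing alpha toward 1; the hypothetical scores above exceed the 0.8 threshold reported for the four PROQOL-HIV dimensions.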

  12. Nuclear reactor safety research since three mile island.

    PubMed

    Mynatt, F R

    1982-04-09

    The Three Mile Island nuclear power plant accident has resulted in a redirection of reactor safety research priorities. The small release of radioactive iodine to the environment (13 to 17 curies, out of a total radioactivity release of 2.4 million to 13 million curies) has led to a new emphasis on the physical chemistry of fission product behavior in accidents; the fact that the nuclear core was severely damaged but did not melt down has opened a new accident regime, that of the degraded core; and the role of the operators in the progression and severity of the accident has shifted emphasis from equipment reliability to human reliability. As research progresses in these areas, the technical base for regulation and risk analysis will change substantially.

  13. Applications of Human Performance Reliability Evaluation Concepts and Demonstration Guidelines

    DTIC Science & Technology

    1977-03-15

    ship stops dead in the water and the AN/SQS-26 operator recommends a new heading (000°). At T + 14 minutes, the target ship begins a hard turn to...Various Simulated Conditions 82 9 Human Reliability for Each Simulated Operator (Baseline Run) 83 10 Human and Equipment Availability under

  14. Integrative analyses of human reprogramming reveal dynamic nature of induced pluripotency

    PubMed Central

    Cacchiarelli, Davide; Trapnell, Cole; Ziller, Michael J.; Soumillon, Magali; Cesana, Marcella; Karnik, Rahul; Donaghey, Julie; Smith, Zachary D.; Ratanasirintrawoot, Sutheera; Zhang, Xiaolan; Ho Sui, Shannan J.; Wu, Zhaoting; Akopian, Veronika; Gifford, Casey A.; Doench, John; Rinn, John L.; Daley, George Q.; Meissner, Alexander; Lander, Eric S.; Mikkelsen, Tarjei S.

    2015-01-01

    Summary Induced pluripotency is a promising avenue for disease modeling and therapy, but the molecular principles underlying this process, particularly in human cells, remain poorly understood due to donor-to-donor variability and intercellular heterogeneity. Here we constructed and characterized a clonal, inducible human reprogramming system that provides a reliable source of cells at any stage of the process. This system enabled integrative transcriptional and epigenomic analysis across the human reprogramming timeline at high resolution. We observed distinct waves of gene network activation, including the ordered reactivation of broad developmental regulators followed by early embryonic patterning genes and culminating in the emergence of a signature reminiscent of pre-implantation stages. Moreover, complementary functional analyses allowed us to identify and validate novel regulators of the reprogramming process. Altogether, this study sheds light on the molecular underpinnings of induced pluripotency in human cells and provides a robust cell platform for further studies. PMID:26186193

  15. Applications of integrated human error identification techniques on the chemical cylinder change task.

    PubMed

    Cheng, Ching-Min; Hwang, Sheue-Ling

    2015-03-01

    This paper outlines the human error identification (HEI) techniques that currently exist to assess latent human errors. Many formal error identification techniques have existed for years, but few have been validated to cover latent human error analysis in different domains. This study considers many possible error modes and influential factors, including external error modes, internal error modes, psychological error mechanisms, and performance shaping factors, and integrates several execution procedures and frameworks of HEI techniques. The case study in this research was the operational process of changing chemical cylinders in a factory. In addition, the integrated HEI method was used to assess the operational processes and the system's reliability. It was concluded that the integrated method is a valuable aid to develop much safer operational processes and can be used to predict human error rates on critical tasks in the plant. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  16. The validity, reliability, and utility of the iButton® for measurement of body temperature circadian rhythms in sleep/wake research.

    PubMed

    Hasselberg, Michael J; McMahon, James; Parker, Kathy

    2013-01-01

    Changes in core body temperature due to heat transfer through the skin have a major influence on sleep regulation. Traditional measures of skin temperature are often complicated by extensive wiring and are not practical for use in normal living conditions. This review describes studies examining the reliability, validity, and utility of the iButton®, a wireless peripheral thermometry device, in sleep/wake research. A review was conducted of English language literature on the iButton as a measure of circadian body temperature rhythms associated with the sleep/wake cycle. Seven studies of the iButton as a measure of human body temperature were included. The iButton was found to be a reliable and valid measure of body temperature. Its application to human skin was shown to be comfortable and tolerable with no significant adverse reactions. Distal skin temperatures were negatively correlated with sleep/wake activity, and the temperature gradient between the distal and proximal skin (DPG) was identified as an accurate physiological correlate of sleep propensity. Methodological issues included site of data logger placement, temperature masking factors, and temperature data analysis. The iButton is an inexpensive, wireless data logger that can be used to obtain a valid measurement of human skin temperature. It is a practical alternative to traditional measures of circadian rhythms in sleep/wake research. Further research is needed to determine the utility of the iButton in vulnerable populations, including those with neurodegenerative disorders and memory impairment and pediatric populations. Copyright © 2011 Elsevier B.V. All rights reserved.

  17. Reliability Practice at NASA Goddard Space Flight Center

    NASA Technical Reports Server (NTRS)

    Pruessner, Paula S.; Li, Ming

    2008-01-01

    This paper briefly describes the Reliability and Maintainability (R&M) programs performed directly by the reliability branch at Goddard Space Flight Center (GSFC). The mission assurance requirements flow-down is explained. GSFC practices for PRA, reliability prediction, fault tree analysis, reliability block diagrams, FMEA, part stress and derating analysis, worst-case analysis, trend analysis, and limited-life items are presented. Lessons learned are summarized, and recommendations for improvement are identified.

  18. Development of analyses by high-performance liquid chromatography and liquid chromatography/tandem mass spectrometry of bilberry (Vaccinium myrtillus) anthocyanins in human plasma and urine.

    PubMed

    Cooke, Darren N; Thomasset, Sarah; Boocock, David J; Schwarz, Michael; Winterhalter, Peter; Steward, William P; Gescher, Andreas J; Marczylo, Timothy H

    2006-09-20

    Anthocyanins are potent antioxidants that may possess chronic disease preventive properties. Here, rapid, reliable, and reproducible solid-phase extraction, high-performance liquid chromatography (HPLC), and mass spectrometry techniques are described for the isolation, separation, and identification of anthocyanins in human plasma and urine. Recoveries of cyanidin-3-glucoside (C3G) were 91% from water, 71% from plasma, and 81% from urine. Intra- and interday variations for C3G extraction were 9 and 9.1% in plasma and 7.1 and 9.1% in urine and were less than 15% for all anthocyanins from a standardized bilberry extract (mirtoselect). Analysis of mirtoselect by HPLC with UV detection produced spectra with 15 peaks compatible with anthocyanin components found in mirtoselect within a total run time of 15 min. Chromatographic analysis of human urine obtained after an oral dose of mirtoselect yielded 19 anthocyanin peaks. Mass spectrometric analysis employing multiple reaction monitoring suggests the presence of unchanged anthocyanins and anthocyanidin glucuronide metabolites.
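
    The intra- and inter-day variations reported here are coefficients of variation (CV%): the sample standard deviation expressed as a percentage of the mean. A minimal sketch (the replicate recovery values below are hypothetical, not the study's raw data):

```python
import math

def cv_percent(values):
    """Sample-SD coefficient of variation, as a percentage of the mean."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return 100.0 * sd / mean

# Hypothetical replicate C3G recoveries (%) from spiked plasma on one day
print(round(cv_percent([70.1, 72.4, 69.8, 71.3, 70.9]), 1))  # → 1.5
```

    A CV% computed from replicates within one day gives the intra-day variation; pooling replicates across days gives the inter-day figure.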

  19. Occurrence of Staphylococcus nepalensis strains in different sources including human clinical material.

    PubMed

    Nováková, Dana; Pantůcek, Roman; Petrás, Petr; Koukalová, Dagmar; Sedlácek, Ivo

    2006-10-01

    Five isolates of coagulase-negative staphylococci were obtained from human urine, the gastrointestinal tract of squirrel monkeys, pig skin, and the environment. All key biochemical characteristics of the tested strains corresponded with the description of the Staphylococcus xylosus species. However, partial 16S rRNA gene sequences obtained from the analysed strains corresponded with those of Staphylococcus nepalensis reference strains, except for two strains which differed in one residue. Ribotyping with EcoRI and HindIII restriction enzymes, whole-cell protein profile analysis performed by SDS-PAGE, and SmaI macrorestriction analysis were used for more precise characterization and identification of the analysed strains. The results showed that EcoRI and HindIII ribotyping and whole-cell protein fingerprinting are suitable and reliable methods for differentiating S. nepalensis strains from the other novobiocin-resistant staphylococci, whereas macrorestriction analysis was found to be a good tool for strain typing. The isolation of S. nepalensis is sporadic, and to the best of our knowledge this study is the first report of the occurrence of this species in human clinical material as well as in other sources.

  20. Altair Lander Life Support: Design Analysis Cycles 4 and 5

    NASA Technical Reports Server (NTRS)

    Anderson, Molly; Curley, Su; Rotter, Henry; Stambaugh, Imelda; Yagoda, Evan

    2011-01-01

    Life support systems are a critical part of human exploration beyond low Earth orbit. NASA's Altair Lunar Lander team is pursuing efficient solutions to the technical challenges of human spaceflight. Life support design efforts up through Design Analysis Cycle (DAC) 4 focused on finding lightweight and reliable solutions for the Sortie and Outpost missions within the Constellation Program. In DAC-4 and later follow-on work, changes were made to add functionality for new requirements accepted by the Altair project, and to update the design as knowledge about certain issues or hardware matured. In DAC-5, the Altair project began to consider mission architectures outside the Constellation baseline. Selecting the optimal life support system design is very sensitive to mission duration; when the mission goals and architecture change, several trade studies must be conducted to determine the appropriate design. Finally, several areas of work developed through the Altair project may be applicable to other vehicle concepts for microgravity missions. Maturing the Altair life support system analysis, design, and requirements can provide important information for developers of a wide range of other human vehicles.

  1. Altair Lander Life Support: Design Analysis Cycles 4 and 5

    NASA Technical Reports Server (NTRS)

    Anderson, Molly; Curley, Su; Rotter, Henry; Yagoda, Evan

    2010-01-01

    Life support systems are a critical part of human exploration beyond low Earth orbit. NASA's Altair Lunar Lander team is pursuing efficient solutions to the technical challenges of human spaceflight. Life support design efforts up through Design Analysis Cycle (DAC) 4 focused on finding lightweight and reliable solutions for the Sortie and Outpost missions within the Constellation Program. In DAC-4 and later follow-on work, changes were made to add functionality for new requirements accepted by the Altair project, and to update the design as knowledge about certain issues or hardware matured. In DAC-5, the Altair project began to consider mission architectures outside the Constellation baseline. Selecting the optimal life support system design is very sensitive to mission duration; when the mission goals and architecture change, several trade studies must be conducted to determine the appropriate design. Finally, several areas of work developed through the Altair project may be applicable to other vehicle concepts for microgravity missions. Maturing the Altair life support system analysis, design, and requirements can provide important information for developers of a wide range of other human vehicles.

  2. Test-retest reliability of functional connectivity networks during naturalistic fMRI paradigms.

    PubMed

    Wang, Jiahui; Ren, Yudan; Hu, Xintao; Nguyen, Vinh Thai; Guo, Lei; Han, Junwei; Guo, Christine Cong

    2017-04-01

    Functional connectivity analysis has become a powerful tool for probing human brain function and its breakdown in neuropsychiatric disorders. So far, most studies have adopted a resting-state paradigm to examine functional connectivity networks in the brain, owing to its low task demand and high tolerance, which are essential for clinical studies. However, the test-retest reliability of resting-state connectivity measures is only moderate, potentially due to its low behavioral constraint. On the other hand, naturalistic neuroimaging paradigms, an emerging approach for cognitive neuroscience with high ecological validity, could potentially improve the reliability of functional connectivity measures. To test this hypothesis, we characterized the test-retest reliability of functional connectivity measures during a natural viewing condition, and benchmarked it against resting-state connectivity measures acquired within the same functional magnetic resonance imaging (fMRI) session. We found that the reliability of connectivity and graph theoretical measures of brain networks is significantly improved during natural viewing conditions over resting-state conditions, with an average increase of almost 50% across various connectivity measures. Not only do sensory networks for audio-visual processing become more reliable; higher-order brain networks, such as the default mode and attention networks, also appear to show higher reliability during natural viewing. Our results support the use of natural viewing paradigms in estimating functional connectivity of brain networks, and have important implications for clinical applications of fMRI. Hum Brain Mapp 38:2226-2241, 2017. © 2017 Wiley Periodicals, Inc.
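
    Test-retest reliability of this kind is commonly summarized with an intraclass correlation coefficient. As a hedged illustration (the specific ICC form and the toy data below are assumptions, not taken from the study), a minimal ICC(2,1) sketch:

```python
def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.

    `data` is a list of subjects, each a list of k repeated measurements
    (e.g., one connectivity value per scan session).
    """
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ms_rows = ss_rows / (n - 1)                       # between-subject variance
    ms_cols = ss_cols / (k - 1)                       # between-session variance
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Perfectly repeatable measurements across two sessions give ICC = 1.0
print(icc_2_1([[1, 1], [2, 2], [3, 3]]))  # → 1.0
```

    Higher ICC during natural viewing than at rest, computed subject-by-subject over repeated sessions, is the kind of comparison the abstract describes.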

  3. Sleep-monitoring, experiment M133. [electronic recording system for automatic analysis of human sleep patterns

    NASA Technical Reports Server (NTRS)

    Frost, J. D., Jr.; Salamy, J. G.

    1973-01-01

    The Skylab sleep-monitoring experiment simulated the timelines and environment expected during a 56-day Skylab mission. Two crewmembers utilized the data acquisition and analysis hardware, and their sleep characteristics were studied in an online fashion during a number of all-night recording sessions. Comparison of the results of online automatic analysis with those of postmission visual data analysis was favorable, confirming the feasibility of obtaining reliable objective information concerning sleep characteristics during the Skylab missions. One crewmember exhibited definite changes in certain sleep characteristics (e.g., increased sleep latency, increased time awake during the first third of the night, and decreased total sleep time) during the mission.

  4. Human vs. Computer Diagnosis of Students' Natural Selection Knowledge: Testing the Efficacy of Text Analytic Software

    NASA Astrophysics Data System (ADS)

    Nehm, Ross H.; Haertig, Hendrik

    2012-02-01

    Our study examines the efficacy of Computer Assisted Scoring (CAS) of open-response text relative to expert human scoring within the complex domain of evolutionary biology. Specifically, we explored whether CAS can diagnose the explanatory elements (or Key Concepts) that comprise undergraduate students' explanatory models of natural selection with equal fidelity as expert human scorers in a sample of >1,000 essays. We used SPSS Text Analysis 3.0 to perform our CAS and measure Kappa values (inter-rater reliability) of KC detection (i.e., computer-human rating correspondence). Our first analysis indicated that the text analysis functions (or extraction rules) developed and deployed in SPSSTA to extract individual Key Concepts (KCs) from three different items differing in several surface features (e.g., taxon, trait, type of evolutionary change) produced "substantial" (Kappa 0.61-0.80) or "almost perfect" (0.81-1.00) agreement. The second analysis explored the measurement of human-computer correspondence for KC diversity (the number of different accurate knowledge elements) in the combined sample of all 827 essays. Here we found outstanding correspondence; extraction rules generated using one prompt type are broadly applicable to other evolutionary scenarios (e.g., bacterial resistance, cheetah running speed, etc.). This result is encouraging, as it suggests that the development of new item sets may not necessitate the development of new text analysis rules. Overall, our findings suggest that CAS tools such as SPSS Text Analysis may compensate for some of the intrinsic limitations of currently used multiple-choice Concept Inventories designed to measure student knowledge of natural selection.
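
    The inter-rater agreement statistic used throughout this record, Cohen's kappa, corrects raw agreement for agreement expected by chance. A minimal sketch (the ratings below are hypothetical; SPSS Text Analysis computes this internally):

```python
from collections import Counter

def cohen_kappa(rater1, rater2):
    """Cohen's kappa for two raters coding the same items."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    p1, p2 = Counter(rater1), Counter(rater2)
    expected = sum(p1[c] * p2[c] for c in p1) / n ** 2  # chance agreement
    return (observed - expected) / (1 - expected)

# Hypothetical human vs. computer Key Concept codes for five essays
human    = [1, 1, 0, 1, 0]
computer = [1, 1, 0, 0, 0]
print(round(cohen_kappa(human, computer), 3))  # → 0.615, "substantial"
```

    The 0.61-0.80 ("substantial") and 0.81-1.00 ("almost perfect") bands cited in the abstract are the conventional Landis-Koch interpretation of this statistic.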

  5. Assessing performance of an Electronic Health Record (EHR) using Cognitive Task Analysis.

    PubMed

    Saitwal, Himali; Feng, Xuan; Walji, Muhammad; Patel, Vimla; Zhang, Jiajie

    2010-07-01

    Many Electronic Health Record (EHR) systems fail to provide user-friendly interfaces due to the lack of systematic consideration of human-centered computing issues. Such interfaces can be improved to provide easy-to-use, easy-to-learn, and error-resistant EHR systems to users. The objective was to evaluate the usability of an EHR system and suggest areas of improvement in the user interface. The user interface of the AHLTA (Armed Forces Health Longitudinal Technology Application) was analyzed using the Cognitive Task Analysis (CTA) method called GOMS (Goals, Operators, Methods, and Selection rules) and an associated technique called KLM (Keystroke Level Model). The GOMS method was used to evaluate the AHLTA user interface by classifying each step of a given task into Mental (Internal) or Physical (External) operators. This analysis was performed by two analysts independently, and the inter-rater reliability was computed to verify the reliability of the GOMS method. Further evaluation was performed using KLM to estimate the execution time required to perform the given task through application of its standard set of operators. The results are based on the analysis of 14 prototypical tasks performed by AHLTA users. The results show that on average a user needs to go through 106 steps to complete a task. To perform all 14 tasks, users would spend about 22 min (independent of system response time) on data entry, of which 11 min are spent on the more effortful mental operators. The inter-rater reliability for all 14 tasks was 0.8 (kappa), indicating good reliability of the method. This paper empirically identifies the following findings related to the performance of AHLTA: (1) a large number of average total steps to complete common tasks, (2) high average execution time, and (3) a large percentage of mental operators. The user interface can be improved by reducing (a) the total number of steps and (b) the percentage of mental effort required for the tasks. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
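
    KLM estimates execution time by summing standard operator times over the steps of a task. A minimal sketch using the commonly cited Card, Moran, and Newell operator values (the exact operator times used in the AHLTA study are not given here, so these defaults are assumptions):

```python
# Commonly cited KLM operator times in seconds (assumed defaults,
# not necessarily those applied in the AHLTA analysis).
OPERATOR_TIME = {
    "K": 0.20,   # keystroke (average skilled typist)
    "P": 1.10,   # point at a target with the mouse
    "H": 0.40,   # home hands between keyboard and mouse
    "M": 1.35,   # mental preparation
}

def klm_estimate(operators):
    """Sum operator times for a step sequence such as 'MHPK'."""
    return sum(OPERATOR_TIME[op] for op in operators)

# Mentally prepare, move hand to mouse, point at a field, click once:
print(round(klm_estimate("MHPK"), 2))  # → 3.05
```

    Summing such sequences over all 106 steps of a task yields the per-task execution times reported above, and the share contributed by "M" operators is the paper's "percentage of mental effort".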

  6. Validation and reliability of the sex estimation of the human os coxae using freely available DSP2 software for bioarchaeology and forensic anthropology.

    PubMed

    Brůžek, Jaroslav; Santos, Frédéric; Dutailly, Bruno; Murail, Pascal; Cunha, Eugenia

    2017-10-01

    A new tool for skeletal sex estimation based on measurements of the human os coxae is presented using skeletons from a metapopulation of identified adult individuals from twelve independent population samples. For reliable sex estimation, a posterior probability greater than 0.95 was considered to be the classification threshold: below this value, estimates are considered indeterminate. By providing free software, we aim to develop an even more disseminated method for sex estimation. Ten metric variables collected from 2,040 ossa coxa of adult subjects of known sex were recorded between 1986 and 2002 (reference sample). To test both the validity and reliability, a target sample consisting of two series of adult ossa coxa of known sex (n = 623) was used. The DSP2 software (Diagnose Sexuelle Probabiliste v2) is based on Linear Discriminant Analysis, and the posterior probabilities are calculated using an R script. For the reference sample, any combination of four dimensions provides a correct sex estimate in at least 99% of cases. The percentage of individuals for whom sex can be estimated depends on the number of dimensions; for all ten variables it is higher than 90%. Those results are confirmed in the target sample. Our posterior probability threshold of 0.95 for sex estimate corresponds to the traditional sectioning point used in osteological studies. DSP2 software is replacing the former version that should not be used anymore. DSP2 is a robust and reliable technique for sexing adult os coxae, and is also user friendly. © 2017 Wiley Periodicals, Inc.
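
    DSP2 classifies an os coxae only when the posterior probability exceeds 0.95, returning an indeterminate result otherwise. A one-dimensional Gaussian sketch of that decision rule (the real tool uses ten measurements and a discriminant function fitted to 2,040 reference specimens; the single variable, group means, and SD below are illustrative assumptions):

```python
import math

def gaussian(x, mu, sd):
    """Normal density, used here as a class-conditional likelihood."""
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def estimate_sex(x, mu_f, mu_m, sd, threshold=0.95):
    """Return 'F', 'M', or 'indeterminate' from one measurement.

    Equal priors and a shared SD are assumed, mirroring the
    posterior-probability threshold DSP2 applies.
    """
    lik_f, lik_m = gaussian(x, mu_f, sd), gaussian(x, mu_m, sd)
    post_f = lik_f / (lik_f + lik_m)
    if post_f >= threshold:
        return "F"
    if 1 - post_f >= threshold:
        return "M"
    return "indeterminate"

# Hypothetical pelvic measurement (mm): female mean 130, male mean 140, SD 4
print(estimate_sex(128, mu_f=130.0, mu_m=140.0, sd=4.0))  # → F
print(estimate_sex(135, mu_f=130.0, mu_m=140.0, sd=4.0))  # → indeterminate
```

    The indeterminate zone is what keeps the method's error rate at or below 1% at the cost of leaving some individuals unclassified.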

  7. Facial Aesthetic Outcomes of Cleft Surgery: Assessment of Discrete Lip and Nose Images Compared with Digital Symmetry Analysis.

    PubMed

    Deall, Ciara E; Kornmann, Nirvana S S; Bella, Husam; Wallis, Katy L; Hardwicke, Joseph T; Su, Ting-Li; Richard, Bruce M

    2016-10-01

    High-quality aesthetic outcomes are of paramount importance to children growing up after cleft lip and palate surgery. Establishing a validated and reliable assessment tool for cleft professionals and families will facilitate cleft units, surgeons, techniques, and protocols to be audited and compared with greater confidence. This study used exemplar images across a five-point aesthetic scale, identified in a pilot project, to score lips and noses as separate units and compared these human scores with computer-based SymNose symmetry scores. Forty-five assessors (17 cleft surgeons nationally and 28 other cleft professionals from the UK South West Tri-centre units), scored 25 standardized photographs, uploaded randomly onto a Web-based platform, twice. Each photograph was shown in three forms: lip and nose together, and separately cropped images of nose only and lip only. The same images were analyzed using the SymNose software program. Scoring lips gave the best intrarater and interrater reliabilities. Nose scores were more variable. Lip scoring associated most closely with the whole-image score. SymNose ranking of the lip images related highly to the same ranking by humans (p = 0.001). The exemplar images maintained their established previous ranking. Images illustrating the aesthetic outcome grades are confirmed. The lip score is reliable and seems to dominate in the whole-image score. Noses are much harder to score reliably. It appears that SymNose can score lip images very effectively by symmetry. Further use of SymNose will be investigated, and families of children with cleft will trial the scoring system. Therapeutic, III.

  8. Subtyping of Canadian isolates of Salmonella Enteritidis using Multiple Locus Variable Number Tandem Repeat Analysis (MLVA) alone and in combination with Pulsed-Field Gel Electrophoresis (PFGE) and phage typing.

    PubMed

    Ziebell, Kim; Chui, Linda; King, Robin; Johnson, Suzanne; Boerlin, Patrick; Johnson, Roger P

    2017-08-01

    Salmonella enterica subspecies enterica serovar Enteritidis (SE) is one of the most common causes of human salmonellosis and in Canada currently accounts for over 40% of human cases. Reliable subtyping of isolates is required for outbreak detection and source attribution. However, Pulsed-Field Gel Electrophoresis (PFGE), the current standard subtyping method for Salmonella spp., is compromised by the high genetic homogeneity of SE. Multiple Locus Variable Number Tandem Repeat Analysis (MLVA) was introduced to supplement PFGE, although there is a lack of data on the ability of MLVA to subtype Canadian isolates of SE. Three subtyping methods, PFGE, MLVA and phage typing, were compared for their discriminatory power when applied to three panels of Canadian SE isolates: Panel 1: 70 isolates representing the diversity of phage types (PTs) and PFGE subtypes within these PTs; Panel 2: 214 apparently unrelated SE isolates of the most common PTs; and Panel 3: 27 isolates from 10 groups of epidemiologically related strains. For Panel 2 isolates, four MLVA subtypes were shared among 74% of unrelated isolates, and in Panel 3, one MLVA subtype accounted for 62% of the isolates. For all panels, combining results from PFGE, MLVA and PT gave the best discrimination, except in Panel 1, where the combination of PT and PFGE was equally high due to the selection criteria for this panel. However, none of these methods is sufficiently discriminatory alone for reliable outbreak detection or source attribution, and they must be applied together to achieve sufficient discrimination for practical purposes. Even then, some large clusters were not differentiated adequately. More discriminatory methods are required for reliable subtyping of this genetically highly homogeneous serovar. This need will likely be met by whole genome sequence analysis, given the recent promising reports and as more laboratories implement this tool for outbreak response and surveillance. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Construction and validation of the midsagittal reference plane based on the skull base symmetry for three-dimensional cephalometric craniofacial analysis.

    PubMed

    Kim, Hak-Jin; Kim, Bong Chul; Kim, Jin-Geun; Zhengguo, Piao; Kang, Sang Hoon; Lee, Sang-Hwy

    2014-03-01

    The objective of this study was to determine the reliable midsagittal (MS) reference plane in practical ways for the three-dimensional craniofacial analysis on three-dimensional computed tomography images. Five normal human dry skulls and 20 normal subjects without any dysmorphoses or asymmetries were used. The accuracies and stability on repeated plane construction for almost every possible candidate MS plane based on the skull base structures were examined by comparing the discrepancies in distances and orientations from the reference points and planes of the skull base and facial bones on three-dimensional computed tomography images. The following reference points of these planes were stable, and their distribution was balanced: nasion and foramen cecum at the anterior part of the skull base, sella at the middle part, and basion and opisthion at the posterior part. The candidate reference planes constructed using the aforementioned reference points were thought to be reliable for use as an MS reference plane for the three-dimensional analysis of maxillofacial dysmorphosis.

  10. Fluorescent adduct formation with terbium: a novel strategy for transferrin glycoform identification in human body fluids and carbohydrate-deficient transferrin HPLC method validation.

    PubMed

    Sorio, Daniela; De Palo, Elio Franco; Bertaso, Anna; Bortolotti, Federica; Tagliaro, Franco

    2017-02-01

    This paper puts forward a new method for transferrin (Tf) glycoform analysis in body fluids based on the formation of a transferrin-terbium fluorescent adduct (TfFluo). The key aim is to validate the analytical procedure for carbohydrate-deficient transferrin (CDT), a traditional biochemical serum marker used to identify chronic alcohol abuse. Terbium added to a human body-fluid sample produced TfFluo. An anion exchange HPLC technique with fluorescence detection (λexc 298 nm, λem 550 nm) permitted clear separation and identification of Tf glycoform peaks without any interfering signals, allowing selective Tf sialoform analysis in human serum and in body fluids (cadaveric blood, cerebrospinal fluid, and dried blood spots) that are unsuitable for routine testing. Serum samples (n = 78) were analyzed by both the traditional absorbance (Abs) and the fluorescence (Fl) HPLC methods, and CDT% levels demonstrated a significant correlation (p < 0.001, Pearson). Intra- and inter-run CV% were 3.1 and 4.6%, respectively. The cut-off of 1.9 CDT% for the HPLC Abs reference method corresponded, by interpolation in the correlation curve, to a cut-off of 1.3 CDT% for the present method. Method comparison by Passing-Bablok and Bland-Altman tests demonstrated Fl versus Abs agreement. In conclusion, the novel method is a reliable test for CDT% analysis and provides a substantial analytical improvement, offering important advantages in the range of body fluids that can be analyzed. Its sensitivity and freedom from interference extend its clinical applications, making the CDT assay reliable in body fluids usually not suitable for routine testing. Graphical Abstract: The formation of a transferrin-terbium fluorescent adduct can be used to analyze transferrin glycoforms. The HPLC method for carbohydrate-deficient transferrin (CDT%) measurement was validated and employed to determine levels in different body fluids.
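
    The Bland-Altman comparison used here summarizes method agreement as the bias (mean difference between paired measurements) and its 95% limits of agreement. A minimal sketch (the paired CDT% values below are hypothetical, not the study's data):

```python
import math

def bland_altman(method_a, method_b):
    """Return (bias, lower_loa, upper_loa) for paired measurements."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    # 95% limits of agreement: bias ± 1.96 SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical CDT% readings by absorbance vs. fluorescence detection
abs_hplc = [1.2, 1.8, 2.4, 3.1, 1.5]
fl_hplc  = [1.1, 1.7, 2.5, 3.0, 1.4]
bias, lower, upper = bland_altman(abs_hplc, fl_hplc)
```

    Agreement is concluded when the bias is near zero and the limits of agreement are narrow enough to be clinically acceptable.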

  11. Analysis of human serum phosphopeptidome by a focused database searching strategy.

    PubMed

    Zhu, Jun; Wang, Fangjun; Cheng, Kai; Song, Chunxia; Qin, Hongqiang; Hu, Lianghai; Figeys, Daniel; Ye, Mingliang; Zou, Hanfa

    2013-01-14

    As human serum is an important source for the early diagnosis of many serious diseases, analysis of the serum proteome and peptidome has been extensively performed. However, the serum phosphopeptidome has been less explored, probably because an effective method for database searching was lacking. The conventional database searching strategy uses the whole proteome database, which is very time-consuming for phosphopeptidome searches due to the huge search space resulting from the high redundancy of the database and the setting of dynamic modifications during searching. In this work, a focused database searching strategy using an in-house collected human serum pro-peptidome target/decoy database (HuSPep) was established. The searching time was significantly decreased without compromising identification sensitivity. By combining size-selective Ti(IV)-MCM-41 enrichment, off-line RP-RP separation, and complementary CID and ETD fragmentation with the new searching strategy, 143 unique endogenous phosphopeptides and 133 phosphorylation sites (109 novel sites) were identified from human serum with high reliability. Copyright © 2012 Elsevier B.V. All rights reserved.

  12. The human hippocampus is not sexually-dimorphic: Meta-analysis of structural MRI volumes.

    PubMed

    Tan, Anh; Ma, Wenli; Vira, Amit; Marwha, Dhruv; Eliot, Lise

    2016-01-01

    Hippocampal atrophy is found in many psychiatric disorders that are more prevalent in women. Sex differences in memory and spatial skills further suggest that males and females differ in hippocampal structure and function. We conducted the first meta-analysis of male-female difference in hippocampal volume (HCV) based on published MRI studies of healthy participants of all ages, to test whether the structure is reliably sexually dimorphic. Using four search strategies, we collected 68 matched samples of males' and females' uncorrected HCVs (in 4418 total participants), and 36 samples of male and female HCVs (2183 participants) that were corrected for individual differences in total brain volume (TBV) or intracranial volume (ICV). Pooled effect sizes were calculated using a random-effects model for left, right, and bilateral uncorrected HCVs and for left and right HCVs corrected for TBV or ICV. We found that uncorrected HCV was reliably larger in males, with Hedges' g values of 0.545 for left hippocampus, 0.526 for right hippocampus, and 0.557 for bilateral hippocampus. Meta-regression revealed no effect of age on the sex difference in left, right, or bilateral HCV. In the subset of studies that reported it, both TBV (g=1.085) and ICV (g=1.272) were considerably larger in males. Accordingly, studies reporting HCVs corrected for individual differences in TBV or ICV revealed no significant sex differences in left and right HCVs (Hedges' g ranging from +0.011 to -0.206). In summary, we found that human males of all ages exhibit a larger HCV than females, but adjusting for individual differences in TBV or ICV results in no reliable sex difference. The frequent claim that women have a disproportionately larger hippocampus than men was not supported. Copyright © 2015 Elsevier Inc. All rights reserved.
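
    The pooled effect sizes above are Hedges' g: Cohen's d with a small-sample bias correction. A minimal sketch of the per-study computation (illustrative only; the meta-analysis additionally pooled studies with a random-effects model not shown here):

```python
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Bias-corrected standardized mean difference (Hedges' g)."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                          / (n1 + n2 - 2))
    d = (mean1 - mean2) / pooled_sd           # Cohen's d
    correction = 1 - 3 / (4 * (n1 + n2) - 9)  # small-sample correction J
    return d * correction

# Hypothetical male vs. female hippocampal volumes (cm^3) from one study
g = hedges_g(4.3, 0.5, 50, 4.0, 0.5, 50)
```

    Applying this to uncorrected volumes yields positive g (males larger), while applying it to TBV- or ICV-corrected volumes yields g near zero, which is the paper's central result.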

  13. Access 5 - Step 1: Human Systems Integration Program Plan (HSIPP)

    NASA Technical Reports Server (NTRS)

    2005-01-01

    This report describes the Human System Interface (HSI) analysis, design and test activities that will be performed to support the development of requirements and design guidelines to facilitate the incorporation of High Altitude Long Endurance (HALE) Remotely Operated Aircraft (ROA) at or above FL400 in the National Airspace System (NAS). These activities are required to support the design and development of safe, effective and reliable ROA operator and ATC interfaces. This plan focuses on the activities to be completed for Step 1 of the ACCESS 5 program. Updates to this document will be made for each of the four ACCESS 5 program steps.

  14. Tissue-based quantitative proteome analysis of human hepatocellular carcinoma using tandem mass tags.

    PubMed

    Megger, Dominik Andre; Rosowski, Kristin; Ahrens, Maike; Bracht, Thilo; Eisenacher, Martin; Schlaak, Jörg F; Weber, Frank; Hoffmann, Andreas-Claudius; Meyer, Helmut E; Baba, Hideo A; Sitek, Barbara

    2017-03-01

    Human hepatocellular carcinoma (HCC) is a severe malignant disease, and accurate and reliable diagnostic markers are still needed. This study was aimed for the discovery of novel marker candidates by quantitative proteomics. Proteomic differences between HCC and nontumorous liver tissue were studied by mass spectrometry. Among several significantly upregulated proteins, translocator protein 18 (TSPO) and Ras-related protein Rab-1A (RAB1A) were selected for verification by immunohistochemistry in an independent cohort. For RAB1A, a high accuracy for the discrimination of HCC and nontumorous liver tissue was observed. RAB1A was verified to be a potent biomarker candidate for HCC.

  15. Formal Techniques for Organization Analysis: Task and Resource Management

    DTIC Science & Technology

    1984-06-01

    typical approach has been to base new entities on stereotypical structures and make changes as problems are recognized. Clearly, this is not an...human resources; and provide the means to change and track all these parameters as they interact with each other and respond to...functioning under internal and external change. 3. Data gathering techniques to allow one to efficiently select reliable modeling parameters from

  16. Reexamining Computational Support for Intelligence Analysis: A Functional Design for a Future Capability

    DTIC Science & Technology

    2016-07-14

applicability of the sensor model in the context under consideration. A similar information flow can be considered for obtaining direct reliability of an... Modeling, Bex Concepts Human Intelligence Simulation USE CASES Army: Opns in Megacities, Syrian Civil War Navy: Piracy (NATO, Book), Autonomous ISR...2007) 6 [25] Bex, F. and Verheij, B., Story Schemes for Argumentation about the Facts of a Crime, Computational Models of Narrative: Papers from the

  17. Uncertainty Analysis of Sonic Boom Levels Measured in a Simulator at NASA Langley

    NASA Technical Reports Server (NTRS)

    Rathsam, Jonathan; Ely, Jeffry W.

    2012-01-01

    A sonic boom simulator has been constructed at NASA Langley Research Center for testing the human response to sonic booms heard indoors. Like all measured quantities, sonic boom levels in the simulator are subject to systematic and random errors. To quantify these errors, and their net influence on the measurement result, a formal uncertainty analysis is conducted. Knowledge of the measurement uncertainty, or range of values attributable to the quantity being measured, enables reliable comparisons among measurements at different locations in the simulator as well as comparisons with field data or laboratory data from other simulators. The analysis reported here accounts for acoustic excitation from two sets of loudspeakers: one loudspeaker set at the facility exterior that reproduces the exterior sonic boom waveform and a second set of interior loudspeakers for reproducing indoor rattle sounds. The analysis also addresses the effect of pressure fluctuations generated when exterior doors of the building housing the simulator are opened. An uncertainty budget is assembled to document each uncertainty component, its sensitivity coefficient, and the combined standard uncertainty. The latter quantity will be reported alongside measurement results in future research reports to indicate data reliability.
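The uncertainty-budget procedure described above can be sketched numerically: each component's standard uncertainty is scaled by its sensitivity coefficient and the results are combined in quadrature (root sum of squares), following the GUM convention for combined standard uncertainty. The component names and values below are hypothetical placeholders, not the facility's actual budget.

```python
import math

def combined_standard_uncertainty(components):
    """Combine (sensitivity coefficient, standard uncertainty) pairs in quadrature."""
    return math.sqrt(sum((c * u) ** 2 for c, u in components))

# Hypothetical uncertainty budget entries: (sensitivity coefficient, standard uncertainty in dB)
budget = [
    (1.0, 0.3),  # e.g. microphone calibration
    (1.0, 0.4),  # e.g. loudspeaker level repeatability
    (0.5, 0.2),  # e.g. pressure fluctuations from exterior doors
]
u_c = combined_standard_uncertainty(budget)
```

The combined value would then be reported alongside measurement results, as the abstract describes.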

  18. Evaluation of RNA from human trabecular bone and identification of stable reference genes.

    PubMed

    Cepollaro, Simona; Della Bella, Elena; de Biase, Dario; Visani, Michela; Fini, Milena

    2018-06-01

    The isolation of good quality RNA from tissues is an essential prerequisite for gene expression analysis to study pathophysiological processes. This study evaluated the RNA isolated from human trabecular bone and defined a set of stable reference genes. After pulverization, RNA was extracted with a phenol/chloroform method and then purified using silica columns. The A260/280 ratio, A260/230 ratio, RIN, and ribosomal ratio were measured to evaluate RNA quality and integrity. Moreover, the expression of six candidates was analyzed by qPCR and different algorithms were applied to assess reference gene stability. A good purity and quality of RNA was achieved according to A260/280 and A260/230 ratios, and RIN values. TBP, YWHAZ, and PGK1 were the most stable reference genes that should be used for gene expression analysis. In summary, the method proposed is suitable for gene expression evaluation in human bone and a set of reliable reference genes has been identified. © 2017 Wiley Periodicals, Inc.

  19. Lifetime Reliability Prediction of Ceramic Structures Under Transient Thermomechanical Loads

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Jadaan, Osama J.; Gyekenyesi, John P.

    2005-01-01

An analytical methodology is developed to predict the probability of survival (reliability) of ceramic components subjected to harsh thermomechanical loads that can vary with time (transient reliability analysis). This capability enables more accurate prediction of ceramic component integrity against fracture in situations such as turbine startup and shutdown, operational vibrations, atmospheric reentry, or other rapid heating or cooling situations (thermal shock). The transient reliability analysis methodology developed herein incorporates the following features: fast-fracture transient analysis (reliability analysis without slow crack growth, SCG); transient analysis with SCG (reliability analysis with time-dependent damage due to SCG); a computationally efficient algorithm to compute the reliability for components subjected to repeated transient loading (block loading); cyclic fatigue modeling using a combined SCG and Walker fatigue law; proof testing for transient loads; and Weibull and fatigue parameters that are allowed to vary with temperature or time. Component-to-component variation in strength (stochastic strength response) is accounted for with the Weibull distribution, and either the principle of independent action or the Batdorf theory is used to predict the effect of multiaxial stresses on reliability. The reliability analysis can be performed either as a function of the component surface (for surface-distributed flaws) or component volume (for volume-distributed flaws). The transient reliability analysis capability has been added to the NASA CARES/Life (Ceramic Analysis and Reliability Evaluation of Structures/Life) code. CARES/Life was also updated to interface with commercially available finite element analysis software, such as ANSYS, when used to model the effects of transient load histories. Examples are provided to demonstrate the features of the methodology as implemented in the CARES/Life program.
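The Weibull survival calculation at the core of such an analysis can be sketched as follows, assuming the simplest two-parameter form at a single uniform stress and a weakest-link combination of load steps; the parameter values are hypothetical, and the actual CARES/Life formulation additionally integrates stress over the component surface or volume.

```python
import math

def weibull_survival(stress, scale, modulus):
    """Two-parameter Weibull probability of survival at a single uniform stress."""
    return math.exp(-(stress / scale) ** modulus)

def transient_survival(stress_history, scale, modulus):
    """Under the weakest-link assumption, survival over a sequence of load steps
    is the product of the per-step survival probabilities."""
    return math.prod(weibull_survival(s, scale, modulus) for s in stress_history)

# Hypothetical ceramic component: characteristic strength 500 MPa, Weibull modulus 10
p_single = weibull_survival(300.0, 500.0, 10.0)
p_transient = transient_survival([300.0, 350.0, 300.0], 500.0, 10.0)
```

As expected, reliability over the transient history is lower than for any single load step, since each step contributes additional failure risk.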

  20. Revisiting the Serotonin-Aggression Relation in Humans: A Meta-analysis

    PubMed Central

    Duke, Aaron A.; Bègue, Laurent; Bell, Rob; Eisenlohr-Moul, Tory

    2013-01-01

The inverse relation between serotonin and human aggression is often portrayed as “reliable,” “strong,” and “well-established” despite decades of conflicting reports and widely recognized methodological limitations. In this systematic review and meta-analysis we evaluate the evidence for and against the serotonin deficiency hypothesis of human aggression across four methods of assessing serotonin: (a) cerebrospinal fluid levels of 5-hydroxyindoleacetic acid (CSF 5-HIAA), (b) acute tryptophan depletion, (c) pharmacological challenge, and (d) endocrine challenge. Results across 175 independent samples and over 6,500 total participants were heterogeneous, but, in aggregate, revealed a small, inverse correlation between central serotonin functioning and aggression, anger, and hostility, r = −.12. Pharmacological challenge studies had the largest mean weighted effect size, r = −.21, and CSF 5-HIAA studies had the smallest, r = −.06, p = .21. Potential methodological and demographic moderators largely failed to account for variability in study outcomes. Notable exceptions included year of publication (effect sizes tended to diminish with time) and self- versus other-reported aggression (other-reported aggression was positively correlated to serotonin functioning). We discuss four possible explanations for the pattern of findings: unreliable measures, ambient correlational noise, an unidentified higher-order interaction, and a selective serotonergic effect. Finally, we provide four recommendations for bringing much-needed clarity to this important area of research: acknowledge contradictory findings and avoid selective reporting practices; focus on improving the reliability and validity of serotonin and aggression measures; test for interactions involving personality and/or environmental moderators; and revise the serotonin deficiency hypothesis to account for serotonin’s functional complexity. PMID:23379963
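The aggregation step in a meta-analysis of correlations like this one is commonly done by Fisher-z-transforming each study's r and taking an inverse-variance-weighted mean (weight n − 3 per study). A minimal sketch, with hypothetical study values rather than the paper's data:

```python
import math

def fisher_z(r):
    """Fisher z-transformation of a correlation coefficient."""
    return 0.5 * math.log((1 + r) / (1 - r))

def weighted_mean_r(studies):
    """studies: list of (r, n) pairs. Each Fisher-z value is weighted by n - 3,
    the reciprocal of its sampling variance, then back-transformed."""
    num = sum((n - 3) * fisher_z(r) for r, n in studies)
    den = sum(n - 3 for _, n in studies)
    return math.tanh(num / den)  # tanh is the inverse Fisher transform

# Hypothetical study-level correlations and sample sizes
studies = [(-0.21, 50), (-0.06, 120), (-0.15, 80)]
r_bar = weighted_mean_r(studies)
```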

  1. Cultural competency assessment tool for hospitals: evaluating hospitals' adherence to the culturally and linguistically appropriate services standards.

    PubMed

    Weech-Maldonado, Robert; Dreachslin, Janice L; Brown, Julie; Pradhan, Rohit; Rubin, Kelly L; Schiller, Cameron; Hays, Ron D

    2012-01-01

    The U.S. national standards for culturally and linguistically appropriate services (CLAS) in health care provide guidelines on policies and practices aimed at developing culturally competent systems of care. The Cultural Competency Assessment Tool for Hospitals (CCATH) was developed as an organizational tool to assess adherence to the CLAS standards. First, we describe the development of the CCATH and estimate the reliability and validity of the CCATH measures. Second, we discuss the managerial implications of the CCATH as an organizational tool to assess cultural competency. We pilot tested an initial draft of the CCATH, revised it based on a focus group and cognitive interviews, and then administered it in a field test with a sample of California hospitals. The reliability and validity of the CCATH were evaluated using factor analysis, analysis of variance, and Cronbach's alphas. Exploratory and confirmatory factor analyses identified 12 CCATH composites: leadership and strategic planning, data collection on inpatient population, data collection on service area, performance management systems and quality improvement, human resources practices, diversity training, community representation, availability of interpreter services, interpreter services policies, quality of interpreter services, translation of written materials, and clinical cultural competency practices. All the CCATH scales had internal consistency reliability of .65 or above, and the reliability was .70 or above for 9 of the 12 scales. Analysis of variance results showed that not-for-profit hospitals have higher CCATH scores than for-profit hospitals in five CCATH scales and higher CCATH scores than government hospitals in two CCATH scales. The CCATH showed adequate psychometric properties. 
Managers and policy makers can use the CCATH as a tool to evaluate hospital performance in cultural competency and identify and target improvements in hospital policies and practices that undergird the provision of CLAS.
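The internal-consistency figures reported above are Cronbach's alpha values. A minimal sketch of that computation, using a small hypothetical item-by-respondent matrix rather than CCATH data:

```python
def cronbach_alpha(items):
    """items: list of per-item score lists (respondents in the same order).
    Alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(items)

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(col) for col in zip(*items)]  # total score per respondent
    return k / (k - 1) * (1 - sum(var(i) for i in items) / var(totals))

# Hypothetical 3-item scale answered by 4 respondents
alpha = cronbach_alpha([[3, 4, 5, 2], [3, 5, 4, 2], [4, 4, 5, 3]])
```

A value of .70 or above is the conventional threshold the abstract alludes to.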

  2. And the Humans Save the Day or Maybe They Ruin It: The Importance of Humans in the Loop

    NASA Technical Reports Server (NTRS)

    DeMott, Diana; Boyer, Roger; Bigler, Mark

    2017-01-01

Flying a mission in space requires a massive commitment of resources, and without the talent and commitment of the people involved in this effort we would never leave the atmosphere of Earth. When we use the phrase "humans in the loop", it could apply to almost any endeavor since everything starts with humans developing a concept, completing the design process, building or implementing a product and using the product to achieve a goal or purpose. Narrowing the focus to spaceflights, there are a variety of individuals involved throughout the preparations for flight and the flight itself. All of the humans involved add value and support for program success. This paper focuses on how a Probabilistic Risk Assessment (PRA) accounts for the human in the loop for potential missions using a technique called Human Reliability Analysis (HRA). Human actions can increase or decrease the overall risk by initiating events or by mitigating them, so removing the human from the loop does not always lower the risk.

  3. Parts quality management: Direct part marking of data matrix symbol for mission assurance

    NASA Astrophysics Data System (ADS)

    Moss, Chantrice; Chakrabarti, Suman; Scott, David W.

    A United States Government Accountability Office (GAO) review of twelve NASA programs found widespread parts quality problems contributing to significant cost overruns, schedule delays, and reduced system reliability. Direct part marking with Data Matrix symbols could significantly improve the quality of inventory control and parts lifecycle management. This paper examines the feasibility of using direct part marking technologies for use in future NASA programs. A structural analysis is based on marked material type, operational environment (e.g., ground, suborbital, Low Earth Orbit), durability of marks, ease of operation, reliability, and affordability. A cost-benefits analysis considers marking technology (label printing, data plates, and direct part marking) and marking types (two-dimensional machine-readable, human-readable). Previous NASA parts marking efforts and historical cost data are accounted for, including in-house vs. outsourced marking. Some marking methods are still under development. While this paper focuses on NASA programs, results may be applicable to a variety of industrial environments.

  4. Parts Quality Management: Direct Part Marking of Data Matrix Symbol for Mission Assurance

    NASA Technical Reports Server (NTRS)

    Moss, Chantrice; Chakrabarti, Suman; Scott, David W.

    2013-01-01

A United States Government Accountability Office (GAO) review of twelve NASA programs found widespread parts quality problems contributing to significant cost overruns, schedule delays, and reduced system reliability. Direct part marking with Data Matrix symbols could significantly improve the quality of inventory control and parts lifecycle management. This paper examines the feasibility of using direct part marking technologies for use in future NASA programs. A structural analysis is based on marked material type, operational environment (e.g., ground, suborbital, Low Earth Orbit), durability of marks, ease of operation, reliability, and affordability. A cost-benefits analysis considers marking technology (label printing, data plates, and direct part marking) and marking types (two-dimensional machine-readable, human-readable). Previous NASA parts marking efforts and historical cost data are accounted for, including in-house vs. outsourced marking. Some marking methods are still under development. While this paper focuses on NASA programs, results may be applicable to a variety of industrial environments.

  5. [Nursery Teacher's Stress Scale (NTSS): reliability and validity].

    PubMed

    Akada, Taro

    2010-06-01

This study describes the development and evaluation of the Nursery Teacher's Stress Scale (NTSS), which explores the relation between daily hassles at work and work-related stress. In Analysis 1, 29 items were chosen to construct the NTSS. Six factors were identified: I. Stress relating to child care; II. Stress from human relations at work; III. Stress from staff-parent relations; IV. Stress from lack of time; V. Stress relating to compensation; and VI. Stress from the difference between individual beliefs and school policy. All these factors had high degrees of internal consistency. In Analysis 2, the concurrent validity of the NTSS was examined. The results showed that the NTSS total scores were significantly correlated with the Job Stress Scale-Revised Version (job stressor scale, r = .68), the Pre-school Teacher-efficacy Scale (r = -.21), and the WHO-five Well-Being Index Japanese Version (r = -.40). Work stresses are affected by several daily hassles at work. The NTSS has acceptable reliability and validity, and can be used to improve nursery teachers' mental health.

  6. Local connectome phenotypes predict social, health, and cognitive factors

    PubMed Central

    Powell, Michael A.; Garcia, Javier O.; Yeh, Fang-Cheng; Vettel, Jean M.

    2018-01-01

    The unique architecture of the human connectome is defined initially by genetics and subsequently sculpted over time with experience. Thus, similarities in predisposition and experience that lead to similarities in social, biological, and cognitive attributes should also be reflected in the local architecture of white matter fascicles. Here we employ a method known as local connectome fingerprinting that uses diffusion MRI to measure the fiber-wise characteristics of macroscopic white matter pathways throughout the brain. This fingerprinting approach was applied to a large sample (N = 841) of subjects from the Human Connectome Project, revealing a reliable degree of between-subject correlation in the local connectome fingerprints, with a relatively complex, low-dimensional substructure. Using a cross-validated, high-dimensional regression analysis approach, we derived local connectome phenotype (LCP) maps that could reliably predict a subset of subject attributes measured, including demographic, health, and cognitive measures. These LCP maps were highly specific to the attribute being predicted but also sensitive to correlations between attributes. Collectively, these results indicate that the local architecture of white matter fascicles reflects a meaningful portion of the variability shared between subjects along several dimensions. PMID:29911679

  7. Local connectome phenotypes predict social, health, and cognitive factors.

    PubMed

    Powell, Michael A; Garcia, Javier O; Yeh, Fang-Cheng; Vettel, Jean M; Verstynen, Timothy

    2018-01-01

The unique architecture of the human connectome is defined initially by genetics and subsequently sculpted over time with experience. Thus, similarities in predisposition and experience that lead to similarities in social, biological, and cognitive attributes should also be reflected in the local architecture of white matter fascicles. Here we employ a method known as local connectome fingerprinting that uses diffusion MRI to measure the fiber-wise characteristics of macroscopic white matter pathways throughout the brain. This fingerprinting approach was applied to a large sample (N = 841) of subjects from the Human Connectome Project, revealing a reliable degree of between-subject correlation in the local connectome fingerprints, with a relatively complex, low-dimensional substructure. Using a cross-validated, high-dimensional regression analysis approach, we derived local connectome phenotype (LCP) maps that could reliably predict a subset of subject attributes measured, including demographic, health, and cognitive measures. These LCP maps were highly specific to the attribute being predicted but also sensitive to correlations between attributes. Collectively, these results indicate that the local architecture of white matter fascicles reflects a meaningful portion of the variability shared between subjects along several dimensions.

  8. Pilots of the future - Human or computer?

    NASA Technical Reports Server (NTRS)

    Chambers, A. B.; Nagel, D. C.

    1985-01-01

    In connection with the occurrence of aircraft accidents and the evolution of the air-travel system, questions arise regarding the computer's potential for making fundamental contributions to improving the safety and reliability of air travel. An important result of an analysis of the causes of aircraft accidents is the conclusion that humans - 'pilots and other personnel' - are implicated in well over half of the accidents which occur. Over 70 percent of the incident reports contain evidence of human error. In addition, almost 75 percent show evidence of an 'information-transfer' problem. Thus, the question arises whether improvements in air safety could be achieved by removing humans from control situations. In an attempt to answer this question, it is important to take into account also certain advantages which humans have in comparison to computers. Attention is given to human error and the effects of technology, the motivation to automate, aircraft automation at the crossroads, the evolution of cockpit automation, and pilot factors.

  9. Automated digital image analysis of islet cell mass using Nikon's inverted eclipse Ti microscope and software to improve engraftment may help to advance the therapeutic efficacy and accessibility of islet transplantation across centers.

    PubMed

    Gmyr, Valery; Bonner, Caroline; Lukowiak, Bruno; Pawlowski, Valerie; Dellaleau, Nathalie; Belaich, Sandrine; Aluka, Isanga; Moermann, Ericka; Thevenet, Julien; Ezzouaoui, Rimed; Queniat, Gurvan; Pattou, Francois; Kerr-Conte, Julie

    2015-01-01

Reliable assessment of islet viability, mass, and purity is required prior to transplanting an islet preparation into patients with type 1 diabetes. The standard method for quantifying human islet preparations is by direct microscopic analysis of dithizone-stained islet samples, but this technique may be susceptible to inter-/intraobserver variability, which may induce false positive/negative islet counts. Here we describe a simple, reliable, automated digital image analysis (ADIA) technique for accurately quantifying islets into total islet number, islet equivalent number (IEQ), and islet purity before islet transplantation. Islets were isolated and purified from n = 42 human pancreata according to the automated method of Ricordi et al. For each preparation, three islet samples were stained with dithizone and expressed as IEQ number. Islets were analyzed manually by microscopy or automatically quantified using Nikon's inverted Eclipse Ti microscope with built-in NIS-Elements Advanced Research (AR) software. The ADIA method significantly enhanced the number of islet preparations eligible for engraftment compared to the standard manual method (p < 0.001). Comparisons of individual methods showed good correlations between mean values of IEQ number (r(2) = 0.91) and total islet number (r(2) = 0.88) and thus increased to r(2) = 0.93 when islet surface area was estimated comparatively with IEQ number. The ADIA method showed very high intraobserver reproducibility compared to the standard manual method (p < 0.001). However, islet purity was routinely estimated as significantly higher with the manual method versus the ADIA method (p < 0.001). The ADIA method also detected small islets between 10 and 50 µm in size. Automated digital image analysis utilizing the Nikon Instruments software is an unbiased, simple, and reliable teaching tool to comprehensively assess the individual size of each islet cell preparation prior to transplantation.
Implementation of this technology to improve engraftment may help to advance the therapeutic efficacy and accessibility of islet transplantation across centers.
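The islet equivalent (IEQ) measure used above normalises islet volume to a standard islet of 150 µm diameter, so one 300 µm islet counts as eight equivalents. A minimal sketch with hypothetical diameters:

```python
def islet_equivalents(diameters_um):
    """Convert measured islet diameters (in µm) to islet equivalents (IEQ):
    the volume-equivalent count of standard 150 µm islets, i.e. sum of (d/150)^3."""
    return sum((d / 150.0) ** 3 for d in diameters_um)

# Hypothetical sample: two standard-size islets and one 300 µm islet
ieq = islet_equivalents([150.0, 150.0, 300.0])
```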

  10. Validity and reliability of a health care service evaluation instrument for tuberculosis

    PubMed Central

    Scatena, Lucia Marina; Wysocki, Anneliese Domingues; Beraldo, Aline Ale; Magnabosco, Gabriela Tavares; Brunello, Maria Eugênia Firmino; Netto, Antonio Ruffino; Nogueira, Jordana de Almeida; Silva, Reinaldo Antonio; Brito, Ewerton William Gomes; Alexandre, Patricia Borges Dias; Monroe, Aline Aparecida; Villa, Tereza Cristina Scatena

    2015-01-01

OBJECTIVE To evaluate the validity and reliability of an instrument that evaluates the structure of primary health care units for the treatment of tuberculosis. METHODS This cross-sectional study used simple random sampling and evaluated 1,037 health care professionals from five Brazilian municipalities (Natal, state of Rio Grande do Norte; Cabedelo, state of Paraíba; Foz do Iguaçu, state of Paraná; São José do Rio Preto, state of São Paulo; and Uberaba, state of Minas Gerais) in 2011. Structural indicators were identified and validated, considering different methods of organization of the health care system in the municipalities of different population sizes. Each structure represented the organization of health care services and contained the resources available for the execution of health care services: physical resources (equipment, consumables, and facilities); human resources (number and qualification); and resources for maintenance of the existing infrastructure and technology (deemed as the organization of health care services). The statistical analyses used in the validation process included reliability analysis, exploratory factor analysis, and confirmatory factor analysis. RESULTS The validation process indicated the retention of five factors, with 85.9% of the total variance explained, internal consistency between 0.6460 and 0.7802, and quality of fit of the confirmatory factor analysis of 0.995 using the goodness-of-fit index. The retained factors comprised five structural indicators: professionals involved in the care of tuberculosis patients, training, access to recording instruments, availability of supplies, and coordination of health care services with other levels of care. Availability of supplies had the best performance and the lowest coefficient of variation among the services evaluated. 
The indicators of assessment of human resources and coordination with other levels of care had satisfactory performance, but the latter showed the highest coefficient of variation. The performance of the indicators “training” and “access to recording instruments” was inferior to that of other indicators. CONCLUSIONS The instrument showed feasibility of application and potential to assess the structure of primary health care units for the treatment of tuberculosis. PMID:25741651

  11. Functional Interaction Network Construction and Analysis for Disease Discovery.

    PubMed

    Wu, Guanming; Haw, Robin

    2017-01-01

Network-based approaches project seemingly unrelated genes or proteins onto a large-scale network context, therefore providing a holistic visualization and analysis platform for genomic data generated from high-throughput experiments, reducing the dimensionality of the data by using network modules, and increasing statistical power. Based on the Reactome database, the most popular and comprehensive open-source biological pathway knowledgebase, we have developed a highly reliable protein functional interaction network covering around 60% of total human genes and an app called ReactomeFIViz for Cytoscape, the most popular biological network visualization and analysis platform. In this chapter, we describe the detailed procedures on how this functional interaction network is constructed by integrating multiple external data sources, extracting functional interactions from human curated pathway databases, building a machine learning classifier called a Naïve Bayesian Classifier, predicting interactions based on the trained Naïve Bayesian Classifier, and finally constructing the functional interaction database. We also provide an example of how to use ReactomeFIViz for performing network-based data analysis for a list of genes.
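A minimal sketch of the kind of Naive Bayes classifier the chapter describes, assuming three hypothetical binary evidence features (e.g. co-expression, shared protein domain, co-citation) and a toy training set rather than the actual Reactome data sources:

```python
import math

def train_naive_bayes(X, y):
    """X: list of binary feature tuples; y: list of 0/1 labels (1 = interacting pair).
    Returns class priors and Laplace-smoothed per-feature conditional probabilities."""
    n = len(y)
    nfeat = len(X[0])
    priors = {c: y.count(c) / n for c in (0, 1)}
    cond = {}
    for c in (0, 1):
        rows = [x for x, lab in zip(X, y) if lab == c]
        # P(feature_j = 1 | class c) with add-one (Laplace) smoothing
        cond[c] = [(sum(x[j] for x in rows) + 1) / (len(rows) + 2) for j in range(nfeat)]
    return priors, cond

def predict(priors, cond, x):
    """Return the class with the higher log posterior for feature vector x."""
    def log_post(c):
        lp = math.log(priors[c])
        for j, v in enumerate(x):
            p = cond[c][j]
            lp += math.log(p if v else 1 - p)
        return lp
    return max((0, 1), key=log_post)

# Hypothetical labelled pairs: positives share evidence features, negatives mostly don't
X = [(1, 1, 1), (1, 0, 1), (1, 1, 0), (0, 0, 0), (0, 1, 0), (0, 0, 1)]
y = [1, 1, 1, 0, 0, 0]
priors, cond = train_naive_bayes(X, y)
```

A pair with all three evidence features present would then be predicted to interact, and one with none would not.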

  12. A new real-time visual assessment method for faulty movement patterns during a jump-landing task.

    PubMed

    Rabin, Alon; Levi, Ran; Abramowitz, Shai; Kozol, Zvi

    2016-07-01

    Determine the interrater reliability of a new real-time assessment of faulty movement patterns during a jump-landing task. Interrater reliability study. Human movement laboratory. 50 healthy females. Assessment included 6 items which were evaluated from a front and a side view. Two Physical Therapy students used a 9-point scale (0-8) to independently rate the quality of movement as good (0-2), moderate (3-5), or poor (6-8). Interrater reliability was expressed by percent agreement and weighted kappa. One examiner rated the quality of movement of 6 subjects as good, 34 subjects as moderate, and 10 subjects as poor. The second examiner rated the quality of movement of 12 subjects as good, 23 subjects as moderate, and 15 subjects as poor. Percent agreement and weighted kappa (95% confidence interval) were 78% and 0.68 (0.51, 0.85), respectively. A new real-time assessment of faulty movement patterns during jump-landing demonstrated adequate interrater reliability. Further study is warranted to validate this method against a motion analysis system, as well as to establish its predictive validity for injury. Copyright © 2015 Elsevier Ltd. All rights reserved.
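The weighted kappa reported above penalises disagreements by their ordinal distance. A minimal sketch with linear weights; the ratings below are hypothetical, using the study's three categories coded 0 = good, 1 = moderate, 2 = poor:

```python
def weighted_kappa(r1, r2, k):
    """Cohen's linearly weighted kappa for two raters over k ordinal categories (0..k-1).
    kappa_w = 1 - (observed weighted disagreement) / (chance-expected weighted disagreement)."""
    n = len(r1)
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[a][b] += 1.0 / n
    row = [sum(obs[i]) for i in range(k)]                        # rater 1 marginals
    col = [sum(obs[i][j] for i in range(k)) for j in range(k)]   # rater 2 marginals
    d_obs = sum(abs(i - j) * obs[i][j] for i in range(k) for j in range(k))
    d_exp = sum(abs(i - j) * row[i] * col[j] for i in range(k) for j in range(k))
    return 1.0 - d_obs / d_exp

# Hypothetical ratings of six subjects by two examiners
r1 = [0, 0, 1, 1, 2, 2]
r2 = [0, 1, 1, 2, 2, 2]
kw = weighted_kappa(r1, r2, 3)
```

Perfect agreement yields kappa = 1; agreement no better than chance yields 0.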

  13. Implementation of a personnel reliability program as a facilitator of biosafety and biosecurity culture in BSL-3 and BSL-4 laboratories.

    PubMed

    Higgins, Jacki J; Weaver, Patrick; Fitch, J Patrick; Johnson, Barbara; Pearl, R Marene

    2013-06-01

    In late 2010, the National Biodefense Analysis and Countermeasures Center (NBACC) implemented a Personnel Reliability Program (PRP) with the goal of enabling active participation by its staff to drive and improve the biosafety and biosecurity culture at the organization. A philosophical keystone for accomplishment of NBACC's scientific mission is simultaneous excellence in operations and outreach. Its personnel reliability program builds on this approach to: (1) enable and support a culture of responsibility based on human performance principles, (2) maintain compliance with regulations, and (3) address the risk associated with the insider threat. Recently, the Code of Federal Regulations (CFR) governing use and possession of biological select agents and toxins (BSAT) was amended to require a pre-access suitability assessment and ongoing evaluation for staff accessing Tier 1 BSAT. These 2 new requirements are in addition to the already required Federal Bureau of Investigation (FBI) Security Risk Assessment (SRA). Two years prior to the release of these guidelines, NBACC developed its PRP to supplement the SRA requirement as a means to empower personnel and foster an operational environment where any and all work with BSAT is conducted in a safe, secure, and reliable manner.

  14. Behavior and neural basis of near-optimal visual search

    PubMed Central

    Ma, Wei Ji; Navalpakkam, Vidhya; Beck, Jeffrey M; van den Berg, Ronald; Pouget, Alexandre

    2013-01-01

    The ability to search efficiently for a target in a cluttered environment is one of the most remarkable functions of the nervous system. This task is difficult under natural circumstances, as the reliability of sensory information can vary greatly across space and time and is typically a priori unknown to the observer. In contrast, visual-search experiments commonly use stimuli of equal and known reliability. In a target detection task, we randomly assigned high or low reliability to each item on a trial-by-trial basis. An optimal observer would weight the observations by their trial-to-trial reliability and combine them using a specific nonlinear integration rule. We found that humans were near-optimal, regardless of whether distractors were homogeneous or heterogeneous and whether reliability was manipulated through contrast or shape. We present a neural-network implementation of near-optimal visual search based on probabilistic population coding. The network matched human performance. PMID:21552276
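The reliability weighting the paper tests can be illustrated, for the simpler case of combining continuous cues rather than the paper's detection-specific nonlinear integration rule, by inverse-variance averaging; the measurement values below are hypothetical:

```python
def combine(measurements):
    """Inverse-variance weighted combination of (value, variance) measurements:
    more reliable (lower-variance) cues receive proportionally more weight,
    and the combined estimate has lower variance than any single cue."""
    weights = [1.0 / v for _, v in measurements]
    estimate = sum(w * x for (x, _), w in zip(measurements, weights)) / sum(weights)
    variance = 1.0 / sum(weights)
    return estimate, variance

# A reliable cue (variance 1.0) and an unreliable one (variance 4.0)
est, var = combine([(2.0, 1.0), (4.0, 4.0)])
```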

  15. Human Rights Attitude Scale: A Validity and Reliability Study

    ERIC Educational Resources Information Center

    Ercan, Recep; Yaman, Tugba; Demir, Selcuk Besir

    2015-01-01

    The objective of this study is to develop a valid and reliable attitude scale having quality psychometric features that can measure secondary school students' attitudes towards human rights. The study group of the research is comprised by 710 6th, 7th and 8th grade students who study at 4 secondary schools in the centre of Sivas. The study group…

  16. Development of a quantitative validation method for forensic investigation of human spermatozoa using a commercial fluorescence staining kit (SPERM HY-LITER™ Express).

    PubMed

    Takamura, Ayari; Watanabe, Ken; Akutsu, Tomoko

    2016-11-01

In investigations of sexual assault, detecting human sperm is important, in addition to identifying a suspect. Recently, a kit for fluorescent staining of human spermatozoa, SPERM HY-LITER™, has become available. This kit allows for microscopic observation of the heads of human sperm using an antibody tagged with a fluorescent dye. The kit is specific to human sperm and provides easy detection by luminescence. However, criteria need to be established to objectively evaluate the fluorescent signals and to evaluate the staining efficiency of this kit; such criteria are indispensable for the investigation of forensic samples. In the present study, the SPERM HY-LITER™ Express kit, which is an improved version of SPERM HY-LITER™, was evaluated using an image analysis procedure based on Laplacian and Gaussian methods. This method could be used to automatically select important regions of fluorescence produced by sperm. The fluorescence staining performance was evaluated and compared under various experimental conditions, such as for aged traces and in combination with other chemical staining methods. The morphological characteristics of human sperm were incorporated into the criteria for objective identification of sperm, based on quantified features of the fluorescent spots. Using these criteria, non-specific or insignificant fluorescent spots were excluded, and the specificity of the kit for human sperm was confirmed. The image analysis method and criteria established in this study are universal and could be applied under any experimental conditions. These criteria will increase the reliability of operator judgment in the analysis of human sperm samples in forensics.
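A much-simplified sketch of the centre-surround response underlying Laplacian-based spot detection: a discrete Laplacian flags pixels that are much brighter than their four neighbours. The image and threshold below are hypothetical, and the published procedure additionally applies Gaussian smoothing and morphological criteria.

```python
def detect_spots(img, threshold):
    """Flag interior pixels whose centre-surround response (a discrete Laplacian)
    exceeds threshold. img is a 2D list of intensities; returns (row, col) hits."""
    h, w = len(img), len(img[0])
    hits = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            resp = 4 * img[y][x] - img[y - 1][x] - img[y + 1][x] - img[y][x - 1] - img[y][x + 1]
            if resp > threshold:
                hits.append((y, x))
    return hits

# Hypothetical 5x5 image: dark background with one bright fluorescent spot
img = [[0] * 5 for _ in range(5)]
img[2][2] = 10
hits = detect_spots(img, threshold=20)
```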

  17. Physical attraction to reliable, low variability nervous systems: Reaction time variability predicts attractiveness.

    PubMed

    Butler, Emily E; Saville, Christopher W N; Ward, Robert; Ramsey, Richard

    2017-01-01

    The human face cues a range of important fitness information, which guides mate selection towards desirable others. Given humans' high investment in the central nervous system (CNS), cues to CNS function should be especially important in social selection. We tested if facial attractiveness preferences are sensitive to the reliability of human nervous system function. Several decades of research suggest an operational measure for CNS reliability is reaction time variability, which is measured by standard deviation of reaction times across trials. Across two experiments, we show that low reaction time variability is associated with facial attractiveness. Moreover, variability in performance made a unique contribution to attractiveness judgements above and beyond both physical health and sex-typicality judgements, which have previously been associated with perceptions of attractiveness. In a third experiment, we empirically estimated the distribution of attractiveness preferences expected by chance and show that the size and direction of our results in Experiments 1 and 2 are statistically unlikely without reference to reaction time variability. We conclude that an operating characteristic of the human nervous system, reliability of information processing, is signalled to others through facial appearance. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Wind energy Computerized Maintenance Management System (CMMS) : data collection recommendations for reliability analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peters, Valerie A.; Ogilvie, Alistair B.

    2012-01-01

    This report addresses the general data requirements for reliability analysis of fielded wind turbines and other wind plant equipment. Written by Sandia National Laboratories, it is intended to help develop a basic understanding of the data needed for reliability analysis from a Computerized Maintenance Management System (CMMS) and other data systems. The report provides a rationale for why these data should be collected, a list of the data needed to support reliability and availability analysis, and specific recommendations for a CMMS to support automated analysis. Though written for reliability analysis of wind turbines, much of the information is applicable to a wider variety of equipment and analysis and reporting needs. The 'Motivation' section of this report provides a rationale for collecting and analyzing field data for reliability analysis. The benefits of this type of effort can include increased energy delivered, decreased operating costs, enhanced preventive maintenance schedules, solutions to issues with the largest payback, and identification of early failure indicators.

  19. Developing a model for hospital inherent safety assessment: Conceptualization and validation.

    PubMed

    Yari, Saeed; Akbari, Hesam; Gholami Fesharaki, Mohammad; Khosravizadeh, Omid; Ghasemi, Mohammad; Barsam, Yalda; Akbari, Hamed

    2018-01-01

    Paying attention to the safety of hospitals, as the most crucial institutions for providing medical and health services, wherein a bundle of facilities, equipment, and human resources exists, is of significant importance. The present research aims at developing a model for assessing hospital safety based on principles of inherent safety design. Face validity (30 experts), content validity (20 experts), construct validity (268 samples), convergent validity, and divergent validity were employed to validate the prepared questionnaire; item analysis, Cronbach's alpha, the ICC test (to measure test reliability), and the composite reliability coefficient were used to measure primary reliability. The relationship between variables and factors was confirmed at the 0.05 significance level by confirmatory factor analysis (CFA) and structural equation modeling (SEM) using Smart-PLS. R-square and factor loading values, which were higher than 0.67 and 0.300 respectively, indicated a strong fit. Moderation (0.970), simplification (0.959), substitution (0.943), and minimization (0.5008) had the greatest weights, respectively, in determining the inherent safety of a hospital. Moderation, simplification, and substitution thus carry more weight on inherent safety, while minimization carries the least, which could be due to its definition as minimizing the risk.
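    The internal-consistency checks mentioned above (item analysis, Cronbach's alpha) are standard computations. A minimal Python sketch of Cronbach's alpha, using made-up item scores rather than the study's data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for k items scored by the same n respondents.

    items: list of k lists, each holding one item's scores across the
    n respondents. Population variances are used throughout.
    """
    k = len(items)
    n = len(items[0])

    def pvar(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    sum_item_var = sum(pvar(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum_item_var / pvar(totals))
```

    For perfectly parallel items the statistic reaches its ceiling of 1.0; real scales fall below that.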

  20. Kennard-Stone combined with least square support vector machine method for noncontact discriminating human blood species

    NASA Astrophysics Data System (ADS)

    Zhang, Linna; Li, Gang; Sun, Meixiu; Li, Hongxiao; Wang, Zhennan; Li, Yingxin; Lin, Ling

    2017-11-01

    Identifying whole blood as either human or nonhuman is an important responsibility for import-export ports and inspection and quarantine departments. Analytical methods and DNA testing methods are usually destructive. Previous studies demonstrated that visible diffuse reflectance spectroscopy can realize noncontact discrimination of human and nonhuman blood. An appropriate method for calibration set selection is very important for a robust quantitative model. In this paper, the Random Selection (RS) method and the Kennard-Stone (KS) method were applied to select samples for the calibration set. Moreover, a proper chemometric method can greatly improve the performance of a classification or quantification model. The Partial Least Squares Discriminant Analysis (PLSDA) method is commonly used to identify blood species with spectroscopic methods, and the Least Squares Support Vector Machine (LSSVM) has proved well suited to discriminant analysis. In this research, both PLSDA and LSSVM were used for human blood discrimination. Compared with the results of the PLSDA method, LSSVM enhanced the performance of the identification models. The overall results demonstrated that LSSVM is the more feasible method for distinguishing human from animal blood species, and that it is a reliable, robust, effective, and accurate method for human blood identification.
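    The Kennard-Stone selection step is a well-defined algorithm: seed the calibration set with the two most distant samples, then repeatedly add the candidate whose nearest selected neighbour is farthest away. A sketch on toy 2-D points rather than spectra:

```python
import math


def kennard_stone(samples, k):
    """Select k calibration samples by the Kennard-Stone algorithm.

    samples: list of numeric feature tuples. Returns the indices of
    the selected samples, in selection order.
    """
    n = len(samples)
    # Seed with the most distant pair of samples.
    i0, j0 = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                 key=lambda p: math.dist(samples[p[0]], samples[p[1]]))
    selected = [i0, j0]
    remaining = set(range(n)) - set(selected)
    while len(selected) < k and remaining:
        # Add the candidate farthest from the already-selected set.
        nxt = max(remaining,
                  key=lambda r: min(math.dist(samples[r], samples[s])
                                    for s in selected))
        selected.append(nxt)
        remaining.remove(nxt)
    return selected
```

    The remaining, unselected samples would then form the validation set.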

  1. Automating Content Analysis of Open-Ended Responses: Wordscores and Affective Intonation

    PubMed Central

    Baek, Young Min; Cappella, Joseph N.; Bindman, Alyssa

    2014-01-01

    This study presents automated methods for predicting valence and quantifying valenced thoughts of a text. First, it examines whether Wordscores, developed by Laver, Benoit, and Garry (2003), can be adapted to reliably predict the valence of open-ended responses in a survey about bioethical issues in genetics research, and then tests a complementary and novel technique for coding the number of valenced thoughts in open-ended responses, termed Affective Intonation. Results show that Wordscores successfully predicts the valence of brief and grammatically imperfect open-ended responses, and Affective Intonation achieves performance comparable to human coders when estimating the number of valenced thoughts. Both Wordscores and Affective Intonation have promise as reliable, effective, and efficient methods for researchers who systematically content-analyze large amounts of textual data. PMID:25558294
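    A toy sketch of the Wordscores idea from Laver, Benoit, and Garry (2003): each word receives the reference-text scores weighted by P(reference | word), and a new ("virgin") text is scored by averaging the scores of its known words. The reference texts and scores below are invented for illustration, and no smoothing or rescaling step is included:

```python
from collections import Counter


def word_scores(ref_texts, ref_scores):
    """Wordscores training step: ref_texts is a list of token lists with
    known scores ref_scores; a word's score is the reference scores
    weighted by the word's relative frequency in each reference text."""
    freqs = [Counter(t) for t in ref_texts]
    totals = [sum(f.values()) for f in freqs]
    scores = {}
    for w in set().union(*freqs):
        rel = [f[w] / tot for f, tot in zip(freqs, totals)]
        z = sum(rel)  # normalise to get P(ref | word)
        scores[w] = sum(p / z * s for p, s in zip(rel, ref_scores))
    return scores


def score_text(tokens, scores):
    """Score a virgin text as the mean score over its known words."""
    known = [scores[w] for w in tokens if w in scores]
    return sum(known) / len(known) if known else None
```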

  2. Development of the Military Women's Attitudes Toward Menstrual Suppression Scale: from construct definition to pilot testing.

    PubMed

    Trego, Lori L

    2009-01-01

    The Military Women's Attitudes Toward Menstrual Suppression scale (MWATMS) was created to measure attitudes toward menstrual suppression during deployment. The human health and social ecology theories were integrated to conceptualize an instrument that accounts for military-unique aspects of the environment on attitudes toward suppression. A three-step instrument development process was followed to develop the MWATMS. The instrument was pilot tested on a convenience sample of 206 military women with deployment experience. Reliability was tested with measures of internal consistency (alpha = .97); validity was tested with principal components analysis with varimax rotation. Four components accounted for 65% of variance: Benefits/Interest, Hygiene, Convenience, and Soldier/Stress. The pilot test of the MWATMS supported its reliability and validity. Further testing is warranted for validation of this instrument.

  3. Reliability Analysis and Reliability-Based Design Optimization of Circular Composite Cylinders Under Axial Compression

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    2001-01-01

    This report describes the preliminary results of an investigation on component reliability analysis and reliability-based design optimization of thin-walled circular composite cylinders with average diameter and average length of 15 inches. Structural reliability is based on axial buckling strength of the cylinder. Both Monte Carlo simulation and First Order Reliability Method are considered for reliability analysis with the latter incorporated into the reliability-based structural optimization problem. To improve the efficiency of reliability sensitivity analysis and design optimization solution, the buckling strength of the cylinder is estimated using a second-order response surface model. The sensitivity of the reliability index with respect to the mean and standard deviation of each random variable is calculated and compared. The reliability index is found to be extremely sensitive to the applied load and elastic modulus of the material in the fiber direction. The cylinder diameter was found to have the third highest impact on the reliability index. Also the uncertainty in the applied load, captured by examining different values for its coefficient of variation, is found to have a large influence on cylinder reliability. The optimization problem for minimum weight is solved subject to a design constraint on element reliability index. The methodology, solution procedure and optimization results are included in this report.
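    A minimal Monte Carlo sketch of the kind of component reliability estimate described above, treating buckling strength and applied load as independent normal random variables; the distribution parameters are illustrative assumptions, not values from the report:

```python
import random


def failure_probability(n_trials=100_000, seed=1):
    """Monte Carlo estimate of P(load >= buckling strength) for a
    hypothetical cylinder. Means and standard deviations are
    illustrative only."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_trials):
        strength = rng.gauss(100.0, 10.0)  # hypothetical strength, kip
        load = rng.gauss(60.0, 12.0)       # hypothetical applied load, kip
        if load >= strength:
            failures += 1
    return failures / n_trials
```

    For these two normals the reliability index is beta = (100 - 60) / sqrt(10**2 + 12**2) ≈ 2.56, so the simulated failure probability should come out near 0.005; the First Order Reliability Method mentioned in the abstract computes beta directly instead of sampling.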

  4. A multiplexed chip-based assay system for investigating the functional development of human skeletal myotubes in vitro.

    PubMed

    Smith, A S T; Long, C J; Pirozzi, K; Najjar, S; McAleer, C; Vandenburgh, H H; Hickman, J J

    2014-09-20

    This report details the development of a non-invasive in vitro assay system for investigating the functional maturation and performance of human skeletal myotubes. Data is presented demonstrating the survival and differentiation of human myotubes on microscale silicon cantilevers in a defined, serum-free system. These cultures can be stimulated electrically and the resulting contraction quantified using modified atomic force microscopy technology. This system provides a higher degree of sensitivity for investigating contractile waveforms than video-based analysis, and represents the first system capable of measuring the contractile activity of individual human muscle myotubes in a reliable, high-throughput and non-invasive manner. The development of such a technique is critical for the advancement of body-on-a-chip platforms toward application in pre-clinical drug development screens. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Monkeys and humans take local uncertainty into account when localizing a change.

    PubMed

    Devkar, Deepna; Wright, Anthony A; Ma, Wei Ji

    2017-09-01

    Since sensory measurements are noisy, an observer is rarely certain about the identity of a stimulus. In visual perception tasks, observers generally take their uncertainty about a stimulus into account when doing so helps task performance. Whether the same holds in visual working memory tasks is largely unknown. Ten human and two monkey subjects localized a single change in orientation between a sample display containing three ellipses and a test display containing two ellipses. To manipulate uncertainty, we varied the reliability of orientation information by making each ellipse more or less elongated (two levels); reliability was independent across the stimuli. In both species, a variable-precision encoding model equipped with an "uncertainty-indifferent" decision rule, which uses only the noisy memories, fitted the data poorly. In both species, a much better fit was provided by a model in which the observer also takes the levels of reliability-driven uncertainty associated with the memories into account. In particular, a measured change in a low-reliability stimulus was given lower weight than the same change in a high-reliability stimulus. We did not find strong evidence that observers took reliability-independent variations in uncertainty into account. Our results illustrate the importance of studying the decision stage in comparison tasks and provide further evidence for evolutionary continuity of working memory systems between monkeys and humans.

  6. Monkeys and humans take local uncertainty into account when localizing a change

    PubMed Central

    Devkar, Deepna; Wright, Anthony A.; Ma, Wei Ji

    2017-01-01

    Since sensory measurements are noisy, an observer is rarely certain about the identity of a stimulus. In visual perception tasks, observers generally take their uncertainty about a stimulus into account when doing so helps task performance. Whether the same holds in visual working memory tasks is largely unknown. Ten human and two monkey subjects localized a single change in orientation between a sample display containing three ellipses and a test display containing two ellipses. To manipulate uncertainty, we varied the reliability of orientation information by making each ellipse more or less elongated (two levels); reliability was independent across the stimuli. In both species, a variable-precision encoding model equipped with an “uncertainty–indifferent” decision rule, which uses only the noisy memories, fitted the data poorly. In both species, a much better fit was provided by a model in which the observer also takes the levels of reliability-driven uncertainty associated with the memories into account. In particular, a measured change in a low-reliability stimulus was given lower weight than the same change in a high-reliability stimulus. We did not find strong evidence that observers took reliability-independent variations in uncertainty into account. Our results illustrate the importance of studying the decision stage in comparison tasks and provide further evidence for evolutionary continuity of working memory systems between monkeys and humans. PMID:28877535

  7. Constructing an integrated gene similarity network for the identification of disease genes.

    PubMed

    Tian, Zhen; Guo, Maozu; Wang, Chunyu; Xing, LinLin; Wang, Lei; Zhang, Yin

    2017-09-20

    Discovering novel genes that are involved in human diseases is a challenging task in biomedical research. In recent years, several computational approaches have been proposed to prioritize candidate disease genes. Most of these methods are mainly based on protein-protein interaction (PPI) networks. However, since these PPI networks contain false positives and cover less than half of known human genes, their reliability and coverage are low. It is therefore highly desirable to fuse multiple genomic data sources into a credible gene similarity network and then infer disease genes on the whole-genome scale. We propose a novel method, named RWRB, to infer causal genes of diseases of interest. First, we construct five individual gene (protein) similarity networks based on multiple genomic data on human genes. Then, an integrated gene similarity network (IGSN) is reconstructed using the similarity network fusion (SNF) method. Finally, we employ the random walk with restart algorithm on a phenotype-gene bilayer network, which combines the phenotype similarity network, the IGSN, and the phenotype-gene association network, to prioritize candidate disease genes. We investigate the effectiveness of RWRB through leave-one-out cross-validation in inferring phenotype-gene relationships. Results show that RWRB is more accurate than state-of-the-art methods on most evaluation metrics. Further analysis shows that the success of RWRB derives from the IGSN, which has wider coverage and higher reliability than current PPI networks. Moreover, we conduct a comprehensive case study for Alzheimer's disease and predict some novel disease genes that are supported by the literature. RWRB is an effective and reliable algorithm for prioritizing candidate disease genes on the genomic scale. Software and supplementary information are available at http://nclab.hit.edu.cn/~tianzhen/RWRB/ .
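    The random walk with restart step can be sketched on a toy undirected network as follows; the adjacency matrix, restart probability, and seed set are illustrative, not the RWRB implementation:

```python
def random_walk_restart(adj, seeds, restart=0.3, iters=200):
    """Random walk with restart: iterate p = (1-r) * W p + r * p0,
    where W is the column-normalised adjacency matrix and p0 puts
    equal mass on the seed (known disease-gene) nodes."""
    n = len(adj)
    col_sums = [sum(adj[i][j] for i in range(n)) for j in range(n)]
    p0 = [1.0 / len(seeds) if i in seeds else 0.0 for i in range(n)]
    p = p0[:]
    for _ in range(iters):
        p = [restart * p0[i]
             + (1 - restart) * sum(adj[i][j] * p[j] / col_sums[j]
                                   for j in range(n) if col_sums[j])
             for i in range(n)]
    return p  # steady-state visiting probabilities; higher = better candidate
```

    On a path graph 0–1–2 seeded at node 0, node 1 ends up ranked above node 2, reflecting its closer connection to the seed.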

  8. The Development and Inter-Rater Reliability of the Department of Defense Human Factors Analysis and Classification System, Version 7.0

    DTIC Science & Technology

    2015-04-09

    (Front-matter fragment only.) U.S. Air Force School of Aerospace Medicine, Aerospace Medicine Department, Aerospace Education Branch, 2510 Fifth St., Wright-Patterson AFB, OH 45433-7913. Distribution A: Approved for public release; distribution is unlimited. Case Number: 88ABW-2015-2334.

  9. Towards Tunable Consensus Clustering for Studying Functional Brain Connectivity During Affective Processing.

    PubMed

    Liu, Chao; Abu-Jamous, Basel; Brattico, Elvira; Nandi, Asoke K

    2017-03-01

    In the past decades, neuroimaging of humans has gained a position of status within neuroscience, and data-driven approaches and functional connectivity analyses of functional magnetic resonance imaging (fMRI) data are increasingly favored to depict the complex architecture of human brains. However, the reliability of these findings is jeopardized by too many analysis methods and sometimes too few samples used, which leads to discord among researchers. We propose a tunable consensus clustering paradigm that aims at overcoming the clustering methods selection problem as well as reliability issues in neuroimaging by means of first applying several analysis methods (three in this study) on multiple datasets and then integrating the clustering results. To validate the method, we applied it to a complex fMRI experiment involving affective processing of hundreds of music clips. We found that brain structures related to visual, reward, and auditory processing have intrinsic spatial patterns of coherent neuroactivity during affective processing. The comparisons between the results obtained from our method and those from each individual clustering algorithm demonstrate that our paradigm has notable advantages over traditional single clustering algorithms in being able to evidence robust connectivity patterns even with complex neuroimaging data involving a variety of stimuli and affective evaluations of them. The consensus clustering method is implemented in the R package "UNCLES" available on http://cran.r-project.org/web/packages/UNCLES/index.html .

  10. Relative and absolute reliability of measures of linoleic acid-derived oxylipins in human plasma.

    PubMed

    Gouveia-Figueira, Sandra; Bosson, Jenny A; Unosson, Jon; Behndig, Annelie F; Nording, Malin L; Fowler, Christopher J

    2015-09-01

    Modern analytical techniques allow for the measurement of oxylipins derived from linoleic acid in biological samples. Most validatory work has concerned extraction techniques, repeated analysis of aliquots from the same biological sample, and the influence of external factors such as diet and heparin treatment upon their levels, whereas less is known about the relative and absolute reliability of measurements undertaken on different days. A cohort of nineteen healthy males was used, with samples taken at the same time of day on two occasions, at least 7 days apart. Relative reliability was assessed using Lin's concordance correlation coefficients (CCC) and intraclass correlation coefficients (ICC). Absolute reliability was assessed by Bland-Altman analyses. Nine linoleic acid oxylipins were investigated. ICC and CCC values ranged from acceptable (0.56 [13-HODE]) to poor (near zero [9(10)- and 12(13)-EpOME]). Bland-Altman limits of agreement were in general quite wide, ranging from ±0.5 (12,13-DiHOME) to ±2 (9(10)-EpOME; log10 scale). It is concluded that the relative reliability of linoleic acid-derived oxylipins varies between lipids, with compounds such as the HODEs showing better relative reliability than compounds such as the EpOMEs. These differences should be kept in mind when designing and interpreting experiments correlating plasma levels of these lipids with factors such as age, body mass index, and rating scales. Copyright © 2015 Elsevier Inc. All rights reserved.
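    The Bland-Altman analysis used above is simple to reproduce: the bias is the mean of the paired differences between the two sessions, and the 95% limits of agreement are bias ± 1.96 × SD of those differences. A sketch with invented day-1/day-2 values, not the study's data:

```python
import statistics


def bland_altman_limits(session1, session2):
    """Bland-Altman absolute-reliability summary for paired repeated
    measurements: returns (bias, (lower limit, upper limit))."""
    diffs = [a - b for a, b in zip(session1, session2)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

    For log-transformed analyte levels, as in the abstract, the limits are computed on the log10 scale.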

  11. Design and Application of the Exploration Maintainability Analysis Tool

    NASA Technical Reports Server (NTRS)

    Stromgren, Chel; Terry, Michelle; Crillo, William; Goodliff, Kandyce; Maxwell, Andrew

    2012-01-01

    Conducting human exploration missions beyond Low Earth Orbit (LEO) will present unique challenges in the areas of supportability and maintainability. The durations of proposed missions can be relatively long and re-supply of logistics, including maintenance and repair items, will be limited or non-existent. In addition, mass and volume constraints in the transportation system will limit the total amount of logistics that can be flown along with the crew. These constraints will require that new strategies be developed with regards to how spacecraft systems are designed and maintained. NASA is currently developing Design Reference Missions (DRMs) as an initial step in defining future human missions. These DRMs establish destinations and concepts of operation for future missions, and begin to define technology and capability requirements. Because of the unique supportability challenges, historical supportability data and models are not directly applicable for establishing requirements for beyond LEO missions. However, supportability requirements could have a major impact on the development of the DRMs. The mass, volume, and crew resources required to support the mission could all be first order drivers in the design of missions, elements, and operations. Therefore, there is a need for enhanced analysis capabilities to more accurately establish mass, volume, and time requirements for supporting beyond LEO missions. Additionally, as new technologies and operations are proposed to reduce these requirements, it is necessary to have accurate tools to evaluate the efficacy of those approaches. In order to improve the analysis of supportability requirements for beyond LEO missions, the Space Missions Analysis Branch at the NASA Langley Research Center is developing the Exploration Maintainability Analysis Tool (EMAT). 
This tool is a probabilistic simulator that evaluates the need for repair and maintenance activities during space missions and the logistics and crew requirements to support those activities. Using a Monte Carlo approach, the tool simulates potential failures in defined systems, based on established component reliabilities, and then evaluates the capability of the crew to repair those failures given a defined store of spares and maintenance items. Statistical analysis of Monte Carlo runs provides probabilistic estimates of overall mission safety and reliability. This paper will describe the operation of the EMAT, including historical data sources used to populate the model, simulation processes, and outputs. Analysis results are provided for a candidate exploration system, including baseline estimates of required sparing mass and volume. Sensitivity analysis regarding the effectiveness of proposed strategies to reduce mass and volume requirements and improve mission reliability is included in these results.
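    A toy Monte Carlo sketch of the EMAT approach described above: sample exponential component failure times across the mission, consume one spare per failure, and count the fraction of trials in which the crew never runs out. The MTBF, mission duration, and spare counts are illustrative assumptions, not EMAT's data or logic:

```python
import random


def mission_success_probability(mission_days, mtbf_days, spares,
                                n_trials=20_000, seed=42):
    """Fraction of Monte Carlo trials in which every failure during the
    mission could be repaired from the available stock of spares."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_trials):
        t, used, ok = 0.0, 0, True
        while True:
            t += rng.expovariate(1.0 / mtbf_days)  # time to next failure
            if t > mission_days:
                break                              # survived the mission
            used += 1
            if used > spares:
                ok = False                         # failure, no spare left
                break
        successes += ok
    return successes / n_trials
```

    Sweeping the `spares` argument reproduces the basic trade EMAT quantifies: success probability rises with sparing mass, with diminishing returns.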

  12. Identification of Shiga-Toxigenic Escherichia coli outbreak isolates by a novel data analysis tool after matrix-assisted laser desorption/ionization time-of-flight mass spectrometry.

    PubMed

    Christner, Martin; Dressler, Dirk; Andrian, Mark; Reule, Claudia; Petrini, Orlando

    2017-01-01

    The fast and reliable characterization of bacterial and fungal pathogens plays an important role in infectious disease control and tracking of outbreak agents. DNA-based methods are the gold standard for epidemiological investigations, but they are still comparatively expensive and time-consuming. Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) is a fast, reliable and cost-effective technique now routinely used to identify clinically relevant human pathogens. It has been used for subspecies differentiation and typing, but its use for epidemiological tasks, e.g., for outbreak investigations, is often hampered by the complexity of data analysis. We have analysed publicly available MALDI-TOF mass spectra from a large outbreak of Shiga-Toxigenic Escherichia coli in northern Germany using a general-purpose software tool for the analysis of complex biological data. The software was challenged with depauperate spectra and reduced learning group sizes to mimic poor spectrum quality and scarcity of reference spectra at the onset of an outbreak. With high-quality formic acid extraction spectra, the software's built-in classifier accurately identified outbreak-related strains using as few as 10 reference spectra (99.8% sensitivity, 98.0% specificity). Selective variation of processing parameters showed impaired marker peak detection and reduced classification accuracy in samples with high background noise or artificially reduced peak counts. However, the software consistently identified mass signals suitable for a highly reliable marker-peak-based classification approach (100% sensitivity, 99.5% specificity), even from low-quality direct deposition spectra. The study demonstrates that general-purpose data analysis tools can effectively be used for the analysis of bacterial mass spectra.
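    A minimal sketch of what marker-peak-based classification amounts to: count how many characteristic marker masses are matched in a spectrum's peak list within a relative m/z tolerance and threshold the count. The marker masses, tolerance, and hit threshold below are assumptions for illustration, not the study's actual values or software:

```python
def match_marker_peaks(spectrum_peaks, marker_peaks, tol=500e-6):
    """Count how many marker m/z values are matched by at least one
    spectrum peak within a relative tolerance (default 500 ppm)."""
    hits = 0
    for m in marker_peaks:
        if any(abs(p - m) / m <= tol for p in spectrum_peaks):
            hits += 1
    return hits


def is_outbreak_strain(spectrum_peaks, marker_peaks, min_hits=3):
    """Classify a spectrum as outbreak-related when enough markers match."""
    return match_marker_peaks(spectrum_peaks, marker_peaks) >= min_hits
```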

  13. The most common technologies and tools for functional genome analysis.

    PubMed

    Gasperskaja, Evelina; Kučinskas, Vaidutis

    2017-01-01

    Since the sequence of the human genome is complete, the main issue is how to understand the information written in the DNA sequence. Despite numerous genome-wide studies that have already been performed, the challenge of determining the function of genes and gene products, and their interactions, is still open. As changes in the human genome are highly likely to cause pathological conditions, functional analysis is vitally important for human health. A variety of technologies and tools have been used in functional genome analysis for many years, but only in the past decade has there been rapid, revolutionary progress in high-throughput methods, ranging from traditional real-time polymerase chain reaction to more complex systems such as next-generation sequencing and mass spectrometry. Furthermore, not only laboratory investigation but also accurate bioinformatic analysis is required for reliable scientific results. These methods provide an opportunity for accurate and comprehensive functional analysis spanning several fields of study: genomics, epigenomics, proteomics, and interactomics. This is essential for filling the gaps in knowledge about dynamic biological processes at both the cellular and organismal level. However, each method has advantages and limitations that should be taken into account when choosing the right method for a particular study. For this reason, the present review aims to describe the most frequent and widely used methods for comprehensive functional analysis.

  14. A Bayesian Framework for Analysis of Pseudo-Spatial Models of Comparable Engineered Systems with Application to Spacecraft Anomaly Prediction Based on Precedent Data

    NASA Astrophysics Data System (ADS)

    Ndu, Obibobi Kamtochukwu

    To ensure that estimates of risk and reliability inform design and resource allocation decisions in the development of complex engineering systems, early engagement in the design life cycle is necessary. An unfortunate constraint on the accuracy of such estimates at this stage of concept development is the limited amount of high-fidelity design and failure information available on the actual system under development. Applying the human ability to learn from experience and augment our state of knowledge to evolve better solutions mitigates this limitation. However, the challenge lies in formalizing a methodology that takes this highly abstract, but fundamentally human, cognitive ability and extends it to the field of risk analysis while maintaining the tenets of generalization, Bayesian inference, and probabilistic risk analysis. We introduce an integrated framework for inferring the reliability, or other probabilistic measures of interest, of a new system or a conceptual variant of an existing system. Abstractly, our framework is based on learning from the performance of precedent designs and then applying the acquired knowledge, appropriately adjusted for degree of relevance, to the inference process. This dissertation presents a method for inferring properties of the conceptual variant using a pseudo-spatial model that describes the spatial configuration of the family of systems to which the concept belongs. Through non-metric multidimensional scaling, we formulate the pseudo-spatial model based on rank-ordered subjective expert perception of design similarity between systems that elucidate the psychological space of the family. By a novel extension of Kriging methods for analysis of geospatial data to our "pseudo-space of comparable engineered systems", we develop a Bayesian inference model that allows prediction of the probabilistic measure of interest.

  15. Space Shuttle Reusable Solid Rocket Motor

    NASA Technical Reports Server (NTRS)

    Moore, Dennis; Phelps, Jack; Perkins, Fred

    2010-01-01

    RSRM is a highly reliable human-rated Solid Rocket Motor: a) Largest diameter SRM to achieve flight status; b) Only human-rated SRM. RSRM reliability achieved by: a) Applying special attention to Process Control, Testing, and Postflight; b) Communicating often; c) Identifying and addressing issues in a disciplined approach; d) Identifying and fully dispositioning "out-of-family" conditions; e) Addressing minority opinions; and f) Learning our lessons.

  16. Leptin- and Leptin Receptor-Deficient Rodent Models: Relevance for Human Type 2 Diabetes

    PubMed Central

    Wang, Bingxuan; Chandrasekera, P. Charukeshi; Pippin, John J.

    2014-01-01

    Among the most widely used animal models in obesity-induced type 2 diabetes mellitus (T2DM) research are the congenital leptin- and leptin receptor-deficient rodent models. These include the leptin-deficient ob/ob mice and the leptin receptor-deficient db/db mice, Zucker fatty rats, Zucker diabetic fatty rats, SHR/N-cp rats, and JCR:LA-cp rats. After decades of mechanistic and therapeutic research schemes with these animal models, many species differences have been uncovered, but researchers continue to overlook these differences, leading to untranslatable research. The purpose of this review is to analyze and comprehensively recapitulate the most common leptin/leptin receptor-based animal models with respect to their relevance and translatability to human T2DM. Our analysis revealed that, although these rodents develop obesity due to hyperphagia caused by abnormal leptin/leptin receptor signaling with the subsequent appearance of T2DM-like manifestations, these are in fact secondary to genetic mutations that do not reflect disease etiology in humans, for whom leptin or leptin receptor deficiency is not an important contributor to T2DM. A detailed comparison of the roles of genetic susceptibility, obesity, hyperglycemia, hyperinsulinemia, insulin resistance, and diabetic complications as well as leptin expression, signaling, and other factors that confound translation are presented here. There are substantial differences between these animal models and human T2DM that limit reliable, reproducible, and translatable insight into human T2DM. Therefore, it is imperative that researchers recognize and acknowledge the limitations of the leptin/leptin receptor-based rodent models and invest in research methods that would be directly and reliably applicable to humans in order to advance T2DM management. PMID:24809394

  17. Leptin- and leptin receptor-deficient rodent models: relevance for human type 2 diabetes.

    PubMed

    Wang, Bingxuan; Chandrasekera, P Charukeshi; Pippin, John J

    2014-03-01

    Among the most widely used animal models in obesity-induced type 2 diabetes mellitus (T2DM) research are the congenital leptin- and leptin receptor-deficient rodent models. These include the leptin-deficient ob/ob mice and the leptin receptor-deficient db/db mice, Zucker fatty rats, Zucker diabetic fatty rats, SHR/N-cp rats, and JCR:LA-cp rats. After decades of mechanistic and therapeutic research schemes with these animal models, many species differences have been uncovered, but researchers continue to overlook these differences, leading to untranslatable research. The purpose of this review is to analyze and comprehensively recapitulate the most common leptin/leptin receptor-based animal models with respect to their relevance and translatability to human T2DM. Our analysis revealed that, although these rodents develop obesity due to hyperphagia caused by abnormal leptin/leptin receptor signaling with the subsequent appearance of T2DM-like manifestations, these are in fact secondary to genetic mutations that do not reflect disease etiology in humans, for whom leptin or leptin receptor deficiency is not an important contributor to T2DM. A detailed comparison of the roles of genetic susceptibility, obesity, hyperglycemia, hyperinsulinemia, insulin resistance, and diabetic complications as well as leptin expression, signaling, and other factors that confound translation are presented here. There are substantial differences between these animal models and human T2DM that limit reliable, reproducible, and translatable insight into human T2DM. Therefore, it is imperative that researchers recognize and acknowledge the limitations of the leptin/leptin receptor-based rodent models and invest in research methods that would be directly and reliably applicable to humans in order to advance T2DM management.

  18. Microtomography evaluation of dental tissue wear surface induced by in vitro simulated chewing cycles on human and composite teeth.

    PubMed

    Bedini, Rossella; Pecci, Raffaella; Notarangelo, Gianluca; Zuppante, Francesca; Persico, Salvatore; Di Carlo, Fabio

    2012-01-01

    In this study, 3D microtomography images of tooth wear surfaces were obtained after in vitro dental wear tests. Natural teeth were compared with prosthetic teeth manufactured from three different polyceramic composite materials. The prosthetic dental elements, shaped like molars, were placed in opposition to human teeth extracted because of periodontal disease. After an initial microtomography analysis, the samples were subjected to in vitro fatigue test cycles on a servo-hydraulic mechanical testing machine. After the fatigue test, each sample was scanned again by microtomography to measure volumetric changes and to image the wear surfaces. Wear surface images were obtained with 3D reconstruction software, and volumetric changes were measured with CT analyser software. The aim of this work was to show the potential of the microtomography technique to produce clear and reliable wear surface images. Microtomography-based measurement of volumetric change was used to quantify wear of dental tissue and composite material.

  19. Scientific information repository assisting reflectance spectrometry in legal medicine.

    PubMed

    Belenki, Liudmila; Sterzik, Vera; Bohnert, Michael; Zimmermann, Klaus; Liehr, Andreas W

    2012-06-01

    Reflectance spectrometry is a fast and reliable method for the characterization of human skin if the spectra are analyzed with respect to a physical model describing the optical properties of human skin. For a field study performed at the Institute of Legal Medicine and the Freiburg Materials Research Center of the University of Freiburg, a scientific information repository has been developed, which is a variant of an electronic laboratory notebook and assists in the acquisition, management, and high-throughput analysis of reflectance spectra in heterogeneous research environments. At the core of the repository is a database management system hosting the master data. It is filled with primary data via a graphical user interface (GUI) programmed in Java, which also enables the user to browse the database and access the results of data analysis. The latter is carried out via Matlab, Python, and C programs, which retrieve the primary data from the scientific information repository, perform the analysis, and store the results in the database for further usage.
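
    The architecture described above, a central database filled via a GUI and queried by separate analysis programs, can be sketched with Python's built-in sqlite3 module. The table and column names below are illustrative assumptions, not the repository's actual schema:

```python
import sqlite3

# Minimal sketch of a spectra repository: one table of measurements,
# filled by an acquisition front end and queried by analysis code.
# Schema and field names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE spectra (
                    id INTEGER PRIMARY KEY,
                    subject TEXT,
                    site TEXT,
                    wavelengths_nm TEXT,   -- arrays serialized as text for brevity
                    reflectance TEXT)""")

# "GUI" side: store one measurement
conn.execute("INSERT INTO spectra (subject, site, wavelengths_nm, reflectance) "
             "VALUES (?, ?, ?, ?)",
             ("S001", "forearm", "400,410,420", "0.31,0.35,0.40"))
conn.commit()

# "Analysis" side: retrieve all spectra for one subject
rows = conn.execute(
    "SELECT site, reflectance FROM spectra WHERE subject = ?",
    ("S001",)).fetchall()
```

    In the study's setup, the Matlab, Python, and C analysis programs would each open the shared database, run a query like the one above, and write their results back.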

  20. And the Human Saves the Day or Maybe They Ruin It, The Importance of Humans in the Loop

    NASA Technical Reports Server (NTRS)

    DeMott, Diana L.; Boyer, Roger L.

    2017-01-01

    Flying a mission in space requires a massive commitment of resources, and without the talent and commitment of the people involved we would never leave Earth's atmosphere as safely as we have. The phrase "humans in the loop" could apply to almost any endeavor, since everything starts with humans developing a concept, completing the design process, building or implementing a product, and using the product to achieve a goal or purpose. Narrowing the focus to spaceflight, a variety of individuals are involved throughout the preparations for flight and the flight itself, and all of them add value and support program success. This paper discusses the concepts of human involvement in technological programs, how a Probabilistic Risk Assessment (PRA) accounts for the human in the loop for potential missions using a technique called Human Reliability Analysis (HRA), and the trade-offs between having a human in the loop or not. Human actions can increase or decrease the overall risk by initiating events or mitigating them, so removing the human from the loop does not always lower the risk.

  1. Heritability and reliability of automatically segmented human hippocampal formation subregions

    PubMed Central

    Whelan, Christopher D.; Hibar, Derrek P.; van Velzen, Laura S.; Zannas, Anthony S.; Carrillo-Roa, Tania; McMahon, Katie; Prasad, Gautam; Kelly, Sinéad; Faskowitz, Joshua; deZubiracay, Greig; Iglesias, Juan E.; van Erp, Theo G.M.; Frodl, Thomas; Martin, Nicholas G.; Wright, Margaret J.; Jahanshad, Neda; Schmaal, Lianne; Sämann, Philipp G.; Thompson, Paul M.

    2016-01-01

    The human hippocampal formation can be divided into a set of cytoarchitecturally and functionally distinct subregions, involved in different aspects of memory formation. Neuroanatomical disruptions within these subregions are associated with several debilitating brain disorders including Alzheimer’s disease, major depression, schizophrenia, and bipolar disorder. Multi-center brain imaging consortia, such as the Enhancing Neuro Imaging Genetics through Meta-Analysis (ENIGMA) consortium, are interested in studying disease effects on these subregions, and in the genetic factors that affect them. For large-scale studies, automated extraction and subsequent genomic association studies of these hippocampal subregion measures may provide additional insight. Here, we evaluated the test–retest reliability and transplatform reliability (1.5 T versus 3 T) of the subregion segmentation module in the FreeSurfer software package using three independent cohorts of healthy adults, one young (Queensland Twins Imaging Study, N = 39), another elderly (Alzheimer’s Disease Neuroimaging Initiative, ADNI-2, N = 163) and another mixed cohort of healthy and depressed participants (Max Planck Institute, MPIP, N = 598). We also investigated agreement between the most recent version of this algorithm (v6.0) and an older version (v5.3), again using the ADNI-2 and MPIP cohorts in addition to a sample from the Netherlands Study for Depression and Anxiety (NESDA) (N = 221). Finally, we estimated the heritability (h2) of the segmented subregion volumes using the full sample of young, healthy QTIM twins (N = 728). Test–retest reliability was high for all twelve subregions in the 3 T ADNI-2 sample (intraclass correlation coefficient (ICC) = 0.70–0.97) and moderate-to-high in the 4 T QTIM sample (ICC = 0.5–0.89). 
Transplatform reliability was strong for eleven of the twelve subregions (ICC = 0.66–0.96); however, the hippocampal fissure was not consistently reconstructed across 1.5 T and 3 T field strengths (ICC = 0.47–0.57). Between-version agreement was moderate for the hippocampal tail, subiculum and presubiculum (ICC = 0.78–0.84; Dice Similarity Coefficient (DSC) = 0.55–0.70), and poor for all other subregions (ICC = 0.34–0.81; DSC = 0.28–0.51). All hippocampal subregion volumes were highly heritable (h2 = 0.67–0.91). Our findings indicate that eleven of the twelve human hippocampal subregions segmented using FreeSurfer version 6.0 may serve as reliable and informative quantitative phenotypes for future multi-site imaging genetics initiatives such as those of the ENIGMA consortium. PMID:26747746
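
    The test-retest statistic used above, the intraclass correlation coefficient, is computed from a subjects x sessions matrix of volumes. A minimal numpy sketch of the two-way consistency form ICC(3,1), one common variant (the record does not state which form the study used):

```python
import numpy as np

def icc_consistency(Y):
    """ICC(3,1): two-way mixed model, consistency, single measurement.

    Y is an (n_subjects, k_sessions) matrix of one subregion's volumes.
    """
    n, k = Y.shape
    grand = Y.mean()
    row_means = Y.mean(axis=1)                 # per-subject means
    col_means = Y.mean(axis=0)                 # per-session means
    bms = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between-subjects MS
    ems = (((Y - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
           / ((n - 1) * (k - 1)))                          # residual MS
    return (bms - ems) / (bms + (k - 1) * ems)

# Hypothetical demo: retest volumes shifted by a constant offset are
# perfectly consistent, so ICC(3,1) equals 1.
scores = np.array([[10.0, 11.0], [12.0, 13.0], [9.0, 10.0], [14.0, 15.0]])
icc = icc_consistency(scores)
```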

  2. Evaluation of Reliability Coefficients for Two-Level Models via Latent Variable Analysis

    ERIC Educational Resources Information Center

    Raykov, Tenko; Penev, Spiridon

    2010-01-01

    A latent variable analysis procedure for evaluation of reliability coefficients for 2-level models is outlined. The method provides point and interval estimates of group means' reliability, overall reliability of means, and conditional reliability. In addition, the approach can be used to test simple hypotheses about these parameters. The…

  3. Patient reported outcomes in GNE myopathy: incorporating a valid assessment of physical function in a rare disease.

    PubMed

    Slota, Christina; Bevans, Margaret; Yang, Li; Shrader, Joseph; Joe, Galen; Carrillo, Nuria

    2018-05-01

    The aim of this analysis was to evaluate the psychometric properties of three patient reported outcome (PRO) measures characterizing physical function in GNE myopathy: the Human Activity Profile, the Inclusion Body Myositis Functional Rating Scale, and the Activities-specific Balance Confidence scale. This analysis used data from 35 GNE myopathy subjects participating in a natural history study. For construct validity, correlational and known-group analyses were conducted between the PROs and physical assessments. Reliability of the PROs between baseline and 6 months was evaluated using the intra-class correlation coefficient model; internal consistency was tested with Cronbach's alpha. The hypothesized moderate positive correlations for construct validity were supported; the strongest correlation was between the human activity profile adjusted activity score and the adult myopathy assessment endurance subscale score (r = 0.81; p < 0.0001). The PROs were able to discriminate between known high and low functioning groups for the adult myopathy assessment tool. Internal consistency of the PROs was high (α > 0.8) and there was strong reliability (ICC > 0.62). The PROs are valid and reliable measures of physical function in GNE myopathy and should be incorporated in investigations to better understand the impact of progressive muscle weakness on physical function in this rare disease population. Implications for Rehabilitation GNE myopathy is a rare muscle disease that results in slow progressive muscle atrophy and weakness, ultimately leading to wheelchair use and dependence on a caregiver. There is limited knowledge on the impact of this disease on the health-related quality of life, specifically physical function, of this rare disease population. 
    Three patient reported outcomes have been shown to be valid and reliable in GNE myopathy subjects and should be incorporated in future investigations to better understand how progressive muscle weakness impacts physical function in this rare disease population. The patient reported outcome scores of GNE myopathy patients indicate a high risk for falls and impaired physical functioning, so it is important that clinicians assess and provide interventions for these subjects to maintain their functional capacity.
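
    The internal-consistency statistic reported above has a standard closed form. A minimal numpy sketch of Cronbach's alpha on an (respondents x items) score matrix:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()    # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1.0 - item_var / total_var)

# Hypothetical demo: two perfectly correlated items give alpha = 1.
items = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
alpha = cronbach_alpha(items)
```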

  4. A single-loop optimization method for reliability analysis with second order uncertainty

    NASA Astrophysics Data System (ADS)

    Xie, Shaojun; Pan, Baisong; Du, Xiaoping

    2015-08-01

    Reliability analysis may involve random variables and interval variables. In addition, some of the random variables may have interval distribution parameters owing to limited information. This kind of uncertainty is called second order uncertainty. This article develops an efficient reliability method for problems involving the three aforementioned types of uncertain input variables. The analysis produces the maximum and minimum reliability and is computationally demanding because two loops are needed: a reliability analysis loop with respect to random variables and an interval analysis loop for extreme responses with respect to interval variables. The first order reliability method and nonlinear optimization are used for the two loops, respectively. For computational efficiency, the two loops are combined into a single loop by treating the Karush-Kuhn-Tucker (KKT) optimal conditions of the interval analysis as constraints. Three examples are presented to demonstrate the proposed method.
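
    For intuition about second order uncertainty, consider a toy linear limit state in standard normal space with one interval-valued parameter. For a linear g the reliability index is the distance from the origin to the limit-state plane, so the interval on the parameter maps directly to bounds on beta and on the failure probability. This is a deliberately simplified stand-in for the article's KKT-based single-loop method, which handles the general nonlinear case:

```python
import math

# Toy limit state g(u) = c - u1 - u2 in standard normal space, with the
# parameter c known only to lie in an interval (second order uncertainty).
# For this linear g, beta = c / sqrt(2) and Pf = Phi(-beta).

def std_normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

c_lo, c_hi = 2.5, 3.5                   # hypothetical interval parameter bounds
beta_min = c_lo / math.sqrt(2.0)        # minimum reliability index
beta_max = c_hi / math.sqrt(2.0)        # maximum reliability index
pf_max = std_normal_cdf(-beta_min)      # worst-case failure probability
pf_min = std_normal_cdf(-beta_max)      # best-case failure probability
```

    The article's contribution is to obtain these extreme values for nonlinear problems without nesting an interval-analysis loop inside the reliability loop.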

  5. Measuring human remains in the field: Grid technique, total station, or MicroScribe?

    PubMed

    Sládek, Vladimír; Galeta, Patrik; Sosna, Daniel

    2012-09-10

    Although three-dimensional (3D) coordinates for human intra-skeletal landmarks are among the most important data that anthropologists have to record in the field, little is known about the reliability of various measuring techniques. We compared the reliability of three techniques used for 3D measurement of human remains in the field: grid technique (GT), total station (TS), and MicroScribe (MS). We measured 365 field osteometric points on 12 skeletal sequences excavated at the Late Medieval/Early Modern churchyard in Všeruby, Czech Republic. We compared intra-observer, inter-observer, and inter-technique variation using mean difference (MD), mean absolute difference (MAD), standard deviation of difference (SDD), and limits of agreement (LA). All three measuring techniques can be used when accepted error ranges can be measured in centimeters. When a range of accepted error measurable in millimeters is needed, MS offers the best solution. TS can achieve the same reliability as MS, but only when the laser beam is accurately pointed into the center of the prism. When the prism is not accurately oriented, TS produces unreliable data. TS is more sensitive to initialization than is MS. GT measures the human skeleton with acceptable reliability for general purposes but insufficiently when highly accurate skeletal data are needed. We observed high inter-technique variation, indicating that just one technique should be used when spatial data from one individual are recorded. Subadults are measured with slightly lower error than are adults. The effect of maximum excavated skeletal length has little practical significance in field recording. When MS is not available, we offer practical suggestions that can help to increase reliability when measuring the human skeleton in the field. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
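
    The agreement statistics named above (MD, MAD, SDD, and limits of agreement) have standard definitions. A minimal numpy sketch comparing two hypothetical sets of coordinate measurements of the same landmarks:

```python
import numpy as np

def agreement_stats(a, b):
    """MD, MAD, SDD, and Bland-Altman 95% limits of agreement for paired data."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    md = d.mean()                         # mean difference (bias)
    mad = np.abs(d).mean()                # mean absolute difference
    sdd = d.std(ddof=1)                   # standard deviation of differences
    loa = (md - 1.96 * sdd, md + 1.96 * sdd)
    return md, mad, sdd, loa

# Hypothetical demo: two techniques disagreeing by +/- 0.5 mm with no bias.
technique_a = np.array([102.1, 250.4, 399.8, 120.0])
technique_b = technique_a - np.array([0.5, -0.5, 0.5, -0.5])
md, mad, sdd, loa = agreement_stats(technique_a, technique_b)
```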

  6. Chapter 15: Reliability of Wind Turbines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheng, Shuangwen; O'Connor, Ryan

    The global wind industry has witnessed exciting developments in recent years. The future will be even brighter with further reductions in capital and operation and maintenance costs, which can be accomplished with improved turbine reliability, especially when turbines are installed offshore. One opportunity for the industry to improve wind turbine reliability is through the exploration of reliability engineering life data analysis based on readily available data or maintenance records collected at typical wind plants. If adopted and conducted appropriately, these analyses can quickly save operation and maintenance costs in a potentially impactful manner. This chapter discusses wind turbine reliability by highlighting the methodology of reliability engineering life data analysis. It first briefly discusses fundamentals for wind turbine reliability and the current industry status. Then, the reliability engineering method for life analysis, including data collection, model development, and forecasting, is presented in detail and illustrated through two case studies. The chapter concludes with some remarks on potential opportunities to improve wind turbine reliability. An owner and operator's perspective is taken and mechanical components are used to exemplify the potential benefits of reliability engineering analysis to improve wind turbine reliability and availability.
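
    Life data analysis of the kind described typically fits a lifetime distribution to failure records. A minimal sketch of a two-parameter Weibull fit by median-rank regression, a common textbook method for complete (uncensored) data, not necessarily the one used in the chapter's case studies:

```python
import numpy as np

def weibull_mrr(failure_times):
    """Median-rank regression fit of a 2-parameter Weibull (no censoring).

    Linearizes ln(-ln(1-F)) = shape * ln(t) - shape * ln(scale) using
    Bernard's median-rank approximation for the plotting positions F.
    """
    t = np.sort(np.asarray(failure_times, dtype=float))
    n = t.size
    i = np.arange(1, n + 1)
    F = (i - 0.3) / (n + 0.4)                 # Bernard's approximation
    x = np.log(t)
    y = np.log(-np.log(1.0 - F))
    shape, intercept = np.polyfit(x, y, 1)    # least-squares line
    scale = np.exp(-intercept / shape)
    return shape, scale

# Hypothetical demo: synthetic failure times drawn exactly at the Weibull
# quantiles for shape = 2, scale = 100 (e.g., months to gearbox failure),
# so the regression recovers the parameters.
i = np.arange(1, 21)
F = (i - 0.3) / (20 + 0.4)
t_synth = 100.0 * (-np.log(1.0 - F)) ** (1.0 / 2.0)
shape_hat, scale_hat = weibull_mrr(t_synth)
```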

  7. A Most Probable Point-Based Method for Reliability Analysis, Sensitivity Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, Gene J.-W; Newman, Perry A. (Technical Monitor)

    2004-01-01

    A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measurement of safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are provided later to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).
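
    The MPP search is an optimization in standard normal space. A minimal sketch of the classic Hasofer-Lind/Rackwitz-Fiessler iteration on a toy linear limit state, after which FORM approximates the failure probability as Pf ≈ Φ(−β); the limit state here is an assumption for illustration:

```python
import math
import numpy as np

def hlrf_mpp(g, grad_g, n, tol=1e-8, max_iter=100):
    """Hasofer-Lind/Rackwitz-Fiessler iteration for the most probable point."""
    u = np.zeros(n)
    for _ in range(max_iter):
        grad = grad_g(u)
        u_new = (grad @ u - g(u)) / (grad @ grad) * grad
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return u, np.linalg.norm(u)   # MPP and reliability index beta

# Hypothetical linear limit state g(u) = 3 - u1 - u2, for which
# beta = 3 / sqrt(2) exactly.
g = lambda u: 3.0 - u[0] - u[1]
grad_g = lambda u: np.array([-1.0, -1.0])
mpp, beta = hlrf_mpp(g, grad_g, n=2)
pf = 0.5 * math.erfc(beta / math.sqrt(2.0))   # FORM estimate Phi(-beta)
```

    The derivatives developed in the paper describe how this beta (and hence Pf) changes with design variables, which is what an RBDO loop consumes.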

  8. High content image analysis for human H4 neuroglioma cells exposed to CuO nanoparticles.

    PubMed

    Li, Fuhai; Zhou, Xiaobo; Zhu, Jinmin; Ma, Jinwen; Huang, Xudong; Wong, Stephen T C

    2007-10-09

    High content screening (HCS)-based image analysis is becoming an important and widely used research tool. Capitalizing on this technology, ample cellular information can be extracted from high content cellular images. In this study, an automated, reliable and quantitative cellular image analysis system developed in house was employed to quantify the toxic responses of human H4 neuroglioma cells exposed to metal oxide nanoparticles. This system proved to be an essential tool in our study. The cellular images of H4 neuroglioma cells exposed to different concentrations of CuO nanoparticles were sampled using IN Cell Analyzer 1000. A fully automated cellular image analysis system was developed to perform the image analysis for cell viability. A multiple adaptive thresholding method was used to classify the pixels of the nuclei image into three classes: bright nuclei, dark nuclei, and background. During the development of our image analysis methodology, we achieved the following: (1) Gaussian filtering with a proper scale was applied to the cellular images to generate a local intensity maximum inside each nucleus; (2) a novel local intensity maxima detection method based on the gradient vector field was established; and (3) a statistical model based splitting method was proposed to overcome the under-segmentation problem. Computational results indicate that 95.9% of nuclei can be detected and segmented correctly by the proposed image analysis system. The proposed automated image analysis system can effectively segment the images of human H4 neuroglioma cells exposed to CuO nanoparticles. The computational results confirmed our biological finding that human H4 neuroglioma cells had a dose-dependent toxic response to the insult of CuO nanoparticles.
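
    The idea behind step (2), detecting one intensity maximum per smoothed nucleus, can be illustrated with a simple 4-neighbour strict-maximum filter on a synthetic image. The paper's gradient-vector-field method is more robust; this is only a toy stand-in operating on an already-smoothed image:

```python
import numpy as np

def local_maxima(img):
    """Mark interior pixels strictly greater than all four neighbours."""
    c = img[1:-1, 1:-1]
    mask = ((c > img[:-2, 1:-1]) & (c > img[2:, 1:-1]) &
            (c > img[1:-1, :-2]) & (c > img[1:-1, 2:]))
    out = np.zeros_like(img, dtype=bool)
    out[1:-1, 1:-1] = mask
    return out

# Hypothetical smoothed image with two bright "nuclei" peaks; each peak
# should yield exactly one detection (one seed per nucleus).
img = np.zeros((7, 7))
img[2, 2] = 5.0
img[4, 5] = 3.0
peaks = local_maxima(img)
```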

  9. Using frequency analysis to improve the precision of human body posture algorithms based on Kalman filters.

    PubMed

    Olivares, Alberto; Górriz, J M; Ramírez, J; Olivares, G

    2016-05-01

    With the advent of miniaturized inertial sensors, many systems have been developed within the last decade to study and analyze human motion and posture, especially in the medical field. Data measured by the sensors are usually processed by algorithms based on Kalman filters in order to estimate the orientation of the body parts under study. These filters traditionally include fixed parameters, such as the process and observation noise variances, whose values have a large influence on overall performance. It has been demonstrated that the optimal value of these parameters differs considerably for different motion intensities. Therefore, in this work, we show that by applying frequency analysis to determine motion intensity, and varying the formerly fixed parameters accordingly, the overall precision of orientation estimation algorithms can be improved, providing physicians with reliable objective data they can use in their daily practice. Copyright © 2015 Elsevier Ltd. All rights reserved.
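
    The core idea, switching filter parameters by motion intensity, can be sketched as: estimate the fraction of signal energy above a cutoff frequency, then interpolate the measurement-noise variance between a "static" and a "dynamic" value. The cutoff and variance values below are illustrative assumptions, not the paper's tuned parameters:

```python
import numpy as np

def motion_intensity(signal, fs, cutoff_hz=1.0):
    """Fraction of spectral energy above the cutoff: a proxy for motion intensity."""
    spec = np.abs(np.fft.rfft(signal - np.mean(signal))) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    total = spec.sum()
    return spec[freqs > cutoff_hz].sum() / total if total > 0 else 0.0

def select_noise_variance(intensity, r_static=0.01, r_dynamic=1.0):
    """Interpolate the (hypothetical) measurement-noise variance R."""
    return r_static + intensity * (r_dynamic - r_static)

# Hypothetical demo: a slow 0.2 Hz posture sway versus fast 5 Hz motion,
# both sampled at 100 Hz for 10 s.
fs = 100.0
t = np.arange(1000) / fs
slow_intensity = motion_intensity(np.sin(2 * np.pi * 0.2 * t), fs)
fast_intensity = motion_intensity(np.sin(2 * np.pi * 5.0 * t), fs)
```

    In a full filter, the selected variance would be written into the Kalman filter's R (or Q) matrix at each update rather than held fixed.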

  10. A theoretical framework for negotiating the path of emergency management multi-agency coordination.

    PubMed

    Curnin, Steven; Owen, Christine; Paton, Douglas; Brooks, Benjamin

    2015-03-01

    Multi-agency coordination represents a significant challenge in emergency management. The need for liaison officers working in strategic level emergency operations centres to play organizational boundary spanning roles within multi-agency coordination arrangements that are enacted in complex and dynamic emergency response scenarios creates significant research and practical challenges. The aim of the paper is to address a gap in the literature regarding the concept of multi-agency coordination from a human-environment interaction perspective. We present a theoretical framework for facilitating multi-agency coordination in emergency management that is grounded in human factors and ergonomics using the methodology of core-task analysis. As a result we believe the framework will enable liaison officers to cope more efficiently within the work domain. In addition, we provide suggestions for extending the theory of core-task analysis to an alternate high reliability environment. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  11. Dynamic of distribution of human bone marrow-derived mesenchymal stem cells after transplantation into adult unconditioned mice.

    PubMed

    Allers, Carolina; Sierralta, Walter D; Neubauer, Sonia; Rivera, Francisco; Minguell, José J; Conget, Paulette A

    2004-08-27

    The use of mesenchymal stem cells (MSC) for cell therapy relies on their capacity to engraft and survive long-term in the appropriate target tissue(s). Animal models have demonstrated that the syngeneic or xenogeneic transplantation of MSC results in donor engraftment into the bone marrow and other tissues of conditioned recipients. However, there are no reliable data showing the fate of human MSC infused into conditioned or unconditioned adult recipients. In the present study, the authors investigated, by using imaging, polymerase chain reaction (PCR), and in situ hybridization, the biodistribution of human bone marrow-derived MSC after intravenous infusion into unconditioned adult nude mice. As assessed by imaging (gamma camera), PCR, and in situ hybridization analysis, the authors' results demonstrate the presence of human MSC in bone marrow, spleen, and mesenchymal tissues of recipient mice. These results suggest that human MSC transplantation into unconditioned recipients represents an option for providing cellular therapy and avoids the complications associated with drugs or radiation conditioning.

  12. Evaluation of a computerized aid for creating human behavioral representations of human-computer interaction.

    PubMed

    Williams, Kent E; Voigt, Jeffrey R

    2004-01-01

    The research reported herein presents the results of an empirical evaluation that focused on the accuracy and reliability of cognitive models created using a computerized tool: the cognitive analysis tool for human-computer interaction (CAT-HCI). A sample of participants, expert in interacting with a newly developed tactical display for the U.S. Army's Bradley Fighting Vehicle, individually modeled their knowledge of 4 specific tasks employing the CAT-HCI tool. Measures of the accuracy and consistency of task models created by these task domain experts using the tool were compared with task models created by a double expert. The findings indicated a high degree of consistency and accuracy between the different "single experts" in the task domain in terms of the resultant models generated using the tool. Actual or potential applications of this research include assessing human-computer interaction complexity, determining the productivity of human-computer interfaces, and analyzing an interface design to determine whether methods can be automated.

  13. Understanding Human Mobility from Twitter

    PubMed Central

    Jurdak, Raja; Zhao, Kun; Liu, Jiajun; AbouJaoude, Maurice; Cameron, Mark; Newth, David

    2015-01-01

    Understanding human mobility is crucial for a broad range of applications from disease prediction to communication networks. Most efforts on studying human mobility have so far used private and low resolution data, such as call data records. Here, we propose Twitter as a proxy for human mobility, as it relies on publicly available data and provides high resolution positioning when users opt to geotag their tweets with their current location. We analyse a Twitter dataset with more than six million geotagged tweets posted in Australia, and we demonstrate that Twitter can be a reliable source for studying human mobility patterns. Our analysis shows that geotagged tweets can capture rich features of human mobility, such as the diversity of movement orbits among individuals and of movements within and between cities. We also find that short- and long-distance movers both spend most of their time in large metropolitan areas, in contrast with intermediate-distance movers’ movements, reflecting the impact of different modes of travel. Our study provides solid evidence that Twitter can indeed be a useful proxy for tracking and predicting human movement. PMID:26154597
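
    A standard way to summarize an individual's "movement orbit" from geotagged points is the radius of gyration. A minimal sketch on planar coordinates (a real analysis would first project longitude/latitude, e.g. to kilometres):

```python
import numpy as np

def radius_of_gyration(points):
    """RMS distance of visit locations from their centroid (planar coords)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    return np.sqrt(((pts - centroid) ** 2).sum(axis=1).mean())

# Hypothetical demo: four visits at the corners of a 2 x 2 square,
# whose radius of gyration is sqrt(2).
visits = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
rg = radius_of_gyration(visits)
```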

  14. Epigenetic features of human telomeres.

    PubMed

    Cubiles, María D; Barroso, Sonia; Vaquero-Sedas, María I; Enguix, Alicia; Aguilera, Andrés; Vega-Palas, Miguel A

    2018-03-16

    Although subtelomeric regions in humans are heterochromatic, the epigenetic nature of human telomeres remains controversial. This controversy might have been influenced by the confounding effect of subtelomeric regions and interstitial telomeric sequences (ITSs) on telomeric chromatin structure analyses. In addition, different human cell lines might carry diverse epigenetic marks at telomeres. We have developed a reliable procedure to study the chromatin structure of human telomeres independently of subtelomeres and ITSs. This procedure is based on the statistical analysis of multiple ChIP-seq experiments. We have found that human telomeres are not enriched in the heterochromatic H3K9me3 mark in most of the common laboratory cell lines, including embryonic stem cells. Instead, they are labeled with H4K20me1 and H3K27ac, which might be established by p300. These results together with previously published data argue that subtelomeric heterochromatin might control human telomere functions. Interestingly, U2OS cells that exhibit alternative lengthening of telomeres have heterochromatic levels of H3K9me3 in their telomeres.

  15. Observing Consistency in Online Communication Patterns for User Re-Identification.

    PubMed

    Adeyemi, Ikuesan Richard; Razak, Shukor Abd; Salleh, Mazleena; Venter, Hein S

    2016-01-01

    Comprehension of the statistical and structural mechanisms governing human dynamics in online interaction plays a pivotal role in online user identification, online profile development, and recommender systems. However, building a characteristic model of human dynamics on the Internet involves a complete analysis of the variations in human activity patterns, which is a complex process. This complexity is inherent in human dynamics and has not been extensively studied to reveal the structural composition of human behavior. A typical method of anatomizing such a complex system is viewing all independent interconnectivity that constitutes the complexity. An examination of the various dimensions of human communication pattern in online interactions is presented in this paper. The study employed reliable server-side web data from 31 known users to explore characteristics of human-driven communications. Various machine-learning techniques were explored. The results revealed that each individual exhibited a relatively consistent, unique behavioral signature and that the logistic regression model and model tree can be used to accurately distinguish online users. These results are applicable to one-to-one online user identification processes, insider misuse investigation processes, and online profiling in various areas.
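
    The logistic-regression distinction between users can be sketched with a numpy-only gradient-descent fit on a tiny synthetic two-user feature set; the features and data here are invented for illustration and are far simpler than the study's server-side behavioral features:

```python
import numpy as np

def train_logistic(X, y, lr=0.5, iters=2000):
    """Plain gradient-descent logistic regression (numpy-only sketch)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(user B)
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * (p - y).mean()
    return w, b

def predict(X, w, b):
    return (1.0 / (1.0 + np.exp(-(X @ w + b))) >= 0.5).astype(int)

# Hypothetical behavioral features for two users with distinct signatures.
X = np.array([[1.0, 0.0], [0.9, 0.1], [1.1, -0.1],
              [0.0, 1.0], [0.1, 0.9], [-0.1, 1.1]])
y = np.array([0, 0, 0, 1, 1, 1])   # 0 = user A, 1 = user B
w, b = train_logistic(X, y)
accuracy = (predict(X, w, b) == y).mean()
```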

  16. PopHuman: the human population genomics browser

    PubMed Central

    Mulet, Roger; Villegas-Mirón, Pablo; Hervas, Sergi; Sanz, Esteve; Velasco, Daniel; Bertranpetit, Jaume; Laayouni, Hafid

    2018-01-01

    The 1000 Genomes Project (1000GP) represents the most comprehensive world-wide nucleotide variation data set so far in humans, providing the sequencing and analysis of 2504 genomes from 26 populations and reporting >84 million variants. The availability of this sequence data provides the human lineage with an invaluable resource for population genomics studies, allowing the testing of molecular population genetics hypotheses and eventually the understanding of the evolutionary dynamics of genetic variation in human populations. Here we present PopHuman, a new population genomics-oriented genome browser based on JBrowse that allows the interactive visualization and retrieval of an extensive inventory of population genetics metrics. Efficient and reliable parameter estimates have been computed using a novel pipeline that faces the unique features and limitations of the 1000GP data, and include a battery of nucleotide variation measures, divergence and linkage disequilibrium parameters, as well as different tests of neutrality, estimated in non-overlapping windows along the chromosomes and in annotated genes for all 26 populations of the 1000GP. PopHuman is open and freely available at http://pophuman.uab.cat. PMID:29059408
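
    One of the simplest metrics in such an inventory is per-window nucleotide diversity (pi). A minimal sketch computing it from per-site alternate-allele counts, a toy version of what a 1000GP-style pipeline does per non-overlapping window, ignoring missing data and phasing:

```python
import numpy as np

def window_pi(alt_counts, n_chrom, window_len):
    """Average pairwise differences per site within one genomic window.

    alt_counts: alternate-allele count at each variant site in the window.
    n_chrom:    number of sampled chromosomes (2 x diploid individuals).
    """
    c = np.asarray(alt_counts, dtype=float)
    pairwise_diffs = c * (n_chrom - c)            # mismatching pairs per site
    n_pairs = n_chrom * (n_chrom - 1) / 2.0       # total chromosome pairs
    return pairwise_diffs.sum() / (n_pairs * window_len)

# Hypothetical demo: a single heterozygous site among 2 chromosomes in a
# 1 bp window gives pi = 1 (the one pair differs at the one site).
pi = window_pi([1], n_chrom=2, window_len=1)
```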

  17. Epigenetic features of human telomeres

    PubMed Central

    Cubiles, María D; Barroso, Sonia; Vaquero-Sedas, María I; Enguix, Alicia; Aguilera, Andrés; Vega-Palas, Miguel A

    2018-01-01

    Although subtelomeric regions in humans are heterochromatic, the epigenetic nature of human telomeres remains controversial. This controversy might have been influenced by the confounding effect of subtelomeric regions and interstitial telomeric sequences (ITSs) on telomeric chromatin structure analyses. In addition, different human cell lines might carry diverse epigenetic marks at telomeres. We have developed a reliable procedure to study the chromatin structure of human telomeres independently of subtelomeres and ITSs. This procedure is based on the statistical analysis of multiple ChIP-seq experiments. We have found that human telomeres are not enriched in the heterochromatic H3K9me3 mark in most of the common laboratory cell lines, including embryonic stem cells. Instead, they are labeled with H4K20me1 and H3K27ac, which might be established by p300. These results together with previously published data argue that subtelomeric heterochromatin might control human telomere functions. Interestingly, U2OS cells that exhibit alternative lengthening of telomeres have heterochromatic levels of H3K9me3 in their telomeres. PMID:29361030

  18. One-year test-retest reliability of intrinsic connectivity network fMRI in older adults

    PubMed Central

    Guo, Cong C.; Kurth, Florian; Zhou, Juan; Mayer, Emeran A.; Eickhoff, Simon B; Kramer, Joel H.; Seeley, William W.

    2014-01-01

    “Resting-state” or task-free fMRI can assess intrinsic connectivity network (ICN) integrity in health and disease, suggesting a potential for use of these methods as disease-monitoring biomarkers. Numerous analytical options are available, including model-driven ROI-based correlation analysis and model-free independent component analysis (ICA). High test-retest reliability will be a necessary feature of a successful ICN biomarker, yet available reliability data remain limited. Here, we examined ICN fMRI test-retest reliability in 24 healthy older subjects scanned roughly one year apart. We focused on the salience network, a disease-relevant ICN not previously subjected to reliability analysis. Most ICN analytical methods proved reliable (intraclass correlation coefficients > 0.4) and could be further improved by wavelet analysis. Seed-based ROI correlation analysis showed high map-wise reliability, whereas graph theoretical measures and temporal concatenation group ICA produced the most reliable individual unit-wise outcomes. Including global signal regression in ROI-based correlation analyses reduced reliability. Our study provides a direct comparison between the most commonly used ICN fMRI methods and potential guidelines for measuring intrinsic connectivity in aging control and patient populations over time. PMID:22446491
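    The intraclass correlation threshold used above (ICC > 0.4) can be illustrated with a minimal one-way random-effects ICC(1,1) computation on test-retest data. This is a generic sketch of the statistic, not the study's actual pipeline, and the toy scores are invented:

```python
def icc_oneway(scores):
    # One-way random-effects ICC(1,1) for test-retest data:
    # scores[i] holds the k repeated measurements for subject i.
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    subj_means = [sum(row) / k for row in scores]
    # Between-subject and within-subject mean squares from one-way ANOVA.
    msb = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(scores, subj_means) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Perfect session-to-session agreement yields ICC = 1; anticorrelated
# retest scores yield a negative ICC, far below the 0.4 reliability bar.
print(icc_oneway([[1, 1], [2, 2], [3, 3]]))   # 1.0
print(icc_oneway([[1, 3], [2, 2], [3, 1]]))   # -1.0
```

    A reliability study would compute this per connectivity measure across sessions and compare the values against the 0.4 threshold.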

  19. The flaws and human harms of animal experimentation.

    PubMed

    Akhtar, Aysha

    2015-10-01

    Nonhuman animal ("animal") experimentation is typically defended by arguments that it is reliable, that animals provide sufficiently good models of human biology and diseases to yield relevant information, and that, consequently, its use provides major human health benefits. I demonstrate that a growing body of scientific literature critically assessing the validity of animal experimentation generally (and animal modeling specifically) raises important concerns about its reliability and predictive value for human outcomes and for understanding human physiology. The unreliability of animal experimentation across a wide range of areas undermines scientific arguments in favor of the practice. Additionally, I show how animal experimentation often significantly harms humans through misleading safety studies, potential abandonment of effective therapeutics, and direction of resources away from more effective testing methods. The resulting evidence suggests that the collective harms and costs to humans from animal experimentation outweigh potential benefits and that resources would be better invested in developing human-based testing methods.

  20. NASA Applications and Lessons Learned in Reliability Engineering

    NASA Technical Reports Server (NTRS)

    Safie, Fayssal M.; Fuller, Raymond P.

    2011-01-01

    Since the Shuttle Challenger accident in 1986, communities across NASA have been developing and extensively using quantitative reliability and risk assessment methods in their decision-making processes. This paper discusses several reliability engineering applications that NASA has used over the years to support the design, development, and operation of critical space flight hardware. Specifically, the paper discusses several reliability engineering applications used by NASA in areas such as risk management, inspection policies, component upgrades, reliability growth, integrated failure analysis, and physics-based probabilistic engineering analysis. In each of these areas, the paper provides a brief discussion of a case study to demonstrate the value added and the criticality of reliability engineering in supporting NASA project and program decisions to fly safely. Examples of these case studies are reliability-based life limit extension of Space Shuttle Main Engine (SSME) hardware, reliability-based inspection policies for the Auxiliary Power Unit (APU) turbine disc, probabilistic structural engineering analysis for reliability prediction of the SSME alternate turbopump development, the impact of External Tank (ET) foam reliability on Space Shuttle system risk, and reliability-based Space Shuttle upgrades for safety. Special attention is given to the physics-based probabilistic engineering analysis applications and their critical role in evaluating the reliability of NASA development hardware, including their potential use in a research and technology development environment.

  1. Bearing Procurement Analysis Method by Total Cost of Ownership Analysis and Reliability Prediction

    NASA Astrophysics Data System (ADS)

    Trusaji, Wildan; Akbar, Muhammad; Sukoyo; Irianto, Dradjad

    2018-03-01

    In bearing procurement analysis, price and reliability must both be considered as decision criteria, since price determines the direct cost (the acquisition cost) while bearing reliability determines indirect costs such as maintenance cost. Although the indirect cost is hard to identify and measure, it contributes substantially to the overall cost incurred, so the indirect cost of reliability must be considered when making a bearing procurement analysis. This paper describes a bearing evaluation method based on total cost of ownership analysis that treats both price and maintenance cost as decision criteria. Furthermore, since failure data are scarce at the bearing evaluation phase, a reliability prediction method is used to predict bearing reliability from its dynamic load rating. With this method, a bearing with a higher price but higher reliability is preferable for long-term planning, whereas for short-term planning the cheaper but less reliable bearing is preferable. This context dependence can give rise to conflict between stakeholders, so the planning horizon needs to be agreed by all stakeholders before making a procurement decision.
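    The horizon-dependent trade-off described above can be sketched numerically. This sketch assumes the common basic rating life formula for ball bearings, L10 = (C/P)^3 million revolutions, and entirely hypothetical prices, loads, and replacement costs; it is not the paper's actual cost model:

```python
def l10_hours(C_kN, P_kN, rpm):
    # Basic rating life for a ball bearing: L10 = (C/P)^3 million revolutions,
    # converted to operating hours at the given shaft speed.
    return (C_kN / P_kN) ** 3 * 1e6 / (rpm * 60)

def total_cost_of_ownership(price, C_kN, P_kN, rpm, horizon_h, replacement_cost):
    # Crude TCO: acquisition price plus expected replacement cost over the horizon.
    expected_replacements = horizon_h / l10_hours(C_kN, P_kN, rpm)
    return price + expected_replacements * replacement_cost

# Hypothetical bearings at a 5 kN equivalent load and 3000 rpm:
# a cheap low-rating bearing vs. a premium high-rating one.
cheap = dict(price=50.0, C_kN=30.0)
premium = dict(price=150.0, C_kN=45.0)
for horizon in (500.0, 8000.0):
    costs = {name: total_cost_of_ownership(b["price"], b["C_kN"], 5.0,
                                           3000.0, horizon, 100.0)
             for name, b in [("cheap", cheap), ("premium", premium)]}
    print(horizon, costs)  # short horizon favours cheap, long horizon favours premium
```

    With these illustrative numbers the cheap bearing wins over a 500-hour horizon and the premium one over an 8000-hour horizon, reproducing the conflict the abstract describes.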

  2. Automated GC-MS analysis of free amino acids in biological fluids.

    PubMed

    Kaspar, Hannelore; Dettmer, Katja; Gronwald, Wolfram; Oefner, Peter J

    2008-07-15

    A gas chromatography-mass spectrometry (GC-MS) method was developed for the quantitative analysis of free amino acids as their propyl chloroformate derivatives in biological fluids. Derivatization with propyl chloroformate is carried out directly in the biological samples without prior protein precipitation or solid-phase extraction of the amino acids, thereby allowing automation of the entire procedure, including addition of reagents, extraction and injection into the GC-MS. The total analysis time was 30 min and 30 amino acids could be reliably quantified using 19 stable isotope-labeled amino acids as internal standards. Limits of detection (LOD) and lower limits of quantification (LLOQ) were in the range of 0.03-12 microM and 0.3-30 microM, respectively. The method was validated using a certified amino acid standard and reference plasma, and its applicability to different biological fluids was shown. Intra-day precision for the analysis of human urine, blood plasma, and cell culture medium was 2.0-8.8%, 0.9-8.3%, and 2.0-14.3%, respectively, while the inter-day precision for human urine was 1.5-14.1%.

  3. Molecular docking and 3D-QSAR studies on inhibitors of DNA damage signaling enzyme human PARP-1.

    PubMed

    Fatima, Sabiha; Bathini, Raju; Sivan, Sree Kanth; Manga, Vijjulatha

    2012-08-01

    Poly (ADP-ribose) polymerase-1 (PARP-1) operates in a DNA damage signaling network. Molecular docking and three-dimensional quantitative structure-activity relationship (3D-QSAR) studies were performed on human PARP-1 inhibitors. The docked conformation obtained for each molecule was used directly for 3D-QSAR analysis. Molecules were divided into a training set and a test set randomly in four different ways, and partial least squares analysis was performed to obtain QSAR models using comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA). The derived models showed good statistical reliability, as is evident from their r², q²(loo) and r²(pred) values. To obtain a consensus of predictive ability across all the models, the average regression coefficient r²(avg) was calculated; the CoMFA and CoMSIA models showed values of 0.930 and 0.936, respectively. Information obtained from the best 3D-QSAR model was applied to the optimization of the lead molecule and the design of novel potential inhibitors.
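    The q²(loo) statistic reported above is the leave-one-out cross-validated analogue of r²: each molecule is predicted by a model fitted without it, and q² = 1 - PRESS/SS. The following is a minimal single-descriptor sketch of that idea with invented data; the actual CoMFA/CoMSIA models are multivariate PLS, which this does not reproduce:

```python
def fit_line(xs, ys):
    # Ordinary least-squares slope and intercept for one descriptor.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def q2_loo(xs, ys):
    # Leave-one-out cross-validation: q2 = 1 - PRESS / SS_total.
    press = 0.0
    for i in range(len(xs)):
        m, b = fit_line(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        press += (ys[i] - (m * xs[i] + b)) ** 2
    my = sum(ys) / len(ys)
    ss = sum((y - my) ** 2 for y in ys)
    return 1.0 - press / ss

descriptor = [float(x) for x in range(10)]            # toy molecular descriptor
activities = [2.0 * x + 1.0 for x in range(10)]       # perfectly linear "activity"
print(q2_loo(descriptor, activities))                  # ~1.0 for a perfect fit
```

    A model that merely memorizes noise scores well on r² but poorly on q²(loo), which is why cross-validated statistics are the standard check on QSAR models.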

  4. Roadmap to a Sustainable Structured Trusted Employee Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coates, Cameron W; Eisele, Gerhard R

    2013-08-01

    Organizations (facilities, regulatory agencies, or countries) have a compelling interest in ensuring that individuals who occupy sensitive positions affording access to chemical, biological, radiological and nuclear (CBRN) materials, facilities and programs are functioning at their highest level of reliability. Human reliability and human performance relate not only to security but also to safety. Reliability has a logical and direct relationship to trustworthiness, since the organization is placing trust in its employees to conduct themselves in a secure, safe, and dependable manner. This document provides an organization with a roadmap to implementing a successful and sustainable Structured Trusted Employee Program (STEP).

  5. Quantitative architectural analysis: a new approach to cortical mapping.

    PubMed

    Schleicher, A; Palomero-Gallagher, N; Morosan, P; Eickhoff, S B; Kowalski, T; de Vos, K; Amunts, K; Zilles, K

    2005-12-01

    Recent progress in anatomical and functional MRI has revived the demand for a reliable topographic map of the human cerebral cortex. To date, interpretations of specific activations found in functional imaging studies, and their topographical analysis in a spatial reference system, are often still based on classical architectonic maps. The most commonly used reference atlas is that of Brodmann and his successors, despite its severe inherent drawbacks. One obvious weakness of traditional architectural mapping is the subjective nature of localising borders between cortical areas by means of a purely visual, microscopical examination of histological specimens. To overcome this limitation, more objective, quantitative mapping procedures have been established in past years. The quantification of the neocortical laminar pattern by defining intensity line profiles across the cortical layers has a long tradition. In recent years, this method has been extended to enable a reliable, reproducible mapping of the cortex based on image analysis and multivariate statistics. Methodological approaches to such algorithm-based cortical mapping were published for various architectural modalities. In our contribution, principles of algorithm-based mapping are described for cyto- and receptorarchitecture. In a cytoarchitectural parcellation of the human auditory cortex using a sliding window procedure, the classical areal pattern of the human superior temporal gyrus was modified by replacing Brodmann's areas 41, 42, 22 and parts of area 21 with a novel, more detailed map. An extension and optimisation of the sliding window procedure to the specific requirements of receptorarchitectonic mapping is also described, using the macaque central sulcus and adjacent superior parietal lobule as a second, biologically independent example.
Algorithm-based mapping procedures, however, are not limited to these two architectural modalities; they can be applied to all images in which a laminar cortical pattern can be detected and quantified, e.g. myeloarchitecture and in vivo high-resolution MR imaging. Defining cortical borders based on changes in cortical lamination in high-resolution, in vivo structural MR images will result in a rapid increase in our knowledge of the structural parcellation of the human cerebral cortex.
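The sliding-window idea can be sketched as follows: score every candidate border position along the cortical ribbon by the dissimilarity of the laminar intensity profiles on either side, and take the position of the sharpest change. This toy version uses a Euclidean distance between block-mean profiles on synthetic data, where the published procedure uses a Mahalanobis distance with multivariate statistics:

```python
import math

def sharpest_border(profiles, block=3):
    # profiles: ordered laminar intensity profiles along the cortical ribbon.
    # Score each position by the distance between the mean profiles of the
    # two flanking blocks; the maximum marks the most likely areal border.
    best_pos, best_score = None, -1.0
    for pos in range(block, len(profiles) - block):
        left = profiles[pos - block:pos]
        right = profiles[pos:pos + block]
        mean_l = [sum(col) / block for col in zip(*left)]
        mean_r = [sum(col) / block for col in zip(*right)]
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(mean_l, mean_r)))
        if d > best_score:
            best_pos, best_score = pos, d
    return best_pos

# Synthetic ribbon: 10 profiles from one "area", then 10 from another.
ribbon = [[1.0, 1.0, 1.0, 1.0]] * 10 + [[5.0, 5.0, 5.0, 5.0]] * 10
print(sharpest_border(ribbon))  # 10
```

On real data the block statistics are noisy, which is why the published procedure relies on multivariate distance measures and significance testing rather than a raw maximum.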

  6. NASA Advanced Exploration Systems: Advancements in Life Support Systems

    NASA Technical Reports Server (NTRS)

    Shull, Sarah A.; Schneider, Walter F.

    2016-01-01

    The NASA Advanced Exploration Systems (AES) Life Support Systems (LSS) project strives to develop reliable, energy-efficient, and low-mass spacecraft systems to provide environmental control and life support systems (ECLSS) critical to enabling long duration human missions beyond low Earth orbit (LEO). Highly reliable, closed-loop life support systems are among the capabilities required for the longer duration human space exploration missions assessed by NASA’s Habitability Architecture Team.

  7. Probabilistic Finite Element Analysis & Design Optimization for Structural Designs

    NASA Astrophysics Data System (ADS)

    Deivanayagam, Arumugam

    This study focuses on implementing the probabilistic nature of material properties (Kevlar® 49) in the existing deterministic finite element analysis (FEA) of a fabric-based engine containment system through Monte Carlo simulations (MCS), and on implementing probabilistic analysis in engineering designs through Reliability Based Design Optimization (RBDO). First, the emphasis is on experimental data analysis focusing on probabilistic distribution models which characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled using experimental data analysis and implemented along with an existing spiral modeling scheme (SMS) and a user-defined constitutive model (UMAT) for fabric-based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure pattern and exit velocities of the models, and the solutions are compared with NASA experimental tests and deterministic results. MCS with probabilistic material data give a better perspective on the results than a single deterministic simulation. The next part of the research implements the probabilistic material properties in engineering designs. The main aim of structural design is to obtain optimal solutions. However, in a deterministic optimization problem, even though the structure is cost-effective, it may be highly unreliable if the uncertainty that may be associated with the system (material properties, loading, etc.) is not represented or considered in the solution process. A reliable and optimal solution can be obtained by performing reliability analysis along with the deterministic optimization, which is RBDO. In the RBDO problem formulation, in addition to structural performance constraints, reliability constraints are also considered.
This part of the research starts with an introduction to reliability analysis, covering first-order and second-order reliability analysis, followed by the simulation techniques that are performed to obtain the probability of failure and reliability of structures. Next, a decoupled RBDO procedure is proposed with a new reliability analysis formulation with sensitivity analysis, which is performed to remove the highly reliable constraints in the RBDO, thereby reducing the computational time and function evaluations. Finally, the implementation of the reliability analysis concepts and RBDO in finite element 2D truss problems and a planar beam problem is presented and discussed.
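The Monte Carlo step described above amounts to sampling the uncertain quantities and counting failures. Here is a generic stress-strength sketch with illustrative normal distributions; the parameter values are invented and this is not the Kevlar® 49 data or the LS-DYNA model from the study:

```python
import random

def monte_carlo_failure_probability(n_trials=100_000, seed=42):
    # Sample load and strength from assumed normal distributions and
    # count the trials in which the load exceeds the strength.
    rng = random.Random(seed)
    failures = sum(
        1 for _ in range(n_trials)
        if rng.gauss(70.0, 10.0) > rng.gauss(100.0, 10.0)  # load > strength
    )
    return failures / n_trials

pf = monte_carlo_failure_probability()
print(pf, 1.0 - pf)  # estimated probability of failure, and reliability
```

    For these distributions the analytic answer is P(Z > 30/sqrt(200)) which is about 0.017, so the simulation can be checked against a closed form; for the fabric containment model no such closed form exists, which is what makes MCS attractive.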

  8. DNA data in criminal procedure in the European fundamental rights context.

    PubMed

    Soleto, Helena

    2014-01-01

    Despite being one of the most useful and reliable identification tools, DNA profiling in criminal procedure balances on the border between the limitation and the violation of Fundamental Rights, beginning with the collection of the sample, its analysis, and its use, and ending with its processing. Throughout this complex process, violation of human or fundamental rights is possible: the right to physical and moral integrity, the right not to be subject to degrading treatment, the right not to incriminate oneself, the right to family privacy together with that of not incriminating descendants or relatives in general, the right to personal development, and the right to informative self-determination. This article presents an analysis of all the above-mentioned phases of DNA processing in criminal procedure in the light of possible violations of some Fundamental Rights, while discarding some of them on the basis of European human rights protection standards. As the case-law of the European Court of Human Rights shows, legislation on DNA collection and DNA-related data processing, and its implementation, does not always respect all human rights and should be carefully considered before its adoption and during its application.

  9. Performance characteristics of a visual-search human-model observer with sparse PET image data

    NASA Astrophysics Data System (ADS)

    Gifford, Howard C.

    2012-02-01

    As predictors of human performance in detection-localization tasks, statistical model observers can have problems with tasks that are primarily limited by target contrast or structural noise. Model observers with a visual-search (VS) framework may provide a more reliable alternative. This framework provides for an initial holistic search that identifies suspicious locations for analysis by a statistical observer. A basic VS observer for emission tomography focuses on hot "blobs" in an image and uses a channelized nonprewhitening (CNPW) observer for analysis. In [1], we investigated this model for a contrast-limited task with SPECT images; herein, a statistical-noise-limited task involving PET images is considered. An LROC study used 2D image slices with liver, lung and soft-tissue tumors. Human and model observers read the images in coronal, sagittal and transverse display formats. The study thus measured the detectability of tumors in a given organ as a function of display format. The model observers were applied under several task variants that tested their response to structural noise both at the organ boundaries alone and over the organs as a whole. As measured by correlation with the human data, the VS observer outperformed the CNPW scanning observer.

  10. Measurement of cognitive performance in computer programming concept acquisition: interactive effects of visual metaphors and the cognitive style construct.

    PubMed

    McKay, E

    2000-01-01

    An innovative research program was devised to investigate the interactive effect of instructional strategies enhanced with text-plus-textual metaphors or text-plus-graphical metaphors, and cognitive style, on the acquisition of programming concepts. The Cognitive Styles Analysis (CSA) program (Riding, 1991) was used to establish the participants' cognitive style. The QUEST Interactive Test Analysis System (Adams and Khoo, 1996) provided the cognitive performance measuring tool, which ensured an absence of measurement error in the programming knowledge testing instruments. Reliability of the instrumentation was therefore assured through the calibration techniques utilized by the QUEST estimates, providing predictability of the research design. A means analysis of the QUEST data, using the Cohen (1977) approach to effect size and statistical power, further quantified the significance of the findings. The experimental methodology adopted for this research links the disciplines of instructional science, cognitive psychology, and objective measurement to provide reliable mechanisms for beneficial use in the evaluation of cognitive performance by the education, training and development sectors. Furthermore, the research outcomes will be of interest to educators, cognitive psychologists, communications engineers, and computer scientists specializing in computer-human interactions.

  11. Development and validation of technique for in-vivo 3D analysis of cranial bone graft survival

    NASA Astrophysics Data System (ADS)

    Bernstein, Mark P.; Caldwell, Curtis B.; Antonyshyn, Oleh M.; Ma, Karen; Cooper, Perry W.; Ehrlich, Lisa E.

    1997-05-01

    Bone autografts are routinely employed in the reconstruction of facial deformities resulting from trauma, tumor ablation or congenital malformations. The combined use of post-operative 3D CT and SPECT imaging provides a means for quantitative in vivo evaluation of bone graft volume and osteoblastic activity. The specific objectives of this study were: (1) Determine the reliability and accuracy of interactive computer-assisted analysis of bone graft volumes based on 3D CT scans; (2) Determine the error in CT/SPECT multimodality image registration; (3) Determine the error in SPECT/SPECT image registration; and (4) Determine the reliability and accuracy of CT-guided SPECT uptake measurements in cranial bone grafts. Five human cadaver heads served as anthropomorphic models for all experiments. Four cranial defects were created in each specimen with inlay and onlay split skull bone grafts and reconstructed to skull and malar recipient sites. To acquire all images, each specimen was CT scanned and coated with Technetium-doped paint. For purposes of validation, skulls were landmarked with 1/16-inch ball-bearings and Indium. This study provides a new technique relating anatomy and physiology for the analysis of cranial bone graft survival.

  12. Automated Computerized Analysis of Speech in Psychiatric Disorders

    PubMed Central

    Cohen, Alex S.; Elvevåg, Brita

    2014-01-01

    Purpose of Review: Disturbances in communication are a hallmark of severe mental illness (SMI). Recent technological advances have paved the way for objectifying communication using automated computerized linguistic and acoustic analysis. We review recent studies applying various computer-based assessments to the natural language produced by adult patients with severe mental illness. Recent Findings: Automated computerized methods afford tools with which it is possible to objectively evaluate patients in a reliable, valid and efficient manner that complements human ratings. Crucially, these measures correlate with important clinical measures. The clinical relevance of these novel metrics has been demonstrated by showing their relationship to functional outcome measures, their in vivo link to classic ‘language’ regions in the brain, and, in the case of linguistic analysis, their relationship to candidate genes for severe mental illness. Summary: Computer-based assessments of natural language afford a framework with which to measure communication disturbances in adults with SMI. Emerging evidence suggests that they can be reliable and valid, and overcome many practical limitations of more traditional assessment methods. The advancement of these technologies offers unprecedented potential for measuring and understanding some of the most crippling symptoms of some of the most debilitating illnesses known to humankind. PMID:24613984

  13. Automated data processing of {1H-decoupled} 13C MR spectra acquired from human brain in vivo

    NASA Astrophysics Data System (ADS)

    Shic, Frederick; Ross, Brian

    2003-06-01

    In clinical 13C infusion studies, broadband excitation of 200 ppm of the human brain yields 13C MR spectra with a time resolution of 2-5 min and generates up to 2000 metabolite peaks over 2 h. We describe a fast, automated, observer-independent technique for processing {1H-decoupled} 13C spectra. Quantified 13C spectroscopic signals are determined before and after the administration of [1-13C]glucose and/or [1-13C]acetate in human subjects. Stepwise improvements in data processing are illustrated by examples of normal and pathological results. Variation in the analysis of individual 13C resonances ranged between 2 and 14%. Using this method it is possible to reliably identify subtle metabolic effects of brain disease, including Alzheimer's disease and epilepsy.

  14. Dental DNA fingerprinting in identification of human remains

    PubMed Central

    Girish, KL; Rahman, Farzan S; Tippu, Shoaib R

    2010-01-01

    The recent advances in molecular biology have revolutionized all aspects of dentistry. DNA, the language of life, yields information beyond our imagination, both in health and disease. DNA fingerprinting is a tool used to unravel the mysteries associated with the oral cavity and its manifestations during diseased conditions. It is being increasingly used in analyzing various scenarios related to forensic science. The technical advances in molecular biology have propelled the analysis of DNA into routine usage in crime laboratories for rapid and early diagnosis. DNA is an excellent means for identification of unidentified human remains. As dental pulp is surrounded by dentin and enamel, which form a dental armor, it offers the best source of DNA for reliable genetic typing in forensic science. This paper summarizes the recent literature on the use of this technique in the identification of unidentified human remains. PMID:21731342

  15. The Use of Empirical Data Sources in HRA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bruce Hallbert; David Gertman; Julie Marble

    This paper presents a review of available information related to human performance to support Human Reliability Analysis (HRA) performed for nuclear power plants (NPPs). A number of data sources are identified as potentially useful. These include NPP licensee event reports (LERs), augmented inspection team (AIT) reports, operator requalification data, results from the literature in experimental psychology, and the Aviation Safety Reporting System (ASRS). The paper discusses how utilizing such information improves our capability to model and quantify human performance. In particular, the paper discusses how information related to performance shaping factors (PSFs) can be extracted from empirical data to determine the size of their effects, their relative effects, as well as their interactions. The paper concludes that appropriate use of existing sources can help address some of the important issues we currently face in HRA.

  16. Cuttlefish Sepia officinalis Preferentially Respond to Bottom Rather than Side Stimuli When Not Allowed Adjacent to Tank Walls

    DTIC Science & Technology

    2015-10-14

    especially because cuttlefish are colorblind. Another possible concern is the flicker frequency of the plasma screens. The plasma screens used in... flicker. Given these reasons, we believe that the most reliable behavior with the least amount of human disturbance resulted from our use of the... Image analysis: To mitigate the likelihood of autocorrelation, we used the two usable (i.e., non-blurry) images that were

  17. Just add water: Accuracy of analysis of diluted human milk samples using mid-infrared spectroscopy.

    PubMed

    Smith, R W; Adamkin, D H; Farris, A; Radmacher, P G

    2017-01-01

    To determine the maximum dilution of human milk (HM) that yields reliable results for protein, fat and lactose when analyzed by mid-infrared spectroscopy. De-identified samples of frozen HM were obtained. Milk was thawed and warmed (40°C) prior to analysis. Undiluted (native) HM was analyzed by mid-infrared spectroscopy for macronutrient composition: total protein (P), fat (F), carbohydrate (C); Energy (E) was calculated from the macronutrient results. Subsequent analyses were done with 1 : 2, 1 : 3, 1 : 5 and 1 : 10 dilutions of each sample with distilled water. Additional samples were sent to a certified lab for external validation. Quantitatively, F and P showed statistically significant but clinically non-critical differences in 1 : 2 and 1 : 3 dilutions. Differences at higher dilutions were statistically significant and deviated from native values enough to render those dilutions unreliable. External validation studies also showed statistically significant but clinically unimportant differences at 1 : 2 and 1 : 3 dilutions. The Calais Human Milk Analyzer can be used with HM samples diluted 1 : 2 and 1 : 3 and return results within 5% of values from undiluted HM. At a 1 : 5 or 1 : 10 dilution, however, results vary as much as 10%, especially with P and F. At the 1 : 2 and 1 : 3 dilutions these differences appear to be insignificant in the context of nutritional management. However, the accuracy and reliability of the 1 : 5 and 1 : 10 dilutions are questionable.

  18. Identification of cardiac rhythm features by mathematical analysis of vector fields.

    PubMed

    Fitzgerald, Tamara N; Brooks, Dana H; Triedman, John K

    2005-01-01

    Automated techniques for locating cardiac arrhythmia features are limited, and cardiologists generally rely on isochronal maps to infer patterns in the cardiac activation sequence during an ablation procedure. Velocity vector mapping has been proposed as an alternative method to study cardiac activation in both clinical and research environments. In addition to the visual cues that vector maps can provide, vector fields can be analyzed using mathematical operators such as the divergence and curl. In the current study, conduction features were extracted from velocity vector fields computed from cardiac mapping data. The divergence was used to locate ectopic foci and wavefront collisions, and the curl to identify central obstacles in reentrant circuits. Both operators were applied to simulated rhythms created from a two-dimensional cellular automaton model, to measured data from an in situ experimental canine model, and to complex three-dimensional human cardiac mapping data sets. Analysis of simulated vector fields indicated that the divergence is useful in identifying ectopic foci, with a relatively small number of vectors and with errors of up to 30 degrees in the angle measurements. The curl was useful for identifying central obstacles in reentrant circuits, and the number of velocity vectors needed increased as the rhythm became more complex. The divergence was able to accurately identify canine in situ pacing sites, areas of breakthrough activation, and wavefront collisions. In data from human arrhythmias, the divergence reliably estimated origins of electrical activity and wavefront collisions, but the curl was less reliable at locating central obstacles in reentrant circuits, possibly due to the retrospective nature of data collection. The results indicate that the curl and divergence operators applied to velocity vector maps have the potential to add valuable information in cardiac mapping and can be used to supplement human pattern recognition.
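    The divergence and curl operators described above can be approximated on a regular grid with central differences. In this sketch a purely radial field (an idealized ectopic focus) has positive divergence and zero curl, while a purely rotational field (an idealized reentrant circuit) has the reverse; real cardiac mapping data are irregular and far noisier, so this is only an illustration of the operators themselves:

```python
def div_curl(vx, vy, h=1.0):
    # Central-difference divergence and curl (z-component) of a 2D velocity
    # field sampled on a regular grid; vx[i][j] is the x-velocity at (i*h, j*h).
    n = len(vx)
    div = [[0.0] * n for _ in range(n)]
    curl = [[0.0] * n for _ in range(n)]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            div[i][j] = (vx[i + 1][j] - vx[i - 1][j]) / (2 * h) + \
                        (vy[i][j + 1] - vy[i][j - 1]) / (2 * h)
            curl[i][j] = (vy[i + 1][j] - vy[i - 1][j]) / (2 * h) - \
                         (vx[i][j + 1] - vx[i][j - 1]) / (2 * h)
    return div, curl

n = 5
radial = ([[float(i) for j in range(n)] for i in range(n)],       # v = (x, y)
          [[float(j) for j in range(n)] for i in range(n)])
rotational = ([[-float(j) for j in range(n)] for i in range(n)],  # v = (-y, x)
              [[float(i) for j in range(n)] for i in range(n)])
print(div_curl(*radial)[0][2][2], div_curl(*radial)[1][2][2])          # 2.0 0.0
print(div_curl(*rotational)[0][2][2], div_curl(*rotational)[1][2][2])  # 0.0 2.0
```

    Peaks of positive divergence flag candidate foci or breakthrough sites, and large-magnitude curl flags candidate central obstacles, mirroring the roles the two operators play in the study.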

  19. Developing Reliable Life Support for Mars

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2017-01-01

    A human mission to Mars will require highly reliable life support systems. Mars life support systems may recycle water and oxygen using systems similar to those on the International Space Station (ISS). However, achieving sufficient reliability is less difficult for ISS than it will be for Mars. If an ISS system has a serious failure, it is possible to provide spare parts, to directly supply water or oxygen, or, if necessary, to bring the crew back to Earth. Life support for Mars must be designed, tested, and improved as needed to achieve high demonstrated reliability. A quantitative reliability goal should be established and used to guide development. The designers should select reliable components and minimize interface and integration problems. In theory a system can achieve the component-limited reliability, but testing often reveals unexpected failures due to design mistakes or flawed components. Testing should extend long enough to detect any unexpected failure modes and to verify the expected reliability. Iterated redesign and retest may be required to achieve the reliability goal. If the reliability is less than required, it may be improved by providing spare components or redundant systems. The number of spares required to achieve a given reliability goal depends on the component failure rate. If the failure rate is underestimated, the number of spares will be insufficient and the system may fail. If the design is likely to have undiscovered design or component problems, it is advisable to use dissimilar redundancy, even though this multiplies the design and development cost. In the ideal case, a human-tended closed-system operational test should be conducted to gain confidence in operations, maintenance, and repair. The difficulty of achieving high reliability in unproven complex systems may require the use of simpler, more mature, intrinsically higher-reliability systems. 
The limitations of budget, schedule, and technology may suggest accepting lower and less certain expected reliability. A plan to develop reliable life support is needed to achieve the best possible reliability.
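
    The spares calculation sketched in the abstract can be illustrated with a Poisson failure model: the number of failures over a mission is Poisson-distributed, and the spare count must cover that distribution up to the reliability goal. The failure rate, mission length, and goal below are illustrative assumptions, not values from the paper:

```python
from math import exp, factorial

def spares_needed(failure_rate, mission_hours, reliability_goal):
    """Smallest spare count n such that P(failures <= n) >= goal,
    assuming failures follow a Poisson process with the given rate."""
    mean = failure_rate * mission_hours
    cumulative, n = 0.0, 0
    while True:
        cumulative += exp(-mean) * mean ** n / factorial(n)  # Poisson pmf term
        if cumulative >= reliability_goal:
            return n
        n += 1

# A component failing every ~10,000 h on a ~26,000 h mission, 99% goal:
print(spares_needed(1e-4, 26000, 0.99))
# Underestimating the rate by half would leave this count insufficient:
print(spares_needed(2e-4, 26000, 0.99))
```

Doubling the assumed failure rate raises the required spare count, which is the paper's point about underestimated rates.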

  20. Zebrafish tracking using convolutional neural networks.

    PubMed

    Xu, Zhiping; Cheng, Xi En

    2017-02-17

    Keeping identity for a long term after occlusion is still an open problem in the video tracking of zebrafish-like model animals, and accurate animal trajectories are the foundation of behaviour analysis. We utilize the highly accurate object recognition capability of a convolutional neural network (CNN) to distinguish fish of the same congener, even though these animals are indistinguishable to the human eye. We used data augmentation and an iterative CNN training method to optimize the accuracy for our classification task, achieving surprisingly accurate trajectories for groups of zebrafish of different sizes and ages over different time spans. This work will make further behaviour analysis more reliable.
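
    The data-augmentation step mentioned above can be sketched minimally as random flips, rotations, and brightness jitter applied to image patches; the patch size and the particular transforms are illustrative assumptions, not details from the paper:

```python
import numpy as np

def augment(patch, rng):
    """Randomly flip, rotate (90-degree steps), and jitter brightness --
    a minimal stand-in for a CNN training augmentation pipeline."""
    if rng.random() < 0.5:
        patch = np.fliplr(patch)                       # horizontal flip
    patch = np.rot90(patch, k=rng.integers(0, 4))      # 0/90/180/270 degrees
    patch = np.clip(patch * rng.uniform(0.8, 1.2), 0.0, 1.0)  # brightness
    return patch

rng = np.random.default_rng(0)
patch = rng.random((32, 32))                           # a grayscale fish patch
batch = np.stack([augment(patch, rng) for _ in range(8)])
print(batch.shape)
```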

  1. Zebrafish tracking using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Xu, Zhiping; Cheng, Xi En

    2017-02-01

    Keeping identity for a long term after occlusion is still an open problem in the video tracking of zebrafish-like model animals, and accurate animal trajectories are the foundation of behaviour analysis. We utilize the highly accurate object recognition capability of a convolutional neural network (CNN) to distinguish fish of the same congener, even though these animals are indistinguishable to the human eye. We used data augmentation and an iterative CNN training method to optimize the accuracy for our classification task, achieving surprisingly accurate trajectories for groups of zebrafish of different sizes and ages over different time spans. This work will make further behaviour analysis more reliable.

  2. Integrating Reliability Analysis with a Performance Tool

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Palumbo, Daniel L.; Ulrey, Michael

    1995-01-01

    A large number of commercial simulation tools support performance-oriented studies of complex computer and communication systems. Reliability of these systems, when desired, must be obtained by remodeling the system in a different tool. This has obvious drawbacks: (1) substantial extra effort is required to create the reliability model; (2) through modeling error the reliability model may not reflect precisely the same system as the performance model; (3) as the performance model evolves, one must continuously reevaluate the validity of assumptions made in that model. In this paper we describe an approach, and a tool that implements this approach, for integrating a reliability analysis engine into a production-quality simulation-based performance modeling tool, and for modeling within such an integrated tool. The integrated tool allows one to use the same modeling formalisms to conduct both performance and reliability studies. We describe how the reliability analysis engine is integrated into the performance tool, describe the extensions made to the performance tool to support the reliability analysis, and consider the tool's performance.

  3. The impact of Lean bundles on hospital performance: does size matter?

    PubMed

    Al-Hyari, Khalil; Abu Hammour, Sewar; Abu Zaid, Mohammad Khair Saleem; Haffar, Mohamed

    2016-10-10

    Purpose The purpose of this paper is to study the effect of implementing Lean bundles on hospital performance in private hospitals in Jordan and to evaluate how far organization size affects the relationship between Lean bundle implementation and hospital performance. Design/methodology/approach The research uses quantitative methods (descriptive and hypothesis testing). Three statistical techniques were adopted to analyse the data: structural equation modeling and multi-group analysis were used to examine the research hypotheses and to perform the required statistical analysis of the survey data, while reliability analysis and confirmatory factor analysis were used to test construct validity, reliability and measurement loadings. Findings Lean bundles have been identified as an effective approach that can dramatically improve the organizational performance of private hospitals in Jordan. The main Lean bundles - just in time, human resource management, and total quality management - are applicable to large, small and medium hospitals without significant size-dependent differences in benefits. Originality/value To the researchers' best knowledge, this is the first research to study the impact of Lean bundle implementation in the healthcare sector in Jordan. It also makes a significant contribution toward raising awareness of Lean bundles among healthcare decision makers.
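
    The reliability analysis step in such survey studies is commonly Cronbach's alpha, the classical internal-consistency coefficient; a minimal sketch on synthetic questionnaire data (the item count, sample size, and noise level are illustrative assumptions, not figures from the study):

```python
import numpy as np

def cronbach_alpha(items):
    """items: (respondents, questionnaire items) matrix.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 1))                     # shared underlying trait
responses = latent + 0.5 * rng.normal(size=(200, 4))   # 4 correlated items
print(round(cronbach_alpha(responses), 2))
```

With four items loading on one trait and modest noise, alpha lands comfortably above the conventional 0.7 acceptability threshold.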

  4. The PAXgene® Tissue System Preserves Phosphoproteins in Human Tissue Specimens and Enables Comprehensive Protein Biomarker Research

    PubMed Central

    Gündisch, Sibylle; Schott, Christina; Wolff, Claudia; Tran, Kai; Beese, Christian; Viertler, Christian; Zatloukal, Kurt; Becker, Karl-Friedrich

    2013-01-01

    Precise quantitation of protein biomarkers in clinical tissue specimens is a prerequisite for accurate and effective diagnosis, prognosis, and personalized medicine. Although progress is being made, protein analysis from formalin-fixed and paraffin-embedded tissues is still challenging. In previous reports, we showed that the novel formalin-free tissue preservation technology, the PAXgene Tissue System, allows the extraction of intact and immunoreactive proteins from PAXgene-fixed and paraffin-embedded (PFPE) tissues. In the current study, we focused on the analysis of phosphoproteins and the applicability of two-dimensional gel electrophoresis (2D-PAGE) and enzyme-linked immunosorbent assay (ELISA) to the analysis of a variety of malignant and non-malignant human tissues. Using western blot analysis, we found that phosphoproteins are quantitatively preserved in PFPE tissues, and signal intensities are comparable to that in paired, frozen tissues. Furthermore, proteins extracted from PFPE samples are suitable for 2D-PAGE and can be quantified by ELISA specific for denatured proteins. In summary, the PAXgene Tissue System reliably preserves phosphoproteins in human tissue samples, even after prolonged fixation or stabilization times, and is compatible with methods for protein analysis such as 2D-PAGE and ELISA. We conclude that the PAXgene Tissue System has the potential to serve as a versatile tissue fixative for modern pathology. PMID:23555997

  5. Social Information Is Integrated into Value and Confidence Judgments According to Its Reliability.

    PubMed

    De Martino, Benedetto; Bobadilla-Suarez, Sebastian; Nouguchi, Takao; Sharot, Tali; Love, Bradley C

    2017-06-21

    How much we like something, whether it be a bottle of wine or a new film, is affected by the opinions of others. However, the social information that we receive can be contradictory and vary in its reliability. Here, we tested whether the brain incorporates these statistics when judging value and confidence. Participants provided value judgments about consumer goods in the presence of online reviews. We found that participants updated their initial value and confidence judgments in a Bayesian fashion, taking into account both the uncertainty of their initial beliefs and the reliability of the social information. Activity in dorsomedial prefrontal cortex tracked the degree of belief update. Analogous to how lower-level perceptual information is integrated, we found that the human brain integrates social information according to its reliability when judging value and confidence. SIGNIFICANCE STATEMENT The field of perceptual decision making has shown that the sensory system integrates different sources of information according to their respective reliability, as predicted by a Bayesian inference scheme. In this work, we hypothesized that a similar coding scheme is implemented by the human brain to process social signals and guide complex, value-based decisions. We provide experimental evidence that the human prefrontal cortex's activity is consistent with a Bayesian computation that integrates social information that differs in reliability and that this integration affects the neural representation of value and confidence. Copyright © 2017 De Martino et al.
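
    The Bayesian, reliability-weighted update the authors describe follows standard precision weighting: each source of information is weighted by the inverse of its variance. A minimal numeric sketch (the means and variances below are illustrative, not the study's stimuli):

```python
def bayes_update(prior_mean, prior_var, social_mean, social_var):
    """Precision-weighted combination of a private value estimate with
    social information, as in Bayesian cue integration."""
    w_prior = 1.0 / prior_var          # precision of initial belief
    w_social = 1.0 / social_var        # precision (reliability) of reviews
    post_mean = (w_prior * prior_mean + w_social * social_mean) / (w_prior + w_social)
    post_var = 1.0 / (w_prior + w_social)   # confidence rises after integration
    return post_mean, post_var

# Reliable reviews (low variance) pull the value estimate further:
print(bayes_update(5.0, 1.0, 8.0, 0.25))
print(bayes_update(5.0, 1.0, 8.0, 4.0))
```

The same social opinion moves the judgment much more when it is reliable, and the posterior variance (inverse confidence) shrinks in both cases.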

  6. BlobContours: adapting Blobworld for supervised color- and texture-based image segmentation

    NASA Astrophysics Data System (ADS)

    Vogel, Thomas; Nguyen, Dinh Quyen; Dittmann, Jana

    2006-01-01

    Extracting features is the first and one of the most crucial steps in the image retrieval process. While the color and texture features of digital images can be extracted rather easily, the shape and layout features depend on reliable image segmentation. Unsupervised image segmentation, often used in image analysis, works on a merely syntactical basis: what an unsupervised segmentation algorithm can segment is only regions, not objects. To obtain high-level objects, which is desirable in image retrieval, human assistance is needed. Supervised image segmentation schemes can improve the reliability of segmentation and segmentation refinement. In this paper we propose a novel interactive image segmentation technique that combines the reliability of a human expert with the precision of automated image segmentation. The iterative procedure can be considered a variation on the Blobworld algorithm introduced by Carson et al. from the EECS Department, University of California, Berkeley. Starting with an initial segmentation as provided by the Blobworld framework, our algorithm, namely BlobContours, gradually updates it by recalculating every blob, based on the original features and the updated number of Gaussians. Since the original algorithm was hardly designed for interactive processing, we had to consider additional requirements for realizing a supervised segmentation scheme on the basis of Blobworld. Increasing the transparency of the algorithm by applying user-controlled iterative segmentation, providing different types of visualization for displaying the segmented image, and decreasing the computational time of segmentation are three major requirements which are discussed in detail.

  7. Preliminary Analysis of LORAN-C System Reliability for Civil Aviation.

    DTIC Science & Technology

    1981-09-01

    overview of the analysis technique. Section 3 describes the computerized LORAN-C coverage model which is used extensively in the reliability analysis... Xth Plenary Assembly, Geneva, 1963, published by International Telecommunications Union. Braff, R., Computer Program to Calculate a Markov Chain Reliability Model, unpublished work, MITRE Corporation. Preliminary Analysis of LORAN-C System Reliability, Program Engineering & Maintenance Service, Washington, D.C.

  8. Reliable and energy-efficient communications for wireless biomedical implant systems.

    PubMed

    Ntouni, Georgia D; Lioumpas, Athanasios S; Nikita, Konstantina S

    2014-11-01

    Implant devices are used to measure biological parameters and transmit their results to remote off-body devices. As implants are characterized by strict requirements on size, reliability, and power consumption, applying the concept of cooperative communications to wireless body area networks offers several benefits. In this paper, we aim to minimize the power consumption of the implant device by utilizing on-body wearable devices, while providing the necessary reliability in terms of outage probability and bit error rate. Taking into account realistic power considerations and wireless propagation environments based on the IEEE P802.15 channel model, an exact theoretical analysis is conducted for evaluating several communication scenarios with respect to the position of the wearable device and the motion of the human body. The derived closed-form expressions are employed toward minimizing the required transmission power, subject to a minimum quality-of-service requirement. In this way, the complexity and power consumption are transferred from the implant device to the on-body relay, which is an efficient approach since the relays can be easily replaced, in contrast to the in-body implants.
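
    The outage-minimization idea can be illustrated under a simple Rayleigh-fading assumption, where outage probability has a textbook closed form that can be inverted for the minimum average SNR (and hence transmit power) meeting a QoS target. This is a simplified stand-in, not the paper's exact IEEE P802.15 channel analysis:

```python
from math import exp, log

def outage_probability(avg_snr, threshold_snr):
    """Rayleigh-fading outage: P(instantaneous SNR < threshold)."""
    return 1.0 - exp(-threshold_snr / avg_snr)

def min_avg_snr(threshold_snr, max_outage):
    """Smallest average SNR (proportional to transmit power) that keeps
    the outage probability at or below the QoS target."""
    return -threshold_snr / log(1.0 - max_outage)

# Target: at most 1% outage for a 10-dB-equivalent SNR threshold of 10.
target = min_avg_snr(threshold_snr=10.0, max_outage=0.01)
print(outage_probability(target, 10.0) <= 0.01 + 1e-12)
```

Any extra margin on the relay side lowers the average SNR the implant itself must provide, which is the power-saving mechanism described above.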

  9. A Z-number-based decision making procedure with ranking fuzzy numbers method

    NASA Astrophysics Data System (ADS)

    Mohamad, Daud; Shaharani, Saidatull Akma; Kamis, Nor Hanimah

    2014-12-01

    The theory of fuzzy sets has been in the limelight of various applications in decision making problems due to its usefulness in portraying human perception and subjectivity. Generally, the evaluation in the decision making process is represented in the form of linguistic terms and the calculation is performed using fuzzy numbers. In 2011, Zadeh extended this concept by presenting the idea of the Z-number, a 2-tuple of fuzzy numbers that describes the restriction and the reliability of the evaluation. The element of reliability in the evaluation is essential as it will affect the final result. Since this concept is still relatively new, available methods that incorporate reliability for solving decision making problems are still scarce. In this paper, a decision making procedure based on Z-numbers is proposed. Due to the limitation of its basic properties, Z-numbers are first transformed to fuzzy numbers for simpler calculation. A fuzzy number ranking method is then used to prioritize the alternatives. A risk analysis problem is presented to illustrate the effectiveness of the proposed procedure.
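
    The Z-number-to-fuzzy-number transformation can be sketched with triangular fuzzy numbers using the published conversion of Kang et al. (2012): fold the reliability part in as a weight equal to its centroid, scaling the restriction's support by the square root of that weight, then rank by centroid. The specific Z-numbers below are illustrative, and the paper's own ranking method may differ:

```python
def centroid(tfn):
    """Centroid (defuzzified value) of a triangular fuzzy number (a, b, c)."""
    a, b, c = tfn
    return (a + b + c) / 3.0

def z_to_fuzzy(restriction, reliability):
    """Convert Z = (restriction, reliability) to an ordinary fuzzy number:
    alpha = centroid(reliability); scale the support by sqrt(alpha)
    (the conversion of Kang et al., 2012)."""
    alpha = centroid(reliability)
    return tuple(alpha ** 0.5 * x for x in restriction)

z1 = z_to_fuzzy((0.4, 0.5, 0.6), (0.7, 0.8, 0.9))   # "medium, quite sure"
z2 = z_to_fuzzy((0.4, 0.5, 0.6), (0.2, 0.3, 0.4))   # same value, less reliable
print(centroid(z1) > centroid(z2))
```

Identical restrictions rank differently once reliability is folded in, which is exactly why the reliability component matters for the final result.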

  10. Comparative Analysis of the Reliability of Steel Structure with Pinned and Rigid Nodes Subjected to Fire

    NASA Astrophysics Data System (ADS)

    Kubicka, Katarzyna; Radoń, Urszula; Szaniec, Waldemar; Pawlak, Urszula

    2017-10-01

    The paper concerns the reliability analysis of steel structures subjected to high temperatures of fire gases. Two types of spatial structures were analysed, namely with pinned and with rigid nodes. The fire analysis was carried out according to the prescriptions of Eurocode. The static-strength analysis was conducted using the finite element method (FEM). The MES3D program, developed by Szaniec (Kielce University of Technology, Poland), was used for this purpose. The results obtained from MES3D made it possible to carry out the reliability analysis using the Numpress Explore program that was developed at the Institute of Fundamental Technological Research of the Polish Academy of Sciences [9]. The measure of structural reliability used is the Hasofer-Lind reliability index (β). The reliability analysis was carried out using approximation (FORM, SORM) and simulation (Importance Sampling, Monte Carlo) methods. As the fire progresses, the value of the reliability index decreases. The analysis conducted for the study made it possible to evaluate the impact of node types on those changes. In real structures, it is often difficult to define types of nodes correctly, so some simplifications are made. The presented analysis contributes to the recognition of the consequences of such assumptions for the safety of structures subjected to fire.
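
    The Hasofer-Lind index can be illustrated for a linear limit state g = R - S with normal resistance and load, where the index has a closed form and FORM is exact, and cross-checked against a Monte Carlo estimate. The means and standard deviations below are illustrative, not values from the study:

```python
import numpy as np
from statistics import NormalDist

# Limit state g = R - S (resistance minus load effect), both normal.
mu_R, sd_R, mu_S, sd_S = 30.0, 3.0, 20.0, 4.0
beta_exact = (mu_R - mu_S) / (sd_R**2 + sd_S**2) ** 0.5   # Hasofer-Lind index

# Monte Carlo check: failure probability -> index via the inverse normal CDF.
rng = np.random.default_rng(2)
n = 200_000
g = rng.normal(mu_R, sd_R, n) - rng.normal(mu_S, sd_S, n)
pf = (g < 0).mean()
beta_mc = -NormalDist().inv_cdf(pf)
print(round(beta_exact, 2), round(beta_mc, 2))
```

As fire weakens the structure, mu_R drops (and sd_R typically grows), so beta falls, which is the trend the paper reports.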

  11. Thermal Protection for Mars Sample Return Earth Entry Vehicle: A Grand Challenge for Design Methodology and Reliability Verification

    NASA Technical Reports Server (NTRS)

    Venkatapathy, Ethiraj; Gage, Peter; Wright, Michael J.

    2017-01-01

    Mars Sample Return is our Grand Challenge for the coming decade. TPS (Thermal Protection System) nominal performance is not the key challenge. The main difficulty for designers is the need to verify unprecedented reliability for the entry system: current guidelines for prevention of backward contamination require that the probability of spores larger than 1 micron diameter escaping into the Earth environment be lower than 1 in 1,000,000 for the entire system, and the allocation to TPS would be more stringent than that. For reference, the reliability allocation for Orion TPS is closer to 1 in 1,000, and the demonstrated reliability for previous human Earth return systems was closer to 1 in 100. Improving reliability by more than 3 orders of magnitude is a grand challenge indeed. The TPS community must embrace the possibility of new architectures that are focused on reliability above thermal performance and mass efficiency. MSR (Mars Sample Return) EEV (Earth Entry Vehicle) will be hit with MMOD (Micrometeoroid and Orbital Debris) prior to reentry. A chute-less aero-shell design which allows for a self-righting shape was baselined in prior MSR studies, with the assumption that a passive system will maximize EEV robustness. Hence the aero-shell along with the TPS has to take ground impact and not break apart. System verification will require testing to establish ablative performance and thermal failure but also testing of damage from MMOD, and structural performance at ground impact. Mission requirements will demand analysis, testing and verification that are focused on establishing reliability of the design. In this proposed talk, we will focus on the grand challenge of MSR EEV TPS and the need for innovative approaches to address challenges in modeling, testing, manufacturing and verification.

  12. Comparative measurement of collagen bundle orientation by Fourier analysis and semiquantitative evaluation: reliability and agreement in Masson's trichrome, Picrosirius red and confocal microscopy techniques.

    PubMed

    Marcos-Garcés, V; Harvat, M; Molina Aguilar, P; Ferrández Izquierdo, A; Ruiz-Saurí, A

    2017-08-01

    Measurement of collagen bundle orientation in histopathological samples is a widely used and useful technique in many research and clinical scenarios. Fourier analysis is the preferred method for performing this measurement, but the most appropriate staining and microscopy technique remains unclear. Some authors advocate the use of Haematoxylin-Eosin (H&E) and confocal microscopy, but there are no studies comparing this technique with other classical collagen stainings. In our study, 46 human skin samples were collected, processed for histological analysis and stained with Masson's trichrome, Picrosirius red and H&E. Five microphotographs of the reticular dermis were taken with a 200× magnification with light microscopy, polarized microscopy and confocal microscopy, respectively. Two independent observers measured collagen bundle orientation with semiautomated Fourier analysis with the Image-Pro Plus 7.0 software and three independent observers performed a semiquantitative evaluation of the same parameter. The average orientation for each case was calculated with the values of the five pictures. We analyzed the interrater reliability, the consistency between Fourier analysis and average semiquantitative evaluation and the consistency between measurements in Masson's trichrome, Picrosirius red and H&E-confocal. Statistical analysis for reliability and agreement was performed with the SPSS 22.0 software and consisted of intraclass correlation coefficient (ICC), Bland-Altman plots and limits of agreement and coefficient of variation. Interrater reliability was almost perfect (ICC > 0.8) with all three histological and microscopy techniques and always superior in Fourier analysis than in average semiquantitative evaluation. 
Measurements were consistent between Fourier analysis by one observer and average semiquantitative evaluation by three observers, with an almost perfect agreement with Masson's trichrome and Picrosirius red techniques (ICC > 0.8) and a strong agreement with H&E-confocal (0.7 < ICC < 0.8). Comparison of measurements between the three techniques for the same observer showed an almost perfect agreement (ICC > 0.8), better with Fourier analysis than with semiquantitative evaluation (single and average). These results in nonpathological skin samples were also confirmed in a preliminary analysis in eight scleroderma skin samples. Our results show that Masson's trichrome and Picrosirius red are consistent with H&E-confocal for measuring collagen bundle orientation in histological samples and could thus be used indistinctly for this purpose. Fourier analysis is superior to average semiquantitative evaluation and should keep being used as the preferred method. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.
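
    Of the agreement statistics named in the study, the Bland-Altman limits of agreement are the simplest to sketch: the bias is the mean inter-method difference and the 95% limits are bias ± 1.96 standard deviations. The orientation values below are invented for illustration, not data from the paper:

```python
import numpy as np

def bland_altman(m1, m2):
    """Bias and 95% limits of agreement between two measurement methods."""
    diff = np.asarray(m1, float) - np.asarray(m2, float)
    bias = diff.mean()
    spread = 1.96 * diff.std(ddof=1)
    return bias, bias - spread, bias + spread

# Hypothetical collagen bundle orientations (degrees) from two methods:
fourier = np.array([12.0, 35.0, 48.0, 20.0, 61.0, 44.0])
semiquant = np.array([14.0, 33.0, 50.0, 22.0, 58.0, 47.0])
bias, lo, hi = bland_altman(fourier, semiquant)
print(round(bias, 2), round(lo, 2), round(hi, 2))
```

Good agreement shows up as a bias near zero with narrow limits; systematic disagreement shifts the bias away from zero.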

  13. Dynamic decision-making for reliability and maintenance analysis of manufacturing systems based on failure effects

    NASA Astrophysics Data System (ADS)

    Zhang, Ding; Zhang, Yingjie

    2017-09-01

    A framework for reliability and maintenance analysis of job shop manufacturing systems is proposed in this paper. An efficient preventive maintenance (PM) policy based on failure effects analysis (FEA) is proposed. Subsequently, reliability evaluation and component importance measures based on FEA are performed under the PM policy. A job shop manufacturing system is used to validate the reliability evaluation and dynamic maintenance policy. The obtained results are compared with existing methods and the effectiveness is validated. Issues that are often only vaguely understood during manufacturing system reliability analysis, such as network modelling, vulnerability identification, evaluation criteria for repairable systems, and the PM policy, are elaborated. This framework can support reliability optimisation and the rational allocation of maintenance resources in job shop manufacturing systems.

  14. Modeling reality

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1990-01-01

    Although powerful computers have allowed complex physical and manmade hardware systems to be modeled successfully, we have encountered persistent problems with the reliability of computer models for systems involving human learning, human action, and human organizations. This is not a misfortune; unlike physical and manmade systems, human systems do not operate under a fixed set of laws. The rules governing the actions allowable in the system can be changed without warning at any moment, and can evolve over time. That the governing laws are inherently unpredictable raises serious questions about the reliability of models when applied to human situations. In these domains, computers are better used, not for prediction and planning, but for aiding humans. Examples are systems that help humans speculate about possible futures, offer advice about possible actions in a domain, systems that gather information from the networks, and systems that track and support work flows in organizations.

  15. Computational methods for efficient structural reliability and reliability sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.

    1993-01-01

    This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
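
    The core importance-sampling idea can be conveyed with a plain, non-adaptive mean-shifted sampler for a rare tail event; the adaptive, incremental updating of the sampling domain described in the paper is omitted from this sketch:

```python
import numpy as np
from statistics import NormalDist

# Rare event: standard-normal X exceeding 4. Crude Monte Carlo would need
# millions of samples to observe even a handful of failures.
threshold = 4.0
rng = np.random.default_rng(3)
n = 20_000

# Importance density centred on the failure boundary (mean = threshold).
x = rng.normal(threshold, 1.0, n)
# Likelihood ratio phi(x)/q(x) for two unit-variance Gaussians: exp(8 - 4x).
weights = np.where(x > threshold,
                   np.exp(-0.5 * x**2 + 0.5 * (x - threshold)**2),
                   0.0)
pf_is = weights.mean()
pf_exact = 1.0 - NormalDist().cdf(threshold)
print(f"{pf_is:.2e} vs {pf_exact:.2e}")
```

With only 20,000 samples the estimate lands within a few percent of the true 3.2e-5 failure probability; the adaptive variant refines where the sampling density is centred as failure points accumulate.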

  16. Analysis of human plasma lipids by using comprehensive two-dimensional gas chromatography with dual detection and with the support of high-resolution time-of-flight mass spectrometry for structural elucidation.

    PubMed

    Salivo, Simona; Beccaria, Marco; Sullini, Giuseppe; Tranchida, Peter Q; Dugo, Paola; Mondello, Luigi

    2015-01-01

    The main focus of the present research is the analysis of the unsaponifiable lipid fraction of human plasma by using data derived from comprehensive two-dimensional gas chromatography with dual quadrupole mass spectrometry and flame ionization detection. This approach enabled us to attain both mass spectral information and analyte percentage data. Furthermore, gas chromatography coupled with high-resolution time-of-flight mass spectrometry was used to increase the reliability of identification of several unsaponifiable lipid constituents. The synergism between both the high-resolution gas chromatography and mass spectrometry processes enabled us to attain a more in-depth knowledge of the unsaponifiable fraction of human plasma. Additionally, information was attained on the fatty acid and triacylglycerol composition of the plasma samples, subjected to investigation by using comprehensive two-dimensional gas chromatography with dual quadrupole mass spectrometry and flame ionization detection and high-performance liquid chromatography with atmospheric pressure chemical ionization quadrupole mass spectrometry, respectively. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. A new use for long-term frozen brain tissue: Golgi impregnation

    PubMed Central

    Melendez-Ferro, Miguel; Perez-Costas, Emma; Roberts, Rosalinda C.

    2009-01-01

    The study of dendritic spine shape and number has become a standard in the analysis of synaptic transmission anomalies, since a considerable number of neuropsychiatric and neurological diseases have their foundation in alterations of these structures. One of the best ways to study possible alterations of dendritic spines is the use of Golgi impregnation. Although the Golgi method usually implies the use of fresh or fixed tissue, here we report the use of Golgi-Cox for the staining of human and animal brain tissue kept frozen for long periods of time. We successfully applied the Golgi-Cox method to human brain tissue stored for up to 15 years in a freezer. The technique produced reliable and reproducible impregnation of dendrites and dendritic spines in different cortical areas. We also applied the same technique to rat brain frozen for up to one year, obtaining the same satisfactory results. The fact that Golgi-Cox can be successfully applied to this type of tissue adds new value to the hundreds of frozen human and animal brains kept in laboratory freezers that would otherwise not be useful for anything else. Researchers other than neuroanatomists, e.g. in fields such as biochemistry and molecular biology, can also benefit from a simple and reliable technique that can be applied to tissue left over from their primary experiments. PMID:18789970

  18. Implementation of a Personnel Reliability Program as a Facilitator of Biosafety and Biosecurity Culture in BSL-3 and BSL-4 Laboratories

    PubMed Central

    Weaver, Patrick; Fitch, J. Patrick; Johnson, Barbara; Pearl, R. Marene

    2013-01-01

    In late 2010, the National Biodefense Analysis and Countermeasures Center (NBACC) implemented a Personnel Reliability Program (PRP) with the goal of enabling active participation by its staff to drive and improve the biosafety and biosecurity culture at the organization. A philosophical keystone for accomplishment of NBACC's scientific mission is simultaneous excellence in operations and outreach. Its personnel reliability program builds on this approach to: (1) enable and support a culture of responsibility based on human performance principles, (2) maintain compliance with regulations, and (3) address the risk associated with the insider threat. Recently, the Code of Federal Regulations (CFR) governing use and possession of biological select agents and toxins (BSAT) was amended to require a pre-access suitability assessment and ongoing evaluation for staff accessing Tier 1 BSAT. These 2 new requirements are in addition to the already required Federal Bureau of Investigation (FBI) Security Risk Assessment (SRA). Two years prior to the release of these guidelines, NBACC developed its PRP to supplement the SRA requirement as a means to empower personnel and foster an operational environment where any and all work with BSAT is conducted in a safe, secure, and reliable manner. PMID:23745523

  19. Maximally reliable spatial filtering of steady state visual evoked potentials.

    PubMed

    Dmochowski, Jacek P; Greaves, Alex S; Norcia, Anthony M

    2015-04-01

    Due to their high signal-to-noise ratio (SNR) and robustness to artifacts, steady state visual evoked potentials (SSVEPs) are a popular technique for studying neural processing in the human visual system. SSVEPs are conventionally analyzed at individual electrodes or linear combinations of electrodes which maximize some variant of the SNR. Here we exploit the fundamental assumption of evoked responses--reproducibility across trials--to develop a technique that extracts a small number of high SNR, maximally reliable SSVEP components. This novel spatial filtering method operates on an array of Fourier coefficients and projects the data into a low-dimensional space in which the trial-to-trial spectral covariance is maximized. When applied to two sample data sets, the resulting technique recovers physiologically plausible components (i.e., the recovered topographies match the lead fields of the underlying sources) while drastically reducing the dimensionality of the data (i.e., more than 90% of the trial-to-trial reliability is captured in the first four components). Moreover, the proposed technique achieves a higher SNR than that of the single-best electrode or the Principal Components. We provide a freely-available MATLAB implementation of the proposed technique, herein termed "Reliable Components Analysis". Copyright © 2015 Elsevier Inc. All rights reserved.
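
    The reliability-maximizing spatial filter can be sketched as a generalized eigenvalue problem on synthetic multichannel data: maximize trial-reproducible covariance against pooled single-trial covariance. The simulation parameters and this particular between/within covariance construction are simplifying assumptions, not the authors' exact Fourier-domain formulation:

```python
import numpy as np

rng = np.random.default_rng(4)
trials, channels, samples = 20, 8, 300

# Synthetic SSVEP: one reliable oscillatory source mixed into all channels,
# plus independent sensor noise on every trial.
source = np.sin(2 * np.pi * 7.5 * np.arange(samples) / 250.0)  # 7.5 Hz @ 250 Hz
mixing = rng.normal(size=channels)
data = np.stack([
    np.outer(mixing, source) + rng.normal(scale=2.0, size=(channels, samples))
    for _ in range(trials)
])

# Trial-averaged (reproducible) covariance vs pooled single-trial covariance.
mean_trial = data.mean(axis=0)
r_between = mean_trial @ mean_trial.T / samples
r_within = sum(t @ t.T for t in data) / (trials * samples)

# The most reliable component is the top generalized eigenvector.
evals, evecs = np.linalg.eig(np.linalg.solve(r_within, r_between))
w = np.real(evecs[:, np.argmax(np.real(evals))])
recovered = w @ mean_trial
corr = np.corrcoef(recovered, source)[0, 1]
print(abs(corr) > 0.8)
```

The single filtered component recovers the underlying source far better than any single electrode, mirroring the dimensionality reduction reported above.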

  20. Forecasting infectious disease emergence subject to seasonal forcing.

    PubMed

    Miller, Paige B; O'Dea, Eamon B; Rohani, Pejman; Drake, John M

    2017-09-06

    Despite high vaccination coverage, many childhood infections pose a growing threat to human populations. Accurate disease forecasting would be of tremendous value to public health. Forecasting disease emergence using early warning signals (EWS) is possible in non-seasonal models of infectious diseases. Here, we assessed whether EWS also anticipate disease emergence in seasonal models. We simulated the dynamics of an immunizing infectious pathogen approaching the tipping point to disease endemicity. To explore the effect of seasonality on the reliability of early warning statistics, we varied the amplitude of fluctuations around the average transmission. We proposed and analyzed two new early warning signals based on the wavelet spectrum. We measured the reliability of the early warning signals depending on the strength of their trend preceding the tipping point and then calculated the Area Under the Curve (AUC) statistic. Early warning signals were reliable when disease transmission was subject to seasonal forcing. Wavelet-based early warning signals were as reliable as other conventional early warning signals. We found that removing seasonal trends, prior to analysis, did not improve early warning statistics uniformly. Early warning signals anticipate the onset of critical transitions for infectious diseases which are subject to seasonal forcing. Wavelet-based early warning statistics can also be used to forecast infectious disease.
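
    Measuring EWS reliability by trend strength and AUC can be sketched as follows. The Kendall rank correlation against time scores the pre-transition trend of a warning statistic, and the AUC compares its distribution between emerging and null simulations; the series here are simulated stand-ins, not the paper's epidemic model:

```python
import numpy as np

def kendall_tau(x):
    """Kendall rank correlation of a series against time (trend strength)."""
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n) for j in range(i + 1, n))
    return s / (n * (n - 1) / 2)

def auc(emerging_stats, null_stats):
    """P(the EWS statistic ranks an emerging series above a null series)."""
    pairs = [(a > b) + 0.5 * (a == b) for a in emerging_stats for b in null_stats]
    return sum(pairs) / len(pairs)

rng = np.random.default_rng(5)
# Stand-in warning statistic: it drifts upward approaching the tipping
# point in emerging runs, but stays flat under the null model.
emerging = [kendall_tau(np.linspace(1, 2, 30) * rng.normal(1, 0.1, 30))
            for _ in range(50)]
null = [kendall_tau(rng.normal(1, 0.1, 30)) for _ in range(50)]
print(round(auc(emerging, null), 2))
```

AUC near 1 means the trend statistic reliably separates emergence from noise; AUC near 0.5 means the signal carries no warning.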

  1. Reducing the Risk of Human Space Missions with INTEGRITY

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.; Dillon-Merill, Robin L.; Tri, Terry O.; Henninger, Donald L.

    2003-01-01

    The INTEGRITY Program will design and operate a test bed facility to help prepare for future beyond-LEO missions. The purpose of INTEGRITY is to enable future missions by developing, testing, and demonstrating advanced human space systems. INTEGRITY will also implement and validate advanced management techniques including risk analysis and mitigation. One important way INTEGRITY will help enable future missions is by reducing their risk. A risk analysis of human space missions is important in defining the steps that INTEGRITY should take to mitigate risk. This paper describes how a Probabilistic Risk Assessment (PRA) of human space missions will help support the planning and development of INTEGRITY to maximize its benefits to future missions. PRA is a systematic methodology to decompose the system into subsystems and components, to quantify the failure risk as a function of the design elements and their corresponding probability of failure. PRA provides a quantitative estimate of the probability of failure of the system, including an assessment and display of the degree of uncertainty surrounding the probability. PRA provides a basis for understanding the impacts of decisions that affect safety, reliability, performance, and cost. Risks with both high probability and high impact are identified as top priority. The PRA of human missions beyond Earth orbit will help indicate how the risk of future human space missions can be reduced by integrating and testing systems in INTEGRITY.
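
    The PRA logic described above can be sketched in miniature; the subsystem decomposition, median failure probabilities, and lognormal uncertainty spreads below are assumed for illustration only. Monte Carlo propagation yields both a point estimate and a display of the surrounding uncertainty.

```python
import math, random, statistics

random.seed(2)

# Assumed subsystem failure probabilities (median) with lognormal uncertainty
# (log-sd), standing in for the decomposition a full PRA would produce.
subsystems = {"life_support": (1e-3, 0.5),
              "propulsion":   (5e-4, 0.7),
              "avionics":     (2e-4, 0.4)}

def sample_system_pfail():
    # Series logic: the mission is lost if any subsystem fails.
    p_ok = 1.0
    for median, logsd in subsystems.values():
        p = median * math.exp(random.gauss(0, logsd))
        p_ok *= (1 - p)
    return 1 - p_ok

samples = sorted(sample_system_pfail() for _ in range(10_000))
mean = statistics.fmean(samples)
p5, p95 = samples[500], samples[9_500]
print(f"system P(fail): mean {mean:.2e}, 90% interval [{p5:.2e}, {p95:.2e}]")
```

    The interval, not just the mean, is what supports risk-acceptance decisions: a wide interval flags subsystems whose uncertainty dominates and should be reduced by testing.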

  2. Meta-Analysis of Scale Reliability Using Latent Variable Modeling

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2013-01-01

    A latent variable modeling approach is outlined that can be used for meta-analysis of reliability coefficients of multicomponent measuring instruments. Important limitations of efforts to combine composite reliability findings across multiple studies are initially pointed out. A reliability synthesis procedure is discussed that is based on…

  3. The Impact Analysis of Psychological Reliability of Population Pilot Study for Selection of Particular Reliable Multi-Choice Item Test in Foreign Language Research Work

    ERIC Educational Resources Information Center

    Fazeli, Seyed Hossein

    2010-01-01

    The purpose of research described in the current study is the psychological reliability, its importance, application, and more to investigate on the impact analysis of psychological reliability of population pilot study for selection of particular reliable multi-choice item test in foreign language research work. The population for subject…

  4. Easy, Fast, and Reproducible Quantification of Cholesterol and Other Lipids in Human Plasma by Combined High Resolution MSX and FTMS Analysis

    NASA Astrophysics Data System (ADS)

    Gallego, Sandra F.; Højlund, Kurt; Ejsing, Christer S.

    2018-01-01

    Reliable, cost-effective, and gold-standard absolute quantification of non-esterified cholesterol in human plasma is of paramount importance in clinical lipidomics and for the monitoring of metabolic health. Here, we compared the performance of three mass spectrometric approaches available for direct detection and quantification of cholesterol in extracts of human plasma. These approaches are high resolution full scan Fourier transform mass spectrometry (FTMS) analysis, parallel reaction monitoring (PRM), and novel multiplexed MS/MS (MSX) technology, where fragments from selected precursor ions are detected simultaneously. Evaluating the performance of these approaches in terms of dynamic quantification range, linearity, and analytical precision showed that the MSX-based approach is superior to that of the FTMS and PRM-based approaches. To further show the efficacy of this approach, we devised a simple routine for extensive plasma lipidome characterization using only 8 μL of plasma, using a new commercially available ready-to-spike-in mixture with 14 synthetic lipid standards, and executing a single 6 min sample injection with combined MSX analysis for cholesterol quantification and FTMS analysis for quantification of sterol esters, glycerolipids, glycerophospholipids, and sphingolipids. Using this simple routine afforded reproducible and absolute quantification of 200 lipid species encompassing 13 lipid classes in human plasma samples. Notably, the analysis time of this procedure can be shortened for high throughput-oriented clinical lipidomics studies or extended with more advanced MS ALL technology (Almeida R. et al., J. Am. Soc. Mass Spectrom. 26, 133-148 [1]) to support in-depth structural elucidation of lipid molecules.

  5. Easy, Fast, and Reproducible Quantification of Cholesterol and Other Lipids in Human Plasma by Combined High Resolution MSX and FTMS Analysis.

    PubMed

    Gallego, Sandra F; Højlund, Kurt; Ejsing, Christer S

    2018-01-01

    Reliable, cost-effective, and gold-standard absolute quantification of non-esterified cholesterol in human plasma is of paramount importance in clinical lipidomics and for the monitoring of metabolic health. Here, we compared the performance of three mass spectrometric approaches available for direct detection and quantification of cholesterol in extracts of human plasma. These approaches are high resolution full scan Fourier transform mass spectrometry (FTMS) analysis, parallel reaction monitoring (PRM), and novel multiplexed MS/MS (MSX) technology, where fragments from selected precursor ions are detected simultaneously. Evaluating the performance of these approaches in terms of dynamic quantification range, linearity, and analytical precision showed that the MSX-based approach is superior to that of the FTMS and PRM-based approaches. To further show the efficacy of this approach, we devised a simple routine for extensive plasma lipidome characterization using only 8 μL of plasma, using a new commercially available ready-to-spike-in mixture with 14 synthetic lipid standards, and executing a single 6 min sample injection with combined MSX analysis for cholesterol quantification and FTMS analysis for quantification of sterol esters, glycerolipids, glycerophospholipids, and sphingolipids. Using this simple routine afforded reproducible and absolute quantification of 200 lipid species encompassing 13 lipid classes in human plasma samples. Notably, the analysis time of this procedure can be shortened for high throughput-oriented clinical lipidomics studies or extended with more advanced MS ALL technology (Almeida R. et al., J. Am. Soc. Mass Spectrom. 26, 133-148 [1]) to support in-depth structural elucidation of lipid molecules.

  6. Expediting Combinatorial Data Set Analysis by Combining Human and Algorithmic Analysis.

    PubMed

    Stein, Helge Sören; Jiao, Sally; Ludwig, Alfred

    2017-01-09

    A challenge in combinatorial materials science remains the efficient analysis of X-ray diffraction (XRD) data and its correlation to functional properties. Rapid identification of phase-regions and proper assignment of corresponding crystal structures is necessary to keep pace with the improved methods for synthesizing and characterizing materials libraries. Therefore, a new modular software called htAx (high-throughput analysis of X-ray and functional properties data) is presented that couples human intelligence tasks used for "ground-truth" phase-region identification with subsequent unbiased verification by an algorithm to efficiently analyze which phases are present in a materials library. Identified phases and phase-regions may then be correlated to functional properties in an expedited manner. To prove the functionality of htAx, two previously published XRD benchmark data sets of the materials systems Al-Cr-Fe-O and Ni-Ti-Cu are analyzed by htAx. The analysis of ∼1000 XRD patterns takes less than 1 day with htAx. The proposed method reliably identifies phase-region boundaries and robustly identifies multiphase structures. The method also addresses the problem of identifying regions with previously unpublished crystal structures using a special daisy ternary plot.

  7. A population MRI brain template and analysis tools for the macaque.

    PubMed

    Seidlitz, Jakob; Sponheim, Caleb; Glen, Daniel; Ye, Frank Q; Saleem, Kadharbatcha S; Leopold, David A; Ungerleider, Leslie; Messinger, Adam

    2018-04-15

    The use of standard anatomical templates is common in human neuroimaging, as it facilitates data analysis and comparison across subjects and studies. For non-human primates, previous in vivo templates have lacked sufficient contrast to reliably validate known anatomical brain regions and have not provided tools for automated single-subject processing. Here we present the "National Institute of Mental Health Macaque Template", or NMT for short. The NMT is a high-resolution in vivo MRI template of the average macaque brain generated from 31 subjects, as well as a neuroimaging tool for improved data analysis and visualization. From the NMT volume, we generated maps of tissue segmentation and cortical thickness. Surface reconstructions and transformations to previously published digital brain atlases are also provided. We further provide an analysis pipeline using the NMT that automates and standardizes the time-consuming processes of brain extraction, tissue segmentation, and morphometric feature estimation for anatomical scans of individual subjects. The NMT and associated tools thus provide a common platform for precise single-subject data analysis and for characterizations of neuroimaging results across subjects and studies. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. A Melting Curve-Based Multiplex RT-qPCR Assay for Simultaneous Detection of Four Human Coronaviruses

    PubMed Central

    Wan, Zhenzhou; Zhang, Ya’nan; He, Zhixiang; Liu, Jia; Lan, Ke; Hu, Yihong; Zhang, Chiyu

    2016-01-01

    Human coronaviruses HCoV-OC43, HCoV-229E, HCoV-NL63 and HCoV-HKU1 are common respiratory viruses associated with acute respiratory infection. They have a global distribution. Rapid and accurate diagnosis of HCoV infection is important for the management and treatment of hospitalized patients with HCoV infection. Here, we developed a melting curve-based multiplex RT-qPCR assay for simultaneous detection of the four HCoVs. In the assay, SYTO 9 was used to replace SYBR Green I as the fluorescent dye, and GC-modified primers were designed to improve the melting temperature (Tm) of the specific amplicon. The four HCoVs were clearly distinguished by characteristic melting peaks in melting curve analysis. The detection sensitivity of the assay was 3 × 10² copies for HCoV-OC43, and 3 × 10¹ copies for HCoV-NL63, HCoV-229E and HCoV-HKU1 per 30 μL reaction. Clinical evaluation and sequencing confirmation demonstrated that the assay was specific and reliable. The assay represents a sensitive and reliable method for diagnosis of HCoV infection in clinical samples. PMID:27886052

  9. A Melting Curve-Based Multiplex RT-qPCR Assay for Simultaneous Detection of Four Human Coronaviruses.

    PubMed

    Wan, Zhenzhou; Zhang, Ya'nan; He, Zhixiang; Liu, Jia; Lan, Ke; Hu, Yihong; Zhang, Chiyu

    2016-11-23

    Human coronaviruses HCoV-OC43, HCoV-229E, HCoV-NL63 and HCoV-HKU1 are common respiratory viruses associated with acute respiratory infection. They have a global distribution. Rapid and accurate diagnosis of HCoV infection is important for the management and treatment of hospitalized patients with HCoV infection. Here, we developed a melting curve-based multiplex RT-qPCR assay for simultaneous detection of the four HCoVs. In the assay, SYTO 9 was used to replace SYBR Green I as the fluorescent dye, and GC-modified primers were designed to improve the melting temperature (Tm) of the specific amplicon. The four HCoVs were clearly distinguished by characteristic melting peaks in melting curve analysis. The detection sensitivity of the assay was 3 × 10² copies for HCoV-OC43, and 3 × 10¹ copies for HCoV-NL63, HCoV-229E and HCoV-HKU1 per 30 μL reaction. Clinical evaluation and sequencing confirmation demonstrated that the assay was specific and reliable. The assay represents a sensitive and reliable method for diagnosis of HCoV infection in clinical samples.

  10. Non-Traditional Displays for Mission Monitoring

    NASA Technical Reports Server (NTRS)

    Trujillo, Anna C.; Schutte, Paul C.

    1999-01-01

    Advances in automation capability and reliability have changed the role of humans from operating and controlling processes to simply monitoring them for anomalies. However, humans are traditionally poor monitors of highly reliable systems over time. Thus, humans are assigned a task for which they are ill-equipped. We believe that this has led to the dominance of human error in process control activities such as operating transportation systems (aircraft and trains), monitoring patient health in the medical industry, and controlling plant operations. Research has shown, though, that an automated monitor can assist humans in recognizing and dealing with failures. One possible solution to this predicament is to use a polar-star display that will show deviations from normal states based on parameters that are most indicative of mission health.

  11. Integrated Human-in-the-Loop Ground Testing - Value, History, and the Future

    NASA Technical Reports Server (NTRS)

    Henninger, Donald L.

    2016-01-01

    Systems for very long-duration human missions to Mars will be designed to operate reliably for many years, and many of these systems will never be returned to Earth. The need for high reliability is driven by the requirement for safe functioning of remote, long-duration crewed systems and also by unsympathetic abort scenarios. An abort from a Mars mission could take as long as 450 days to return to Earth. The key to developing a human-in-the-loop architecture is a development process that allows for a logical sequence of validating successful development in a stepwise manner, with assessment of key performance parameters (KPPs) at each step; especially important are KPPs for technologies evaluated in a full systems context with human crews on Earth and on space platforms such as the ISS. This presentation will explore the implications of such an approach to technology development and validation, including the roles of ground and space-based testing necessary to develop a highly reliable system for long-duration human exploration missions. Historical development and systems testing from Mercury to the International Space Station (ISS) to ground testing will be reviewed. Current work as well as recommendations for future work will be described.

  12. A Direct Aqueous Derivatization GC-MS Method for Determining Benzoylecgonine Concentrations in Human Urine.

    PubMed

    Chericoni, Silvio; Stefanelli, Fabio; Da Valle, Ylenia; Giusiani, Mario

    2015-09-01

    A sensitive and reliable method for extraction and quantification of benzoylecgonine (BZE) and cocaine (COC) in urine is presented. Propyl-chloroformate was used as the derivatizing agent and was directly added to the urine sample: the propyl derivative and COC were then recovered by a liquid-liquid extraction procedure. Gas chromatography-mass spectrometry was used to detect the analytes in selected ion monitoring mode. The method proved to be precise for BZE and COC in terms of both intraday and interday analysis, with a coefficient of variation (CV)<6%. Limits of detection (LOD) were 2.7 ng/mL for BZE and 1.4 ng/mL for COC. The calibration curve showed a linear relationship for BZE and COC (r2>0.999 and >0.997, respectively) within the range investigated. The method, applied to thirty authentic samples, proved to be very simple, fast, and reliable, so it can be easily applied in routine analysis for the quantification of BZE and COC in urine samples. © 2015 American Academy of Forensic Sciences.
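
    The linearity (r²) and precision (CV) figures reported above come from standard calculations that can be sketched directly; the calibration points and replicate values below are assumed for illustration, not the paper's data.

```python
import statistics

# Assumed calibration data (concentration in ng/mL vs detector response),
# illustrating the linearity and precision checks reported for the assay.
conc = [5, 10, 25, 50, 100, 200]
resp = [0.051, 0.098, 0.252, 0.495, 1.012, 1.985]

# Ordinary least-squares fit of the calibration line and its r^2.
n = len(conc)
mx, my = sum(conc) / n, sum(resp) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(conc, resp))
         / sum((x - mx) ** 2 for x in conc))
intercept = my - slope * mx
ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(conc, resp))
ss_tot = sum((y - my) ** 2 for y in resp)
r2 = 1 - ss_res / ss_tot

# Intraday precision as the coefficient of variation of replicate measurements.
replicates = [24.1, 25.3, 24.8, 25.6, 24.4]    # assumed replicates, ng/mL
cv = 100 * statistics.stdev(replicates) / statistics.fmean(replicates)
print(f"r^2 = {r2:.4f}, intraday CV = {cv:.1f}%")
```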

  13. Pitfalls and Precautions When Using Predicted Failure Data for Quantitative Analysis of Safety Risk for Human Rated Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Hatfield, Glen S.; Hark, Frank; Stott, James

    2016-01-01

    Launch vehicle reliability analysis is largely dependent upon using predicted failure rates from data sources such as MIL-HDBK-217F. Reliability prediction methodologies based on component data do not take into account system integration risks such as those attributable to manufacturing and assembly. These sources often dominate component level risk. While consequence of failure is often understood, using predicted values in a risk model to estimate the probability of occurrence may underestimate the actual risk. Managers and decision makers use the probability of occurrence to influence the determination whether to accept the risk or require a design modification. The actual risk threshold for acceptance may not be fully understood due to the absence of system level test data or operational data. This paper will establish a method and approach to identify the pitfalls and precautions of accepting risk based solely upon predicted failure data. This approach will provide a set of guidelines that may be useful to arrive at a more realistic quantification of risk prior to acceptance by a program.
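
    The pitfall can be made concrete with a small sketch under the usual constant-failure-rate (exponential) model; the component rates, mission time, and integration failure rate below are assumed values, not figures from MIL-HDBK-217F.

```python
import math

# Assumed handbook-style component failure rates (failures per 1e6 hours),
# illustrating why component-only predictions can understate system risk.
rates_per_1e6h = [2.0, 5.0, 1.5, 0.8]
mission_hours = 500

lam = sum(rates_per_1e6h) / 1e6                 # series system: rates add
p_fail_components = 1 - math.exp(-lam * mission_hours)

# An assumed integration/assembly failure rate, absent from component
# handbooks but often dominant, raises the estimate substantially.
lam_integration = 20.0 / 1e6
p_fail_total = 1 - math.exp(-(lam + lam_integration) * mission_hours)

print(f"component-only P(fail): {p_fail_components:.2%}")
print(f"with integration risk:  {p_fail_total:.2%}")
```

    A decision maker seeing only the first number could accept a risk several times larger than the component prediction suggests.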

  14. Influence of the measuring condition on vibrocardiographic signals acquired on the thorax with a laser Doppler vibrometer

    NASA Astrophysics Data System (ADS)

    Mignanelli, L.; Bauer, G.; Klarmann, M.; Wang, H.; Rembe, C.

    2017-07-01

    Velocity signals acquired with a Laser Doppler Vibrometer on the thorax (Optical Vibrocardiography) contain important information related to cardiovascular parameters and cardiovascular diseases. The acquired signal is a superimposition of vibrations originating from different sources within the human body. Since we study the vibration generated by the heart to reliably detect a characteristic time interval corresponding to the PR interval in the ECG, these disturbances have to be removed by filtering. Moreover, the Laser Doppler Vibrometer measures only in the direction of the laser beam, and thus the velocity signal is only a projection of the three-dimensional movement of the thorax. This work presents an analysis of the influences of the filters and of the measurement direction on the characteristic time interval in vibrocardiographic signals. Our analysis results in recommended filter settings, and we demonstrate that reliable detection of vibrocardiographic parameters is possible within an angle deviation of 30° with respect to perpendicular irradiation on the front side of the subject.
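
    The angle tolerance has a simple geometric reading, sketched below: an off-normal beam scales the measured velocity amplitude by cos(θ) but leaves the timing of signal features, and hence the measured interval, unchanged. The angles are illustrative, not the paper's measurement conditions.

```python
import math

# The vibrometer measures only the velocity component along the beam, so at
# angle theta off the surface normal the amplitude scales by cos(theta) while
# the timing of signal features (the measured interval) is unchanged.
for deg in (0, 15, 30, 45):
    loss = 1 - math.cos(math.radians(deg))
    print(f"{deg:2d} deg off-normal: amplitude reduced by {loss:.1%}")
```

    Within 30° the amplitude loss stays under about 14%, which is consistent with timing-based parameters remaining detectable.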

  15. Ga+ TOF-SIMS lineshape analysis for resolution enhancement of MALDI MS spectra of a peptide mixture

    NASA Astrophysics Data System (ADS)

    Malyarenko, D. I.; Chen, H.; Wilkerson, A. L.; Tracy, E. R.; Cooke, W. E.; Manos, D. M.; Sasinowski, M.; Semmes, O. J.

    2004-06-01

    The use of mass spectrometry to obtain molecular profiles indicative of alteration of concentrations of peptides in body fluids is currently the subject of intense investigation. For surface-based time-of-flight mass spectrometry, the reliability and specificity of such profiling methods depend both on the resolution of the measuring instrument and on the preparation of samples. The present work is part of a program to use a Ga+ beam TOF-SIMS alone, and as an adjunct to MALDI, in the development of reliable protein and peptide markers for diseases. Here, we describe techniques to prepare samples of relatively high-mass peptides, which serve as calibration standards and proxies for biomarkers. These are: Arg8-vasopressin, human angiotensin II, and somatostatin. Their TOF-SIMS spectra show repeatable characteristic features, with mass resolution exceeding 2000, including parent peaks and chemical adducts. The lineshape analysis for high-resolution parent peaks is shown to be useful for filter construction and deconvolution of inferior-resolution SELDI-TOF spectra of the calibration peptide mixture.

  16. Clinical Trials for Predictive Medicine—New Challenges and Paradigms*

    PubMed Central

    Simon, Richard

    2014-01-01

    Background Developments in biotechnology and genomics have increased the focus of biostatisticians on prediction problems. This has led to many exciting developments for predictive modeling where the number of variables is larger than the number of cases. Heterogeneity of human diseases and new technology for characterizing them present new opportunities and challenges for the design and analysis of clinical trials. Purpose In oncology, treatment of broad populations with regimens that do not benefit most patients is less economically sustainable with expensive molecularly targeted therapeutics. The established molecular heterogeneity of human diseases requires the development of new paradigms for the design and analysis of randomized clinical trials as a reliable basis for predictive medicine [1, 2]. Results We have reviewed prospective designs for the development of new therapeutics with candidate predictive biomarkers. We have also outlined a prediction-based approach to the analysis of randomized clinical trials that both preserves the type I error and provides a reliable internally validated basis for predicting which patients are most likely or unlikely to benefit from the new regimen. Conclusions Developing new treatments with predictive biomarkers for identifying the patients who are most likely or least likely to benefit makes drug development more complex. But for many new oncology drugs it is the only science-based approach and should increase the chance of success. It may also lead to more consistency in results among trials and has obvious benefits for reducing the number of patients who ultimately receive expensive drugs which expose them to risks of adverse events but no benefit. This approach also has great potential value for controlling societal expenditures on health care. Development of treatments with predictive biomarkers requires major changes in the standard paradigms for the design and analysis of clinical trials. Some of the key assumptions upon which current methods are based are no longer valid. In addition to reviewing a variety of new clinical trial designs for co-development of treatments and predictive biomarkers, we have outlined a prediction-based approach to the analysis of randomized clinical trials. This is a very structured approach whose use requires careful prospective planning. It requires further development but may serve as a basis for a new generation of predictive clinical trials which provide the kinds of reliable individualized information which physicians and patients have long sought, but which have not been available from the past use of post-hoc subset analysis. PMID:20338899

  17. Human Research Program Opportunities

    NASA Technical Reports Server (NTRS)

    Kundrot, Craig E.

    2014-01-01

    The goal of HRP is to provide human health and performance countermeasures, knowledge, technologies, and tools to enable safe, reliable, and productive human space exploration. The Human Research Program was designed to meet the needs of human space exploration and to understand and reduce the risks to crew health and performance in exploration missions.

  18. 16 CFR 1500.4 - Human experience with hazardous substances.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 16 Commercial Practices 2 2014-01-01 2014-01-01 false Human experience with hazardous substances... § 1500.4 Human experience with hazardous substances. (a) Reliable data on human experience with any..., the human experience takes precedence. (b) Experience may show that an article is more or less toxic...

  19. 16 CFR 1500.4 - Human experience with hazardous substances.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 16 Commercial Practices 2 2012-01-01 2012-01-01 false Human experience with hazardous substances... § 1500.4 Human experience with hazardous substances. (a) Reliable data on human experience with any..., the human experience takes precedence. (b) Experience may show that an article is more or less toxic...

  20. 16 CFR 1500.4 - Human experience with hazardous substances.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 16 Commercial Practices 2 2011-01-01 2011-01-01 false Human experience with hazardous substances... § 1500.4 Human experience with hazardous substances. (a) Reliable data on human experience with any..., the human experience takes precedence. (b) Experience may show that an article is more or less toxic...

  1. 16 CFR 1500.4 - Human experience with hazardous substances.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 16 Commercial Practices 2 2010-01-01 2010-01-01 false Human experience with hazardous substances... § 1500.4 Human experience with hazardous substances. (a) Reliable data on human experience with any..., the human experience takes precedence. (b) Experience may show that an article is more or less toxic...

  2. Development of a morphology-based modeling technique for tracking solid-body displacements: examining the reliability of a potential MRI-only approach for joint kinematics assessment.

    PubMed

    Mahato, Niladri K; Montuelle, Stephane; Cotton, John; Williams, Susan; Thomas, James; Clark, Brian

    2016-05-18

    Single or biplanar video radiography and Roentgen stereophotogrammetry (RSA) techniques used for the assessment of in-vivo joint kinematics involve the application of ionizing radiation, which is a limitation for clinical research involving human subjects. To overcome this limitation, our long-term goal is to develop a magnetic resonance imaging (MRI)-only, three dimensional (3-D) modeling technique that permits dynamic imaging of joint motion in humans. Here, we present our initial findings, as well as reliability data, for an MRI-only protocol and modeling technique. We developed a morphology-based motion-analysis technique that uses MRI of custom-built solid-body objects to animate and quantify experimental displacements between them. The technique involved four major steps. First, the imaging volume was calibrated using a custom-built grid. Second, 3-D models were segmented from axial scans of two custom-built solid-body cubes. Third, these cubes were positioned at pre-determined relative displacements (translation and rotation) in the magnetic resonance coil and scanned with T1 and fast contrast-enhanced pulse sequences. The digital imaging and communications in medicine (DICOM) images were then processed for animation. The fourth step involved importing these processed images into an animation software, where they were displayed as background scenes. In the same step, 3-D models of the cubes were imported into the animation software, where the user manipulated the models to match their outlines in the scene (rotoscoping) and registered the models into an anatomical joint system. Measurements of displacements obtained from two different rotoscoping sessions were tested for reliability using coefficients of variation (CV), intraclass correlation coefficients (ICC), Bland-Altman plots, and Limits of Agreement analyses. Between-session reliability was high for both the T1 and the contrast-enhanced sequences. 
Specifically, the average CVs for translation were 4.31 % and 5.26 % for the two pulse sequences, respectively, while the ICCs were 0.99 for both. For rotation measures, the CVs were 3.19 % and 2.44 % for the two pulse sequences with the ICCs being 0.98 and 0.97, respectively. A novel biplanar imaging approach also yielded high reliability with mean CVs of 2.66 % and 3.39 % for translation in the x- and z-planes, respectively, and ICCs of 0.97 in both planes. This work provides basic proof-of-concept for a reliable marker-less non-ionizing-radiation-based quasi-dynamic motion quantification technique that can potentially be developed into a tool for real-time joint kinematics analysis.
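
    The between-session statistics reported above (CV, Bland-Altman limits of agreement) follow standard formulas, sketched below on assumed two-session measurements; the values are illustrative, not the study's data.

```python
import statistics

# Assumed two-session displacement measurements (mm) for six test positions,
# illustrating the between-session reliability statistics used in the study.
session1 = [2.10, 4.05, 5.98, 8.10, 10.02, 12.05]
session2 = [2.05, 4.15, 6.10, 7.95, 10.10, 11.90]

# Per-position CV across the two sessions, then averaged (percent).
cvs = [100 * statistics.stdev([a, b]) / statistics.fmean([a, b])
       for a, b in zip(session1, session2)]
mean_cv = statistics.fmean(cvs)

# Bland-Altman limits of agreement: mean difference +/- 1.96 SD of differences.
diffs = [a - b for a, b in zip(session1, session2)]
bias = statistics.fmean(diffs)
loa = 1.96 * statistics.stdev(diffs)
print(f"mean CV: {mean_cv:.2f}%, bias: {bias:+.3f} mm, LoA: +/-{loa:.3f} mm")
```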

  3. Factors which Limit the Value of Additional Redundancy in Human Rated Launch Vehicle Systems

    NASA Technical Reports Server (NTRS)

    Anderson, Joel M.; Stott, James E.; Ring, Robert W.; Hatfield, Spencer; Kaltz, Gregory M.

    2008-01-01

    The National Aeronautics and Space Administration (NASA) has embarked on an ambitious program to return humans to the moon and beyond. As NASA moves forward in the development and design of new launch vehicles for future space exploration, it must fully consider the implications that rule-based requirements of redundancy or fault tolerance have on system reliability/risk. These considerations include common cause failure, increased system complexity, combined serial and parallel configurations, and the impact of design features implemented to control premature activation. These factors and others must be considered in trade studies to support design decisions that balance safety, reliability, performance and system complexity to achieve a relatively simple, operable system that provides the safest and most reliable system within the specified performance requirements. This paper describes conditions under which additional functional redundancy can impede improved system reliability. Examples from current NASA programs including the Ares I Upper Stage will be shown.
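
    One reason added redundancy can stop paying off, common cause failure, is captured by the standard beta-factor model, sketched here with assumed numbers (this is a textbook illustration, not the paper's Ares I analysis): once the shared failure fraction dominates, extra channels barely move the system failure probability.

```python
# Beta-factor common-cause model: a fraction beta of each channel's failure
# probability is shared across all channels, so redundancy only reduces the
# independent part of the risk.
p = 1e-3       # single-channel failure probability (assumed)
beta = 0.05    # common-cause fraction (assumed)

p_indep, p_ccf = (1 - beta) * p, beta * p
p_sys = {n: p_indep ** n + p_ccf for n in (1, 2, 3, 4)}
for n, v in p_sys.items():
    print(f"{n} channel(s): P(system fail) ~ {v:.2e}")
```

    Going from one to two channels helps substantially; going from two to four barely changes the result, because the common-cause term beta*p sets the floor.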

  4. Recognizing Facial Expressions Automatically from Video

    NASA Astrophysics Data System (ADS)

    Shan, Caifeng; Braspenning, Ralph

    Facial expressions, resulting from movements of the facial muscles, are changes in the face in response to a person's internal emotional states, intentions, or social communications. There is a considerable history associated with the study of facial expressions. Darwin [22], who argued that all mammals show emotions reliably in their faces, was the first to describe in detail the specific facial expressions associated with emotions in animals and humans. Since then, facial expression analysis has been an area of great research interest for behavioral scientists [27]. Psychological studies [48, 3] suggest that facial expressions, as the main mode of nonverbal communication, play a vital role in human face-to-face communication. For illustration, we show some examples of facial expressions in Fig. 1.

  5. Analysis of 6-mercaptopurine in human plasma with a high-performance liquid chromatographic method including post-column derivatization and fluorimetric detection.

    PubMed

    Jonkers, R E; Oosterhuis, B; ten Berge, R J; van Boxtel, C J

    1982-12-10

    A relatively simple assay with improved reliability and sensitivity for measuring levels of 6-mercaptopurine in human plasma is presented. After extraction of the compound and the added internal standard with phenyl mercury acetate, samples were separated by ion-pair reversed-phase high-performance liquid chromatography. On-line the analytes were oxidized to fluorescent products and detected in a flow-fluorimeter. The within-day coefficient of variation was 3.8% at a concentration of 25 ng/ml. The lower detection limit was 2 ng/ml when 1.0 ml of plasma was used. Mercaptopurine concentration versus time curves of two subjects after a single oral dose of azathioprine are shown.

  6. A mathematical model of diurnal variations in human plasma melatonin levels

    NASA Technical Reports Server (NTRS)

    Brown, E. N.; Choe, Y.; Shanahan, T. L.; Czeisler, C. A.

    1997-01-01

    Studies in animals and humans suggest that the diurnal pattern in plasma melatonin levels is due to the hormone's rates of synthesis, circulatory infusion and clearance, circadian control of synthesis onset and offset, environmental lighting conditions, and error in the melatonin immunoassay. A two-dimensional linear differential equation model of the hormone is formulated and is used to analyze plasma melatonin levels in 18 normal healthy male subjects during a constant routine. Recently developed Bayesian statistical procedures are used to incorporate correctly the magnitude of the immunoassay error into the analysis. The estimated parameters [median (range)] were clearance half-life of 23.67 (14.79-59.93) min, synthesis onset time of 2206 (1940-0029), synthesis offset time of 0621 (0246-0817), and maximum N-acetyltransferase activity of 7.17 (2.34-17.93) pmol x l(-1) x min(-1). All were in good agreement with values from previous reports. The difference between synthesis offset time and the phase of the core temperature minimum was 1 h 15 min (-4 h 38 min to 2 h 43 min). The correlation between synthesis onset and the dim light melatonin onset was 0.93. Our model provides a more physiologically plausible estimate of the melatonin synthesis onset time than that given by the dim light melatonin onset and the first reliable means of estimating the phase of synthesis offset. Our analysis shows that the circadian and pharmacokinetic parameters of melatonin can be reliably estimated from a single model.
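
    The model class can be sketched as a two-compartment linear system with gated synthesis, simulated below by simple Euler stepping. The clearance half-life, onset/offset times, and maximum synthesis rate are taken from the medians reported above, but the infusion rate constant and the compartment structure details are assumptions of this sketch, not the paper's fitted model.

```python
import math

# Two-compartment sketch: zero-order synthesis between onset and offset feeds
# a gland/infusion compartment, which passes into plasma with first-order
# clearance. Units are nominal (pmol/l and minutes).
k_clear = math.log(2) / 23.67          # plasma clearance (1/min), 23.67 min t1/2
k_inf = 0.05                           # gland-to-plasma rate constant (assumed)
onset = 22 * 60 + 6                    # synthesis onset 22:06, in minutes
offset = 6 * 60 + 21 + 24 * 60         # synthesis offset 06:21 the next day
a_max = 7.17                           # max synthesis rate, pmol/l/min

gland, plasma, trace = 0.0, 0.0, []
for t in range(20 * 60, 36 * 60):      # 20:00 to 12:00 next day, 1-min Euler
    synth = a_max if onset <= t < offset else 0.0
    gland += synth - k_inf * gland
    plasma += k_inf * gland - k_clear * plasma
    trace.append(plasma)

peak = max(trace)
print(f"peak plasma melatonin ~ {peak:.0f} pmol/l; "
      f"level after synthesis offset decays toward {trace[-1]:.2f} pmol/l")
```

    The simulated profile rises after onset, plateaus near a_max/k_clear, and decays exponentially after offset, reproducing the qualitative diurnal shape the model is fit to.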

  7. Validation of the World Health Organization tool for situational analysis to assess emergency and essential surgical care at district hospitals in Ghana.

    PubMed

    Osen, Hayley; Chang, David; Choo, Shelly; Perry, Henry; Hesse, Afua; Abantanga, Francis; McCord, Colin; Chrouser, Kristin; Abdullah, Fizan

    2011-03-01

    The World Health Organization (WHO) Tool for Situational Analysis to Assess Emergency and Essential Surgical Care (hereafter called the WHO Tool) has been used in more than 25 countries and is the largest effort to assess surgical care in the world. However, it has not yet been independently validated. Test-retest reliability is one way to validate the degree to which test instruments are free from random error. The aim of the present field study was to determine the test-retest reliability of the WHO Tool. The WHO Tool was mailed to 10 district hospitals in Ghana. Written instructions were provided along with a letter from the Ghana Health Services requesting the hospital administrator to complete the survey tool. After ensuring delivery and completion of the forms, the study team readministered the WHO Tool at the time of an on-site visit less than 1 month later. The results of the two tests were compared to calculate kappa statistics for each of the 152 questions in the WHO Tool; the kappa statistic measures the degree of agreement above what would be expected by chance alone. Weighted and unweighted kappa statistics were calculated for all 152 questions. The median unweighted kappa for the entire survey was 0.43 (interquartile range 0-0.84). The infrastructure section (24 questions) had a median kappa of 0.81; the human resources section (13 questions), 0.77; the surgical procedures section (67 questions), 0.00; and the emergency surgical equipment section (48 questions), 0.81. Survey questions on hospital infrastructure thus had high reliability, whereas questions on process of care had poor reliability and may benefit from supplemental data gathered by direct observation. Limitations of the study include the small sample size: 10 district hospitals in a single country. The consistently high agreement observed in this field test suggests that the WHO Tool for Situational Analysis is reliable where it measures structure and setting, but that it should be revised for measuring process of care.
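
    The unweighted kappa reported for each question follows from a simple cross-tabulation of the two administrations. A minimal sketch with invented yes/no answers (not study data):

```python
# Hedged sketch: Cohen's unweighted kappa for one test-retest question.
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Unweighted Cohen's kappa: observed agreement corrected for chance."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    # Chance agreement: sum over categories of the product of marginals.
    expected = sum(c1[k] * c2[k] for k in set(rater1) | set(rater2)) / n**2
    return (observed - expected) / (1 - expected)

# Ten hospitals answering one yes/no item on the first and second round.
test1 = ["y", "y", "n", "y", "n", "y", "n", "n", "y", "y"]
test2 = ["y", "y", "n", "n", "n", "y", "y", "n", "y", "y"]
print(round(cohens_kappa(test1, test2), 3))  # → 0.583
```

Here 8 of 10 answers agree (observed 0.80), but the marginals alone would produce 0.52 agreement by chance, so kappa is well below the raw agreement rate, which is exactly why the study reports kappa rather than percent agreement.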

  8. The properties of human body phantoms used in calculations of electromagnetic fields exposure by wireless communication handsets or hand-operated industrial devices.

    PubMed

    Zradziński, Patryk

    2013-06-01

    According to international guidelines, assessing the biophysical effects of exposure to electromagnetic fields (EMF) generated by hand-operated sources requires evaluation of the induced electric field (E(in)) or specific energy absorption rate (SAR) inside a worker's body, usually by numerical simulation, with different protocols applied to these two exposure cases. The crucial element of these simulations is the numerical phantom of the human body. Procedures for evaluating E(in) and SAR for compliance analysis against exposure limits have been defined in Institute of Electrical and Electronics Engineers standards and International Commission on Non-Ionizing Radiation Protection guidelines, but a detailed specification of human body phantoms has not been provided. An analysis was performed of the properties of over 30 human body numerical phantoms that have been used in recently published investigations of EMF exposure from various sources. The differences in the applicability of these phantoms to evaluating E(in) and SAR while operating industrial devices, and SAR while using mobile communication handsets, are discussed. Whole-body phantom dimensions, posture, spatial resolution and electric contact with the ground are the key parameters in modeling exposure from industrial devices, whereas modeling exposure from mobile communication handsets, which needs to represent only the exposed part of the body nearest the handset, depends mainly on the spatial resolution of the phantom. Specifying and standardizing these parameters of numerical human body phantoms is a key requirement for obtaining comparable and reliable results from numerical simulations carried out for compliance analysis against exposure limits or for exposure assessment in EMF-related epidemiological studies.

  9. Network challenges for cyber physical systems with tiny wireless devices: a case study on reliable pipeline condition monitoring.

    PubMed

    Ali, Salman; Qaisar, Saad Bin; Saeed, Husnain; Khan, Muhammad Farhan; Naeem, Muhammad; Anpalagan, Alagan

    2015-03-25

    The synergy of computational and physical network components leading to the Internet of Things, Data and Services has been made feasible by the use of Cyber Physical Systems (CPSs). CPS engineering promises to impact system condition monitoring for a diverse range of fields from healthcare, manufacturing, and transportation to aerospace and warfare. CPS for environment monitoring applications completely transforms human-to-human, human-to-machine and machine-to-machine interactions with the use of Internet Cloud. A recent trend is to gain assistance from mergers between virtual networking and physical actuation to reliably perform all conventional and complex sensing and communication tasks. Oil and gas pipeline monitoring provides a novel example of the benefits of CPS, providing a reliable remote monitoring platform to leverage environment, strategic and economic benefits. In this paper, we evaluate the applications and technical requirements for seamlessly integrating CPS with sensor network plane from a reliability perspective and review the strategies for communicating information between remote monitoring sites and the widely deployed sensor nodes. Related challenges and issues in network architecture design and relevant protocols are also provided with classification. This is supported by a case study on implementing reliable monitoring of oil and gas pipeline installations. Network parameters like node-discovery, node-mobility, data security, link connectivity, data aggregation, information knowledge discovery and quality of service provisioning have been reviewed.

  10. Network Challenges for Cyber Physical Systems with Tiny Wireless Devices: A Case Study on Reliable Pipeline Condition Monitoring

    PubMed Central

    Ali, Salman; Qaisar, Saad Bin; Saeed, Husnain; Farhan Khan, Muhammad; Naeem, Muhammad; Anpalagan, Alagan

    2015-01-01

    The synergy of computational and physical network components leading to the Internet of Things, Data and Services has been made feasible by the use of Cyber Physical Systems (CPSs). CPS engineering promises to impact system condition monitoring for a diverse range of fields from healthcare, manufacturing, and transportation to aerospace and warfare. CPS for environment monitoring applications completely transforms human-to-human, human-to-machine and machine-to-machine interactions with the use of Internet Cloud. A recent trend is to gain assistance from mergers between virtual networking and physical actuation to reliably perform all conventional and complex sensing and communication tasks. Oil and gas pipeline monitoring provides a novel example of the benefits of CPS, providing a reliable remote monitoring platform to leverage environment, strategic and economic benefits. In this paper, we evaluate the applications and technical requirements for seamlessly integrating CPS with sensor network plane from a reliability perspective and review the strategies for communicating information between remote monitoring sites and the widely deployed sensor nodes. Related challenges and issues in network architecture design and relevant protocols are also provided with classification. This is supported by a case study on implementing reliable monitoring of oil and gas pipeline installations. Network parameters like node-discovery, node-mobility, data security, link connectivity, data aggregation, information knowledge discovery and quality of service provisioning have been reviewed. PMID:25815444

  11. A HUMAN RELIABILITY-CENTERED APPROACH TO THE DEVELOPMENT OF JOB AIDS FOR REVIEWERS OF MEDICAL DEVICES THAT USE RADIOLOGICAL BYPRODUCT MATERIALS.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    COOPER, S.E.; BROWN, W.S.; WREATHALL, J.

    2005-02-02

    The U.S. Nuclear Regulatory Commission (NRC) is engaged in an initiative to risk-inform the regulation of byproduct materials, which are radioactive materials, other than those used in nuclear power plants or in weapons production, employed primarily for medical or industrial purposes. Operating experience indicates that human actions play a dominant role in most activities involving byproduct materials, and the overall risk of these activities is strongly influenced by human performance. Hence, an improved understanding of human error, its causes and contexts, and human reliability analysis (HRA) is important in risk-informing the regulation of these activities. The development of the human performance job aids was undertaken in stages, with frequent interaction with the prospective users. First, potentially risk-significant human actions were identified based on reviews of available risk studies for byproduct material applications and of descriptions of events involving potentially significant human actions; applications from the medical and industrial domains were sampled. Next, the specific needs of the expected users of the human performance-related capabilities were determined. To do this, NRC headquarters and regional staff were interviewed to identify the types of activities (e.g., license reviews, inspections, event assessments) that need HRA support and the form in which such support might best be offered. Because the range of byproduct uses regulated by the NRC is so broad, initial development of knowledge and tools was undertaken in the context of a specific use of byproduct material, selected in consultation with NRC staff. Based on the needs of NRC staff and the human performance-related characteristics of the chosen context, knowledge resources were then compiled to support consideration of human performance issues in the regulation of byproduct materials. Finally, with information sources and an application context identified, a set of strawman job aids was developed and presented to prospective users for critique and comment. Work is currently under way to develop training materials and refine the job aids in preparation for a pilot evaluation.

  12. The Importance of HRA in Human Space Flight: Understanding the Risks

    NASA Technical Reports Server (NTRS)

    Hamlin, Teri

    2010-01-01

    Human performance is critical to crew safety during space missions. Humans interact with hardware and software during ground processing, normal flight, and in response to events. Human interactions with hardware and software can cause Loss of Crew and/or Vehicle (LOCV) through improper actions, or may prevent LOCV through recovery and control actions. Humans have the ability to deal with complex situations and system interactions beyond the capability of machines. Human Reliability Analysis (HRA) is a method used to qualitatively and quantitatively assess the occurrence of human failures that affect the availability and reliability of complex systems. Modeling human actions with their corresponding failure probabilities in a Probabilistic Risk Assessment (PRA) provides a more complete picture of system risks and risk contributions. A high-quality HRA can provide valuable information on potential areas for improvement, including training, procedures, human interface design, and the need for automation. Modeling human error has always been a challenge, in part because performance data are not always readily available. For spaceflight, the challenge is amplified not only by the small number of participants and the limited performance data available, but also by the lack of definition of the unique factors influencing human performance in space. These factors, called performance shaping factors in HRA terminology, are used in HRA techniques to modify basic human error probabilities in order to capture the context of an analyzed task. Many human error modeling techniques were developed in the context of nuclear power plants, so the methodologies do not address spaceflight factors such as the effects of microgravity and longer-duration missions. This presentation describes the types of human error risks that have shown up as risk drivers in the Shuttle PRA and that may be applicable to commercial space flight. As with other large PRAs of complex machines, human error in the Shuttle PRA proved to be an important contributor (12 percent) to LOCV. An existing HRA technique was adapted for use in the Shuttle PRA, but additional guidance and improvements are needed to make the HRA task in space-related PRAs easier and more accurate. The presentation therefore also outlines plans for expanding current HRA methodology to cover spaceflight performance shaping factors more explicitly.

  13. An LC-MS/MS method for rapid and sensitive high-throughput simultaneous determination of various protein kinase inhibitors in human plasma.

    PubMed

    Abdelhameed, Ali S; Attwa, Mohamed W; Kadi, Adnan A

    2017-02-01

    A reliable, high-throughput and sensitive LC-MS/MS procedure was developed and validated for the determination of five tyrosine kinase inhibitors in human plasma. Following their extraction from human plasma, samples were eluted on an RP Luna®-PFP 100 Å column using a mobile phase of acetonitrile and 0.01 M ammonium formate in water (pH ~4.1) (50:50, v/v) flowing at 0.3 mL min⁻¹. The mass spectrometer operated with electrospray ionization in the positive-ion multiple reaction monitoring mode. The proposed methodology yielded linear calibration plots with correlation coefficients of r² = 0.9995-0.9999 over concentration ranges of 2.5-100 ng mL⁻¹ for imatinib, 5.0-100 ng mL⁻¹ for sorafenib, tofacitinib and afatinib, and 1.0-100 ng mL⁻¹ for cabozantinib. The procedure was validated in terms of specificity, limit of detection (0.32-1.71 ng mL⁻¹), lower limit of quantification (0.97-5.07 ng mL⁻¹), intra- and interassay accuracy (-3.83 to +2.40%) and precision (<3.37%), matrix effect, recovery and stability. Our results demonstrate that the proposed method is highly reliable for routine quantification of the investigated tyrosine kinase inhibitors in human plasma and can be efficiently applied to the rapid and sensitive analysis of clinical samples. Copyright © 2016 John Wiley & Sons, Ltd.
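
    The linearity figures quoted above come from ordinary least-squares fits of detector response against concentration. A minimal sketch in which only the imatinib concentration range (2.5-100 ng/mL) is taken from the text; the peak-area responses are invented for illustration:

```python
# Hedged sketch: linear calibration fit and coefficient of determination.

def linfit(x, y):
    """Ordinary least squares: returns slope, intercept, and r^2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1 - ss_res / ss_tot

conc = [2.5, 5, 10, 25, 50, 100]             # ng/mL calibration standards
area = [0.26, 0.49, 1.02, 2.48, 5.05, 9.98]  # detector response (invented)
slope, intercept, r2 = linfit(conc, area)
print(r2 > 0.999)  # → True: linearity comparable to the reported r² values
```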

  14. Measuring and Validating the Levels of Brain-Derived Neurotrophic Factor in Human Serum

    PubMed Central

    Naegelin, Yvonne; Dingsdale, Hayley; Säuberli, Katharina; Schädelin, Sabine; Kappos, Ludwig

    2018-01-01

    Brain-derived neurotrophic factor (BDNF) secreted by neurons is a significant component of synaptic plasticity. In humans, it is also present in blood platelets where it accumulates following its biosynthesis in megakaryocytes. BDNF levels are thus readily detectable in human serum and it has been abundantly speculated that they may somehow serve as an indicator of brain function. However, there is a great deal of uncertainty with regard to the range of BDNF levels that can be considered normal, how stable these values are over time and even whether BDNF levels can be reliably measured in serum. Using monoclonal antibodies and a sandwich ELISA, this study reports on BDNF levels in the serum of 259 volunteers with a mean value of 32.69 ± 8.33 ng/ml (SD). The mean value for the same cohort after 12 months was not significantly different (N = 226, 32.97 ± 8.36 ng/ml SD, p = 0.19). Power analysis of these values indicates that relatively large cohorts are necessary to identify significant differences, requiring a group size of 60 to detect a 20% change. The levels determined by ELISA could be validated by Western blot analyses using a BDNF monoclonal antibody. While no association was observed with gender, a weak, positive correlation was found with age. The overall conclusions are that BDNF levels can be reliably measured in human serum, that these levels are quite stable over one year, and that comparisons between two populations may only be meaningful if cohorts of sufficient sizes are assembled. PMID:29662942
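
    The cohort-size claim can be illustrated with the standard two-sample normal-approximation formula for group size, using the cohort's reported mean (32.69 ng/ml) and SD (8.33 ng/ml). Note this is a generic sketch at 80% power and two-sided alpha of 0.05; the abstract's figure of 60 per group likely reflects different power, alpha, or design assumptions.

```python
# Hedged sketch: per-group sample size to detect a relative change in means,
# n = 2 * (z_alpha + z_beta)^2 * sd^2 / delta^2, normal approximation.
import math

Z_ALPHA = 1.96   # two-sided alpha = 0.05
Z_BETA = 0.84    # power = 0.80

def n_per_group(mean, sd, rel_change, z_a=Z_ALPHA, z_b=Z_BETA):
    delta = rel_change * mean          # absolute difference to detect
    return math.ceil(2 * (z_a + z_b) ** 2 * sd ** 2 / delta ** 2)

print(n_per_group(32.69, 8.33, 0.20))  # → 26 per group at these settings
```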

  15. Design, Development, Testing, and Evaluation: Human Factors Engineering

    NASA Technical Reports Server (NTRS)

    Adelstein, Bernard; Hobbs, Alan; O'Hara, John; Null, Cynthia

    2006-01-01

    While human-system interaction occurs in all phases of system development and operation, this chapter on Human Factors in the DDT&E for Reliable Spacecraft Systems is restricted to the elements that involve "direct contact" with spacecraft systems. Such interactions will encompass all phases of human activity during the design, fabrication, testing, operation, and maintenance phases of the spacecraft lifespan. This section will therefore consider practices that would accommodate and promote effective, safe, reliable, and robust human interaction with spacecraft systems. By restricting this chapter to what the team terms "direct contact" with the spacecraft, "remote" factors not directly involved in the development and operation of the vehicle, such as management and organizational issues, have been purposely excluded. However, the design of vehicle elements that enable and promote ground control activities such as monitoring, feedback, correction and reversal (override) of on-board human and automation process are considered as per NPR8705.2A, Section 3.3.

  16. [Study of the reliability in one dimensional size measurement with digital slit lamp microscope].

    PubMed

    Wang, Tao; Qi, Chaoxiu; Li, Qigen; Dong, Lijie; Yang, Jiezheng

    2010-11-01

    To study the reliability of a digital slit lamp microscope as a tool for quantitative one-dimensional size measurement. Three single-blinded observers acquired and repeatedly measured images of 4.00 mm and 10.00 mm targets on a vernier caliper, simulating the human pupil and corneal diameter, under a China-made digital slit lamp microscope at objective magnifications of 4x, 10x, 16x, 25x and 40x for the 4.00 mm target and 4x, 10x and 16x for the 10.00 mm target. The correctness and precision of the measurements were compared. The three investigators' average values for the 4.00 mm images fell between 3.98 and 4.06 mm; for the 10.00 mm images, the averages fell between 10.00 and 10.04 mm. For the 4.00 mm images, the measured values differed significantly from the true value in all conditions except A4, B25, C16 and C25; for the 10.00 mm images, in all conditions except A10. When the same investigator measured the same size at different magnifications, the results differed significantly except for investigator A's measurements of the 10.00 mm dimension. When measurements of the same size were compared across investigators, the 4.00 mm measurements at 4-fold magnification showed no significant difference; the remaining comparisons were statistically significant. The coefficient of variation of all measurement results was less than 5%, and it decreased as magnification increased. One-dimensional size measurement with a digital slit lamp microscope has good reliability, but a reliability analysis should be performed before it is used for quantitative analysis, to reduce systematic errors.
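
    The coefficient of variation used here as the precision criterion is simply the sample standard deviation relative to the mean. A minimal sketch with three invented repeat readings of the 4.00 mm target (not study data):

```python
# Hedged sketch: coefficient of variation (CV) for repeated measurements.
import statistics

def cv_percent(values):
    """CV = sample standard deviation / mean, as a percentage."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

repeats = [4.02, 3.98, 4.05]    # one observer, one magnification (invented)
print(cv_percent(repeats) < 5)  # → True: within the reported 5% bound
```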

  17. Cultural competency assessment tool for hospitals: Evaluating hospitals’ adherence to the culturally and linguistically appropriate services standards

    PubMed Central

    Weech-Maldonado, Robert; Dreachslin, Janice L.; Brown, Julie; Pradhan, Rohit; Rubin, Kelly L.; Schiller, Cameron; Hays, Ron D.

    2016-01-01

    Background The U.S. national standards for culturally and linguistically appropriate services (CLAS) in health care provide guidelines on policies and practices aimed at developing culturally competent systems of care. The Cultural Competency Assessment Tool for Hospitals (CCATH) was developed as an organizational tool to assess adherence to the CLAS standards. Purposes First, we describe the development of the CCATH and estimate the reliability and validity of the CCATH measures. Second, we discuss the managerial implications of the CCATH as an organizational tool to assess cultural competency. Methodology/Approach We pilot tested an initial draft of the CCATH, revised it based on a focus group and cognitive interviews, and then administered it in a field test with a sample of California hospitals. The reliability and validity of the CCATH were evaluated using factor analysis, analysis of variance, and Cronbach’s alphas. Findings Exploratory and confirmatory factor analyses identified 12 CCATH composites: leadership and strategic planning, data collection on inpatient population, data collection on service area, performance management systems and quality improvement, human resources practices, diversity training, community representation, availability of interpreter services, interpreter services policies, quality of interpreter services, translation of written materials, and clinical cultural competency practices. All the CCATH scales had internal consistency reliability of .65 or above, and the reliability was .70 or above for 9 of the 12 scales. Analysis of variance results showed that not-for-profit hospitals have higher CCATH scores than for-profit hospitals in five CCATH scales and higher CCATH scores than government hospitals in two CCATH scales. Practice Implications The CCATH showed adequate psychometric properties. 
Managers and policy makers can use the CCATH as a tool to evaluate hospital performance in cultural competency and identify and target improvements in hospital policies and practices that undergird the provision of CLAS. PMID:21934511

  18. Efficacy of Sex Determination from Human Dental Pulp Tissue and its Reliability as a Tool in Forensic Dentistry.

    PubMed

    Khanna, Kaveri Surya

    2015-01-01

    Sex determination is one of the primary steps in forensics. The Barr body can be used as a histological marker of sex, as it is specific to female somatic cells and rare in male cells. The aims were to establish human dental pulp as an identification tool for sex in forensic odontology (FO) and to evaluate the time period over which sex can be determined from pulp tissue, using three stains: H and E, Feulgen, and acridine orange under fluorescence. Ninety pulp samples (45 male and 45 female) were subjected to Barr body analysis for sex determination using light and fluorescence microscopy. Barr bodies were positive in the female samples and negative or rare (<3%) in the male samples. The Barr body in human dental pulp tissue can thus serve as a reliable determinant of sex in FO.

  19. The application of SHERPA (Systematic Human Error Reduction and Prediction Approach) in the development of compensatory cognitive rehabilitation strategies for stroke patients with left and right brain damage.

    PubMed

    Hughes, Charmayne M L; Baber, Chris; Bienkiewicz, Marta; Worthington, Andrew; Hazell, Alexa; Hermsdörfer, Joachim

    2015-01-01

    Approximately 33% of stroke patients have difficulty performing activities of daily living, often committing errors during the planning and execution of such activities. The objective of this study was to evaluate the ability of the human error identification (HEI) technique SHERPA (Systematic Human Error Reduction and Prediction Approach) to predict errors during the performance of daily activities in stroke patients with left and right hemisphere lesions. Using SHERPA we successfully predicted 36 of the 38 observed errors, with analysis indicating that the proportion of predicted and observed errors was similar for all sub-tasks and severity levels. HEI results were used to develop compensatory cognitive strategies that clinicians could employ to reduce or prevent errors from occurring. This study provides evidence for the reliability and validity of SHERPA in the design of cognitive rehabilitation strategies in stroke populations.

  20. Investigating the Reliability and Factor Structure of Kalichman's "Survey 2: Research Misconduct" Questionnaire: A Post Hoc Analysis Among Biomedical Doctoral Students in Scandinavia.

    PubMed

    Holm, Søren; Hofmann, Bjørn

    2017-10-01

    A precondition for reducing scientific misconduct is evidence about scientists' attitudes. We need reliable survey instruments, and this study investigates the reliability of Kalichman's "Survey 2: research misconduct" questionnaire. The study is a post hoc analysis of data from three surveys among biomedical doctoral students in Scandinavia (2010-2015). We perform reliability analysis, and exploratory and confirmatory factor analysis using a split-sample design as a partial validation. The results indicate that a reliable 13-item scale can be formed (Cronbach's α = .705), and factor analysis indicates that there are four reliable subscales each tapping a different construct: (a) general attitude to misconduct (α = .768), (b) attitude to personal misconduct (α = .784), (c) attitude to whistleblowing (α = .841), and (d) attitude to blameworthiness/punishment (α = .877). A full validation of the questionnaire requires further research. We, nevertheless, hope that the results will facilitate the increased use of the questionnaire in research.
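
    The Cronbach's alpha values reported for the scale and its subscales follow from the item variances and the variance of the total score. A minimal sketch with an invented 4-item, 5-respondent answer matrix (not study data):

```python
# Hedged sketch: Cronbach's alpha, alpha = k/(k-1) * (1 - sum(var_i)/var_total).
import statistics

def cronbach_alpha(items):
    """items: one inner list of respondent scores per item."""
    k = len(items)
    item_vars = sum(statistics.variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    total_var = statistics.variance(totals)
    return k / (k - 1) * (1 - item_vars / total_var)

# Four Likert-type items answered by five respondents (columns align).
items = [
    [3, 4, 4, 2, 5],
    [3, 5, 4, 2, 4],
    [2, 4, 5, 1, 5],
    [3, 4, 4, 2, 4],
]
print(round(cronbach_alpha(items), 2))  # → 0.94
```

Because the respondents' answers move together across items, the total-score variance far exceeds the sum of item variances, yielding a high alpha; uncorrelated items would drive alpha toward zero.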

  1. School-age children's fears, anxiety, and human figure drawings.

    PubMed

    Carroll, M K; Ryan-Wenger, N A

    1999-01-01

    The purpose of this study was to identify the fears of school-age children and determine the relationship between fear and anxiety. A descriptive, correlational, secondary analysis study was conducted using a convenience sample of 90 children between the ages of 8 and 12 years. Each child was instructed to complete the Revised Children's Anxiety Scale and then answer questions from a structured interview. On completion, each child was instructed to draw a human figure drawing. Frequency charts and correlational statistics were used to analyze the data. Findings indicated that the most significant fears of the boys were in the categories of animals, safety, school, and supernatural phenomena, whereas girls were more fearful of natural phenomena. High correlations existed between anxiety scores and the number of fears and emotional indicators on human figure drawings. Because human figure drawings are reliable tools for assessing anxiety and fears in children, practitioners should incorporate these drawings as part of their routine assessments of fearful children.

  2. Multiplex PCR for Differential Identification of Broad Tapeworms (Cestoda: Diphyllobothrium) Infecting Humans▿

    PubMed Central

    Wicht, Barbara; Yanagida, Tetsuya; Scholz, Tomáš; Ito, Akira; Jiménez, Juan A.; Brabec, Jan

    2010-01-01

    The specific identification of broad tapeworms (genus Diphyllobothrium) infecting humans is very difficult to perform by morphological observation. Molecular analysis by PCR and sequencing represents the only reliable tool to date to identify these parasites to the species level. Due to the recent spread of human diphyllobothriosis in several countries, a correct diagnosis has become crucial to better understand the distribution and the life cycle of human-infecting species as well as to prevent the introduction of parasites to disease-free water systems. Nevertheless, PCR and sequencing, although highly precise, are too complicated, long, and expensive to be employed in medical laboratories for routine diagnostics. In the present study we optimized a cheap and rapid molecular test for the differential identification of the most common Diphyllobothrium species infecting humans (D. latum, D. dendriticum, D. nihonkaiense, and D. pacificum), based on a multiplex PCR with the cytochrome c oxidase subunit 1 gene of mitochondrial DNA. PMID:20592146

  3. Fear of darkness, the full moon and the nocturnal ecology of African lions.

    PubMed

    Packer, Craig; Swanson, Alexandra; Ikanda, Dennis; Kushnir, Hadas

    2011-01-01

    Nocturnal carnivores are widely believed to have played an important role in human evolution, driving the need for night-time shelter, the control of fire and our innate fear of darkness. However, no empirical data are available on the effects of darkness on the risks of predation in humans. We performed an extensive analysis of predatory behavior across the lunar cycle on the largest dataset of lion attacks ever assembled and found that African lions are as sensitive to moonlight when hunting humans as when hunting herbivores and that lions are most dangerous to humans when the moon is faint or below the horizon. At night, people are most active between dusk and 10:00 pm, thus most lion attacks occur in the first weeks following the full moon (when the moon rises at least an hour after sunset). Consequently, the full moon is a reliable indicator of impending danger, perhaps helping to explain why the full moon has been the subject of so many myths and misconceptions.

  4. Applications of neural networks to landmark detection in 3-D surface data

    NASA Astrophysics Data System (ADS)

    Arndt, Craig M.

    1992-09-01

    The problem of identifying key landmarks in 3-dimensional surface data is of considerable interest in solving a number of difficult real-world tasks, including object recognition and image processing. The specific problem that we address in this research is to identify the specific landmarks (anatomical) in human surface data. This is a complex task, currently performed visually by an expert human operator. In order to replace these human operators and increase reliability of the data acquisition, we need to develop a computer algorithm which will utilize the interrelations between the 3-dimensional data to identify the landmarks of interest. The current presentation describes a method for designing, implementing, training, and testing a custom architecture neural network which will perform the landmark identification task. We discuss the performance of the net in relationship to human performance on the same task and how this net has been integrated with other AI and traditional programming methods to produce a powerful analysis tool for computer anthropometry.

  5. Catching errors with patient-specific pretreatment machine log file analysis.

    PubMed

    Rangaraj, Dharanipathy; Zhu, Mingyao; Yang, Deshan; Palaniswaamy, Geethpriya; Yaddanapudi, Sridhar; Wooten, Omar H; Brame, Scott; Mutic, Sasa

    2013-01-01

    A robust, efficient, and reliable quality assurance (QA) process is highly desirable for modern external beam radiation therapy treatments. Here, we report the results of a semiautomatic, pretreatment, patient-specific QA process based on dynamic machine log file analysis, clinically implemented for intensity modulated radiation therapy (IMRT) treatments delivered by high-energy linear accelerators (Varian 2100/2300 EX, Trilogy, iX-D, Varian Medical Systems Inc, Palo Alto, CA). The multileaf collimator (MLC) machine log files are called Dynalog by Varian. Using an in-house developed computer program called "Dynalog QA," we automatically compare the beam delivery parameters in the log files that are generated during pretreatment point dose verification measurements with the treatment plan to determine any discrepancies in IMRT deliveries. Fluence maps are constructed and compared between the delivered and planned beams. Between clinical introduction in June 2009 and the end of 2010, 912 machine log file QA analyses were performed. Among these, 14 errors causing dosimetric deviation were detected and required further investigation and intervention. These errors were the result of human operating mistakes, flawed treatment planning, and data modification during plan file transfer. Minor errors were also reported in 174 other log file analyses, some of which stemmed from false positives and unreliable results; their origins are discussed herein. It has been demonstrated that machine log file analysis is a robust, efficient, and reliable QA process capable of detecting errors originating from human mistakes, flawed planning, and data transfer problems. The probability of detecting these errors with point and planar dosimetric measurements is low. Copyright © 2013 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.

  6. Psychometric Inferences from a Meta-Analysis of Reliability and Internal Consistency Coefficients

    ERIC Educational Resources Information Center

    Botella, Juan; Suero, Manuel; Gambara, Hilda

    2010-01-01

    A meta-analysis of the reliability of the scores from a specific test, also called reliability generalization, allows the quantitative synthesis of its properties from a set of studies. It is usually assumed that part of the variation in the reliability coefficients is due to some unknown and implicit mechanism that restricts and biases the…

  7. Reliability of an Automated High-Resolution Manometry Analysis Program across Expert Users, Novice Users, and Speech-Language Pathologists

    ERIC Educational Resources Information Center

    Jones, Corinne A.; Hoffman, Matthew R.; Geng, Zhixian; Abdelhalim, Suzan M.; Jiang, Jack J.; McCulloch, Timothy M.

    2014-01-01

    Purpose: The purpose of this study was to investigate inter- and intrarater reliability among expert users, novice users, and speech-language pathologists with a semiautomated high-resolution manometry analysis program. We hypothesized that all users would have high intrarater reliability and high interrater reliability. Method: Three expert…

  8. A Reliability Generalization Study of the Marlowe-Crowne Social Desirability Scale.

    ERIC Educational Resources Information Center

    Beretvas, S. Natasha; Meyers, Jason L.; Leite, Walter L.

    2002-01-01

    Conducted a reliability generalization study of the Marlowe-Crowne Social Desirability Scale (D. Crowne and D. Marlowe, 1960). Analysis of 93 studies showed that the predicted score reliability for male adolescents was 0.53 and that reliability for men's responses was lower than for women's. Discusses the need for further analysis of the scale. (SLD)

  9. Using archetypes to create user panels for usability studies: Streamlining focus groups and user studies.

    PubMed

    Stavrakos, S-K; Ahmed-Kristensen, S; Goldman, T

    2016-09-01

    Designers of products such as headphones stress the importance of comfort at the conceptual phase, for example by executing comfort studies, and therefore need a reliable user panel. This paper proposes a methodology for assembling a reliable user panel that represents large populations and validates the proposed framework for predicting comfort factors such as physical fit. Data from 200 heads were analyzed by cluster analysis, and 9 archetypal people were identified from a 200-person ear database. The archetypes were validated by comparing the archetypes' responses on physical fit against those of 20 participants interacting with 6 headsets. This paper suggests a new method of selecting representative user samples for prototype testing, in contrast to costly and time-consuming methods that rely on analyzing the human geometry of large populations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. The comet assay for the evaluation of genotoxic potential of landfill leachate.

    PubMed

    Widziewicz, Kamila; Kalka, Joanna; Skonieczna, Magdalena; Madej, Paweł

    2012-01-01

    Genotoxic assessment of landfill leachate before and after biological treatment was conducted with two human cell lines (Me45 and NHDF) and Daphnia magna somatic cells. The alkaline version of the comet assay was used to examine the genotoxicity of the leachate by DNA strand-break analysis and its repair dynamics. The leachate samples were collected from the Zabrze landfill, situated in the Upper Silesian Industrial District, Poland. Statistically significant differences (Kruskal-Wallis ANOVA rank model) were observed between DNA strand breaks in cells incubated with leachate before and after treatment (P < 0.001). Nonparametric Friedman ANOVA confirmed a time- and concentration-dependent cell response to the leachate. Examinations of chemical properties showed a marked decrease in leachate parameters after treatment, which correlated with reduced genotoxicity towards the tested cells. The obtained results demonstrate that biological cotreatment of leachate together with municipal wastewater is an efficient method for reducing its genotoxic potential; however, the treated leachate still possessed a genotoxic character.

  11. The Comet Assay for the Evaluation of Genotoxic Potential of Landfill Leachate

    PubMed Central

    Widziewicz, Kamila; Kalka, Joanna; Skonieczna, Magdalena; Madej, Paweł

    2012-01-01

    Genotoxic assessment of landfill leachate before and after biological treatment was conducted with two human cell lines (Me45 and NHDF) and Daphnia magna somatic cells. The alkaline version of the comet assay was used to examine the genotoxicity of the leachate by DNA strand-break analysis and its repair dynamics. The leachate samples were collected from the Zabrze landfill, situated in the Upper Silesian Industrial District, Poland. Statistically significant differences (Kruskal-Wallis ANOVA rank model) were observed between DNA strand breaks in cells incubated with leachate before and after treatment (P < 0.001). Nonparametric Friedman ANOVA confirmed a time- and concentration-dependent cell response to the leachate. Examinations of chemical properties showed a marked decrease in leachate parameters after treatment, which correlated with reduced genotoxicity towards the tested cells. The obtained results demonstrate that biological cotreatment of leachate together with municipal wastewater is an efficient method for reducing its genotoxic potential; however, the treated leachate still possessed a genotoxic character. PMID:22666120

  12. 16 CFR § 1500.4 - Human experience with hazardous substances.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 16 Commercial Practices 2 2013-01-01 2013-01-01 false Human experience with hazardous substances... REGULATIONS § 1500.4 Human experience with hazardous substances. (a) Reliable data on human experience with... data, the human experience takes precedence. (b) Experience may show that an article is more or less...

  13. Nuclear Thermal Propulsion Mars Mission Systems Analysis and Requirements Definition

    NASA Technical Reports Server (NTRS)

    Mulqueen, Jack; Chiroux, Robert C.; Thomas, Dan; Crane, Tracie

    2007-01-01

    This paper describes the Mars transportation vehicle design concepts developed by the Marshall Space Flight Center (MSFC) Advanced Concepts Office. These vehicle design concepts provide an indication of the most demanding and least demanding potential requirements for nuclear thermal propulsion systems for human Mars exploration missions from the years 2025 to 2035. Vehicle concept options vary from large "all-up" vehicle configurations that would transport all of the elements for a Mars mission on one vehicle, to "split" mission vehicle configurations that would consist of separate smaller vehicles transporting cargo elements and human crew elements to Mars separately. Parametric trades and sensitivity studies show NTP stage and engine design options that provide the best balanced set of metrics based on safety, reliability, performance, cost, and mission objectives. Trade studies include the sensitivity of vehicle performance to nuclear engine characteristics such as thrust, specific impulse, and nuclear reactor type. The associated system requirements are aligned with the NASA Exploration Systems Mission Directorate (ESMD) reference Mars mission as described in the Exploration Systems Architecture Study (ESAS) report. The focused trade studies include a detailed analysis of nuclear engine radiation shield requirements for human missions and an analysis of nuclear thermal engine design options for the ESAS reference mission.

  14. Analysis of malondialdehyde in human plasma samples through derivatization with 2,4-dinitrophenylhydrazine by ultrasound-assisted dispersive liquid-liquid microextraction-GC-FID approach.

    PubMed

    Malaei, Reyhane; Ramezani, Amir M; Absalan, Ghodratollah

    2018-05-04

    A sensitive and reliable ultrasound-assisted dispersive liquid-liquid microextraction (UA-DLLME) procedure was developed and validated for the extraction and analysis of malondialdehyde (MDA), an important lipid-peroxidation biomarker, in human plasma. To achieve an applicable extraction procedure, the whole optimization process was performed in human plasma. To convert MDA into a readily extractable species, it was derivatized to a hydrazone by 2,4-dinitrophenylhydrazine (DNPH) at 40 °C within 60 min. The influences of experimental variables on the extraction process, including the type and volume of the extraction and disperser solvents, amount of derivatization agent, temperature, pH, ionic strength, and sonication and centrifugation times, were evaluated. Under the optimal experimental conditions, the enhancement factor and extraction recovery were 79.8 and 95.8%, respectively. The analytical signal responded linearly (R² = 0.9988) over a concentration range of 5.00-4000 ng mL⁻¹, with a limit of detection of 0.75 ng mL⁻¹ (S/N = 3) in the plasma sample. To validate the developed procedure, the recommended Food and Drug Administration guidelines for bioanalytical analysis were employed. Copyright © 2018. Published by Elsevier B.V.

  15. The effectiveness of web-based, multimedia tutorials for teaching methods of human body composition analysis.

    PubMed

    Buzzell, Paul R; Chamberlain, Valerie M; Pintauro, Stephen J

    2002-12-01

    This study examined the effectiveness of a series of Web-based, multimedia tutorials on methods of human body composition analysis. Tutorials were developed around four body composition topics: hydrodensitometry (underwater weighing), dual-energy X-ray absorptiometry, bioelectrical impedance analysis, and total body electrical conductivity. Thirty-two students enrolled in the course were randomly assigned to learn the material through either the Web-based tutorials only ("Computer"), a traditional lecture format ("Lecture"), or lectures supplemented with Web-based tutorials ("Both"). All students were administered a validated pretest before randomization and an identical posttest at the completion of the course. The reliability of the test was 0.84. The mean score changes from pretest to posttest were not significantly different among the groups (65.4 ± 17.31, 78.82 ± 21.50, and 76 ± 21.22 for the Computer, Both, and Lecture groups, respectively). Additionally, a Likert-type assessment found equally positive attitudes toward all three formats. The results indicate that Web-based tutorials are as effective as the traditional lecture format for teaching these topics.

  16. Quantitative PCR for Genetic Markers of Human Fecal Pollution

    EPA Science Inventory

    Assessment of health risk and fecal bacteria loads associated with human fecal pollution requires reliable host-specific analytical methods and a rapid quantification approach. We report the development of quantitative PCR assays for quantification of two recently described human-...

  17. Observing Consistency in Online Communication Patterns for User Re-Identification

    PubMed Central

    Venter, Hein S.

    2016-01-01

    Comprehension of the statistical and structural mechanisms governing human dynamics in online interaction plays a pivotal role in online user identification, online profile development, and recommender systems. However, building a characteristic model of human dynamics on the Internet involves a complete analysis of the variations in human activity patterns, which is a complex process. This complexity is inherent in human dynamics and has not been extensively studied to reveal the structural composition of human behavior. A typical method of anatomizing such a complex system is viewing all independent interconnectivity that constitutes the complexity. An examination of the various dimensions of human communication pattern in online interactions is presented in this paper. The study employed reliable server-side web data from 31 known users to explore characteristics of human-driven communications. Various machine-learning techniques were explored. The results revealed that each individual exhibited a relatively consistent, unique behavioral signature and that the logistic regression model and model tree can be used to accurately distinguish online users. These results are applicable to one-to-one online user identification processes, insider misuse investigation processes, and online profiling in various areas. PMID:27918593

  18. PopHuman: the human population genomics browser.

    PubMed

    Casillas, Sònia; Mulet, Roger; Villegas-Mirón, Pablo; Hervas, Sergi; Sanz, Esteve; Velasco, Daniel; Bertranpetit, Jaume; Laayouni, Hafid; Barbadilla, Antonio

    2018-01-04

    The 1000 Genomes Project (1000GP) represents the most comprehensive world-wide nucleotide variation data set in humans so far, providing the sequencing and analysis of 2504 genomes from 26 populations and reporting >84 million variants. The availability of this sequence data provides an invaluable resource for human population genomics studies, allowing the testing of molecular population genetics hypotheses and eventually the understanding of the evolutionary dynamics of genetic variation in human populations. Here we present PopHuman, a new population genomics-oriented genome browser based on JBrowse that allows the interactive visualization and retrieval of an extensive inventory of population genetics metrics. Efficient and reliable parameter estimates have been computed using a novel pipeline that addresses the unique features and limitations of the 1000GP data, and include a battery of nucleotide variation measures, divergence and linkage disequilibrium parameters, as well as different tests of neutrality, estimated in non-overlapping windows along the chromosomes and in annotated genes for all 26 populations of the 1000GP. PopHuman is open and freely available at http://pophuman.uab.cat. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  19. Prediction of Human Disease Genes by Human-Mouse Conserved Coexpression Analysis

    PubMed Central

    Grassi, Elena; Damasco, Christian; Silengo, Lorenzo; Oti, Martin; Provero, Paolo; Di Cunto, Ferdinando

    2008-01-01

    Background Even in the post-genomic era, the identification of candidate genes within loci associated with human genetic diseases is a very demanding task, because the critical region may typically contain hundreds of positional candidates. Since genes implicated in similar phenotypes tend to share very similar expression profiles, high throughput gene expression data may represent a very important resource to identify the best candidates for sequencing. However, so far, gene coexpression has not been used very successfully to prioritize positional candidates. Methodology/Principal Findings We show that it is possible to reliably identify disease-relevant relationships among genes from massive microarray datasets by concentrating only on genes sharing similar expression profiles in both human and mouse. Moreover, we show systematically that the integration of human-mouse conserved coexpression with a phenotype similarity map allows the efficient identification of disease genes in large genomic regions. Finally, using this approach on 850 OMIM loci characterized by an unknown molecular basis, we propose high-probability candidates for 81 genetic diseases. Conclusion Our results demonstrate that conserved coexpression, even at the human-mouse phylogenetic distance, represents a very strong criterion to predict disease-relevant relationships among human genes. PMID:18369433

  20. Multiplexed Analysis of Serum Breast and Ovarian Cancer Markers by Means of Suspension Bead-quantum Dot Microarrays

    NASA Astrophysics Data System (ADS)

    Brazhnik, Kristina; Sokolova, Zinaida; Baryshnikova, Maria; Bilan, Regina; Nabiev, Igor; Sukhanova, Alyona

    Multiplexed analysis of cancer markers is crucial for early tumor diagnosis and screening. We have designed a lab-on-a-bead microarray for the quantitative detection of three breast cancer markers in human serum. Quantum dots were used as bead-bound fluorescent tags for identifying each marker by means of flow cytometry. Antigen-specific beads reliably detected CA 15-3, CEA, and CA 125 in serum samples, providing clear discrimination between the samples with respect to the antigen levels. The novel microarray is advantageous over routine single-analyte assays due to the simultaneous detection of various markers. Therefore, the developed microarray is a promising tool for serum tumor marker profiling.

  1. Preliminary Results Obtained in Integrated Safety Analysis of NASA Aviation Safety Program Technologies

    NASA Technical Reports Server (NTRS)

    2001-01-01

    This is a listing of recent unclassified RTO technical publications processed by the NASA Center for AeroSpace Information from January 1, 2001 through March 31, 2001 available on the NASA Aeronautics and Space Database. Contents include 1) Cognitive Task Analysis; 2) RTO Educational Notes; 3) The Capability of Virtual Reality to Meet Military Requirements; 4) Aging Engines, Avionics, Subsystems and Helicopters; 5) RTO Meeting Proceedings; 6) RTO Technical Reports; 7) Low Grazing Angle Clutter...; 8) Verification and Validation Data for Computational Unsteady Aerodynamics; 9) Space Observation Technology; 10) The Human Factor in System Reliability...; 11) Flight Control Design...; 12) Commercial Off-the-Shelf Products in Defense Applications.

  2. Sampling and analysis techniques for monitoring serum for trace elements.

    PubMed

    Ericson, S P; McHalsky, M L; Rabinow, B E; Kronholm, K G; Arceo, C S; Weltzer, J A; Ayd, S W

    1986-07-01

    We describe techniques for controlling contamination in the sampling and analysis of human serum for trace metals. The relatively simple procedures do not require clean-room conditions. The atomic absorption and atomic emission methods used have been applied in studying zinc, copper, chromium, manganese, molybdenum, selenium, and aluminum concentrations. Values obtained for a group of 16 normal subjects agree with the most reliable values reported in the literature, obtained by much more elaborate techniques. All of these metals can be measured in 3 to 4 mL of serum. The methods may prove especially useful in monitoring concentrations of essential trace elements in blood of patients being maintained on total parenteral nutrition.

  3. Manned Mars Mission program concepts

    NASA Technical Reports Server (NTRS)

    Hamilton, E. C.; Johnson, P.; Pearson, J.; Tucker, W.

    1988-01-01

    This paper describes the SRS Manned Mars Mission and Program Analysis study designed to support a manned expedition to Mars contemplated by NASA for the purposes of initiating human exploration and eventual habitation of this planet. The capabilities of the interactive software package being presently developed by the SRS for the mission/program analysis are described, and it is shown that the interactive package can be used to investigate the impact of various mission concepts on the sensitivity of mass required in LEO, schedules, relative costs, and risk. The results, to date, indicate the need for an earth-to-orbit transportation system much larger than the present STS, reliable long-life support systems, and either advanced propulsion or aerobraking technology.

  4. Reliability of videotaped observational gait analysis in patients with orthopedic impairments

    PubMed Central

    Brunnekreef, Jaap J; van Uden, Caro JT; van Moorsel, Steven; Kooloos, Jan GM

    2005-01-01

    Background In clinical practice, visual gait observation is often used to determine gait disorders and to evaluate treatment. Several reliability studies on observational gait analysis have been described in the literature and generally showed moderate reliability. However, patients with orthopedic disorders have received little attention. The objective of this study is to determine the reliability levels of visual observation of gait in patients with orthopedic disorders. Methods The gait of thirty patients referred to a physical therapist for gait treatment was videotaped. Ten raters, 4 experienced, 4 inexperienced and 2 experts, individually evaluated these videotaped gait patterns of the patients twice, by using a structured gait analysis form. Reliability levels were established by calculating the Intraclass Correlation Coefficient (ICC), using a two-way random design and based on absolute agreement. Results The inter-rater reliability among experienced raters (ICC = 0.42; 95%CI: 0.38–0.46) was comparable to that of the inexperienced raters (ICC = 0.40; 95%CI: 0.36–0.44). The expert raters reached a higher inter-rater reliability level (ICC = 0.54; 95%CI: 0.48–0.60). The average intra-rater reliability of the experienced raters was 0.63 (ICCs ranging from 0.57 to 0.70). The inexperienced raters reached an average intra-rater reliability of 0.57 (ICCs ranging from 0.52 to 0.62). The two expert raters attained ICC values of 0.70 and 0.74 respectively. Conclusion Structured visual gait observation by use of a gait analysis form as described in this study was found to be moderately reliable. Clinical experience appears to increase the reliability of visual gait analysis. PMID:15774012
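    The reliability coefficient used in the study above, an ICC from a two-way random design based on absolute agreement, comes from a standard ANOVA decomposition. Below is a plain-Python sketch of the single-rater form, ICC(2,1); it is an illustration of the formula, not the study's actual computation, and statistical packages are the usual choice in practice.

```python
def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `scores` is a list of rows, one per subject, each holding k rater scores.
    Computed from the standard ANOVA mean squares.
    """
    n, k = len(scores), len(scores[0])
    grand = sum(map(sum, scores)) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(row[j] for row in scores) / n for j in range(k)]

    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between raters
    ss_err = ss_total - ss_rows - ss_cols

    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Perfect agreement between two raters yields an ICC of 1.0.
print(icc_2_1([[4, 4], [2, 2], [5, 5], [3, 3]]))  # → 1.0
```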

  5. Enhancing healthcare process design with human factors engineering and reliability science, part 1: setting the context.

    PubMed

    Boston-Fleischhauer, Carol

    2008-01-01

    The design and implementation of efficient, effective, and safe processes are never-ending challenges in healthcare. Less than optimal performance levels and rising concerns about patient safety suggest that traditional process design methods are insufficient to meet design requirements. In this 2-part series, the author presents human factors engineering and reliability science as important knowledge for enhancing existing operational and clinical process design methods in healthcare. An examination of these theories, application approaches, and examples is presented.

  6. Reliability generalization study of the Yale-Brown Obsessive-Compulsive Scale for children and adolescents.

    PubMed

    López-Pina, José Antonio; Sánchez-Meca, Julio; López-López, José Antonio; Marín-Martínez, Fulgencio; Núñez-Núñez, Rosa Ma; Rosa-Alcázar, Ana I; Gómez-Conesa, Antonia; Ferrer-Requena, Josefa

    2015-01-01

    The Yale-Brown Obsessive-Compulsive Scale for children and adolescents (CY-BOCS) is a frequently applied test to assess obsessive-compulsive symptoms. We conducted a reliability generalization meta-analysis on the CY-BOCS to estimate the average reliability, search for reliability moderators, and propose a predictive model that researchers and clinicians can use to estimate the expected reliability of CY-BOCS scores. A total of 47 studies reporting a reliability coefficient with the data at hand were included in the meta-analysis. The results showed good reliability and a large variability associated with the standard deviation of total scores and sample size.

  7. Trades Between Opposition and Conjunction Class Trajectories for Early Human Missions to Mars

    NASA Technical Reports Server (NTRS)

    Mattfeld, Bryan; Stromgren, Chel; Shyface, Hilary; Komar, David R.; Cirillo, William; Goodliff, Kandyce

    2014-01-01

    Candidate human missions to Mars, including NASA's Design Reference Architecture 5.0, have focused on conjunction-class missions with long crewed durations and minimum energy trajectories to reduce total propellant requirements and total launch mass. However, in order to progressively reduce risk and gain experience in interplanetary mission operations, it may be desirable that initial human missions to Mars, whether to the surface or to Mars orbit, have shorter total crewed durations and minimal stay times at the destination. Opposition-class missions have larger total energy requirements than conjunction-class missions but offer the potential for much shorter mission durations, potentially reducing risk and overall systems performance requirements. This paper will present a detailed comparison of conjunction-class and opposition-class human missions to the Mars vicinity, with a focus on how such missions could be integrated into the initial phases of a Mars exploration campaign. The paper will present the results of a trade study that integrates trajectory/propellant analysis, element design, logistics and sparing analysis, and risk assessment to produce a comprehensive comparison of opposition and conjunction exploration mission constructs. Included in the trade study are an assessment of the risk to the crew and the trade-offs between mission duration and element, logistics, and spares mass. The analysis of the mission trade space was conducted using four simulation and analysis tools developed by NASA. Trajectory analyses for Mars destination missions were conducted using VISITOR (Versatile ImpulSive Interplanetary Trajectory OptimizeR), an in-house tool developed by NASA Langley Research Center. Architecture elements were evaluated using the EXploration Architecture Model for IN-space and Earth-to-orbit (EXAMINE), a parametric modeling tool that generates exploration architectures through an integrated systems model.
Logistics analysis was conducted using NASA's Human Exploration Logistics Model (HELM), and sparing allocation predictions were generated via the Exploration Maintainability Analysis Tool (EMAT), which is a probabilistic simulation engine that evaluates trades in spacecraft reliability and sparing requirements based on spacecraft system maintainability and reparability.

  8. Dating the origin and dispersal of Human Papillomavirus type 16 on the basis of ancestral human migrations.

    PubMed

    Zehender, Gianguglielmo; Frati, Elena Rosanna; Martinelli, Marianna; Bianchi, Silvia; Amendola, Antonella; Ebranati, Erika; Ciccozzi, Massimo; Galli, Massimo; Lai, Alessia; Tanzi, Elisabetta

    2016-04-01

    A major limitation when reconstructing the origin and evolution of HPV-16 is the lack of reliable substitution rate estimates for the viral genes. On the basis of the hypothesis of human and HPV-16 co-divergence, we estimated a mean evolutionary rate of 1.47×10⁻⁷ (95% HPD = 0.64-2.47×10⁻⁷) subs/site/year for the viral LCR region. The results of a Bayesian phylogeographical analysis suggest that the currently circulating HPV-16 most probably originated in Africa about 110 thousand years ago (Kya), before giving rise to four known geographical lineages: the Asian/European lineage, which most probably originated in Asia a mean 38 Kya, and the Asian/American and two African lineages, which probably originated about 33 and 27 Kya, respectively. These data closely reflect current hypotheses concerning modern human expansion based on studies of mitochondrial DNA phylogeny. The correlation between ancient human migration and the present HPV phylogeny may be explained by the co-existence of modes of transmission other than sexual transmission. Copyright © 2016. Published by Elsevier B.V.
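    The arithmetic linking a substitution rate to a divergence time can be illustrated with the classic molecular-clock relation d = 2μt (each lineage accumulates half the pairwise differences). The distance value below is invented purely to match the quoted ~110 Kya figure; it is not data from the study, which used a Bayesian phylogeographic model rather than this back-of-the-envelope formula.

```python
# Mean LCR substitution rate quoted in the abstract (subs/site/year).
RATE = 1.47e-7

def divergence_time_years(pairwise_distance):
    """Molecular-clock estimate: t = d / (2 * mu) for two diverging lineages."""
    return pairwise_distance / (2 * RATE)

# A hypothetical pairwise distance of ~3.2% of sites gives roughly 110 Kya.
print(round(divergence_time_years(0.0323) / 1000))  # → 110
```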

  9. Alternative Methods for Calculating Intercoder Reliability in Content Analysis: Kappa, Weighted Kappa and Agreement Charts Procedures.

    ERIC Educational Resources Information Center

    Kang, Namjun

    If content analysis is to satisfy the requirement of objectivity, measures and procedures must be reliable. Reliability is usually measured by the proportion of agreement of all categories identically coded by different coders. For such data to be empirically meaningful, a high degree of inter-coder reliability must be demonstrated. Researchers in…
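    The basic (unweighted) kappa statistic named in the title can be sketched in a few lines; this is a minimal illustration of the standard formula, not the record's own procedures, and weighted kappa and agreement charts are not shown.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: chance-corrected agreement between two coders.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of
    agreement and p_e is the agreement expected by chance from each coder's
    category marginals.
    """
    n = len(coder_a)
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two coders agree on 3 of 4 items; chance agreement is 0.5, so kappa = 0.5.
print(cohens_kappa(["a", "a", "b", "b"], ["a", "a", "b", "a"]))  # → 0.5
```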

  10. Overview of RICOR's reliability theoretical analysis, accelerated life demonstration test results and verification by field data

    NASA Astrophysics Data System (ADS)

    Vainshtein, Igor; Baruch, Shlomi; Regev, Itai; Segal, Victor; Filis, Avishai; Riabzev, Sergey

    2018-05-01

    The growing demand for EO applications that work around the clock, 24 hours a day, 7 days a week, such as border surveillance systems, emphasizes the need for a highly reliable cryocooler with increased operational availability and optimized system-level Integrated Logistic Support (ILS). To meet this need, RICOR developed linear and rotary cryocoolers that successfully achieved this goal. Cryocooler MTTF was analyzed by theoretical reliability evaluation methods, demonstrated by normal and accelerated life tests at the cryocooler level, and finally verified by field data analysis derived from cryocoolers operating at the system level. The following paper reviews theoretical reliability analysis methods together with reliability test results derived from standard and accelerated life demonstration tests performed at RICOR's advanced reliability laboratory. To summarize the work process, reliability verification data will be presented as feedback from fielded systems.

  11. Robot decisions: on the importance of virtuous judgment in clinical decision making.

    PubMed

    Gelhaus, Petra

    2011-10-01

    The aim of this article is to argue for the necessity of emotional professional virtues in the understanding of good clinical practice. This understanding is required for a proper balance of capacities in the medical education and further education of physicians. For this reason, an ideal physician, incarnating the required virtues, skills and knowledge, is compared with a non-emotional robot that is bound to moral rules. This fictive confrontation is meant to clarify why certain demands on the personality of the physician are justified, in addition to a rule- and principle-based moral orientation and biomedical knowledge and skills. Philosophical analysis of thought experiments inspired by the science fiction literature of Isaac Asimov. Although prima facie a rule-oriented robot seems more reliable and trustworthy, the complexity of clinical judgment is not met by an encompassing and never contradictory set of rules from which one could logically derive decisions. There are different ways in which the robot could still work, but at the cost of the predictability of its behaviour and its moral orientation. In comparison, a virtuous human doctor who is also bound to these rules, although less strictly, will more reliably keep to moral objectives, be understandable, be more flexible where the rules reach their limits, and be more predictable in these critical situations. Apart from these advantages of the virtuous human doctor with respect to her own person, the most problematic deficit of the robot is its lack of a deeper understanding of the inner mental events of patients, which makes good contact, good communication and good influence impossible. Although an infallibly rule-oriented robot seems more reliable at first view, in situations that require complex decisions, as in clinical practice, the agency of a moral human person is more trustworthy. Furthermore, the understanding of the patient's emotions must remain insufficient for a non-emotional, non-human being. 
Because these are crucial preconditions for good clinical practice, enough attention should be given to develop these virtues and emotional skills, in addition to the usual attention on knowledge, technical skills and the obedience to moral rules and principles. © 2011 Blackwell Publishing Ltd.

  12. Aping humans: age and sex effects in chimpanzee (Pan troglodytes) and human (Homo sapiens) personality.

    PubMed

    King, James E; Weiss, Alexander; Sisco, Melissa M

    2008-11-01

    Ratings of 202 chimpanzees on 43 personality descriptor adjectives were used to calculate scores on five domains analogous to the human Five-Factor Model and a chimpanzee-specific Dominance domain. Male and female chimpanzees were divided into five age groups ranging from juvenile to old adult. Internal consistencies and interrater reliabilities of factors were stable across age groups and approximately 6.8 year retest reliabilities were high. Age-related declines in Extraversion and Openness and increases in Agreeableness and Conscientiousness paralleled human age differences. The mean change in absolute standardized units for all five factors was virtually identical in humans and chimpanzees after adjustment for different developmental rates. Consistent with their aggressive behavior in the wild, male chimpanzees were rated as more aggressive, emotional, and impulsive than females. Chimpanzee sex differences in personality were greater than comparable human gender differences. These findings suggest that chimpanzee and human personality develop via an unfolding maturational process. (PsycINFO Database Record (c) 2008 APA, all rights reserved).

  13. Proceedings of the NATO-Advanced Study Institute on Computer Aided Analysis of Rigid and Flexible Mechanical Systems Held in Troia, Portugal on June 27-July 9, 1993. Volume 1. Main Lectures

    DTIC Science & Technology

    1993-07-09

    real-time simulation capabilities, highly non-linear control devices, work space path planning, active control of machine flexibilities and reliability...P.M., "The Information Capacity of the Human Motor System in Controlling the Amplitude of Movement," Journal of Experimental Psychology, Vol 47, No...driven many research groups in the challenging problem of flexible systems with an increasing interaction with finite element methodologies. Basic

  14. Analysis of Food Hub Commerce and Participation Using Agent-Based Modeling: Integrating Financial and Social Drivers.

    PubMed

    Krejci, Caroline C; Stone, Richard T; Dorneich, Michael C; Gilbert, Stephen B

    2016-02-01

    Factors influencing long-term viability of an intermediated regional food supply network (food hub) were modeled using agent-based modeling techniques informed by interview data gathered from food hub participants. Previous analyses of food hub dynamics focused primarily on financial drivers rather than social factors and have not used mathematical models. Based on qualitative and quantitative data gathered from 22 customers and 11 vendors at a Midwestern food hub, an agent-based model (ABM) was created with distinct consumer personas characterizing the range of consumer priorities. A comparison study determined whether the ABM behaved differently from a model based on traditional economic assumptions. Further simulation studies assessed the effect of changes in parameters, such as producer reliability and the consumer profiles, on long-term food hub sustainability. The persona-based ABM produced different and more resilient results than the more traditional way of modeling consumers. Reduced producer reliability significantly reduced trade; in some instances, a modest reduction in reliability threatened the sustainability of the system. Finally, a modest increase in price-driven consumers at the outset of the simulation quickly resulted in those consumers becoming a majority of the overall customer base. Results suggest that social factors, such as the desire to support the community, can be more important than financial factors. An ABM of food hub dynamics, based on human factors data gathered from the field, can be a useful tool for policy decisions. Similar approaches can be used for modeling customer dynamics in other sustainable organizations. © 2015, Human Factors and Ergonomics Society.

  15. Citizen science: A new perspective to advance spatial pattern evaluation in hydrology.

    PubMed

    Koch, Julian; Stisen, Simon

    2017-01-01

    Citizen science opens new pathways that can complement traditional scientific practice. Intuition and reasoning often make humans more effective than computer algorithms in various realms of problem solving. In particular, simple visual comparison of spatial patterns is a task in which humans are often considered more reliable than computer algorithms. In practice, however, science still largely depends on computer-based solutions, which bring benefits such as speed and the possibility of automating processes. Human vision can nevertheless be harnessed to evaluate the reliability of algorithms tailored to quantify similarity in spatial patterns. We established a citizen science project that employs human perception to rate similarity and dissimilarity between simulated spatial patterns from several scenarios of a hydrological catchment model. In total, more than 2500 volunteers provided over 43,000 classifications of 1095 individual subjects. We investigate the capability of a set of advanced statistical performance metrics to mimic the human perception of similarity and dissimilarity. Results suggest that more complex metrics are not necessarily better at emulating human perception, but they clearly provide auxiliary information that is valuable for model diagnostics. The metrics differ markedly in their ability to unambiguously distinguish between similar and dissimilar patterns, which is regarded as a key feature of a reliable metric. The obtained dataset can provide an insightful benchmark for the community to test novel spatial metrics.
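    The kind of pixel-wise similarity metric that such a study benchmarks against human ratings can be illustrated with a minimal sketch. The metric choice (Pearson's r over flattened grids) and the toy patterns below are hypothetical, not the metrics or data of the study:

    ```python
    def pearson(a, b):
        # Flatten two equally sized 2-D grids and compute Pearson's r.
        xs = [v for row in a for v in row]
        ys = [v for row in b for v in row]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / (vx * vy) ** 0.5

    # Two hypothetical 3x3 simulated spatial patterns.
    sim_a = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6], [0.7, 0.8, 0.9]]
    sim_b = [[0.2, 0.2, 0.4], [0.4, 0.6, 0.6], [0.6, 0.9, 0.8]]
    print(round(pearson(sim_a, sim_b), 3))
    ```

    A pattern compared with itself scores r = 1; the study's point is that a single scalar of this kind may agree or disagree with what human raters call "similar".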

  16. Soft error evaluation and vulnerability analysis in Xilinx Zynq-7010 system-on chip

    NASA Astrophysics Data System (ADS)

    Du, Xuecheng; He, Chaohui; Liu, Shuhuan; Zhang, Yao; Li, Yonghong; Xiong, Ceng; Tan, Pengkang

    2016-09-01

    Radiation-induced soft errors are an increasingly important threat to the reliability of modern electronic systems. To evaluate the reliability and soft-error behavior of a system-on-chip, the fault tree analysis method was used in this work. The system fault tree was constructed for the Xilinx Zynq-7010 All Programmable SoC, and the soft error rates of different components in the Zynq-7010 SoC were measured using an americium-241 alpha radiation source. Parameters used to evaluate the system's reliability and safety, such as failure rate, unavailability, and mean time to failure (MTTF), were calculated using Isograph Reliability Workbench 11.0. Through qualitative and quantitative fault tree analysis of the system-on-chip, the critical blocks were identified and the system reliability was evaluated.
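    For a series system with constant failure rates, the quantities named in the abstract follow from textbook formulas: component rates add, MTTF is the reciprocal of the total rate, and steady-state unavailability is λ/(λ+μ). A minimal sketch, with entirely hypothetical rates (these are not Zynq-7010 measurements):

    ```python
    # Hypothetical per-component failure rates, in failures per hour.
    lam = {"ps": 2e-6, "pl": 5e-6, "ocm": 1e-6}

    lam_total = sum(lam.values())      # series system: any failure fails the system
    mttf = 1.0 / lam_total             # mean time to failure, hours

    mu = 0.5                           # hypothetical repair rate, per hour
    unavail = lam_total / (lam_total + mu)   # steady-state unavailability

    print(mttf)      # → 125000.0 hours for these rates
    print(unavail)
    ```

    Tools like the reliability workbench cited in the abstract evaluate the same quantities over the full fault tree, including non-series gate logic.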

  17. The Stability and Validity of Automated Vocal Analysis in Preverbal Preschoolers With Autism Spectrum Disorder

    PubMed Central

    Woynaroski, Tiffany; Oller, D. Kimbrough; Keceli-Kaysili, Bahar; Xu, Dongxin; Richards, Jeffrey A.; Gilkerson, Jill; Gray, Sharmistha; Yoder, Paul

    2017-01-01

    Theory and research suggest that vocal development predicts “useful speech” in preschoolers with autism spectrum disorder (ASD), but conventional methods for measurement of vocal development are costly and time consuming. This longitudinal correlational study examines the reliability and validity of several automated indices of vocalization development relative to an index derived from human coded, conventional communication samples in a sample of preverbal preschoolers with ASD. Automated indices of vocal development were derived using software that is presently “in development” and/or only available for research purposes and using commercially available Language ENvironment Analysis (LENA) software. Indices of vocal development that could be derived using the software available for research purposes: (a) were highly stable with a single day-long audio recording, (b) predicted future spoken vocabulary to a degree that was nonsignificantly different from the index derived from conventional communication samples, and (c) continued to predict future spoken vocabulary even after controlling for concurrent vocabulary in our sample. The score derived from standard LENA software was similarly stable, but was not significantly correlated with future spoken vocabulary. Findings suggest that automated vocal analysis is a valid and reliable alternative to time intensive and expensive conventional communication samples for measurement of vocal development of preverbal preschoolers with ASD in research and clinical practice. PMID:27459107

  18. Sociotechnical attributes of safe and unsafe work systems

    PubMed Central

    Kleiner, Brian M.; Hettinger, Lawrence J.; DeJoy, David M.; Huang, Yuang-Hsiang; Love, Peter E.D.

    2015-01-01

    Theoretical and practical approaches to safety based on sociotechnical systems principles place heavy emphasis on the intersections between social–organisational and technical–work process factors. Within this perspective, work system design emphasises factors such as the joint optimisation of social and technical processes, a focus on reliable human–system performance and safety metrics as design and analysis criteria, the maintenance of a realistic and consistent set of safety objectives and policies, and regular access to the expertise and input of workers. We discuss three current approaches to the analysis and design of complex sociotechnical systems: human–systems integration, macroergonomics and safety climate. Each approach emphasises key sociotechnical systems themes, and each prescribes a more holistic perspective on work systems than do traditional theories and methods. We contrast these perspectives with historical precedents such as system safety and traditional human factors and ergonomics, and describe potential future directions for their application in research and practice. Practitioner Summary: The identification of factors that can reliably distinguish between safe and unsafe work systems is an important concern for ergonomists and other safety professionals. This paper presents a variety of sociotechnical systems perspectives on intersections between social–organisational and technology–work process factors as they impact work system analysis, design and operation. PMID:25909756

  19. Performance Analysis of IEEE 802.15.6 CSMA/CA Protocol for WBAN Medical Scenario through DTMC Model.

    PubMed

    Kumar, Vivek; Gupta, Bharat

    2016-12-01

    The newly drafted IEEE 802.15.6 standard for Wireless Body Area Networks (WBAN) targets numerous medical and non-medical applications. This short-range wireless communication standard offers ultra-low power consumption with variable data rates, from a few Kbps to Mbps, in, on, or around the human body. In this paper, a performance analysis of the carrier sense multiple access with collision avoidance (CSMA/CA) scheme of the IEEE 802.15.6 standard is presented in terms of throughput, reliability, clear channel assessment (CCA) failure probability, packet drop probability, and end-to-end delay. We have developed a discrete-time Markov chain (DTMC) model to evaluate the performance of IEEE 802.15.6 CSMA/CA under non-ideal channel conditions with saturated traffic, including node wait time and service time. We also show that as the payload length increases, the CCA failure probability increases, which lowers node reliability. In addition, we have calculated the end-to-end delay in order to characterize the node wait time caused by backoff and retransmission. A user-priority (UP) wise DTMC analysis has been performed to show the importance of the standard, especially for the medical scenario.
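    The core computation in a DTMC performance model is finding the chain's stationary distribution, from which throughput and delay metrics are derived. A minimal sketch via power iteration on a toy 3-state chain; the states and transition probabilities here are illustrative only, not the IEEE 802.15.6 backoff chain of the paper:

    ```python
    # Toy transition matrix: states idle -> backoff -> transmit (hypothetical).
    P = [
        [0.5, 0.4, 0.1],   # from idle
        [0.3, 0.5, 0.2],   # from backoff
        [0.8, 0.1, 0.1],   # from transmit
    ]

    # Power iteration: repeatedly multiply a distribution by P until it
    # converges to the stationary distribution pi with pi = pi P.
    pi = [1.0, 0.0, 0.0]
    for _ in range(200):
        pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

    print([round(p, 4) for p in pi])
    ```

    The long-run fraction of slots spent in the "transmit" state would then feed throughput expressions, and expected passage times through the backoff states feed the delay analysis.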

  20. Bringing memory fMRI to the clinic: comparison of seven memory fMRI protocols in temporal lobe epilepsy.

    PubMed

    Towgood, Karren; Barker, Gareth J; Caceres, Alejandro; Crum, William R; Elwes, Robert D C; Costafreda, Sergi G; Mehta, Mitul A; Morris, Robin G; von Oertzen, Tim J; Richardson, Mark P

    2015-04-01

    fMRI is increasingly implemented in the clinic to assess memory function. There are multiple approaches to memory fMRI, but limited data on advantages and reliability of different methods. Here, we compared effect size, activation lateralisation, and between-sessions reliability of seven memory fMRI protocols: Hometown Walking (block design), Scene encoding (block design and event-related design), Picture encoding (block and event-related), and Word encoding (block and event-related). All protocols were performed on three occasions in 16 patients with temporal lobe epilepsy (TLE). Group T-maps showed activity bilaterally in medial temporal lobe for all protocols. Using ANOVA, there was an interaction between hemisphere and seizure-onset lateralisation (P = 0.009) and between hemisphere, protocol and seizure-onset lateralisation (P = 0.002), showing that the distribution of memory-related activity between left and right temporal lobes differed between protocols and between patients with left-onset and right-onset seizures. Using voxelwise intraclass Correlation Coefficient, between-sessions reliability was best for Hometown and Scenes (block and event). The between-sessions spatial overlap of activated voxels was also greatest for Hometown and Scenes. Lateralisation of activity between hemispheres was most reliable for Scenes (block and event) and Words (event). Using receiver operating characteristic analysis to explore the ability of each fMRI protocol to classify patients as left-onset or right-onset TLE, only the Words (event) protocol achieved a significantly above-chance classification of patients at all three sessions. We conclude that Words (event) protocol shows the best combination of between-sessions reliability of the distribution of activity between hemispheres and reliable ability to distinguish between left-onset and right-onset patients. © 2015 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.

  1. Geometric classification of scalp hair for valid drug testing, 6 more reliable than 8 hair curl groups.

    PubMed

    Mkentane, K; Van Wyk, J C; Sishi, N; Gumedze, F; Ngoepe, M; Davids, L M; Khumalo, N P

    2017-01-01

    Curly hair is reported to contain a higher lipid content than straight hair, which may influence the incorporation of lipid-soluble drugs. The use of race to describe hair curl variation (Asian, Caucasian, and African) is unscientific yet common in the medical literature, including reports of drug levels in hair. This study investigated the reliability of a geometric classification of hair based on three measurements: curve diameter, curl index, and number of waves. After ethical approval and informed consent, proximal virgin hair (6 cm) sampled from the vertex of the scalp of 48 healthy volunteers was evaluated. Three raters each scored hairs from the 48 volunteers on two occasions for the 8-group and 6-group classifications. One rater applied the 6-group classification to 80 additional volunteers to further confirm the reliability of this system. The Kappa statistic was used to assess intra- and inter-rater agreement. Each rater classified 480 hairs on each occasion. No rater classified any volunteer's 10 hairs into the same group; the most frequently occurring group was used for analysis. Inter-rater agreement was poor for the 8-group classification (k = 0.418) but improved for the 6-group classification (k = 0.671). Intra-rater agreement also improved (k = 0.444 to 0.648 for 8 groups versus 0.599 to 0.836 for 6 groups); that of the single evaluator who rated all volunteers was good (k = 0.754). Although small, this is the first study to test the reliability of a geometric classification. The 6-group method is more reliable; however, a digital classification system would likely reduce operator error. A reliable, objective classification of human hair curl is long overdue, particularly with the increasing use of hair as a testing substrate for treatment compliance in medicine.
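    The kappa statistic used above corrects observed agreement for agreement expected by chance. A minimal sketch of Cohen's kappa for two raters; the curl-group assignments below are invented for illustration, not the study's data:

    ```python
    from collections import Counter

    def cohens_kappa(r1, r2):
        # Cohen's kappa for two raters labelling the same items.
        n = len(r1)
        po = sum(a == b for a, b in zip(r1, r2)) / n       # observed agreement
        c1, c2 = Counter(r1), Counter(r2)
        cats = set(c1) | set(c2)
        pe = sum(c1[c] * c2[c] for c in cats) / n ** 2     # chance agreement
        return (po - pe) / (1 - pe)

    # Hypothetical 6-group assignments (groups I-VI) from two raters.
    rater1 = ["I", "II", "II", "III", "IV", "IV", "V", "VI", "VI", "VI"]
    rater2 = ["I", "II", "III", "III", "IV", "V", "V", "VI", "VI", "V"]
    print(round(cohens_kappa(rater1, rater2), 3))   # → 0.643
    ```

    Values near 0.4 (like the 8-group result) are conventionally read as "fair to moderate" agreement, while values above 0.6 (the 6-group result) indicate "substantial" agreement.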

  2. Replicating human expertise of mechanical ventilation waveform analysis in detecting patient-ventilator cycling asynchrony using machine learning.

    PubMed

    Gholami, Behnood; Phan, Timothy S; Haddad, Wassim M; Cason, Andrew; Mullis, Jerry; Price, Levi; Bailey, James M

    2018-06-01

    - Acute respiratory failure is one of the most common problems encountered in intensive care units (ICU) and mechanical ventilation is the mainstay of supportive therapy for such patients. A mismatch between ventilator delivery and patient demand is referred to as patient-ventilator asynchrony (PVA). An important hurdle in addressing PVA is the lack of a reliable framework for continuously and automatically monitoring the patient and detecting various types of PVA. - The problem of replicating human expertise of waveform analysis for detecting cycling asynchrony (i.e., delayed termination, premature termination, or none) was investigated in a pilot study involving 11 patients in the ICU under invasive mechanical ventilation. A machine learning framework is used to detect cycling asynchrony based on waveform analysis. - A panel of five experts with experience in PVA evaluated a total of 1377 breath cycles from 11 mechanically ventilated critical care patients. The majority vote was used to label each breath cycle according to cycling asynchrony type. The proposed framework accurately detected the presence or absence of cycling asynchrony with sensitivity (specificity) of 89% (99%), 94% (98%), and 97% (93%) for delayed termination, premature termination, and no cycling asynchrony, respectively. The system showed strong agreement with human experts as reflected by the kappa coefficients of 0.90, 0.91, and 0.90 for delayed termination, premature termination, and no cycling asynchrony, respectively. - The pilot study establishes the feasibility of using a machine learning framework to provide waveform analysis equivalent to an expert human. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. Are animal models predictive for human postmortem muscle protein degradation?

    PubMed

    Ehrenfellner, Bianca; Zissler, Angela; Steinbacher, Peter; Monticelli, Fabio C; Pittner, Stefan

    2017-11-01

    A most precise determination of the postmortem interval (PMI) is a crucial aspect of forensic casework. Although diverse approaches are available to date, the high heterogeneity of cases, together with the respective postmortal changes, often limits the validity and sufficiency of many methods. Recently, a novel approach to time-since-death estimation based on the analysis of postmortal changes in muscle proteins was proposed. It is, however, necessary to improve its reliability and accuracy, especially through analysis of possible factors influencing protein degradation. This is ideally investigated in standardized animal models, which, however, require legitimization by a comparison of human and animal tissue, in this specific case of protein degradation profiles. Only if protein degradation events occur in comparable fashion in different species can respective findings be transferred from the animal model to application in humans. Therefore, samples from two frequently used animal models (mouse and pig), as well as from forensic cases with representative protein profiles of highly differing PMIs, were analyzed. Despite physical and physiological differences between the species, western blot analysis revealed similar patterns for most of the investigated proteins, and most degradation events occurred in comparable fashion. In some other aspects, however, human and animal profiles showed distinct differences. The results of this experimental series clearly indicate the great importance of comparative studies whenever animal models are considered. Although the animal models were shown to reflect the basic principles of protein degradation processes in humans, we also gained insight into the difficulties and limitations of applying the developed methodology to different mammalian species with regard to protein specificity and methodological functionality.

  4. HuMOVE: a low-invasive wearable monitoring platform in sexual medicine.

    PubMed

    Ciuti, Gastone; Nardi, Matteo; Valdastri, Pietro; Menciassi, Arianna; Basile Fasolo, Ciro; Dario, Paolo

    2014-10-01

    To investigate an accelerometer-based wearable system, named the Human Movement (HuMOVE) platform, designed to enable quantitative and continuous measurement of sexual performance with minimal invasiveness and inconvenience for users. The design, implementation, and development of HuMOVE, a wearable platform equipped with an accelerometer for monitoring inertial parameters for sexual performance assessment and diagnosis, were carried out. The system enables quantitative measurement of movement parameters during sexual intercourse, meeting the requirements of wearability, data storage, sampling rate, and interfacing methods that are fundamental for the analysis of human sexual intercourse performance. HuMOVE was validated through characterization on a controlled experimental test bench and evaluated in a human model under simulated sexual intercourse conditions. HuMOVE proved to be a robust, quantitative monitoring platform and a reliable candidate for sexual performance evaluation and diagnosis. Characterization analysis on the controlled experimental test bench demonstrated accurate correlation between the HuMOVE system and data from a reference displacement sensor. Experimental tests in the human model during simulated intercourse conditions confirmed the accuracy of the sexual performance evaluation platform and the effectiveness of the selected and derived parameters. The obtained outcomes also confirmed the project expectations in terms of usability and comfort, as evidenced by questionnaires that highlighted the low invasiveness and high acceptance of the device. To the best of our knowledge, the HuMOVE platform is the first device for human sexual performance analysis compatible with sexual intercourse; the system has the potential to be a helpful tool for physicians to accurately classify sexual disorders, such as premature or delayed ejaculation. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. Columbus safety and reliability

    NASA Astrophysics Data System (ADS)

    Longhurst, F.; Wessels, H.

    1988-10-01

    Analyses carried out to ensure Columbus reliability, availability, and maintainability, as well as operational and design safety, are summarized. Failure modes, effects, and criticality analysis is the main qualitative tool used. The main aspects studied are fault tolerance, hazard consequence control, risk minimization, human error effects, restorability, and safe-life design.

  6. Mass and Reliability System (MaRS)

    NASA Technical Reports Server (NTRS)

    Barnes, Sarah

    2016-01-01

    The Safety and Mission Assurance (S&MA) Directorate is responsible for mitigating risk, providing system safety, and lowering risk for space programs from ground to space. S&MA comprises four divisions: the Space Exploration Division (NC), the International Space Station Division (NE), the Safety & Test Operations Division (NS), and the Quality and Flight Equipment Division (NT). The interns, myself and Arun Aruljothi, will be working with the Risk & Reliability Analysis Branch under the NC Division. The mission of this division is to identify, characterize, diminish, and communicate risk by implementing an efficient and effective assurance model. The team utilizes Reliability and Maintainability (R&M) analysis and Probabilistic Risk Assessment (PRA) to ensure that decisions concerning risks are informed, that vehicles are safe and reliable, and that program/project requirements are realistic and realized. This project pertains to the Orion mission, so it is geared toward long-duration Human Space Flight programs. For space missions, payload is a critical concept; balancing what hardware can be replaced as components versus as Orbital Replacement Units (ORUs) or subassemblies is key. For this effort a database was created that combines mass and reliability data, called the Mass and Reliability System (MaRS). U.S. International Space Station (ISS) components are used as reference parts in the MaRS database. Using ISS components as a platform is beneficial because of their historical context and the environment's similarities to a space flight mission. MaRS combines several systems: the International Space Station PART database for failure data, the Vehicle Master Database (VMDB) for ORUs and components, the Maintenance & Analysis Data Set (MADS) for operation hours and other pertinent data, and the Hardware History Retrieval System (HHRS) for unit weights. MaRS is populated using a Visual Basic application. Once populated, the spreadsheet comprises information on ISS components including operation hours, random/nonrandom failures, software/hardware failures, quantity, orbital replacement units (ORUs), date of placement, unit weight, and frequency of part. The motivation for creating such a database is the development of a mass/reliability parametric model to estimate the mass required for replacement parts. Once complete, engineers working on future space flight missions will have access to mean-time-to-failure data for parts along with their mass, which will support proper decisions for long-duration space flight missions.

  7. Large-scale network integration in the human brain tracks temporal fluctuations in memory encoding performance.

    PubMed

    Keerativittayayut, Ruedeerat; Aoki, Ryuta; Sarabi, Mitra Taghizadeh; Jimura, Koji; Nakahara, Kiyoshi

    2018-06-18

    Although activation/deactivation of specific brain regions have been shown to be predictive of successful memory encoding, the relationship between time-varying large-scale brain networks and fluctuations of memory encoding performance remains unclear. Here we investigated time-varying functional connectivity patterns across the human brain in periods of 30-40 s, which have recently been implicated in various cognitive functions. During functional magnetic resonance imaging, participants performed a memory encoding task, and their performance was assessed with a subsequent surprise memory test. A graph analysis of functional connectivity patterns revealed that increased integration of the subcortical, default-mode, salience, and visual subnetworks with other subnetworks is a hallmark of successful memory encoding. Moreover, multivariate analysis using the graph metrics of integration reliably classified the brain network states into the period of high (vs. low) memory encoding performance. Our findings suggest that a diverse set of brain systems dynamically interact to support successful memory encoding. © 2018, Keerativittayayut et al.

  8. Software reliability models for fault-tolerant avionics computers and related topics

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1987-01-01

    Software reliability research is briefly described. General research topics are reliability growth models, quality of software reliability prediction, the complete monotonicity property of reliability growth, conceptual modelling of software failure behavior, assurance of ultrahigh reliability, and analysis techniques for fault-tolerant systems.

  9. Development of a probabilistic analysis methodology for structural reliability estimation

    NASA Technical Reports Server (NTRS)

    Torng, T. Y.; Wu, Y.-T.

    1991-01-01

    The novel probabilistic analysis method presented for the assessment of structural reliability combines fast convolution with an efficient structural reliability analysis. After identifying the most important point of a limit state, the method establishes a quadratic performance function, transforms the quadratic function into a linear one, and applies fast convolution. The method is applicable to problems requiring computer-intensive structural analysis. Five illustrative examples of the method's application are given.
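    The quantity such a method estimates is the probability that a limit-state (performance) function goes negative. A crude Monte Carlo sketch for a linear limit state g = R - S makes this concrete; the normal distributions and parameters are hypothetical, and the paper's fast-convolution approach exists precisely because sampling like this is too expensive for computer-intensive structural analyses:

    ```python
    import random

    random.seed(1)

    def g(r, s):
        # Limit-state function: failure when capacity R falls below load S.
        return r - s

    # Hypothetical capacity R ~ N(10, 1) and load S ~ N(7, 1).
    n = 100_000
    fails = sum(g(random.gauss(10, 1), random.gauss(7, 1)) < 0 for _ in range(n))
    print(fails / n)   # estimate of the failure probability
    ```

    For these parameters, g is N(3, sqrt(2)), so the true failure probability is about 0.017; fast-convolution and most-probable-point methods recover such values without drawing samples.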

  10. Forensic Hair Differentiation Using Attenuated Total Reflection Fourier Transform Infrared (ATR FT-IR) Spectroscopy.

    PubMed

    Manheim, Jeremy; Doty, Kyle C; McLaughlin, Gregory; Lednev, Igor K

    2016-07-01

    Hair and fibers are common forms of trace evidence found at crime scenes. The current methodology of microscopic examination of potential hair evidence lacks statistical measures of performance, and examiner identifications can be subjective. Here, attenuated total reflection (ATR) Fourier transform infrared (FT-IR) spectroscopy was used to analyze synthetic fibers and natural hairs of human, cat, and dog origin. Chemometric analysis was used to differentiate hair spectra from the three species and to predict the proper species class of unknown hairs with a high degree of certainty. A species-specific partial least squares discriminant analysis (PLSDA) model was constructed to discriminate human hair from cat and dog hairs. This model successfully distinguished the three classes and, more importantly, all human samples were correctly predicted as human. An external validation resulted in zero false positive and zero false negative assignments for the human class. From a forensic perspective, this technique would complement microscopic hair examination, not replace it. As such, the methodology provides a statistical measure of confidence for the identification of human, cat, and dog hair samples, as called for in the 2009 National Academy of Sciences report. Moreover, the approach is non-destructive and rapid, provides reliable results, and requires no sample preparation, making it of considerable value to the field of forensic science. © The Author(s) 2016.

  11. Error Propagation Analysis in the SAE Architecture Analysis and Design Language (AADL) and the EDICT Tool Framework

    NASA Technical Reports Server (NTRS)

    LaValley, Brian W.; Little, Phillip D.; Walter, Chris J.

    2011-01-01

    This report documents the capabilities of the EDICT tools for error modeling and error propagation analysis when operating with models defined in the Architecture Analysis & Design Language (AADL). We discuss our experience using the EDICT error analysis capabilities on a model of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER) architecture that uses the Reliable Optical Bus (ROBUS). Based on these experiences we draw some initial conclusions about model based design techniques for error modeling and analysis of highly reliable computing architectures.

  12. Search by photo methodology for signature properties assessment by human observers

    NASA Astrophysics Data System (ADS)

    Selj, Gorm K.; Heinrich, Daniela H.

    2015-05-01

    Reliable, low-cost, and simple methods for the assessment of signature properties for military purposes are very important. In this paper we present such an approach, which uses human observers in a search-by-photo assessment of the signature properties of generic test targets. The method logs a large number of detection times for targets recorded against relevant terrain backgrounds. The detection times were gathered from human observers searching for targets in scene images shown on a high-definition PC screen. All targets were identically located in each "search image", allowing relative comparisons (and not just ranking) of targets. To avoid biased detections, each observer searched for only one target per scene. Statistical analyses were carried out on the detection-time data: analysis of variance where the detection-time distributions for all targets satisfied normality, and non-parametric tests, such as Wilcoxon's rank test, otherwise. The new methodology allows assessment of signature properties in a reproducible, rapid, and reliable setting. Such assessments are complex because they must sort out what is relevant in a signature test without losing information of value. We believe that choosing detection time as the primary variable for comparing signature properties allows a careful and necessary inspection of observer data, as the variable is continuous rather than discrete. Our method thus stands in opposition to approaches based on detection by subsequent stepwise reductions in distance to target, or based on probability of detection.

  13. Quantifying plant colour and colour difference as perceived by humans using digital images.

    PubMed

    Kendal, Dave; Hauser, Cindy E; Garrard, Georgia E; Jellinek, Sacha; Giljohann, Katherine M; Moore, Joslin L

    2013-01-01

    Human perception of plant leaf and flower colour can influence species management. Colour and colour contrast may influence the detectability of invasive or rare species during surveys. Quantitative, repeatable measures of plant colour are required for comparison across studies and generalisation across species. We present a standard method for measuring plant leaf and flower colour traits using images taken with digital cameras. We demonstrate the method by quantifying the colour of and colour difference between the flowers of eleven grassland species near Falls Creek, Australia, as part of an invasive species detection experiment. The reliability of the method was tested by measuring the leaf colour of five residential garden shrub species in Ballarat, Australia using five different types of digital camera. Flowers and leaves had overlapping but distinct colour distributions. Calculated colour differences corresponded well with qualitative comparisons. Estimates of proportional cover of yellow flowers identified using colour measurements correlated well with estimates obtained by measuring and counting individual flowers. Digital SLR and mirrorless cameras were superior to phone cameras and point-and-shoot cameras for producing reliable measurements, particularly under variable lighting conditions. The analysis of digital images taken with digital cameras is a practicable method for quantifying plant flower and leaf colour in the field or lab. Quantitative, repeatable measurements allow for comparisons between species and generalisations across species and studies. This allows plant colour to be related to human perception and preferences and, ultimately, species management.
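    Colour differences of the kind quantified above are conventionally computed as Euclidean distances in the CIELAB space, where distances approximate perceptual differences (the CIE76 ΔE formula). A minimal sketch; the Lab triples below are hypothetical illustrations, not measurements from the study:

    ```python
    def delta_e76(lab1, lab2):
        # CIE76 colour difference: Euclidean distance in CIELAB (L*, a*, b*).
        return sum((a - b) ** 2 for a, b in zip(lab1, lab2)) ** 0.5

    # Hypothetical mean CIELAB colours for a yellow flower and a green leaf.
    flower = (80.0, 5.0, 70.0)
    leaf = (45.0, -35.0, 30.0)
    print(round(delta_e76(flower, leaf), 1))   # → 66.5
    ```

    A ΔE near 1 is roughly a just-noticeable difference for a human observer, so large values like this indicate flowers that should be easy to detect against foliage.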

  14. Quantifying Plant Colour and Colour Difference as Perceived by Humans Using Digital Images

    PubMed Central

    Kendal, Dave; Hauser, Cindy E.; Garrard, Georgia E.; Jellinek, Sacha; Giljohann, Katherine M.; Moore, Joslin L.

    2013-01-01

    Human perception of plant leaf and flower colour can influence species management. Colour and colour contrast may influence the detectability of invasive or rare species during surveys. Quantitative, repeatable measures of plant colour are required for comparison across studies and generalisation across species. We present a standard method for measuring plant leaf and flower colour traits using images taken with digital cameras. We demonstrate the method by quantifying the colour of and colour difference between the flowers of eleven grassland species near Falls Creek, Australia, as part of an invasive species detection experiment. The reliability of the method was tested by measuring the leaf colour of five residential garden shrub species in Ballarat, Australia using five different types of digital camera. Flowers and leaves had overlapping but distinct colour distributions. Calculated colour differences corresponded well with qualitative comparisons. Estimates of proportional cover of yellow flowers identified using colour measurements correlated well with estimates obtained by measuring and counting individual flowers. Digital SLR and mirrorless cameras were superior to phone cameras and point-and-shoot cameras for producing reliable measurements, particularly under variable lighting conditions. The analysis of digital images taken with digital cameras is a practicable method for quantifying plant flower and leaf colour in the field or lab. Quantitative, repeatable measurements allow for comparisons between species and generalisations across species and studies. This allows plant colour to be related to human perception and preferences and, ultimately, species management. PMID:23977275

  15. Develop Advanced Nonlinear Signal Analysis Topographical Mapping System

    NASA Technical Reports Server (NTRS)

    Jong, Jen-Yi

    1997-01-01

    During the development of the SSME, a hierarchy of advanced signal analysis techniques for mechanical signature analysis was developed by NASA and AI Signal Research Inc. (ASRI) to improve the safety and reliability of Space Shuttle operations. These techniques can process and identify intelligent information hidden in a measured signal that is often unidentifiable using conventional signal analysis methods. Currently, due to the highly interactive processing requirements and the volume of dynamic data involved, detailed diagnostic analysis is performed manually, which requires immense man-hours and extensive human interaction. To overcome this manual process, NASA implemented this program to develop an Advanced Nonlinear Signal Analysis Topographical Mapping System (ATMS) to provide automatic/unsupervised engine diagnostic capabilities. The ATMS utilizes a rule-based CLIPS expert system to supervise a hierarchy of diagnostic signature analysis techniques in the Advanced Signal Analysis Library (ASAL). ASAL performs automatic signal processing, archiving, and anomaly detection/identification tasks in order to provide an intelligent and fully automated engine diagnostic capability. The ATMS has been successfully developed under this contract. In summary, the program objectives to design, develop, test and conduct performance evaluation for an automated engine diagnostic system have been successfully achieved. Software implementation of the entire ATMS on MSFC's OISPS computer has been completed. The significance of the ATMS developed under this program lies in its fully automated coherence analysis capability for anomaly detection and identification, which can greatly enhance the power and reliability of engine diagnostic evaluation. The results have demonstrated that ATMS can save significant time and man-hours in performing engine test/flight data analysis and performance evaluation of large volumes of dynamic test data.

  16. Affective traits link to reliable neural markers of incentive anticipation.

    PubMed

    Wu, Charlene C; Samanez-Larkin, Gregory R; Katovich, Kiefer; Knutson, Brian

    2014-01-01

    While theorists have speculated that different affective traits are linked to reliable brain activity during anticipation of gains and losses, few have directly tested this prediction. We examined these associations in a community sample of healthy human adults (n = 52) as they played a Monetary Incentive Delay task while undergoing functional magnetic resonance imaging (fMRI). Factor analysis of personality measures revealed that subjects independently varied in trait Positive Arousal and trait Negative Arousal. In a subsample (n = 14) retested over 2.5 years later, left nucleus accumbens (NAcc) activity during anticipation of large gains (+$5.00) and right anterior insula activity during anticipation of large losses (-$5.00) showed significant test-retest reliability (intraclass correlations > 0.50, p's < 0.01). In the full sample (n = 52), trait Positive Arousal correlated with individual differences in left NAcc activity during anticipation of large gains, while trait Negative Arousal correlated with individual differences in right anterior insula activity during anticipation of large losses. Associations of affective traits with neural activity were not attributable to the influence of other potential confounds (including sex, age, wealth, and motion). Together, these results demonstrate selective links between distinct affective traits and reliably elicited activity in neural circuits associated with anticipation of gain versus loss. The findings thus reveal neural markers for affective dimensions of healthy personality, and potentially for related psychiatric symptoms.

  17. Affective traits link to reliable neural markers of incentive anticipation

    PubMed Central

    Wu, Charlene C.; Samanez-Larkin, Gregory R.; Katovich, Kiefer; Knutson, Brian

    2013-01-01

    While theorists have speculated that different affective traits are linked to reliable brain activity during anticipation of gains and losses, few have directly tested this prediction. We examined these associations in a community sample of healthy human adults (n = 52) as they played a Monetary Incentive Delay Task while undergoing functional magnetic resonance imaging (fMRI). Factor analysis of personality measures revealed that subjects independently varied in trait Positive Arousal and Negative Arousal. In a subsample (n = 14) retested over 2.5 years later, left nucleus accumbens (NAcc) activity during anticipation of large gains (+$5.00) and right anterior insula activity during anticipation of large losses (−$5.00) showed significant test-retest reliability (intraclass correlations > 0.50, p’s < 0.01). In the full sample (n = 52), trait Positive Arousal correlated with individual differences in left NAcc activity during anticipation of large gains, while trait Negative Arousal correlated with individual differences in right anterior insula activity during anticipation of large losses. Associations of affective traits with neural activity were not attributable to the influence of other potential confounds (including sex, age, wealth, and motion). Together, these results demonstrate selective links between distinct affective traits and reliably elicited activity in neural circuits associated with anticipation of gain versus loss. The findings thus reveal neural markers for affective dimensions of healthy personality, and potentially for related psychiatric symptoms. PMID:24001457

  18. Management of reliability and maintainability; a disciplined approach to fleet readiness

    NASA Technical Reports Server (NTRS)

    Willoughby, W. J., Jr.

    1981-01-01

    Material acquisition fundamentals were reviewed and include: mission profile definition, stress analysis, derating criteria, circuit reliability, failure modes, and worst case analysis. Military system reliability was examined with emphasis on the sparing of equipment. The Navy's organizational strategy for 1980 is presented.

  19. Reliability of resting-state microstate features in electroencephalography.

    PubMed

    Khanna, Arjun; Pascual-Leone, Alvaro; Farzan, Faranak

    2014-01-01

    Electroencephalographic (EEG) microstate analysis is a method of identifying quasi-stable functional brain states ("microstates") that are altered in a number of neuropsychiatric disorders, suggesting their potential use as biomarkers of neurophysiological health and disease. However, use of EEG microstates as neurophysiological biomarkers requires assessment of the test-retest reliability of microstate analysis. We analyzed resting-state, eyes-closed, 30-channel EEG from 10 healthy subjects over 3 sessions spaced approximately 48 hours apart. We identified four microstate classes and calculated the average duration, frequency, and coverage fraction of these microstates. Using Cronbach's α and the standard error of measurement (SEM) as indicators of reliability, we examined: (1) the test-retest reliability of microstate features using a variety of different approaches; (2) the consistency between TAAHC and k-means clustering algorithms; and (3) whether microstate analysis can be reliably conducted with 19 and 8 electrodes. The approach of identifying a single set of "global" microstate maps showed the highest reliability (mean Cronbach's α > 0.8, SEM ≈ 10% of mean values) compared to microstates derived by each session or each recording. There was notably low reliability in features calculated from maps extracted individually for each recording, suggesting that the analysis is most reliable when maps are held constant. Features were highly consistent across clustering methods (Cronbach's α > 0.9). All features had high test-retest reliability with 19 and 8 electrodes. High test-retest reliability and cross-method consistency of microstate features suggests their potential as biomarkers for assessment of the brain's neurophysiological health.
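The reliability indicator used in the record above, Cronbach's α, can be computed directly from repeated measurements. A minimal sketch follows; the "sessions" matrix is invented toy data (scores per subject per session), not values from the study:

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for k items (here: repeated sessions), each given
    as a list of per-subject scores. Uses population variances throughout;
    values above ~0.8 are conventionally read as good reliability."""
    k = len(item_scores)
    totals = [sum(scores) for scores in zip(*item_scores)]  # per-subject sums
    item_var = sum(pvariance(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Hypothetical microstate-duration measures: 2 sessions x 3 subjects.
sessions = [[1, 2, 3],
            [2, 4, 6]]
print(round(cronbach_alpha(sessions), 4))  # 0.8889
```

With real microstate features there would be one such computation per feature (duration, frequency, coverage) across the three recording sessions.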

  20. Development of a novel forensic STR multiplex for ancestry analysis and extended identity testing.

    PubMed

    Phillips, Chris; Fernandez-Formoso, Luis; Gelabert-Besada, Miguel; Garcia-Magariños, Manuel; Santos, Carla; Fondevila, Manuel; Carracedo, Angel; Lareu, Maria Victoria

    2013-04-01

    There is growing interest in developing additional DNA typing techniques to provide better investigative leads in forensic analysis. These include inference of genetic ancestry and prediction of common physical characteristics of DNA donors. To date, forensic ancestry analysis has centered on population-divergent SNPs, but these binary loci cannot reliably detect DNA mixtures, which are common in forensic samples. Furthermore, STR genotypes, which form the principal DNA profiling system, are not routinely combined with forensic SNPs to strengthen the frequency data available for ancestry inference. We report the development of a 12-STR multiplex composed of ancestry informative marker STRs (AIM-STRs) selected from 434 tetranucleotide repeat loci. We adapted Snipper, our online Bayesian classifier for AIM-SNPs, to handle multiallele STR data using frequency-based training sets. We assessed the ability of the 12-plex AIM-STRs to differentiate CEPH Human Genome Diversity Panel populations, plus their informativeness when combined with established forensic STRs and AIM-SNPs. We found that combining STRs and SNPs improves the success rate of ancestry assignments while providing a reliable mixture detection system lacking from SNP analysis alone. As the 12 STRs generally show a broad range of alleles in all populations, they provide highly informative supplementary STRs for extended relationship testing and identification of missing persons with incomplete reference pedigrees. Lastly, mixed-marker approaches (combining STRs with binary loci) for simple ancestry inference tests beyond forensic analysis bring advantages, and we discuss the genotyping options available.
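A frequency-based Bayesian classifier of the kind described (Snipper adapted to STRs) can be loosely sketched as a naive likelihood product over loci. This is an illustration of the general idea only, not Snipper's actual implementation; the population names, loci, and allele frequencies below are all invented:

```python
import math

# Invented per-population allele frequencies at two hypothetical STR loci.
FREQS = {
    "POP_A": {"locus1": {"12": 0.7, "13": 0.3}, "locus2": {"9": 0.8, "10": 0.2}},
    "POP_B": {"locus1": {"12": 0.2, "13": 0.8}, "locus2": {"9": 0.3, "10": 0.7}},
}

def log_likelihood(profile, pop_freqs):
    """Naive log-likelihood of an allele profile under one population:
    the product of per-allele frequencies, assuming independence across
    loci (and ignoring genotype-combinatorics constants)."""
    ll = 0.0
    for locus, alleles in profile.items():
        for allele in alleles:
            ll += math.log(pop_freqs[locus][allele])
    return ll

def classify(profile, freqs):
    """Assign the profile to the population with the highest likelihood."""
    return max(freqs, key=lambda pop: log_likelihood(profile, freqs[pop]))

profile = {"locus1": ["12", "12"], "locus2": ["9", "10"]}
print(classify(profile, FREQS))  # POP_A
```

Real ancestry inference would use many more loci, trained frequency sets, and report likelihood ratios rather than a bare class label.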

  1. Culture-Independent Analysis of Probiotic Products by Denaturing Gradient Gel Electrophoresis

    PubMed Central

    Temmerman, R.; Scheirlinck, I.; Huys, G.; Swings, J.

    2003-01-01

    In order to obtain functional and safe probiotic products for human consumption, fast and reliable quality control of these products is crucial. Currently, analysis of most probiotics is still based on culture-dependent methods involving the use of specific isolation media and identification of a limited number of isolates, which makes this approach relatively insensitive, laborious, and time-consuming. In this study, a collection of 10 probiotic products, including four dairy products, one fruit drink, and five freeze-dried products, were subjected to microbial analysis by using a culture-independent approach, and the results were compared with the results of a conventional culture-dependent analysis. The culture-independent approach involved extraction of total bacterial DNA directly from the product, PCR amplification of the V3 region of the 16S ribosomal DNA, and separation of the amplicons on a denaturing gradient gel. Digital capturing and processing of denaturing gradient gel electrophoresis (DGGE) band patterns allowed direct identification of the amplicons at the species level. This whole culture-independent approach can be performed in less than 30 h. Compared with culture-dependent analysis, the DGGE approach was found to have a much higher sensitivity for detection of microbial strains in probiotic products in a fast, reliable, and reproducible manner. Unfortunately, as reported in previous studies in which the culture-dependent approach was used, a rather high percentage of probiotic products suffered from incorrect labeling and yielded low bacterial counts, which may decrease their probiotic potential. PMID:12513998

  2. Probabilistic simulation of the human factor in structural reliability

    NASA Astrophysics Data System (ADS)

    Chamis, Christos C.; Singhal, Surendra N.

    1994-09-01

    The formal approach described herein computationally simulates the probable ranges of uncertainties for the human factor in probabilistic assessments of structural reliability. Human factors such as marital status, professional status, home life, job satisfaction, workload, and health are studied by using a multifactor interaction equation (MFIE) model to demonstrate the approach. Parametric studies in conjunction with judgment are used to select reasonable values for the participating factors (primitive variables). Subsequently performed probabilistic sensitivity studies assess the suitability of the MFIE as well as the validity of the whole approach. Results show that uncertainties range from 5 to 30 percent for the most optimistic case, assuming 100 percent for no error (perfect performance).
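The multifactor interaction equation referenced in this record is, in its commonly published NASA form, a product of power-law terms, one per primitive variable. A minimal sketch under that assumption follows; the factor values and exponent are invented for illustration:

```python
def mfie_ratio(factors):
    """Multifactor interaction equation (product form): each factor is a
    tuple (a_final, a, a_ref, exponent) contributing the term
    ((a_final - a) / (a_final - a_ref)) ** exponent.
    The combined ratio scales a nominal property (here, performance)."""
    ratio = 1.0
    for a_final, a, a_ref, exponent in factors:
        ratio *= ((a_final - a) / (a_final - a_ref)) ** exponent
    return ratio

# One hypothetical human factor at half of its limiting value, with a
# linear (exponent 1) influence: performance drops to 50% of nominal.
print(mfie_ratio([(1.0, 0.5, 0.0, 1.0)]))  # 0.5
```

Additional factors multiply in further terms, which is what lets the model combine, say, workload and job satisfaction into one performance ratio.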

  3. Probabilistic Simulation of the Human Factor in Structural Reliability

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Singhal, Surendra N.

    1994-01-01

    The formal approach described herein computationally simulates the probable ranges of uncertainties for the human factor in probabilistic assessments of structural reliability. Human factors such as marital status, professional status, home life, job satisfaction, workload, and health are studied by using a multifactor interaction equation (MFIE) model to demonstrate the approach. Parametric studies in conjunction with judgment are used to select reasonable values for the participating factors (primitive variables). Subsequently performed probabilistic sensitivity studies assess the suitability of the MFIE as well as the validity of the whole approach. Results show that uncertainties range from 5 to 30 percent for the most optimistic case, assuming 100 percent for no error (perfect performance).

  4. A visual analytic framework for data fusion in investigative intelligence

    NASA Astrophysics Data System (ADS)

    Cai, Guoray; Gross, Geoff; Llinas, James; Hall, David

    2014-05-01

    Intelligence analysis depends on data fusion systems to provide capabilities for detecting and tracking important objects, events, and their relationships in connection with an analytical situation. However, automated data fusion technologies are not mature enough to offer reliable and trustworthy information for situation awareness. Given the trend of increasing sophistication of data fusion algorithms and the loss of transparency in the data fusion process, analysts are left out of the data fusion process cycle with little to no control over, or confidence in, the data fusion outcome. Following the recent rethinking of data fusion as a human-centered process, this paper proposes a conceptual framework for developing an alternative data fusion architecture. This idea is inspired by recent advances in our understanding of human cognitive systems, the science of visual analytics, and the latest thinking about human-centered data fusion. Our conceptual framework is supported by an analysis of the limitations of existing fully automated data fusion systems, in which the effectiveness of important algorithmic decisions depends on the availability of expert knowledge or knowledge of the analyst's mental state in an investigation. The success of this effort will result in next-generation data fusion systems that can be better trusted while maintaining high throughput.

  5. Computational Approaches to Phenotyping

    PubMed Central

    Lussier, Yves A.; Liu, Yang

    2007-01-01

    The recent completion of the Human Genome Project has made possible a high-throughput “systems approach” for accelerating the elucidation of molecular underpinnings of human diseases, and subsequent derivation of molecular-based strategies to more effectively prevent, diagnose, and treat these diseases. Although altered phenotypes are among the most reliable manifestations of altered gene functions, research using systematic analysis of phenotype relationships to study human biology is still in its infancy. This article focuses on the emerging field of high-throughput phenotyping (HTP) phenomics research, which aims to capitalize on novel high-throughput computation and informatics technology developments to derive genomewide molecular networks of genotype–phenotype associations, or “phenomic associations.” The HTP phenomics research field faces the challenge of technological research and development to generate novel tools in computation and informatics that will allow researchers to amass, access, integrate, organize, and manage phenotypic databases across species and enable genomewide analysis to associate phenotypic information with genomic data at different scales of biology. Key state-of-the-art technological advancements critical for HTP phenomics research are covered in this review. In particular, we highlight the power of computational approaches to conduct large-scale phenomics studies. PMID:17202287

  6. 3D polylactide-based scaffolds for studying human hepatocarcinoma processes in vitro

    NASA Astrophysics Data System (ADS)

    Scaffaro, Roberto; Lo Re, Giada; Rigogliuso, Salvatrice; Ghersi, Giulio

    2012-08-01

    We evaluated the combination of leaching techniques and melt blending of polymers and particles for the preparation of highly interconnected three-dimensional polymeric porous scaffolds for in vitro studies of human hepatocarcinoma processes. More specifically, sodium chloride and poly(ethylene glycol) (PEG) were used as water-soluble porogens to form porous and solvent-free poly(L,D-lactide) (PLA)-based scaffolds. Several characterization techniques, including porosimetry, image analysis and thermogravimetry, were combined to improve the reliability of measurements and mapping of the size, distribution and microarchitecture of pores. We also investigated the effect of processing, in PLA-based blends, on the simultaneous bulk/surface modifications and pore architectures in the scaffolds, and assessed the effects on human hepatocarcinoma viability and cell adhesion. The influence of PEG molecular weight on the scaffold morphology and cell viability and adhesion was also investigated. Morphological studies indicated that it was possible to obtain scaffolds with well-interconnected pores of assorted sizes. The analysis confirmed that SK-Hep1 cells adhered well to the polymeric support and emitted the surface protrusions necessary to grow and differentiate in three-dimensional systems. PEGs with higher molecular weight showed the best results in terms of cell adhesion and viability.

  7. Multidisciplinary System Reliability Analysis

    NASA Technical Reports Server (NTRS)

    Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)

    2001-01-01

    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines, such as heat transfer, fluid mechanics, and electrical circuits, without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.

  8. Multi-Disciplinary System Reliability Analysis

    NASA Technical Reports Server (NTRS)

    Mahadevan, Sankaran; Han, Song

    1997-01-01

    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code developed under the leadership of NASA Lewis Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines, such as heat transfer, fluid mechanics, and electrical circuits, without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multi-disciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.

  9. Attributing runoff changes to climate variability and human activities: uncertainty analysis using four monthly water balance models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Shuai; Xiong, Lihua; Li, Hong-Yi

    2015-05-26

    Hydrological simulations that delineate the impacts of climate variability and human activities are subject to uncertainties in both the parameters and the structure of the hydrological models. To analyze the impact of these uncertainties on model performance and to yield more reliable simulation results, a global calibration and multimodel combination method was proposed that integrates the Shuffled Complex Evolution Metropolis (SCEM) algorithm and Bayesian Model Averaging (BMA) of four monthly water balance models. The method was applied to the Weihe River Basin (WRB), the largest tributary of the Yellow River, to determine the contribution of climate variability and human activities to runoff changes. The change point, which was used to determine the baseline period (1956-1990) and the human-impacted period (1991-2009), was derived using both a cumulative curve and Pettitt's test. Results show that the combination method provides more skillful deterministic predictions than the best calibrated individual model, resulting in the smallest uncertainty interval for runoff changes attributed to climate variability and human activities. This combination methodology provides a practical and flexible tool for attributing runoff changes to climate variability and human activities using hydrological models.
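The BMA step described in this record produces, for a point forecast, a weighted average of the individual model predictions, with posterior model weights summing to one. A minimal sketch (the runoff values and weights are invented, not results from the Weihe study):

```python
def bma_prediction(predictions, weights):
    """Bayesian Model Averaging point forecast: the weighted mean of the
    individual model predictions under posterior model weights."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * p for w, p in zip(weights, predictions))

# Hypothetical monthly runoff forecasts (mm) from four water balance
# models, with invented BMA weights favouring the better-calibrated models:
runoff = [42.0, 40.0, 45.0, 39.0]
weights = [0.4, 0.3, 0.2, 0.1]
print(round(bma_prediction(runoff, weights), 1))  # 41.7
```

The full BMA scheme also yields a predictive variance (within-model plus between-model spread), which is what gives the uncertainty interval the abstract refers to.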

  10. An evaluation of tyramide signal amplification and archived fixed and frozen tissue in microarray gene expression analysis

    PubMed Central

    Karsten, Stanislav L.; Van Deerlin, Vivianna M. D.; Sabatti, Chiara; Gill, Lisa H.; Geschwind, Daniel H.

    2002-01-01

    Archival formalin-fixed, paraffin-embedded and ethanol-fixed tissues represent a potentially invaluable resource for gene expression analysis, as they are the most widely available material for studies of human disease. Little data are available evaluating whether RNA obtained from fixed (archival) tissues could produce reliable and reproducible microarray expression data. Here we compare the use of RNA isolated from human archival tissues fixed in ethanol and formalin to frozen tissue in cDNA microarray experiments. Since an additional factor that can limit the utility of archival tissue is the often small quantities available, we also evaluate the use of the tyramide signal amplification method (TSA), which allows the use of small amounts of RNA. Detailed analysis indicates that TSA provides a consistent and reproducible signal amplification method for cDNA microarray analysis, across both arrays and the genes tested. Analysis of this method also highlights the importance of performing non-linear channel normalization and dye switching. Furthermore, archived, fixed specimens can perform well, but not surprisingly, produce more variable results than frozen tissues. Consistent results are more easily obtainable using ethanol-fixed tissues, whereas formalin-fixed tissue does not typically provide a useful substrate for cDNA synthesis and labeling. PMID:11788730

  11. Reliability Generalization (RG) Analysis: The Test Is Not Reliable

    ERIC Educational Resources Information Center

    Warne, Russell

    2008-01-01

    Literature shows that most researchers are unaware of some of the characteristics of reliability. This paper clarifies some misconceptions by describing the procedures, benefits, and limitations of reliability generalization while using it to illustrate the nature of score reliability. Reliability generalization (RG) is a meta-analytic method…

  12. Reliable enumeration of malaria parasites in thick blood films using digital image analysis.

    PubMed

    Frean, John A

    2009-09-23

    Quantitation of malaria parasite density is an important component of laboratory diagnosis of malaria. Microscopy of Giemsa-stained thick blood films is the conventional method for parasite enumeration. Accurate and reproducible parasite counts are difficult to achieve because of inherent technical limitations and human inconsistency. Inaccurate parasite density estimation may have adverse clinical and therapeutic implications for patients, and for endpoints of clinical trials of anti-malarial vaccines or drugs. Digital image analysis provides an opportunity to improve the performance of parasite density quantitation. Accurate manual parasite counts were done on 497 images of a range of thick blood films with varying densities of malaria parasites, to establish a uniformly reliable standard against which to assess the digital technique. By utilizing descriptive statistical parameters of parasite size frequency distributions, the particle-counting algorithms of the digital image analysis programme were semi-automatically adapted to variations in parasite size, shape and staining characteristics, to produce optimum signal/noise ratios. A reliable counting process was developed that requires no operator decisions that might bias the outcome. Digital counts were highly correlated with manual counts for medium to high parasite densities, and slightly less well correlated with conventional counts. At low densities (fewer than 6 parasites per analysed image) signal/noise ratios were compromised and correlation between digital and manual counts was poor. Conventional counts were consistently lower than both digital and manual counts. Using open-access software and avoiding custom programming or any special operator intervention, accurate digital counts were obtained, particularly at high parasite densities that are difficult to count conventionally. The technique is potentially useful for laboratories that routinely perform malaria parasite enumeration; its requirements (a digital microscope camera, a personal computer and good-quality staining of slides) are reasonably easy to meet.
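The particle counting with size-based noise rejection described in this record can be sketched as connected-component labelling of a thresholded image, discarding blobs below a minimum size. This is a generic illustration of the technique, not the study's actual pipeline, and the binary mask below is an invented toy image:

```python
from collections import deque

def count_particles(mask, min_size):
    """Count 4-connected foreground blobs in a binary mask, discarding
    blobs smaller than min_size pixels (size-frequency noise rejection)."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # Flood-fill one blob and measure its pixel count.
                size, queue = 0, deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if size >= min_size:
                    count += 1
    return count

# Toy thresholded image: one 4-pixel "parasite" and one 1-pixel noise speck.
mask = [
    [1, 1, 0, 0, 0],
    [1, 1, 0, 0, 1],
    [0, 0, 0, 0, 0],
]
print(count_particles(mask, min_size=2))  # 1
```

In a real workflow the mask would come from staining-specific colour thresholding, and `min_size` would be set from the parasite size-frequency distribution, as the abstract describes.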

  13. Reliability of the European Society of Human Reproduction and Embryology/European Society for Gynaecological Endoscopy and American Society for Reproductive Medicine classification systems for congenital uterine anomalies detected using three-dimensional ultrasonography.

    PubMed

    Ludwin, Artur; Ludwin, Inga; Kudla, Marek; Kottner, Jan

    2015-09-01

    To estimate the inter-rater/intrarater reliability of the European Society of Human Reproduction and Embryology/European Society for Gynaecological Endoscopy (ESHRE-ESGE) classification of congenital uterine malformations and to compare the results with the reliability of the American Society for Reproductive Medicine (ASRM) classification supplemented with additional morphometric criteria. Reliability/agreement study. Private clinic. The study sample comprised uterine malformations (n = 50 patients, consecutively included) and normal uteri (n = 62 women, randomly selected). These were classified based on real-time three-dimensional ultrasound single-volume transvaginal (or transrectal in the case of virgins, 4 cases) ultrasonography findings, which were assessed by an expert rater based on the ESHRE-ESGE criteria. The samples were obtained from women of reproductive age. Unprocessed three-dimensional datasets were independently evaluated offline by two experienced, blinded raters using both classification systems. The main outcome measures were κ-values and proportions of agreement. Standardized interpretation indicated that the ESHRE-ESGE system has substantial/good or almost perfect/very good reliability (κ > 0.60 and > 0.80), but interpretation against clinically relevant cutoffs of κ showed insufficient reliability for clinical use (κ < 0.90), especially in the diagnosis of septate uterus. The ASRM system had sufficient reliability (κ > 0.95). The low reliability of the ESHRE-ESGE system may lead to a lack of consensus about the management of common uterine malformations and to biased research interpretations. The use of the ASRM classification, supplemented with simple morphometric criteria, may be preferred if its sufficient reliability can be confirmed in real time in a large sample.
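The agreement statistic reported in this record, κ, is typically Cohen's kappa: observed agreement corrected for agreement expected by chance. A minimal sketch follows; the two-rater confusion matrix is invented toy data, not counts from the study:

```python
def cohens_kappa(confusion):
    """Cohen's kappa for a square confusion matrix of two raters'
    classifications (rows: rater 1, columns: rater 2).
    kappa = (p_observed - p_expected) / (1 - p_expected)."""
    n = sum(sum(row) for row in confusion)
    observed = sum(confusion[i][i] for i in range(len(confusion))) / n
    expected = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion)
        for i in range(len(confusion))
    ) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical agreement of two raters on 50 cases (e.g. normal vs. septate):
confusion = [[20, 5],
             [10, 15]]
print(round(cohens_kappa(confusion), 3))  # 0.4
```

Conventional verbal benchmarks (e.g. > 0.60 "substantial", > 0.80 "almost perfect") are the "standardized interpretation" the abstract contrasts with stricter clinically motivated cutoffs such as κ > 0.90.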

  14. Site-specific landslide assessment in Alpine area using a reliable integrated monitoring system

    NASA Astrophysics Data System (ADS)

    Romeo, Saverio; Di Matteo, Lucio; Kieffer, Daniel Scott

    2016-04-01

    Rockfalls are one of the major causes of landslide fatalities around the world. The present work discusses the reliability of integrated displacement monitoring of a rockfall in the Alpine region (Salzburg Land, Austria), also taking into account the effect of ongoing climate change. Because the frequency and magnitude of such events are unpredictable and threaten human lives and infrastructure, an efficient monitoring system is frequently necessary. For this reason, over the last decades integrated monitoring systems for unstable slopes have been widely developed and used (e.g., extensometers, cameras, remote sensing, etc.). In this framework, remote sensing techniques such as GBInSAR (Ground-Based Interferometric Synthetic Aperture Radar) have emerged as efficient and powerful tools for deformation monitoring. GBInSAR measurements can support an early warning system based on surface deformation parameters such as ground displacement or inverse velocity (for semi-empirical forecasting methods). In order to check the reliability of GBInSAR and to monitor the evolution of the landslide, it is very important to integrate different techniques. Indeed, a multi-instrumental approach is essential to investigate movements both at the surface and at depth, and the use of different monitoring techniques makes it possible to cross-analyze the data, minimize errors, check data quality, and improve the monitoring system. During 2013, an intense and complete monitoring campaign was conducted on the Ingelsberg landslide. Analysis of both historical temperature series (HISTALP) recorded over the last century and those from local weather stations shows that temperature values (autumn-winter, winter, and spring) have clearly increased in the Bad Hofgastein area as well as in the Alpine region. As a consequence, in recent decades rockfall events have shifted from spring to summer due to warmer winters.
It is interesting to point out that temperature values recorded in the valley and on the slope show a good relationship, indicating that the climatic monitoring is reliable. In addition, the landslide displacement monitoring is reliable as well: the comparison between displacements at depth measured by extensometers and at the surface measured by GBInSAR, for March-December 2013, shows high reliability, as confirmed by an inter-rater reliability analysis (Pearson correlation coefficient higher than 0.9). In conclusion, the reliability of the monitoring system confirms that the data can be used to improve knowledge of rockfall kinematics and to develop an accurate early warning system useful for civil protection purposes.
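
The cross-check between the extensometer and GBInSAR displacement series rests on a Pearson correlation coefficient. A minimal sketch of that computation (the displacement values below are illustrative, not the Ingelsberg data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired series."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

# Hypothetical cumulative displacement (mm) from the two instruments:
extensometer = [0.0, 1.2, 2.1, 3.4, 4.0, 5.3]
gbinsar = [0.1, 1.0, 2.3, 3.2, 4.2, 5.1]
r = pearson_r(extensometer, gbinsar)  # close to 1 for well-agreeing series
```

A coefficient above 0.9, as reported in the record, indicates that the two independent instruments track essentially the same deformation signal.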

  15. Calcium-deficiency assessment and biomarker identification by an integrated urinary metabonomics analysis

    PubMed Central

    2013-01-01

    Background Calcium deficiency is a global public-health problem. Although the initial stage of calcium deficiency can lead to metabolic alterations or potential pathological changes, calcium deficiency is difficult to diagnose accurately. Moreover, the details of the molecular mechanism of calcium deficiency remain somewhat elusive. To accurately assess and provide appropriate nutritional intervention, we carried out a global analysis of metabolic alterations in response to calcium deficiency. Methods The metabolic alterations associated with calcium deficiency were first investigated in a rat model, using urinary metabonomics based on ultra-performance liquid chromatography coupled with quadrupole time-of-flight tandem mass spectrometry and multivariate statistical analysis. Correlations between dietary calcium intake and the biomarkers identified from the rat model were further analyzed to confirm the potential application of these biomarkers in humans. Results Urinary metabolic-profiling analysis could preliminarily distinguish between calcium-deficient and non-deficient rats after a 2-week low-calcium diet. We established an integrated metabonomics strategy for identifying reliable biomarkers of calcium deficiency using a time-course analysis of discriminating metabolites in a low-calcium diet experiment, repeating the low-calcium diet experiment and performing a calcium-supplement experiment. In total, 27 biomarkers were identified, including glycine, oxoglutaric acid, pyrophosphoric acid, sebacic acid, pseudouridine, indoxyl sulfate, taurine, and phenylacetylglycine. The integrated urinary metabonomics analysis, which combined biomarkers with regular trends of change (types A, B, and C), could accurately assess calcium-deficient rats at different stages and clarify the dynamic pathophysiological changes and molecular mechanism of calcium deficiency in detail. 
Significant correlations between calcium intake and two biomarkers, pseudouridine (Pearson correlation, r = 0.53, P = 0.0001) and citrate (Pearson correlation, r = -0.43, P = 0.001), were further confirmed in 70 women. Conclusions To our knowledge, this is the first report of reliable biomarkers of calcium deficiency identified using an integrated strategy. The identified biomarkers give new insights into the pathophysiological changes and molecular mechanisms of calcium deficiency. The correlations between calcium intake and two of the biomarkers provide a rationale for further assessment and elucidation of the metabolic responses to calcium deficiency in humans. PMID:23537001

  16. Quantifying the test-retest reliability of cerebral blood flow measurements in a clinical model of on-going post-surgical pain: A study using pseudo-continuous arterial spin labelling.

    PubMed

    Hodkinson, Duncan J; Krause, Kristina; Khawaja, Nadine; Renton, Tara F; Huggins, John P; Vennart, William; Thacker, Michael A; Mehta, Mitul A; Zelaya, Fernando O; Williams, Steven C R; Howard, Matthew A

    2013-01-01

    Arterial spin labelling (ASL) is increasingly being applied to study the cerebral response to pain in both experimental human models and patients with persistent pain. Despite its advantages, scanning time and reliability remain important issues in the clinical applicability of ASL. Here we present the test-retest analysis of concurrent pseudo-continuous ASL (pCASL) and visual analogue scale (VAS), in a clinical model of on-going pain following third molar extraction (TME). Using ICC performance measures, we were able to quantify the reliability of the post-surgical pain state and ΔCBF (change in CBF), both at the group and individual case level. Within-subject, the inter- and intra-session reliability of the post-surgical pain state was ranked good-to-excellent (ICC > 0.6) across both pCASL and VAS modalities. The parameter ΔCBF (change in CBF between pre- and post-surgical states) performed reliably (ICC > 0.4), provided that a single baseline condition (or the mean of more than one baseline) was used for subtraction. Between-subjects, the pCASL measurements in the post-surgical pain state and ΔCBF were both characterised as reliable (ICC > 0.4). However, the subjective VAS pain ratings demonstrated a significant contribution of pain state variability, which suggests diminished utility for interindividual comparisons. These analyses indicate that the pCASL imaging technique has considerable potential for the comparison of within- and between-subjects differences associated with pain-induced state changes and baseline differences in regional CBF. They also suggest that differences in baseline perfusion and functional lateralisation characteristics may play an important role in the overall reliability of the estimated changes in CBF. Repeated measures designs have the important advantage that they provide good reliability for comparing condition effects because all sources of variability between subjects are excluded from the experimental error. 
The ability to elicit reliable neural correlates of on-going pain using quantitative perfusion imaging may help support the conclusions derived from subjective self-report.
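
The ICC thresholds quoted above (e.g., ICC > 0.6 for good-to-excellent) come from an intraclass correlation computed over a repeated-measures design. The exact ICC variant used by the study is not stated in this record; as a sketch, the common single-measure, absolute-agreement form ICC(2,1) for an n-subject by k-session matrix can be derived from the two-way ANOVA decomposition:

```python
import numpy as np

def icc_2_1(data: np.ndarray) -> float:
    """Two-way random-effects, absolute-agreement, single-measure
    ICC(2,1) for an (n_subjects x k_sessions) data matrix."""
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)   # per-subject means
    col_means = data.mean(axis=0)   # per-session means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_total = ((data - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols
    ms_r = ss_rows / (n - 1)                 # between-subjects mean square
    ms_c = ss_cols / (k - 1)                 # between-sessions mean square
    ms_e = ss_err / ((n - 1) * (k - 1))      # residual mean square
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Hypothetical regional CBF values (ml/100g/min) for 3 subjects x 2 sessions:
cbf = np.array([[52.0, 53.1], [61.0, 60.2], [47.5, 48.0]])
icc = icc_2_1(cbf)
```

High between-subjects variance relative to the residual drives the ICC toward 1, which is why stable individual baseline differences, as the record notes, contribute to overall reliability.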

  17. The 747 primary flight control systems reliability and maintenance study

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The major operational characteristics of the 747 Primary Flight Control Systems (PFCS) are described. Results of reliability analysis for separate control functions are presented. The analysis makes use of a NASA computer program which calculates reliability of redundant systems. Costs for maintaining the 747 PFCS in airline service are assessed. The reliabilities and cost will provide a baseline for use in trade studies of future flight control system design.
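
    The NASA program referenced above is not described in detail in this record. As an illustrative sketch of the kind of computation a redundant-system reliability tool performs, the reliability of a k-out-of-n channel set with independent, identical channels follows the binomial formula (the channel reliability value below is hypothetical):

    ```python
    from math import comb

    def k_of_n_reliability(r: float, n: int, k: int) -> float:
        """Probability that at least k of n independent, identical
        channels (each with reliability r) remain operational."""
        return sum(comb(n, i) * r**i * (1 - r) ** (n - i)
                   for i in range(k, n + 1))

    # A hypothetical triple-redundant channel that works if 2 of 3 survive:
    system_r = k_of_n_reliability(0.99, n=3, k=2)  # ≈ 0.999702
    ```

    Redundancy buys reliability only up to the point where common-cause failures dominate, which is why such analyses are typically run per control function, as this study did.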

  18. A Comparative Magnetic Resonance Imaging Study of the Anatomy, Variability, and Asymmetry of Broca's Area in the Human and Chimpanzee Brain

    PubMed Central

    Keller, Simon S.; Roberts, Neil; Hopkins, William

    2009-01-01

    The frontal operculum—classically considered to be Broca's area—has special significance and interest in clinical, cognitive, and comparative neuroscience given its role in spoken language and the long-held assumption that structural asymmetry of this region of cortex may be related to functional lateralization of human language. We performed a detailed morphological and morphometric analysis of this area of the brain in humans and chimpanzees using identical image acquisition parameters, image analysis techniques, and consistent anatomical boundaries in both species. We report great inter-individual variability of the sulcal contours defining the operculum in both species, particularly discontinuity of the inferior frontal sulcus in humans and bifurcation of the inferior precentral sulcus in chimpanzees. There was no evidence of population-based asymmetry of the frontal opercular gray matter in humans or chimpanzees. The diagonal sulcus was only identified in humans, and its presence was significantly (F = 12.782, p < 0.001) associated with total volume of the ipsilateral operculum. The findings presented here suggest that there is no population-based interhemispheric macroscopic asymmetry of Broca's area in humans or Broca's area homolog in chimpanzees. However, given that previous studies have reported asymmetry in the cytoarchitectonic fields considered to represent Broca's area—which is important given that cytoarchitectonic boundaries are more closely related to the regional functional properties of cortex relative to sulcal landmarks—it may be that the gross morphology of the frontal operculum is not a reliable indicator of Broca's area per se. PMID:19923293

  19. Natural and human-induced terrestrial water storage change: A global analysis using hydrological models and GRACE

    NASA Astrophysics Data System (ADS)

    Felfelani, Farshid; Wada, Yoshihide; Longuevergne, Laurent; Pokhrel, Yadu N.

    2017-10-01

    Hydrological models and the data derived from the Gravity Recovery and Climate Experiment (GRACE) satellite mission have been widely used to study the variations in terrestrial water storage (TWS) over large regions. However, both GRACE products and model results suffer from inherent uncertainties, underscoring the need for a combined use of GRACE and models to examine the variations in total TWS and its individual components, especially in relation to natural and human-induced changes in the terrestrial water cycle. In this study, we use the results from two state-of-the-art hydrological models and different GRACE spherical harmonic products to examine the variations in TWS and its individual components, and to attribute the changes to natural and human-induced factors over large global river basins. Analysis of the spatial patterns of the long-term trend in TWS from the two models and GRACE suggests that both models capture the GRACE-measured direction of change, but differ from GRACE as well as each other in terms of the magnitude over different regions. A detailed analysis of the seasonal cycle of TWS variations over 30 river basins shows notable differences not only between models and GRACE but also among different GRACE products and between the two models. Further, it is found that while one model performs well in highly-managed river basins, it fails to reproduce the GRACE-observed signal in snow-dominated regions, and vice versa. The isolation of natural and human-induced changes in TWS in some of the managed basins reveals a consistently declining TWS trend during 2002-2010; however, significant differences are again evident both between GRACE and models and among different GRACE products and models.
Results from the decomposition of the TWS signal into trend and seasonality indicate that neither model adequately captures both the trend and the seasonality in the managed or snow-dominated basins, implying that TWS variations from a single model cannot be reliably used for all global regions. It is also found that uncertainties arising from climate forcing datasets can introduce significant additional uncertainties, making direct comparison of model results and GRACE products even more difficult. Our results highlight the need to further improve the representation of human land-water management and snow processes in large-scale models to enable reliable use of models and GRACE to study changes in freshwater systems across all global regions.

  20. Reliability of reflectance measures in passive filters

    NASA Astrophysics Data System (ADS)

    Saldiva de André, Carmen Diva; Afonso de André, Paulo; Rocha, Francisco Marcelo; Saldiva, Paulo Hilário Nascimento; Carvalho de Oliveira, Regiani; Singer, Julio M.

    2014-08-01

    Measurements of optical reflectance in passive filters impregnated with a reactive chemical solution may be transformed to ozone concentrations via a calibration curve and constitute a low cost alternative for environmental monitoring, mainly to estimate human exposure. Given the possibility of errors caused by exposure bias, it is common to consider sets of m filters exposed during a certain period to estimate the latent reflectance on n different sample occasions at a certain location. Mixed models with sample occasions as random effects are useful to analyze data obtained under such setups. The intra-class correlation coefficient of the mean of the m measurements is an indicator of the reliability of the latent reflectance estimates. Our objective is to determine m in order to obtain a pre-specified reliability of the estimates, taking possible outliers into account. To illustrate the procedure, we consider an experiment conducted at the Laboratory of Experimental Air Pollution, University of São Paulo, Brazil (LPAE/FMUSP), where sets of m = 3 filters were exposed during 7 days on n = 9 different occasions at a certain location. The results show that the reliability of the latent reflectance estimates for each occasion obtained under homoskedasticity is km = 0.74. A residual analysis suggests that the within-occasion variance for two of the occasions should be different from the others. A refined model with two within-occasion variance components was considered, yielding km = 0.56 for these occasions and km = 0.87 for the remaining ones. To guarantee that all estimates have a reliability of at least 80% we require measurements on m = 10 filters on each occasion.
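
    The step from m = 3 exposed filters to the m = 10 required for 80% reliability can be reproduced with the classical Spearman-Brown relation for the reliability of a mean of m parallel measurements, k_m = m·ρ / (1 + (m-1)·ρ). This is a sketch under the assumption that the paper's k_m follows that relation; the function names are ours:

    ```python
    import math

    def single_measure_icc(k_m: float, m: int) -> float:
        """Invert k_m = m*rho / (1 + (m-1)*rho) to recover the
        single-filter ICC rho from the reliability of a mean of m."""
        return k_m / (m - (m - 1) * k_m)

    def filters_needed(rho: float, target: float) -> int:
        """Smallest m whose mean of m filters reaches the target reliability."""
        return math.ceil(target * (1 - rho) / (rho * (1 - target)))

    # Back-solve from the refined model's least reliable occasions (k_3 = 0.56):
    rho = single_measure_icc(0.56, 3)
    m = filters_needed(rho, 0.80)  # → 10, matching the paper's recommendation
    ```

    The same calculation applied to the homoskedastic estimate (k_3 = 0.74) gives a much smaller m, which shows why modeling the two higher-variance occasions separately drives the recommended number of filters up to 10.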
