On modeling human reliability in space flights - Redundancy and recovery operations
NASA Astrophysics Data System (ADS)
Aarset, M.; Wright, J. F.
The reliability of humans is of paramount importance to the safety of space flight systems. This paper describes why 'back-up' operators might not be the best solution and, in some cases, might even degrade system reliability. The problems associated with human redundancy call for special treatment in reliability analyses. The concept of standby redundancy is adopted, and psychological and mathematical models are introduced to improve the way such problems can be estimated and handled. In the past, human reliability has been largely neglected in most reliability analyses, and, when included, humans have been modeled as components and treated numerically the way technical components are. This approach is not wrong in itself, but it may lead to systematic errors if overly simple analogies from the technical domain are used in the modeling of human behavior. This paper addresses redundancy in a man-machine system and shows how simplifications carried over from the technical domain, when applied to the human components of a system, may give non-conservative estimates of system reliability.
NASA Astrophysics Data System (ADS)
Ha, Taesung
A probabilistic risk assessment (PRA) was conducted for a loss of coolant accident (LOCA) in the McMaster Nuclear Reactor (MNR). A level 1 PRA was completed including event sequence modeling, system modeling, and quantification. To support the quantification of the accident sequences identified, data analysis using the Bayesian method and human reliability analysis (HRA) using the accident sequence evaluation procedure (ASEP) approach were performed. Since human performance in research reactors is significantly different from that in power reactors, a time-oriented HRA model (reliability physics model) was applied for the human error probability (HEP) estimation of the core relocation. This model is based on two competing random variables: phenomenological time and performance time. The response surface and direct Monte Carlo simulation with Latin Hypercube sampling were applied for estimating the phenomenological time, whereas the performance time was obtained from interviews with operators. An appropriate probability distribution for the phenomenological time was assigned by statistical goodness-of-fit tests. The human error probability (HEP) for the core relocation was estimated from these two competing quantities: phenomenological time and operators' performance time. The sensitivity of each probability distribution in human reliability estimation was investigated. In order to quantify the uncertainty in the predicted HEPs, a Bayesian approach was selected due to its capability of incorporating uncertainties in the model itself and in its parameters. The HEP from the current time-oriented model was compared with that from the ASEP approach. Both results were used to evaluate the sensitivity of alternative human reliability modeling for the manual core relocation in the LOCA risk model. This exercise demonstrated the applicability of a reliability physics model supplemented with a Bayesian approach for modeling human reliability and its potential usefulness for quantifying model uncertainty through sensitivity analysis in the PRA model.
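The two-competing-times formulation lends itself to a direct Monte Carlo estimate. The Python sketch below assumes lognormal distributions with illustrative parameters for both times (the study fitted its own distributions via goodness-of-fit tests); only the structure, a Latin Hypercube sample for the phenomenological time and a comparison of the two draws, follows the abstract.

```python
import numpy as np
from scipy.stats import lognorm, qmc

# Hypothetical distributions (illustrative parameters only):
# phenomenological time = time until core damage becomes unavoidable;
# performance time = time for operators to diagnose and relocate the core.
n = 100_000
rng = np.random.default_rng(42)

# Latin Hypercube sample for the phenomenological time, as in the study.
u = qmc.LatinHypercube(d=1, seed=42).random(n).ravel()
t_phen = lognorm(s=0.3, scale=45.0).ppf(u)                 # minutes, assumed

# Performance time was elicited from operator interviews in the study;
# here simply assumed lognormal.
t_perf = lognorm(s=0.5, scale=20.0).rvs(size=n, random_state=rng)

# Human error probability: the operator action completes too late.
hep = np.mean(t_perf > t_phen)
print(f"Estimated HEP for manual core relocation: {hep:.4f}")
```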
Culture Representation in Human Reliability Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
David Gertman; Julie Marble; Steven Novack
Understanding human-system response is critical to being able to plan and predict mission success in the modern battlespace. Commonly, human reliability analysis has been used to predict failures of human performance in complex, critical systems. However, most human reliability methods fail to take culture into account. This paper takes an easily understood, state-of-the-art human reliability analysis method and extends that method to account for the influence of culture, including acceptance of new technology, upon performance. The cultural parameters used to modify the human reliability analysis were determined from two standard industry approaches to cultural assessment: Hofstede's (1991) cultural factors and Davis' (1989) technology acceptance model (TAM). The result is called the Culture Adjustment Method (CAM). An example is presented that (1) reviews human reliability assessment with and without cultural attributes for a Supervisory Control and Data Acquisition (SCADA) system attack, (2) demonstrates how country-specific information can be used to increase the realism of HRA modeling, and (3) discusses the differences in human error probability estimates arising from cultural differences.
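One way to picture the cultural adjustment is as an extra performance shaping factor applied to a nominal human error probability. The sketch below is a hypothetical illustration of that idea only: the dimension scores, the `culture_psf` mapping, and its weights are invented, not the published CAM values.

```python
# Hypothetical sketch: culture as a performance shaping factor (PSF).
# The dimension scores, weights, and mapping are invented for illustration.
HOFSTEDE_PROFILE = {            # example country scores on a 0-100 scale
    "power_distance": 80,
    "uncertainty_avoidance": 60,
    "individualism": 25,
}

def culture_psf(profile, baseline=50.0, sensitivity=0.004):
    """Map deviation from a neutral cultural profile to a PSF multiplier."""
    deviation = sum(score - baseline for score in profile.values())
    return max(0.1, 1.0 + sensitivity * deviation)

nominal_hep = 1e-3                                # from the base HRA method
adjusted_hep = min(1.0, nominal_hep * culture_psf(HOFSTEDE_PROFILE))
print(f"culture-adjusted HEP: {adjusted_hep:.2e}")
```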
Human Reliability Analysis in Support of Risk Assessment for Positive Train Control
DOT National Transportation Integrated Search
2003-06-01
This report describes an approach to evaluating the reliability of human actions that are modeled in a probabilistic risk assessment (PRA) of train control operations. This approach to human reliability analysis (HRA) has been applied in the case o...
Stochastic Models of Human Errors
NASA Technical Reports Server (NTRS)
Elshamy, Maged; Elliott, Dawn M. (Technical Monitor)
2002-01-01
Humans play an important role in the overall reliability of engineering systems. More often than not, accidents and system failures are traced to human errors. Therefore, in order to have a meaningful system risk analysis, the reliability of the human element must be taken into consideration. Describing the human error process by mathematical models is key to analyzing contributing factors. The objective of this research effort is therefore to establish stochastic models, substantiated by a sound theoretical foundation, to address the occurrence of human errors in the processing of the Space Shuttle.
Method of Testing and Predicting Failures of Electronic Mechanical Systems
NASA Technical Reports Server (NTRS)
Iverson, David L.; Patterson-Hine, Frances A.
1996-01-01
A method employing a knowledge base of human expertise, comprising a reliability model analysis implemented for diagnostic routines, is disclosed. The reliability analysis comprises digraph models that determine target events created by hardware failures, human actions, and other factors affecting system operation. The reliability analysis contains a wealth of human expertise that is used to build automatic diagnostic routines and provides a knowledge base that can be used to solve other artificial intelligence problems.
Probabilistic simulation of the human factor in structural reliability
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Chamis, Christos C.
1991-01-01
Many structural failures have been attributed, at least in part, to human factors in engineering design, analysis, maintenance, and fabrication processes. Every facet of the engineering process is heavily governed by human factors and the degree of uncertainty associated with them. Societal, physical, professional, psychological, and many other factors introduce uncertainties that significantly influence the reliability of human performance. Quantifying human factors and their associated uncertainties in structural reliability requires: (1) identification of the fundamental factors that influence human performance, and (2) models to describe the interaction of these factors. An approach is being developed to quantify the uncertainties associated with human performance. This approach consists of a multifactor model in conjunction with direct Monte Carlo simulation.
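A minimal sketch of what a multifactor model combined with direct Monte Carlo simulation can look like: each factor carries its own uncertainty distribution, and the combined performance index is simulated. The multiplicative form and all distributions here are assumptions for illustration, not the paper's model.

```python
import numpy as np

# Each factor carries its own uncertainty; the combined performance index
# is simulated directly. Multiplicative form and distributions are assumed.
rng = np.random.default_rng(11)
n = 50_000
fatigue    = rng.beta(8, 2, n)      # 1.0 = fully rested
training   = rng.beta(9, 1.5, n)    # 1.0 = fully proficient
env_stress = rng.beta(7, 2, n)      # 1.0 = benign environment

performance = fatigue * training * env_stress
print("P(performance index < 0.5) =", np.mean(performance < 0.5))
```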
Reliability Analysis and Standardization of Spacecraft Command Generation Processes
NASA Technical Reports Server (NTRS)
Meshkat, Leila; Grenander, Sven; Evensen, Ken
2011-01-01
• In order to reduce commanding errors that are caused by humans, we create an approach and corresponding artifacts for standardizing the command generation process and conducting risk management during the design and assurance of such processes.
• The literature review conducted during the standardization process revealed that very few atomic-level human activities are associated with even a broad set of missions.
• Applicable human reliability metrics for performing these atomic-level tasks are available.
• The process for building a "Periodic Table" of Command and Control Functions as well as Probabilistic Risk Assessment (PRA) models is demonstrated.
• The PRA models are executed using data from human reliability data banks.
• The Periodic Table is related to the PRA models via Fault Links.
Towards automatic Markov reliability modeling of computer architectures
NASA Technical Reports Server (NTRS)
Liceaga, C. A.; Siewiorek, D. P.
1986-01-01
The analysis and evaluation of reliability measures using time-varying Markov models is required for Processor-Memory-Switch (PMS) structures that have competing processes such as standby redundancy and repair, or renewal processes such as transient or intermittent faults. The task of generating these models is tedious and prone to human error due to the large number of states and transitions involved in any reasonable system. Therefore model formulation is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model formulation. This paper presents an overview of the Automated Reliability Modeling (ARM) program, under development at NASA Langley Research Center. ARM will accept as input a description of the PMS interconnection graph, the behavior of the PMS components, the fault-tolerant strategies, and the operational requirements. The output of ARM will be the reliability or availability Markov model formulated for direct use by evaluation programs. The advantages of such an approach are (a) utility to a large class of users, not necessarily expert in reliability analysis, and (b) a lower probability of human error in the computation.
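For concreteness, here is the kind of time-homogeneous Markov reliability model ARM is meant to formulate automatically, written out by hand for a two-unit standby pair with imperfect switchover and repair. The states, rates, and coverage factor are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.linalg import expm

# Two-unit standby pair. States: 0 = primary up, spare ready;
# 1 = spare carrying the load, failed unit in repair; 2 = system failed
# (absorbing). lam = failure rate, mu = repair rate, c = switchover coverage.
lam, mu, c = 1e-3, 1e-2, 0.95            # per hour, illustrative

Q = np.array([
    [-lam,          c * lam,  (1 - c) * lam],
    [  mu,     -(mu + lam),            lam],
    [ 0.0,             0.0,            0.0],   # absorbing failure state
])

p0 = np.array([1.0, 0.0, 0.0])           # start with the primary running
t = 1000.0                                # mission time, hours
p_t = p0 @ expm(Q * t)                    # transient state probabilities
print(f"Reliability at t = {t:.0f} h: {1.0 - p_t[2]:.6f}")
```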
NASA Technical Reports Server (NTRS)
Denning, Peter J.
1990-01-01
Although powerful computers have allowed complex physical and manmade hardware systems to be modeled successfully, we have encountered persistent problems with the reliability of computer models for systems involving human learning, human action, and human organizations. This is not a misfortune; unlike physical and manmade systems, human systems do not operate under a fixed set of laws. The rules governing the actions allowable in the system can be changed without warning at any moment, and can evolve over time. That the governing laws are inherently unpredictable raises serious questions about the reliability of models when applied to human situations. In these domains, computers are better used not for prediction and planning, but for aiding humans. Examples are systems that help humans speculate about possible futures, systems that offer advice about possible actions in a domain, systems that gather information from networks, and systems that track and support work flows in organizations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boring, Ronald; Mandelli, Diego; Rasmussen, Martin
2016-06-01
This report presents an application of a computation-based human reliability analysis (HRA) framework called the Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER). HUNTER has been developed not as a standalone HRA method but rather as a framework that ties together different HRA methods to model the dynamic risk of human activities as part of an overall probabilistic risk assessment (PRA). While we have adopted particular methods to build an initial model, the HUNTER framework is meant to be intrinsically flexible to new pieces that achieve particular modeling goals. In the present report, the HUNTER implementation has the following goals:
• Integration with a high-fidelity thermal-hydraulic model capable of modeling nuclear power plant behaviors and transients
• Consideration of a PRA context
• Incorporation of a solid psychological basis for operator performance
• Demonstration of a functional dynamic model of a plant upset condition and appropriate operator response
This report outlines these efforts and presents the case study of a station blackout scenario to demonstrate the various modules developed to date under the HUNTER research umbrella.
Tsao, Liuxing; Ma, Liang
2016-11-01
Digital human modelling enables ergonomists and designers to consider ergonomic concerns and design alternatives in a timely and cost-efficient manner in the early stages of design. However, the reliability of the simulation could be limited due to the percentile-based approach used in constructing the digital human model. To enhance the accuracy of the size and shape of the models, we proposed a framework to generate digital human models using three-dimensional (3D) anthropometric data. The 3D scan data from specific subjects' hands were segmented based on the estimated centres of rotation. The segments were then driven in forward kinematics to perform several functional postures. The constructed hand models were then verified, thereby validating the feasibility of the framework. The proposed framework helps generate accurate subject-specific digital human models, which can be utilised to guide product design and workspace arrangement. Practitioner Summary: Subject-specific digital human models can be constructed under the proposed framework based on three-dimensional (3D) anthropometry. This approach enables more reliable digital human simulation to guide product design and workspace arrangement.
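The forward-kinematics step described above can be sketched in a few lines: each hand segment is rotated about its estimated centre of rotation and the transform is propagated down the chain. The segment lengths and joint angles below are invented for illustration, not taken from the study's scan data.

```python
import numpy as np

# Planar forward kinematics for one finger: rotate each segment about its
# joint and propagate the frame down the chain.
def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

segments = [0.05, 0.03, 0.02]             # phalanx lengths, metres (assumed)
angles = np.radians([30, 25, 15])         # flexion at each joint (assumed)

tip = np.zeros(2)
frame = np.eye(2)
for length, theta in zip(segments, angles):
    frame = frame @ rot(theta)            # accumulate joint rotations
    tip = tip + frame @ np.array([length, 0.0])
print("fingertip position:", tip)
```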
A Research Roadmap for Computation-Based Human Reliability Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boring, Ronald; Mandelli, Diego; Joe, Jeffrey
2015-08-01
The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermal-hydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full-scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodels for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing the modeling uncertainty found in current plant risk models.
Human Reliability and Ship Stability
2003-07-04
Report excerpt: central to models such as Miller (1957) and Broadbent (1959) is the idea of human beings as limited-capacity information processors with constraints on ... The report also outlines some key models, including the Generic Error Modeling System.
The Importance of Human Reliability Analysis in Human Space Flight: Understanding the Risks
NASA Technical Reports Server (NTRS)
Hamlin, Teri L.
2010-01-01
HRA is a method used to describe, qualitatively and quantitatively, the occurrence of human failures in the operation of complex systems that affect availability and reliability. Modeling human actions with their corresponding failure in a PRA (Probabilistic Risk Assessment) provides a more complete picture of the risk and risk contributions. A high quality HRA can provide valuable information on potential areas for improvement, including training, procedural, equipment design and need for automation.
An Evidential Reasoning-Based CREAM to Human Reliability Analysis in Maritime Accident Process.
Wu, Bing; Yan, Xinping; Wang, Yang; Soares, C Guedes
2017-10-01
This article proposes a modified cognitive reliability and error analysis method (CREAM) for estimating the human error probability in the maritime accident process on the basis of an evidential reasoning approach. This modified CREAM is developed to precisely quantify the linguistic variables of the common performance conditions and to overcome the problem of ignoring the uncertainty caused by incomplete information in the existing CREAM models. Moreover, this article views maritime accident development from a sequential perspective, and a scenario- and barrier-based framework is proposed to describe the maritime accident process. This evidential reasoning-based CREAM approach, together with the proposed accident development framework, is applied to the human reliability analysis of a ship capsizing accident. It will facilitate subjective human reliability analysis in different engineering systems where uncertainty exists in practice. © 2017 Society for Risk Analysis.
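As background, the basic CREAM screening logic that the modified method builds on can be sketched as follows: score the nine common performance conditions (CPCs), compare how many reduce versus improve reliability, and map the result to a control mode with an associated HEP interval. The control-mode boundaries in `control_mode` below are a simplified stand-in for CREAM's published diagram, not the method's exact rule and not the evidential-reasoning extension itself.

```python
# Simplified stand-in for CREAM's control-mode lookup (not the exact rule).
CONTROL_MODES = {                   # control mode: (HEP lower, HEP upper)
    "strategic":     (0.5e-5, 1e-2),
    "tactical":      (1e-3,   1e-1),
    "opportunistic": (1e-2,   0.5),
    "scrambled":     (1e-1,   1.0),
}

def control_mode(n_reduced, n_improved):
    """Map counts of reliability-reducing vs. -improving CPCs to a mode."""
    score = n_reduced - n_improved
    if score <= 0:
        return "strategic" if n_improved >= 4 else "tactical"
    return "opportunistic" if score <= 4 else "scrambled"

mode = control_mode(n_reduced=5, n_improved=1)
print(mode, CONTROL_MODES[mode])
```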
The flaws and human harms of animal experimentation.
Akhtar, Aysha
2015-10-01
Nonhuman animal ("animal") experimentation is typically defended by arguments that it is reliable, that animals provide sufficiently good models of human biology and diseases to yield relevant information, and that, consequently, its use provides major human health benefits. I demonstrate that a growing body of scientific literature critically assessing the validity of animal experimentation generally (and animal modeling specifically) raises important concerns about its reliability and predictive value for human outcomes and for understanding human physiology. The unreliability of animal experimentation across a wide range of areas undermines scientific arguments in favor of the practice. Additionally, I show how animal experimentation often significantly harms humans through misleading safety studies, potential abandonment of effective therapeutics, and direction of resources away from more effective testing methods. The resulting evidence suggests that the collective harms and costs to humans from animal experimentation outweigh potential benefits and that resources would be better invested in developing human-based testing methods.
Reliable models for assessing human exposures are important for understanding health risks from chemicals. The Stochastic Human Exposure and Dose Simulation model for multimedia, multi-route/pathway chemicals (SHEDS-Multimedia), developed by EPA’s Office of Research and Developm...
Modeling human disease using organotypic cultures.
Schweiger, Pawel J; Jensen, Kim B
2016-12-01
Reliable disease models are needed in order to improve the quality of healthcare. This includes gaining a better understanding of disease mechanisms, developing new therapeutic interventions and personalizing treatment. To date, the majority of our knowledge about disease states comes from in vivo animal models and in vitro cell culture systems. However, it has been exceedingly difficult to model disease at the tissue level. Recently, the gap between cell line studies and in vivo modeling has been narrowing thanks to progress in biomaterials and stem cell research. The development of reliable 3D culture systems has enabled a rapid expansion of sophisticated in vitro models. Here we focus on some of the latest advances and future perspectives in 3D organoids for human disease modeling. Copyright © 2016 Elsevier Ltd. All rights reserved.
Monkeys and humans take local uncertainty into account when localizing a change.
Devkar, Deepna; Wright, Anthony A; Ma, Wei Ji
2017-09-01
Since sensory measurements are noisy, an observer is rarely certain about the identity of a stimulus. In visual perception tasks, observers generally take their uncertainty about a stimulus into account when doing so helps task performance. Whether the same holds in visual working memory tasks is largely unknown. Ten human and two monkey subjects localized a single change in orientation between a sample display containing three ellipses and a test display containing two ellipses. To manipulate uncertainty, we varied the reliability of orientation information by making each ellipse more or less elongated (two levels); reliability was independent across the stimuli. In both species, a variable-precision encoding model equipped with an "uncertainty-indifferent" decision rule, which uses only the noisy memories, fitted the data poorly. In both species, a much better fit was provided by a model in which the observer also takes the levels of reliability-driven uncertainty associated with the memories into account. In particular, a measured change in a low-reliability stimulus was given lower weight than the same change in a high-reliability stimulus. We did not find strong evidence that observers took reliability-independent variations in uncertainty into account. Our results illustrate the importance of studying the decision stage in comparison tasks and provide further evidence for evolutionary continuity of working memory systems between monkeys and humans.
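The contrast between the two decision rules can be reproduced with a toy simulation under assumed Gaussian noise: an "uncertainty-indifferent" rule picks the larger measured change, while a reliability-weighted rule down-weights the noisier measurement. The weighting by inverse variance is a simple illustrative choice, not the paper's fitted variable-precision model.

```python
import numpy as np

# Two probed stimuli; the high- and low-reliability probes are remembered
# with different noise. Stimulus 0 actually changed by 15 degrees.
rng = np.random.default_rng(0)
sigma = np.array([5.0, 20.0])          # memory noise, deg (high/low reliability)
true_change = np.array([15.0, 0.0])

def trial():
    measured = true_change + rng.normal(0.0, sigma)
    naive = np.argmax(np.abs(measured))             # ignore uncertainty
    aware = np.argmax(np.abs(measured) / sigma**2)  # down-weight noisy cue
    return naive == 0, aware == 0

results = np.array([trial() for _ in range(20_000)])
print("uncertainty-indifferent accuracy:", results[:, 0].mean())
print("uncertainty-aware accuracy:     ", results[:, 1].mean())
```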
On Space Exploration and Human Error: A Paper on Reliability and Safety
NASA Technical Reports Server (NTRS)
Bell, David G.; Maluf, David A.; Gawdiak, Yuri
2005-01-01
NASA space exploration should largely address a problem class in reliability and risk management stemming primarily from human error, system risk, and multi-objective trade-off analysis, by conducting research into system complexity, risk characterization and modeling, and system reasoning. In general, in every mission we can distinguish risk in three possible ways: (a) known-known, (b) known-unknown, and (c) unknown-unknown. It is almost certain that space exploration will experience some of the same known or unknown risks embedded in the Apollo, Shuttle, or Station missions unless something alters how NASA perceives and manages safety and reliability.
The Use Of Computational Human Performance Modeling As Task Analysis Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacques Hugo; David Gertman
2012-07-01
During a review of the Advanced Test Reactor safety basis at the Idaho National Laboratory, human factors engineers identified ergonomic and human reliability risks involving the inadvertent exposure of a fuel element to the air during manual fuel movement and inspection in the canal. There were clear indications that these risks increased the probability of human error and possible severe physical outcomes to the operator. In response to this concern, a detailed study was conducted to determine the probability of the inadvertent exposure of a fuel element. Due to practical and safety constraints, the task network analysis technique was employed to study the work procedures at the canal. Discrete-event simulation software was used to model the entire procedure as well as the salient physical attributes of the task environment, such as distances walked, the effect of dropped tools, the effect of hazardous body postures, and physical exertion due to strenuous tool handling. The model also allowed analysis of the effect of cognitive processes such as visual perception demands, auditory information and verbal communication. The model made it possible to obtain reliable predictions of operator performance and workload estimates. It was also found that operator workload as well as the probability of human error in the fuel inspection and transfer task were influenced by the concurrent nature of certain phases of the task and the associated demand on cognitive and physical resources. More importantly, it was possible to determine with reasonable accuracy the stages as well as physical locations in the fuel handling task where operators would be most at risk of losing their balance and falling into the canal. The model also provided sufficient information for a human reliability analysis that indicated that the postulated fuel exposure accident was less than credible.
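A toy discrete-event rendering of such a task-network model is sketched below, using SimPy as a stand-in for the simulation software used in the study; the subtask names and timing distributions are invented for illustration.

```python
import random
import simpy

# Sequential fuel-handling subtasks with stochastic durations (invented).
def fuel_move(env, log):
    for task, (lo, hi) in [("walk to canal", (30, 60)),
                           ("attach handling tool", (20, 90)),
                           ("lift fuel element", (40, 120)),
                           ("transfer and inspect", (60, 180))]:
        yield env.timeout(random.uniform(lo, hi))   # task duration, seconds
        log.append((task, round(env.now, 1)))

log = []
env = simpy.Environment()
env.process(fuel_move(env, log))
env.run()
print(log)   # completion time of each subtask
```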
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stacey M. L. Hendrickson; April M. Whaley; Ronald L. Boring
The Office of Nuclear Regulatory Research (RES) is sponsoring work in response to a Staff Requirements Memorandum (SRM) directing an effort to establish a single human reliability analysis (HRA) method for the agency or guidance for the use of multiple methods. As part of this effort an attempt to develop a comprehensive HRA qualitative approach is being pursued. This paper presents a draft of the method's middle layer, a part of the qualitative analysis phase that links failure mechanisms to performance shaping factors. Starting with a Crew Response Tree (CRT) that has identified human failure events, analysts identify potential failure mechanisms using the mid-layer model. The mid-layer model presented in this paper traces the identification of the failure mechanisms using the Information-Diagnosis/Decision-Action (IDA) model and cognitive models from the psychological literature. Each failure mechanism is grouped according to a phase of IDA. Under each phase of IDA, the cognitive models help identify the relevant performance shaping factors for the failure mechanism. The use of IDA and cognitive models can be traced through fault trees, which provide a detailed complement to the CRT.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Song-Hua; Chang, James Y. H.; Boring, Ronald L.
2010-03-01
The Office of Nuclear Regulatory Research (RES) at the US Nuclear Regulatory Commission (USNRC) is sponsoring work in response to a Staff Requirements Memorandum (SRM) directing an effort to establish a single human reliability analysis (HRA) method for the agency or guidance for the use of multiple methods. As part of this effort an attempt to develop a comprehensive HRA qualitative approach is being pursued. This paper presents a draft of the method's middle layer, a part of the qualitative analysis phase that links failure mechanisms to performance shaping factors. Starting with a Crew Response Tree (CRT) that has identified human failure events, analysts identify potential failure mechanisms using the mid-layer model. The mid-layer model presented in this paper traces the identification of the failure mechanisms using the Information-Diagnosis/Decision-Action (IDA) model and cognitive models from the psychological literature. Each failure mechanism is grouped according to a phase of IDA. Under each phase of IDA, the cognitive models help identify the relevant performance shaping factors for the failure mechanism. The use of IDA and cognitive models can be traced through fault trees, which provide a detailed complement to the CRT.
Statistical validity of using ratio variables in human kinetics research.
Liu, Yuanlong; Schutz, Robert W
2003-09-01
The purposes of this study were to investigate the validity of the simple ratio and three alternative deflation models and to examine how the variation of the numerator and denominator variables affects the reliability of a ratio variable. A simple ratio and three alternative deflation models were fitted to four empirical data sets, and common criteria were applied to determine the best model for deflation. Intraclass correlation was used to examine the component effect on the reliability of a ratio variable. The results indicate that the validity of a deflation model depends on the statistical characteristics of the particular component variables used, and that an optimal deflation model for all ratio variables may not exist. Therefore, it is recommended that different models be fitted to each empirical data set to determine the best deflation model. It was found that the reliability of a simple ratio is affected by the coefficients of variation and by the within- and between-trial correlations between the numerator and denominator variables. Researchers should therefore compute the reliability of the derived ratio scores and not assume that strong reliabilities in the numerator and denominator measures automatically lead to high reliability in the ratio measures.
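The component effect on ratio reliability is easy to demonstrate by simulation: even when the numerator and denominator are each measured with the same noise, the test-retest correlation of the derived ratio degrades as measurement noise grows. All values below are illustrative, not the study's data.

```python
import numpy as np

# Two simulated trials of a ratio measure.
rng = np.random.default_rng(1)
n = 500
true_num = rng.normal(100, 15, n)       # e.g., peak force (assumed)
true_den = rng.normal(70, 5, n)         # e.g., body mass (assumed)

def observe(true_scores, noise_sd):
    return true_scores + rng.normal(0, noise_sd, n)

for noise in (2.0, 8.0):
    r1 = observe(true_num, noise) / observe(true_den, noise)
    r2 = observe(true_num, noise) / observe(true_den, noise)
    r = np.corrcoef(r1, r2)[0, 1]       # test-retest reliability of the ratio
    print(f"noise sd {noise}: ratio test-retest r = {r:.2f}")
```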
From bedside to bench and back again: research issues in animal models of human disease.
Tkacs, Nancy C; Thompson, Hilaire J
2006-07-01
To improve outcomes for patients with many serious clinical problems, multifactorial research approaches by nurse scientists, including the use of animal models, are necessary. Animal models serve as analogies for clinical problems seen in humans and must meet certain criteria, including validity and reliability, to be useful in moving research efforts forward. This article describes research considerations in the development of rodent models. As the standard of diabetes care evolves to emphasize intensive insulin therapy, rates of severe hypoglycemia are increasing among patients with type 1 and type 2 diabetes mellitus. A consequence of this change in clinical practice is an increase in rates of two hypoglycemia-related diabetes complications: hypoglycemia-associated autonomic failure (HAAF) and resulting hypoglycemia unawareness. Work on an animal model of HAAF is in an early developmental stage, with several labs reporting different approaches to model this complication of type 1 diabetes mellitus. This emerging model serves as an example illustrating how evaluation of validity and reliability is critically important at each stage of developing and testing animal models to support inquiry into human disease.
Go, Kristina L; Delitto, Daniel; Judge, Sarah M; Gerber, Michael H; George, Thomas J; Behrns, Kevin E; Hughes, Steven J; Judge, Andrew R; Trevino, Jose G
2017-07-01
Limitations associated with current animal models serve as a major obstacle to reliable preclinical evaluation of therapies in pancreatic cancer (PC). In an effort to develop more reliable preclinical models, we have recently established a subcutaneous patient-derived xenograft (PDX) model. However, critical aspects of PC responsible for its highly lethal nature, such as the development of distant metastasis and cancer cachexia, remain underrepresented in the flank PDX model. The purpose of this study was to evaluate the degree to which an orthotopic PDX model of PC recapitulates these aspects of the human disease. Human PDX-derived PC tumors were implanted directly into the pancreas of NOD.Cg-Prkdc Il2rg/SzJ mice. Tumor growth, metastasis, and muscle wasting were then evaluated. Orthotopically implanted PDX-derived tumors consistently incorporated into the murine pancreatic parenchyma, metastasized to both the liver and lungs, and induced muscle wasting directly proportional to the size of the tumor, consistent with the cancer cachexia syndrome. Through the orthotopic implantation technique described, we demonstrate a highly reproducible model that recapitulates both local and systemic aspects of human PC.
Byrne, Patrick A; Crawford, J Douglas
2010-06-01
It is not known how egocentric visual information (location of a target relative to the self) and allocentric visual information (location of a target relative to external landmarks) are integrated to form reach plans. Based on behavioral data from rodents and humans, we hypothesized that the degree of stability in visual landmarks would influence the relative weighting. Furthermore, based on numerous cue-combination studies, we hypothesized that the reach system would act like a maximum-likelihood estimator (MLE), where the reliability of both cues determines their relative weighting. To predict how these factors might interact, we developed an MLE model that weighs egocentric and allocentric information based on their respective reliabilities, and also on an additional stability heuristic. We tested the predictions of this model in 10 human subjects by manipulating landmark stability and reliability (via variable-amplitude vibration of the landmarks and variable-amplitude gaze shifts) in three reach-to-touch tasks: an egocentric control (reaching without landmarks), an allocentric control (reaching relative to landmarks), and a cue-conflict task (involving a subtle landmark "shift" during the memory interval). Variability from all three experiments was used to derive parameters for the MLE model, which was then used to simulate egocentric-allocentric weighting in the cue-conflict experiment. As predicted by the model, landmark vibration, despite its lack of influence on pointing variability (and thus allocentric reliability) in the control experiment, had a strong influence on egocentric-allocentric weighting. A reduced model without the stability heuristic was unable to reproduce this effect. These results suggest that heuristics for extrinsic cue stability are at least as important as reliability for determining cue weighting in memory-guided reaching.
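The core of such an MLE model with a stability heuristic fits in a few lines: allocentric weight is proportional to landmark reliability (inverse variance), scaled by a stability term. The functional form and numbers below are assumptions for illustration, not the paper's fitted parameters.

```python
# Reliability-weighted cue combination with a stability term (illustrative).
def combine(ego, allo, sigma_ego, sigma_allo, stability=1.0):
    """Weight egocentric vs. allocentric position estimates.

    stability in [0, 1] down-weights landmarks that appear unstable
    (e.g., vibrating), over and above their measured reliability.
    """
    w_allo = stability / sigma_allo**2
    w_ego = 1.0 / sigma_ego**2
    w = w_allo / (w_allo + w_ego)
    return w * allo + (1.0 - w) * ego

# Landmark "shifted" 3 cm during the memory interval (cue conflict):
print(combine(ego=0.0, allo=3.0, sigma_ego=2.0, sigma_allo=1.0))                 # stable
print(combine(ego=0.0, allo=3.0, sigma_ego=2.0, sigma_allo=1.0, stability=0.3))  # vibrating
```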
Predictive models of safety based on audit findings: Part 1: Model development and reliability.
Hsiao, Yu-Lin; Drury, Colin; Wu, Changxu; Paquet, Victor
2013-03-01
This study, reported in two consecutive parts, was aimed at the quantitative validation of safety audit tools as predictors of safety performance, as we were unable to find prior studies that tested audit validity against safety outcomes. An aviation maintenance domain was chosen for this work because both audits and safety outcomes are currently prescribed and regulated. In Part 1, we developed a Human Factors/Ergonomics classification framework, based on the HFACS model (Shappell and Wiegmann, 2001a,b), for the human errors detected by audits, because merely counting audit findings did not predict future safety. The framework was tested for measurement reliability using four participants, two of whom classified errors on 1238 audit reports. Kappa values leveled out after about 200 audits at between 0.5 and 0.8 for different tiers of error categories. This showed sufficient reliability to proceed with prediction validity testing in Part 2. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.
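Inter-rater agreement of the kind reported (kappa between 0.5 and 0.8) can be computed as below; the toy labels stand in for HFACS-style error categories and are not the study's data.

```python
from sklearn.metrics import cohen_kappa_score

# Toy stand-ins for two raters assigning HFACS-style error categories.
rater_a = ["skill", "decision", "skill", "perceptual", "decision", "skill"]
rater_b = ["skill", "decision", "violation", "perceptual", "decision", "skill"]
print(cohen_kappa_score(rater_a, rater_b))   # 1.0 would be perfect agreement
```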
Dasgupta, Gargi; BenMohamed, Lbachir
2011-01-01
Herpes simplex virus type 1 and type 2 (HSV-1 and HSV-2)-specific CD8+ T cells that reside in sensory ganglia appear to control recurrent herpetic disease by aborting or reducing spontaneous and sporadic reactivations of latent virus. A reliable animal model is the ultimate key factor in testing the efficacy of therapeutic vaccines that boost the level and the quality of sensory ganglia-resident CD8+ T cells against spontaneous herpes reactivation from sensory neurons, yet its relevance has often been overlooked. Herpes vaccinologists are hesitant about using the mouse as a model in the pre-clinical development of therapeutic vaccines because mice do not adequately mimic spontaneous viral shedding or recurrent symptomatic disease as they occur in humans. Alternatives to mouse models are rabbits and guinea pigs, in which reactivations arise spontaneously with clinical features relevant to human disease. However, while rabbits and guinea pigs develop spontaneous HSV reactivation and recurrent ocular and genital disease, none of them can mount CD8+ T cell responses specific to Human Leukocyte Antigen- (HLA-) restricted epitopes. In this review, we discuss the advantages and limitations of these animal models and describe a novel "humanized" HLA transgenic rabbit, which shows spontaneous HSV-1 reactivation and recurrent ocular disease and mounts CD8+ T cell responses to HLA-restricted epitopes. Adequate investments are needed to develop reliable preclinical animal models, such as HLA class I and class II double transgenic rabbits and guinea pigs, to balance the ethical and financial concerns associated with the rising number of unsuccessful clinical trials for therapeutic vaccine formulations tested in unreliable mouse models. PMID:21718746
The Application of Humanized Mouse Models for the Study of Human Exclusive Viruses.
Vahedi, Fatemeh; Giles, Elizabeth C; Ashkar, Ali A
2017-01-01
The symbiosis between humans and viruses has allowed human-tropic pathogens to evolve intricate means of modulating the human immune response to ensure their survival among the human population. In doing so, these viruses have developed profound mechanisms that mesh closely with our human biology. The establishment of this intimate relationship has created a species-specific barrier to infection, restricting the virus-associated pathologies to humans. This specificity diminishes the utility of traditional animal models. Humanized mice offer a model unlike any other means of study, providing an in vivo platform for the careful examination of human-tropic viruses and their interaction with human cells and tissues. These types of animal models have provided a reliable medium for the study of human-virus interactions, a relationship that could not otherwise be investigated without questionable relevance to humans.
The Challenges of Credible Thermal Protection System Reliability Quantification
NASA Technical Reports Server (NTRS)
Green, Lawrence L.
2013-01-01
The paper discusses several of the challenges associated with developing a credible reliability estimate for a human-rated crew capsule thermal protection system. The process of developing such a credible estimate is subject to the quantification, modeling and propagation of numerous uncertainties within a probabilistic analysis. The development of specific investment recommendations, to improve the reliability prediction, among various potential testing and programmatic options is then accomplished through Bayesian analysis.
Citizen science: A new perspective to advance spatial pattern evaluation in hydrology.
Koch, Julian; Stisen, Simon
2017-01-01
Citizen science opens new pathways that can complement traditional scientific practice. Intuition and reasoning often make humans more effective than computer algorithms in various realms of problem solving. In particular, a simple visual comparison of spatial patterns is a task where humans are often considered to be more reliable than computer algorithms. In practice, however, science still largely depends on computer-based solutions, which bring benefits such as speed and the possibility of automating processes. Human vision can nevertheless be harnessed to evaluate the reliability of algorithms which are tailored to quantify similarity in spatial patterns. We established a citizen science project to employ human perception to rate similarity and dissimilarity between simulated spatial patterns of several scenarios of a hydrological catchment model. In total, more than 2500 volunteers provided over 43000 classifications of 1095 individual subjects. We investigate the capability of a set of advanced statistical performance metrics to mimic the human ability to distinguish between similarity and dissimilarity. Results suggest that more complex metrics are not necessarily better at emulating human perception, but they clearly provide auxiliary information that is valuable for model diagnostics. The metrics clearly differ in their ability to unambiguously distinguish between similar and dissimilar patterns, which is regarded as a key feature of a reliable metric. The obtained dataset can provide an insightful benchmark to the community for testing novel spatial metrics.
Automatic specification of reliability models for fault-tolerant computers
NASA Technical Reports Server (NTRS)
Liceaga, Carlos A.; Siewiorek, Daniel P.
1993-01-01
The calculation of reliability measures using Markov models is required for life-critical processor-memory-switch structures that have standby redundancy or that are subject to transient or intermittent faults or repair. The task of specifying these models is tedious and prone to human error because of the large number of states and transitions required in any reasonable system. Therefore, model specification is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model specification. Automation requires a general system description language (SDL). For practicality, this SDL should also provide a high level of abstraction and be easy to learn and use. The first attempt to define and implement an SDL with those characteristics is presented. A program named Automated Reliability Modeling (ARM) was constructed as a research vehicle. The ARM program uses a graphical interface as its SDL, and it outputs a Markov reliability model specification formulated for direct use by programs that generate and evaluate the model.
Laidoune, Abdelbaki; Rahal Gharbi, Med El Hadi
2016-09-01
The influence of sociocultural factors on human reliability within open sociotechnical systems is highlighted. The design of such systems is enhanced by experience feedback. The study was focused on a survey based on the observation of working cases and on the processing of incident/accident statistics, with semistructured interviews in the qualitative part. In order to consolidate the study approach, we used a questionnaire schedule for the purpose of standardized statistical measurement. We tried to avoid bias by covering an exhaustive list of all worker categories, including age, sex, educational level, prescribed task, accountability level, etc. The survey was reinforced by a schedule distributed to 300 workers belonging to two oil companies. This schedule comprises 30 items related to six main factors that influence human reliability. Qualitative observations and schedule data processing showed that sociocultural factors can influence operator behaviors both negatively and positively, and that they influence human reliability in both qualitative and quantitative manners. The proposed model shows how reliability can be enhanced by measures such as experience feedback based on, for example, safety improvements, training, and information, together with continuous system improvements that improve the sociocultural reality and reduce negative behaviors.
Cheung, Connie; Gonzalez, Frank J
2008-01-01
Cytochrome P450s (P450s) are important enzymes involved in the metabolism of xenobiotics, particularly clinically used drugs, and are also responsible for the metabolic activation of chemical carcinogens and toxins. Many xenobiotics can activate nuclear receptors that in turn induce the expression of genes encoding xenobiotic-metabolizing enzymes and drug transporters. Marked species differences in the expression and regulation of cytochromes P450 and xenobiotic nuclear receptors exist, which severely limits the reliability of rodent models for accurately reflecting human drug and carcinogen metabolism. Humanized transgenic mice were developed in an effort to create more reliable in vivo systems for studying and predicting human responses to xenobiotics. Human P450s or human xenobiotic-activated nuclear receptors were introduced directly or replaced the corresponding mouse gene, thus creating "humanized" transgenic mice. Mice expressing human CYP1A1/CYP1A2, CYP2E1, CYP2D6, CYP3A4, CYP3A7, PXR, and PPARα were generated and characterized. These humanized mouse models offer broad utility in the evaluation and prediction of toxicological risk and may aid in the development of safer drugs. PMID:18682571
Issues in benchmarking human reliability analysis methods: a literature review.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lois, Erasmia; Forester, John Alan; Tran, Tuan Q.
There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessment (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.
Issues in Benchmarking Human Reliability Analysis Methods: A Literature Review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronald L. Boring; Stacey M. L. Hendrickson; John A. Forester
There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessments (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study comparing and evaluating HRA methods in assessing operator performance in simulator experiments is currently underway. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.
A question driven socio-hydrological modeling process
NASA Astrophysics Data System (ADS)
Garcia, M.; Portney, K.; Islam, S.
2016-01-01
Human and hydrological systems are coupled: human activity impacts the hydrological cycle, and hydrological conditions can, but do not always, trigger changes in human systems. Traditional modeling approaches with no feedback between hydrological and human systems typically cannot offer insight into how different patterns of natural variability or human-induced changes may propagate through this coupled system. Modeling of coupled human-hydrological systems, also called socio-hydrological systems, recognizes the potential for humans to transform hydrological systems and for hydrological conditions to influence human behavior. However, this coupling introduces new challenges, and existing literature does not offer clear guidance regarding model conceptualization. There are no universally accepted laws of human behavior as there are for physical systems; furthermore, a shared understanding of important processes within the field is often used to develop hydrological models, but there is no such consensus on the relevant processes in socio-hydrological systems. Here we present a question-driven process to address these challenges. Such an approach allows modeling structure, scope, and detail to remain contingent on and adaptive to the question context. We demonstrate the utility of this process by revisiting a classic question in water resources engineering on reservoir operation rules: what is the impact of reservoir operation policy on the reliability of water supply for a growing city? Our example model couples hydrological and human systems by linking the rate of demand decreases to past reliability, to compare standard operating policy (SOP) with hedging policy (HP). The model shows that reservoir storage acts both as a buffer for variability and as a delay triggering oscillations around a sustainable level of demand. HP reduces the threshold for action, thereby decreasing the delay and the oscillation effect. As a result, per capita demand decreases during periods of water stress are more frequent but less drastic, and the additive effect of small adjustments decreases the tendency of the system to overshoot available supplies. This distinction between the two policies was not apparent using a traditional noncoupled model.
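A toy version of the coupled model makes the SOP-versus-HP contrast concrete: demand adapts downward after shortfalls and drifts upward otherwise, and hedging rations early whenever storage drops below a buffer. All numbers and the specific feedback rule are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate(hedging, years=200, capacity=100.0):
    """Reservoir with demand that adapts to past reliability (toy model)."""
    storage, demand = capacity, 40.0
    shortfalls = 0
    for _ in range(years):
        inflow = max(0.0, rng.normal(40.0, 15.0))
        storage = min(capacity, storage + inflow)
        target = demand
        if hedging and storage < 0.5 * capacity:
            target = 0.8 * demand          # HP: small early cutback
        release = min(target, storage)     # SOP: meet demand while water lasts
        storage -= release
        if release < demand:
            demand *= 0.95                 # stress triggers conservation
            shortfalls += 1
        else:
            demand *= 1.01                 # demand growth in easy years
    return shortfalls, round(demand, 1)

print("SOP (shortfalls, final demand):", simulate(hedging=False))
print("HP  (shortfalls, final demand):", simulate(hedging=True))
```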
Carlisle, Daren M.; Wolock, David M.; Howard, Jeannette K.; Grantham, Theodore E.; Fesenmyer, Kurt; Wieczorek, Michael
2016-12-12
Because natural patterns of streamflow are a fundamental property of the health of streams, there is a critical need to quantify the degree to which human activities have modified natural streamflows. A requirement for assessing streamflow modification in a given stream is a reliable estimate of flows expected in the absence of human influences. Although there are many techniques to predict streamflows in specific river basins, there is a lack of approaches for making predictions of natural conditions across large regions and over many decades. In this study conducted by the U.S. Geological Survey, in cooperation with The Nature Conservancy and Trout Unlimited, the primary objective was to develop empirical models that predict natural (that is, unaffected by land use or water management) monthly streamflows from 1950 to 2012 for all stream segments in California. Models were developed using measured streamflow data from the existing network of streams where daily flow monitoring occurs, but where the drainage basins have minimal human influences. Widely available data on monthly weather conditions and the physical attributes of river basins were used as predictor variables. Performance of regional-scale models was comparable to that of published mechanistic models for specific river basins, indicating the models can be reliably used to estimate natural monthly flows in most California streams. A second objective was to develop a model that predicts the likelihood that streams experience modified hydrology. New models were developed to predict modified streamflows at 558 streamflow monitoring sites in California where human activities affect the hydrology, using basin-scale geospatial indicators of land use and water management. Performance of these models was less reliable than that for the natural-flow models, but results indicate the models could be used to provide a simple screening tool for identifying, across the State of California, which streams may be experiencing anthropogenic flow modification.
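Generically, the empirical approach amounts to training a regression model on minimally disturbed gaged basins, with monthly weather and static basin attributes as predictors, and then applying it to ungaged segments. The sketch below uses synthetic data and a random-forest regressor as one plausible choice; the features, data, and algorithm are assumptions for illustration, not the study's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for reference (minimally disturbed) basins.
rng = np.random.default_rng(3)
n = 2000
X = np.column_stack([
    rng.gamma(2.0, 40.0, n),       # monthly precipitation, mm
    rng.normal(12.0, 6.0, n),      # monthly mean temperature, degC
    rng.lognormal(5.0, 1.0, n),    # drainage area, km^2
    rng.uniform(0.0, 1.0, n),      # forested fraction of basin
])
# Invented "natural flow" signal plus noise, for demonstration only.
y = 0.02 * X[:, 0] * X[:, 2] ** 0.8 - 0.5 * X[:, 1] + rng.normal(0, 5, n)

model = RandomForestRegressor(n_estimators=200, random_state=0)
print("cross-validated R^2:",
      cross_val_score(model, X, y, cv=5, scoring="r2").mean())
```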
Helfert, S; Reimer, M; Barnscheid, L; Hüllemann, P; Rengelshausen, J; Keller, T; Baron, R; Binder, A
2018-05-14
Human experimental pain models in healthy subjects offer unique possibilities to study the mechanisms of pain within a defined setting of expected pain symptoms, signs, and mechanisms. Previous trials in healthy subjects demonstrated that topical application of 40% menthol is suitable for inducing cold hyperalgesia. The objective of this study was to evaluate the impact of suggestion on this experimental human pain model. The study was performed within a single-centre, randomized, placebo-controlled, double-blind, two-period crossover trial in a cohort of 16 healthy subjects. Subjects were tested twice after topical menthol application (40% dissolved in ethanol) and twice after ethanol (as placebo) application. In the style of a balanced placebo trial design, the subjects received the correct information about the applied substance (topical menthol or ethanol) during half of the testing and incorrect information during the other half, leading to four tested conditions (treatment conditions: menthol-told-menthol and menthol-told-ethanol; placebo conditions: ethanol-told-menthol and ethanol-told-ethanol). Cold, but not mechanical, hyperalgesia was reliably induced by the model. The cold pain threshold decreased in both treatment conditions regardless of whether true or false information was given. Minor suggestion effects were found in subjects with prior ethanol application. The menthol model is a reliable, non-suggestible model for inducing cold hyperalgesia; mechanical hyperalgesia is not induced as reliably. Cold hyperalgesia may be investigated under unbiased and suggestion-free conditions using the menthol model of pain. © 2018 European Pain Federation - EFIC®.
Yao, X; Anderson, D L; Ross, S A; Lang, D G; Desai, B Z; Cooper, D C; Wheelan, P; McIntyre, M S; Bergquist, M L; MacKenzie, K I; Becherer, J D; Hashim, M A
2008-01-01
Background and purpose: Drug-induced prolongation of the QT interval can lead to torsade de pointes, a life-threatening ventricular arrhythmia. Finding appropriate assays from among the plethora of options available to reliably predict this serious adverse effect in humans remains a challenging issue for the discovery and development of drugs. The purpose of the present study was to develop and verify a reliable and relatively simple approach for assessing, during preclinical development, the propensity of drugs to prolong the QT interval in humans. Experimental approach: Sixteen marketed drugs from various pharmacological classes with a known incidence—or lack thereof—of QT prolongation in humans were examined in a hERG (human ether-a-go-go-related gene) patch-clamp assay and an anaesthetized guinea-pig assay for QT prolongation using specific protocols. Drug concentrations in perfusates from hERG assays and plasma samples from guinea-pigs were determined using liquid chromatography-mass spectrometry. Key results: Various pharmacological agents that inhibit hERG currents prolong the QT interval in anaesthetized guinea-pigs in a manner similar to that seen in humans and at comparable drug exposures. Several compounds not associated with QT prolongation in humans failed to prolong the QT interval in this model. Conclusions and implications: Analysis of hERG inhibitory potency in conjunction with drug exposures and QT interval measurements in anaesthetized guinea-pigs can reliably predict, during preclinical drug development, the risk of QT prolongation in humans. A strategy is proposed for mitigating the risk of QT prolongation of new chemical entities during early lead optimization. PMID:18587422
2011-03-21
Report excerpts: "... throughout the experimental runs. Reliable and validated measures of anxiety (Spielberger, 1983), as well as custom-constructed questionnaires about ..."; cited references include work on crowd modeling and simulation technologies (Transactions on Modeling and Computer Simulation, 20(4)) and Spielberger, C. D. (1983).
EXPOSURE RELATED DOSE ESTIMATING MODEL (ERDEM)
ERDEM is a physiologically based pharmacokinetic (PBPK) model with a graphical user interface (GUI) front end. Such a mathematical model was needed to make reliable estimates of the chemical dose to organs of animals or humans because of uncertainties of making route-to-route, lo...
Multi-Unit Considerations for Human Reliability Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
St. Germain, S.; Boring, R.; Banaseanu, G.
This paper uses the insights from the Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H) methodology to help identify human actions currently modeled in the single-unit PSA that may need to be modified to account for additional challenges imposed by a multi-unit accident, as well as to identify possible new human actions that might be modeled to more accurately characterize multi-unit risk. In identifying these potential human action impacts, the use of the SPAR-H strategy to include both errors in diagnosis and errors in action is considered, as well as identifying characteristics of a multi-unit accident scenario that may impact the selection of the performance shaping factors (PSFs) used in SPAR-H. The lessons learned from the Fukushima Daiichi reactor accident will be addressed to further help identify areas where improved modeling may be required. While these multi-unit impacts may require modifications to a Level 1 PSA model, they are expected to matter much more for Level 2 modeling. There is little currently written specifically about multi-unit HRA issues; a review of related published research will be presented. While this paper cannot answer all issues related to multi-unit HRA, it will hopefully serve as a starting point to generate discussion and spark additional ideas towards the proper treatment of HRA in a multi-unit PSA.
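For context, the SPAR-H quantification step that such multi-unit adjustments would feed into can be sketched as follows (a minimal rendering of the published NUREG/CR-6883 scheme as we understand it; the PSF multiplier values in the example are illustrative only):

```python
# SPAR-H sketch: a nominal HEP is multiplied by the composite of the
# performance shaping factor (PSF) multipliers, with an adjustment formula
# applied when three or more negative PSFs are present.
NOMINAL_HEP = {"diagnosis": 1e-2, "action": 1e-3}

def spar_h_hep(task_type, psf_multipliers):
    """Return the adjusted human error probability for a list of PSFs."""
    nhep = NOMINAL_HEP[task_type]
    composite = 1.0
    for m in psf_multipliers:
        composite *= m
    negative = sum(1 for m in psf_multipliers if m > 1.0)
    if negative >= 3:
        # adjustment keeps the HEP from exceeding 1.0
        return nhep * composite / (nhep * (composite - 1.0) + 1.0)
    return min(nhep * composite, 1.0)

# Example: diagnosis under multi-unit accident conditions -- hypothetical
# PSFs for extreme stress (x5), high complexity (x5), poor ergonomics (x10).
print(spar_h_hep("diagnosis", [5, 5, 10]))  # ~0.72 rather than 2.5
```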
Human neuron-astrocyte 3D co-culture-based assay for evaluation of neuroprotective compounds.
Terrasso, Ana Paula; Silva, Ana Carina; Filipe, Augusto; Pedroso, Pedro; Ferreira, Ana Lúcia; Alves, Paula Marques; Brito, Catarina
Central nervous system drug development has registered high attrition rates, mainly due to the lack of efficacy of drug candidates, highlighting the low reliability of the models used in early-stage drug development and the need for new in vitro human cell-based models and assays to accurately identify and validate drug candidates. 3D human cell models can include different tissue cell types and represent the spatiotemporal context of the original tissue (co-cultures), allowing the establishment of biologically-relevant cell-cell and cell-extracellular matrix interactions. Nevertheless, exploitation of these 3D models for neuroprotection assessment has been limited due to the lack of data to validate such 3D co-culture approaches. In this work we combined a 3D human neuron-astrocyte co-culture with a cell viability endpoint to implement a novel in vitro neuroprotection assay based on an oxidative insult. Neuroprotection assay robustness and specificity, and the applicability of Presto Blue, MTT and CytoTox-Glo viability assays to the 3D co-culture, were evaluated. Presto Blue proved to be the most adequate endpoint, as it is non-destructive, simpler and reliable. Semi-automation of the cell viability endpoint was performed, indicating that the assay setup is amenable to being transferred to automated screening platforms. Finally, the neuroprotection assay setup was applied to a series of 36 test compounds, and several candidates with a higher neuroprotective effect than the positive control, Idebenone, were identified. The robustness and simplicity of the implemented neuroprotection assay with the cell viability endpoint enables the use of more complex and reliable 3D in vitro cell models to identify and validate drug candidates. Copyright © 2016 Elsevier Inc. All rights reserved.
Citizen science: A new perspective to advance spatial pattern evaluation in hydrology
Stisen, Simon
2017-01-01
Citizen science opens new pathways that can complement traditional scientific practice. Intuition and reasoning often make humans more effective than computer algorithms in various realms of problem solving. In particular, a simple visual comparison of spatial patterns is a task where humans are often considered to be more reliable than computer algorithms. In practice, however, science still largely depends on computer-based solutions, which inevitably give benefits such as speed and the possibility to automate processes. Nevertheless, human vision can be harnessed to evaluate the reliability of algorithms which are tailored to quantify similarity in spatial patterns. We established a citizen science project to employ human perception to rate similarity and dissimilarity between simulated spatial patterns of several scenarios of a hydrological catchment model. In total, more than 2,500 volunteers took part, providing over 43,000 classifications of 1,095 individual subjects. We investigate the capability of a set of advanced statistical performance metrics to mimic the human perception to distinguish between similarity and dissimilarity. Results suggest that more complex metrics are not necessarily better at emulating human perception, but clearly provide auxiliary information that is valuable for model diagnostics. The metrics clearly differ in their ability to unambiguously distinguish between similar and dissimilar patterns, which is regarded as a key feature of a reliable metric. The obtained dataset can provide an insightful benchmark to the community for testing novel spatial metrics. PMID:28558050
NASA Technical Reports Server (NTRS)
Agarwal, G. C.; Osafo-Charles, F.; Oneill, W. D.; Gottlieb, G. L.
1982-01-01
Time series analysis is applied to model human operator dynamics in pursuit and compensatory tracking modes. The normalized residual criterion is used as a one-step analytical tool to encompass the processes of identification, estimation, and diagnostic checking. A parameter-constraining technique is introduced to develop more reliable models of human operator dynamics. The human operator is adequately modeled by a second-order dynamic system in both pursuit and compensatory tracking modes. In comparing data sampling rates, 100 msec between samples is adequate and is shown to provide better results than 200 msec sampling. The residual power spectrum and eigenvalue analysis show that the human operator is not a generator of periodic characteristics.
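A minimal sketch of this kind of identification (our construction on synthetic data, not the authors' code or data): fit a second-order ARX model to input-output tracking records by least squares and inspect the normalized residual.

```python
import numpy as np

# Fit y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2] + e[k]
# by ordinary least squares on synthetic operator-tracking data.
rng = np.random.default_rng(0)
N = 2000
u = rng.standard_normal(N)              # tracking-task input (synthetic)
y = np.zeros(N)
for k in range(2, N):                   # "true" operator: 2nd-order system
    y[k] = 1.3 * y[k-1] - 0.5 * y[k-2] + 0.4 * u[k-1] + 0.1 * u[k-2] \
           + 0.05 * rng.standard_normal()

# Regression matrix built from delayed outputs and inputs
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
residuals = y[2:] - Phi @ theta
print("estimated [a1 a2 b1 b2]:", np.round(theta, 3))
print("normalized residual:", residuals.var() / y.var())
```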
NASA Astrophysics Data System (ADS)
Ighravwe, D. E.; Oke, S. A.; Adebiyi, K. A.
2018-03-01
This paper draws on the "human reliability" concept as a structure for gaining insight into maintenance workforce assessment in a process industry. Human reliability hinges on developing the reliability of humans to a threshold that guides the maintenance workforce to execute accurate decisions within the limits of resources and time allocations. This concept offers a worthwhile point of departure and encompasses three adjustments to models in the literature, concerning maintenance time, workforce performance, and return on workforce investment, which together frame the contributions of this work. The presented structure breaks new ground in maintenance workforce theory and practice from a number of perspectives. First, we have successfully implemented fuzzy goal programming (FGP) and differential evolution (DE) techniques for the solution of an optimisation problem in the maintenance of a process plant for the first time. The results obtained in this work showed better quality of solution from the DE algorithm compared with those of the genetic algorithm and particle swarm optimisation algorithm, demonstrating the superiority of the proposed procedure. Second, the analytical discourse, which was framed on stochastic theory and focuses on a specific application to a process plant in Nigeria, is a novelty. The work provides more insight into maintenance workforce planning during overhaul rework and overtime maintenance activities in manufacturing systems and demonstrates a capacity to generate information that is substantially helpful for practice.
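To make the DE step concrete, here is a hedged sketch using SciPy's differential evolution on a made-up maintenance cost function; the objective, decision variables, and bounds are invented stand-ins, not the paper's FGP formulation:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical cost trading off workforce size (x0), overtime hours (x1),
# and training spend (x2); the functional form is purely illustrative.
def maintenance_cost(x):
    workforce, overtime, training = x
    downtime = 500.0 / (workforce + 0.2 * training)   # made-up downtime model
    return 40.0 * workforce + 60.0 * overtime + 25.0 * training \
           + 300.0 * downtime / (1.0 + 0.1 * overtime)

bounds = [(5, 50), (0, 40), (0, 20)]   # plausible decision-variable ranges
result = differential_evolution(maintenance_cost, bounds, seed=1, tol=1e-8)
print("optimal (workforce, overtime, training):", np.round(result.x, 2))
print("minimum cost:", round(result.fun, 2))
```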
Probabilistic simulation of the human factor in structural reliability
NASA Astrophysics Data System (ADS)
Chamis, Christos C.; Singhal, Surendra N.
1994-09-01
The formal approach described herein computationally simulates the probable ranges of uncertainties for the human factor in probabilistic assessments of structural reliability. Human factors such as marital status, professional status, home life, job satisfaction, work load, and health are studied by using a multifactor interaction equation (MFIE) model to demonstrate the approach. Parametric studies in conjunction with judgment are used to select reasonable values for the participating factors (primitive variables). Subsequently performed probabilistic sensitivity studies assess the suitability of the MFIE as well as the validity of the whole approach. Results show that uncertainties range from 5 to 30 percent for the most optimistic case, assuming 100 percent for no error (perfect performance).
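The general shape of a multifactor interaction equation can be sketched as follows (our illustration of the multiplicative form used in Chamis' published work; the factor scores, limits, and exponents below are hypothetical, not the study's calibrated values):

```python
# MFIE sketch: each factor contributes ((limit - current)/(limit - ref))**n,
# and the terms multiply into one performance ratio.
def mfie(factors):
    """factors: list of (current, reference, limit, exponent) tuples."""
    ratio = 1.0
    for current, reference, limit, exponent in factors:
        ratio *= ((limit - current) / (limit - reference)) ** exponent
    return ratio

# Hypothetical human-factor primitives scored 0 (best) .. 10 (degraded limit):
# (current score, reference score, degraded limit, sensitivity exponent)
human_factors = [
    (3.0, 0.0, 10.0, 0.5),   # work load
    (2.0, 0.0, 10.0, 0.3),   # job satisfaction
    (1.0, 0.0, 10.0, 0.2),   # health
]
print(f"performance multiplier: {mfie(human_factors):.3f}")
```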
Probabilistic Simulation of the Human Factor in Structural Reliability
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Singhal, Surendra N.
1994-01-01
The formal approach described herein computationally simulates the probable ranges of uncertainties for the human factor in probabilistic assessments of structural reliability. Human factors such as marital status, professional status, home life, job satisfaction, work load, and health are studied by using a multifactor interaction equation (MFIE) model to demonstrate the approach. Parametric studies in conjunction with judgment are used to select reasonable values for the participating factors (primitive variables). Subsequently performed probabilistic sensitivity studies assess the suitability of the MFIE as well as the validity of the whole approach. Results show that uncertainties range from 5 to 30 percent for the most optimistic case, assuming 100 percent for no error (perfect performance).
Care 3 phase 2 report, maintenance manual
NASA Technical Reports Server (NTRS)
Bryant, L. A.; Stiffler, J. J.
1982-01-01
CARE 3 (Computer-Aided Reliability Estimation, version three) is a computer program designed to help estimate the reliability of complex, redundant systems. Although the program can model a wide variety of redundant structures, it was developed specifically for fault-tolerant avionics systems--systems distinguished by the need for extremely reliable performance since a system failure could well result in the loss of human life. It substantially generalizes the class of redundant configurations that could be accommodated, and includes a coverage model to determine the various coverage probabilities as a function of the applicable fault recovery mechanisms (detection delay, diagnostic scheduling interval, isolation and recovery delay, etc.). CARE 3 further generalizes the class of system structures that can be modeled and greatly expands the coverage model to take into account such effects as intermittent and transient faults, latent faults, error propagation, etc.
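A textbook example of the kind of coverage-sensitive computation CARE 3 automates (this is the standard two-unit cold-standby result, not CARE 3's own, far more general model):

```python
import math

# Two-unit cold standby pair with unit failure rate lam and coverage c,
# i.e. the probability that a fault is detected and switchover to the
# spare succeeds: R(t) = e^(-lam*t) * (1 + c*lam*t).
def standby_pair_reliability(lam, t, coverage):
    return math.exp(-lam * t) * (1.0 + coverage * lam * t)

lam = 1e-4      # failures per hour (hypothetical avionics unit)
t = 10.0        # hours of flight
for c in (1.0, 0.99, 0.9):
    print(f"coverage={c}: R={standby_pair_reliability(lam, t, c):.8f}")
```

Even small coverage shortfalls dominate the unreliability of highly redundant designs, which is why the coverage model matters.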
Dong, Ren G; Welcome, Daniel E; McDowell, Thomas W; Wu, John Z
2013-11-25
The relationship between the vibration transmissibility and driving-point response functions (DPRFs) of the human body is important for understanding vibration exposures of the system and for developing valid models. This study identified their theoretical relationship and demonstrated that the sum of the DPRFs can be expressed as a linear combination of the transmissibility functions of the individual mass elements distributed throughout the system. The relationship is verified using several human vibration models. This study also clarified the requirements for reliably quantifying transmissibility values used as references for calibrating the system models. As an example application, this study used the developed theory to perform a preliminary analysis of the method for calibrating models using both vibration transmissibility and DPRFs. The results of the analysis show that the combined method can theoretically result in a unique and valid solution of the model parameters, at least for linear systems. However, the validation of the method itself does not guarantee the validation of the calibrated model, because the validation of the calibration also depends on the model structure and the reliability and appropriate representation of the reference functions. The basic theory developed in this study is also applicable to the vibration analyses of other structures.
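The core identity can be checked numerically on a toy two-degree-of-freedom, base-driven model (our sketch under standard lumped-parameter assumptions; masses and stiffnesses are hypothetical): the driving-point apparent mass F/a0 equals the mass-weighted sum of the element transmissibilities.

```python
import numpy as np

# Two-DOF chain excited through the base: verify F/a0 == m1*T1 + m2*T2,
# where T_i = X_i / X0 are displacement (equivalently acceleration)
# transmissibilities. Parameters are invented body-segment stand-ins.
m1, m2 = 50.0, 25.0          # kg
k1, k2 = 5e4, 2e4            # N/m
c1, c2 = 400.0, 150.0        # N*s/m

for f_hz in (2.0, 5.0, 10.0):
    w = 2 * np.pi * f_hz
    A = np.array([
        [-w**2 * m1 + 1j*w*(c1 + c2) + k1 + k2, -(1j*w*c2 + k2)],
        [-(1j*w*c2 + k2), -w**2 * m2 + 1j*w*c2 + k2],
    ])
    b = np.array([k1 + 1j*w*c1, 0.0])      # unit base displacement X0 = 1
    X1, X2 = np.linalg.solve(A, b)
    force = (k1 + 1j*w*c1) * (1.0 - X1)    # force base exerts on the system
    apparent_mass = force / (-w**2 * 1.0)  # F / a0
    weighted_sum = m1 * X1 + m2 * X2       # sum of m_i * T_i
    print(f"{f_hz} Hz: |M|={abs(apparent_mass):.3f}, "
          f"|sum m_i T_i|={abs(weighted_sum):.3f}")
```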
Leptin- and Leptin Receptor-Deficient Rodent Models: Relevance for Human Type 2 Diabetes
Wang, Bingxuan; Chandrasekera, P. Charukeshi; Pippin, John J.
2014-01-01
Among the most widely used animal models in obesity-induced type 2 diabetes mellitus (T2DM) research are the congenital leptin- and leptin receptor-deficient rodent models. These include the leptin-deficient ob/ob mice and the leptin receptor-deficient db/db mice, Zucker fatty rats, Zucker diabetic fatty rats, SHR/N-cp rats, and JCR:LA-cp rats. After decades of mechanistic and therapeutic research schemes with these animal models, many species differences have been uncovered, but researchers continue to overlook these differences, leading to untranslatable research. The purpose of this review is to analyze and comprehensively recapitulate the most common leptin/leptin receptor-based animal models with respect to their relevance and translatability to human T2DM. Our analysis revealed that, although these rodents develop obesity due to hyperphagia caused by abnormal leptin/leptin receptor signaling with the subsequent appearance of T2DM-like manifestations, these are in fact secondary to genetic mutations that do not reflect disease etiology in humans, for whom leptin or leptin receptor deficiency is not an important contributor to T2DM. A detailed comparison of the roles of genetic susceptibility, obesity, hyperglycemia, hyperinsulinemia, insulin resistance, and diabetic complications as well as leptin expression, signaling, and other factors that confound translation are presented here. There are substantial differences between these animal models and human T2DM that limit reliable, reproducible, and translatable insight into human T2DM. Therefore, it is imperative that researchers recognize and acknowledge the limitations of the leptin/leptin receptor-based rodent models and invest in research methods that would be directly and reliably applicable to humans in order to advance T2DM management. PMID:24809394
Leptin- and leptin receptor-deficient rodent models: relevance for human type 2 diabetes.
Wang, Bingxuan; Chandrasekera, P Charukeshi; Pippin, John J
2014-03-01
Among the most widely used animal models in obesity-induced type 2 diabetes mellitus (T2DM) research are the congenital leptin- and leptin receptor-deficient rodent models. These include the leptin-deficient ob/ob mice and the leptin receptor-deficient db/db mice, Zucker fatty rats, Zucker diabetic fatty rats, SHR/N-cp rats, and JCR:LA-cp rats. After decades of mechanistic and therapeutic research schemes with these animal models, many species differences have been uncovered, but researchers continue to overlook these differences, leading to untranslatable research. The purpose of this review is to analyze and comprehensively recapitulate the most common leptin/leptin receptor-based animal models with respect to their relevance and translatability to human T2DM. Our analysis revealed that, although these rodents develop obesity due to hyperphagia caused by abnormal leptin/leptin receptor signaling with the subsequent appearance of T2DM-like manifestations, these are in fact secondary to genetic mutations that do not reflect disease etiology in humans, for whom leptin or leptin receptor deficiency is not an important contributor to T2DM. A detailed comparison of the roles of genetic susceptibility, obesity, hyperglycemia, hyperinsulinemia, insulin resistance, and diabetic complications as well as leptin expression, signaling, and other factors that confound translation are presented here. There are substantial differences between these animal models and human T2DM that limit reliable, reproducible, and translatable insight into human T2DM. Therefore, it is imperative that researchers recognize and acknowledge the limitations of the leptin/leptin receptor-based rodent models and invest in research methods that would be directly and reliably applicable to humans in order to advance T2DM management.
Savage, Trevor Nicholas; McIntosh, Andrew Stuart
2017-03-01
It is important to understand factors contributing to and directly causing sports injuries in order to improve the effectiveness and safety of sports skills. The characteristics of injury events must be evaluated and described meaningfully and reliably. However, many complex skills cannot be effectively investigated quantitatively because of ethical, technological and validity considerations. Increasingly, qualitative methods are being used to investigate human movement for research purposes, but there are concerns about the reliability and measurement bias of such methods. Using the tackle in Rugby union as an example, we outline a systematic approach for developing a skill analysis protocol with a focus on improving objectivity, validity and reliability. Characteristics for analysis were selected using qualitative analysis, biomechanical theoretical models, and the epidemiological and coaching literature. An expert panel comprising subject matter experts provided feedback, and the inter-rater reliability of the protocol was assessed using ten trained raters. The inter-rater reliability results were reviewed by the expert panel, and the protocol was revised and assessed in a second inter-rater reliability study. Mean agreement in the second study improved and was comparable with other studies that have reported inter-rater reliability of qualitative analysis of human movement (52-90% agreement; ICC between 0.6 and 0.9).
Heparin-based hydrogels induce human renal tubulogenesis in vitro.
Weber, Heather M; Tsurkan, Mikhail V; Magno, Valentina; Freudenberg, Uwe; Werner, Carsten
2017-07-15
Dialysis or kidney transplantation is the only therapeutic option for end stage renal disease. Accordingly, there is a large unmet clinical need for new causative therapeutic treatments. Obtaining robust models that mimic the complex nature of the human kidney is a critical step in the development of new therapeutic strategies. Here we establish a synthetic in vitro human renal tubulogenesis model based on a tunable glycosaminoglycan-hydrogel platform. In this system, renal tubulogenesis can be modulated by the adjustment of hydrogel mechanics and degradability, growth factor signaling, and the presence of insoluble adhesion cues, potentially providing new insights for regenerative therapy. Different hydrogel properties were systematically investigated for their ability to regulate renal tubulogenesis. Hydrogels based on heparin and matrix metalloproteinase cleavable peptide linker units were found to induce the morphogenesis of single human proximal tubule epithelial cells into physiologically sized tubule structures. The generated tubules display polarization markers, extracellular matrix components, and organic anion transport functions of the in vivo renal proximal tubule and respond to nephrotoxins comparable to the human clinical response. The established hydrogel-based human renal tubulogenesis model is thus considered highly valuable for renal regenerative medicine and personalized nephrotoxicity studies. The only cure for end stage kidney disease is kidney transplantation. Hence, there is a huge need for reliable human kidney models to study renal regeneration and establish alternative treatments. Here we show the development and application of an in vitro human renal tubulogenesis model using heparin-based hydrogels. To the best of our knowledge, this is the first system where human renal tubulogenesis can be monitored from single cells to physiologically sized tubule structures in a tunable hydrogel system. To validate the efficacy of our model as a drug toxicity platform, a chemotherapy drug was incubated with the model, resulting in a drug response similar to human clinical pathology. The established model could have wide applications in the field of nephrotoxicity and renal regenerative medicine and offer a reliable alternative to animal models. Copyright © 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-03
..., EPRI/NRC-RES Fire Human Reliability Analysis Guidelines, Draft Report for Comment. AGENCY: Nuclear... Human Reliability Analysis Guidelines, Draft Report for Comment'' (December 11, 2009; 74 FR 65810). This... Human Reliability Analysis Guidelines'' is available electronically under ADAMS Accession Number...
Methodologies for Development of Patient Specific Bone Models from Human Body CT Scans
NASA Astrophysics Data System (ADS)
Chougule, Vikas Narayan; Mulay, Arati Vinayak; Ahuja, Bharatkumar Bhagatraj
2016-06-01
This work deals with the development of algorithms for physical replication of patient-specific human bones and construction of corresponding implant/insert RP models, using a reverse engineering approach based on non-invasive medical images, for surgical purposes. In the medical field, volumetric data, i.e. voxel and triangular-facet-based models, are primarily used for bio-modelling and visualization, which requires huge memory space. On the other hand, recent advances in Computer Aided Design (CAD) technology provide additional facilities/functions for design, prototyping and manufacturing of any object having freeform surfaces based on boundary representation techniques. This work presents a process for physical replication of 3D rapid prototyping (RP) models of human bone from various CAD modeling techniques, developed using 3D point cloud data obtained from non-invasive CT/MRI scans in DICOM 3.0 format. This point cloud data is used for construction of a 3D CAD model by fitting B-spline curves through these points and then fitting surfaces between these curve networks using swept blend techniques. This can also be achieved by generating a triangular mesh directly from the 3D point cloud data, without developing any surface model, using any commercial CAD software. The STL file generated from the 3D point cloud data is used as the basic input for the RP process. The Delaunay tetrahedralization approach is used to process the 3D point cloud data to obtain the STL file. CT scan data of a metacarpus (human bone) is used as the case study for the generation of the 3D RP model. A 3D physical model of the human bone is generated on a rapid prototyping machine and its virtual reality model is presented for visualization. The CAD models generated by the different techniques are compared for accuracy and reliability. The results of this research work are assessed for clinical reliability in replication of human bone in the medical field.
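A minimal sketch of the Delaunay-to-STL step (our own simplified version on a random point cloud; a real anatomy would require pre-processing and non-convex surface reconstruction rather than this convex-hull boundary):

```python
import numpy as np
from scipy.spatial import Delaunay
from collections import Counter

# Tetrahedralize the cloud, keep faces belonging to exactly one tetrahedron
# (the boundary), and write them as an ASCII STL file.
points = np.random.default_rng(0).random((500, 3))  # stand-in for CT points

tets = Delaunay(points).simplices                   # (n, 4) vertex indices
faces = Counter()
for a, b, c, d in tets:
    for tri in ((a, b, c), (a, b, d), (a, c, d), (b, c, d)):
        faces[tuple(sorted(tri))] += 1
boundary = [tri for tri, count in faces.items() if count == 1]

with open("bone_surface.stl", "w") as f:
    f.write("solid bone\n")
    for i, j, k in boundary:
        n = np.cross(points[j] - points[i], points[k] - points[i])
        n = n / (np.linalg.norm(n) + 1e-12)
        f.write(f" facet normal {n[0]:.6e} {n[1]:.6e} {n[2]:.6e}\n")
        f.write("  outer loop\n")
        for v in (i, j, k):
            x, y, z = points[v]
            f.write(f"   vertex {x:.6e} {y:.6e} {z:.6e}\n")
        f.write("  endloop\n endfacet\n")
    f.write("endsolid bone\n")
print(f"{len(boundary)} boundary triangles written")
```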
The rabbit as a model for studying lung disease and stem cell therapy.
Kamaruzaman, Nurfatin Asyikhin; Kardia, Egi; Kamaldin, Nurulain 'Atikah; Latahir, Ahmad Zaeri; Yahaya, Badrul Hisham
2013-01-01
No single animal model can reproduce all of the human features of both acute and chronic lung diseases. However, the rabbit is a reliable model and clinically relevant facsimile of human disease. The similarities between rabbits and humans in terms of airway anatomy and responses to inflammatory mediators highlight the value of this species in the investigation of lung disease pathophysiology and in the development of therapeutic agents. The inflammatory responses shown by the rabbit model, especially in the case of asthma, are comparable with those that occur in humans. The allergic rabbit model has been used extensively in drug screening tests, and this model and humans appear to be sensitive to similar drugs. In addition, recent studies have shown that the rabbit serves as a good platform for cell delivery for the purpose of stem-cell-based therapy.
The Rabbit as a Model for Studying Lung Disease and Stem Cell Therapy
Kamaruzaman, Nurfatin Asyikhin; Kamaldin, Nurulain ‘Atikah; Latahir, Ahmad Zaeri; Yahaya, Badrul Hisham
2013-01-01
No single animal model can reproduce all of the human features of both acute and chronic lung diseases. However, the rabbit is a reliable model and clinically relevant facsimile of human disease. The similarities between rabbits and humans in terms of airway anatomy and responses to inflammatory mediators highlight the value of this species in the investigation of lung disease pathophysiology and in the development of therapeutic agents. The inflammatory responses shown by the rabbit model, especially in the case of asthma, are comparable with those that occur in humans. The allergic rabbit model has been used extensively in drug screening tests, and this model and humans appear to be sensitive to similar drugs. In addition, recent studies have shown that the rabbit serves as a good platform for cell delivery for the purpose of stem-cell-based therapy. PMID:23653896
A Holistic Approach to Systems Development
NASA Technical Reports Server (NTRS)
Wong, Douglas T.
2008-01-01
Introduces a holistic and iterative design process: a continuous process that can be loosely divided into four stages, with more effort spent early in the design. The process is human-centered and multidisciplinary, with an emphasis on life-cycle cost and extensive use of modeling, simulation, mockups, human subjects, and proven technologies. Human-centered design doesn't mean the human factors discipline is the most important; the disciplines that should be involved in the design include subsystem vendors, configuration management, operations research, manufacturing engineering, simulation/modeling, cost engineering, hardware engineering, software engineering, test and evaluation, human factors, electromagnetic compatibility, integrated logistics support, reliability/maintainability/availability, safety engineering, test equipment, training systems, design-to-cost, life-cycle cost, application engineering, etc.
King, James E; Weiss, Alexander; Sisco, Melissa M
2008-11-01
Ratings of 202 chimpanzees on 43 personality descriptor adjectives were used to calculate scores on five domains analogous to the human Five-Factor Model and a chimpanzee-specific Dominance domain. Male and female chimpanzees were divided into five age groups ranging from juvenile to old adult. Internal consistencies and interrater reliabilities of factors were stable across age groups, and retest reliabilities over approximately 6.8 years were high. Age-related declines in Extraversion and Openness and increases in Agreeableness and Conscientiousness paralleled human age differences. The mean change in absolute standardized units for all five factors was virtually identical in humans and chimpanzees after adjustment for different developmental rates. Consistent with their aggressive behavior in the wild, male chimpanzees were rated as more aggressive, emotional, and impulsive than females. Chimpanzee sex differences in personality were greater than comparable human gender differences. These findings suggest that chimpanzee and human personality develop via an unfolding maturational process. (PsycINFO Database Record (c) 2008 APA, all rights reserved).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeffrey C. Joe; Ronald L. Boring
Probabilistic Risk Assessment (PRA) and Human Reliability Assessment (HRA) are important technical contributors to the United States (U.S.) Nuclear Regulatory Commission's (NRC) risk-informed and performance-based approach to regulating U.S. commercial nuclear activities. Furthermore, all currently operating commercial nuclear power plants (NPPs) in the U.S. are required by federal regulation to be staffed with crews of operators. Yet, aspects of team performance are underspecified in most HRA methods that are widely used in the nuclear industry. There are a variety of "emergent" team cognition and teamwork errors (e.g., communication errors) that are 1) distinct from individual human errors, and 2) important to understand from a PRA perspective. The lack of robust models or quantification of team performance is an issue that affects the accuracy and validity of HRA methods and models, leading to significant uncertainty in estimating HEPs. This paper describes research with the objective of modeling and quantifying team dynamics and teamwork within NPP control room crews for risk-informed applications, thereby improving the technical basis of HRA, which in turn improves the risk-informed approach the NRC uses to regulate the U.S. commercial nuclear industry.
[Study of the relationship between human quality and reliability].
Long, S; Wang, C; Wang, Li; Yuan, J; Liu, H; Jiao, X
1997-02-01
To clarify the relationship between human quality and reliability, 1925 experiments in 20 subjects were carried out to study the relationship between disposition character, digital memory, graphic memory, multi-reaction time, education level, and simulated aircraft operation. Meanwhile, the effects of task difficulty and environmental factors on human reliability were also studied. The results showed that human quality can be predicted and evaluated through experimental methods. The better the human quality, the higher the human reliability.
Scanlon, Bridget R.; Zhang, Zizhan; Save, Himanshu; Sun, Alexander Y.; van Beek, Ludovicus P. H.; Wiese, David N.; Reedy, Robert C.; Longuevergne, Laurent; Döll, Petra; Bierkens, Marc F. P.
2018-01-01
Assessing reliability of global models is critical because of increasing reliance on these models to address past and projected future climate and human stresses on global water resources. Here, we evaluate model reliability based on a comprehensive comparison of decadal trends (2002–2014) in land water storage from seven global models (WGHM, PCR-GLOBWB, GLDAS NOAH, MOSAIC, VIC, CLM, and CLSM) to trends from three Gravity Recovery and Climate Experiment (GRACE) satellite solutions in 186 river basins (∼60% of global land area). Medians of modeled basin water storage trends greatly underestimate GRACE-derived large decreasing (≤−0.5 km³/y) and increasing (≥0.5 km³/y) trends. Decreasing trends from GRACE are mostly related to human use (irrigation) and climate variations, whereas increasing trends reflect climate variations. For example, in the Amazon, GRACE estimates a large increasing trend of ∼43 km³/y, whereas most models estimate decreasing trends (−71 to 11 km³/y). Land water storage trends, summed over all basins, are positive for GRACE (∼71–82 km³/y) but negative for models (−450 to −12 km³/y), contributing opposing trends to global mean sea level change. Impacts of climate forcing on decadal land water storage trends exceed those of modeled human intervention by about a factor of 2. The model-GRACE comparison highlights potential areas of future model development, particularly simulated water storage. The inability of models to capture large decadal water storage trends based on GRACE indicates that model projections of climate and human-induced water storage changes may be underestimated. PMID:29358394
Scanlon, Bridget R; Zhang, Zizhan; Save, Himanshu; Sun, Alexander Y; Müller Schmied, Hannes; van Beek, Ludovicus P H; Wiese, David N; Wada, Yoshihide; Long, Di; Reedy, Robert C; Longuevergne, Laurent; Döll, Petra; Bierkens, Marc F P
2018-02-06
Assessing reliability of global models is critical because of increasing reliance on these models to address past and projected future climate and human stresses on global water resources. Here, we evaluate model reliability based on a comprehensive comparison of decadal trends (2002-2014) in land water storage from seven global models (WGHM, PCR-GLOBWB, GLDAS NOAH, MOSAIC, VIC, CLM, and CLSM) to trends from three Gravity Recovery and Climate Experiment (GRACE) satellite solutions in 186 river basins (∼60% of global land area). Medians of modeled basin water storage trends greatly underestimate GRACE-derived large decreasing (≤−0.5 km³/y) and increasing (≥0.5 km³/y) trends. Decreasing trends from GRACE are mostly related to human use (irrigation) and climate variations, whereas increasing trends reflect climate variations. For example, in the Amazon, GRACE estimates a large increasing trend of ∼43 km³/y, whereas most models estimate decreasing trends (−71 to 11 km³/y). Land water storage trends, summed over all basins, are positive for GRACE (∼71-82 km³/y) but negative for models (−450 to −12 km³/y), contributing opposing trends to global mean sea level change. Impacts of climate forcing on decadal land water storage trends exceed those of modeled human intervention by about a factor of 2. The model-GRACE comparison highlights potential areas of future model development, particularly simulated water storage. The inability of models to capture large decadal water storage trends based on GRACE indicates that model projections of climate and human-induced water storage changes may be underestimated. Copyright © 2018 the Author(s). Published by PNAS.
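The basic trend computation being compared is simple; a toy version on synthetic monthly storage anomalies (not GRACE data) might look like:

```python
import numpy as np

# Least-squares linear trend of monthly basin water-storage anomalies over a
# 2002-2014-style window, reported in km^3/y. Series is fabricated.
rng = np.random.default_rng(42)
t = np.arange(0, 12.0, 1.0 / 12.0)                     # years since start
storage = 3.5 * t + 20.0 * np.sin(2 * np.pi * t) \
          + rng.normal(0, 5, t.size)                   # trend + seasonality

slope, intercept = np.polyfit(t, storage, 1)
print(f"decadal storage trend: {slope:.2f} km^3/y")    # recovers ~3.5
```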
Accurate assessment of chronic human exposure to atmospheric criteria pollutants, such as ozone, is critical for understanding human health risks associated with living in environments with elevated ambient pollutant concentrations. In this study, we analyzed a data set from a...
Code of Federal Regulations, 2011 CFR
2011-01-01
... HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability Program General Provisions § 712.1 Purpose. This part establishes the policies and procedures for a Human Reliability Program... judgment and reliability may be impaired by physical or mental/personality disorders, alcohol abuse, use of...
Validation of the Turkish Cervical Cancer and Human Papilloma Virus Awareness Questionnaire.
Özdemir, E; Kısa, S
2016-09-01
The aim of this study was to determine the validity and reliability of the 'Cervical Cancer and Human Papilloma Virus Awareness Questionnaire' among fertility-age women by adapting the scale into Turkish. Cervical cancer is the fourth most common cancer seen among women; death from cervical cancer ranks third among causes of cancer mortality, and it is one of the most preventable forms of cancer. This cross-sectional study included 360 women from three family health centres between January 5 and June 25, 2014. Internal consistency analysis showed that the Kuder-Richardson 21 reliability coefficient was 0.60 for the first part and Cronbach's alpha reliability coefficient was 0.61 for the second part. The Kaiser-Meyer-Olkin value of the items on the scale was 0.712. The Bartlett test was significant. The confirmatory factor analysis indicated that the model matched the data adequately. This study shows that the Turkish version of the instrument is a valid and reliable tool to evaluate knowledge, perceptions and preventive behaviours of women regarding human papilloma virus and cervical cancer. Nurses who work in clinical and primary care settings need to screen, detect and refer women who may be at risk of cervical cancer. © 2016 International Council of Nurses.
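Both internal-consistency statistics reported above are short computations; a sketch on fabricated data (a latent trait induces item correlations, and the resulting coefficients are arbitrary, not the study's values):

```python
import numpy as np

def kr21(total_scores, k):
    """Kuder-Richardson 21 from total scores on k dichotomous items."""
    m, var = total_scores.mean(), total_scores.var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - m * (k - m) / (k * var))

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(7)
ability = rng.normal(0.0, 1.0, (360, 1))                    # latent trait
knowledge = (rng.random((360, 10))
             < 1.0 / (1.0 + np.exp(-ability))).astype(int)  # 10 binary items
likert = np.clip(np.round(3.0 + ability
                          + rng.normal(0, 1, (360, 14))), 1, 5)  # 14 items
print("KR-21:", round(kr21(knowledge.sum(axis=1), 10), 2))
print("Cronbach's alpha:", round(cronbach_alpha(likert), 2))
```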
Interobserver Reliability of the Total Body Score System for Quantifying Human Decomposition.
Dabbs, Gretchen R; Connor, Melissa; Bytheway, Joan A
2016-03-01
Several authors have tested the accuracy of the Total Body Score (TBS) method for quantifying decomposition, but none have examined the reliability of the method as a scoring system by testing interobserver error rates. Sixteen participants used the TBS system to score 59 observation packets including photographs and written descriptions of 13 human cadavers in different stages of decomposition (postmortem interval: 2-186 days). Data analysis used a two-way random model intraclass correlation in SPSS (v. 17.0). The TBS method showed "almost perfect" agreement between observers, with average absolute correlation coefficients of 0.990 and average consistency correlation coefficients of 0.991. While the TBS method may have sources of error, scoring reliability is not one of them. Individual component scores were examined, and the influences of education and experience levels were investigated. Overall, the trunk component scores were the least concordant. Suggestions are made to improve the reliability of the TBS method. © 2016 American Academy of Forensic Sciences.
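The statistic itself can be reproduced outside SPSS; here is a minimal sketch of the Shrout-Fleiss two-way random, single-rater, absolute-agreement ICC applied to fabricated TBS-style scores (our construction, not the study's data):

```python
import numpy as np

def icc_2_1(Y):
    """ICC(2,1) for Y: (n_subjects, k_raters) matrix of scores."""
    n, k = Y.shape
    grand = Y.mean()
    msr = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # subjects
    msc = n * ((Y.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # raters
    sse = ((Y - grand) ** 2).sum() - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(3)
true_tbs = rng.uniform(3, 32, size=(59, 1))               # 59 packets
ratings = true_tbs + rng.normal(0, 0.5, size=(59, 16))    # 16 raters
print(f"ICC(2,1) = {icc_2_1(ratings):.3f}")               # near 1 by design
```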
Krejci, Caroline C; Stone, Richard T; Dorneich, Michael C; Gilbert, Stephen B
2016-02-01
Factors influencing long-term viability of an intermediated regional food supply network (food hub) were modeled using agent-based modeling techniques informed by interview data gathered from food hub participants. Previous analyses of food hub dynamics focused primarily on financial drivers rather than social factors and have not used mathematical models. Based on qualitative and quantitative data gathered from 22 customers and 11 vendors at a midwestern food hub, an agent-based model (ABM) was created with distinct consumer personas characterizing the range of consumer priorities. A comparison study determined whether the ABM behaved differently from a model based on traditional economic assumptions. Further simulation studies assessed the effect of changes in parameters, such as producer reliability and the consumer profiles, on long-term food hub sustainability. The persona-based ABM model produced different and more resilient results than the more traditional way of modeling consumers. Reduced producer reliability significantly reduced trade; in some instances, a modest reduction in reliability threatened the sustainability of the system. Finally, a modest increase in price-driven consumers at the outset of the simulation quickly resulted in those consumers becoming a majority of the overall customer base. Results suggest that social factors, such as desire to support the community, can be more important than financial factors. An ABM of food hub dynamics, based on human factors data gathered from the field, can be a useful tool for policy decisions. Similar approaches can be used for modeling customer dynamics with other sustainable organizations. © 2015, Human Factors and Ergonomics Society.
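A toy flavor of such a persona-based ABM (entirely our construction; the personas, weights, and decay rates are invented, not the paper's calibrated parameters):

```python
import random

# Consumers weigh a food-hub price premium against community value;
# unreliable producers erode patronage over time.
random.seed(1)

class Consumer:
    def __init__(self, price_weight):
        self.price_weight = price_weight
        self.community_weight = 1.0 - price_weight
        self.patronage = 1.0                    # loyalty to the hub

    def shops_at_hub(self, price_premium, producer_reliability):
        if random.random() > producer_reliability:
            self.patronage *= 0.8               # unfilled order erodes loyalty
        utility = self.community_weight - self.price_weight * price_premium
        return utility * self.patronage > 0.02

def simulate(n_price_driven, producer_reliability, n_total=22, weeks=52):
    consumers = ([Consumer(0.8) for _ in range(n_price_driven)]
                 + [Consumer(0.3) for _ in range(n_total - n_price_driven)])
    return sum(c.shops_at_hub(0.2, producer_reliability)
               for _ in range(weeks) for c in consumers)

for reliability in (0.95, 0.80):
    print(f"reliability={reliability}: "
          f"{simulate(7, reliability)} hub transactions over a year")
```

Even in this toy, lowering producer reliability sharply reduces yearly trade, mirroring the qualitative finding above.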
Optimization of life support systems and their systems reliability
NASA Technical Reports Server (NTRS)
Fan, L. T.; Hwang, C. L.; Erickson, L. E.
1971-01-01
The identification, analysis, and optimization of life support systems and subsystems have been investigated. For each system or subsystem considered, the procedure involves the establishment of a set of system equations (or a mathematical model) based on theory and experimental evidence; the analysis and simulation of the model; the optimization of the operation, control, and reliability; analysis of the sensitivity of the system based on the model; and, if possible, experimental verification of the theoretical and computational results. Research activities include: (1) modeling of air flow in a confined space; (2) review of several different gas-liquid contactors utilizing centrifugal force; (3) review of carbon dioxide reduction contactors in space vehicles and other enclosed structures; (4) application of modern optimal control theory to environmental control of confined spaces; (5) optimal control of a class of nonlinear diffusional distributed parameter systems; (6) optimization of the system reliability of life support systems and subsystems; (7) modeling, simulation and optimal control of the human thermal system; and (8) analysis and optimization of the water-vapor electrolysis cell.
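Activity (6) is classically posed as a redundancy-allocation problem; a small enumeration sketch under hypothetical unit reliabilities and weights (not the study's data):

```python
from itertools import product

# Pick the number of parallel units per subsystem to maximize series-system
# reliability within a weight budget. All numbers are illustrative.
subsystems = [
    # (unit reliability, unit weight in kg) -- hypothetical values
    (0.90, 4.0),   # CO2 reduction contactor
    (0.95, 6.0),   # water-vapor electrolysis cell
    (0.85, 3.0),   # air-circulation fan
]
WEIGHT_BUDGET = 40.0

best_rel, best_counts = 0.0, None
for counts in product(range(1, 5), repeat=len(subsystems)):
    weight = sum(n * w for n, (_, w) in zip(counts, subsystems))
    if weight > WEIGHT_BUDGET:
        continue
    rel = 1.0
    for n, (r, _) in zip(counts, subsystems):
        rel *= 1.0 - (1.0 - r) ** n        # parallel units: 1 - (1-r)^n
    if rel > best_rel:
        best_rel, best_counts = rel, counts

print("best allocation:", best_counts,
      "system reliability:", round(best_rel, 5))
```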
Pediatric laryngeal simulator using 3D printed models: A novel technique.
Kavanagh, Katherine R; Cote, Valerie; Tsui, Yvonne; Kudernatsch, Simon; Peterson, Donald R; Valdez, Tulio A
2017-04-01
Simulation to acquire and test technical skills is an essential component of medical education and residency training in both surgical and nonsurgical specialties. High-quality simulation education relies on the availability, accessibility, and reliability of models. The objective of this work was to describe a practical pediatric laryngeal model for use in otolaryngology residency training. Ideally, this model would be low-cost, have tactile properties resembling human tissue, and be reliably reproducible. Pediatric laryngeal models were developed using two manufacturing methods: direct three-dimensional (3D) printing of anatomical models and cast anatomical models using 3D-printed molds. Polylactic acid, acrylonitrile butadiene styrene, and high-impact polystyrene (HIPS) were used for the directly printed models, whereas a silicone elastomer (SE) was used for the cast models. The models were evaluated for anatomic quality, ease of manipulation, hardness, and cost of production. A tissue likeness scale was created to validate the simulation model. Fleiss' Kappa rating was performed to evaluate interrater agreement, and analysis of variance was performed to evaluate differences among the materials. The SE provided the most anatomically accurate models, with the tactile properties allowing for surgical manipulation of the larynx. Direct 3D printing was more cost-effective than the SE casting method but did not possess the material properties and tissue likeness necessary for surgical simulation. The SE models of the pediatric larynx created from a casting method demonstrated high-quality anatomy, tactile properties comparable to human tissue, and easy manipulation with standard surgical instruments. Their use in a reliable, low-cost, accessible, modular simulation system provides a valuable training resource for otolaryngology residents. N/A. Laryngoscope, 127:E132-E137, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.
Citizen science: A new perspective to evaluate spatial patterns in hydrology.
NASA Astrophysics Data System (ADS)
Koch, J.; Stisen, S.
2016-12-01
Citizen science opens new pathways that can complement traditional scientific practice. Intuition and reasoning often make humans more effective than computer algorithms in various realms of problem solving. In particular, a simple visual comparison of spatial patterns is a task where humans are often considered to be more reliable than computer algorithms. In practice, however, science still largely depends on computer-based solutions, which inevitably give benefits such as speed and the possibility to automate processes. This study highlights the integration of this generally underused human resource into hydrology. We established a citizen science project on the zooniverse platform entitled Pattern Perception. The aim is to employ human perception to rate similarity and dissimilarity between simulated spatial patterns of a hydrological catchment model. In total, more than 2,800 users provided over 46,000 classifications of 1,095 individual subjects within 64 days of the launch. Each subject displays simulated spatial patterns of land-surface variables of a baseline model and six modelling scenarios. The citizen science data disclose a numeric pattern similarity score for each of the scenarios with respect to the reference. We investigate the capability of a set of innovative statistical performance metrics to mimic the human perception to distinguish between similarity and dissimilarity. Results suggest that more complex metrics are not necessarily better at emulating human perception, but clearly provide flexibility and auxiliary information that is valuable for model diagnostics. The metrics clearly differ in their ability to unambiguously distinguish between similar and dissimilar patterns, which is regarded as a key feature of a reliable metric.
Guvenc, Gulten; Seven, Memnun; Akyuz, Aygul
2016-06-01
To adapt and psychometrically test the Health Belief Model Scale for Human Papilloma Virus (HPV) and Its Vaccination (HBMS-HPVV) for use in a Turkish population, and to assess the Human Papilloma Virus Knowledge score (HPV-KS) among female college students. Instrument adaptation and psychometric testing study. The sample consisted of 302 nursing students at a nursing school in Turkey between April and May 2013. Questionnaire-based data were collected from the participants: information regarding the participants' descriptive characteristics, HPV knowledge, and health beliefs was gathered using the translated HBMS-HPVV and the HPV-KS. Test-retest reliability was evaluated, Cronbach α was used to assess internal consistency reliability, and exploratory factor analysis was used to assess the construct validity of the HBMS-HPVV. The scale consists of 4 subscales that measure 4 constructs of the Health Belief Model, covering the perceived susceptibility and severity of HPV and the benefits and barriers. The final 14-item scale had satisfactory validity and internal consistency. Cronbach α values for the 4 subscales ranged from 0.71 to 0.78. Total HPV-KS ranged from 0 to 8 (scale range, 0-10; 3.80 ± 2.12). The HBMS-HPVV is a valid and reliable instrument for measuring young Turkish women's beliefs and attitudes about HPV and its vaccination. Copyright © 2015 North American Society for Pediatric and Adolescent Gynecology. Published by Elsevier Inc. All rights reserved.
Framework for Human-Automation Collaboration: Conclusions from Four Studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oxstrand, Johanna; Le Blanc, Katya L.; O'Hara, John
The Human Automation Collaboration (HAC) research project is investigating how advanced technologies that are planned for Advanced Small Modular Reactors (AdvSMR) will affect the performance and reliability of the plant from a human factors and human performance perspective. The HAC research effort investigates the consequences of allocating functions between the operators and automated systems. More specifically, the research team is addressing how best to design the collaboration between the operators and the automated systems in a manner that has the greatest positive impact on overall plant performance and reliability. Oxstrand et al. (2013 - March) describes the efforts conducted by the researchers to identify the research needs for HAC. The research team reviewed the literature on HAC, developed a model of HAC, and identified gaps in the existing knowledge of human-automation collaboration. As described in Oxstrand et al. (2013 - June), the team then prioritized the research topics identified based on the specific needs in the context of AdvSMR. The prioritization was based on two sources of input: 1) the preliminary functions and tasks, and 2) the model of HAC. As a result, three analytical studies were planned and conducted: 1) Models of Teamwork, 2) Standardized HAC Performance Measurement Battery, and 3) Initiators and Triggering Conditions for Adaptive Automation. Additionally, one field study was conducted at Idaho Falls Power.
Designing automation for human use: empirical studies and quantitative models.
Parasuraman, R
2000-07-01
An emerging knowledge base of human performance research can provide guidelines for designing automation that can be used effectively by human operators of complex systems. Which functions should be automated and to what extent in a given system? A model for types and levels of automation that provides a framework and an objective basis for making such choices is described. The human performance consequences of particular types and levels of automation constitute primary evaluative criteria for automation design when using the model. Four human performance areas are considered--mental workload, situation awareness, complacency and skill degradation. Secondary evaluative criteria include such factors as automation reliability, the risks of decision/action consequences and the ease of systems integration. In addition to this qualitative approach, quantitative models can inform design. Several computational and formal models of human interaction with automation that have been proposed by various researchers are reviewed. An important future research need is the integration of qualitative and quantitative approaches. Application of these models provides an objective basis for designing automation for effective human use.
NDE reliability and probability of detection (POD) evolution and paradigm shift
NASA Astrophysics Data System (ADS)
Singh, Surendra
2014-02-01
The subject of NDE reliability and POD has gone through multiple phases since its humble beginning in the late 1960s. This was followed by several programs, including the important one nicknamed "Have Cracks - Will Travel" (in short, "Have Cracks") by Lockheed Georgia Company for the US Air Force during 1974-1978. This and other studies ultimately led to a series of developments in the field of reliability and POD, starting from the introduction of fracture mechanics and Damage Tolerant Design (DTD), to the statistical framework by Berens and Hovey in 1981 for POD estimation, to MIL-STD HDBK 1823 (1999) and 1823A (2009). During the last decade, various groups and researchers have further studied reliability and POD using Model Assisted POD (MAPOD), Simulation Assisted POD (SAPOD), and Bayesian statistics. Each of these developments had one objective, i.e., improving the accuracy of life prediction in components, which to a large extent depends on the reliability and capability of NDE methods. Therefore, it is essential to have reliable detection and sizing of large flaws in components. Currently, POD is used for studying the reliability and capability of NDE methods, though POD data offer no absolute truth regarding NDE reliability, i.e., system capability, effects of flaw morphology, and quantification of human factors. Furthermore, reliability and POD have been treated as alike in meaning, but POD is not NDE reliability. POD is a subset of reliability, which consists of six phases: 1) sample selection using design of experiments (DOE), 2) NDE equipment setup and calibration, 3) System Measurement Evaluation (SME) including Gage Repeatability & Reproducibility (Gage R&R) and Analysis Of Variance (ANOVA), 4) NDE system capability and electronic and physical saturation, 5) acquiring data, fitting it to a model, and data analysis, and 6) POD estimation. This paper provides an overview of all major POD milestones of the last several decades and discusses the rationale for using Integrated Computational Materials Engineering (ICME), MAPOD, SAPOD, and Bayesian statistics for studying controllable and non-controllable variables, including human factors, when estimating POD. Another objective is to list the gaps between "hoped for" capability and capability validated on fielded or failed hardware.
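For the hit/miss case, the POD curve fitting described in MIL-HDBK-1823A amounts to a logistic regression on (log) flaw size; a sketch on synthetic inspection data (not from any real trial):

```python
import numpy as np
import statsmodels.api as sm

# Fit a logistic POD(a) curve on log flaw size and read off a90, the flaw
# size detected with 90% probability. Data below are fabricated.
rng = np.random.default_rng(5)
size = rng.uniform(0.2, 5.0, 400)                    # flaw size, mm
true_pod = 1 / (1 + np.exp(-3.0 * (np.log(size) - np.log(1.2))))
hit = rng.random(400) < true_pod                     # detected / missed

X = sm.add_constant(np.log(size))
fit = sm.Logit(hit.astype(float), X).fit(disp=0)
b0, b1 = fit.params
a90 = np.exp((np.log(0.9 / 0.1) - b0) / b1)
print(f"estimated a90 = {a90:.2f} mm")               # ~2.5 mm for this truth
```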
Decisional Information System for Safety (D.I.S.S.) Dedicated to the Human Space Exploration Mission
NASA Astrophysics Data System (ADS)
Grès, Stéphane; Guyonnet, Jean-François
2006-06-01
At the heart of the issue of reliable and dependable systems and networks, this paper presents the conception of a Decisional Information System for Safety (D.I.S.S.) dedicated to the Human Space Exploration Mission. The objective is to conceive a decisional information system for human long-duration space flight (> 1000 days) that operates in complete autonomy within the solar system. This article describes the importance of the epistemological and ontological context for designing an open, self-learning and reliable system able to self-adapt in dangerous and unforeseen situations. In connection with our research, we present the limits of the empirical-analytical paradigm and several paths of research led by the nascent paradigm of enaction. The strong presumption is that centralised models of security may not be sufficient today to ensure the security of a technical system that will support human exploration missions.
Human Rating the Orion Parachute System
NASA Technical Reports Server (NTRS)
Machin, Ricardo A.; Fisher, Timothy E.; Evans, Carol T.; Stewart, Christine E.
2011-01-01
Human rating begins with design. Converging on the requirements and identifying the risks as early as possible in the design process is essential. Understanding of the interaction between the recovery system and the spacecraft will in large part dictate the achievable reliability of the final design. Component and complete system full-scale flight testing is critical to assure a realistic evaluation of the performance and reliability of the parachute system. However, because testing is so often difficult and expensive, comprehensive analysis of test results and correlation to accurate modeling completes the human rating process. The National Aeronautics and Space Administration (NASA) Orion program uses parachutes to stabilize and decelerate the Crew Exploration Vehicle (CEV) spacecraft during subsonic flight in order to deliver a safe water landing. This paper describes the approach that CEV Parachute Assembly System (CPAS) will take to human rate the parachute recovery system for the CEV.
Santiago, Gabriel F; Susarla, Srinivas M; Al Rakan, Mohammed; Coon, Devin; Rada, Erin M; Sarhane, Karim A; Shores, Jamie T; Bonawitz, Steven C; Cooney, Damon; Sacks, Justin; Murphy, Ryan J; Fishman, Elliot K; Brandacher, Gerald; Lee, W P Andrew; Liacouras, Peter; Grant, Gerald; Armand, Mehran; Gordon, Chad R
2014-05-01
Le Fort-based, maxillofacial allotransplantation is a reconstructive alternative gaining clinical acceptance. However, the vast majority of single-jaw transplant recipients demonstrate less-than-ideal skeletal and dental relationships, with suboptimal aesthetic harmony. The purpose of this study was to investigate reproducible cephalometric landmarks in a large-animal model, where refinement of computer-assisted planning, intraoperative navigational guidance, translational bone osteotomies, and comparative surgical techniques could be performed. Cephalometric landmarks that could be translated into the human craniomaxillofacial skeleton, and that would remain reliable following maxillofacial osteotomies with midfacial alloflap inset, were sought on six miniature swine. Le Fort I- and Le Fort III-based alloflaps were harvested in swine with osteotomies, and all alloflaps were either autoreplanted or transplanted. Cephalometric analyses were performed on lateral cephalograms preoperatively and postoperatively. Critical cephalometric data sets were identified with the assistance of surgical planning and virtual prediction software and evaluated for reliability and translational predictability. Several pertinent landmarks and human analogues were identified, including pronasale, zygion, parietale, gonion, gnathion, lower incisor base, and alveolare. Parietale-pronasale-alveolare and parietale-pronasale-lower incisor base were found to be reliable correlates of sellion-nasion-A point angle and sellion-nasion-B point angle measurements in humans, respectively. There is a set of reliable cephalometric landmarks and measurement angles pertinent for use within a translational large-animal model. These craniomaxillofacial landmarks will enable development of novel navigational software technology, improve cutting guide designs, and facilitate exploration of new avenues for investigation and collaboration.
Lessons Learned from Dependency Usage in HERA: Implications for THERP-Related HRA Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
April M. Whaley; Ronald L. Boring; Harold S. Blackman
Dependency occurs when the probability of success or failure on one action changes the probability of success or failure on a subsequent action. Dependency may serve as a modifier on the human error probabilities (HEPs) for successive actions in human reliability analysis (HRA) models. Discretion should be employed when determining whether or not a dependency calculation is warranted: dependency should not be assigned without strongly grounded reasons. Human reliability analysts may sometimes assign dependency in cases where it is unwarranted. This inappropriate assignment is attributed to a lack of clear guidance encompassing the range of scenarios human reliability analysts are addressing. Inappropriate assignment of dependency produces inappropriately elevated HEP values. Lessons learned about dependency usage in the Human Event Repository and Analysis (HERA) system may provide clarification and guidance for analysts using first-generation HRA methods. This paper presents the HERA approach to dependency assessment and discusses considerations for dependency usage in HRA, including the cognitive basis for dependency, direction for determining when dependency should be assessed, considerations for determining the dependency level, temporal issues to consider when assessing dependency (e.g., task sequence versus overall event sequence, and dependency over long periods of time), and diagnosis and action influences on dependency.
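For reference, the first-generation (THERP, NUREG/CR-1278) dependency adjustment that this guidance concerns computes conditional HEPs as follows (these are the standard published formulas; the example basic HEP is arbitrary):

```python
# Conditional HEP on a subsequent action, given failure of the previous one,
# by THERP dependency level.
def conditional_hep(basic_hep, level):
    p = basic_hep
    return {
        "zero":     p,
        "low":      (1 + 19 * p) / 20,
        "moderate": (1 + 6 * p) / 7,
        "high":     (1 + p) / 2,
        "complete": 1.0,
    }[level]

p = 1e-3
for level in ("zero", "low", "moderate", "high", "complete"):
    print(f"{level:>8}: {conditional_hep(p, level):.4f}")
```

The jump from 0.001 at zero dependence to 0.05 at low dependence shows why assigning dependency without strongly grounded reasons inflates HEPs so sharply.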
Sarnadskiĭ, V N
2007-01-01
The problem of repeatability of the results of examination of a plastic human body model is considered. The model was examined in 7 positions using an optical topograph for kyphosis diagnosis. The examination was performed under television camera monitoring. It was shown that variation of the model position in the camera view affected the repeatability of the results of topographic examination, especially if the model-to-camera distance was changed. A study of the repeatability of the results of optical topographic examination can help to increase the reliability of the topographic method, which is widely used for medical screening of children and adolescents.
Noninvasive identification of the total peripheral resistance baroreflex
NASA Technical Reports Server (NTRS)
Mukkamala, Ramakrishna; Toska, Karin; Cohen, Richard J.
2003-01-01
We propose two identification algorithms for quantitating the total peripheral resistance (TPR) baroreflex, an important contributor to short-term arterial blood pressure (ABP) regulation. Each algorithm analyzes beat-to-beat fluctuations in ABP and cardiac output, which may both be obtained noninvasively in humans. For a theoretical evaluation, we applied both algorithms to a realistic cardiovascular model. The results contrasted, with only one of the algorithms proving to be reliable. This algorithm was able to track changes in the static gains of both the arterial and cardiopulmonary TPR baroreflex. We then applied both algorithms to a preliminary set of human data and obtained contrasting results much like those obtained from the cardiovascular model, thereby making the theoretical evaluation results more meaningful. This study suggests that, with experimental testing, the reliable identification algorithm may provide a powerful, noninvasive means for quantitating the TPR baroreflex. This study also provides an example of the role that models can play in the development and initial evaluation of algorithms aimed at quantitating important physiological mechanisms.
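As a rough illustration of the beat-to-beat quantities such algorithms operate on (the paper's two identification algorithms themselves are more elaborate transfer-function estimators), the following sketch derives a TPR series from ABP and cardiac output and recovers an assumed static reflex gain by delayed regression. All signals, the delay, and the gain are synthetic placeholders, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500                      # number of beats
delay, true_gain = 3, -0.15  # assumed reflex delay (beats) and static gain

# Synthetic beat-to-beat series standing in for noninvasive measurements.
abp = 90.0 + 5.0 * rng.standard_normal(n)               # mean ABP, mmHg
tpr = 18.0 + true_gain * (np.roll(abp, delay) - 90.0) \
           + 0.5 * rng.standard_normal(n)               # TPR, mmHg/(L/min)
co = abp / tpr                                          # cardiac output, L/min

# From "measured" ABP and CO, recover beat-to-beat TPR and estimate the
# static reflex gain by regressing TPR fluctuations on delayed ABP.
tpr_est = abp / co
x = abp[:-delay] - abp.mean()
y = tpr_est[delay:] - tpr_est.mean()
print(f"estimated static gain: {np.dot(x, y) / np.dot(x, x):.3f} (true: {true_gain})")
```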
Herbig, Michael E; Houdek, Pia; Gorissen, Sascha; Zorn-Kruppa, Michaela; Wladykowski, Ewa; Volksdorf, Thomas; Grzybowski, Stephan; Kolios, Georgios; Willers, Christoph; Mallwitz, Henning; Moll, Ingrid; Brandner, Johanna M
2015-09-01
Reliable models for the determination of skin penetration and permeation are important for the development of new drugs and formulations. The intention of our study was to develop a skin penetration model which (1) is viable and well supplied with nutrients during the period of the experiment, (2) mimics human skin as far as possible but is independent of the problems of supply and heterogeneity, (3) can give information about penetration into different compartments of the skin, (4) considers specific inter-individual differences in skin thickness, (5) is quick and inexpensive, and (6) is without ethical implications. Using a chemically diverse set of four topically approved active pharmaceutical ingredients (APIs), namely diclofenac, metronidazole, tazarotene, and terbinafine, we demonstrated that the model allows reliable determination of drug concentrations in different layers of the viable epidermis and dermis. For APIs susceptible to skin metabolism, the extent of metabolic transformation in epidermis and dermis can be monitored. Furthermore, a high degree of accordance in the ability to discriminate skin concentrations of the substances in different layers was found in models derived from porcine and human skin. Viability, proliferation, differentiation and markers for skin barrier function were surveyed in the model. This model, which we call the 'Hamburg model of skin penetration', is particularly suited to support a rational ranking and selection of dermatological formulations within drug development projects. Copyright © 2015 Elsevier B.V. All rights reserved.
Lievens, Filip; Sanchez, Juan I
2007-05-01
A quasi-experiment was conducted to investigate the effects of frame-of-reference training on the quality of competency modeling ratings made by consultants. Human resources consultants from a large consulting firm were randomly assigned to either a training or a control condition. The discriminant validity, interrater reliability, and accuracy of the competency ratings were significantly higher in the training group than in the control group. Further, the discriminant validity and interrater reliability of competency inferences were highest among an additional group of trained consultants who also had competency modeling experience. Together, these results suggest that procedural interventions such as rater training can significantly enhance the quality of competency modeling. 2007 APA, all rights reserved
Lim, Hooi Been; Baumann, Dirk; Li, Er-Ping
2011-03-01
Wireless body area network (WBAN) is a new enabling system with promising applications in areas such as remote health monitoring and interpersonal communication. Reliable and optimum design of a WBAN system relies on a good understanding and in-depth studies of the wave propagation around a human body. However, the human body is a very complex structure and is computationally demanding to model. This paper aims to investigate the effects of the numerical model's structure complexity and feature details on the simulation results. Depending on the application, a simplified numerical model that meets desired simulation accuracy can be employed for efficient simulations. Measurements of ultra wideband (UWB) signal propagation along a human arm are performed and compared to the simulation results obtained with numerical arm models of different complexity levels. The influence of the arm shape and size, as well as tissue composition and complexity is investigated.
Canis familiaris As a Model for Non-Invasive Comparative Neuroscience.
Bunford, Nóra; Andics, Attila; Kis, Anna; Miklósi, Ádám; Gácsi, Márta
2017-07-01
There is an ongoing need to improve animal models for investigating human behavior and its biological underpinnings. The domestic dog (Canis familiaris) is a promising model in cognitive neuroscience. However, before it can contribute to advances in this field in a comparative, reliable, and valid manner, several methodological issues warrant attention. We review recent non-invasive canine neuroscience studies, primarily focusing on (i) variability among dogs and between dogs and humans in cranial characteristics, and (ii) generalizability across dog and dog-human studies. We argue not for methodological uniformity but for functional comparability between methods, experimental designs, and neural responses. We conclude that the dog may become an innovative and unique model in comparative neuroscience, complementing more traditional models. Copyright © 2017 Elsevier Ltd. All rights reserved.
Simplified human model and pedestrian simulation in the millimeter-wave region
NASA Astrophysics Data System (ADS)
Han, Junghwan; Kim, Seok; Lee, Tae-Yun; Ka, Min-Ho
2016-02-01
The 24 GHz and 77 GHz radar sensors have been studied as strong candidates for advanced driver assistance systems (ADAS) because of their all-weather capability and accurate range and radial velocity measurement. However, developing a reliable pedestrian recognition system faces many obstacles due to the inaccurate and non-trivial radar responses at these high frequencies and the many combinations of clothes and accessories. To overcome these obstacles, many researchers have used electromagnetic (EM) simulation to characterize the radar scattering response of a human. However, human simulation takes a long time because a human is electrically very large in the millimeter-wave region. To reduce simulation time, some researchers have assumed that the skin of a human is a perfect electric conductor (PEC) and have simulated the PEC human model using a physical optics (PO) algorithm, without a specific explanation of how the human body could be modeled as PEC. In this study, the validity of the assumption that the surface of the human body can be considered PEC in EM simulation is verified, and the simulation result of a dry-skin human model is compared with that of the PEC human model.
USDA-ARS?s Scientific Manuscript database
The Ogallala aquifer is the only reliable source of water in the southern High Plains (SHP) region of Texas, New Mexico and Oklahoma. Groundwater availability has fostered a strong agricultural economy that has a significant impact on global food security. Groundwater models that not only capture ...
Reliable, evaluated human exposure and dose models are important for understanding the health risks from chemicals. A case study focusing on permethrin was conducted because of this insecticide’s widespread use and potential health effects. SHEDS-Multimedia was applied to estimat...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lon N. Haney; David I. Gertman
2003-04-01
Beginning in the 1980s, a primary focus of human reliability analysis was estimation of human error probabilities. However, detailed qualitative modeling with comprehensive representation of contextual variables often was lacking. This was likely due to the lack of comprehensive error and performance shaping factor taxonomies, and the limited data available on observed error rates and their relationship to specific contextual variables. In the mid-1990s, Boeing, America West Airlines, NASA Ames Research Center and INEEL partnered in a NASA-sponsored Advanced Concepts grant to: assess the state of the art in human error analysis, identify future needs for human error analysis, and develop an approach addressing these needs. Identified needs included a method to identify and prioritize task and contextual characteristics affecting human reliability. Other needs identified included developing comprehensive taxonomies to support detailed qualitative modeling and to structure meaningful data collection efforts across domains. A result was the development of the FRamework Assessing Notorious Contributing Influences for Error (FRANCIE) with a taxonomy for airline maintenance tasks. The assignment of performance shaping factors to generic errors by experts proved to be valuable to qualitative modeling. Performance shaping factors and error types from such detailed approaches can be used to structure error reporting schemes. In a recent NASA Advanced Human Support Technology grant, FRANCIE was refined, and two new taxonomies for use on space missions were developed. The development, sharing, and use of error taxonomies, and the refinement of approaches for increased fidelity of qualitative modeling, are offered as a means to help direct useful data collection strategies.
Williams, Kent E; Voigt, Jeffrey R
2004-01-01
The research reported herein presents the results of an empirical evaluation that focused on the accuracy and reliability of cognitive models created using a computerized tool: the cognitive analysis tool for human-computer interaction (CAT-HCI). A sample of participants, expert in interacting with a newly developed tactical display for the U.S. Army's Bradley Fighting Vehicle, individually modeled their knowledge of 4 specific tasks employing the CAT-HCI tool. Measures of the accuracy and consistency of task models created by these task domain experts using the tool were compared with task models created by a double expert. The findings indicated a high degree of consistency and accuracy between the different "single experts" in the task domain in terms of the resultant models generated using the tool. Actual or potential applications of this research include assessing human-computer interaction complexity, determining the productivity of human-computer interfaces, and analyzing an interface design to determine whether methods can be automated.
Peña, Estefania; Calvo, B; Martínez, M A; Martins, P; Mascarenhas, T; Jorge, R M N; Ferreira, A; Doblaré, M
2010-02-01
In this paper, the viscoelastic mechanical properties of vaginal tissue are investigated. Using previous results of the authors on the mechanical properties of biological soft tissues and new experimental data from uniaxial tension tests, a new model for the viscoelastic mechanical properties of human vaginal tissue is proposed. The structural model appears sufficiently accurate to guarantee its application to the prediction of reliable stress distributions, and is suitable for finite element computations. The obtained results may be helpful in the design of surgical procedures with autologous tissue or prostheses.
Weighted integration of short-term memory and sensory signals in the oculomotor system.
Deravet, Nicolas; Blohm, Gunnar; de Xivry, Jean-Jacques Orban; Lefèvre, Philippe
2018-05-01
Oculomotor behaviors integrate sensory and prior information to overcome sensory-motor delays and noise. After much debate about this process, reliability-based integration has recently been proposed, and several models of smooth pursuit now include recurrent Bayesian integration or Kalman filtering. However, there is a lack of behavioral evidence in humans supporting these theoretical predictions. Here, we independently manipulated the reliability of visual and prior information in a smooth pursuit task. Our results show that both smooth pursuit eye velocity and catch-up saccade amplitude were modulated by visual and prior information reliability. We interpret these findings as the continuous reliability-based integration of a short-term memory of target motion with visual information, which supports this modeling work. Furthermore, we suggest that the saccadic and pursuit systems share this short-term memory. We propose that this short-term memory of target motion is quickly built and continuously updated, and constitutes a general building block present in all sensorimotor systems.
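Reliability-based integration of this kind has a standard closed form: each source is weighted by its inverse variance. A minimal Python sketch follows, with made-up numbers standing in for the prior (memory of target motion) and the visual estimate; the study's recurrent, time-resolved models are more elaborate than this single-step fusion.

```python
def integrate(estimates, variances):
    """Reliability-weighted (inverse-variance) combination of cues,
    the standard Bayesian/MLE fusion rule."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * x for w, x in zip(weights, estimates)) / total
    fused_var = 1.0 / total  # fused estimate is more reliable than either cue
    return fused, fused_var

# Prior (memory) says 10 deg/s, vision says 14 deg/s. When visual
# reliability is degraded, the response is pulled toward the prior.
print(integrate([10.0, 14.0], [1.0, 4.0]))  # vision unreliable -> 10.8
print(integrate([10.0, 14.0], [4.0, 1.0]))  # vision reliable   -> 13.2
```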
¹H MRS characterization of neurochemical profiles in orthotopic mouse models of human brain tumors.
Hulsey, Keith M; Mashimo, Tomoyuki; Banerjee, Abhishek; Soesbe, Todd C; Spence, Jeffrey S; Vemireddy, Vamsidhara; Maher, Elizabeth A; Bachoo, Robert M; Choi, Changho
2015-01-01
Glioblastoma (GBM), the most common primary brain tumor, is resistant to currently available treatments. The development of mouse models of human GBM has provided a tool for studying mechanisms involved in tumor initiation and growth as well as a platform for preclinical investigation of new drugs. In this study we used ¹H MR spectroscopy to study the neurochemical profile of a human orthotopic tumor (HOT) mouse model of human GBM. The goal of this study was to evaluate differences in metabolite concentrations in the GBM HOT mice when compared with normal mouse brain in order to determine if MRS could reliably differentiate tumor from normal brain. A TE = 19 ms PRESS sequence at 9.4 T was used for measuring metabolite levels in 12 GBM mice and 8 healthy mice. Levels for 12 metabolites and for lipids/macromolecules at 0.9 ppm and at 1.3 ppm were reliably detected in all mouse spectra. The tumors had significantly lower concentrations of total creatine, GABA, glutamate, total N-acetylaspartate, aspartate, lipids/macromolecules at 0.9 ppm, and lipids/macromolecules at 1.3 ppm than did the brains of normal mice. The concentrations of glycine and lactate, however, were significantly higher in tumors than in normal brain. Copyright © 2014 John Wiley & Sons, Ltd.
Extraction and representation of common feature from uncertain facial expressions with cloud model.
Wang, Shuliang; Chi, Hehua; Yuan, Hanning; Geng, Jing
2017-12-01
Human facial expressions are a key ingredient in conveying an individual's innate emotions in communication. However, the variation of facial expressions affects the reliable identification of human emotions. In this paper, we present a cloud model to extract facial features for representing human emotion. First, the uncertainties in facial expression are analyzed in the context of the cloud model. The feature extraction and representation algorithm is established under cloud generators. With a forward cloud generator, facial expression images can be regenerated in any number to visually represent the three extracted features, and each feature plays a different role. The effectiveness of the computing model is tested on the Japanese Female Facial Expression database. Three common features are extracted from seven facial expression images. Finally, the paper concludes with remarks.
10 CFR 712.19 - Removal from HRP.
Code of Federal Regulations, 2010 CFR
2010-01-01
... OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability Program... immediately remove that individual from HRP duties pending a determination of the individual's reliability. A... HRP duties pending a determination of the individual's reliability is an interim, precautionary action...
Prioritizing Conservation of Ungulate Calving Resources in Multiple-Use Landscapes
Dzialak, Matthew R.; Harju, Seth M.; Osborn, Robert G.; Wondzell, John J.; Hayden-Wing, Larry D.; Winstead, Jeffrey B.; Webb, Stephen L.
2011-01-01
Background Conserving animal populations in places where human activity is increasing is an ongoing challenge in many parts of the world. We investigated how human activity interacted with maternal status and individual variation in behavior to affect reliability of spatially-explicit models intended to guide conservation of critical ungulate calving resources. We studied Rocky Mountain elk (Cervus elaphus) that occupy a region where 2900 natural gas wells have been drilled. Methodology/Principal Findings We present novel applications of generalized additive modeling to predict maternal status based on movement, and of random-effects resource selection models to provide population and individual-based inference on the effects of maternal status and human activity. We used a 2×2 factorial design (treatment vs. control) that included elk that were either parturient or non-parturient and in areas either with or without industrial development. Generalized additive models predicted maternal status (parturiency) correctly 93% of the time based on movement. Human activity played a larger role than maternal status in shaping resource use; elk showed strong spatiotemporal patterns of selection or avoidance and marked individual variation in developed areas, but no such pattern in undeveloped areas. This difference had direct consequences for landscape-level conservation planning. When relative probability of use was calculated across the study area, there was disparity throughout 72–88% of the landscape in terms of where conservation intervention should be prioritized depending on whether models were based on behavior in developed areas or undeveloped areas. Model validation showed that models based on behavior in developed areas had poor predictive accuracy, whereas the model based on behavior in undeveloped areas had high predictive accuracy. Conclusions/Significance By directly testing for differences between developed and undeveloped areas, and by modeling resource selection in a random-effects framework that provided individual-based inference, we conclude that: 1) amplified selection or avoidance behavior and individual variation, as responses to increasing human activity, complicate conservation planning in multiple-use landscapes, and 2) resource selection behavior in places where human activity is predictable or less dynamic may provide a more reliable basis from which to prioritize conservation action. PMID:21297866
The SACADA database for human reliability and human performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Y. James Chang; Dennis Bley; Lawrence Criscione
2014-05-01
Lack of appropriate and sufficient human performance data has been identified as a key factor affecting human reliability analysis (HRA) quality especially in the estimation of human error probability (HEP). The Scenario Authoring, Characterization, and Debriefing Application (SACADA) database was developed by the U.S. Nuclear Regulatory Commission (NRC) to address this data need. An agreement between NRC and the South Texas Project Nuclear Operating Company (STPNOC) was established to support the SACADA development with aims to make the SACADA tool suitable for implementation in the nuclear power plants' operator training program to collect operator performance information. The collected data would support the STPNOC's operator training program and be shared with the NRC for improving HRA quality. This paper discusses the SACADA data taxonomy, the theoretical foundation, the prospective data to be generated from the SACADA raw data to inform human reliability and human performance, and the considerations on the use of simulator data for HRA. Each SACADA data point consists of two information segments: context and performance results. Context is a characterization of the performance challenges to task success. The performance results are the results of performing the task. The data taxonomy uses a macrocognitive functions model for the framework. At a high level, information is classified according to the macrocognitive functions of detecting the plant abnormality, understanding the abnormality, deciding the response plan, executing the response plan, and team related aspects (i.e., communication, teamwork, and supervision). The data are expected to be useful for analyzing the relations between context, error modes and error causes in human performance.
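To make the two-segment structure of a SACADA data point concrete, here is a hypothetical Python rendering. The field names and taxonomy strings are illustrative assumptions, not the NRC tool's actual schema; only the context/performance split and the macrocognitive-function framing come from the abstract.

```python
from dataclasses import dataclass, field
from typing import List

# Macrocognitive functions named in the abstract.
MACROCOGNITIVE_FUNCTIONS = [
    "detecting", "understanding", "deciding", "executing", "team",
]

@dataclass
class SacadaDataPoint:
    # Context segment: characterization of performance challenges.
    function: str                        # one of MACROCOGNITIVE_FUNCTIONS
    challenges: List[str] = field(default_factory=list)
    # Performance-results segment: outcome of performing the task.
    outcome: str = "satisfactory"        # e.g. satisfactory / unsatisfactory
    error_modes: List[str] = field(default_factory=list)

point = SacadaDataPoint(
    function="understanding",
    challenges=["masked indication", "high workload"],
    outcome="unsatisfactory",
    error_modes=["incorrect diagnosis"],
)
```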
Reliability-Weighted Integration of Audiovisual Signals Can Be Modulated by Top-down Attention
Noppeney, Uta
2018-01-01
Behaviorally, it is well established that human observers integrate signals near-optimally weighted in proportion to their reliabilities as predicted by maximum likelihood estimation. Yet, despite abundant behavioral evidence, it is unclear how the human brain accomplishes this feat. In a spatial ventriloquist paradigm, participants were presented with auditory, visual, and audiovisual signals and reported the location of the auditory or the visual signal. Combining psychophysics, multivariate functional MRI (fMRI) decoding, and models of maximum likelihood estimation (MLE), we characterized the computational operations underlying audiovisual integration at distinct cortical levels. We estimated observers’ behavioral weights by fitting psychometric functions to participants’ localization responses. Likewise, we estimated the neural weights by fitting neurometric functions to spatial locations decoded from regional fMRI activation patterns. Our results demonstrate that low-level auditory and visual areas encode predominantly the spatial location of the signal component of a region’s preferred auditory (or visual) modality. By contrast, intraparietal sulcus forms spatial representations by integrating auditory and visual signals weighted by their reliabilities. Critically, the neural and behavioral weights and the variance of the spatial representations depended not only on the sensory reliabilities as predicted by the MLE model but also on participants’ modality-specific attention and report (i.e., visual vs. auditory). These results suggest that audiovisual integration is not exclusively determined by bottom-up sensory reliabilities. Instead, modality-specific attention and report can flexibly modulate how intraparietal sulcus integrates sensory signals into spatial representations to guide behavioral responses (e.g., localization and orienting). PMID:29527567
Lohith, Talakad G; Zoghbi, Sami S; Morse, Cheryl L; Araneta, Maria D Ferraris; Barth, Vanessa N; Goebl, Nancy A; Tauscher, Johannes T; Pike, Victor W; Innis, Robert B; Fujita, Masahiro
2014-02-15
[¹¹C]NOP-1A is a novel high-affinity PET ligand for imaging nociceptin/orphanin FQ peptide (NOP) receptors. Here, we report reproducibility and reliability measures of binding parameter estimates for [¹¹C]NOP-1A binding in the brain of healthy humans. After intravenous injection of [¹¹C]NOP-1A, PET scans were conducted twice on eleven healthy volunteers on the same (10/11 subjects) or different (1/11 subjects) days. Subjects underwent serial sampling of radial arterial blood to measure parent radioligand concentrations. Distribution volume (VT; a measure of receptor density) was determined by compartmental (one- and two-tissue) modeling in large regions and by simpler regression methods (graphical Logan and bilinear MA1) in both large regions and voxel data. Retest variability and intraclass correlation coefficient (ICC) of VT were determined as measures of reproducibility and reliability respectively. Regional [¹¹C]NOP-1A uptake in the brain was high, with a peak radioactivity concentration of 4-7 SUV (standardized uptake value) and a rank order of putamen>cingulate cortex>cerebellum. Brain time-activity curves fitted well in 10 of 11 subjects by unconstrained two-tissue compartmental model. The retest variability of VT was moderately good across brain regions except cerebellum, and was similar across different modeling methods, averaging 12% for large regions and 14% for voxel-based methods. The retest reliability of VT was also moderately good in most brain regions, except thalamus and cerebellum, and was similar across different modeling methods averaging 0.46 for large regions and 0.48 for voxels having gray matter probability >20%. The lowest retest variability and highest retest reliability of VT were achieved by compartmental modeling for large regions, and by the parametric Logan method for voxel-based methods. Moderately good reproducibility and reliability measures of VT make [¹¹C]NOP-1A a useful PET ligand for comparing NOP receptor binding between different subject groups or under different conditions in the same subject. Copyright © 2013. Published by Elsevier Inc.
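The two test-retest statistics used above have simple standard definitions; the sketch below computes them for hypothetical VT values. The subject data are invented, and the ICC variant shown (one-way random-effects ICC(1,1)) is an assumption about the paper's exact formulation.

```python
import numpy as np

def retest_variability(test, retest):
    """Mean absolute test-retest difference as a percentage of the
    pairwise mean, a common reproducibility index for PET VT."""
    test, retest = np.asarray(test, float), np.asarray(retest, float)
    return np.mean(2.0 * np.abs(test - retest) / (test + retest)) * 100.0

def icc_oneway(test, retest):
    """One-way random-effects ICC(1,1) for two sessions per subject."""
    x = np.column_stack([test, retest]).astype(float)
    n, k = x.shape
    grand = x.mean()
    bms = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)               # between-subject MS
    wms = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))  # within-subject MS
    return (bms - wms) / (bms + (k - 1) * wms)

# Hypothetical regional VT values from two scans of the same subjects:
vt_test   = [12.1, 10.3, 14.8,  9.9, 11.5]
vt_retest = [11.4, 10.9, 13.6, 10.4, 12.2]
print(f"variability: {retest_variability(vt_test, vt_retest):.1f}%")
print(f"ICC(1,1):    {icc_oneway(vt_test, vt_retest):.2f}")
```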
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Shuai; Xiong, Lihua; Li, Hong-Yi
2015-05-26
Hydrological simulations to delineate the impacts of climate variability and human activities are subject to uncertainties related to both the parameters and the structure of the hydrological models. To analyze the impact of these uncertainties on model performance and to yield more reliable simulation results, a global calibration and multimodel combination method was proposed that integrates the Shuffled Complex Evolution Metropolis (SCEM) and Bayesian Model Averaging (BMA) of four monthly water balance models. The method was applied to the Weihe River Basin (WRB), the largest tributary of the Yellow River, to determine the contribution of climate variability and human activities to runoff changes. The change point, which was used to determine the baseline period (1956-1990) and human-impacted period (1991-2009), was derived using both the cumulative curve and Pettitt's test. Results show that the combination method with SCEM provides more skillful deterministic predictions than the best calibrated individual model, resulting in the smallest uncertainty interval of runoff changes attributed to climate variability and human activities. This combination methodology provides a practical and flexible tool for the attribution of runoff changes to climate variability and human activities with hydrological models.
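The deterministic BMA combination step itself reduces to a weighted average of member-model simulations. A minimal sketch with placeholder weights follows; in practice the weights are posterior model probabilities estimated from the calibration data (e.g., by EM), not fixed by hand as here.

```python
import numpy as np

def bma_mean(predictions, weights):
    """Deterministic BMA prediction: posterior-probability-weighted
    average of member-model outputs.
    predictions: (n_models, n_time) array of simulated monthly runoff;
    weights: posterior model probabilities summing to 1."""
    return np.asarray(weights) @ np.asarray(predictions)

q = np.array([[3.2, 2.8, 4.1],    # water balance model A
              [3.6, 2.5, 4.4],    # water balance model B
              [2.9, 3.0, 3.8]])   # water balance model C
w = np.array([0.5, 0.3, 0.2])     # placeholder posterior weights
print(bma_mean(q, w))             # combined monthly runoff series
```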
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joe, Jeffrey Clark; Boring, Ronald Laurids; Herberger, Sarah Elizabeth Marie
The United States (U.S.) Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) program has the overall objective to help sustain the existing commercial nuclear power plants (NPPs). To accomplish this program objective, there are multiple LWRS “pathways,” or research and development (R&D) focus areas. One LWRS focus area is called the Risk-Informed Safety Margin and Characterization (RISMC) pathway. Initial efforts under this pathway to combine probabilistic and plant multi-physics models to quantify safety margins and support business decisions also included HRA, but in a somewhat simplified manner. HRA experts at Idaho National Laboratory (INL) have been collaborating with other experts to develop a computational HRA approach, called the Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER), for inclusion into the RISMC framework. The basic premise of this research is to leverage applicable computational techniques, namely simulation and modeling, to develop and then, using RAVEN as a controller, seamlessly integrate virtual operator models (HUNTER) with 1) the dynamic computational MOOSE runtime environment that includes a full-scope plant model, and 2) the RISMC framework PRA models already in use. The HUNTER computational HRA approach is a hybrid approach that leverages past work from cognitive psychology, human performance modeling, and HRA, but it is also a significant departure from existing static and even dynamic HRA methods. This report is divided into five chapters that cover the development of an external flooding event test case and associated statistical modeling considerations.
MicroRNA Biomarkers of Toxicity in Biological Matrices
Biomarker measurements that reliably correlate with tissue injury and can be measured from sampling accessible biofluids offer enormous benefits in terms of cost, time, and convenience when assessing environmental and drug-induced toxicity in model systems or human cohorts. Micro...
A Framework for Reliability and Safety Analysis of Complex Space Missions
NASA Technical Reports Server (NTRS)
Evans, John W.; Groen, Frank; Wang, Lui; Austin, Rebekah; Witulski, Art; Mahadevan, Nagabhushan; Cornford, Steven L.; Feather, Martin S.; Lindsey, Nancy
2017-01-01
Long duration and complex mission scenarios are characteristics of NASA's human exploration of Mars, and will provide unprecedented challenges. Systems reliability and safety will become increasingly demanding and management of uncertainty will be increasingly important. NASA's current pioneering strategy recognizes and relies upon assurance of crew and asset safety. In this regard, flexibility to develop and innovate in the emergence of new design environments and methodologies, encompassing modeling of complex systems, is essential to meet the challenges.
Zabiniakov, N A; Prashchayeu, K I; Ryzhak, G A; Poltorackij, A N; Anosova, E I; Azarow, K S
2016-01-01
The investigation of reactive changes of blood cells in diseases such as COPD or asthma in people of different age groups is a very difficult problem. Simulating in animals the same conditions that occur in humans with these diseases can provide a reliable practical model, because changes at the cellular level in animals may reflect a similar trend in the human body.
NDE reliability and probability of detection (POD) evolution and paradigm shift
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Surendra
2014-02-18
The subject of NDE Reliability and POD has gone through multiple phases since its humble beginning in the late 1960s. This was followed by several programs, including the important one nicknamed “Have Cracks – Will Travel,” or in short “Have Cracks,” by Lockheed Georgia Company for the US Air Force during 1974–1978. This and other studies ultimately led to a series of developments in the field of reliability and POD, starting from the introduction of fracture mechanics and Damage Tolerant Design (DTD), to the statistical framework of Berens and Hovey in 1981 for POD estimation, to MIL-STD HDBK 1823 (1999) and 1823A (2009). During the last decade, various groups and researchers have further studied reliability and POD using Model Assisted POD (MAPOD), Simulation Assisted POD (SAPOD), and Bayesian statistics. Each of these developments had one objective, i.e., improving the accuracy of life prediction in components, which to a large extent depends on the reliability and capability of NDE methods. Therefore, it is essential to have reliable detection and sizing of large flaws in components. Currently, POD is used for studying the reliability and capability of NDE methods, though POD data offer no absolute truth regarding NDE reliability, i.e., system capability, effects of flaw morphology, and quantification of human factors. Furthermore, reliability and POD have often been treated as alike in meaning, but POD is not NDE reliability. POD is a subset of reliability that consists of six phases: 1) sample selection using DOE, 2) NDE equipment setup and calibration, 3) System Measurement Evaluation (SME) including Gage Repeatability and Reproducibility (Gage R and R) and Analysis Of Variance (ANOVA), 4) NDE system capability and electronic and physical saturation, 5) acquiring and fitting data to a model, and data analysis, and 6) POD estimation. This paper provides an overview of all major POD milestones of the last several decades and discusses the rationale for using Integrated Computational Materials Engineering (ICME), MAPOD, SAPOD, and Bayesian statistics for studying controllable and non-controllable variables, including human factors, when estimating POD. Another objective is to list the gaps between “hoped for” performance and the performance validated or observed in fielded failed hardware.
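As an illustration of the POD modeling lineage described above, a minimal hit/miss-style sketch in the spirit of MIL-HDBK-1823A follows. The log-logistic parameters are invented rather than fitted to data, and the a90/95 confidence bound that the handbook adds on top of a90 is omitted.

```python
import math

def pod(a, mu=0.0, sigma=0.4):
    """Probability of detection vs. flaw size a, using the log-logistic
    link commonly applied to hit/miss data (pi/sqrt(3) scaling makes
    sigma comparable to a normal standard deviation)."""
    z = (math.log(a) - mu) / sigma
    return 1.0 / (1.0 + math.exp(-math.pi * z / math.sqrt(3.0)))

# a90: the flaw size detected with 90% probability, found by bisection
# (pod is monotonically increasing in a).
lo, hi = 1e-3, 100.0
while hi - lo > 1e-6:
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if pod(mid) < 0.90 else (lo, mid)
print(f"a90 ~ {lo:.3f} (same units as a; parameters are illustrative)")
```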
Human Reliability and the Cost of Doing Business
NASA Technical Reports Server (NTRS)
DeMott, Diana
2014-01-01
Most businesses recognize that people will make mistakes and assume errors are just part of the cost of doing business, but do they need to be? Companies with high risk, or major consequences, should consider the effect of human error. In a variety of industries, human errors have caused costly failures and workplace injuries: airline mishaps, medical malpractice, errors in the administration of medication, and major oil spills have all been blamed on human error. A technique to mitigate or even eliminate some of these costly human errors is Human Reliability Analysis (HRA). Various methodologies are available to perform Human Reliability Assessments, ranging from identifying the most likely areas for concern to detailed assessments with calculated human error failure probabilities. Which methodology to use would be based on a variety of factors, including: 1) how people react and act in different industries, and differing expectations based on industry standards; 2) factors that influence how human errors could occur, such as tasks, tools, environment, workplace, support, training and procedures; 3) the type and availability of data; and 4) how the industry views risk and reliability influences (types of emergencies, contingencies and routine tasks versus cost-based concerns). A Human Reliability Assessment should be the first step to reduce, mitigate or eliminate costly mistakes or catastrophic failures. Using Human Reliability techniques to identify and classify human error risks gives a company more opportunities to mitigate or eliminate these risks and prevent costly failures.
Effects of imperfect automation on decision making in a simulated command and control task.
Rovira, Ericka; McGarry, Kathleen; Parasuraman, Raja
2007-02-01
Effects of four types of automation support and two levels of automation reliability were examined. The objective was to examine the differential impact of information and decision automation and to investigate the costs of automation unreliability. Research has shown that imperfect automation can lead to differential effects of stages and levels of automation on human performance. Eighteen participants performed a "sensor to shooter" targeting simulation of command and control. Dependent variables included accuracy and response time of target engagement decisions, secondary task performance, and subjective ratings of mental workload, trust, and self-confidence. Compared with manual performance, reliable automation significantly reduced decision times. Unreliable automation led to greater cost in decision-making accuracy under the higher automation reliability condition for three different forms of decision automation relative to information automation. At low automation reliability, however, there was a cost in performance for both information and decision automation. The results are consistent with a model of human-automation interaction that requires evaluation of the different stages of information processing to which automation support can be applied. If fully reliable decision automation cannot be guaranteed, designers should provide users with information automation support or other tools that allow for inspection and analysis of raw data.
Automated MRI Cerebellar Size Measurements Using Active Appearance Modeling
Price, Mathew; Cardenas, Valerie A.; Fein, George
2014-01-01
Although the human cerebellum has been increasingly identified as an important hub that shows potential for helping in the diagnosis of a large spectrum of disorders, such as alcoholism, autism, and fetal alcohol spectrum disorder, the high costs associated with manual segmentation, and low availability of reliable automated cerebellar segmentation tools, has resulted in a limited focus on cerebellar measurement in human neuroimaging studies. We present here the CATK (Cerebellar Analysis Toolkit), which is based on the Bayesian framework implemented in FMRIB’s FIRST. This approach involves training Active Appearance Models (AAM) using hand-delineated examples. CATK can currently delineate the cerebellar hemispheres and three vermal groups (lobules I–V, VI–VII, and VIII–X). Linear registration with the low-resolution MNI152 template is used to provide initial alignment, and Point Distribution Models (PDM) are parameterized using stellar sampling. The Bayesian approach models the relationship between shape and texture through computation of conditionals in the training set. Our method varies from the FIRST framework in that initial fitting is driven by 1D intensity profile matching, and the conditional likelihood function is subsequently used to refine fitting. The method was developed using T1-weighted images from 63 subjects that were imaged and manually labeled: 43 subjects were scanned once and were used for training models, and 20 subjects were imaged twice (with manual labeling applied to both runs) and used to assess reliability and validity. Intraclass correlation analysis shows that CATK is highly reliable (average test-retest ICCs of 0.96), and offers excellent agreement with the gold standard (average validity ICC of 0.87 against manual labels). Comparisons against an alternative atlas-based approach, SUIT (Spatially Unbiased Infratentorial Template), that registers images with a high-resolution template of the cerebellum, show that our AAM approach offers superior reliability and validity. Extensions of CATK to cerebellar hemisphere parcels is envisioned. PMID:25192657
Kong, Xiangzhen; Liu, Wenxiu; He, Wei; Xu, Fuliu; Koelmans, Albert A; Mooij, Wolf M
2018-06-01
Freshwater shallow lake ecosystems provide valuable ecological services to human beings. However, these systems are subject to severe contamination from anthropogenic sources. Per- and polyfluoroalkyl substances (PFASs), including perfluorooctanoic acid (PFOA) and perfluorooctane sulphonate (PFOS), are among the contaminants that have received substantial attention, primarily due to abundant applications, environment persistence, and potential threats to ecological and human health. Understanding the environmental behavior of these contaminants in shallow freshwater lake environments using a modeling approach is therefore critical. Here, we characterize the fate, transport and transformation of both PFOA and PFOS in the fifth largest freshwater lake in China (Chaohu) during a two-year period (2013-2015) using a fugacity-based multimedia fate model. A reasonable agreement between the measured and modeled concentrations in various compartments confirms the model's reliability. The model successfully quantifies the environmental processes and identifies the major sources and input pathways of PFOA and PFOS to the Chaohu water body. Sensitivity analysis reveals the critical role of nonlinear Freundlich sorption, which contributes to a variable fraction of the model true uncertainty in different compartments (8.1%-93.6%). Through additional model scenario analyses, we further elucidate the importance of nonlinear Freundlich sorption that is essential for the reliable model performance. We also reveal the distinct composition of emission sources for the two contaminants, as the major sources are indirect soil volatilization and direct release from human activities for PFOA and PFOS, respectively. The present study is expected to provide implications for local management of PFASs pollution in Lake Chaohu and to contribute to developing a general model framework for the evaluation of PFASs in shallow lakes. Copyright © 2018 Elsevier Ltd. All rights reserved.
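The nonlinear Freundlich isotherm the authors single out is compact enough to state directly. A sketch with placeholder parameters follows (these are not the fitted Lake Chaohu values); it shows why a linear-sorption assumption misestimates sediment burdens when n differs from 1.

```python
def freundlich(c_water, k_f=0.5, n=0.7):
    """Freundlich sorption: sorbed concentration q = K_F * C^n.
    With n < 1, the apparent solid/water partition ratio q/C falls
    as the aqueous concentration rises, unlike linear sorption."""
    return k_f * c_water ** n

# Apparent partition ratio drops by two orders of magnitude over this range:
for c in (0.01, 0.1, 1.0, 10.0):
    print(f"C = {c:5.2f}  q = {freundlich(c):.4f}  q/C = {freundlich(c) / c:.3f}")
```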
A Synthetic Vision Preliminary Integrated Safety Analysis
NASA Technical Reports Server (NTRS)
Hemm, Robert; Houser, Scott
2001-01-01
This report documents efforts to analyze a sample of aviation safety programs, using the LMI-developed integrated safety analysis tool to determine the change in system risk resulting from Aviation Safety Program (AvSP) technology implementation. Specifically, we have worked to modify existing system safety tools to address the safety impact of synthetic vision (SV) technology. Safety metrics include reliability, availability, and resultant hazard. This analysis of SV technology is intended to be part of a larger effort to develop a model that is capable of "providing further support to the product design and development team as additional information becomes available". The reliability analysis portion of the effort is complete and is fully documented in this report. The simulation analysis is still underway; it will be documented in a subsequent report. The specific goal of this effort is to apply the integrated safety analysis to SV technology. This report also contains a brief discussion of data necessary to expand the human performance capability of the model, as well as a discussion of human behavior and its implications for system risk assessment in this modeling environment.
Radiation Measurements Performed with Active Detectors Relevant for Human Space Exploration
Narici, Livio; Berger, Thomas; Matthiä, Daniel; Reitz, Günther
2015-01-01
A reliable radiation risk assessment in space is a mandatory step for the development of countermeasures and long-duration mission planning in human spaceflight. Research in radiobiology provides information about possible risks linked to radiation. In addition, for a meaningful risk evaluation, the radiation exposure has to be assessed to a sufficient level of accuracy. Consequently, both the radiation models predicting the risks and the measurements used to validate such models must have an equivalent precision. Corresponding measurements can be performed both with passive and active devices. The former is easier to handle, cheaper, lighter, and smaller but they measure neither the time dependence of the radiation environment nor some of the details useful for a comprehensive radiation risk assessment. Active detectors provide most of these details and have been extensively used in the International Space Station. To easily access such an amount of data, a single point access is becoming essential. This review presents an ongoing work on the development of a tool that allows obtaining information about all relevant measurements performed with active detectors providing reliable inputs for radiation model validation. PMID:26697408
NASA Astrophysics Data System (ADS)
Borzí, Alfio; Caponigro, Marco
2016-09-01
The formulation of mathematical models for crowd dynamics is a current challenge in many fields of applied science. It involves modeling the complex behavior of a large number of individuals. In particular, the difficulty lies in describing emerging collective behaviors by means of a relatively small number of local interaction rules between individuals in a crowd. Clearly, the individual's free will involved in decision-making processes and in the management of social interactions cannot be described by a finite number of deterministic rules. On the other hand, in large crowds, this individual indeterminacy can be considered a local fluctuation averaged to zero by the size of the crowd. While at the microscopic scale, using a system of coupled ODEs, free will should be included in the mathematical description (e.g. with a stochastic term), the mesoscopic and macroscopic scales, modeled by PDEs, represent a powerful modeling tool that allows this feature to be neglected while still providing a reliable description. In this sense, the work by Bellomo, Clarke, Gibelli, Townsend, and Vreugdenhil [2] represents a mathematical-epistemological contribution towards the design of a reliable model of human behavior.
Tian, Feifei; Tan, Rui; Guo, Tailin; Zhou, Peng; Yang, Li
2013-07-01
Domain-peptide recognition and interaction are fundamentally important for eukaryotic signaling and regulatory networks. It is thus essential to quantitatively infer the binding stability and specificity of such interactions based upon large-scale but low-accuracy complex structure models, which can be readily obtained from sophisticated molecular modeling procedures. In the present study, a new method is described for the fast and reliable prediction of domain-peptide binding affinity from coarse-grained structure models. This method is designed to tolerate the strong random noise involved in domain-peptide complex structures and uses a statistical modeling approach to eliminate systematic bias associated with a group of investigated samples. As a paradigm, this method was employed to model and predict the binding behavior of various peptides to four evolutionarily unrelated peptide-recognition domains (PRDs), i.e. human amph SH3, human nherf PDZ, yeast syh GYF and yeast bmh 14-3-3; moreover, we explored the molecular mechanism and biological implications underlying the binding of cognate and noncognate peptide ligands to their domain receptors. It is expected that the newly proposed method could be further used to perform genome-wide inference of domain-peptide binding at the three-dimensional structure level. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Denys Yemshanov; Frank H. Koch; Mark J. Ducey; Marty Siltanen; Kirsty Wilson; Klaus Koehler
2013-01-01
Long-distance introductions of alien species are often driven by socioeconomic factors, such that conventional "biological" invasion models may not be capable of estimating spread fully and reliably. In this study, we demonstrate a new technique for assessing and reconstructing human-mediated pathways of alien forest species entries to major settlements in Canada via...
2001-09-01
structure model, motion model, physical model, and possibly many other characteristics depending on the application [Ref. 4]. While the film industry has … applications. The film industry relies on this technology almost exclusively, as it is highly reliable under controlled conditions. Since optical tracking … Wavefront. Maya has been used extensively in the film industry to provide lifelike animation, and is adept at handling 3D objects [Ref. 27].
Deziel, Mark R.; Heine, Henry; Louie, Arnold; Kao, Mark; Byrne, William R.; Basset, Jennifer; Miller, Lynda; Bush, Karen; Kelly, Michael; Drusano, G. L.
2005-01-01
Expanded options for treatments directed against pathogens that can be used for bioterrorism are urgently needed. Treatment regimens directed against such pathogens can be identified only by using data derived from in vitro and animal studies. It is crucial that these studies reliably predict the efficacy of proposed treatments in humans. The objective of this study was to identify a levofloxacin treatment regimen that will serve as an effective therapy for Bacillus anthracis infections and postexposure prophylaxis. An in vitro hollow-fiber infection model that replicates the pharmacokinetic profile of levofloxacin observed in humans (half-life [t1/2], 7.5 h) or in animals, such as the mouse or the rhesus monkey (t1/2, ∼2 h), was used to evaluate a proposed indication for levofloxacin (500 mg once daily) for the treatment of Bacillus anthracis infections. The results obtained with the in vitro model served as the basis for the doses and the dose schedules that were evaluated in the mouse inhalational anthrax model. The effects of levofloxacin and ciprofloxacin treatment were compared to those of no treatment (untreated controls). The main outcome measure in the in vitro hollow-fiber infection model was a persistent reduction of culture density (≥4 log10 reduction) and prevention of the emergence of levofloxacin-resistant organisms. In the mouse inhalational anthrax model the main outcome measure was survival. The results indicated that levofloxacin given once daily with simulated human pharmacokinetics effectively sterilized Bacillus anthracis cultures. By using a simulated animal pharmacokinetic profile, a once-daily dosing regimen that provided a human-equivalent exposure failed to sterilize the cultures. Dosing regimens that “partially humanized” levofloxacin exposures within the constraints of animal pharmacokinetics reproduced the antimicrobial efficacy seen with human pharmacokinetics. In a mouse inhalational anthrax model, once-daily dosing was significantly inferior (survival end point) to regimens of dosing every 12 h or every 6 h with identical total daily levofloxacin doses. These results demonstrate the predictive value of the in vitro hollow-fiber infection model with respect to the success or the failure of treatment regimens in animals. Furthermore, the model permits the evaluation of treatment regimens that “humanize” antibiotic exposures in animal models, enhancing the confidence with which animal models may be used to reliably predict the efficacies of proposed antibiotic treatments in humans in situations (e.g., the release of pathogens as agents of bioterrorism or emerging infectious diseases) where human trials cannot be performed. A treatment regimen effective in rhesus monkeys was identified. PMID:16304178
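The pharmacokinetic contrast at the heart of this design, a human half-life of 7.5 h versus roughly 2 h in the animal models, can be illustrated with a one-compartment repeated-bolus sketch. Dose, volume of distribution, and schedule below are placeholders chosen only to show why once-daily dosing collapses trough concentrations under the short half-life.

```python
import math

def profile(t_half_h, dose_mg=500.0, vd_l=100.0, interval_h=24, n_doses=3):
    """Concentration-time course (mg/L) for repeated one-compartment
    bolus dosing, sampled every hour."""
    k = math.log(2.0) / t_half_h          # first-order elimination rate
    conc = []
    for t in range(interval_h * n_doses + 1):
        c = sum(dose_mg / vd_l * math.exp(-k * (t - d * interval_h))
                for d in range(n_doses) if t >= d * interval_h)
        conc.append(c)
    return conc

human = profile(7.5)   # human-like elimination
mouse = profile(2.0)   # murine-like elimination
# Trough just before the second dose: with the short half-life the
# concentration collapses, which is why once-daily murine dosing failed.
print(f"23-h trough, human-like: {human[23]:.2f} mg/L; mouse-like: {mouse[23]:.4f} mg/L")
```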
Taheriyoun, Masoud; Moradinejad, Saber
2015-01-01
The reliability of a wastewater treatment plant is a critical issue when the effluent is reused or discharged to water resources. The main factors affecting the performance of a wastewater treatment plant are the variation of the influent, inherent variability in the treatment processes, deficiencies in design, mechanical equipment failures, and operational failures. Thus, meeting the established reuse/discharge criteria requires assessment of plant reliability. Among the many techniques developed for system reliability analysis, fault tree analysis (FTA) is one of the most popular and efficient methods. FTA is a top-down, deductive failure analysis in which an undesired state of a system is analyzed. In this study, reliability was studied for the Tehran West Town wastewater treatment plant. This plant is a conventional activated sludge process, and the effluent is reused in landscape irrigation. The fault tree diagram was established with violation of the allowable effluent BOD as the top event, and the deficiencies of the system were identified based on the developed model. Some basic events are operator error, physical damage, and design problems. The analytical methods used are minimal cut sets (based on numerical probability) and Monte Carlo simulation. Basic event probabilities were calculated according to available data and experts' opinions. The results showed that human factors, especially human error, had a great effect on top event occurrence. The mechanical, climate, and sewer system factors were in the subsequent tier. The literature shows that FTA has seldom been used in past wastewater treatment plant (WWTP) risk analysis studies. Thus, the FTA model developed in this study considerably improves insight into causal failure analysis of a WWTP. It provides an efficient tool for WWTP operators and decision makers to achieve the standard limits in wastewater reuse and discharge to the environment.
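The minimal-cut-set quantification mentioned above reduces to a simple product-of-complements bound over the cut sets. A hedged sketch follows; the basic events echo those named in the abstract, but the probabilities and cut-set structure are invented, not the plant's elicited values.

```python
# Illustrative basic-event probabilities (placeholders, not elicited values).
basic = {
    "operator_error": 0.05,
    "physical_damage": 0.01,
    "design_problem": 0.02,
    "mechanical_failure": 0.03,
}

# Each minimal cut set is a conjunction of basic events whose joint
# occurrence causes the top event (effluent BOD violation).
cut_sets = [
    ("operator_error",),
    ("physical_damage", "mechanical_failure"),
    ("design_problem", "mechanical_failure"),
]

def cut_set_prob(cs):
    p = 1.0
    for event in cs:
        p *= basic[event]  # independence assumed
    return p

# Min-cut-set upper-bound approximation for the top event probability.
p_top = 1.0
for cs in cut_sets:
    p_top *= 1.0 - cut_set_prob(cs)
p_top = 1.0 - p_top
print(f"P(top event) ~ {p_top:.4f}")  # dominated here by operator_error
```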
de Brugerolle, Anne
2007-01-01
SkinEthic Laboratories is a France-based biotechnology company recognised as the world leader in tissue engineering. SkinEthic is devoted to developing and producing reliable and robust in vitro alternatives to animal use in the cosmetic, chemical and pharmaceutical industries. SkinEthic models provide relevant tools for efficacy and safety screening tests in order to support integrated decision-making during research and development phases. Some screening tests are referenced and validated as alternatives to animal use (Episkin); others are in the process of validation under ECVAM and OECD guidelines. SkinEthic laboratories provide a unique, joint experience of more than 20 years from Episkin SNC and SkinEthic SA. Their unique cell culture process allows in vitro reconstructed human tissues with well-characterized histology, functionality and ultrastructural features to be mass produced. Our product line includes skin models: a reconstructed human epidermis with a collagen layer (Episkin), reconstructed human epidermis without or with melanocytes (with a tanning degree from phototype II to VI), and reconstructed human epithelia, i.e. cornea and other mucosae (oral, gingival, oesophageal and vaginal). Our philosophy is based on 3 main commitments: to support our customers by providing robust and reliable models; to ensure training and education in using validated protocols, allowing a large array of raw materials, active ingredients and finished products in solid, liquid, powder, cream or gel form to be screened; and to provide a dedicated service to our partners.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bley, D.C.; Cooper, S.E.; Forester, J.A.
ATHEANA, a second-generation Human Reliability Analysis (HRA) method, integrates advances in psychology with engineering, human factors, and Probabilistic Risk Analysis (PRA) disciplines to provide an HRA quantification process and PRA modeling interface that can accommodate and represent human performance in real nuclear power plant events. The method uses the characteristics of serious accidents identified through retrospective analysis of serious operational events to set priorities in a search process for significant human failure events, unsafe acts, and error-forcing context (unfavorable plant conditions combined with negative performance-shaping factors). ATHEANA has been tested in a demonstration project at an operating pressurized water reactor.
2016-07-14
applicability of the sensor model in the context under consideration. A similar information flow can be considered for obtaining direct reliability of an... [Remaining extraction fragments: modeling concepts for human intelligence simulation; use cases including Army operations in megacities and the Syrian civil war, and Navy piracy and autonomous ISR; reference fragment: Bex, F. and Verheij, B., "Story Schemes for Argumentation about the Facts of a Crime," Computational Models of Narrative: Papers from the...]
A stochastic evolutionary model generating a mixture of exponential distributions
NASA Astrophysics Data System (ADS)
Fenner, Trevor; Levene, Mark; Loizou, George
2016-02-01
Recent interest in human dynamics has stimulated the investigation of the stochastic processes that explain human behaviour in various contexts, such as mobile phone networks and social media. In this paper, we extend the stochastic urn-based model proposed in [T. Fenner, M. Levene, G. Loizou, J. Stat. Mech. 2015, P08015 (2015)] so that it can generate mixture models, in particular, a mixture of exponential distributions. The model is designed to capture the dynamics of survival analysis, traditionally employed in clinical trials, reliability analysis in engineering, and more recently in the analysis of large data sets recording human dynamics. The mixture modelling approach, which is relatively simple and well understood, is very effective in capturing heterogeneity in data. We provide empirical evidence for the validity of the model, using a data set of popular search engine queries collected over a period of 114 months. We show that the survival function of these queries is closely matched by the exponential mixture solution for our model.
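For concreteness, the survival function of such an exponential mixture is S(t) = w1*exp(-lambda1*t) + w2*exp(-lambda2*t). A short sketch with illustrative weights and rates follows; these are not the parameters fitted to the query data in the paper.

```python
import numpy as np

def survival(t, weights=(0.7, 0.3), rates=(0.10, 0.01)):
    """Survival function of an exponential mixture:
    S(t) = sum_i w_i * exp(-lambda_i * t)."""
    t = np.asarray(t, dtype=float)
    return sum(w * np.exp(-lam * t) for w, lam in zip(weights, rates))

months = np.arange(0, 120, 12)  # horizon comparable to the 114-month data set
print(np.round(survival(months), 3))
# The slow component (rate 0.01/month) dominates the tail: heterogeneity
# across queries produces a heavier-than-exponential survival tail.
```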
Simulation: Moving from Technology Challenge to Human Factors Success
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gould, Derek A., E-mail: dgould@liv.ac.uk; Chalmers, Nicholas; Johnson, Sheena J.
2012-06-15
Recognition of the many limitations of traditional apprenticeship training is driving new approaches to learning medical procedural skills. Among simulation technologies and methods available today, computer-based systems are topical and bring the benefits of automated, repeatable, and reliable performance assessments. Human factors research is central to simulator model development that is relevant to real-world imaging-guided interventional tasks and to the credentialing programs in which it would be used.
A joint-space numerical model of metabolic energy expenditure for human multibody dynamic system.
Kim, Joo H; Roberts, Dustyn
2015-09-01
Metabolic energy expenditure (MEE) is a critical performance measure of human motion. In this study, a general joint-space numerical model of MEE is derived by integrating the laws of thermodynamics and principles of multibody system dynamics, which can evaluate MEE without the limitations inherent in experimental measurements (phase delays, steady state and task restrictions, and limited range of motion) or muscle-space models (complexities and indeterminacies from excessive DOFs, contacts and wrapping interactions, and reliance on in vitro parameters). Muscle energetic components are mapped to the joint space, in which the MEE model is formulated. A constrained multi-objective optimization algorithm is established to estimate the model parameters from experimental walking data also used for initial validation. The joint-space parameters estimated directly from active subjects provide reliable MEE estimates with a mean absolute error of 3.6 ± 3.6% relative to validation values, which can be used to evaluate MEE for complex non-periodic tasks that may not be experimentally verifiable. This model also enables real-time calculations of instantaneous MEE rate as a function of time for transient evaluations. Although experimental measurements may not be completely replaced by model evaluations, predicted quantities can be used as strong complements to increase reliability of the results and yield unique insights for various applications. Copyright © 2015 John Wiley & Sons, Ltd.
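The paper's actual joint-space formulation and estimated parameters are not given in the abstract; the sketch below only illustrates the general idea of evaluating an instantaneous metabolic rate from joint torques and angular velocities, using placeholder Margaria-style efficiencies for positive and negative power.

```python
import numpy as np

# Toy joint-space metabolic-rate estimate: concentric (positive) joint
# power is charged at ~1/0.25 efficiency, eccentric (negative) power at
# ~1/1.20 (Margaria-style values). The basal term and all numbers are
# illustrative placeholders, not the parameters estimated in the paper.
def mee_rate(torques, omegas, basal=80.0, eff_pos=0.25, eff_neg=1.20):
    p = np.asarray(torques) * np.asarray(omegas)      # joint powers, W
    pos = np.clip(p, 0.0, None).sum()
    neg = np.abs(np.clip(p, None, 0.0)).sum()
    return basal + pos / eff_pos + neg / eff_neg      # metabolic rate, W

# Instantaneous rate across one (synthetic) stride, then total energy.
t = np.linspace(0.0, 1.0, 101)                        # stride time, s
tau0 = np.array([40.0, 60.0, 20.0])                   # hip/knee/ankle, N*m
omega = np.array([1.0, -0.5, 2.0])                    # joint velocities, rad/s
rates = np.array([mee_rate(tau0 * np.sin(2 * np.pi * ti), omega) for ti in t])
stride_J = ((rates[1:] + rates[:-1]) / 2 * np.diff(t)).sum()  # trapezoid rule
print(f"mean rate {rates.mean():.1f} W, stride cost {stride_J:.1f} J")
```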
Human systems dynamics: Toward a computational model
NASA Astrophysics Data System (ADS)
Eoyang, Glenda H.
2012-09-01
A robust and reliable computational model of complex human systems dynamics could support advancements in theory and practice for social systems at all levels, from intrapersonal experience to global politics and economics. Models of human interactions have evolved from traditional, Newtonian systems assumptions, which served a variety of practical and theoretical needs of the past. Another class of models has been inspired and informed by models and methods from nonlinear dynamics, chaos, and complexity science. None of the existing models, however, is able to represent the open, high dimension, and nonlinear self-organizing dynamics of social systems. An effective model will represent interactions at multiple levels to generate emergent patterns of social and political life of individuals and groups. Existing models and modeling methods are considered and assessed against characteristic pattern-forming processes in observed and experienced phenomena of human systems. A conceptual model, CDE Model, based on the conditions for self-organizing in human systems, is explored as an alternative to existing models and methods. While the new model overcomes the limitations of previous models, it also provides an explanatory base and foundation for prospective analysis to inform real-time meaning making and action taking in response to complex conditions in the real world. An invitation is extended to readers to engage in developing a computational model that incorporates the assumptions, meta-variables, and relationships of this open, high dimension, and nonlinear conceptual model of the complex dynamics of human systems.
A research perspective on white-tailed deer overabundance in the northeastern United States
William M. Healy; David S. deCalesta; Susan L. Stout
1997-01-01
Resolving issues of deer (Odocoileus spp.) overabundance will require gaining more reliable knowledge about their role in ecosystem dynamics. Science can contribute by advancing knowledge in four overlapping spheres of research: model development, measurement techniques, population management, and human behavior.
Observing Consistency in Online Communication Patterns for User Re-Identification.
Adeyemi, Ikuesan Richard; Razak, Shukor Abd; Salleh, Mazleena; Venter, Hein S
2016-01-01
Comprehension of the statistical and structural mechanisms governing human dynamics in online interaction plays a pivotal role in online user identification, online profile development, and recommender systems. However, building a characteristic model of human dynamics on the Internet involves a complete analysis of the variations in human activity patterns, which is a complex process. This complexity is inherent in human dynamics and has not been extensively studied to reveal the structural composition of human behavior. A typical way to dissect such a complex system is to examine each of the independent interconnections that constitute the complexity. An examination of the various dimensions of human communication patterns in online interactions is presented in this paper. The study employed reliable server-side web data from 31 known users to explore characteristics of human-driven communications. Various machine-learning techniques were explored. The results revealed that each individual exhibited a relatively consistent, unique behavioral signature and that the logistic regression model and model tree can be used to accurately distinguish online users. These results are applicable to one-to-one online user identification processes, insider misuse investigation processes, and online profiling in various areas.
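As a toy illustration of the re-identification idea (not the study's features or data), one can train a multinomial logistic regression on per-session behavioral features and measure how often held-out sessions are attributed to the right user.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic per-session features for 31 users (e.g. mean inter-request
# time, session length, pages per session) -- stand-ins for the
# server-side web-log features used in the study.
n_users, sessions = 31, 60
centers = rng.normal(0, 1, (n_users, 3))          # each user's "signature"
X = np.vstack([c + 0.3 * rng.normal(0, 1, (sessions, 3)) for c in centers])
y = np.repeat(np.arange(n_users), sessions)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0,
                                      stratify=y)
clf = LogisticRegression(max_iter=2000).fit(Xtr, ytr)
print(f"re-identification accuracy: {clf.score(Xte, yte):.2f}")
```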
Psikuta, Agnes; Kuklane, Kalev; Bogdan, Anna; Havenith, George; Annaheim, Simon; Rossi, René M
2016-03-01
Combining the strengths of an advanced mathematical model of human physiology and a thermal manikin is a new paradigm for simulating thermal behaviour of humans. However, the forerunners of such adaptive manikins showed some substantial limitations. This project aimed to determine the opportunities and constraints of the existing thermal manikins when dynamically controlled by a mathematical model of human thermal physiology. Four thermal manikins were selected and evaluated for their heat flux measurement uncertainty including lateral heat flows between manikin body parts and the response of each sector to the frequent change of the set-point temperature typical when using a physiological model for control. In general, all evaluated manikins are suitable for coupling with a physiological model with some recommendations for further improvement of manikin dynamic performance. The proposed methodology is useful to improve the performance of the adaptive manikins and help to provide a reliable and versatile tool for the broad research and development domain of clothing, automotive and building engineering.
A Subject-Specific Kinematic Model to Predict Human Motion in Exoskeleton-Assisted Gait.
Torricelli, Diego; Cortés, Camilo; Lete, Nerea; Bertelsen, Álvaro; Gonzalez-Vargas, Jose E; Del-Ama, Antonio J; Dimbwadyo, Iris; Moreno, Juan C; Florez, Julian; Pons, Jose L
2018-01-01
The relative motion between human and exoskeleton is a crucial factor that has remarkable consequences on the efficiency, reliability and safety of human-robot interaction. Unfortunately, its quantitative assessment has been largely overlooked in the literature. Here, we present a methodology that allows predicting the motion of the human joints from the knowledge of the angular motion of the exoskeleton frame. Our method combines a subject-specific skeletal model with a kinematic model of a lower limb exoskeleton (H2, Technaid), imposing specific kinematic constraints between them. To calibrate the model and validate its ability to predict the relative motion in a subject-specific way, we performed experiments on seven healthy subjects during treadmill walking tasks. We demonstrate a prediction accuracy lower than 3.5° globally, and around 1.5° at the hip level, which represents an improvement of up to 66% compared to the traditional approach assuming no relative motion between the user and the exoskeleton.
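The constrained skeletal/exoskeleton model itself is beyond an abstract-level sketch; as a simplified stand-in, the snippet below calibrates a per-joint linear map from exoskeleton encoder angle to human joint angle and compares it against the traditional no-relative-motion assumption. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Calibration data: exoskeleton hip-encoder angle vs. human hip angle
# (deg) from a treadmill trial. The linear map is a simplified stand-in
# for the constrained skeletal/exoskeleton model used in the paper.
exo = np.linspace(-20, 30, 200)
human = 0.85 * exo - 2.0 + rng.normal(0, 1.0, exo.size)  # soft-tissue offset

# Least-squares fit of human = gain * exo + offset.
A = np.column_stack([exo, np.ones_like(exo)])
(gain, offset), *_ = np.linalg.lstsq(A, human, rcond=None)

pred = gain * exo + offset
rmse = np.sqrt(np.mean((pred - human) ** 2))
naive = np.sqrt(np.mean((exo - human) ** 2))  # assume no relative motion
print(f"calibrated RMSE {rmse:.2f} deg vs naive {naive:.2f} deg")
```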
Mahato, Niladri K; Montuelle, Stephane; Cotton, John; Williams, Susan; Thomas, James; Clark, Brian
2016-05-18
Single or biplanar video radiography and Roentgen stereophotogrammetry (RSA) techniques used for the assessment of in-vivo joint kinematics involve the application of ionizing radiation, which is a limitation for clinical research involving human subjects. To overcome this limitation, our long-term goal is to develop a magnetic resonance imaging (MRI)-only, three-dimensional (3-D) modeling technique that permits dynamic imaging of joint motion in humans. Here, we present our initial findings, as well as reliability data, for an MRI-only protocol and modeling technique. We developed a morphology-based motion-analysis technique that uses MRI of custom-built solid-body objects to animate and quantify experimental displacements between them. The technique involved four major steps. First, the imaging volume was calibrated using a custom-built grid. Second, 3-D models were segmented from axial scans of two custom-built solid-body cubes. Third, these cubes were positioned at pre-determined relative displacements (translation and rotation) in the magnetic resonance coil and scanned with T1 and fast contrast-enhanced pulse sequences. The digital imaging and communications in medicine (DICOM) images were then processed for animation. The fourth step involved importing these processed images into animation software, where they were displayed as background scenes. In the same step, 3-D models of the cubes were imported into the animation software, where the user manipulated the models to match their outlines in the scene (rotoscoping) and registered the models into an anatomical joint system. Measurements of displacements obtained from two different rotoscoping sessions were tested for reliability using coefficients of variation (CV), intraclass correlation coefficients (ICC), Bland-Altman plots, and Limits of Agreement analyses. Between-session reliability was high for both the T1 and the contrast-enhanced sequences. Specifically, the average CVs for translation were 4.31 % and 5.26 % for the two pulse sequences, respectively, while the ICCs were 0.99 for both. For rotation measures, the CVs were 3.19 % and 2.44 % for the two pulse sequences with the ICCs being 0.98 and 0.97, respectively. A novel biplanar imaging approach also yielded high reliability with mean CVs of 2.66 % and 3.39 % for translation in the x- and z-planes, respectively, and ICCs of 0.97 in both planes. This work provides basic proof-of-concept for a reliable marker-less non-ionizing-radiation-based quasi-dynamic motion quantification technique that can potentially be developed into a tool for real-time joint kinematics analysis.
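The reliability statistics named above are straightforward to compute. Here is a minimal sketch with illustrative numbers (not the study's data), showing Bland-Altman bias and limits of agreement plus a within-pair coefficient of variation for two rotoscoping sessions.

```python
import numpy as np

# Two rotoscoping sessions' translation measurements (mm) for the same
# trials -- illustrative numbers only.
s1 = np.array([2.10, 4.05, 5.98, 8.02, 10.11, 12.05])
s2 = np.array([2.04, 4.12, 6.10, 7.95, 10.02, 12.20])

# Bland-Altman: bias and 95% limits of agreement.
diff = s1 - s2
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"bias {bias:+.3f} mm, limits of agreement "
      f"{bias - loa:.3f} to {bias + loa:.3f} mm")

# Within-pair coefficient of variation, as a percent:
# within-subject SD for a pair is |difference| / sqrt(2).
pair_sd = np.abs(diff) / np.sqrt(2)
cv = 100 * (pair_sd / ((s1 + s2) / 2)).mean()
print(f"mean CV {cv:.2f} %")
```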
Impedances of the ear estimated with intracochlear pressures in normal human temporal bones
NASA Astrophysics Data System (ADS)
Frear, Darcy; Guan, Xiying; Stieger, Christof; Nakajima, Hideko Heidi
2018-05-01
We have measured intracochlear pressures and velocities of stapes and round window (RW) evoked by air conduction (AC) stimulation in many fresh human cadaveric specimens. Our techniques have improved through the years to ensure reliable pressure sensor measurements in the scala vestibuli and scala tympani. Using these measurements, we have calculated impedances of the middle and inner ear (cochlear partition, RW, and physiological leakage impedance in scala vestibuli) to create a lumped element model. Our model simulates our data and allows us to understand the mechanisms involved in air-conducted sound transmission. In the future this model will be used as a tool to understand transmission mechanisms of various stimuli and to help create more sophisticated models of the ear.
NASA Astrophysics Data System (ADS)
Li, Lin; Zeng, Li; Lin, Zi-Jing; Cazzell, Mary; Liu, Hanli
2015-05-01
Test-retest reliability of neuroimaging measurements is an important concern in the investigation of cognitive functions in the human brain. To date, intraclass correlation coefficients (ICCs), originally used in inter-rater reliability studies in the behavioral sciences, have become commonly used metrics in reliability studies on neuroimaging and functional near-infrared spectroscopy (fNIRS). However, as there are six popular forms of ICC, how thoroughly they are understood affects how one selects, uses, and interprets ICCs in a reliability study. We first offer a brief review and tutorial on the statistical rationale of ICCs, including their underlying analysis of variance models and technical definitions, in the context of assessing intertest reliability. Second, we provide general guidelines on the selection and interpretation of ICCs. Third, we illustrate the proposed approach by using an actual research study to assess intertest reliability of fNIRS-based, volumetric diffuse optical tomography of brain activities stimulated by a risk decision-making protocol. Last, special issues that may arise in reliability assessment using ICCs are discussed and solutions are suggested.
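As a concrete companion to the tutorial, here is one common form, ICC(2,1) (two-way random effects, absolute agreement, single measures), computed directly from its ANOVA mean squares; the data matrix is illustrative.

```python
import numpy as np

def icc_2_1(Y):
    """ICC(2,1): two-way random effects, absolute agreement, single measures.

    Y: (n subjects) x (k sessions) matrix of measurements.
    """
    n, k = Y.shape
    grand = Y.mean()
    row_m, col_m = Y.mean(axis=1), Y.mean(axis=0)
    ms_r = k * ((row_m - grand) ** 2).sum() / (n - 1)   # between subjects
    ms_c = n * ((col_m - grand) ** 2).sum() / (k - 1)   # between sessions
    sse = ((Y - row_m[:, None] - col_m[None, :] + grand) ** 2).sum()
    ms_e = sse / ((n - 1) * (k - 1))                    # residual
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Illustrative test-retest data: 8 subjects x 2 sessions.
Y = np.array([[5.1, 5.3], [6.2, 6.0], [7.8, 7.9], [4.9, 5.2],
              [6.8, 6.5], [5.9, 6.1], [7.1, 7.3], [4.4, 4.6]])
print(f"ICC(2,1) = {icc_2_1(Y):.3f}")
```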
QSAR models of human data can enrich or replace LLNA testing for human skin sensitization
Alves, Vinicius M.; Capuzzi, Stephen J.; Muratov, Eugene; Braga, Rodolpho C.; Thornton, Thomas; Fourches, Denis; Strickland, Judy; Kleinstreuer, Nicole; Andrade, Carolina H.; Tropsha, Alexander
2016-01-01
Skin sensitization is a major environmental and occupational health hazard. Although many chemicals have been evaluated in humans, there have been no efforts to model these data to date. We have compiled, curated, analyzed, and compared the available human and LLNA data. Using these data, we have developed reliable computational models and applied them for virtual screening of chemical libraries to identify putative skin sensitizers. The overall concordance between murine LLNA and human skin sensitization responses for a set of 135 unique chemicals was low (R = 28-43%), although several chemical classes had high concordance. We developed predictive QSAR models of all available human data with an external correct classification rate (CCR) of 71%. A consensus model integrating concordant QSAR predictions and LLNA results afforded a higher CCR of 82%, but at the expense of reduced external dataset coverage (52%). We used the developed QSAR models for virtual screening of the CosIng database and identified 1061 putative skin sensitizers; for seventeen of these compounds, we found published evidence of their skin sensitization effects. The models reported herein provide a more accurate alternative to LLNA testing for human skin sensitization assessment across diverse chemical data. In addition, they can also be used to guide the structural optimization of toxic compounds to reduce their skin sensitization potential. PMID:28630595
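The modeling pipeline can be sketched generically: fit a classifier on molecular descriptors and report CCR (the mean of sensitivity and specificity) on an external set. The descriptor matrix, labels, and random-forest choice below are stand-ins, not the paper's curated data or QSAR workflow.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Stand-in descriptors (0/1 structural fingerprints) and stand-in
# sensitizer / non-sensitizer labels.
X = rng.integers(0, 2, size=(400, 64)).astype(float)
y = (X[:, :8].sum(axis=1) + rng.normal(0, 1, 400) > 4).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0,
                                      stratify=y)
model = RandomForestClassifier(n_estimators=300, random_state=0).fit(Xtr, ytr)

# CCR here = mean of sensitivity and specificity, i.e. balanced accuracy
# on the held-out "external" set.
print(f"external CCR: {balanced_accuracy_score(yte, model.predict(Xte)):.2f}")
```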
NASA Astrophysics Data System (ADS)
Xin, Chen; Huang, Ji-Ping
2017-12-01
Agent-based modeling and controlled human experiments serve as two fundamental research methods in the field of econophysics. Agent-based modeling has been in development for over 20 years, but how to design virtual agents with high levels of human-like "intelligence" remains a challenge. On the other hand, experimental econophysics is an emerging field; however, there is a lack of experience and paradigms related to the field. Here, we review some of the most recent research results obtained through the use of these two methods concerning financial problems such as chaos, leverage, and business cycles. We also review the principles behind assessments of agents' intelligence levels, and some relevant designs for human experiments. The main theme of this review is to show that by combining theory, agent-based modeling, and controlled human experiments, one can garner more reliable and credible results on account of a better verification of theory; accordingly, this way, a wider range of economic and financial problems and phenomena can be studied.
Mapping the ecological networks of microbial communities.
Xiao, Yandong; Angulo, Marco Tulio; Friedman, Jonathan; Waldor, Matthew K; Weiss, Scott T; Liu, Yang-Yu
2017-12-11
Mapping the ecological networks of microbial communities is a necessary step toward understanding their assembly rules and predicting their temporal behavior. However, existing methods require assuming a particular population dynamics model, which is not known a priori. Moreover, those methods require fitting longitudinal abundance data, which are often not informative enough for reliable inference. To overcome these limitations, here we develop a new method based on steady-state abundance data. Our method can infer the network topology and inter-taxa interaction types without assuming any particular population dynamics model. Additionally, when the population dynamics is assumed to follow the classic Generalized Lotka-Volterra model, our method can infer the inter-taxa interaction strengths and intrinsic growth rates. We systematically validate our method using simulated data, and then apply it to four experimental data sets. Our method represents a key step towards reliable modeling of complex, real-world microbial communities, such as the human gut microbiota.
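For readers unfamiliar with the Generalized Lotka-Volterra (GLV) model mentioned above, a minimal simulation (illustrative growth rates and interaction matrix, not inferred values) shows how steady-state abundances constrain the interaction structure.

```python
import numpy as np

# Generalized Lotka-Volterra dynamics: dx_i/dt = x_i * (r_i + sum_j a_ij x_j)
# for a 3-taxon toy community; r and A are illustrative.
r = np.array([1.0, 0.8, 1.2])
A = np.array([[-1.0,  0.2, -0.3],
              [-0.2, -1.0,  0.1],
              [ 0.3, -0.1, -1.0]])

x = np.array([0.1, 0.1, 0.1])
dt, steps = 0.01, 5000
for _ in range(steps):                # forward-Euler integration
    x = x + dt * x * (r + A @ x)

print("steady-state abundances:", x.round(3))
# At an interior steady state x*, r + A @ x* = 0, so samples of x*
# constrain A -- the idea behind steady-state network inference.
print("residual r + A @ x:", (r + A @ x).round(4))
```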
10 CFR 712.12 - HRP implementation.
Code of Federal Regulations, 2012 CFR
2012-01-01
... DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability...) Report any observed or reported behavior or condition of another HRP-certified individual that could indicate a reliability concern, including those behaviors and conditions listed in § 712.13(c), to a...
Barsingerhorn, A D; Boonstra, F N; Goossens, H H L M
2017-02-01
Current stereo eye-tracking methods model the cornea as a sphere with one refractive surface. However, the human cornea is slightly aspheric and has two refractive surfaces. Here we used ray-tracing and the Navarro eye model to study how these optical properties affect the accuracy of different stereo eye-tracking methods. We found that pupil size, gaze direction and head position all influence the reconstruction of gaze. The resulting errors are on the order of ±1.0 degree even in the best case. This shows that stereo eye-tracking may be an option when reliable calibration is not possible, but the applied eye model should account for the actual optics of the cornea.
Autonomous Robot Navigation in Human-Centered Environments Based on 3D Data Fusion
NASA Astrophysics Data System (ADS)
Steinhaus, Peter; Strand, Marcus; Dillmann, Rüdiger
2007-12-01
Efficient navigation of mobile platforms in dynamic human-centered environments is still an open research topic. We have already proposed an architecture (MEPHISTO) for a navigation system that is able to fulfill the main requirements of efficient navigation: fast and reliable sensor processing, extensive global world modeling, and distributed path planning. Our architecture uses a distributed system of sensor processing, world modeling, and path planning units. In this article, we present implemented methods in the context of data fusion algorithms for 3D world modeling and real-time path planning. We also show results of the prototypic application of the system at the museum ZKM (center for art and media) in Karlsruhe.
Hug, T; Maurer, M
2012-01-01
Distributed (decentralized) wastewater treatment can, in many situations, be a valuable alternative to a centralized sewer network and wastewater treatment plant. However, it is critical for its acceptance whether the same overall treatment performance can be achieved without on-site staff, and whether its performance can be measured. In this paper we argue and illustrate that the system performance depends not only on the design performance and reliability of the individual treatment units, but also significantly on the monitoring scheme, i.e. on the reliability of the process information. For this purpose, we present a simple model of a fleet of identical treatment units. In this model, performance depends on four stochastic variables: the reliability of the treatment unit, the response time for the repair of failed units, the reliability of on-line sensors, and the frequency of routine inspections. The simulated scenarios show a significant difference between the true performance and the observations by the sensors and inspections. The results also illustrate the trade-off between investing in reactor and sensor technology and in human interventions in order to achieve a certain target performance. Modeling can quantify such effects and thereby support the identification of requirements for the centralized monitoring of distributed treatment units. The model approach is generic and can be extended and applied to various distributed wastewater treatment technologies and contexts.
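The fleet model lends itself to a compact Monte Carlo sketch. The four stochastic ingredients below mirror the ones listed in the abstract, with all rates as illustrative placeholders; the output shows the gap between true and observed availability.

```python
import numpy as np

rng = np.random.default_rng(4)

# Fleet of identical on-site treatment units with: random unit failures,
# a repair response time, imperfect on-line sensors, and periodic
# routine inspections. All rates are illustrative placeholders.
units, days = 200, 365
p_fail = 1 / 200                 # daily failure probability per unit
p_sensor = 0.8                   # sensor flags a failure when it happens
repair_delay = 7                 # days from detection to completed repair
inspect_every = 30               # inspections reveal hidden failures

fail_day = np.full(units, -1)    # -1: unit healthy
detect_day = np.full(units, -1)  # -1: failure (if any) not yet detected
true_down = observed_down = 0

for day in range(days):
    new = (fail_day < 0) & (rng.random(units) < p_fail)
    fail_day[new] = day
    sensed = new & (rng.random(units) < p_sensor)
    detect_day[sensed] = day
    if day % inspect_every == 0:                 # routine inspection
        hidden = (fail_day >= 0) & (detect_day < 0)
        detect_day[hidden] = day
    repaired = (detect_day >= 0) & (day - detect_day >= repair_delay)
    fail_day[repaired], detect_day[repaired] = -1, -1
    true_down += int((fail_day >= 0).sum())
    observed_down += int(((fail_day >= 0) & (detect_day >= 0)).sum())

print(f"true availability     {1 - true_down / (units * days):.3f}")
print(f"observed availability {1 - observed_down / (units * days):.3f}")
```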
NASA Astrophysics Data System (ADS)
Miao, Yongchun; Kang, Rongxue; Chen, Xuefeng
2017-12-01
In recent years, with the gradual extension of reliability research, the study of production system reliability has become a hot topic in various industries. A man-machine-environment system is a complex system composed of human factors, machinery equipment and the environment. The reliability of each individual factor must be analyzed before gradually transitioning to the study of three-factor reliability. Meanwhile, the dynamic relationships among man, machine and environment should be considered to establish an effective fuzzy evaluation mechanism that can truly and effectively analyze the reliability of such systems. In this paper, based on systems engineering, fuzzy theory, reliability theory, human error, environmental impact and machinery equipment failure theory, the reliabilities of the human factor, machinery equipment and environment of a chemical production system were studied by the method of fuzzy evaluation. Finally, the reliability of the man-machine-environment system was calculated to obtain a weighted result, which indicated that the reliability value of this chemical production system was 86.29. From the given evaluation domain it can be seen that the reliability of the integrated man-machine-environment system is in good status, and effective measures for further improvement were proposed according to the fuzzy calculation results.
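A fuzzy comprehensive evaluation of this kind reduces to weighting factor-level membership vectors and scoring the fused grades. The membership degrees, weights, and grade scores below are illustrative, not the paper's survey values.

```python
import numpy as np

# Fuzzy comprehensive evaluation sketch for a man-machine-environment
# system. Rows: factors (human, machine, environment); columns: grades
# (excellent / good / fair / poor). All numbers are illustrative.
R = np.array([[0.5, 0.3, 0.15, 0.05],    # human factor
              [0.6, 0.3, 0.08, 0.02],    # machinery equipment
              [0.4, 0.4, 0.15, 0.05]])   # environment
w = np.array([0.4, 0.35, 0.25])          # factor weights, sum to 1

b = w @ R                                 # fused membership over grades
grade_scores = np.array([95, 85, 70, 50]) # score attached to each grade
score = float(b @ grade_scores)
print("fused membership:", b.round(3))
print(f"weighted reliability score: {score:.2f}")  # cf. the paper's 86.29
```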
Ni, Ke; Liu, Ming; Zheng, Jian; Wen, Liyan; Chen, Qingyun; Xiang, Zheng; Lam, Kowk-Tai; Liu, Yinping; Chan, Godfrey Chi-Fung; Lau, Yu-Lung; Tu, Wenwei
2018-06-01
Pulmonary fibrosis is a chronic progressive lung disease with few treatments. Human mesenchymal stem cells (MSCs) have been shown to be beneficial in pulmonary fibrosis because they have immunomodulatory capacity. However, there is no reliable model to test the therapeutic effect of human MSCs in vivo. To mimic pulmonary fibrosis in humans, we established a novel bleomycin-induced pulmonary fibrosis model in humanized mice. With this model, the benefit of human MSCs in pulmonary fibrosis and the underlying mechanisms were investigated. In addition, the relevant parameters in patients with pulmonary fibrosis were examined. We demonstrate that human CD8 + T cells were critical for the induction of pulmonary fibrosis in humanized mice. Human MSCs could alleviate pulmonary fibrosis and improve lung function by suppressing bleomycin-induced human T-cell infiltration and proinflammatory cytokine production in the lungs of humanized mice. Importantly, alleviation of pulmonary fibrosis by human MSCs was mediated by the PD-1/programmed death-ligand 1 pathway. Moreover, abnormal PD-1 expression was found in circulating T cells and lung tissues of patients with pulmonary fibrosis. Our study supports the potential benefit of targeting the PD-1/programmed death-ligand 1 pathway in the treatment of pulmonary fibrosis.
NASA Astrophysics Data System (ADS)
Psikuta, Agnes; Mert, Emel; Annaheim, Simon; Rossi, René M.
2018-02-01
To evaluate the quality of new energy-saving and performance-supporting building and urban settings, thermal sensation and comfort models are often used. The accuracy of these models is related to accurate prediction of the human thermo-physiological response that, in turn, is highly sensitive to the local effect of clothing. This study aimed to develop an empirical regression model of the air gap thickness and the contact area in clothing to accurately simulate human thermal and perceptual response. The statistical model reliably predicted both parameters for 14 body regions based on clothing ease allowances. The effect of the standard error in air gap prediction on the thermo-physiological response was lower than the differences between healthy humans. It was demonstrated that currently used assumptions and methods for determination of the air gap thickness can produce a substantial error for all global, mean, and local physiological parameters, and hence, lead to false estimation of the resultant physiological state of the human body, thermal sensation, and comfort. Thus, this model may help researchers to strive for improvement of human thermal comfort, health, productivity, safety, and overall sense of well-being with simultaneous reduction of energy consumption and costs in the built environment.
Current status: Animal models of nausea
NASA Technical Reports Server (NTRS)
Fox, Robert A.
1991-01-01
The advantages and possible benefits of a valid, reliable animal model for nausea are discussed, and difficulties inherent to the development of a model are considered. A principal problem for developing models arises because nausea is a subjective sensation that can be identified only in humans. Several putative measures of nausea in animals are considered, with more detailed consideration directed to variation in cardiac rate, levels of vasopressin, and conditioned taste aversion. Demonstration that putative measures are associated with reported nausea in humans is proposed as a requirement for validating measures to be used in animal models. The necessity for a 'real-time' measure of nausea is proposed as an important factor for future research, and the need for improved understanding of the neuroanatomy underlying the emetic syndrome is discussed.
Rating the raters in a mixed model: An approach to deciphering the rater reliability
NASA Astrophysics Data System (ADS)
Shang, Junfeng; Wang, Yougui
2013-05-01
Rating the raters has attracted extensive attention in recent years. Ratings are quite complex in that subjective assessment and a number of criteria are involved in a rating system. Whenever human judgment is part of a rating, the inconsistency of ratings is a source of variance in scores, and it is therefore natural to verify the trustworthiness of ratings. Accordingly, estimating rater reliability is of great interest. To facilitate the evaluation of rater reliability in a rating system, we propose a mixed model in which the scores of the ratees offered by a rater are described with fixed effects determined by the ability of the ratees and random effects produced by the disagreement of the raters. In such a mixed model, we derive the posterior distribution of the rater random effects for their prediction. To make a quantitative decision in revealing unreliable raters, the predictive influence function (PIF) serves as a criterion that compares the posterior distributions of random effects between the full-data and rater-deleted data sets. The benchmark for this criterion is also discussed. The proposed methodology for deciphering rater reliability is investigated in multiple simulated and two real data sets.
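A simplified version of such a mixed model can be fitted with statsmodels: ratee ability enters as fixed effects, raters as random intercepts, and the predicted rater-level effects play the role that the posterior means of the random effects play in the paper (the PIF itself is not implemented here). All data are simulated.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)

# Simulated scores: ratee ability (fixed) + rater effect (random) + noise,
# with one deliberately erratic rater.
n_ratees, n_raters = 30, 6
ability = rng.normal(70, 8, n_ratees)
rater_bias = rng.normal(0, 2, n_raters)
rater_bias[-1] = 12.0                         # the unreliable rater

rows = [(f"s{i}", f"r{j}", ability[i] + rater_bias[j] + rng.normal(0, 3))
        for i in range(n_ratees) for j in range(n_raters)]
df = pd.DataFrame(rows, columns=["ratee", "rater", "score"])

# Random intercept per rater, fixed effects per ratee.
m = smf.mixedlm("score ~ C(ratee)", df, groups=df["rater"]).fit()
print(m.random_effects)   # the erratic rater's intercept stands out
```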
Hydrologic Design in the Anthropocene
NASA Astrophysics Data System (ADS)
Vogel, R. M.; Farmer, W. H.; Read, L.
2014-12-01
In an era dubbed the Anthropocene, the natural world is being transformed by a myriad of human influences. As anthropogenic impacts permeate hydrologic systems, hydrologists are challenged to fully account for such changes and develop new methods of hydrologic design. Deterministic watershed models (DWM), which can account for the impacts of changes in land use, climate and infrastructure, are becoming increasingly popular for the design of flood and/or drought protection measures. As with all models that are calibrated to existing datasets, DWMs are subject to model error or uncertainty. In practice, the model error component of DWM predictions is typically ignored, yet DWM simulations which ignore model error produce output which cannot reproduce the statistical properties of the observations they are intended to replicate. In the context of hydrologic design, we demonstrate how ignoring model error can lead to systematic downward bias in flood quantiles, upward bias in drought quantiles and upward bias in water supply yields. By reincorporating model error, we document how DWMs can be used to generate results that mimic actual observations and preserve their statistical behavior. In addition to the use of DWMs for improved predictions in a changing world, improved communication of risk and reliability is also needed. Traditional statements of risk and reliability in hydrologic design have been characterized by return periods, but such statements often assume that the annual probability of experiencing a design event remains constant throughout the project horizon. We document the general impact of nonstationarity on the average return period and reliability in the context of hydrologic design. Our analyses reveal that return periods do not provide meaningful expressions of the likelihood of future hydrologic events. Instead, knowledge of system reliability over future planning horizons can more effectively prepare society and communicate the likelihood of future hydrologic events of interest.
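The return-period argument can be made concrete in a few lines: under nonstationarity the annual exceedance probability drifts, so horizon reliability diverges from what the fixed "100-year" label suggests. The 2%-per-year trend below is an assumed illustration.

```python
import numpy as np

# Under stationarity, a T-year event has annual exceedance probability
# p = 1/T, and reliability over an n-year horizon is R = (1 - p)**n.
# Under nonstationarity, p drifts and R = prod_t (1 - p_t).
T, horizon = 100.0, 50
p0 = 1.0 / T

R_stationary = (1 - p0) ** horizon

trend = 0.02                                  # assumed 2%/yr increase in p
p_t = p0 * (1 + trend) ** np.arange(horizon)
R_nonstationary = np.prod(1 - p_t)

print(f"stationary 50-yr reliability:    {R_stationary:.3f}")
print(f"nonstationary 50-yr reliability: {R_nonstationary:.3f}")
# The "100-year" label stays fixed while the actual chance of surviving
# the planning horizon quietly drops -- the argument for reporting
# reliability over the horizon instead of return periods.
```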
Reliability of the Language ENvironment Analysis system (LENA™) in European French.
Canault, Mélanie; Le Normand, Marie-Thérèse; Foudil, Samy; Loundon, Natalie; Thai-Van, Hung
2016-09-01
In this study, we examined the accuracy of the Language ENvironment Analysis (LENA) system in European French. LENA is a digital recording device with software that facilitates the collection and analysis of audio recordings from young children, providing automated measures of the speech overheard and produced by the child. Eighteen native French-speaking children, who were divided into six age groups ranging from 3 to 48 months old, were recorded about 10-16 h per day, three days a week. A total of 324 samples (six 10-min chunks of recordings) were selected and then transcribed according to the CHAT format. Simple and mixed linear models between the LENA and human adult word count (AWC) and child vocalization count (CVC) estimates were performed, to determine to what extent the automatic and the human methods agreed. Both the AWC and CVC estimates were very reliable (r = .64 and .71, respectively) for the 324 samples. When controlling the random factors of participants and recordings, 1 h was sufficient to obtain a reliable sample. It was, however, found that two age groups (7-12 months and 13-18 months) had a significant effect on the AWC data and that the second day of recording had a significant effect on the CVC data. When noise-related factors were added to the model, only a significant effect of signal-to-noise ratio was found on the AWC data. All of these findings and their clinical implications are discussed, providing strong support for the reliability of LENA in French.
Zhou, Yong; Mu, Haiying; Jiang, Jianjun; Zhang, Li
2012-01-01
Currently, there is a trend in nuclear power plants (NPPs) toward introducing digital and computer technologies into main control rooms (MCRs). Safe generation of electric power in NPPs requires reliable performance of cognitive tasks such as fault detection, diagnosis, and response planning. The digitalization of MCRs has dramatically changed the whole operating environment and the ways operators interact with plant systems. If the design and implementation of the digital technology is incompatible with operators' cognitive characteristics, it may have negative effects on operators' cognitive reliability. First, on the basis of three essential prerequisites for successful cognitive tasks, a causal model is constructed to reveal the typical human performance issues arising from digitalization. The cognitive mechanisms by which these issues impact cognitive reliability are analyzed in detail. Then, Bayesian inference is used to quantify and prioritize the influences of these factors. The results suggest that interface management and unbalanced workload distribution have particularly significant impacts on operators' cognitive reliability.
Principle of maximum entropy for reliability analysis in the design of machine components
NASA Astrophysics Data System (ADS)
Zhang, Yimin
2018-03-01
We studied the reliability of machine components with parameters that follow an arbitrary statistical distribution using the principle of maximum entropy (PME). We used PME to select the statistical distribution that best fits the available information. We also established a probability density function (PDF) and a failure probability model for the parameters of mechanical components using the concept of entropy and the PME. We obtained the first four moments of the state function for reliability analysis and design. Furthermore, we attained an estimate of the PDF with the fewest human bias factors using the PME. This function was used to calculate the reliability of the machine components, including a connecting rod, a vehicle half-shaft, a front axle, a rear axle housing, and a leaf spring, which have parameters that typically follow a non-normal distribution. Simulations were conducted for comparison. This study provides a design methodology for the reliability of mechanical components for practical engineering projects.
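A minimal sketch of the PME step, assuming only first- and second-moment constraints on a bounded support: the maximum-entropy density then has exponential-family form, and its Lagrange multipliers minimize a convex dual. The moment targets and limit state below are illustrative, not values from the paper.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import minimize

# Among all densities on [a, b] with given E[x] and E[x^2], the
# maximum-entropy density is p(x) ~ exp(l1*x + l2*x^2); the multipliers
# l minimize the convex dual log Z(l) - l . m.
a, b = 0.0, 10.0
m = np.array([4.0, 18.5])                     # target E[x], E[x^2]
x = np.linspace(a, b, 2001)
feats = np.vstack([x, x**2])                  # moment feature functions

def dual(l):
    logz = np.log(trapezoid(np.exp(l @ feats), x))
    return logz - l @ m

res = minimize(dual, x0=np.zeros(2), method="Nelder-Mead")
l = res.x
p = np.exp(l @ feats)
p /= trapezoid(p, x)                          # normalized max-entropy PDF

print("multipliers:", l.round(4))
print("achieved moments:", trapezoid(p * x, x).round(3),
      trapezoid(p * x**2, x).round(3))
# Failure probability for a limit state x < x_cr follows by integration.
x_cr = 2.0
print("P(x < x_cr) =", trapezoid(p[x <= x_cr], x[x <= x_cr]).round(4))
```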
Global Assessment of Exploitable Surface Reservoir Storage under Climate Change
NASA Astrophysics Data System (ADS)
Liu, L.; Parkinson, S.; Gidden, M.; Byers, E.; Satoh, Y.; Riahi, K.
2016-12-01
Surface water reservoirs provide us with reliable water supply systems, hydropower generation, flood control, and recreation services. Reliable reservoirs can be robust measures for water security and can help smooth out challenging seasonal variability of river flows. Yet, reservoirs also cause flow fragmentation in rivers and can lead to flooding of upstream areas, thereby displacing existing land uses and ecosystems. The anticipated population growth, land use and climate change in many regions globally suggest a critical need to assess the potential for appropriate reservoir capacity that can balance rising demands with long-term water security. In this research, we assessed exploitable reservoir potential under climate change and human development constraints by deriving storage-yield relationships for 235 river basins globally. The storage-yield relationships map the amount of storage capacity required to meet a given water demand based on a 30-year inflow sequence. Runoff is simulated with an ensemble of Global Hydrological Models (GHMs) for each of five bias-corrected general circulation models (GCMs) under four climate change pathways. These data are used to define future 30-year inflows in each river basin for time periods between 2010 and 2080. The calculated capacity is then combined with geographical information on environmental and human development exclusion zones to further limit the storage capacity expansion potential in each basin. We investigated the reliability of reservoir potentials across different climate change scenarios and Shared Socioeconomic Pathways (SSPs) to identify river basins where reservoir expansion will be particularly challenging. Preliminary results suggest large disparities in reservoir potential across basins: some basins have already approached their exploitable reserves, while others display abundant potential. Exclusion zones have a significant impact on the amount of actually exploitable storage and firm yields worldwide: 30% of reservoir potential would be unavailable because of land occupation by environmental and human development. Results from this study will help decision makers to understand the reliability of infrastructure systems particularly sensitive to future water availability.
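Storage-yield relationships of this kind are classically derived with the sequent-peak algorithm, sketched below on synthetic seasonal inflows (units and numbers illustrative).

```python
import numpy as np

def required_storage(inflows, demand):
    """Sequent-peak algorithm: smallest reservoir capacity that sustains
    a constant `demand` against the given inflow sequence."""
    deficit, capacity = 0.0, 0.0
    for q in inflows:
        deficit = max(0.0, deficit + demand - q)  # running storage deficit
        capacity = max(capacity, deficit)
    return capacity

rng = np.random.default_rng(6)
# 30 years of monthly inflows with a seasonal cycle (illustrative units).
months = np.arange(360)
inflows = np.clip(100 + 60 * np.sin(2 * np.pi * months / 12)
                  + rng.normal(0, 20, 360), 0, None)

# Storage-yield curve: capacity needed for a range of firm yields.
for frac in (0.6, 0.7, 0.8, 0.9):
    y = frac * inflows.mean()
    print(f"yield {frac:.0%} of mean flow -> "
          f"storage {required_storage(inflows, y):8.1f}")
```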
Marostica, Eleonora; Van Ammel, Karel; Teisman, Ard; Gallacher, David; Van Bocxlaer, Jan; De Ridder, Filip; Boussery, Koen; Vermeulen, An
2016-07-01
Inhibiting the human ether-a-go-go-related gene (hERG)-encoded potassium ion channel is positively correlated with QT-interval prolongation in vivo, which is considered a risk factor for the occurrence of Torsades de Pointes (TdP). A pharmacokinetic/pharmacodynamic model was developed for four compounds that reached the clinic, to relate drug-induced QT-interval change in awake dogs and humans and to derive a translational scaling factor a1. Overall, dogs were more sensitive than humans to QT-interval change, an a1 of 1.5 was found, and a 10% current inhibition in vitro produced a higher percent QT-interval change in dogs as compared to humans. The QT-interval changes in dogs were predictive for humans. In vitro and in vivo information could reliably describe the effects in humans. Robust translational knowledge is likely to reduce the need for expensive thorough QT studies; therefore, expanding this work to more compounds is recommended.
NASA Astrophysics Data System (ADS)
Scarella, Gilles; Clatz, Olivier; Lanteri, Stéphane; Beaume, Grégory; Oudot, Steve; Pons, Jean-Philippe; Piperno, Sergo; Joly, Patrick; Wiart, Joe
2006-06-01
The ever-rising diffusion of cellular phones has brought about an increased concern for the possible consequences of electromagnetic radiation on human health. Possible thermal effects have been investigated, via experimentation or simulation, by several research projects in the last decade. Concerning numerical modeling, the power absorption in a user's head is generally computed using discretized models built from clinical MRI data. The vast majority of such numerical studies have been conducted using Finite Differences Time Domain methods, although strong limitations of their accuracy are due to heterogeneity, poor definition of the detailed structures of head tissues (staircasing effects), etc. In order to propose numerical modeling using Finite Element or Discontinuous Galerkin Time Domain methods, reliable automated tools for the unstructured discretization of human heads are also needed. Results presented in this article aim at filling the gap between human head MRI images and the accurate numerical modeling of wave propagation in biological tissues and its thermal effects. To cite this article: G. Scarella et al., C. R. Physique 7 (2006).
Inhibition of PDGFR signaling prevents muscular fatty infiltration after rotator cuff tear in mice.
Shirasawa, Hideyuki; Matsumura, Noboru; Shimoda, Masayuki; Oki, Satoshi; Yoda, Masaki; Tohmonda, Takahide; Kanai, Yae; Matsumoto, Morio; Nakamura, Masaya; Horiuchi, Keisuke
2017-01-31
Fatty infiltration in muscle is often observed in patients with sizable rotator cuff tear (RCT) and is thought to be an irreversible event that significantly compromises muscle plasticity and contraction strength. These changes in the mechanical properties of the affected muscle render surgical repair of RCT highly formidable. Therefore, it is important to learn more about the pathology of fatty infiltration to prevent this undesired condition. In the present study, we aimed to generate a mouse model that can reliably recapitulate some of the important characteristics of muscular fatty infiltration after RCT in humans. We found that fatty infiltration can be efficiently induced by a combination of the following procedures: denervation of the suprascapular nerve, transection of the rotator cuff tendon, and resection of the humeral head. Using this model, we found that platelet-derived growth factor receptor-α (PDGFRα)-positive mesenchymal stem cells are induced after this intervention and that inhibition of PDGFR signaling by imatinib treatment can significantly suppress fatty infiltration. Taken together, the present study presents a reliable fatty infiltration mouse model and suggests a key role for PDGFRα-positive mesenchymal stem cells in the process of fatty infiltration after RCT in humans.
Kroll, Tina; Elmenhorst, David; Matusch, Andreas; Wedekind, Franziska; Weisshaupt, Angela; Beer, Simone; Bauer, Andreas
2013-08-01
While the selective 5-hydroxytryptamine type 2a receptor (5-HT2AR) radiotracer [18F]altanserin is well established in humans, the present study evaluated its suitability for quantifying cerebral 5-HT2ARs with positron emission tomography (PET) in albino rats. Ten Sprague Dawley rats underwent 180 min PET scans with arterial blood sampling. Reference tissue methods were evaluated on the basis of invasive kinetic models with metabolite-corrected arterial input functions. In vivo 5-HT2AR quantification with PET was validated by in vitro autoradiographic saturation experiments in the same animals. Overall brain uptake of [18F]altanserin was reliably quantified by invasive and non-invasive models with the cerebellum as reference region shown by linear correlation of outcome parameters. Unlike in humans, no lipophilic metabolites occurred so that brain activity derived solely from parent compound. PET data correlated very well with in vitro autoradiographic data of the same animals. [18F]Altanserin PET is a reliable tool for in vivo quantification of 5-HT2AR availability in albino rats. Models based on both blood input and reference tissue describe radiotracer kinetics adequately. Low cerebral tracer uptake might, however, cause restrictions in experimental usage.
The Importance of HRA in Human Space Flight: Understanding the Risks
NASA Technical Reports Server (NTRS)
Hamlin, Teri
2010-01-01
Human performance is critical to crew safety during space missions. Humans interact with hardware and software during ground processing, normal flight, and in response to events. Human interactions with hardware and software can cause Loss of Crew and/or Vehicle (LOCV) through improper actions, or may prevent LOCV through recovery and control actions. Humans have the ability to deal with complex situations and system interactions beyond the capability of machines. Human Reliability Analysis (HRA) is a method used to qualitatively and quantitatively assess the occurrence of human failures that affect availability and reliability of complex systems. Modeling human actions with their corresponding failure probabilities in a Probabilistic Risk Assessment (PRA) provides a more complete picture of system risks and risk contributions. A high-quality HRA can provide valuable information on potential areas for improvement, including training, procedures, human interface design, and the need for automation. Modeling human error has always been a challenge, in part because performance data is not always readily available. For spaceflight, the challenge is amplified not only because of the small number of participants and limited amount of performance data available, but also due to the lack of definition of the unique factors influencing human performance in space. These factors, called performance shaping factors in HRA terminology, are used in HRA techniques to modify basic human error probabilities in order to capture the context of an analyzed task. Many of the human error modeling techniques were developed within the context of nuclear power plants, and therefore the methodologies do not address spaceflight factors such as the effects of microgravity and longer-duration missions. This presentation will describe the types of human error risks that have shown up as risk drivers in the Shuttle PRA and that may be applicable to commercial space flight. As with other large PRAs of complex machines, human error in the Shuttle PRA proved to be an important contributor (12 percent) to LOCV. An existing HRA technique was adapted for use in the Shuttle PRA, but additional guidance and improvements are needed to make the HRA task in space-related PRAs easier and more accurate. Therefore, this presentation will also outline plans for expanding current HRA methodology to more explicitly cover spaceflight performance shaping factors.
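One common way such adjustments are made, shown here as a SPAR-H-style sketch rather than the Shuttle PRA's actual technique, is to multiply a nominal HEP by PSF multipliers, with a correction that keeps the result below 1. The spaceflight-specific multipliers are exactly the undefined quantities the presentation points to, so they appear below as hypothetical placeholders.

```python
# SPAR-H-style sketch of shaping a nominal human error probability (HEP)
# with performance shaping factor (PSF) multipliers.
NOMINAL_HEP = 0.001                  # nominal diagnosis-type HEP

psf = {
    "available_time": 10.0,          # barely adequate time
    "stress": 2.0,                   # high stress
    "microgravity": 2.0,             # hypothetical spaceflight PSF
    "mission_duration": 1.5,         # hypothetical long-duration PSF
}

composite = 1.0
for multiplier in psf.values():
    composite *= multiplier

# The SPAR-H correction bounds the adjusted HEP below 1 when the
# composite multiplier is large.
hep = (NOMINAL_HEP * composite) / (NOMINAL_HEP * (composite - 1.0) + 1.0)
print(f"composite PSF {composite:.1f} -> adjusted HEP {hep:.4f}")
```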
Models of wound healing: an emphasis on clinical studies.
Wilhelm, K-P; Wilhelm, D; Bielfeldt, S
2017-02-01
The healing of wounds, whether chronic or acute, has always challenged the medical community. Understanding the processes which enable wounds to heal is primarily carried out by the use of models: in vitro, animal and human. It is generally accepted that the use of human models offers the best opportunity to understand the factors that influence wound healing as well as to evaluate the efficacy of treatments applied to wounds. The objective of this article is to provide an overview of the different methodologies that are currently used to experimentally induce wounds of various depths in human volunteers and to examine the information that may be gained from them. There are a number of human volunteer healing models available, varying in their invasiveness to reflect the different possible depth levels of wounds. Currently available wound healing models include sequential tape stripping, suction blister, abrasion, laser, dermatome, and biopsy techniques. The various techniques can be utilized to induce wounds of variable depth, from removing solely the stratum corneum barrier or the epidermis to even split-thickness or full-thickness wounds. Depending on the study objective, a number of models exist to study wound healing in humans. These models provide efficient and reliable results to evaluate treatment modalities. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Launch and Assembly Reliability Analysis for Mars Human Space Exploration Missions
NASA Technical Reports Server (NTRS)
Cates, Grant R.; Stromgren, Chel; Cirillo, William M.; Goodliff, Kandyce E.
2013-01-01
NASA's long-range goal is focused upon human exploration of Mars. Missions to Mars will require campaigns of multiple launches to assemble Mars Transfer Vehicles in Earth orbit. Launch campaigns are subject to delays, launch vehicles can fail to place their payloads into the required orbit, and spacecraft may fail during the assembly process or while loitering prior to the Trans-Mars Injection (TMI) burn. Additionally, missions to Mars have constrained departure windows lasting approximately sixty days that repeat approximately every two years. Ensuring high reliability of launching and assembling all required elements in time to support the TMI window will be a key enabler of mission success. This paper describes an integrated methodology for analyzing and improving the reliability of the launch and assembly campaign phase. A discrete event simulation incorporates several pertinent risk factors including, but not limited to: manufacturing completion; transportation; ground processing; launch countdown; ascent; rendezvous and docking; assembly; and orbital operations leading up to TMI. The model accommodates varying numbers of launches, including the potential for spare launches. Having a spare launch capability provides significant improvement to mission success.
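A stripped-down version of such a discrete event simulation can convey the spare-launch effect: each trial plays out a campaign of launches with random slips and failures against a departure deadline. All probabilities, delay distributions, and the deadline below are illustrative placeholders, not the paper's inputs.

```python
import numpy as np

rng = np.random.default_rng(7)

def campaign_success(n_elements=4, spares=1, window_margin=120.0,
                     p_launch=0.98, trials=20_000):
    """Monte Carlo estimate that all elements are assembled in time.

    Each launch takes a nominal 30-day pad flow plus an exponential
    schedule slip, and succeeds with probability p_launch; the campaign
    must finish within the nominal schedule plus window_margin days.
    """
    wins = 0
    for _ in range(trials):
        launches_left = n_elements + spares
        assembled, clock = 0, 0.0
        while assembled < n_elements and launches_left > 0:
            clock += 30.0 + rng.exponential(15.0)   # pad flow + slip, days
            launches_left -= 1
            if rng.random() < p_launch:
                assembled += 1                      # element delivered
        deadline = n_elements * 30.0 + window_margin
        if assembled == n_elements and clock <= deadline:
            wins += 1
    return wins / trials

for spares in (0, 1, 2):
    print(f"spares={spares}: P(ready for TMI) = "
          f"{campaign_success(spares=spares):.3f}")
```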
A comparison of quantitative methods for clinical imaging with hyperpolarized (13)C-pyruvate.
Daniels, Charlie J; McLean, Mary A; Schulte, Rolf F; Robb, Fraser J; Gill, Andrew B; McGlashan, Nicholas; Graves, Martin J; Schwaiger, Markus; Lomas, David J; Brindle, Kevin M; Gallagher, Ferdia A
2016-04-01
Dissolution dynamic nuclear polarization (DNP) enables the metabolism of hyperpolarized (13)C-labelled molecules, such as the conversion of [1-(13)C]pyruvate to [1-(13)C]lactate, to be dynamically and non-invasively imaged in tissue. Imaging of this exchange reaction in animal models has been shown to detect early treatment response and correlate with tumour grade. The first human DNP study has recently been completed, and, for widespread clinical translation, simple and reliable methods are necessary to accurately probe the reaction in patients. However, there is currently no consensus on the most appropriate method to quantify this exchange reaction. In this study, an in vitro system was used to compare several kinetic models, as well as simple model-free methods. Experiments were performed using a clinical hyperpolarizer, a human 3 T MR system, and spectroscopic imaging sequences. The quantitative methods were compared in vivo by using subcutaneous breast tumours in rats to examine the effect of pyruvate inflow. The two-way kinetic model was the most accurate method for characterizing the exchange reaction in vitro, and the incorporation of a Heaviside step inflow profile was best able to describe the in vivo data. The lactate time-to-peak and the lactate-to-pyruvate area under the curve ratio were simple model-free approaches that accurately represented the full reaction, with the time-to-peak method performing indistinguishably from the best kinetic model. Finally, extracting data from a single pixel was a robust and reliable surrogate of the whole region of interest. This work has identified appropriate quantitative methods for future work in the analysis of human hyperpolarized (13)C data. © 2016 The Authors. NMR in Biomedicine published by John Wiley & Sons Ltd.
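The two-way exchange model and the model-free metrics compared in the study can be sketched together: integrate a two-site pyruvate-lactate system with a Heaviside-step inflow, then read off the AUC ratio and lactate time-to-peak. The rate constants below are illustrative placeholders, not the paper's fitted values.

```python
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

# Two-way exchange of hyperpolarized [1-13C]pyruvate (P) and lactate (L)
# with effective relaxation and a Heaviside-step pyruvate inflow.
kpl, klp = 0.05, 0.01            # exchange rates, 1/s
r1p, r1l = 1 / 30, 1 / 25        # effective relaxation rates, 1/s
inflow_rate, t_on, t_off = 0.4, 5.0, 25.0   # step inflow window, s

def rhs(t, y):
    P, L = y
    inflow = inflow_rate if t_on <= t <= t_off else 0.0
    dP = inflow - (kpl + r1p) * P + klp * L
    dL = kpl * P - (klp + r1l) * L
    return [dP, dL]

t = np.linspace(0, 90, 901)
sol = solve_ivp(rhs, (0, 90), [0.0, 0.0], t_eval=t, max_step=0.5)
P, L = sol.y

# Model-free metrics compared in the study:
auc_ratio = trapezoid(L, t) / trapezoid(P, t)   # lactate/pyruvate AUC
ttp = t[np.argmax(L)]                           # lactate time-to-peak
print(f"AUC ratio {auc_ratio:.3f}, lactate TTP {ttp:.1f} s")
```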
NASA Technical Reports Server (NTRS)
Miller, James; Leggett, Jay; Kramer-White, Julie
2008-01-01
A team directed by the NASA Engineering and Safety Center (NESC) collected methodologies for how best to develop safe and reliable human rated systems and how to identify the drivers that provide the basis for assessing safety and reliability. The team also identified techniques, methodologies, and best practices to assure that NASA can develop safe and reliable human rated systems. The results are drawn from a wide variety of resources, from experts involved with the space program since its inception to the best-practices espoused in contemporary engineering doctrine. This report focuses on safety and reliability considerations and does not duplicate or update any existing references. Neither does it intend to replace existing standards and policy.
Space Mission Human Reliability Analysis (HRA) Project
NASA Technical Reports Server (NTRS)
Boyer, Roger
2014-01-01
The purpose of the Space Mission Human Reliability Analysis (HRA) Project is to extend current ground-based HRA risk prediction techniques to a long-duration, space-based tool. Ground-based HRA methodology has been shown to be a reasonable tool for short-duration space missions, such as Space Shuttle flights and lunar fly-bys. However, longer-duration deep-space missions, such as asteroid and Mars missions, will require the crew to be in space for as long as 400 to 900 days, with periods of extended autonomy and self-sufficiency. Current indications are that higher risk due to fatigue, physiological effects of extended low-gravity environments, and other factors may impact HRA predictions. For this project, Safety & Mission Assurance (S&MA) will work with Human Health & Performance (HH&P) to establish what is currently used to assess human reliability for human space programs, identify human performance factors that may be sensitive to long-duration space flight, collect available historical data, and update current tools to account for performance shaping factors believed to be important to such missions. This effort will also contribute data to the Human Performance Data Repository and influence the Space Human Factors Engineering research risks and gaps (part of the HRP Program). An accurate risk predictor mitigates Loss of Crew (LOC) and Loss of Mission (LOM). The end result will be an updated HRA model that can effectively predict risk on long-duration missions.
Söderlund, Johan; Lindskog, Maria
2018-04-23
The diagnosis of a mental disorder generally depends on clinical observations and phenomenological symptoms reported by the patient. The definition of a given diagnosis is criteria based and relies on the ability to accurately interpret subjective symptoms and complex behavior. This type of diagnosis comprises a challenge to translate to reliable animal models, and these translational uncertainties hamper the development of new treatments. In this review, we will discuss how depressive-like behavior can be induced in rodents, and the relationship between these models and depression in humans. Specifically, we suggest similarities between triggers of depressive-like behavior in animal models and human conditions known to increase the risk of depression, for example exhaustion and bullying. Although we acknowledge the potential problems in comparing animal findings to human conditions, such comparisons are useful for understanding the complexity of depression, and we highlight the need to develop clinical diagnoses and animal models in parallel to overcome translational uncertainties.
Reliability of Craniofacial Superimposition Using Three-Dimension Skull Model.
Gaudio, Daniel; Olivieri, Lara; De Angelis, Danilo; Poppa, Pasquale; Galassi, Andrea; Cattaneo, Cristina
2016-01-01
Craniofacial superimposition is a technique potentially useful for the identification of unidentified human remains when a photograph of the missing person is available. We tested the reliability of 2D-3D computer-aided, nonautomatic superimposition techniques. Three-dimensional laser scans of five skulls and ten photographs were overlaid with imaging software. The resulting superimpositions were evaluated using three methods: craniofacial landmarks, morphological features, and a combination of the two. A 3D model of each skull without its mandible was also tested for superimposition, and we evaluated whether separating skulls by sex would increase correct identifications. Results show that the landmark method employing the entire skull is the most reliable (5/5 correct identifications, 40% false positives [FP]), regardless of sex. However, the persistence of a high percentage of FP in all the methods evaluated indicates that these methods are unreliable for positive identification, although the landmark-only method could be useful for exclusion.
Human Reliability Assessments: Using the Past (Shuttle) to Predict the Future (ORION)
NASA Technical Reports Server (NTRS)
Mott, Diana L.; Bigler, Mark A.
2017-01-01
NASA uses two HRA assessment methodologies. The first is a simplified method based on how much time is available to complete the action, with consideration given to environmental and personal factors that could influence the human's reliability. This method is expected to provide a conservative value or placeholder as a preliminary estimate, which is used to determine which placeholders need a more detailed assessment. The second methodology is used to develop a more detailed human reliability assessment of the performance of critical human actions. This assessment considers more than the time available, including factors such as the importance of the action, the context, environmental factors, potential human stresses, previous experience, training, physical design interfaces, available procedures/checklists, and internal human stresses. The more detailed assessment is expected to be more realistic than one based primarily on time available. When performing an HRA on a system or process that has an operational history, we have information specific to the task based on this history and experience. In the case of a PRA model that is based on a new design and has no operational history, providing a "reasonable" assessment of potential crew actions becomes more problematic. In order to determine what is expected of future operational parameters, the judgment of individuals who had relevant experience and were familiar with the systems and processes previously implemented by NASA was used to provide the "best" available data. Personnel from Flight Operations, Flight Directors, Launch Test Directors, Control Room Console Operators, and Astronauts were all interviewed to provide a comprehensive picture of previous NASA operations. Verification of the assumptions and expectations expressed in the assessments will be needed when the procedures, flight rules, and operational requirements are developed and finalized.
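As a rough illustration of the first, time-based methodology, the sketch below estimates an HEP as the probability that crew performance time exceeds the time available. The lognormal performance-time distribution and all numerical values are assumptions for illustration, not values from the NASA assessments:

```python
import numpy as np

rng = np.random.default_rng(0)

def time_based_hep(median_minutes, sigma, time_available, n=1_000_000):
    """Monte Carlo estimate of P(performance time > time available)."""
    perform = rng.lognormal(mean=np.log(median_minutes), sigma=sigma, size=n)
    return np.mean(perform > time_available)

# A 10-minute median action with a 30-minute window gives a small HEP (~0.014).
print(time_based_hep(median_minutes=10.0, sigma=0.5, time_available=30.0))
```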
Reedy, Gabriel B; Lavelle, Mary; Simpson, Thomas; Anderson, Janet E
2017-10-01
A central feature of clinical simulation training is human factors skills, providing staff with the social and cognitive skills to cope with demanding clinical situations. Although these skills are critical to safe patient care, assessing their learning is challenging. This study aimed to develop, pilot and evaluate a valid and reliable structured instrument to assess human factors skills, which can be used pre- and post-simulation training, and is relevant across a range of healthcare professions. Through consultation with a multi-professional expert group, we developed and piloted a 39-item survey with 272 healthcare professionals attending training courses across two large simulation centres in London, one specialising in acute care and one in mental health, both serving healthcare professionals working across acute and community settings. Following psychometric evaluation, the final 12-item instrument was evaluated with a second sample of 711 trainees. Exploratory factor analysis revealed a 12-item, one-factor solution with good internal consistency (α = 0.92). The instrument had discriminant validity, with newly qualified trainees scoring significantly lower than experienced trainees (t(98) = 4.88, p < 0.001), and was sensitive to change following training in acute and mental health settings, across professional groups (p < 0.001). Confirmatory factor analysis revealed an adequate model fit (RMSEA = 0.066). The Human Factors Skills for Healthcare Instrument provides a reliable and valid method of assessing trainees' human factors skills self-efficacy across acute and mental health settings. This instrument has the potential to improve the assessment and evaluation of human factors skills learning in both uniprofessional and interprofessional clinical simulation training.
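For readers unfamiliar with the internal-consistency statistic reported above, the sketch below computes Cronbach's alpha on synthetic 12-item data; the data-generating model (one shared factor plus noise) is an assumption chosen only to mimic a one-factor instrument:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1.0) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(711, 1))                       # shared skill factor
scores = latent + rng.normal(scale=0.6, size=(711, 12))  # 12 survey items
print(round(cronbach_alpha(scores), 2))                  # ~0.97 here
```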
NASA Astrophysics Data System (ADS)
Felfelani, Farshid; Wada, Yoshihide; Longuevergne, Laurent; Pokhrel, Yadu N.
2017-10-01
Hydrological models and the data derived from the Gravity Recovery and Climate Experiment (GRACE) satellite mission have been widely used to study the variations in terrestrial water storage (TWS) over large regions. However, both GRACE products and model results suffer from inherent uncertainties, calling for the combined use of GRACE and models to examine the variations in total TWS and its individual components, especially in relation to natural and human-induced changes in the terrestrial water cycle. In this study, we use the results from two state-of-the-art hydrological models and different GRACE spherical harmonic products to examine the variations in TWS and its individual components, and to attribute the changes to natural and human-induced factors over large global river basins. Analysis of the spatial patterns of the long-term trend in TWS from the two models and GRACE suggests that both models capture the GRACE-measured direction of change, but differ from GRACE, as well as from each other, in the magnitude of change over different regions. A detailed analysis of the seasonal cycle of TWS variations over 30 river basins shows notable differences not only between models and GRACE but also among different GRACE products and between the two models. Further, it is found that while one model performs well in highly managed river basins, it fails to reproduce the GRACE-observed signal in snow-dominated regions, and vice versa. The isolation of natural and human-induced changes in TWS in some of the managed basins reveals a consistently declining TWS trend during 2002-2010; however, significant differences are again obvious both between GRACE and the models and among the different GRACE products and models. Results from the decomposition of the TWS signal into the general trend and seasonality indicate that neither model adequately captures both the trend and the seasonality in the managed or snow-dominated basins, implying that the TWS variations from a single model cannot be reliably used for all global regions. It is also found that uncertainties arising from climate forcing datasets can introduce significant additional uncertainties, making direct comparison of model results and GRACE products even more difficult. Our results highlight the need to further improve the representation of human land-water management and snow processes in large-scale models to enable a reliable use of models and GRACE to study the changes in freshwater systems in all global regions.
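A common way to separate the long-term trend from the seasonal cycle in a monthly TWS series, as in the decomposition described above, is a least-squares fit of a linear trend plus an annual harmonic. The sketch below applies this to synthetic data with an assumed trend and amplitude:

```python
import numpy as np

# Hypothetical monthly TWS anomalies (cm equivalent water height), 2002-2010.
t = np.arange(108) / 12.0                      # years since 2002
rng = np.random.default_rng(2)
tws = -1.5 * t + 8.0 * np.sin(2 * np.pi * t) + rng.normal(0.0, 2.0, t.size)

# Design matrix: intercept, linear trend, and an annual sine/cosine pair.
X = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(X, tws, rcond=None)
print(f"trend = {coef[1]:.2f} cm/yr, "
      f"annual amplitude = {np.hypot(coef[2], coef[3]):.2f} cm")
```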
Brandstätter, Christian; Laner, David; Prantl, Roman; Fellner, Johann
2014-12-01
Municipal solid waste landfills pose a threat to the environment and human health, especially old landfills that lack facilities for the collection and treatment of landfill gas and leachate. Consequently, missing information about emission flows prevents site-specific environmental risk assessments. To overcome this gap, combining waste sampling and analysis with statistical modeling is one option for estimating present and future emission potentials. Optimizing the tradeoff between investigation costs and reliable results requires knowledge of both the number of samples to be taken and the variables to be analyzed. This article aims to identify the optimal number of waste samples and variables in order to predict a larger set of variables. We therefore introduce a multivariate linear regression model and test its applicability in two case studies. Landfill A was used to set up and calibrate the model based on 50 waste samples and twelve variables. The calibrated model was applied to Landfill B, comprising 36 waste samples and twelve variables, with four predictor variables. The case study results are twofold: first, reliable and accurate prediction of the twelve variables can be achieved with knowledge of four predictor variables (LOI, EC, pH, and Cl). Second, for Landfill B, only ten full measurements would be needed for a reliable prediction of most response variables. The four predictor variables entail comparably low analytical costs in comparison to the full set of measurements. This cost reduction could be used to increase the number of samples, yielding an improved understanding of the spatial waste heterogeneity in landfills. In conclusion, the future application of the developed model could improve the reliability of predicted emission potentials, and the model could become a standard screening tool for old landfills if its applicability and reliability were tested in additional case studies.
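The following sketch shows the general shape of such a calibrate-then-predict workflow with a multivariate linear regression. The sample counts mirror the abstract, but the predictor-response relationships are synthetic stand-ins, not the landfill data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for 50 calibration samples: 4 predictors (e.g., LOI, EC, pH, Cl)
# and 8 further response variables assumed to depend linearly on them.
X = rng.normal(size=(50, 4))
true_B = rng.normal(size=(4, 8))
Y = X @ true_B + rng.normal(scale=0.1, size=(50, 8))

# Calibrate on landfill A ...
X1 = np.column_stack([np.ones(len(X)), X])     # add intercept column
B_hat, *_ = np.linalg.lstsq(X1, Y, rcond=None)

# ... then predict all responses for new samples from the 4 cheap analyses.
X_new = np.column_stack([np.ones(10), rng.normal(size=(10, 4))])
Y_pred = X_new @ B_hat
print(Y_pred.shape)                            # (10, 8)
```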
2016-01-01
The ability of environmental stressors to induce transgenerational diseases has been experimentally demonstrated in plants, worms, fish, and mammals, indicating that exposures affect not only human health but also fish and ecosystem health. Small aquarium fish have been reliable models for studying the genetic and epigenetic basis of development and disease. Additionally, fish can provide a better, more economical opportunity to study transgenerational inheritance of adverse health effects and epigenetic mechanisms. Molecular mechanisms underlying germ cell development in fish are comparable to those in mammals and humans. This review provides a short overview of the long-term effects of environmental chemical contaminant exposure in various models, the associated epigenetic mechanisms, and a perspective on fish as a model for studying environmentally induced transgenerational inheritance of altered phenotypes. PMID:29492282
A Bounce? A Study on Resilience and Human Relations in a High Reliability Organization
Johns, Robert D.
2016-03-01
This study analyzes the various resilience factors associated with a military high reliability organization (HRO). The data measuring ...
The Use of Animal Models for Stroke Research: A Review
Casals, Juliana B; Pieri, Naira CG; Feitosa, Matheus LT; Ercolin, Anna CM; Roballo, Kelly CS; Barreto, Rodrigo SN; Bressan, Fabiana F; Martins, Daniele S; Miglino, Maria A; Ambrósio, Carlos E
2011-01-01
Stroke has been identified as the second leading cause of death worldwide. Stroke is a focal neurologic deficit caused by a change in cerebral circulation. The use of animal models in recent years has improved our understanding of the physiopathology of this disease. Rats and mice are the most commonly used stroke models, but the demand for larger models, such as rabbits and even nonhuman primates, is increasing so as to better understand the disease and its treatment. Although the basic mechanisms of stroke are nearly identical among mammals, here we discuss the differences between the human encephalon and those of various animals. In addition, we compare common surgical techniques used to induce animal models of stroke. More complete anatomic knowledge of the cerebral vessels of the various model species is needed to develop more reliable models that yield objective results and improve understanding of the pathology of stroke in both human and veterinary medicine. PMID:22330245
Estimation of risks to children from exposure to airborne pollutants is often complicated by the lack of reliable epidemiological data specific to this age group. As a result, risks are generally estimated from extrapolations based on data obtained in other human age groups (e.g....
Strategies to improve electrode positioning and safety in cochlear implants.
Rebscher, S J; Heilmann, M; Bruszewski, W; Talbot, N H; Snyder, R L; Merzenich, M M
1999-03-01
An injection-molded internal supporting rib has been produced to control the flexibility of silicone rubber encapsulated electrodes designed to electrically stimulate the auditory nerve in human subjects with severe to profound hearing loss. The rib molding dies, and molds for silicone rubber encapsulation of the electrode, were designed and machined using AutoCad and MasterCam software packages in a PC environment. After molding, the prototype plastic ribs were iteratively modified based on observations of the performance of the rib/silicone composite insert in a clear plastic model of the human scala tympani cavity. The rib-based electrodes were reliably inserted farther into these models, required less insertion force and were positioned closer to the target auditory neural elements than currently available cochlear implant electrodes. With further design improvements the injection-molded rib may also function to accurately support metal stimulating contacts and wire leads during assembly to significantly increase the manufacturing efficiency of these devices. This method to reliably control the mechanical properties of miniature implantable devices with multiple electrical leads may be valuable in other areas of biomedical device design.
Banos, Oresti; Damas, Miguel; Pomares, Hector; Rojas, Ignacio
2012-01-01
The main objective of fusion mechanisms is to increase the reliability of individual systems through the use of collective knowledge. Moreover, fusion models are also intended to guarantee a certain level of robustness. This is particularly required for problems such as human activity recognition, where runtime changes in the sensor setup seriously disturb the reliability of the initially deployed systems. For commonly used recognition systems based on inertial sensors, these changes are primarily characterized as sensor rotations, displacements, or faults related to the batteries or calibration. In this work we show the robustness capabilities of a sensor-weighted fusion model when dealing with such disturbances under different circumstances. Using the proposed method, improvements of up to 60% are obtained when a minority of the sensors are artificially rotated or degraded, independent of the level of disturbance (noise) imposed. These robustness capabilities also apply for any number of sensors affected by a low to moderate noise level. The presented fusion mechanism compensates for the poor performance that would otherwise be obtained when just a single sensor is considered. PMID:22969386
Contemporary Animal Models For Human Gene Therapy Applications.
Gopinath, Chitra; Nathar, Trupti Job; Ghosh, Arkasubhra; Hickstein, Dennis Durand; Nelson, Everette Jacob Remington
2015-01-01
Over the past three decades, gene therapy has made considerable progress as an alternative strategy in the treatment of many diseases. Since 2009, several studies in humans have reported the successful treatment of various diseases. Animal models mimicking human disease conditions are essential at the preclinical stage before embarking on a clinical trial. In gene therapy, for instance, they are useful in the assessment of variables related to the use of viral vectors, such as safety, efficacy, dosage, and localization of transgene expression. However, choosing a suitable disease-specific model is of paramount importance for successful clinical translation. This review focuses on the animal models most commonly used in gene therapy studies, such as murine, canine, nonhuman primate, rabbit, and porcine models, as well as the more recently developed humanized mice. Though small and large animals both have their own pros and cons as disease-specific models, the choice is made largely based on the type and length of the study. While small animals with a shorter life span are well suited for degenerative/aging studies, large animals with longer life spans suit longitudinal studies and also help with dosage adjustments to maximize therapeutic benefit. Recently, humanized mice, or mouse-human chimaeras, have gained interest in the study of human tissues or cells, thereby providing a more reliable understanding of therapeutic interventions. Thus, animal models are of great importance for testing new vector technologies in vivo and for assessing safety and efficacy prior to a gene therapy clinical trial.
Animal models of ischemic stroke and their application in clinical research.
Fluri, Felix; Schuhmann, Michael K; Kleinschnitz, Christoph
2015-01-01
This review outlines the most frequently used rodent stroke models and discusses their strengths and shortcomings. Mimicking all aspects of human stroke in one animal model is not feasible because ischemic stroke in humans is a heterogeneous disorder with a complex pathophysiology. The transient or permanent middle cerebral artery occlusion (MCAo) model is one of the models that most closely simulate human ischemic stroke. Furthermore, this model is characterized by reliable and well-reproducible infarcts. Therefore, the MCAo model has been involved in the majority of studies that address pathophysiological processes or neuroprotective agents. Another model uses thromboembolic clots and thus is more convenient for investigating thrombolytic agents and pathophysiological processes after thrombolysis. However, for many reasons, preclinical stroke research has a low translational success rate. One factor might be the choice of stroke model. Whereas the therapeutic responsiveness of permanent focal stroke in humans declines significantly within 3 hours after stroke onset, the therapeutic window in animal models with prompt reperfusion is up to 12 hours, resulting in a much longer action time of the investigated agent. Another major problem of animal stroke models is that studies are mostly conducted in young animals without any comorbidity. These models differ from human stroke, which particularly affects elderly people who have various cerebrovascular risk factors. Choosing the most appropriate stroke model and optimizing the study design of preclinical trials might increase the translational potential of animal stroke models.
Visual-search model observer for assessing mass detection in CT
NASA Astrophysics Data System (ADS)
Karbaschi, Zohreh; Gifford, Howard C.
2017-03-01
Our aim is to devise model observers (MOs) to evaluate acquisition protocols in medical imaging. To optimize protocols for human observers, an MO must reliably interpret images containing quantum and anatomical noise under aliasing conditions. In this study of sampling parameters for simulated lung CT, the lesion-detection performance of human observers was compared with that of visual-search (VS) observers, a channelized nonprewhitening (CNPW) observer, and a channelized Hotelling (CH) observer. Scans of a mathematical torso phantom modeled single-slice parallel-hole CT with varying numbers of detector pixels and angular projections. Circular lung lesions had a fixed radius. Two-dimensional FBP reconstructions were performed. A localization ROC study was conducted with the VS, CNPW, and human observers, while the CH observer was applied in a location-known ROC study. Changing the sampling parameters had negligible effect on the CNPW and CH observers, whereas several VS observers demonstrated a sensitivity to sampling artifacts that was in agreement with how the humans performed.
Reliability of human-supervised formant-trajectory measurement for forensic voice comparison.
Zhang, Cuiling; Morrison, Geoffrey Stewart; Ochoa, Felipe; Enzinger, Ewald
2013-01-01
Acoustic-phonetic approaches to forensic voice comparison often include human-supervised measurement of vowel formants, but the reliability of such measurements is a matter of concern. This study assesses the within- and between-supervisor variability of three sets of formant-trajectory measurements made by each of four human supervisors. It also assesses the validity and reliability of forensic-voice-comparison systems based on these measurements. Each supervisor's formant-trajectory system was fused with a baseline mel-frequency cepstral-coefficient system, and performance was assessed relative to the baseline system. Substantial improvements in validity were found for all supervisors' systems, but some supervisors' systems were more reliable than others.
Endsley, Mica R
2017-02-01
As autonomous and semiautonomous systems are developed for automotive, aviation, cyber, robotics and other applications, the ability of human operators to effectively oversee and interact with them when needed poses a significant challenge. An automation conundrum exists: as more autonomy is added to a system and its reliability and robustness increase, the situation awareness of human operators declines, and they become less likely to be able to take over manual control when needed. The human-autonomy systems oversight model integrates several decades of relevant autonomy research on operator situation awareness, out-of-the-loop performance problems, monitoring, and trust, which are all major challenges underlying the automation conundrum. Key design interventions for improving human performance in interacting with autonomous systems are integrated in the model, including human-automation interface features and central automation interaction paradigms comprising levels of automation, adaptive automation, and granularity of control approaches. Recommendations for the design of human-autonomy interfaces are presented and directions for future research discussed.
Neural Signatures of Trust During Human-Automation Interactions
2016-04-01
Human-human trust (HHT) and human-automation trust (HAT) were examined with functional magnetic resonance imaging (fMRI) during a behavioral X-ray luggage-screening task, in which the reliability of advice (unknown to the participants) from a human or automated luggage inspector framed as an expert was manipulated. ... Keywords: human-human trust, human-automation trust, brain, functional magnetic resonance imaging.
Observing Consistency in Online Communication Patterns for User Re-Identification
Venter, Hein S.
2016-01-01
Comprehension of the statistical and structural mechanisms governing human dynamics in online interaction plays a pivotal role in online user identification, online profile development, and recommender systems. However, building a characteristic model of human dynamics on the Internet involves a complete analysis of the variations in human activity patterns, which is a complex process. This complexity is inherent in human dynamics and has not been extensively studied to reveal the structural composition of human behavior. A typical way to dissect such a complex system is to examine each of the independent interconnections that constitute its complexity. An examination of the various dimensions of human communication patterns in online interactions is presented in this paper. The study employed reliable server-side web data from 31 known users to explore the characteristics of human-driven communications. Various machine-learning techniques were explored. The results revealed that each individual exhibited a relatively consistent, unique behavioral signature and that the logistic regression model and model tree can be used to accurately distinguish online users. These results are applicable to one-to-one online user identification processes, insider misuse investigation processes, and online profiling in various areas. PMID:27918593
NASA Astrophysics Data System (ADS)
Ososky, Scott; Sanders, Tracy; Jentsch, Florian; Hancock, Peter; Chen, Jessie Y. C.
2014-06-01
Increasingly autonomous robotic systems are expected to play a vital role in aiding humans in complex and dangerous environments. It is unlikely, however, that such systems will be able to consistently operate with perfect reliability. Even systems that are less than 100% reliable can provide a significant benefit to humans, but this benefit will depend on a human operator's ability to understand a robot's behaviors and states. The notion of system transparency is examined as a vital aspect of robotic design for maintaining humans' trust in and reliance on increasingly automated platforms. System transparency is described as the degree to which a system's action, or the intention of an action, is apparent to human operators and/or observers. While the physical designs of robotic systems have been demonstrated to greatly influence humans' impressions of robots, determinants of transparency between humans and robots are not solely robot-centric. Our approach considers transparency as an emergent property of the human-robot system. In this paper, we present insights from our interdisciplinary efforts to improve the transparency of teams made up of humans and unmanned robots. These near-futuristic teams are those in which robot agents will autonomously collaborate with humans to achieve task goals. This paper demonstrates how factors such as human-robot communication and human mental models regarding robots impact a human's ability to recognize the actions or states of an automated system. Furthermore, we discuss the implications of system transparency for other critical HRI factors such as situation awareness, operator workload, and perceptions of trust.
Performance characteristics of a visual-search human-model observer with sparse PET image data
NASA Astrophysics Data System (ADS)
Gifford, Howard C.
2012-02-01
As predictors of human performance in detection-localization tasks, statistical model observers can have problems with tasks that are primarily limited by target contrast or structural noise. Model observers with a visual-search (VS) framework may provide a more reliable alternative. This framework provides for an initial holistic search that identifies suspicious locations for analysis by a statistical observer. A basic VS observer for emission tomography focuses on hot "blobs" in an image and uses a channelized nonprewhitening (CNPW) observer for analysis. In [1], we investigated this model for a contrast-limited task with SPECT images; herein, a statistical-noise-limited task involving PET images is considered. An LROC study used 2D image slices with liver, lung and soft-tissue tumors. Human and model observers read the images in coronal, sagittal and transverse display formats. The study thus measured the detectability of tumors in a given organ as a function of display format. The model observers were applied under several task variants that tested their response to structural noise both at the organ boundaries alone and over the organs as a whole. As measured by correlation with the human data, the VS observer outperformed the CNPW scanning observer.
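As background to the CNPW observer used above, the sketch below computes a location-known channelized nonprewhitening test statistic, i.e., the dot product of the channelized signal template with the channelized image. The Gaussian channel profiles and lesion model are common illustrative choices, not the paper's exact configuration:

```python
import numpy as np

def gaussian_channels(size, widths):
    """Radially symmetric Gaussian channel templates, one per width."""
    y, x = np.mgrid[:size, :size] - size // 2
    r2 = x**2 + y**2
    U = np.stack([np.exp(-r2 / (2.0 * w**2)).ravel() for w in widths])
    return U / np.linalg.norm(U, axis=1, keepdims=True)

def cnpw_statistic(image, signal, U):
    """Channelized nonprewhitening statistic computed in channel space."""
    return (U @ signal.ravel()) @ (U @ image.ravel())

size = 32
U = gaussian_channels(size, widths=[1.0, 2.0, 4.0, 8.0])
y, x = np.mgrid[:size, :size] - size // 2
signal = np.exp(-(x**2 + y**2) / (2.0 * 2.0**2))   # known lesion profile

rng = np.random.default_rng(4)
noise = rng.normal(size=(size, size))
print(cnpw_statistic(noise + signal, signal, U) >
      cnpw_statistic(noise, signal, U))            # usually True
```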
Statistical modelling of networked human-automation performance using working memory capacity.
Ahmed, Nisar; de Visser, Ewart; Shaw, Tyler; Mohamed-Ameen, Amira; Campbell, Mark; Parasuraman, Raja
2014-01-01
This study examines the challenging problem of modelling the interaction between individual attentional limitations and decision-making performance in networked human-automation system tasks. Analysis of real experimental data from a task involving networked supervision of multiple unmanned aerial vehicles by human participants shows that both task load and network message quality affect performance, but that these effects are modulated by individual differences in working memory (WM) capacity. These insights were used to assess three statistical approaches for modelling and making predictions with real experimental networked supervisory performance data: classical linear regression, non-parametric Gaussian processes and probabilistic Bayesian networks. It is shown that each of these approaches can help designers of networked human-automated systems cope with various uncertainties in order to accommodate future users by linking expected operating conditions and performance from real experimental data to observable cognitive traits like WM capacity. Practitioner Summary: Working memory (WM) capacity helps account for inter-individual variability in operator performance in networked unmanned aerial vehicle supervisory tasks. This is useful for reliable performance prediction near experimental conditions via linear models; robust statistical prediction beyond experimental conditions via Gaussian process models; and probabilistic inference about unknown task conditions/WM capacities via Bayesian network models.
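Of the three modelling approaches compared, Gaussian-process regression is the least standard to write down from scratch; a numpy-only sketch, with a hypothetical two-feature input (task load and a WM-capacity score) and invented coefficients, is:

```python
import numpy as np

def gp_predict(X_train, y_train, X_test, length=1.0, noise=0.1):
    """Gaussian-process regression with an RBF kernel (posterior mean only)."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * length**2))
    K = k(X_train, X_train) + noise**2 * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y_train)
    return k(X_test, X_train) @ alpha

# Hypothetical data: performance vs. (task load, WM-capacity score).
rng = np.random.default_rng(5)
X = rng.uniform(0.0, 1.0, size=(40, 2))
y = 0.8 - 0.5 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0.0, 0.05, 40)
print(gp_predict(X, y, np.array([[0.5, 0.9], [0.9, 0.2]])))
```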
Reasoning, learning, and creativity: frontal lobe function and human decision-making.
Collins, Anne; Koechlin, Etienne
2012-01-01
The frontal lobes subserve decision-making and executive control--that is, the selection and coordination of goal-directed behaviors. Current models of frontal executive function, however, do not explain human decision-making in everyday environments featuring uncertain, changing, and especially open-ended situations. Here, we propose a computational model of human executive function that clarifies this issue. Using behavioral experiments, we show that, unlike others, the proposed model predicts human decisions and their variations across individuals in naturalistic situations. The model reveals that, to drive action, the human frontal function monitors up to three or four concurrent behavioral strategies and infers online their ability to predict action outcomes: whenever one appears more reliable than unreliable, this strategy is chosen to guide the selection and learning of actions that maximize rewards. Otherwise, a new behavioral strategy is tentatively formed, partly from those stored in long-term memory, then probed, and, if competitive, confirmed to subsequently drive action. Thus, the human executive function has a monitoring capacity limited to three or four behavioral strategies. This limitation is compensated by the binary structure of executive control, which in ambiguous and unknown situations promotes the exploration and creation of new behavioral strategies. The results support a model of human frontal function that integrates reasoning, learning, and creative abilities in the service of decision-making and adaptive behavior.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dandini, Vincent John; Duran, Felicia Angelica; Wyss, Gregory Dane
2003-09-01
This article describes how features of event tree analysis and Monte Carlo-based discrete event simulation can be combined with concepts from object-oriented analysis to develop a new risk assessment methodology with some of the best features of each. The resultant object-based event scenario tree (OBEST) methodology enables an analyst to rapidly construct realistic models for scenarios for which an a priori discovery of event ordering is either cumbersome or impossible. Each scenario produced by OBEST is automatically associated with a likelihood estimate because probabilistic branching is integral to the object model definition. The OBEST methodology is then applied to an aviation safety problem that considers mechanisms by which an aircraft might become involved in a runway incursion incident. The resulting OBEST model demonstrates how a close link between human reliability analysis and probabilistic risk assessment methods can provide important insights into aviation safety phenomenology.
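A toy version of such probabilistic branching over scenario objects can convey the idea; the event names and branch probabilities below are invented for illustration and do not come from the OBEST aviation model:

```python
import random

def simulate_incursion(rng):
    """One Monte Carlo scenario with probabilistic branching."""
    events = []
    if rng.random() < 0.05:          # tower issues a conflicting clearance
        events.append("conflicting_clearance")
        if rng.random() < 0.30:      # crew fails to detect the conflict
            events.append("crew_miss")
            return events, True      # runway incursion occurs
    return events, False

rng = random.Random(0)
runs = 100_000
incursions = sum(simulate_incursion(rng)[1] for _ in range(runs))
print(incursions / runs)             # ~0.015 (= 0.05 * 0.30)
```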
An improved mounting device for attaching intracranial probes in large animal models.
Dunster, Kimble R
2015-12-01
The rigid support of intracranial probes can be difficult when using animal models, as mounting devices suitable for the probes are either not available or are designed for human use and unsuitable for animal skulls. A cheap and reliable mounting device for securing intracranial probes in large animal models is described. Using commonly available clinical consumables, a universal mounting device for securing intracranial probes to the skull of large animals was developed and tested. The simply made device holds a variety of probes from 500 μm to 1.3 mm in diameter and was used to secure probes to the skulls of sheep for up to 18 h. No adhesives or cements were used. The described device provides a reliable method of securing probes to the skull of animals.
Bayesian Cue Integration as a Developmental Outcome of Reward Mediated Learning
Weisswange, Thomas H.; Rothkopf, Constantin A.; Rodemann, Tobias; Triesch, Jochen
2011-01-01
Average human behavior in cue combination tasks is well predicted by Bayesian inference models. As this capability is acquired over developmental timescales, the question arises how it is learned. Here we investigated whether reward-dependent learning, which is well established at the computational, behavioral, and neuronal levels, could contribute to this development. It is shown that a model-free reinforcement learning algorithm can indeed learn to do cue integration, i.e., weight uncertain cues according to their respective reliabilities, and can even do so when reliabilities are changing. We also consider the case of causal inference, where multimodal signals can originate from one or from multiple separate objects and should not always be integrated. In this case, the learner is shown to develop a behavior that is closest to Bayesian model averaging. We conclude that reward-mediated learning could be a driving force for the development of cue integration and causal inference. PMID:21750717
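The Bayesian benchmark that such a learner approaches is the classic precision-weighted combination of independent Gaussian cues; a minimal sketch:

```python
import numpy as np

def integrate_cues(means, sigmas):
    """Precision-weighted (Bayesian) combination of independent Gaussian cues."""
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2   # cue reliabilities
    mean = np.sum(w * np.asarray(means)) / np.sum(w)
    sigma = np.sqrt(1.0 / np.sum(w))
    return mean, sigma

# A reliable visual cue and a noisy auditory cue about an object's position:
print(integrate_cues(means=[0.0, 2.0], sigmas=[1.0, 3.0]))  # (~0.2, ~0.95)
```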
Probabilistic vs. non-probabilistic approaches to the neurobiology of perceptual decision-making
Drugowitsch, Jan; Pouget, Alexandre
2012-01-01
Optimal binary perceptual decision making requires accumulation of evidence in the form of a probability distribution that specifies the probability of the choices being correct given the evidence so far. Reward rates can then be maximized by stopping the accumulation when the confidence about either option reaches a threshold. Behavioral and neuronal evidence suggests that humans and animals follow such a probabilistic decision strategy, although its neural implementation has yet to be fully characterized. Here we show that diffusion decision models and attractor network models provide an approximation to the optimal strategy only under certain circumstances. In particular, neither model type is sufficiently flexible to encode the reliability of both the momentary and the accumulated evidence, which is a prerequisite to accumulating evidence of time-varying reliability. Probabilistic population codes, in contrast, can encode these quantities and, as a consequence, have the potential to implement the optimal strategy accurately. PMID:22884815
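A diffusion decision model of the kind discussed can be simulated directly; the sketch below uses assumed drift and threshold values and checks the simulated accuracy against the closed-form value 1/(1 + e^(-2av/sigma^2)):

```python
import numpy as np

def ddm_trial(drift, threshold, dt=0.001, sigma=1.0, rng=None):
    """Simulate one diffusion-decision-model trial; return (choice, RT)."""
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + sigma * np.sqrt(dt) * rng.normal()
        t += dt
    return (x > 0), t

rng = np.random.default_rng(6)
trials = [ddm_trial(drift=1.0, threshold=1.0, rng=rng) for _ in range(2000)]
accuracy = np.mean([choice for choice, _ in trials])
print(f"accuracy ~ {accuracy:.2f}")   # theory: 1/(1 + exp(-2)) ~ 0.88
```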
A HUMAN FACTORS META MODEL FOR U.S. NUCLEAR POWER PLANT CONTROL ROOM MODERNIZATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joe, Jeffrey C.
Over the last several years, the United States (U.S.) Department of Energy (DOE) has sponsored human factors research and development (R&D) and human factors engineering (HFE) activities through its Light Water Reactor Sustainability (LWRS) program to modernize the main control rooms (MCR) of commercial nuclear power plants (NPP). Idaho National Laboratory (INL), in partnership with numerous commercial nuclear utilities, has conducted some of this R&D to enable the life extension of NPPs (i.e., provide the technical basis for the long-term reliability, productivity, safety, and security of U.S. NPPs). From these activities performed to date, a human factors meta model for U.S. NPP control room modernization can now be formulated. This paper discusses this emergent HFE meta model for NPP control room modernization, with the goal of providing an integrated high-level roadmap and guidance on how to perform human factors R&D and HFE for those in the U.S. nuclear industry who are engaging in the process of upgrading their MCRs.
Dong, Ren G.; Welcome, Daniel E.; McDowell, Thomas W.; Wu, John Z.
2015-01-01
While simulations of the measured biodynamic responses of the whole human body or body segments to vibration are conventionally interpreted as summaries of biodynamic measurements, and the resulting models are considered quantitative, this study looked at these simulations from a different angle: model calibration. The specific aims of this study are to review and clarify the theoretical basis for model calibration, to help formulate the criteria for calibration validation, and to help appropriately select and apply calibration methods. In addition to established vibration theory, a novel theorem of mechanical vibration is also used to enhance the understanding of the mathematical and physical principles of the calibration. Based on this enhanced understanding, a set of criteria was proposed and used to systematically examine the calibration methods. Besides theoretical analyses, a numerical testing method is also used in the examination. This study identified the basic requirements for each calibration method to obtain a unique calibration solution. This study also confirmed that the solution becomes more robust if more than sufficient calibration references are provided. Practically, however, as more references are used, more inconsistencies can arise among the measured data for representing the biodynamic properties. To help account for the relative reliabilities of the references, a baseline weighting scheme is proposed. The analyses suggest that the best choice of calibration method depends on the modeling purpose, the model structure, and the availability and reliability of representative reference data. PMID:26740726
Chen, Xing; Huang, Yu-An; You, Zhu-Hong; Yan, Gui-Ying; Wang, Xue-Song
2017-03-01
Accumulating clinical observations have indicated that microbes living in the human body are closely associated with a wide range of human noninfectious diseases, which provides promising insights into understanding complex disease mechanisms. Predicting microbe-disease associations could not only improve human disease diagnosis and prognosis, but also aid new drug development. However, little effort had been made to understand and predict human microbe-disease associations on a large scale until now. In this work, we constructed a microbe-human disease association network and further developed a novel computational model, the KATZ measure for Human Microbe-Disease Association prediction (KATZHMDA), based on the assumption that functionally similar microbes tend to have similar interaction and non-interaction patterns with noninfectious diseases, and vice versa. To our knowledge, KATZHMDA is the first tool for microbe-disease association prediction. The reliable prediction performance can be attributed to the use of the KATZ measure and the introduction of Gaussian interaction profile kernel similarity for microbes and diseases. LOOCV and k-fold cross validation were implemented to evaluate the effectiveness of this novel computational model based on known microbe-disease associations obtained from the HMDAD database. As a result, KATZHMDA achieved reliable performance with average AUCs of 0.8130 ± 0.0054 and 0.8301 ± 0.0033 in 2-fold and 5-fold cross validation, respectively, and an AUC of 0.8382 in the LOOCV framework. It is anticipated that KATZHMDA could be used to identify more novel microbes associated with important noninfectious human diseases and therefore benefit drug discovery and human medical improvement. Matlab codes and the dataset explored in this work are available at http://dwz.cn/4oX5mS.
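The core of a KATZ-based score is a truncated sum of weighted walk counts on the association graph. The toy sketch below builds the adjacency matrix from a tiny invented microbe-disease matrix; it omits the Gaussian interaction profile kernel similarities that KATZHMDA adds to the diagonal blocks:

```python
import numpy as np

def katz_scores(A, beta=0.01, k_max=4):
    """Truncated KATZ measure: sum over k = 1..k_max of beta^k * A^k."""
    S = np.zeros_like(A, dtype=float)
    Ak = np.eye(A.shape[0])
    for k in range(1, k_max + 1):
        Ak = Ak @ A
        S += beta**k * Ak
    return S

# Toy bipartite graph: 3 microbes x 2 diseases with known associations M.
M = np.array([[1, 0], [1, 1], [0, 1]], dtype=float)
A = np.block([[np.zeros((3, 3)), M], [M.T, np.zeros((2, 2))]])
scores = katz_scores(A)[:3, 3:]        # the microbe-disease block
print(np.round(scores, 6))
```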
Next generation human skin constructs as advanced tools for drug development.
Abaci, H E; Guo, Zongyou; Doucet, Yanne; Jacków, Joanna; Christiano, Angela
2017-11-01
Many diseases, as well as side effects of drugs, manifest themselves through skin symptoms. Skin is a complex tissue that hosts various specialized cell types and performs many roles including physical barrier, immune and sensory functions. Therefore, modeling skin in vitro presents technical challenges for tissue engineering. Since the first attempts at engineering human epidermis in 1970s, there has been a growing interest in generating full-thickness skin constructs mimicking physiological functions by incorporating various skin components, such as vasculature and melanocytes for pigmentation. Development of biomimetic in vitro human skin models with these physiological functions provides a new tool for drug discovery, disease modeling, regenerative medicine and basic research for skin biology. This goal, however, has long been delayed by the limited availability of different cell types, the challenges in establishing co-culture conditions, and the ability to recapitulate the 3D anatomy of the skin. Recent breakthroughs in induced pluripotent stem cell (iPSC) technology and microfabrication techniques such as 3D-printing have allowed for building more reliable and complex in vitro skin models for pharmaceutical screening. In this review, we focus on the current developments and prevailing challenges in generating skin constructs with vasculature, skin appendages such as hair follicles, pigmentation, immune response, innervation, and hypodermis. Furthermore, we discuss the promising advances that iPSC technology offers in order to generate in vitro models of genetic skin diseases, such as epidermolysis bullosa and psoriasis. We also discuss how future integration of the next generation human skin constructs onto microfluidic platforms along with other tissues could revolutionize the early stages of drug development by creating reliable evaluation of patient-specific effects of pharmaceutical agents. Impact statement: Skin is a complex tissue that hosts various specialized cell types and performs many roles including barrier, immune, and sensory functions. For human-relevant drug testing, there has been a growing interest in building more physiological skin constructs by incorporating different skin components, such as vasculature, appendages, pigment, innervation, and adipose tissue. This paper provides an overview of the strategies to build complex human skin constructs that can faithfully recapitulate human skin and thus can be used in drug development targeting skin diseases. In particular, we discuss recent developments and remaining challenges in incorporating various skin components, availability of iPSC-derived skin cell types and in vitro skin disease models. In addition, we provide insights on the future integration of these complex skin models with other organs on microfluidic platforms as well as potential readout technologies for high-throughput drug screening.
NASA Technical Reports Server (NTRS)
Hess, R. A.
1976-01-01
Paramount to proper utilization of electronic displays is a method for determining pilot-centered display requirements. Display design should be viewed fundamentally as a guidance and control problem which has interactions with the designer's knowledge of human psychomotor activity. From this standpoint, reliable analytical models of human pilots as information processors and controllers can provide valuable insight into the display design process. A relatively straightforward, nearly algorithmic procedure for deriving model-based, pilot-centered display requirements was developed and is presented. The optimal or control theoretic pilot model serves as the backbone of the design methodology, which is specifically directed toward the synthesis of head-down, electronic, cockpit display formats. Some novel applications of the optimal pilot model are discussed. An analytical design example is offered which defines a format for the electronic display to be used in a UH-1H helicopter in a landing approach task involving longitudinal and lateral degrees of freedom.
Deterrence and transmission as mechanisms ensuring reliability of gossip.
Giardini, Francesca
2012-10-01
Spreading information about the members of one's group is one of the most universal human behaviors. Thanks to gossip, individuals can acquire information about their peers without sustaining the burden of costly interactions with cheaters, and they can also create and revise social bonds. Gossip also has several positive functions at the group level, promoting cohesion and norm compliance. However, gossip can be unreliable, and it can be used to damage others' reputations or to circulate false information, thus becoming detrimental to the people involved and useless for the group. In this work, we propose a theoretical model in which the reliability of gossip depends on the joint functioning of two distinct mechanisms. Thanks to the first, deterrence, individuals tend to avoid informational cheating because they fear punishment and the disruption of social bonds. On the other hand, transmission provides humans with the opportunity to reduce the consequences of cheating through manipulation of the source of gossip.
Computational problems in autoregressive moving average (ARMA) models
NASA Technical Reports Server (NTRS)
Agarwal, G. C.; Goodarzi, S. M.; Oneill, W. D.; Gottlieb, G. L.
1981-01-01
The choice of the sampling interval and the selection of the order of the model in time series analysis are considered. Band-limited (up to 15 Hz) random torque perturbations are applied to the human ankle joint. The applied torque input, the angular rotation output, and the electromyographic activity, recorded with surface electrodes from the extensor and flexor muscles of the ankle joint, are measured. Autoregressive moving average models are developed. A parameter-constraining technique is applied to develop more reliable models. The asymptotic behavior of the system must be taken into account during parameter optimization to develop predictive models.
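A modern sketch of the order-selection step: fit candidate ARMA(p, q) models to a synthetic series and choose the order minimizing an information criterion. This uses the statsmodels package (assumed available) and is not the paper's parameter-constraining technique:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic stand-in for a sampled input-output record: an ARMA(2,1) process.
rng = np.random.default_rng(7)
e = rng.normal(size=1000)
y = np.zeros(1000)
for t in range(2, 1000):
    y[t] = 1.2 * y[t - 1] - 0.4 * y[t - 2] + e[t] + 0.5 * e[t - 1]

# Select the model order by BIC rather than by guesswork.
orders = [(p, q) for p in range(1, 4) for q in range(3)]
best = min(orders, key=lambda pq: ARIMA(y, order=(pq[0], 0, pq[1])).fit().bic)
print("selected (p, q):", best)
```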
A comparison of computer-assisted and manual wound size measurement.
Thawer, Habiba A; Houghton, Pamela E; Woodbury, M Gail; Keast, David; Campbell, Karen
2002-10-01
Accurate and precise wound measurements are a critical component of every wound assessment. To examine the reliability and validity of a new computerized technique for measuring human and animal wounds, chronic human wounds (N = 45) and surgical animal wounds (N = 38) were assessed using manual and computerized techniques. Using intraclass correlation coefficients, intrarater and interrater reliability of surface area measurements obtained using the computerized technique were compared to those obtained using acetate tracings and planimetry. A single measurement of surface area using either technique produced excellent intrarater and interrater reliability for both human and animal wounds, but the computerized technique was more precise than the manual technique for measuring the surface area of animal wounds. For both types of wounds and measurement techniques, intrarater and interrater reliability improved when the average of three repeated measurements was obtained. The precision of each technique with human wounds and the precision of the manual technique with animal wounds also improved when three repeated measurement results were averaged. Concurrent validity between the two techniques was excellent for human wounds but poor for the smaller animal wounds, regardless of whether single or the average of three repeated surface area measurements was used. The computerized technique permits reliable and valid assessment of the surface area of both human and animal wounds.
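The intraclass correlation coefficients used above can be computed from a two-way ANOVA decomposition. The sketch below implements the single-measure, absolute-agreement ICC(2,1) of Shrout and Fleiss, with invented wound-area ratings:

```python
import numpy as np

def icc_2_1(ratings):
    """Two-way random, absolute-agreement, single-measure ICC(2,1)."""
    Y = np.asarray(ratings, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    ms_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum() / (k - 1)
    resid = Y - Y.mean(axis=1, keepdims=True) - Y.mean(axis=0) + grand
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Hypothetical wound surface areas (cm^2): two raters measuring five wounds.
print(round(icc_2_1([[4.1, 4.3], [9.8, 10.1], [2.0, 2.2],
                     [15.5, 15.0], [7.2, 7.4]]), 3))
```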
Yang, Huan; Meijer, Hil G E; Buitenweg, Jan R; van Gils, Stephan A
2016-01-01
Healthy or pathological states of nociceptive subsystems determine different stimulus-response relations measured from quantitative sensory testing. In turn, stimulus-response measurements may be used to assess these states. In a recently developed computational model, six model parameters characterize activation of nerve endings and spinal neurons. However, both model nonlinearity and the limited information in yes-no detection responses to electrocutaneous stimuli make it challenging to estimate the model parameters. Here, we address the question of whether and how one can overcome these difficulties for reliable parameter estimation. First, we fit the computational model to experimental stimulus-response pairs by maximizing the likelihood. To evaluate the balance between model fit and complexity, i.e., the number of model parameters, we evaluate the Bayesian Information Criterion. We find that the computational model is better than a conventional logistic model regarding this balance. Second, our theoretical analysis suggests varying the pulse width among applied stimuli as a necessary condition to prevent structural non-identifiability. In addition, a numerically implemented profile likelihood approach reveals both structural and practical non-identifiability. Our model-based approach, integrating psychophysical measurements, can be useful for a reliable assessment of states of the nociceptive system.
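The model-comparison step rests on a simple formula, BIC = k ln(n) - 2 ln(L); a sketch with invented log-likelihoods and parameter counts standing in for the fitted computational and logistic models:

```python
import numpy as np

def bic(log_likelihood, n_params, n_obs):
    """Bayesian Information Criterion; lower values indicate a better balance."""
    return n_params * np.log(n_obs) - 2.0 * log_likelihood

# Hypothetical fits to the same 300 yes-no detection responses:
print(bic(-150.2, n_params=6, n_obs=300))   # six-parameter computational model
print(bic(-160.9, n_params=2, n_obs=300))   # two-parameter logistic model
```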
Hodkinson, Duncan J; Krause, Kristina; Khawaja, Nadine; Renton, Tara F; Huggins, John P; Vennart, William; Thacker, Michael A; Mehta, Mitul A; Zelaya, Fernando O; Williams, Steven C R; Howard, Matthew A
2013-01-01
Arterial spin labelling (ASL) is increasingly being applied to study the cerebral response to pain in both experimental human models and patients with persistent pain. Despite its advantages, scanning time and reliability remain important issues in the clinical applicability of ASL. Here we present the test-retest analysis of concurrent pseudo-continuous ASL (pCASL) and visual analogue scale (VAS), in a clinical model of on-going pain following third molar extraction (TME). Using ICC performance measures, we were able to quantify the reliability of the post-surgical pain state and ΔCBF (change in CBF), both at the group and individual case level. Within-subject, the inter- and intra-session reliability of the post-surgical pain state was ranked good-to-excellent (ICC > 0.6) across both pCASL and VAS modalities. The parameter ΔCBF (change in CBF between pre- and post-surgical states) performed reliably (ICC > 0.4), provided that a single baseline condition (or the mean of more than one baseline) was used for subtraction. Between-subjects, the pCASL measurements in the post-surgical pain state and ΔCBF were both characterised as reliable (ICC > 0.4). However, the subjective VAS pain ratings demonstrated a significant contribution of pain state variability, which suggests diminished utility for interindividual comparisons. These analyses indicate that the pCASL imaging technique has considerable potential for the comparison of within- and between-subjects differences associated with pain-induced state changes and baseline differences in regional CBF. They also suggest that differences in baseline perfusion and functional lateralisation characteristics may play an important role in the overall reliability of the estimated changes in CBF. Repeated measures designs have the important advantage that they provide good reliability for comparing condition effects because all sources of variability between subjects are excluded from the experimental error. The ability to elicit reliable neural correlates of on-going pain using quantitative perfusion imaging may help support the conclusions derived from subjective self-report.
Individual Differences in Human Reliability Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeffrey C. Joe; Ronald L. Boring
2014-06-01
While human reliability analysis (HRA) methods include uncertainty in quantification, the nominal model of human error in HRA typically assumes that operator performance does not vary significantly when they are given the same initiating event, indicators, procedures, and training, and that any differences in operator performance are simply aleatory (i.e., random). While this assumption generally holds true when performing routine actions, variability in operator response has been observed in multiple studies, especially in complex situations that go beyond training and procedures. As such, complexity can lead to differences in operator performance (e.g., operator understanding and decision-making). Furthermore, psychological research has shown that there are a number of known antecedents (i.e., attributable causes) that consistently contribute to observable and systematically measurable (i.e., not random) differences in behavior. This paper reviews examples of individual differences taken from operational experience and the psychological literature. The impact of these differences in human behavior and their implications for HRA are then discussed. We propose that individual differences should not be treated as aleatory, but rather as epistemic. Ultimately, by understanding the sources of individual differences, it is possible to remove some epistemic uncertainty from analyses.
NASA Technical Reports Server (NTRS)
Cohen, Gerald C. (Inventor); McMann, Catherine M. (Inventor)
1991-01-01
An improved method and system for automatically generating reliability models for use with a reliability evaluation tool is described. The reliability model generator of the present invention includes means for storing a plurality of low level reliability models which represent the reliability characteristics for low level system components. In addition, the present invention includes means for defining the interconnection of the low level reliability models via a system architecture description. In accordance with the principles of the present invention, a reliability model for the entire system is automatically generated by aggregating the low level reliability models based on the system architecture description.
NASA Astrophysics Data System (ADS)
Zhang, Linna; Li, Gang; Sun, Meixiu; Li, Hongxiao; Wang, Zhennan; Li, Yingxin; Lin, Ling
2017-11-01
Identifying whole blood as either human or nonhuman is an important responsibility for import-export ports and inspection and quarantine departments. Analytical methods and DNA testing methods are usually destructive. Previous studies demonstrated that the visible diffuse reflectance spectroscopy method can achieve noncontact discrimination of human and nonhuman blood. An appropriate method for calibration set selection is very important for a robust quantitative model. In this paper, the Random Selection (RS) method and the Kennard-Stone (KS) method were applied to select samples for the calibration set. Moreover, a proper chemometric method can greatly improve the performance of a classification or quantification model. The Partial Least Squares Discriminant Analysis (PLSDA) method is commonly used to identify blood species with spectroscopy, while the Least Squares Support Vector Machine (LSSVM) has proved well suited to discriminant analysis. In this research, both PLSDA and LSSVM were used for human blood discrimination. Compared with PLSDA, LSSVM enhanced the performance of the identification models. The overall results showed that LSSVM was more feasible for distinguishing human from animal blood, and demonstrated that it is a reliable, robust, and more effective and accurate method for human blood identification.
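Of the two calibration-set strategies named in this abstract, Kennard-Stone is the algorithmic one: it picks the two most distant samples, then greedily adds whichever remaining sample lies farthest from the already-selected set. A minimal sketch, assuming spectra stored as rows of a NumPy array (variable names are illustrative, not from the paper):

```python
import numpy as np

def kennard_stone(X, n_select):
    """Select calibration samples that span the measurement space."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    i, j = np.unravel_index(np.argmax(d), d.shape)              # two most distant points
    selected = [i, j]
    remaining = [r for r in range(len(X)) if r not in selected]
    while len(selected) < n_select:
        # distance of each candidate to its nearest already-selected sample
        nearest = d[np.ix_(remaining, selected)].min(axis=1)
        pick = remaining[int(np.argmax(nearest))]
        selected.append(pick)
        remaining.remove(pick)
    return selected

X = np.random.default_rng(0).normal(size=(50, 10))  # 50 spectra, 10 wavelengths
calibration_idx = kennard_stone(X, n_select=15)
```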
Havas, K A; Boone, R B; Hill, A E; Salman, M D
2014-06-01
Brucellosis has been reported in livestock and humans in the country of Georgia, with Brucella melitensis the most common species causing disease. Georgia lacked sufficient data to assess the effectiveness of the various potential control measures using a reliable population-based simulation model of animal-to-human transmission of this infection. Therefore, an agent-based model was built using data from previous studies to evaluate the effect of an animal-level infection control programme on human incidence and on sheep flock and cattle herd prevalence of brucellosis in the Kakheti region of Georgia. This model simulated the patterns of interaction of human animal-workers, sheep flocks and cattle herds under various infection control measures and returned population-based data. The model simulates the use of control measures needed for herd and flock prevalence to fall below 2%. As per the model output, shepherds had the greatest disease reduction as a result of the infection control programme, and cattle had the greatest influence on the incidence of human disease. Control strategies should include all susceptible animal species (sheep and cattle), identify the species of Brucella present in the cattle population, and be conducted at the municipality level. This approach can serve as a model for other countries and regions where assessment of control strategies is needed but data are scattered. © 2013 Blackwell Verlag GmbH.
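The paper's agent-based model is not reproduced here, but its general shape, herd-level infection states updated step by step under a control measure until prevalence falls below the 2% target, can be sketched. Every rate and count below is a hypothetical placeholder, not a calibrated value from the study.

```python
import random

def simulate(flocks=200, init_prev=0.10, control_effect=0.6, years=20):
    """Toy flock-level transmission model with a control measure that scales
    the per-contact transmission probability down by `control_effect`."""
    random.seed(1)
    infected = set(range(int(flocks * init_prev)))
    prev = init_prev
    for year in range(1, years + 1):
        new = set()
        for _ in infected:
            contact = random.randrange(flocks)
            if random.random() < 0.3 * (1 - control_effect):  # hypothetical rate
                new.add(contact)
        cleared = {f for f in infected if random.random() < 0.25}  # e.g. test-and-slaughter
        infected = (infected | new) - cleared
        prev = len(infected) / flocks
        if prev < 0.02:  # the 2% prevalence target from the study
            break
    return year, prev

print(simulate())
```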
USDA-ARS?s Scientific Manuscript database
Floods have negative impacts on society, causing damages in infrastructures and industry, and in the worst cases, causing loss of human lives. Thus early and accurate warning is crucial to significantly reduce the impacts on public safety and economy. Reliable flood warning can be generated using ...
Assessing the Quality of Academic Libraries on the Web: The Development and Testing of Criteria.
ERIC Educational Resources Information Center
Chao, Hungyune
2002-01-01
This study develops and tests an instrument useful for evaluating the quality of academic library Web sites. Discusses criteria for print materials and human-computer interfaces; user-based perspectives; the use of factor analysis; a survey of library experts; testing reliability through analysis of variance; and regression models. (Contains 53…
Small-Animal Molecular Imaging for Preclinical Cancer Research: μPET and μSPECT.
Cuccurullo, Vincenzo; Di Stasio, Giuseppe D; Schillirò, Maria L; Mansi, Luigi
2016-01-01
Due to the different sizes of humans and rodents, the performance of clinical imaging devices is not sufficient for scientifically reliable evaluation in mice and rats; therefore dedicated small-animal systems with much higher sensitivity and spatial resolution, compared to the ones used in humans, are required. Small-animal imaging represents a cutting-edge research method able to approach an enormous variety of pathologies in which animal models of disease may be used to elucidate the mechanisms underlying the human condition and/or to allow a translational pharmacological (or other) evaluation of therapeutic tools. Molecular imaging, avoiding animal sacrifice, permits repetitive (i.e. longitudinal) studies on the same animal, which becomes its own control. In this way the evaluation over time of disease progression or of the treatment response is also enabled. Many different rodent models have been applied to study almost all kinds of human pathologies or to test a wide series of drugs and/or other therapeutic instruments. In particular, relevant information has been achieved in oncology by in vivo neoplastic phenotypes, obtained through procedures such as subcutaneous tumor grafts, surgical transplantation of solid tumor, orthotopic injection of tumor cells into specific organs/sites of interest, and genetic modification of animals to promote tumorigenesis; in this way traditional or innovative treatments, also including gene therapy, of animals with a cancer induced by a known carcinogen may be tested. Each model has its own disadvantages but, comparing different studies, it is possible to achieve a panoramic and therefore substantially reliable view of the specific subject. Small-animal molecular imaging has become an invaluable component of modern biomedical research that will probably gain an increasingly important role in the next few years.
Understanding the nature of wealth and its effects on human fitness
Mulder, Monique Borgerhoff; Beheim, Bret A.
2011-01-01
Studying fitness consequences of variable behavioural, physiological and cognitive traits in contemporary populations constitutes the specific contribution of human behavioural ecology to the study of human diversity. Yet, despite 30 years of evolutionary anthropological interest in the determinants of fitness, there exist few principled investigations of the diverse sources of wealth that might reveal selective forces during recent human history. To develop a more holistic understanding of how selection shapes human phenotypic traits, be these transmitted by genetic or cultural means, we expand the conventional focus on associations between socioeconomic status and fitness to three distinct types of wealth—embodied, material and relational. Using a model selection approach to the study of women's success in raising offspring in an African horticultural population (the Tanzanian Pimbwe), we find that the top performing models consistently include relational and material wealth, with embodied wealth as a less reliable predictor. Specifically, child mortality risk is increased with few household assets, parent nonresidency, child legitimacy, and one or more parents having been accused of witchcraft. The use of multiple models to test various hypotheses greatly facilitates systematic comparative analyses of human behavioural diversity in wealth accrual and investment across different kinds of societies. PMID:21199839
Developing Cognitive Models for Social Simulation from Survey Data
NASA Astrophysics Data System (ADS)
Alt, Jonathan K.; Lieberman, Stephen
The representation of human behavior and cognition continues to challenge the modeling and simulation community. The use of survey and polling instruments to inform belief states, issue stances and action choice models provides a compelling means of developing models and simulations with empirical data. Using these types of data to populate social simulations can greatly enhance the feasibility of validation efforts, the reusability of social and behavioral modeling frameworks, and the testable reliability of simulations. We provide a case study demonstrating these effects, document the use of survey data to develop cognitive models, and suggest future paths forward for social and behavioral modeling.
Hartman, Matthew E; Dai, Dao-Fu; Laflamme, Michael A
2016-01-15
Human pluripotent stem cells (PSCs) represent an attractive source of cardiomyocytes with potential applications including disease modeling, drug discovery and safety screening, and novel cell-based cardiac therapies. Insights from embryology have contributed to the development of efficient, reliable methods capable of generating large quantities of human PSC-cardiomyocytes with cardiac purities ranging up to 90%. However, for human PSCs to meet their full potential, the field must identify methods to generate cardiomyocyte populations that are uniform in subtype (e.g. homogeneous ventricular cardiomyocytes) and have more mature structural and functional properties. For in vivo applications, cardiomyocyte production must be highly scalable and clinical grade, and we will need to overcome challenges including graft cell death, immune rejection, arrhythmogenesis, and tumorigenic potential. Here we discuss the types of human PSCs, commonly used methods to guide their differentiation into cardiomyocytes, the phenotype of the resultant cardiomyocytes, and the remaining obstacles to their successful translation. Copyright © 2015 Elsevier B.V. All rights reserved.
March, Sandra; Ramanan, Vyas; Trehan, Kartik; Ng, Shengyong; Galstian, Ani; Gural, Nil; Scull, Margaret A.; Shlomai, Amir; Mota, Maria; Fleming, Heather E.; Khetani, Salman R.; Rice, Charles M.; Bhatia, Sangeeta N.
2018-01-01
Studying human hepatotropic pathogens such as hepatitis B and C viruses and malaria will be necessary for understanding host-pathogen interactions, and developing therapy and prophylaxis. Unfortunately, existing in vitro liver models typically employ either cell lines that exhibit aberrant physiology, or primary human hepatocytes in culture configurations wherein they rapidly lose their hepatic functional phenotype. Stable, robust, and reliable in vitro primary human hepatocyte models are needed as platforms for infectious disease applications. For this purpose, we describe the application of micropatterned co-cultures (MPCCs), which consist of primary human hepatocytes organized into 2D islands that are surrounded by supportive cells. Using this system, we demonstrate how to recapitulate in vitro liver infection by the hepatitis B and C viruses and Plasmodium pathogens. In turn, the MPCC platform can be used to uncover aspects of host-pathogen interactions, and has the potential to be used for medium-throughput drug screening and vaccine development. PMID:26584444
Matejić, Bojana; Milenović, Miodrag; Kisić Tepavčević, Darija; Simić, Dušica; Pekmezović, Tatjana; Worley, Jody A.
2015-01-01
We report findings from a validation study of the translated and culturally adapted Serbian version of the Maslach Burnout Inventory-Human Services Survey (MBI-HSS), for a sample of anesthesiologists working in tertiary healthcare. The results showed sufficient overall reliability (Cronbach's α = 0.72) of the scores (items 1–22). The results of Bartlett's test of sphericity (χ2 = 1983.75, df = 231, p < 0.001) and the Kaiser-Meyer-Olkin measure of sampling adequacy (0.866) provided solid justification for factor analysis. In order to increase the sensitivity of this questionnaire, we performed an unconstrained factor analysis (retaining factors with eigenvalues greater than 1), which enabled us to extract the most suitable factor structure for our study instrument. The exploratory factor analysis revealed five factors with eigenvalues greater than 1.0, explaining 62.0% of cumulative variance. Velicer's MAP test supported the five-factor model with the smallest average squared correlation of 0.184. This study indicated that the Serbian version of the MBI-HSS is a reliable and valid instrument to measure burnout in a population of anesthesiologists. Results confirmed strong psychometric characteristics of the study instrument, with recommendations for interpretation of two new factors that may be unique to the Serbian version of the MBI-HSS. PMID:26090517
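The α = 0.72 figure above is Cronbach's alpha, which is mechanical to compute from a respondents-by-items score matrix. A minimal sketch (the matrix below is random placeholder data, so it yields an alpha near zero rather than the study's value):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha from an n_respondents x n_items score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1.0)) * (1.0 - item_vars / total_var)

scores = np.random.default_rng(0).integers(0, 7, size=(196, 22))  # 22 items, 0-6 scale
print(round(cronbach_alpha(scores), 2))
```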
NASA human factors programmatic overview
NASA Technical Reports Server (NTRS)
Connors, Mary M.
1992-01-01
Human factors addresses humans in their active and interactive capacities, i.e., in the mental and physical activities that they perform and in the contributions they make to achieving the goals of the mission. The overall goal of space human factors in NASA is to support the safety, productivity, and reliability of both the on-board crew and the ground support staff. Safety and reliability are fundamental requirements that human factors shares with other disciplines, while productivity represents the defining contribution of the human factors discipline.
Materials used to simulate physical properties of human skin.
Dąbrowska, A K; Rotaru, G-M; Derler, S; Spano, F; Camenzind, M; Annaheim, S; Stämpfli, R; Schmid, M; Rossi, R M
2016-02-01
For many applications in research, material development and testing, physical skin models are preferable to the use of human skin, because more reliable and reproducible results can be obtained. This article gives an overview of materials applied to model physical properties of human skin to encourage multidisciplinary approaches for more realistic testing and improved understanding of skin-material interactions. The literature databases Web of Science, PubMed and Google Scholar were searched using the terms 'skin model', 'skin phantom', 'skin equivalent', 'synthetic skin', 'skin substitute', 'artificial skin', 'skin replica', and 'skin model substrate.' Articles addressing material developments or measurements that include the replication of skin properties or behaviour were analysed. It was found that the most common materials used to simulate skin are liquid suspensions, gelatinous substances, elastomers, epoxy resins, metals and textiles. Nano- and micro-fillers can be incorporated in the skin models to tune their physical properties. While numerous physical skin models have been reported, most developments are research field-specific and based on trial-and-error methods. As the complexity of advanced measurement techniques increases, new interdisciplinary approaches are needed in future to achieve refined models which realistically simulate multiple properties of human skin. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
NASA-STD-7009 Guidance Document for Human Health and Performance Models and Simulations
NASA Technical Reports Server (NTRS)
Walton, Marlei; Mulugeta, Lealem; Nelson, Emily S.; Myers, Jerry G.
2014-01-01
Rigorous verification, validation, and credibility (VVC) processes are imperative to ensure that models and simulations (MS) are sufficiently reliable to address issues within their intended scope. The NASA standard for MS, NASA-STD-7009 (7009) [1], was an outcome of the Columbia Accident Investigation Board (CAIB), intended to ensure MS are developed, applied, and interpreted appropriately for making decisions that may impact crew or mission safety. Because the focus of 7009 is engineering systems, a NASA-STD-7009 Guidance Document is being developed to augment the 7009 and provide information, tools, and techniques applicable to the probabilistic and deterministic biological MS more prevalent in human health and performance (HHP) and space biomedical research and operations.
Equine recurrent uveitis--a spontaneous horse model of uveitis.
Deeg, Cornelia A; Hauck, Stefanie M; Amann, Barbara; Pompetzki, Dirk; Altmann, Frank; Raith, Albert; Schmalzl, Thomas; Stangassinger, Manfred; Ueffing, Marius
2008-01-01
Equine recurrent uveitis (ERU) is an autoimmune disease that occurs with a high prevalence (10%) in horses. ERU represents the only reliable spontaneous model for human autoimmune uveitis. We already identified and characterized novel autoantigens (malate dehydrogenase, recoverin, CRALBP) by analyzing the autoantibody-binding pattern of horses affected by spontaneous recurrent uveitis (ERU) to the retinal proteome. CRALBP also seems to be relevant to human autoimmune uveitis. Proteomic screening of vitreous and retinal samples from ERU diseased cases in comparison to healthy controls has led to the identification of a series of differentially regulated proteins, which are functionally linked to the immune system and the maintenance of the blood-retinal barrier. 2008 S. Karger AG, Basel.
Accessing key steps of human tumor progression in vivo by using an avian embryo model
NASA Astrophysics Data System (ADS)
Hagedorn, Martin; Javerzat, Sophie; Gilges, Delphine; Meyre, Aurélie; de Lafarge, Benjamin; Eichmann, Anne; Bikfalvi, Andreas
2005-02-01
Experimental in vivo tumor models are essential for comprehending the dynamic process of human cancer progression, identifying therapeutic targets, and evaluating antitumor drugs. However, current rodent models are limited by high costs, long experimental duration, variability, restricted accessibility to the tumor, and major ethical concerns. To avoid these shortcomings, we investigated whether tumor growth on the chick chorio-allantoic membrane after human glioblastoma cell grafting would replicate characteristics of the human disease. Avascular tumors consistently formed within 2 days, then progressed through vascular endothelial growth factor receptor 2-dependent angiogenesis, associated with hemorrhage, necrosis, and peritumoral edema. Blocking of vascular endothelial growth factor receptor 2 and platelet-derived growth factor receptor signaling pathways by using small-molecule receptor tyrosine kinase inhibitors abrogated tumor development. Gene regulation during the angiogenic switch was analyzed by oligonucleotide microarrays. Defined sample selection for gene profiling permitted identification of regulated genes whose functions are associated mainly with tumor vascularization and growth. Furthermore, expression of known tumor progression genes identified in the screen (IL-6 and cysteine-rich angiogenic inducer 61) as well as potential regulators (lumican and F-box-only 6) follow similar patterns in patient glioma. The model reliably simulates key features of human glioma growth in a few days and thus could considerably increase the speed and efficacy of research on human tumor progression and preclinical drug screening. Keywords: angiogenesis | animal model alternatives | glioblastoma
A Passive System Reliability Analysis for a Station Blackout
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brunett, Acacia; Bucknor, Matthew; Grabaskas, David
2015-05-03
The latest iterations of advanced reactor designs have included increased reliance on passive safety systems to maintain plant integrity during unplanned sequences. While these systems are advantageous in reducing the reliance on human intervention and availability of power, the phenomenological foundations on which these systems are built require a novel approach to a reliability assessment. Passive systems possess the unique ability to fail functionally without failing physically, a result of their explicit dependency on existing boundary conditions that drive their operating mode and capacity. Argonne National Laboratory is performing ongoing analyses that demonstrate various methodologies for the characterization of passive system reliability within a probabilistic framework. Two reliability analysis techniques are utilized in this work. The first approach, the Reliability Method for Passive Systems, provides a mechanistic technique employing deterministic models and conventional static event trees. The second approach, a simulation-based technique, utilizes discrete dynamic event trees to treat time-dependent phenomena during scenario evolution. For this demonstration analysis, both reliability assessment techniques are used to analyze an extended station blackout in a pool-type sodium fast reactor (SFR) coupled with a reactor cavity cooling system (RCCS). This work demonstrates the entire process of a passive system reliability analysis, including identification of important parameters and failure metrics, treatment of uncertainties and analysis of results.
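At quantification time, the "conventional static event trees" used by the Reliability Method for Passive Systems reduce to multiplying an initiating-event frequency down each branch and summing the failure sequences. A minimal sketch with entirely hypothetical numbers and branch structure (the report's actual values are not given in this abstract):

```python
# Hypothetical two-top-event static event tree for an extended station blackout.
init_freq = 1e-4        # station blackout frequency per year (placeholder)
p_rccs_fail = 1e-3      # RCCS functional failure probability (placeholder)
p_nat_circ_fail = 1e-2  # natural circulation failure probability (placeholder)

# Failure sequences: RCCS fails outright, or RCCS succeeds but natural
# circulation fails to remove decay heat.
seq1 = init_freq * p_rccs_fail
seq2 = init_freq * (1 - p_rccs_fail) * p_nat_circ_fail
print(f"core damage frequency ~ {seq1 + seq2:.1e} per year")
```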
In-Vivo Human Skin to Textiles Friction Measurements
NASA Astrophysics Data System (ADS)
Pfarr, Lukas; Zagar, Bernhard
2017-10-01
We report on a measurement system to determine highly reliable and accurate friction properties of textiles, as needed, for example, as input to garment simulation software. Our investigations led to a set-up that allows characterizing not just textile-to-textile but also textile to in-vivo human skin tribological properties, and thus yields fundamental knowledge about genuine wearer interaction in garments. The test method conveyed in this paper measures, concurrently and with high time resolution, the normal force as well as the resulting shear force caused by a friction subject sliding out of the static friction regime and into the dynamic regime on a test bench. Deeper analysis of various influences is enabled by extending the simple model following Coulomb's law for rigid body friction to include further essential parameters such as contact force, the predominant yarn orientation and skin hydration. This easy-to-use system enables reliable and reproducible measurement of both static and dynamic friction for a variety of friction partners, including human skin with all its variability.
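Under the Coulomb model the abstract starts from (F = μN), the static coefficient can be read off the peak of the shear-to-normal force ratio at slip onset, and the dynamic coefficient from the sliding phase after the peak. A minimal sketch, assuming synchronized force traces as NumPy arrays (the names and synthetic traces are illustrative):

```python
import numpy as np

def friction_coefficients(normal_force, shear_force):
    """Static mu from the ratio peak at slip onset; dynamic mu from the
    mean of the sliding phase that follows (Coulomb model F = mu * N)."""
    ratio = shear_force / normal_force
    peak = int(np.argmax(ratio))
    return ratio[peak], ratio[peak + 1:].mean()

t = np.linspace(0, 2, 400)
normal = np.full_like(t, 5.0)  # constant 5 N normal load
shear = np.where(t < 1, 3.0 * t, 2.2 + 0.1 * np.sin(20 * t))  # ramp to slip, then slide
mu_static, mu_dynamic = friction_coefficients(normal, shear)
print(round(mu_static, 2), round(mu_dynamic, 2))
```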
Identification of the human factors contributing to maintenance failures in a petroleum operation.
Antonovsky, Ari; Pollock, Clare; Straker, Leon
2014-03-01
This research aimed to identify the most frequently occurring human factors contributing to maintenance-related failures within a petroleum industry organization. Commonality between failures will assist in understanding reliability in maintenance processes, thereby preventing accidents in high-hazard domains. Methods exist for understanding the human factors contributing to accidents. Their application in a maintenance context has mainly been advanced in aviation and nuclear power. Maintenance in the petroleum industry provides a different context for investigating the role that human factors play in influencing outcomes. It is therefore worth investigating the contributing human factors to improve our understanding of both human factors in reliability and the factors specific to this domain. Detailed analyses were conducted of maintenance-related failures (N = 38) in a petroleum company using structured interviews with maintenance technicians. The interview structure was based on the Human Factor Investigation Tool (HFIT), which in turn was based on Rasmussen's model of human malfunction. A mean of 9.5 factors per incident was identified across the cases investigated. The three most frequent human factors contributing to the maintenance failures were found to be assumption (79% of cases), design and maintenance (71%), and communication (66%). HFIT proved to be a useful instrument for identifying the pattern of human factors that recurred most frequently in maintenance-related failures. The high frequency of failures attributed to assumptions and communication demonstrated the importance of problem-solving abilities and organizational communication in a domain where maintenance personnel have a high degree of autonomy and a wide geographical distribution.
Near-optimal integration of facial form and motion.
Dobs, Katharina; Ma, Wei Ji; Reddy, Leila
2017-09-08
Human perception consists of the continuous integration of sensory cues pertaining to the same object. While it has been fairly well shown that humans use an optimal strategy when integrating low-level cues proportional to their relative reliability, the integration processes underlying high-level perception are much less understood. Here we investigate cue integration in a complex high-level perceptual system, the human face processing system. We tested cue integration of facial form and motion in an identity categorization task and found that an optimal model could successfully predict subjects' identity choices. Our results suggest that optimal cue integration may be implemented across different levels of the visual processing hierarchy.
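The "optimal strategy" tested here is the standard reliability-weighted (inverse-variance) cue combination from Bayesian cue-integration theory: each cue is weighted in proportion to its reliability, and the fused estimate is never less reliable than either cue alone. A minimal sketch under a Gaussian assumption (the variable names and values are ours, not the paper's):

```python
def fuse(mu_form, var_form, mu_motion, var_motion):
    """Inverse-variance weighting of two cues to the same identity estimate."""
    w_form = (1.0 / var_form) / (1.0 / var_form + 1.0 / var_motion)
    mu = w_form * mu_form + (1.0 - w_form) * mu_motion
    var = 1.0 / (1.0 / var_form + 1.0 / var_motion)
    return mu, var

# the more reliable form cue (lower variance) pulls the fused estimate toward it
mu, var = fuse(mu_form=0.3, var_form=0.04, mu_motion=0.7, var_motion=0.16)
print(mu, var)  # 0.38, 0.032
```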
Okada, Morihiro; Miller, Thomas C; Roediger, Julia; Shi, Yun-Bo; Schech, Joseph Mat
2017-09-01
Various animal models are indispensable in biomedical research. Increasing awareness and regulations have prompted the adoption of more humane approaches in the use of laboratory animals. With the development of easier and faster methodologies to generate genetically altered animals, convenient and humane methods to genotype these animals are important for research involving such animals. Here, we report skin swabbing as a simple and noninvasive method for extracting genomic DNA from mice and frogs for genotyping. We show that this method is highly reliable and suitable for both immature and adult animals. Our method provides a simpler and more humane approach for genotyping vertebrate animals.
Prediction of Software Reliability using Bio Inspired Soft Computing Techniques.
Diwaker, Chander; Tomar, Pradeep; Poonia, Ramesh C; Singh, Vijander
2018-04-10
Many models have been developed for predicting software reliability, but each is restricted to particular methodologies and a limited number of parameters. A number of techniques and methodologies may be used for reliability prediction, and parameter selection deserves attention when estimating reliability: the reliability of a system may increase or decrease depending on the parameters used, so the factors that most heavily affect system reliability must be identified. At present, reusability is widely used across research areas. Reusability is the basis of Component-Based Systems (CBS); cost, time and human skill can be saved using Component-Based Software Engineering (CBSE) concepts, and CBSE metrics may be used to assess which techniques are more suitable for estimating system reliability. Soft computing is used for small- as well as large-scale problems where it is difficult to find accurate results due to uncertainty or randomness. Several possibilities exist for applying soft computing techniques to problems in medicine: clinical medicine uses fuzzy logic and neural network methodologies extensively, while basic medical science most frequently and preferably uses neural-network and genetic-algorithm approaches. Medical scientists have shown considerable interest in applying soft computing methodologies in the genetics, physiology, radiology, cardiology and neurology disciplines. CBSE encourages users to reuse past and existing software when making new products, providing quality with savings of time, memory space, and money. This paper focuses on the assessment of commonly used soft computing techniques: the Genetic Algorithm (GA), Neural Network (NN), Fuzzy Logic, Support Vector Machine (SVM), Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), and Artificial Bee Colony (ABC). It presents the working of these techniques, assesses their use for predicting reliability, and discusses the parameters considered when estimating and predicting reliability. This study can be used in the estimation and prediction of the reliability of various instruments used in medical systems, software engineering, computer engineering and mechanical engineering. These concepts can be applied to both software and hardware to predict reliability using CBSE.
Complete Plasmodium falciparum liver-stage development in liver-chimeric mice
Vaughan, Ashley M.; Mikolajczak, Sebastian A.; Wilson, Elizabeth M.; Grompe, Markus; Kaushansky, Alexis; Camargo, Nelly; Bial, John; Ploss, Alexander; Kappe, Stefan H.I.
2012-01-01
Plasmodium falciparum, which causes the most lethal form of human malaria, replicates in the host liver during the initial stage of infection. However, in vivo malaria liver-stage (LS) studies in humans are virtually impossible, and in vitro models of LS development do not reconstitute relevant parasite growth conditions. To overcome these obstacles, we have adopted a robust mouse model for the study of P. falciparum LS in vivo: the immunocompromised and fumarylacetoacetate hydrolase–deficient mouse (Fah–/–, Rag2–/–, Il2rg–/–, termed the FRG mouse) engrafted with human hepatocytes (FRG huHep). FRG huHep mice supported vigorous, quantifiable P. falciparum LS development that culminated in complete maturation of LS at approximately 7 days after infection, providing a relevant model for LS development in humans. The infections allowed observations of previously unknown expression of proteins in LS, including P. falciparum translocon of exported proteins 150 (PTEX150) and exported protein-2 (EXP-2), components of a known parasite protein export machinery. LS schizonts exhibited exoerythrocytic merozoite formation and merosome release. Furthermore, FRG mice backcrossed to the NOD background and repopulated with huHeps and human red blood cells supported reproducible transition from LS infection to blood-stage infection. Thus, these mice constitute reliable models to study human LS directly in vivo and demonstrate utility for studies of LS–to–blood-stage transition of a human malaria parasite. PMID:22996664
Darwinism and ethology. The role of natural selection in animals and humans.
Gervet, J; Soleilhavoup, M
1997-11-01
The role of behaviour in biological evolution is examined within the context of Darwinism. All Darwinian models are based on the distinction between two mechanisms: one that permits faithful transmission of a feature from one generation to another, and another that differentially regulates the degree of this transmission. Behaviour plays a minimal role as an agent of transmission in the greater part of the animal kingdom; by contrast, the forms it may assume strongly influence the mechanisms of selection regulating the different rates of transmission. We consider the decisive feature of the human species to be the existence of a phenotypical system of cultural coding characterized by the precision and reliability that are the distinctive features of genetic coding in animals. We examine the consequences for the application of the Darwinian model to human history.
Old and new news about single-photon sensitivity in human vision
NASA Astrophysics Data System (ADS)
Nelson, Philip
It is sometimes said that ``our eyes can see single photons,'' when in fact the faintest flash of light that can reliably be reported by human subjects is closer to 100 photons. Nevertheless, there is a sense in which the familiar claim is true. Experiments conducted long after the seminal work of Hecht, Shlaer, and Pirenne in two distinct realms, those of human psychophysics and single-cell physiology, now admit a more precise conclusion to be drawn about our visual apparatus. Finding a single framework that accommodates both kinds of result is a nontrivial challenge, and one that sets severe quantitative constraints on any model of dim-light visual processing. I will present one such model and compare it to a recent experiment. Partially supported by the NSF under Grants EF-0928048 and DMR-0832802.
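The Hecht-Shlaer-Pirenne tradition referenced above models "seeing" as a threshold on a Poisson-distributed photoisomerization count. A minimal sketch of that ingredient (the quantum efficiency and threshold below are illustrative placeholders, not the talk's fitted parameters):

```python
from scipy.stats import poisson

def p_see(mean_photons_at_cornea, quantum_eff=0.06, threshold=6):
    """Probability of reporting a flash: at least `threshold` isomerizations,
    with the absorbed count Poisson-distributed."""
    return poisson.sf(threshold - 1, quantum_eff * mean_photons_at_cornea)

for n in (10, 50, 100, 200):
    print(n, round(p_see(n), 3))  # frequency-of-seeing curve rises with intensity
```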
Rong, Hao; Tian, Jin
2015-05-01
The study contributes to human reliability analysis (HRA) by proposing a method that focuses more on human error causality within a sociotechnical system, illustrating its rationality and feasibility with a case study of a Minuteman (MM) III missile accident. Due to the complexity and dynamics within a sociotechnical system, previous analyses of accidents involving human and organizational factors clearly demonstrated that methods using a sequential accident model are inadequate for analyzing human error within a sociotechnical system. The system-theoretic accident model and processes (STAMP) was used to develop a universal framework of human error causal analysis. To elaborate the causal relationships and demonstrate the dynamics of human error, system dynamics (SD) modeling was conducted based on the framework. A total of 41 contributing factors, categorized into four types of human error, were identified through the STAMP-based analysis. All factors relate to a broad view of sociotechnical systems and are more comprehensive than the causation presented in the officially issued accident investigation report. Recommendations for both technical and managerial improvements to lower the risk of the accident are proposed. An interdisciplinary approach allows system safety and human factors perspectives to complement each other. The integrated method based on STAMP and the SD model contributes effectively to HRA. The proposed method will be beneficial to HRA, risk assessment, and control of the MM III operating process, as well as other sociotechnical systems. © 2014, Human Factors and Ergonomics Society.
NASA Technical Reports Server (NTRS)
DeMott, Diana
2013-01-01
Compared to equipment designed to perform the same function over and over, humans are just not as reliable. Computers and machines perform the same action in the same way repeatedly, getting the same result, unless equipment fails or a human interferes. Humans who are supposed to perform the same actions repeatedly often perform them incorrectly due to a variety of issues including: stress, fatigue, illness, lack of training, distraction, acting at the wrong time, not acting when they should, not following procedures, misinterpreting information or inattention to detail. Why not use robots and automatic controls exclusively if human error is so common? In an emergency or off-normal situation that the computer, robotic element, or automatic control system is not designed to respond to, the result is failure unless a human can intervene. The human in the loop may be more likely to cause an error, but is also more likely to catch the error and correct it. When it comes to unexpected situations, or performing multiple tasks outside the defined mission parameters, humans are the only viable alternative. Human Reliability Assessment (HRA) identifies ways to improve human performance and reliability and can lead to improvements in systems designed to interact with humans. Understanding the context of the situations that can lead to human errors, which include taking the wrong action, taking no action or making bad decisions, provides additional information to mitigate risks. With improved human reliability comes reduced risk for the overall operation or project.
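Many HRA methods quantify the situational context described above by scaling a nominal human error probability (HEP) with performance shaping factor multipliers, in the style of SPAR-H (our illustrative choice; this paper does not prescribe a particular method or these numbers):

```python
NOMINAL_HEP = 1e-3  # hypothetical nominal error probability for an action task

# hypothetical performance shaping factor multipliers for one scenario
psf_multipliers = {
    "available_time": 1.0,  # nominal
    "stress": 2.0,          # high stress
    "complexity": 2.0,      # moderately complex task
    "training": 0.5,        # better-than-nominal training
}

hep = NOMINAL_HEP
for multiplier in psf_multipliers.values():
    hep *= multiplier
hep = min(hep, 1.0)  # a probability cannot exceed 1
print(f"context-adjusted HEP = {hep:.1e}")
```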
Tailoring a Human Reliability Analysis to Your Industry Needs
NASA Technical Reports Server (NTRS)
DeMott, D. L.
2016-01-01
Accidents caused by human error that result in catastrophic consequences occur across industries: airline mishaps, medical malpractice, medication mistakes, aerospace failures, major oil spills, transportation mishaps, power production failures and manufacturing facility incidents. Human Reliability Assessment (HRA) is used to analyze the inherent risk of human behavior or actions introducing errors into the operation of a system or process. These assessments can be used to identify where errors are most likely to arise and the potential risks involved if they do occur. Using the basic concepts of HRA, an evolving group of methodologies are used to meet various industry needs. Determining which methodology or combination of techniques will provide a quality human reliability assessment is a key element in developing effective strategies for understanding and dealing with risks caused by human errors. There are a number of concerns and difficulties in "tailoring" a Human Reliability Assessment (HRA) for different industries. Although a variety of HRA methodologies are available to analyze human error events, determining the most appropriate tools to provide the most useful results can depend on industry-specific cultures and requirements. Methodology selection may be based on a variety of factors that include: 1) how people act and react in different industries, 2) expectations based on industry standards, 3) factors that influence how human errors could occur, such as tasks, tools, environment, workplace, support, training and procedure, 4) type and availability of data, 5) how the industry views risk & reliability, and 6) types of emergencies, contingencies and routine tasks. Other considerations for methodology selection should be based on what information is needed from the assessment. If the principal concern is determination of the primary risk factors contributing to the potential human error, a more detailed analysis method may be employed, versus a requirement to provide a numerical value as part of a probabilistic risk assessment. Industries involving humans operating large equipment or transport systems (e.g., railroads or airlines) have more need to address the man-machine interface than medical workers administering medications. Human error occurs in every industry; in most cases the consequences are relatively benign and occasionally beneficial. In cases where the results can have disastrous consequences, the use of human reliability techniques to identify and classify the risk of human errors allows a company more opportunities to mitigate or eliminate these types of risks and prevent costly tragedies.
Second-Order Conditioning of Human Causal Learning
ERIC Educational Resources Information Center
Jara, Elvia; Vila, Javier; Maldonado, Antonio
2006-01-01
This article provides the first demonstration of a reliable second-order conditioning (SOC) effect in human causal learning tasks. It demonstrates the human ability to infer relationships between a cause and an effect that were never paired together during training. Experiments 1a and 1b showed a clear and reliable SOC effect, while Experiments 2a…
Spertzel, R O
1989-12-01
The search for a model of HIV infection continues. While much of the initial work focussed on animal models of AIDS, more recent efforts have sought animal models of HIV infection in which one or more signs of AIDS may be reproduced. Most initial small animal modelling efforts were negative and many such efforts remain unpublished. In 1988, the Public Health Service (PHS) AIDS Animal Model Committee conducted a survey among PHS agencies to identify published and unpublished data on animal models of HIV. To date, the chimpanzee is the only animal to be reliably infected with HIV, albeit without development of signs and symptoms normally associated with human AIDS. One recent study has shown the gibbon to be similarly susceptible to infection with HIV. Mice carrying a chimera of elements of the human immune system have been shown to support the growth of HIV, and F1 progeny of transgenic mice containing intact copies of HIV proviral DNA have developed a disease that resembles some aspects of human AIDS. Rabbits, baboons and rhesus monkeys have also been shown to be infected under certain conditions and/or with selected strains of HIV, but again without the development of AIDS symptomatology. This report briefly summarizes published and available unpublished data on these efforts to develop an animal model of HIV infection.
Forecasting infectious disease emergence subject to seasonal forcing.
Miller, Paige B; O'Dea, Eamon B; Rohani, Pejman; Drake, John M
2017-09-06
Despite high vaccination coverage, many childhood infections pose a growing threat to human populations. Accurate disease forecasting would be of tremendous value to public health. Forecasting disease emergence using early warning signals (EWS) is possible in non-seasonal models of infectious diseases. Here, we assessed whether EWS also anticipate disease emergence in seasonal models. We simulated the dynamics of an immunizing infectious pathogen approaching the tipping point to disease endemicity. To explore the effect of seasonality on the reliability of early warning statistics, we varied the amplitude of fluctuations around the average transmission. We proposed and analyzed two new early warning signals based on the wavelet spectrum. We measured the reliability of the early warning signals depending on the strength of their trend preceding the tipping point and then calculated the Area Under the Curve (AUC) statistic. Early warning signals were reliable when disease transmission was subject to seasonal forcing. Wavelet-based early warning signals were as reliable as other conventional early warning signals. We found that removing seasonal trends, prior to analysis, did not improve early warning statistics uniformly. Early warning signals anticipate the onset of critical transitions for infectious diseases which are subject to seasonal forcing. Wavelet-based early warning statistics can also be used to forecast infectious disease.
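The trend-strength-then-AUC procedure described above can be reproduced with conventional EWS machinery: compute rolling-window statistics on simulated case series, then score how well they separate emerging from non-emerging runs. A minimal sketch (the window length and the Mann-Whitney formulation of the AUC are our choices, not necessarily the paper's):

```python
import numpy as np

def rolling_ews(series, window=50):
    """Rolling variance and lag-1 autocorrelation, two conventional EWS."""
    var, ac1 = [], []
    for t in range(window, len(series)):
        w = series[t - window:t]
        var.append(np.var(w))
        ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return np.array(var), np.array(ac1)

def auc(stat_emerging, stat_null):
    """P(random emerging-run statistic > random null-run statistic)."""
    greater = stat_emerging[:, None] > stat_null[None, :]
    ties = stat_emerging[:, None] == stat_null[None, :]
    return (greater.sum() + 0.5 * ties.sum()) / greater.size
```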
Rainfall Induced Landslides in Puerto Rico (Invited)
NASA Astrophysics Data System (ADS)
Lepore, C.; Kamal, S.; Arnone, E.; Noto, V.; Shanahan, P.; Bras, R. L.
2009-12-01
Landslides are a major geologic hazard in the United States, typically triggered by rainfall, earthquakes, volcanoes and human activity. Rainfall-induced landslides are the most common type on the island of Puerto Rico, with one or two large events per year. We performed an island-wide determination of static landslide susceptibility and hazard assessment as well as dynamic modeling of rainfall-induced shallow landslides in a particular hydrologic basin. Based on statistical analysis of past landslides, we determined that reliable prediction of the susceptibility to landslides is strongly dependent on the resolution of the digital elevation model (DEM) employed and the reliability of the rainfall data. A distributed hydrology model capable of simulating landslides, tRIBS-VEGGIE, has been implemented for the first time in a humid tropical environment like Puerto Rico. The Mameyes basin, located in the Luquillo Experimental Forest in Puerto Rico, was selected for modeling based on the availability of soil, vegetation, topographical, meteorological and historic landslide data. Application of the model yields a temporal and spatial distribution of predicted rainfall-induced landslides, which is used to predict the dynamic susceptibility of the basin to landslides.
Predicting enhancer activity and variant impact using gkm-SVM.
Beer, Michael A
2017-09-01
We participated in the Critical Assessment of Genome Interpretation eQTL challenge to further test computational models of regulatory variant impact and their association with human disease. Our prediction model is based on a discriminative gapped-kmer SVM (gkm-SVM) trained on genome-wide chromatin accessibility data in the cell type of interest. The comparisons with massively parallel reporter assays (MPRA) in lymphoblasts show that gkm-SVM is among the most accurate prediction models even though all other models used the MPRA data for model training, and gkm-SVM did not. In addition, we compare gkm-SVM with other MPRA datasets and show that gkm-SVM is a reliable predictor of expression and that deltaSVM is a reliable predictor of variant impact in K562 cells and mouse retina. We further show that DHS (DNase-I hypersensitive sites) and ATAC-seq (assay for transposase-accessible chromatin using sequencing) data are equally predictive substrates for training gkm-SVM, and that DHS regions flanked by H3K27Ac and H3K4me1 marks are more predictive than DHS regions alone. © 2017 Wiley Periodicals, Inc.
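gkm-SVM scores sequences with a gapped k-mer string kernel; in practice one would use the published implementations rather than reimplement it. Purely to illustrate the underlying idea, classifying sequences by k-mer composition with an SVM, here is a simplified sketch using plain ungapped k-mer counts and scikit-learn (the toy sequences and labels are hypothetical):

```python
from itertools import product
import numpy as np
from sklearn.svm import SVC

def kmer_counts(seq, k=4):
    """Plain k-mer count vector; the real gapped-kmer kernel also
    handles gapped matches, which this sketch omits."""
    index = {"".join(p): i for i, p in enumerate(product("ACGT", repeat=k))}
    v = np.zeros(len(index))
    for i in range(len(seq) - k + 1):
        v[index[seq[i:i + k]]] += 1
    return v

# toy training set: accessible (1) vs. inaccessible (0) sequences
pos = ["ACGTACGTAC", "ACGTACGGAC"]
neg = ["TTTTTTTTTT", "GGGGGGGGGG"]
X = np.array([kmer_counts(s) for s in pos + neg])
y = np.array([1, 1, 0, 0])
model = SVC(kernel="linear").fit(X, y)
```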
Validation of A Global Hydrological Model
NASA Astrophysics Data System (ADS)
Doell, P.; Lehner, B.; Kaspar, F.; Vassolo, S.
Freshwater availability has been recognized as a global issue, and its consistent quantification not only in individual river basins but also at the global scale is required to support the sustainable use of water. The Global Hydrology Model WGHM, which is a submodel of the global water use and availability model WaterGAP 2, computes surface runoff, groundwater recharge and river discharge at a spatial resolution of 0.5°. WGHM is based on the best global data sets currently available, including a newly developed drainage direction map and a data set of wetlands, lakes and reservoirs. It calculates both natural and actual discharge by simulating the reduction of river discharge by human water consumption (as computed by the water use submodel of WaterGAP 2). WGHM is calibrated against observed discharge at 724 gauging stations (representing about 50% of the global land area) by adjusting a parameter of the soil water balance. It not only computes the long-term average water resources but also water availability indicators that take into account the interannual and seasonal variability of runoff and discharge. The reliability of the model results is assessed by comparing observed and simulated discharges at the calibration stations and at selected other stations. We conclude that reliable results can be obtained for basins of more than 20,000 km2. In particular, the 90% reliable monthly discharge is simulated well. However, there is the tendency that semi-arid and arid basins are modeled less satisfactorily than humid ones, which is partially due to neglecting river channel losses and evaporation of runoff from small ephemeral ponds in the model. Also, the hydrology of highly developed basins with large artificial storages, basin transfers and irrigation schemes cannot be simulated well. The seasonality of discharge in snow-dominated basins is overestimated by WGHM, and if the snow-dominated basin is uncalibrated, discharge is likely to be underestimated due to precipitation measurement errors. Even though the explicit modeling of wetlands and lakes leads to a much improved modeling of both the vertical water balance and the lateral transport of water, not enough information is included in WGHM to accurately capture the hydrology of these water bodies. Certainly, the reliability of model results is highest at the locations at which WGHM was calibrated. The validation indicates that reliability for cells inside calibrated basins is satisfactory if the basin is relatively homogeneous. Analyses of the few available stations outside of calibrated basins indicate a reasonably high model reliability, particularly in humid regions.
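WGHM's calibration step, tuning one soil water balance parameter per basin so simulated discharge matches gauge observations, is conceptually a one-dimensional optimization. A minimal sketch with a toy bucket model standing in for WGHM (the model structure, parameter and data are all hypothetical):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def simulate_discharge(gamma, precip, capacity=100.0):
    """Toy bucket model: runoff = precip * (storage/capacity)**gamma, where
    gamma plays the role of the calibrated soil water balance parameter."""
    storage, q = 50.0, []
    for p in precip:
        runoff = p * (storage / capacity) ** gamma
        storage = min(capacity, storage + p - runoff)
        q.append(runoff)
    return np.array(q)

precip = np.random.default_rng(0).gamma(2.0, 2.0, size=365)
observed = simulate_discharge(1.5, precip)  # pretend these are gauge data

result = minimize_scalar(
    lambda g: np.mean((simulate_discharge(g, precip) - observed) ** 2),
    bounds=(0.1, 5.0), method="bounded")
print("calibrated parameter:", round(result.x, 2))  # recovers ~1.5
```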
The Role of Human Error in Design, Construction, and Reliability of Marine Structures.
1994-10-01
Only fragments of this report survive in the source. Abstract fragment: the 1979 Three Mile Island nuclear plant accident was largely a result of a failure to properly sort out and recognize critically important information; the report describes determining the goals and objectives of the program and evaluating and interpreting the results in terms of structural design and construction. (Recoverable table-of-contents entries: Checking Models in Structural Design; Nuclear Power Plants.)
The cognitive cost of sleep lost
McCoy, John G.; Strecker, Robert E.
2013-01-01
A substantial body of literature supports the intuitive notion that a good night’s sleep can facilitate human cognitive performance the next day. Deficits in attention, learning & memory, emotional reactivity, and higher-order cognitive processes, such as executive function and decision making, have all been documented following sleep disruption in humans. Thus, whilst numerous clinical and experimental studies link human sleep disturbance to cognitive deficits, attempts to develop valid and reliable rodent models of these phenomena are fewer, and relatively more recent. This review focuses primarily on the cognitive impairments produced by sleep disruption in rodent models of several human patterns of sleep loss/sleep disturbance. Though not an exclusive list, this review will focus on four specific types of sleep disturbance: total sleep deprivation, experimental sleep fragmentation, selective REM sleep deprivation, and chronic sleep restriction. The use of rodent models can provide greater opportunities to understand the neurobiological changes underlying sleep loss induced cognitive impairments. Thus, this review concludes with a description of recent neurobiological findings concerning the neuroplastic changes and putative brain mechanisms that may underlie the cognitive deficits produced by sleep disturbances. PMID:21875679
Yi, Liang; Zhou, Chun; Wang, Bing; Chen, Tunan; Xu, Minhui; Xu, Lunshan; Feng, Hua
2013-08-01
Recent studies have demonstrated that inflammatory cells and inflammatory mediators are indispensable components of the tumor-initiating cell (TIC) niche and regulate the malignant behavior of TICs. However, conventional animal models for glioma-initiating cell (GIC) studies are based on the implantation of GICs from human glioblastoma (GBM) into immunodeficient mice, without regulation by the immune system. Whether such animal models can mimic the cellular microenvironment of malignancy and evaluate the biological features of GICs accurately is unclear. Here, we characterized the biological features of neurosphere-like tumor cells derived from the murine GBM cell line GL261 (GL261-NS) and from primary human GBM (PGBM-NS) in vitro, injected GL261-NS into syngeneic C57/BL6 mouse brain and injected PGBM-NS into NOD/SCID mouse brain. The tumorigenic characteristics of the two different orthotopic transplantation models were analyzed, and the histological discrepancy between grafts and human primary GBM was compared. We found that GICs were enriched in GL261-NS and PGBM-NS, which exhibited increased GIC potential and enhanced chemoresistance in vitro. GL261-NS was significantly more aggressive than GL261 adhesive cells (GL261-AC) in vivo, and the enhanced aggression was more pronounced in syngeneic mice than in immunodeficient mice. The discrepancy in tumorigenicity between GL261-NS and GL261-AC in C57/BL6 mice was also larger than that between PGBM-NS and PGBM-AC in immunodeficient mice. Syngrafts derived from GL261-NS in C57/BL6 mice matched the histology of human GBM better than xenografts derived from PGBM-NS in NOD/SCID mice, which lack inflammatory cells and inflammatory mediators. We conclude that the inflammatory niche is involved in the tumorigenicity of GICs, and that implantation of GL261-NS into C57/BL6 mice is a more reliable syngeneic graft model for in vivo studies of GICs than the immunodeficiency model.
Reis, Gabriela Barreto Dos; Andrade-Vieira, Larissa Fonseca; Moraes, Isabella de Campos; César, Pedro Henrique Souza; Marcussi, Silvana; Davide, Lisete Chamma
2017-08-01
Comet assay is an efficient test to detect genotoxic compounds based on observation of DNA damage. The aim of this work was to compare the results obtained from the comet assay in two different types of cells: cells extracted from root tips of Lactuca sativa L. and human blood cells. For this, Spent Pot Liner (SPL) and its components (aluminum and fluoride) were applied as toxic agents. SPL is a solid waste generated by the aluminum mining and processing industry, with known toxicity. Three concentrations of each tested solution were applied, and the damage observed was compared to negative and positive controls. An increase in the frequency of DNA damage was observed for human leukocytes and plant cells in all treatments. In human leukocytes, SPL induced the highest percentage of damage, with an average of 87.68%. For root tip cells of L. sativa, the highest percentage of damage was detected for aluminum (93.89%). Considering the arbitrary units (AU), the average number of nuclei with high levels of DNA fragmentation was significant for both cell types evaluated. The tested cells demonstrated equal effectiveness in detecting the genotoxicity induced by SPL and its chemical components, aluminum and fluoride. Further, using a single method, the comet assay, we showed that cells from root tips of Lactuca sativa represent a reliable model to detect DNA damage induced by genotoxic pollutants, in agreement with the responses observed in human leukocytes. Thus, plant cells may be suggested as an important system to assess the toxicological risk of environmental agents. Copyright © 2017 Elsevier Inc. All rights reserved.
Abubshait, Abdulaziz; Wiese, Eva
2017-01-01
Gaze following occurs automatically in social interactions, but the degree to which gaze is followed depends on whether an agent is perceived to have a mind, making its behavior socially more relevant for the interaction. Mind perception also modulates the attitudes we have toward others, and determines the degree of empathy, prosociality, and morality invested in social interactions. Seeing mind in others is not exclusive to human agents: mind can also be ascribed to non-human agents like robots, as long as their appearance and/or behavior allows them to be perceived as intentional beings. Previous studies have shown that human appearance and reliable behavior induce mind perception in robot agents and positively affect attitudes and performance in human-robot interaction. What has not been investigated so far is whether different triggers of mind perception have an independent or interactive effect on attitudes and performance in human-robot interaction. We examine this question by manipulating agent appearance (human vs. robot) and behavior (reliable vs. random) within the same paradigm, and examine how congruent (human/reliable vs. robot/random) versus incongruent (human/random vs. robot/reliable) combinations of these triggers affect performance (i.e., gaze following) and attitudes (i.e., agent ratings) in human-robot interaction. The results show that both appearance and behavior affect human-robot interaction, but the two triggers seem to operate in isolation, with appearance more strongly impacting attitudes and behavior more strongly affecting performance. The implications of these findings for human-robot interaction are discussed.
Khoshnevis, Mehrdad; Carozzo, Claude; Bonnefont-Rebeix, Catherine; Belluco, Sara; Leveneur, Olivia; Chuzel, Thomas; Pillet-Michelland, Elodie; Dreyfus, Matthieu; Roger, Thierry; Berger, François; Ponce, Frédérique
2017-04-15
Glioblastoma is the most common and deadliest primary brain tumor in humans. Despite many efforts toward the improvement of therapeutic methods, prognosis is poor and the disease remains incurable, with a median survival of 12-14.5 months after optimal treatment. To develop novel treatment modalities for this fatal disease, new devices must be tested on a suitable animal model before clinical trials are performed in humans. A new model of induced glioblastoma in Yucatan minipigs was developed. Nine immunosuppressed minipigs were implanted with the U87 human glioblastoma cell line in both the left and right hemispheres. Computed tomography (CT) acquisitions were performed once a week to monitor tumor growth. Among the 9 implanted animals, 8 minipigs showed significant macroscopic tumors on CT acquisitions. Histological examination of the brain after euthanasia confirmed the CT imaging findings, with the presence of an undifferentiated glioma. The Yucatan minipig, given its brain size and anatomy (gyrencephalic structure), which are comparable to those of humans, provides a reliable brain tumor model for preclinical studies of different therapeutic methods under realistic conditions. Moreover, the short development time, the lower cost of cyclosporine and animal care, and the compatibility with the size of commercialized stereotactic frames make it an affordable and practical animal model, especially in comparison with large-breed pigs. This reproducible glioma model could simulate human anatomical conditions in preclinical studies and facilitate the improvement of novel therapeutic devices designed at the human scale from the outset. Copyright © 2017 Elsevier B.V. All rights reserved.
Cooperation and dialogical modeling for designing a safe Human space exploration mission to Mars
NASA Astrophysics Data System (ADS)
Grès, Stéphane; Tognini, Michel; Le Cardinal, Gilles; Zalila, Zyed; Gueydan, Guillaume
2014-11-01
This paper proposes an approach for a complex and innovative project requiring international contributions from different communities of knowledge and expertise. Designing a safe and reliable architecture for a manned mission to Mars or the asteroids necessitates strong cooperation during the early stages of design to prevent and reduce risks for the astronauts at each step of the mission. The challenge during design is to deal with the contradictions, antagonisms and paradoxes of the involved partners in the definition and modeling of a shared project of reference. As our research on the cognitive and social aspects of technological risks in major accidents shows, in such a project the complexity of the global organization (during design and use) and the integration of a wide and varied range of sciences and innovative technologies are likely to increase systemic risks: human and cultural mistakes, potential faults, failures and accidents. We identify antiquated centralized models of organization and the operational limits of interdisciplinarity in the sciences as the main dangers. Beyond this, we need to take careful account of human cooperation and the quality of relations between heterogeneous partners. Designing an open, self-learning and reliable exploration system able to self-adapt in dangerous and unforeseen situations implies a collective networked intelligence led by a safe process that organizes interaction between the actors and the aims of the project. Our work, supported by the CNES (French Space Agency), proposes an innovative approach to the coordination of a complex project.
Cheng, Yuanyuan; Nathanail, Paul C
2009-12-20
Generic Assessment Criteria (GAC) are derived using widely applicable assumptions about the characteristics and behaviour of contaminant sources, pathways and receptors. GAC provide nationally consistent guidance, thereby saving money and time. Currently, there are no human-health-based GAC for contaminated sites in China. Protection of human health is therefore difficult to ensure and demonstrate, and the lack of GAC makes it difficult to tell whether there is a potentially significant risk to human health unless site-specific criteria are derived. This paper derives Chinese GAC for five inorganic and eight organic substances, for three regions in China and three land uses: urban residential without plant uptake, Chinese cultivated land, and commercial/industrial, using the SNIFFER model. The SNIFFER model was extended with a dermal absorption algorithm, and the model default input values were changed to reflect Chinese exposure scenarios. It is envisaged that the modified SNIFFER model could be used to derive GAC for more contaminants, more regions, and more land uses. Further research, including regional/national surveys of diet and working patterns, is needed to enhance the reliability and acceptability of the GAC.
Robotics-based synthesis of human motion.
Khatib, O; Demircan, E; De Sapio, V; Sentis, L; Besier, T; Delp, S
2009-01-01
The synthesis of human motion is a complex procedure that involves accurate reconstruction of movement sequences, modeling of musculoskeletal kinematics, dynamics and actuation, and characterization of reliable performance criteria. Many of these processes have much in common with problems found in robotics research. Task-based methods used in robotics may be leveraged to provide novel musculoskeletal modeling methods and physiologically accurate performance predictions. In this paper, we present (i) a new method for the real-time reconstruction of human motion trajectories using direct marker tracking, (ii) a task-driven muscular effort minimization criterion and (iii) new human performance metrics for the dynamic characterization of athletic skills. Dynamic motion reconstruction is achieved through the control of a simulated human model to follow the captured marker trajectories in real time. The operational space control and real-time simulation provide the human dynamics at any configuration of the performance. A new criterion of muscular effort minimization is introduced to analyze human static postures, and extensive motion capture experiments were conducted to validate it. Finally, new human performance metrics are introduced to study an athletic skill in detail; these metrics include the effort expenditure and the feasible set of operational space accelerations during the performance of the skill. The dynamic characterization takes into account skeletal kinematics as well as muscle routing kinematics and force-generating capacities. The developments draw upon an advanced musculoskeletal modeling platform and a task-oriented framework for the effective integration of biomechanics and robotics methods.
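The effort-minimization idea lends itself to a compact illustration. The Python sketch below, with an invented moment-arm matrix and torque vector, computes the minimum-norm muscle-force distribution for a required joint torque; the paper's actual criterion is richer (non-negative forces, scaling by physiological capacities), so this shows only the core least-squares step.

```python
import numpy as np

# Hypothetical moment-arm matrix R (2 joints x 4 muscles), in metres,
# and required joint torques tau, in newton-metres. Values are invented.
R = np.array([[0.04, -0.03, 0.05, 0.00],
              [0.00,  0.02, -0.01, 0.03]])
tau = np.array([12.0, -4.0])

# Least-effort distribution: minimize ||f||^2 subject to R f = tau.
# Closed-form minimum-norm solution: f = R^T (R R^T)^{-1} tau.
f = R.T @ np.linalg.solve(R @ R.T, tau)

print("muscle forces (N):", np.round(f, 1))
print("torque check     :", np.round(R @ f, 6))  # reproduces tau
```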
Identification of the contribution of the ankle and hip joints to multi-segmental balance control
2013-01-01
Background Human stance involves multiple segments, including the legs and trunk, and requires coordinated actions of both. A novel method was developed that reliably estimates the contribution of the left and right leg (i.e., the ankle and hip joints) to the balance control of individual subjects. Methods The method was evaluated using simulations of a double-inverted pendulum model, and its applicability was demonstrated in an experiment with seven healthy participants and one participant with Parkinson's disease. Model simulations indicated that two perturbations are required to reliably estimate the dynamics of a double-inverted pendulum balance control system. In the experiment, two multisine perturbation signals were applied simultaneously. The dynamic behaviour of the participants' balance control system was estimated by Frequency Response Functions (FRFs), which relate ankle and hip joint angles to joint torques, using a multivariate closed-loop system identification technique. Results In the model simulations, the FRFs were reliably estimated, also in the presence of realistic levels of noise. In the experiment, the participants responded consistently to the perturbations, as indicated by low noise-to-signal ratios of the ankle angle (0.24), hip angle (0.28), ankle torque (0.07), and hip torque (0.33). The developed method could detect that the Parkinson patient controlled his balance asymmetrically, that is, the right ankle and hip joints produced more corrective torque. Conclusion The method allows for a reliable estimate of the multisegmental feedback mechanism that stabilizes stance, for individual participants and for separate legs. PMID:23433148
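The FRF estimation step can be illustrated with a minimal sketch. The code below generates a synthetic multisine perturbation and a noisy response, then estimates an FRF from cross- and auto-spectral densities. The study itself uses a multivariate closed-loop identification technique relating joint angles to torques, so this univariate estimator and all signal parameters are stand-ins, not the authors' procedure.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 100.0                      # sampling rate (Hz)
t = np.arange(0, 120, 1 / fs)   # one 2-minute trial

# Synthetic multisine perturbation d(t) and a noisy "joint angle"
# response from a simple first-order-like plant (placeholder dynamics).
freqs = [0.1, 0.25, 0.55, 1.15, 2.35]
d = sum(np.sin(2 * np.pi * f * t + k) for k, f in enumerate(freqs))
theta = signal.lfilter([0.05], [1, -0.95], d) + 0.05 * rng.standard_normal(t.size)

# H1 estimator: FRF = cross-spectrum / input auto-spectrum (Welch averaging).
f_ax, S_dd = signal.csd(d, d, fs=fs, nperseg=2048)
_, S_dy = signal.csd(d, theta, fs=fs, nperseg=2048)
H = S_dy / S_dd

# Read the FRF off at the excited frequencies only, as in multisine designs.
for f0 in freqs:
    i = np.argmin(np.abs(f_ax - f0))
    print(f"{f0:5.2f} Hz  gain={np.abs(H[i]):.3f}  phase={np.angle(H[i]):+.2f} rad")
```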
A Novel Rodent Model of Posterior Ischemic Optic Neuropathy
Wang, Yan; Brown, Dale P.; Duan, Yuanli; Kong, Wei; Watson, Brant D.; Goldberg, Jeffrey L.
2014-01-01
Objectives To develop a reliable, reproducible rat model of posterior ischemic optic neuropathy (PION) and study the cellular responses in the optic nerve and retina. Methods Posterior ischemic optic neuropathy was induced in adult rats by photochemically induced ischemia. Retinal and optic nerve vasculature was examined by fluorescein isothiocyanate–dextran extravasation. Tissue sectioning and immunohistochemistry were used to investigate the pathologic changes. Retinal ganglion cell survival at different times after PION induction, with or without neurotrophic application, was quantified by fluorogold retrograde labeling. Results Optic nerve injury was confirmed after PION induction, including local vascular leakage, optic nerve edema, and cavernous degeneration. Immunostaining data revealed microglial activation and focal loss of astrocytes, with adjacent astrocytic hypertrophy. Up to 23%, 50%, and 70% retinal ganglion cell loss was observed at 1 week, 2 weeks, and 3 weeks, respectively, after injury compared with a sham control group. Experimental treatment by brain-derived neurotrophic factor and ciliary neurotrophic factor remarkably prevented retinal ganglion cell loss in PION rats. At 3 weeks after injury, more than 40% of retinal ganglion cells were saved by the application of neurotrophic factors. Conclusions Rat PION created by photochemically induced ischemia is a reproducible and reliable animal model for mimicking the key features of human PION. Clinical Relevance The correspondence between the features of this rat PION model to those of human PION makes it an ideal model to study the pathophysiologic course of the disease, most of which remains to be elucidated. Furthermore, it provides an optimal model for testing therapeutic approaches for optic neuropathies. PMID:23544206
Application of chimeric mice with humanized liver for study of human-specific drug metabolism.
Bateman, Thomas J; Reddy, Vijay G B; Kakuni, Masakazu; Morikawa, Yoshio; Kumar, Sanjeev
2014-06-01
Human-specific or disproportionately abundant human metabolites of drug candidates that are not adequately formed and qualified in preclinical safety assessment species pose an important drug development challenge. Furthermore, the overall metabolic profile of drug candidates in humans is an important determinant of their drug-drug interaction susceptibility. These risks can be effectively assessed and/or mitigated if human metabolic profile of the drug candidate could reliably be determined in early development. However, currently available in vitro human models (e.g., liver microsomes, hepatocytes) are often inadequate in this regard. Furthermore, the conduct of definitive radiolabeled human ADME studies is an expensive and time-consuming endeavor that is more suited for later in development when the risk of failure has been reduced. We evaluated a recently developed chimeric mouse model with humanized liver on uPA/SCID background for its ability to predict human disposition of four model drugs (lamotrigine, diclofenac, MRK-A, and propafenone) that are known to exhibit human-specific metabolism. The results from these studies demonstrate that chimeric mice were able to reproduce the human-specific metabolite profile for lamotrigine, diclofenac, and MRK-A. In the case of propafenone, however, the human-specific metabolism was not detected as a predominant pathway, and the metabolite profiles in native and humanized mice were similar; this was attributed to the presence of residual highly active propafenone-metabolizing mouse enzymes in chimeric mice. Overall, the data indicate that the chimeric mice with humanized liver have the potential to be a useful tool for the prediction of human-specific metabolism of xenobiotics and warrant further investigation.
Simulator of human visual perception
NASA Astrophysics Data System (ADS)
Bezzubik, Vitalii V.; Belashenkov, Nickolai R.
2016-04-01
A Difference of Circs (DoC) model that simulates the responses of retinal ganglion cells to stimuli is presented and studied in relation to the representation of receptive fields in the human retina. According to this model, the response of a neuron reduces to the execution of simple arithmetic operations, and the results of these calculations correlate well with experimental data over a wide range of stimulus parameters. The simplicity of the model and the reliability with which it reproduces responses allow us to propose the concept of a device that simulates the signals generated by ganglion cells in reaction to presented stimuli. The signals produced according to the DoC model are considered the result of primary processing of information received from receptors, independently of their type, and may be sent to higher levels of the nervous system for subsequent processing. Such a device could serve as a prosthesis for a disabled organ.
Modelling of the Human Knee Joint Supported by Active Orthosis
NASA Astrophysics Data System (ADS)
Musalimov, V.; Monahov, Y.; Tamre, M.; Rõbak, D.; Sivitski, A.; Aryassov, G.; Penkov, I.
2018-02-01
The article discusses the motion of a healthy knee joint in the sagittal plane and the motion of an injured knee joint supported by an active orthosis. A kinematic scheme of a mechanism for simulating knee joint motion is developed, and the motions of healthy and injured knee joints are modelled in Matlab. The angles between the links that simulate the femur and tibia are controlled by a Simulink Model Predictive Control (MPC) block. The simulation results were compared with several samples of real human knee joint motion obtained from motion capture systems. On the basis of these analyses, and of an analysis of the forces created in the human lower limbs during motion, an active smart orthosis is developed. The orthosis design was optimized to achieve an energy-saving system with adequate anatomical fit, the necessary reliability, ease of use, and low cost. With the orthosis it is possible to unload the knee joint and to partially or fully compensate the muscle forces required for bending the lower limb.
Kell, Douglas B.; Goodacre, Royston
2014-01-01
Metabolism represents the ‘sharp end’ of systems biology, because changes in metabolite concentrations are necessarily amplified relative to changes in the transcriptome, proteome and enzyme activities, which can be modulated by drugs. To understand such behaviour, we therefore need (and increasingly have) reliable consensus (community) models of the human metabolic network that include the important transporters. Small molecule ‘drug’ transporters are in fact metabolite transporters, because drugs bear structural similarities to metabolites known from the network reconstructions and from measurements of the metabolome. Recon2 represents the present state-of-the-art human metabolic network reconstruction; it can predict inter alia: (i) the effects of inborn errors of metabolism; (ii) which metabolites are exometabolites, and (iii) how metabolism varies between tissues and cellular compartments. However, even these qualitative network models are not yet complete. As our understanding improves so do we recognise more clearly the need for a systems (poly)pharmacology. PMID:23892182
An, Guohua; Widness, John A; Mock, Donald M; Veng-Pedersen, Peter
2016-09-01
Direct measurement of red blood cell (RBC) survival in humans has improved from the original accurate but limited differential agglutination technique to the current reliable, safe, and accurate biotin method. Despite this, all of these methods are time consuming and require blood sampling over several months to determine the RBC lifespan. For situations in which RBC survival information must be obtained quickly, these methods are not suitable. Moreover, RBC survival has not been extensively investigated in age groups other than adults and infants. To address this need, we developed a novel, physiology-based mathematical model that quickly estimates RBC lifespan in healthy individuals at any age. The model is based on the assumption that the total number of RBC recirculations during the lifespan of each RBC (denoted by N_max) is relatively constant across age groups. The model was initially validated using data from our prior infant and adult biotin-labeled red blood cell studies and then extended to other age groups. The model generated the following estimated RBC lifespans in 2-year-old, 5-year-old, 8-year-old, and 10-year-old children: 62, 74, 82, and 86 days, respectively. We speculate that this model has useful clinical applications. For example, HbA1c testing is not reliable in identifying children with diabetes because HbA1c is directly affected by RBC lifespan. Because our model can estimate RBC lifespan in children at any age, corrections to HbA1c values based on the model-generated RBC lifespan could improve diabetes diagnosis as well as therapy in children.
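A minimal sketch of the model's core assumption follows: if N_max is age-invariant, lifespan scales with the mean circulation time (blood volume over cardiac output). All physiological values in the sketch are illustrative placeholders, not the authors' fitted parameters.

```python
# Sketch of the constant-N_max assumption: calibrate N_max on an adult,
# then scale lifespan by the circulation time of another age group.
# Every number below is a placeholder for illustration only.

def circulation_time_min(blood_volume_ml, cardiac_output_ml_min):
    return blood_volume_ml / cardiac_output_ml_min

# Calibrate N_max from a healthy adult (115-day lifespan assumed here).
adult_ct = circulation_time_min(5000.0, 5000.0)      # ~1 min per circuit
n_max = 115 * 24 * 60 / adult_ct                     # recirculations per lifespan

# Apply the same N_max to a hypothetical young child.
child_ct = circulation_time_min(1400.0, 3000.0)      # placeholder values
child_lifespan_days = n_max * child_ct / (24 * 60)
print(f"N_max ~ {n_max:.0f}; estimated child RBC lifespan ~ {child_lifespan_days:.0f} d")
```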
Systematic Reviews of Animal Models: Methodology versus Epistemology
Greek, Ray; Menache, Andre
2013-01-01
Systematic reviews are currently favored methods of evaluating research in order to reach conclusions regarding medical practice. The need for such reviews is necessitated by the fact that no research is perfect and experts are prone to bias. By combining many studies that fulfill specific criteria, one hopes that the strengths can be multiplied and thus reliable conclusions attained. Potential flaws in this process include the assumptions that underlie the research under examination. If the assumptions, or axioms, upon which the research studies are based, are untenable either scientifically or logically, then the results must be highly suspect regardless of the otherwise high quality of the studies or the systematic reviews. We outline recent criticisms of animal-based research, namely that animal models are failing to predict human responses. It is this failure that is purportedly being corrected via systematic reviews. We then examine the assumption that animal models can predict human outcomes to perturbations such as disease or drugs, even under the best of circumstances. We examine the use of animal models in light of empirical evidence comparing human outcomes to those from animal models, complexity theory, and evolutionary biology. We conclude that even if legitimate criticisms of animal models were addressed, through standardization of protocols and systematic reviews, the animal model would still fail as a predictive modality for human response to drugs and disease. Therefore, systematic reviews and meta-analyses of animal-based research are poor tools for attempting to reach conclusions regarding human interventions. PMID:23372426
Pantic, Boris; Borgia, Doriana; Giunco, Silvia; Malena, Adriana; Kiyono, Tohru; Salvatori, Sergio; De Rossi, Anita; Giardina, Emiliano; Sangiuolo, Federica; Pegoraro, Elena; Vergani, Lodovica; Botta, Annalisa
2016-03-01
Primary human skeletal muscle cells (hSkMCs) are invaluable tools for deciphering the basic molecular mechanisms of muscle-related biological processes and pathological alterations. Nevertheless, their use is quite restricted due to poor availability, short life span and variable purity of the cells during in vitro culture. Here, we evaluate a recently published method of hSkMCs immortalization, relying on ectopic expression of cyclin D1 (CCND1), cyclin-dependent kinase 4 (CDK4) and telomerase (TERT) in myoblasts from healthy donors (n=3) and myotonic dystrophy type 1 (DM1) patients (n=2). The efficacy to maintain the myogenic and non-transformed phenotype, as well as the main pathogenetic hallmarks of DM1, has been assessed. Combined expression of the three genes i) maintained the CD56(NCAM)-positive myoblast population and differentiation potential; ii) preserved the non-transformed phenotype and iii) maintained the CTG repeat length, amount of nuclear foci and aberrant alternative splicing in immortal muscle cells. Moreover, immortal hSkMCs displayed attractive additional features such as structural maturation of sarcomeres, persistence of Pax7-positive cells during differentiation and complete disappearance of nuclear foci following (CAG)7 antisense oligonucleotide (ASO) treatment. Overall, the CCND1, CDK4 and TERT immortalization yields versatile, reliable and extremely useful human muscle cell models to investigate the basic molecular features of human muscle cell biology, to elucidate the molecular pathogenetic mechanisms and to test new therapeutic approaches for DM1 in vitro. Copyright © 2016 Elsevier Inc. All rights reserved.
Shahhoseini, Zahra; Sarvi, Majid
2017-01-01
Understanding the collective behavior of moving organisms and how interactions between individuals govern their collective motion has triggered a growing number of studies. Similarities have been observed between the scale-free behavioral aspects of various systems (i.e. groups of fish, ants, and mammals). Investigation of such connections between the collective motion of non-human organisms and that of humans, however, has been relatively scarce. The problem demands particular attention in the context of emergency escape motion, for which innovative experimentation with panicking ants has recently been employed as a relatively inexpensive and non-invasive approach. However, little empirical evidence has been provided as to the relevance and reliability of this approach as a model of human behaviour. This study explores pioneering experiments of emergency escape to tackle this question and to connect two forms of experimental observations that investigate collective movement at the macroscopic level. A large number of experiments with humans and panicking ants were conducted, representing the escape behavior of these systems in crowded spaces. The experiments share similar architectural structures in which two streams of crowd flow merge with one another. Measures such as discharge flow rates and the probability distribution of passage headways were extracted and compared between the two systems. Our findings displayed an unexpected degree of similarity between the collective patterns that emerged from both observation types, particularly for aggregate measures. Experiments with ants and humans commonly indicated how significantly the efficiency of motion and the rate of discharge depend on the architectural design of the movement environment. Our findings contribute to the accumulation of evidence needed to identify the borders of applicability of experimentation with crowds of non-human entities as models of human collective motion, as well as the level of measurement (i.e. macroscopic or microscopic) and the type of contexts in which reliable inferences can be drawn. This particularly has implications in the context of experiments on evacuation behaviour, for which recruiting human subjects may face ethical restrictions. The findings, at minimum, offer promise as to the potential benefit of piloting such experiments with non-human crowds, thereby forming better-informed hypotheses. PMID:28854221
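The aggregate measures named above are straightforward to compute from passage timestamps. The sketch below uses synthetic gamma-distributed headways for both systems, derives discharge rates, and compares the headway distributions with a two-sample KS test; all data and distribution parameters are invented, not the study's measurements.

```python
import numpy as np
from scipy import stats

# Hypothetical passage timestamps (s) at the merging exit for the two
# systems; in the study these come from video-tracked experiments.
rng = np.random.default_rng(1)
t_humans = np.cumsum(rng.gamma(shape=2.0, scale=0.45, size=200))
t_ants = np.cumsum(rng.gamma(shape=2.0, scale=0.40, size=200))

def macroscopic_measures(t):
    headways = np.diff(t)              # time gaps between successive passages
    flow = 1.0 / headways.mean()       # mean discharge rate (passages/s)
    return headways, flow

h_h, q_h = macroscopic_measures(t_humans)
h_a, q_a = macroscopic_measures(t_ants)

# Compare the headway distributions, one of the aggregate measures used
# to relate ant and human escape dynamics.
ks = stats.ks_2samp(h_h, h_a)
print(f"flow humans={q_h:.2f}/s  ants={q_a:.2f}/s  KS p={ks.pvalue:.3f}")
```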
Salamanna, Francesca; Borsari, Veronica; Brogini, Silvia; Giavaresi, Gianluca; Parrilli, Annapaola; Cepollaro, Simona; Cadossi, Matteo; Martini, Lucia; Mazzotti, Antonio; Fini, Milena
2016-11-22
One of the main limitations when studying cancer-bone metastasis is the complex nature of the native bone environment and the lack of reliable, simple, inexpensive models that closely mimic the biological processes occurring in patients and allow the correct translation of results. To enhance the understanding of the mechanisms underlying human bone metastases and in order to find new therapies, we developed an in vitro three-dimensional (3D) cancer-bone metastasis model by culturing human breast or prostate cancer cells with human bone tissue isolated from female and male patients, respectively. Bone tissue discarded from total hip replacement surgery was cultured in a rolling apparatus system in a normoxic or hypoxic environment. Gene expression profiles, protein levels, and histological, immunohistochemical and four-dimensional (4D) micro-CT analyses showed a noticeable specificity of breast and prostate cancer cells for bone colonization and ingrowth, thus highlighting the species-specific and sex-specific osteotropism and the need to widen the current knowledge on cancer-bone metastasis spread in human bone tissues. The results of this study support the application of this model in preclinical studies on bone metastases and also follow the 3R principles, the guiding principles aimed at replacing/reducing/refining (3R) animal use and animal suffering for scientific purposes. PMID:27765913
Miyamoto, Maki; Iwasaki, Shinji; Chisaki, Ikumi; Nakagawa, Sayaka; Amano, Nobuyuki; Hirabayashi, Hideki
2017-12-01
1. The aim of the present study was to evaluate the usefulness of chimeric mice with humanised liver (PXB mice) for the prediction of clearance (CLt) and volume of distribution at steady state (Vdss), in comparison with monkeys, which have been reported as a reliable model for human pharmacokinetics (PK) prediction, and with rats, as a conventional PK model. 2. CLt and Vdss values in PXB mice, monkeys and rats were determined following intravenous administration of 30 compounds known to be eliminated in humans mainly via hepatic metabolism by various drug-metabolising enzymes. Using single-species allometric scaling, human CLt and Vdss values were predicted from the three animal models. 3. Predicted CLt values from PXB mice exhibited the highest predictability: among the 30 compounds, 25 were predicted within a three-fold range of actual values for PXB mice, 21 for monkeys and 14 for rats. For predicted human Vdss values, the number of compounds falling within a three-fold range was 23 for PXB mice, 24 for monkeys and 16 for rats among 29 compounds. Overall, PXB mice showed higher predictability for CLt and Vdss values than the other animal models. 4. These results demonstrate the utility of PXB mice in predicting human PK parameters.
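Single-species allometric scaling, as used in the study, has a standard closed form: clearance is commonly scaled with body weight to the 0.75 power and volume of distribution roughly in direct proportion to body weight. The sketch below applies those textbook exponents to made-up compound values and checks the usual three-fold acceptance criterion; nothing here reproduces the study's compounds or data.

```python
# Minimal sketch of single-species allometric scaling with the standard
# exponents (CL ~ BW^0.75, Vss ~ BW^1.0). Compound values are invented.

BW_HUMAN = 70.0  # kg

def predict_human_pk(cl_animal_ml_min, vss_animal_l, bw_animal_kg):
    ratio = BW_HUMAN / bw_animal_kg
    cl_human = cl_animal_ml_min * ratio ** 0.75
    vss_human = vss_animal_l * ratio ** 1.0
    return cl_human, vss_human

def fold_error(pred, obs):
    return max(pred / obs, obs / pred)

# Hypothetical compound measured in a ~20 g PXB mouse vs. observed human values.
cl_pred, vss_pred = predict_human_pk(cl_animal_ml_min=0.8,
                                     vss_animal_l=0.03,
                                     bw_animal_kg=0.020)
print(f"predicted CL={cl_pred:.0f} mL/min, Vss={vss_pred:.0f} L")
print("within 3-fold of observed CL=300 mL/min:",
      fold_error(cl_pred, 300.0) <= 3)
```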
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeffrey C. Joe; Diego Mandelli; Ronald L. Boring
2015-07-01
The United States Department of Energy is sponsoring the Light Water Reactor Sustainability program, which has the overall objective of supporting the near-term and extended operation of commercial nuclear power plants. One key research and development (R&D) area in this program is the Risk-Informed Safety Margin Characterization pathway, which combines probabilistic risk simulation with thermal-hydraulic simulation codes to define and manage safety margins. The R&D efforts to date, however, have not included robust simulations of human operators, or of how the reliability of human performance or lack thereof (i.e., human errors) can affect risk margins and plant performance. This paper describes current and planned research efforts to address the absence of robust human reliability simulations and thereby increase the fidelity of simulated accident scenarios.
Predictive model of muscle fatigue after spinal cord injury in humans.
Shields, Richard K; Chang, Ya-Ju; Dudley-Javoroski, Shauna; Lin, Cheng-Hsiang
2006-07-01
The fatigability of paralyzed muscle limits its ability to deliver physiological loads to paralyzed extremities during repetitive electrical stimulation. The purposes of this study were to determine the reliability of measuring paralyzed muscle fatigue and to develop a model to predict the temporal changes in muscle fatigue that occur after spinal cord injury (SCI). Thirty-four subjects underwent soleus fatigue testing with a modified Burke electrical stimulation fatigue protocol. The between-day reliability of this protocol was high (intraclass correlation, 0.96). We fit the fatigue index (FI) data to a quadratic-linear segmental polynomial model. FI declined rapidly (0.3854 per year) for the first 1.7 years, and more slowly (0.01 per year) thereafter. The rapid decline of FI immediately after SCI implies that a "window of opportunity" exists for the clinician if the goal is to prevent these changes. Understanding the timing of change in muscle endurance properties (and, therefore, load-generating capacity) after SCI may assist clinicians when developing therapeutic interventions to maintain musculoskeletal integrity.
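The quadratic-linear segmental polynomial can be sketched directly from the reported numbers, reading 0.3854/yr as the average early slope and 0.01/yr as the late slope, with the knot at 1.7 years. The intercept and the exact fitted coefficients are not given in the abstract, so the values below are illustrative, and the construction (value- and slope-continuous join) is our reading of the model class.

```python
import numpy as np

KNOT, EARLY_AVG_SLOPE, LATE_SLOPE = 1.7, -0.3854, -0.01  # yr, /yr, /yr

def fi_model(t, fi0=0.85):
    """Quadratic below the knot, linear above, joined with matching
    value and first derivative. fi0 (intercept) is a guess."""
    t = np.asarray(t, dtype=float)
    # Choose c, b so the mean slope over [0, KNOT] is EARLY_AVG_SLOPE
    # and the slope at the knot equals LATE_SLOPE.
    c = (LATE_SLOPE - EARLY_AVG_SLOPE) / KNOT
    b = EARLY_AVG_SLOPE - c * KNOT
    quad = fi0 + b * t + c * t**2
    fi_knot = fi0 + b * KNOT + c * KNOT**2
    lin = fi_knot + LATE_SLOPE * (t - KNOT)
    return np.where(t <= KNOT, quad, lin)

print(np.round(fi_model([0, 1, 1.7, 5, 10]), 3))
```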
A Systems Modeling Approach for Risk Management of Command File Errors
NASA Technical Reports Server (NTRS)
Meshkat, Leila
2012-01-01
The main cause of commanding errors is often (but not always) procedural: lack of maturity in the processes, incomplete requirements, or lack of compliance with these procedures. Other causes of commanding errors include lack of understanding of system states, inadequate communication, and hasty changes to standard procedures in response to an unexpected event. In general, it is important to look at the big picture before taking corrective actions. In the case of errors traced back to procedures, considering the reliability of the process as a metric during its design may help to reduce risk. This metric is obtained by using data from the nuclear industry regarding human reliability. A structured method for the collection of anomaly data will help the operator think systematically about the anomaly and facilitate risk management. Formal models can be used for risk-based design and risk management. A generic set of models can be customized for a broad range of missions.
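The "process reliability as a design metric" idea can be illustrated by composing per-step human error probabilities (HEPs) into an end-to-end success probability, assuming independent steps. The step names and HEP values below are hypothetical, merely in the spirit of nuclear-industry HRA lookup data, and real commanding processes include error-catching review loops that this simple product form does not capture.

```python
# Sketch: end-to-end success probability of a commanding process from
# hypothetical per-step human error probabilities (HEPs).
steps = {
    "build command file": 1e-3,
    "peer review":        3e-2,   # reviewer misses an existing error
    "simulate/validate":  1e-2,
    "radiate to craft":   1e-4,
}

p_success = 1.0
for step, hep in steps.items():
    p_success *= (1.0 - hep)      # independence assumed for simplicity

print(f"end-to-end success probability: {p_success:.5f}")
```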
Multisensory decisions provide support for probabilistic number representations.
Kanitscheider, Ingmar; Brown, Amanda; Pouget, Alexandre; Churchland, Anne K
2015-06-01
A large body of evidence suggests that an approximate number sense allows humans to estimate numerosity in sensory scenes. This ability is widely observed in humans, including those without formal mathematical training. Despite this, many outstanding questions remain about the nature of the numerosity representation in the brain. Specifically, it is not known whether approximate numbers are represented as scalar estimates of numerosity or, alternatively, as probability distributions over numerosity. In the present study, we used a multisensory decision task to distinguish these possibilities. We trained human subjects to decide whether a test stimulus had a larger or smaller numerosity compared with a fixed reference. Depending on the trial, the numerosity was presented as either a sequence of visual flashes or a sequence of auditory tones, or both. To test for a probabilistic representation, we varied the reliability of the stimulus by adding noise to the visual stimuli. In accordance with a probabilistic representation, we observed a significant improvement in multisensory compared with unisensory trials. Furthermore, a trial-by-trial analysis revealed that although individual subjects showed strategic differences in how they leveraged auditory and visual information, all subjects exploited the reliability of unisensory cues. An alternative, nonprobabilistic model, in which subjects combined cues without regard for reliability, was not able to account for these trial-by-trial choices. These findings provide evidence that the brain relies on a probabilistic representation for numerosity decisions. Copyright © 2015 the American Physiological Society.
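The probabilistic (reliability-weighted) combination rule being tested has a standard closed form: cues are weighted by their inverse variances, and the combined variance is never worse than that of the best single cue. A minimal sketch, with invented numerosity estimates and variances:

```python
# Reliability-weighted cue combination: each modality contributes a noisy
# numerosity estimate; the optimal combination weights by inverse variance.
def combine(x_vis, var_vis, x_aud, var_aud):
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_aud)
    x_hat = w_vis * x_vis + (1 - w_vis) * x_aud
    var_hat = 1 / (1 / var_vis + 1 / var_aud)  # <= min(var_vis, var_aud)
    return x_hat, var_hat

# A degraded (noisier) visual stream shifts the weight toward audition.
print(combine(x_vis=11.0, var_vis=4.0, x_aud=9.0, var_aud=1.0))  # -> (9.4, 0.8)
```

The nonprobabilistic alternative mentioned in the abstract would fix the weights regardless of cue variance, which is exactly what the trial-by-trial analysis ruled out.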
Langó, Tamás; Róna, Gergely; Hunyadi-Gulyás, Éva; Turiák, Lilla; Varga, Julia; Dobson, László; Várady, György; Drahos, László; Vértessy, Beáta G; Medzihradszky, Katalin F; Szakács, Gergely; Tusnády, Gábor E
2017-02-13
Transmembrane proteins play a crucial role in signaling, ion transport, nutrient uptake, and in maintaining the dynamic equilibrium between the internal and external environment of cells. Despite their important biological functions and abundance, fewer than 2% of all determined structures are of transmembrane proteins. Given the persisting technical difficulties associated with high-resolution structure determination of transmembrane proteins, additional methods, including computational and experimental techniques, remain vital in promoting our understanding of their topologies, 3D structures, functions and interactions. Here we report a method for the high-throughput determination of extracellular segments of transmembrane proteins, based on the identification of surface-labeled, biotin-captured peptide fragments by LC/MS/MS. We show that reliable identification of extracellular protein segments increases the accuracy and reliability of existing topology prediction algorithms. Using the experimental topology data as constraints, our improved prediction tool provides accurate and reliable topology models for hundreds of human transmembrane proteins.
User's guide to the Reliability Estimation System Testbed (REST)
NASA Technical Reports Server (NTRS)
Nicol, David M.; Palumbo, Daniel L.; Rifkin, Adam
1992-01-01
The Reliability Estimation System Testbed is an X-window based reliability modeling tool that was created to explore the use of the Reliability Modeling Language (RML). RML was defined to support several reliability analysis techniques including modularization, graphical representation, Failure Mode Effects Simulation (FMES), and parallel processing. These techniques are most useful in modeling large systems. Using modularization, an analyst can create reliability models for individual system components. The modules can be tested separately and then combined to compute the total system reliability. Because a one-to-one relationship can be established between system components and the reliability modules, a graphical user interface may be used to describe the system model. RML was designed to permit message passing between modules. This feature enables reliability modeling based on a run time simulation of the system wide effects of a component's failure modes. The use of failure modes effects simulation enhances the analyst's ability to correctly express system behavior when using the modularization approach to reliability modeling. To alleviate the computation bottleneck often found in large reliability models, REST was designed to take advantage of parallel processing on hypercube processors.
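The modularization idea described above, per-component reliability modules composed into a system estimate, reduces in its simplest form to series/parallel composition. The sketch below is not RML or REST (which add graphical modeling, message passing, and failure-mode effects simulation); it only illustrates the combination step, with invented reliability values.

```python
# Series/parallel composition of module reliabilities, the arithmetic
# underlying the modular approach. Values are illustrative.
def series(*r):
    """All modules must survive."""
    out = 1.0
    for ri in r:
        out *= ri
    return out

def parallel(*r):
    """Redundant set: one surviving module suffices."""
    out = 1.0
    for ri in r:
        out *= (1.0 - ri)
    return 1.0 - out

cpu = parallel(0.999, 0.999, 0.999)   # triple-redundant processors
bus = series(0.9995, 0.9995)          # two bus segments in series
print(f"system reliability: {series(cpu, bus):.6f}")
```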
Vairis, Achilles; Stefanoudakis, George; Petousis, Markos; Vidakis, Nectarios; Tsainis, Andreas-Marios; Kandyla, Betina
2016-02-01
The human knee joint has a three-dimensional geometry with multiple body articulations that produce complex mechanical responses under the loads that occur in everyday life and sports activities. Understanding the complex mechanical interactions of these load-bearing structures is of use when the treatment of relevant diseases is evaluated and assisting devices are designed. The anterior cruciate ligament (ACL) in the knee is one of the four main ligaments connecting the femur to the tibia and is often torn during sudden twisting motions, resulting in knee instability. The objective of this work is to study the mechanical behavior of the human knee joint and evaluate the differences in its response for three different states, i.e., the intact, ACL-deficient, and surgically treated (reconstructed) knee. Finite element models corresponding to these states were developed. For the reconstructed model, a novel repair device was used, developed and patented by the author in previous work. Static load cases were applied, as presented in a previous work, in order to compare the results produced by the two models, the ACL-deficient and the surgically reconstructed knee joint, under the exact same loading conditions. Displacements were calculated in different directions for the load cases studied and were found to be very close to those from previous modeling work and in good agreement with experimental data presented in the literature. As the results of this study show, the developed finite element model of both the intact and the ACL-deficient human knee joint is a reliable tool for studying the kinematics of the human knee. In addition, the reconstructed human knee joint model had kinematic behavior similar to the intact knee joint, showing that such reconstruction devices can restore human knee stability to an adequate extent.
Software reliability models for fault-tolerant avionics computers and related topics
NASA Technical Reports Server (NTRS)
Miller, Douglas R.
1987-01-01
Software reliability research is briefly described. General research topics are reliability growth models, quality of software reliability prediction, the complete monotonicity property of reliability growth, conceptual modelling of software failure behavior, assurance of ultrahigh reliability, and analysis techniques for fault-tolerant systems.
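As a concrete instance of the reliability growth models surveyed, the sketch below fits the classic Goel-Okumoto NHPP, m(t) = a(1 - e^{-bt}), to synthetic cumulative failure counts and estimates the failures remaining. The data are invented and the model choice is ours for illustration, not necessarily one analyzed in the report.

```python
import numpy as np
from scipy.optimize import curve_fit

# Goel-Okumoto NHPP: expected cumulative failures by time t.
def goel_okumoto(t, a, b):
    return a * (1.0 - np.exp(-b * t))

# Synthetic weekly cumulative failure counts (illustrative only).
t_weeks = np.arange(1, 13, dtype=float)
cum_failures = np.array([12, 21, 30, 36, 41, 45, 48, 50, 52, 53, 54, 55.0])

(a, b), _ = curve_fit(goel_okumoto, t_weeks, cum_failures, p0=(60.0, 0.2))
remaining = a - goel_okumoto(t_weeks[-1], a, b)
print(f"a={a:.1f} total expected failures, b={b:.3f}/week, "
      f"~{remaining:.1f} failures remaining")
```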
NASA Astrophysics Data System (ADS)
Su, Long-Jyun; Wu, Meng-Shiue; Hui, Yuen Yung; Chang, Be-Ming; Pan, Lei; Hsu, Pei-Chen; Chen, Yit-Tsong; Ho, Hong-Nerng; Huang, Yen-Hua; Ling, Thai-Yen; Hsu, Hsao-Hsun; Chang, Huan-Cheng
2017-03-01
Cell therapy is a promising strategy for the treatment of human diseases. While the first use of cells for therapeutic purposes can be traced to the 19th century, there has been a lack of general and reliable methods to study the biodistribution and associated pharmacokinetics of transplanted cells in various animal models for preclinical evaluation. Here, we present a new platform using albumin-conjugated fluorescent nanodiamonds (FNDs) as biocompatible and photostable labels for quantitative tracking of human placenta choriodecidual membrane-derived mesenchymal stem cells (pcMSCs) in miniature pigs by magnetic modulation. With this background-free detection technique and time-gated fluorescence imaging, we have been able to precisely determine the numbers as well as positions of the transplanted FND-labeled pcMSCs in organs and tissues of the miniature pigs after intravenous administration. The method is applicable to single-cell imaging and quantitative tracking of human stem/progenitor cells in rodents and other animal models as well.
Kurtova, Antonina V.; Balakrishnan, Kumudha; Chen, Rong; Ding, Wei; Schnabl, Susanne; Quiroga, Maite P.; Sivina, Mariela; Wierda, William G.; Estrov, Zeev; Keating, Michael J.; Shehata, Medhat; Jäger, Ulrich; Gandhi, Varsha; Kay, Neil E.; Plunkett, William
2009-01-01
Marrow stromal cells (MSCs) provide important survival and drug resistance signals to chronic lymphocytic leukemia (CLL) cells, but current models to analyze CLL–MSC interactions are heterogeneous. Therefore, we tested different human and murine MSC lines and primary human MSCs for their ability to protect CLL cells from spontaneous and drug-induced apoptosis. Our results show that both human and murine MSCs are equally effective in protecting CLL cells from fludarabine-induced apoptosis. This protective effect was sustained over a wide range of CLL–MSC ratios (5:1 to 100:1), and the levels of protection were reproducible in 4 different laboratories. Human and murine MSCs also protected CLL cells from dexamethasone- and cyclophosphamide-induced apoptosis. This protection required cell–cell contact and was virtually absent when CLL cells were separated from the MSCs by micropore filters. Furthermore, MSCs maintained Mcl-1 and protected CLL cells from spontaneous and fludarabine-induced Mcl-1 and PARP cleavage. Collectively, these studies define common denominators for CLL cocultures with MSCs. They also provide a reliable, validated tool for future investigations into the mechanism of MSC–CLL cross talk and for drug testing in a more relevant fashion than the commonly used suspension cultures. PMID:19762485
Closing Report for NASA Cooperative Agreement NASA-1-242
NASA Technical Reports Server (NTRS)
Maung, Khin Maung
1999-01-01
Reliable estimates of exposures due to ionizing radiation are of paramount importance in achieving human exploration and development of space, and in several technologically important and scientifically significant areas impacting industrial and public health. For proper assessment of radiation exposures, reliable transport codes are needed. An essential input to the transport codes is information about the interaction of ions and neutrons with matter. Most of this information enters as nuclear cross section data. In order to obtain an accurate parameterization of cross section data, theoretical input is indispensable, especially for processes where there is little or no experimental data available. During the grant period, a reliable database was developed, along with a phenomenological model for total absorption cross sections valid for any charged or uncharged light, medium and heavy collision pair over the entire energy range. It is gratifying to note the success of the model. The cross section model has been adopted and is in use in NASA cosmic ray detector development projects, radiation protection and shielding programs, and several DoE laboratories and institutions. A list of the publications based on work done during the grant period is given below, and a sample copy of one of the papers is enclosed with this report.
Developing a model for hospital inherent safety assessment: Conceptualization and validation.
Yari, Saeed; Akbari, Hesam; Gholami Fesharaki, Mohammad; Khosravizadeh, Omid; Ghasemi, Mohammad; Barsam, Yalda; Akbari, Hamed
2018-01-01
Paying attention to the safety of hospitals, as the most crucial institutions for providing medical and health services, wherein a bundle of facilities, equipment, and human resources exist, is of significant importance. The present research aims at developing a model for assessing hospital safety based on the principles of inherent safety design. Face validity (30 experts), content validity (20 experts), construct validity (268 samples), convergent validity, and divergent validity were employed to validate the prepared questionnaire; item analysis, Cronbach's alpha, the ICC test (to measure test reliability), and the composite reliability coefficient were used to measure primary reliability. The relationships between variables and factors were confirmed at the 0.05 significance level by confirmatory factor analysis (CFA) and structural equation modeling (SEM) using Smart-PLS. R-squared and factor loading values, which were higher than 0.67 and 0.300 respectively, indicated a strong fit. Moderation (0.970), simplification (0.959), substitution (0.943), and minimization (0.5008) had the greatest weights in determining the inherent safety of a hospital, respectively. Moderation, simplification, and substitution thus carry more weight in inherent safety, while minimization carries less, which could be due to its definition as minimizing the risk.
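The two reliability statistics named above have simple closed forms: Cronbach's alpha from item and total-score variances, and composite reliability from standardized factor loadings. The sketch below computes both on synthetic data; the item responses and loadings are invented, not the study's.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) matrix of questionnaire scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def composite_reliability(loadings):
    """Composite reliability from standardized factor loadings."""
    lam = np.asarray(loadings, dtype=float)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam**2).sum())

rng = np.random.default_rng(2)
common = rng.normal(size=(120, 1))                 # shared latent factor
items = common + 0.7 * rng.normal(size=(120, 4))   # 4 hypothetical items
print(f"alpha = {cronbach_alpha(items):.2f}")
print(f"CR    = {composite_reliability([0.78, 0.81, 0.74, 0.69]):.2f}")
```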
Quality assessment of a new surgical simulator for neuroendoscopic training.
Filho, Francisco Vaz Guimarães; Coelho, Giselle; Cavalheiro, Sergio; Lyra, Marcos; Zymberg, Samuel T
2011-04-01
Ideal surgical training models should be entirely reliable, non-toxic, easy to handle, and, if possible, low cost. All available models have their advantages and disadvantages. The choice of one or another will depend on the type of surgery to be performed. The authors created an anatomical model called the S.I.M.O.N.T. (Sinus Model Oto-Rhino Neuro Trainer) Neurosurgical Endotrainer, which can provide reliable neuroendoscopic training. The aim of the present study was to assess both the quality of the model and the development of surgical skills by trainees. The S.I.M.O.N.T. is built of a synthetic thermoretractable, thermosensitive rubber called Neoderma, which, combined with different polymers, produces more than 30 different formulas. Quality assessment of the model was based on qualitative and quantitative data obtained from training sessions with 9 experienced and 13 inexperienced neurosurgeons. The techniques used for evaluation were face validation, retest and interrater reliability, and construct validation. The experts considered the S.I.M.O.N.T. capable of reproducing surgical situations as if they were real and of presenting great similarity to the human brain. Surgical results of serial training showed that the model could be considered precise. Finally, development and improvement of surgical skills by the trainees were observed and considered relevant to further training. It was also observed that the probability of any single error decreased dramatically after each training session, with a mean reduction of 41.65% (range 38.7%-45.6%). Neuroendoscopic training has some specific requirements. A unique set of instruments is required, as is a model that can resemble real-life situations. The S.I.M.O.N.T. is a new alternative model specially designed for this purpose. Validation techniques followed by precision assessments attested to the model's feasibility.
Interviewer as instrument: accounting for human factors in evaluation research.
Brown, Joel H
2006-04-01
This methodological study examines an original data collection model designed to incorporate human factors and enhance data richness in qualitative and evaluation research. Evidence supporting this model is drawn from in-depth youth and adult interviews in one of the largest policy/program evaluations undertaken in the United States, the Drug, Alcohol, and Tobacco Education evaluation (77 districts, 118 schools). When applying the explicit observation technique (EOT)--the strategic and nonjudgmental disclosure of nonverbal human factor cues by the interviewer to the respondent during interview--data revealed the observation disclosure pattern. Here, respondents linked perceptions with policy or program implementation or effectiveness evidence. Although more research is needed, it is concluded that the EOT yields richer data when compared with traditional semistructured interviews and, thus, holds promise to enhance qualitative and evaluation research methods. Validity and reliability as well as qualitative and evaluation research considerations are discussed.
Modeling of interactions of electromagnetic fields with human bodies
NASA Astrophysics Data System (ADS)
Caputa, Krzysztof
Interactions of electromagnetic fields with the human body have been a subject of scientific interest and public concern. In recent years, issues in power line field effects and those of wireless telephones have been at the forefront of research. Engineering research complements biological investigations by quantifying the induced fields in biological bodies due to exposure to external fields. The research presented in this thesis aims at providing reliable tools and addressing some unresolved issues related to interactions with the human body of power line fields and fields produced by handheld wireless telephones. The research comprises two areas, namely the development of versatile models of the human body and their visualisation, and the verification and application of numerical codes to solve selected problems of interest. The models of the human body, which are based on magnetic resonance scans of the body, are unique and differ considerably from other models currently available. With the aid of the computer software developed, the models can be arranged into different postures, and medical devices can be accurately placed inside them. A previously developed code for modeling interactions of power line fields with biological bodies has been verified by rigorous, quantitative inter-laboratory comparison for two human body models. This code has been employed to model electromagnetic interference (EMI) of the magnetic field with implanted cardiac pacemakers. In this case, the correct placement and representation of the pacemaker leads are critical, as simplified computations have been shown to result in significant errors. In modeling interactions of wireless communication devices, the finite-difference time-domain (FDTD) technique has become a de facto standard. The previously developed code has been verified by comparison with the analytical solution for a conductive sphere. While researchers previously limited their verifications to the principal axes of the sphere, a global (volumetric) field evaluation allowed identification of the locations of staircasing errors and of the singularities responsible for them. In evaluating the safety of cellular telephones and similar devices, the specific absorption rate (SAR) averaged over a 1 g (in North America) or 10 g (in Europe) cube is used. A new algorithm has been developed and tested, which allows automatic and reliable identification of the maximum value with a user-selected inclusion of air (if required). This algorithm and the verified code have been used to model the performance of a commercial telephone in the proximity of the head, and to model EMI of this phone with a hearing aid placed in the ear canal. The modeling results, which relied on a proper representation of the antenna (consisting of two helices) and of the complex shape and structure of the telephone case, were confirmed by measurements performed in another laboratory. Similarly, the EMI modeling was in agreement with acoustic measurements (performed elsewhere). The latter comparison allowed the anticipated mechanism of the EMI to be confirmed.
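The cube-averaged SAR search can be sketched compactly: slide a fixed-size cube across the voxel grid, average, and take the maximum. Real compliance averaging is mass-based with controlled air inclusion, which is the point of the algorithm described above; the sketch below simplifies to homogeneous 1 g/cm^3 tissue and synthetic SAR values.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Simplified maximum cube-averaged SAR on a voxel grid. Assumes uniform
# tissue density (1 g/cm^3), so a 1 g mass is a 1 cm^3 cube.
VOXEL_MM = 2.0
cube_side_mm = 10.0
k = max(1, round(cube_side_mm / VOXEL_MM))   # cube size in voxels (here 5)

rng = np.random.default_rng(3)
sar = rng.exponential(scale=0.2, size=(40, 40, 40))   # synthetic SAR (W/kg)

avg = uniform_filter(sar, size=k, mode="constant")    # local cube means
peak = avg.max()
loc = np.unravel_index(avg.argmax(), avg.shape)
print(f"max {cube_side_mm:.0f} mm cube-averaged SAR: {peak:.2f} W/kg at voxel {loc}")
```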
Statistical modelling of software reliability
NASA Technical Reports Server (NTRS)
Miller, Douglas R.
1991-01-01
During the six-month period from 1 April 1991 to 30 September 1991 the following research papers in statistical modeling of software reliability appeared: (1) A Nonparametric Software Reliability Growth Model; (2) On the Use and the Performance of Software Reliability Growth Models; (3) Research and Development Issues in Software Reliability Engineering; (4) Special Issues on Software; and (5) Software Reliability and Safety.
Probabilistic simulation of the human factor in structural reliability
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1993-01-01
A formal approach is described in an attempt to computationally simulate the probable ranges of uncertainty of the human factor in structural probabilistic assessments. A multi-factor interaction equation (MFIE) model has been adopted for this purpose. Human factors such as marital status, professional status, home life, job satisfaction, work load and health are considered to demonstrate the concept. Parametric studies in conjunction with judgment are used to select reasonable values for the participating factors (primitive variables). Probabilistic sensitivity studies are subsequently performed to assess the suitability of the MFIE and the validity of the whole approach. Results obtained show that the uncertainties for the no-error case range from five to thirty percent in the most optimistic scenario.
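A multi-factor interaction equation of the commonly used product form can be sketched as follows: each primitive variable scales a reference value through a term [(A_u - A)/(A_u - A_o)]^e, where A is the current value, A_u an upper (limiting) value, A_o a reference value, and e an empirical exponent. The factor names, bounds, and exponents below are invented for illustration and are not the values used in the study.

```python
# Sketch of a product-form multi-factor interaction equation (MFIE).
# Each tuple: (name, current A, upper bound A_u, reference A_o, exponent e).
# All numbers are hypothetical.
factors = [
    ("job satisfaction", 6.0, 10.0, 5.0, 0.5),
    ("work load",        7.0, 10.0, 5.0, 0.25),
    ("health",           8.0, 10.0, 5.0, 0.5),
]

def mfie(base, factors):
    out = base
    for _name, a, a_u, a_o, e in factors:
        out *= ((a_u - a) / (a_u - a_o)) ** e
    return out

print(f"adjusted human-performance factor: {mfie(1.0, factors):.3f}")
```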
A survey of quality measures for gray-scale image compression
NASA Technical Reports Server (NTRS)
Eskicioglu, Ahmet M.; Fisher, Paul S.
1993-01-01
Although a variety of techniques are available today for gray-scale image compression, a complete evaluation of these techniques cannot be made as there is no single reliable objective criterion for measuring the error in compressed images. The traditional subjective criteria are burdensome, and usually inaccurate or inconsistent. On the other hand, being the most common objective criterion, the mean square error (MSE) does not have a good correlation with the viewer's response. It is now understood that in order to have a reliable quality measure, a representative model of the complex human visual system is required. In this paper, we survey and give a classification of the criteria for the evaluation of monochrome image quality.
Human Support Issues and Systems for the Space Exploration Initiative: Results from Project Outreach
1991-01-01
that human factors were responsible for mission failure more often than equipment factors. Spacecraft habitability and ergonomics also require more...substantial challenges for designing reliable, flexible joints and dexterous, reliable gloves. Submission #100701 dealt with the ergonomics of work...perception that human factors deals primarily with cockpit displays and ergonomics . The success of long-duration missions will be highly dependent on
Higuchi, Tamami; Yokobori, Takehiko; Naito, Tomoharu; Kakinuma, Chihaya; Hagiwara, Shinji; Nishiyama, Masahiko; Asao, Takayuki
2018-01-01
Prognosis of pancreatic cancer is poor, thus the development of novel therapeutic drugs is necessary. During preclinical studies, appropriate models are essential for evaluating drug efficacy. The present study sought to determine the ideal pancreatic cancer mouse model for reliable preclinical testing. Such a model could accurately reflect human pancreatic cancer phenotypes and predict future clinical trial results. Systemic pathology analysis was performed in an orthotopic transplantation model to prepare model mice for use in preclinical studies, mimicking the progress of human pancreatic cancer. The location and the timing of inoculated cancer cell metastases, pathogenesis and cause of fatality were analyzed. Furthermore, the efficacy of gemcitabine, a key pancreatic cancer drug, was evaluated in this model where liver metastasis and peritoneal dissemination occur. Results indicated that the SUIT-2 orthotopic pancreatic cancer model was similar to the phenotypic sequential progression of human pancreatic cancer, with extra-pancreatic invasion, intra-peritoneal dissemination and other hematogenous organ metastases. Notably, survival was prolonged by administering gemcitabine to mice with metastasized pancreatic cancer. Furthermore, the detailed effects of gemcitabine on the primary tumor and metastatic tumor lesions were pathologically evaluated in mice. The present study indicated the model accurately depicted pancreatic cancer development and metastasis. Furthermore, the detailed effects of pancreatic cancer drugs on the primary tumor and on metastatic tumor lesions. We present this model as a potential new standard for new drug development in pancreatic cancer. PMID:29435042
A mathematical model of diurnal variations in human plasma melatonin levels
NASA Technical Reports Server (NTRS)
Brown, E. N.; Choe, Y.; Shanahan, T. L.; Czeisler, C. A.
1997-01-01
Studies in animals and humans suggest that the diurnal pattern in plasma melatonin levels is due to the hormone's rates of synthesis, circulatory infusion and clearance, circadian control of synthesis onset and offset, environmental lighting conditions, and error in the melatonin immunoassay. A two-dimensional linear differential equation model of the hormone is formulated and is used to analyze plasma melatonin levels in 18 normal healthy male subjects during a constant routine. Recently developed Bayesian statistical procedures are used to incorporate correctly the magnitude of the immunoassay error into the analysis. The estimated parameters [median (range)] were clearance half-life of 23.67 (14.79-59.93) min, synthesis onset time of 2206 (1940-0029), synthesis offset time of 0621 (0246-0817), and maximum N-acetyltransferase activity of 7.17(2.34-17.93) pmol x l(-1) x min(-1). All were in good agreement with values from previous reports. The difference between synthesis offset time and the phase of the core temperature minimum was 1 h 15 min (-4 h 38 min-2 h 43 min). The correlation between synthesis onset and the dim light melatonin onset was 0.93. Our model provides a more physiologically plausible estimate of the melatonin synthesis onset time than that given by the dim light melatonin onset and the first reliable means of estimating the phase of synthesis offset. Our analysis shows that the circadian and pharmacokinetics parameters of melatonin can be reliably estimated from a single model.
Predicting space telerobotic operator training performance from human spatial ability assessment
NASA Astrophysics Data System (ADS)
Liu, Andrew M.; Oman, Charles M.; Galvan, Raquel; Natapoff, Alan
2013-11-01
Our goal was to determine whether existing tests of spatial ability can predict an astronaut's qualification test performance after robotic training. Because training astronauts to be qualified robotics operators is so long and expensive, NASA is interested in tools that can predict robotics performance before training begins. Currently, the Astronaut Office does not have a validated tool to predict robotics ability as part of its astronaut selection or training process. Commonly used tests of human spatial ability may provide such a tool to predict robotics ability. We tested the spatial ability of 50 active astronauts who had completed at least one robotics training course, then used logistic regression models to analyze the correlation between spatial ability test scores and the astronauts' performance in their evaluation test at the end of the training course. The fit of the logistic function to our data is statistically significant for several spatial tests. However, the prediction performance of the logistic model depends on the criterion threshold assumed. To clarify the critical selection issues, we show how the probability of correct classification vs. misclassification varies as a function of the mental rotation test criterion level. Since the costs of misclassification are low, the logistic models of spatial ability and robotic performance are reliable enough only to be used to customize regular and remedial training. We suggest several changes in tracking performance throughout robotics training that could improve the range and reliability of predictive models.
Evaluation of Human Reliability in Selected Activities in the Railway Industry
NASA Astrophysics Data System (ADS)
Sujová, Erika; Čierna, Helena; Molenda, Michał
2016-09-01
The article focuses on evaluation of human reliability in the human - machine system in the railway industry. Based on a survey of a train dispatcher and of selected activities, we have identified risk factors affecting the dispatcher`s work and the evaluated risk level of their influence on the reliability and safety of preformed activities. The research took place at the authors` work place between 2012-2013. A survey method was used. With its help, authors were able to identify selected work activities of train dispatcher's risk factors that affect his/her work and the evaluated seriousness of its influence on the reliability and safety of performed activities. Amongst the most important finding fall expressions of unclear and complicated internal regulations and work processes, a feeling of being overworked, fear for one's safety at small, insufficiently protected stations.
Pre-university Chemistry Students in a Mimicked Scholarly Peer Review
NASA Astrophysics Data System (ADS)
van Rens, Lisette; Hermarij, Philip; Pilot, Albert; Beishuizen, Jos; Hofman, Herman; Wal, Marjolein
2014-10-01
Peer review is a significant component in scientific research. Introducing peer review into inquiry processes may be regarded as an aim to develop student understanding regarding quality in inquiries. This study examines student understanding in inquiry peer reviews among pre-university chemistry students, aged 16-17, when they enact a design of a mimicked scholarly peer review. This design is based on a model of a human activity system. Twenty-five different schools in Brazil, Germany, Poland and The Netherlands participated. The students (n = 880) conducted in small groups (n = 428) open inquiries on fermentation. All groups prepared an inquiry report for peer review. These reports were published on a website. Groups were randomly paired in an internet symposium, where they posted review comments to their peers. These responses were qualitatively analyzed on small groups' level of understanding regarding seven categories: inquiry question, hypothesis, management of control variables, accurate measurement, presenting results, reliability of results, discussion and conclusion. The mimicked scholarly review prompted a collective practice. Student understanding was significantly well on presenting results, discussion and conclusion, and significantly less on inquiry question and reliability of results. An enacted design, based on a model of a human activity system, created student understanding of quality in inquiries as well as an insight in a peer-reviewing practice. To what extent this model can be applied in a broader context of design research in science education needs further study.
Paleoenvironmental evidence for first human colonization of the eastern Caribbean
NASA Astrophysics Data System (ADS)
Siegel, Peter E.; Jones, John G.; Pearsall, Deborah M.; Dunning, Nicholas P.; Farrell, Pat; Duncan, Neil A.; Curtis, Jason H.; Singh, Sushant K.
2015-12-01
Identifying and dating first human colonization of new places is challenging, especially when group sizes were small and material traces of their occupations were ephemeral. Generating reliable reconstructions of human colonization patterns from intact archaeological sites may be difficult to impossible given post-depositional taphonomic processes and in cases of island and coastal locations the inundation of landscapes resulting from post-Pleistocene sea-level rise. Paleoenvironmental reconstruction is proving to be a more reliable method of identifying small-scale human colonization events than archaeological data alone. We demonstrate the method through a sediment-coring project across the Lesser Antilles and southern Caribbean. Paleoenvironmental data were collected informing on the timing of multiple island-colonization events and land-use histories spanning the full range of human occupations in the Caribbean, from the initial forays into the islands through the arrival and eventual domination of the landscapes and indigenous people by Europeans. In some areas, our data complement archaeological, paleoecological, and historical findings from the Lesser Antilles and in others amplify understanding of colonization history. Here, we highlight data relating to the timing and process of initial colonization in the eastern Caribbean. In particular, paleoenvironmental data from Trinidad, Grenada, Martinique, and Marie-Galante (Guadeloupe) provide a basis for revisiting initial colonization models of the Caribbean. We conclude that archaeological programs addressing human occupations dating to the early to mid-Holocene, especially in dynamic coastal settings, should systematically incorporate paleoenvironmental investigations.
Morfeld, Peter; Bruch, Joachim; Levy, Len; Ngiewih, Yufanyi; Chaudhuri, Ishrat; Muranko, Henry J; Myerson, Ross; McCunney, Robert J
2015-04-23
We analyze the scientific basis and methodology used by the German MAK Commission in their recommendations for exposure limits and carcinogen classification of "granular biopersistent particles without known specific toxicity" (GBS). These recommendations are under review at the European Union level. We examine the scientific assumptions in an attempt to reproduce the results. MAK's human equivalent concentrations (HECs) are based on a particle mass and on a volumetric model in which results from rat inhalation studies are translated to derive occupational exposure limits (OELs) and a carcinogen classification. We followed the methods as proposed by the MAK Commission and Pauluhn 2011. We also examined key assumptions in the metrics, such as surface area of the human lung, deposition fractions of inhaled dusts, human clearance rates; and risk of lung cancer among workers, presumed to have some potential for lung overload, the physiological condition in rats associated with an increase in lung cancer risk. The MAK recommendations on exposure limits for GBS have numerous incorrect assumptions that adversely affect the final results. The procedures to derive the respirable occupational exposure limit (OEL) could not be reproduced, a finding raising considerable scientific uncertainty about the reliability of the recommendations. Moreover, the scientific basis of using the rat model is confounded by the fact that rats and humans show different cellular responses to inhaled particles as demonstrated by bronchoalveolar lavage (BAL) studies in both species. Classifying all GBS as carcinogenic to humans based on rat inhalation studies in which lung overload leads to chronic inflammation and cancer is inappropriate. Studies of workers, who have been exposed to relevant levels of dust, have not indicated an increase in lung cancer risk. Using the methods proposed by the MAK, we were unable to reproduce the OEL for GBS recommended by the Commission, but identified substantial errors in the models. Considerable shortcomings in the use of lung surface area, clearance rates, deposition fractions; as well as using the mass and volumetric metrics as opposed to the particle surface area metric limit the scientific reliability of the proposed GBS OEL and carcinogen classification.
Body mass estimates of hominin fossils and the evolution of human body size.
Grabowski, Mark; Hatala, Kevin G; Jungers, William L; Richmond, Brian G
2015-08-01
Body size directly influences an animal's place in the natural world, including its energy requirements, home range size, relative brain size, locomotion, diet, life history, and behavior. Thus, an understanding of the biology of extinct organisms, including species in our own lineage, requires accurate estimates of body size. Since the last major review of hominin body size based on postcranial morphology over 20 years ago, new fossils have been discovered, species attributions have been clarified, and methods improved. Here, we present the most comprehensive and thoroughly vetted set of individual fossil hominin body mass predictions to date, and estimation equations based on a large (n = 220) sample of modern humans of known body masses. We also present species averages based exclusively on fossils with reliable taxonomic attributions, estimates of species averages by sex, and a metric for levels of sexual dimorphism. Finally, we identify individual traits that appear to be the most reliable for mass estimation for each fossil species, for use when only one measurement is available for a fossil. Our results show that many early hominins were generally smaller-bodied than previously thought, an outcome likely due to larger estimates in previous studies resulting from the use of large-bodied modern human reference samples. Current evidence indicates that modern human-like large size first appeared by at least 3-3.5 Ma in some Australopithecus afarensis individuals. Our results challenge an evolutionary model arguing that body size increased from Australopithecus to early Homo. Instead, we show that there is no reliable evidence that the body size of non-erectus early Homo differed from that of australopiths, and confirm that Homo erectus evolved larger average body size than earlier hominins. Copyright © 2015 Elsevier Ltd. All rights reserved.
Assessment of Personality as a Requirement for Next Generation Ship Optimal Manning
2012-09-01
Department of Test and Evaluation FFG Frigate Guided Missile FFM Five Factor Model FY HRO Fiscal Year High Reliability Organization HSI Human... FFM ) to classify personality and their associated scales provided a renewed foundation for personality trait research (Digman, 1990). Costa and...McCrae’s (1992) FFM of personality traits (openness, conscientiousness, extraversion, agreeableness, and emotional stability) has developed into the
10 CFR 712.15 - Management evaluation.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 4 2014-01-01 2014-01-01 false Management evaluation. 712.15 Section 712.15 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability... workplace substance abuse program for DOE contractor employees, and DOE Order 3792.3, “Drug-Free Federal...
10 CFR 712.15 - Management evaluation.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 4 2012-01-01 2012-01-01 false Management evaluation. 712.15 Section 712.15 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability... workplace substance abuse program for DOE contractor employees, and DOE Order 3792.3, “Drug-Free Federal...
10 CFR 712.15 - Management evaluation.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 4 2011-01-01 2011-01-01 false Management evaluation. 712.15 Section 712.15 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability... workplace substance abuse program for DOE contractor employees, and DOE Order 3792.3, “Drug-Free Federal...
10 CFR 712.15 - Management evaluation.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 4 2013-01-01 2013-01-01 false Management evaluation. 712.15 Section 712.15 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability... workplace substance abuse program for DOE contractor employees, and DOE Order 3792.3, “Drug-Free Federal...
10 CFR 712.18 - Transferring HRP certification.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Transferring HRP certification. 712.18 Section 712.18 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability Program Procedures § 712.18 Transferring HRP certification. (a) For HRP certification to be...
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Applicability. 712.2 Section 712.2 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability Program General Provisions § 712.2 Applicability. The HRP applies to all applicants for, or current employees of...
10 CFR 712.22 - Hearing officer's report and recommendation.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Hearing officer's report and recommendation. 712.22 Section 712.22 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability Program Procedures § 712.22 Hearing officer's report and recommendation. Within...
10 CFR 712.16 - DOE security review.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 4 2011-01-01 2011-01-01 false DOE security review. 712.16 Section 712.16 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability... part. (c) Any mental/personality disorder or behavioral issues found in a personnel security file...
10 CFR 712.10 - Designation of HRP positions.
Code of Federal Regulations, 2012 CFR
2012-01-01
... duties or has responsibility for working with, protecting, or transporting nuclear explosives, nuclear... 10 Energy 4 2012-01-01 2012-01-01 false Designation of HRP positions. 712.10 Section 712.10 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability...
10 CFR 712.10 - Designation of HRP positions.
Code of Federal Regulations, 2013 CFR
2013-01-01
... duties or has responsibility for working with, protecting, or transporting nuclear explosives, nuclear... 10 Energy 4 2013-01-01 2013-01-01 false Designation of HRP positions. 712.10 Section 712.10 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability...
10 CFR 712.10 - Designation of HRP positions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... duties or has responsibility for working with, protecting, or transporting nuclear explosives, nuclear... 10 Energy 4 2010-01-01 2010-01-01 false Designation of HRP positions. 712.10 Section 712.10 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability...
10 CFR 712.10 - Designation of HRP positions.
Code of Federal Regulations, 2011 CFR
2011-01-01
... duties or has responsibility for working with, protecting, or transporting nuclear explosives, nuclear... 10 Energy 4 2011-01-01 2011-01-01 false Designation of HRP positions. 712.10 Section 712.10 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability...
10 CFR 712.10 - Designation of HRP positions.
Code of Federal Regulations, 2014 CFR
2014-01-01
... duties or has responsibility for working with, protecting, or transporting nuclear explosives, nuclear... 10 Energy 4 2014-01-01 2014-01-01 false Designation of HRP positions. 712.10 Section 712.10 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability...
10 CFR 712.17 - Instructional requirements.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 4 2011-01-01 2011-01-01 false Instructional requirements. 712.17 Section 712.17 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability... responding to behavioral change and aberrant or unusual behavior that may result in a risk to national...
10 CFR 712.17 - Instructional requirements.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 4 2012-01-01 2012-01-01 false Instructional requirements. 712.17 Section 712.17 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability... responding to behavioral change and aberrant or unusual behavior that may result in a risk to national...
10 CFR 712.17 - Instructional requirements.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 4 2013-01-01 2013-01-01 false Instructional requirements. 712.17 Section 712.17 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability... responding to behavioral change and aberrant or unusual behavior that may result in a risk to national...
10 CFR 712.17 - Instructional requirements.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 4 2014-01-01 2014-01-01 false Instructional requirements. 712.17 Section 712.17 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability... responding to behavioral change and aberrant or unusual behavior that may result in a risk to national...
Understanding human management of automation errors
McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.
2013-01-01
Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042
Understanding human management of automation errors.
McBride, Sara E; Rogers, Wendy A; Fisk, Arthur D
2014-01-01
Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance.
Human Induced Pluripotent Stem Cell-Derived Macrophages for Unraveling Human Macrophage Biology.
Zhang, Hanrui; Reilly, Muredach P
2017-11-01
Despite a substantial appreciation for the critical role of macrophages in cardiometabolic diseases, understanding of human macrophage biology has been hampered by the lack of reliable and scalable models for cellular and genetic studies. Human induced pluripotent stem cell (iPSC)-derived macrophages (IPSDM), as an unlimited source of subject genotype-specific cells, will undoubtedly play an important role in advancing our understanding of the role of macrophages in human diseases. In this review, we summarize current literature in the differentiation and characterization of IPSDM at phenotypic, functional, and transcriptomic levels. We emphasize the progress in differentiating iPSC to tissue resident macrophages, and in understanding the ontogeny of in vitro differentiated IPSDM that resembles primitive hematopoiesis, rather than adult definitive hematopoiesis. We review the application of IPSDM in modeling both Mendelian genetic disorders and host-pathogen interactions. Finally, we highlighted the potential areas of research using IPSDM in functional validation of coronary artery disease loci in genome-wide association studies, functional genomic analyses, drug testing, and cell therapeutics in cardiovascular diseases. © 2017 American Heart Association, Inc.
Erbe, C
2000-07-01
This article examines the masking by anthropogenic noise of beluga whale calls. Results from human masking experiments and a software backpropagation neural network are compared to the performance of a trained beluga whale. The goal was to find an accurate, reliable, and fast model to replace lengthy and expensive animal experiments. A beluga call was masked by three types of noise, an icebreaker's bubbler system and propeller noise, and ambient arctic ice-cracking noise. Both the human experiment and the neural network successfully modeled the beluga data in the sense that they classified the noises in the same order from strongest to weakest masking as the whale and with similar call-detection thresholds. The neural network slightly outperformed the humans. Both models were then used to predict the masking of a fourth type of noise, Gaussian white noise. Their prediction ability was judged by returning to the aquarium to measure masked-hearing thresholds of a beluga in white noise. Both models and the whale identified bubbler noise as the strongest masker, followed by ramming, then white noise. Natural ice-cracking noise masked the least. However, the humans and the neural network slightly overpredicted the amount of masking for white noise. This is neglecting individual variation in belugas, because only one animal could be trained. Comparing the human model to the neural network model, the latter has the advantage of objectivity, reproducibility of results, and efficiency, particularly if the interference of a large number of signals and noise is to be examined.
Reliability models: the influence of model specification in generation expansion planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stremel, J.P.
1982-10-01
This paper is a critical evaluation of reliability methods used for generation expansion planning. It is shown that the methods for treating uncertainty are critical for determining the relative reliability value of expansion alternatives. It is also shown that the specification of the reliability model will not favor all expansion options equally. Consequently, the model is biased. In addition, reliability models should be augmented with an economic value of reliability (such as the cost of emergency procedures or energy not served). Generation expansion evaluations which ignore the economic value of excess reliability can be shown to be inconsistent. The conclusionsmore » are that, in general, a reliability model simplifies generation expansion planning evaluations. However, for a thorough analysis, the expansion options should be reviewed for candidates which may be unduly rejected because of the bias of the reliability model. And this implies that for a consistent formulation in an optimization framework, the reliability model should be replaced with a full economic optimization which includes the costs of emergency procedures and interruptions in the objective function.« less
System and Software Reliability (C103)
NASA Technical Reports Server (NTRS)
Wallace, Dolores
2003-01-01
Within the last decade better reliability models (hardware. software, system) than those currently used have been theorized and developed but not implemented in practice. Previous research on software reliability has shown that while some existing software reliability models are practical, they are no accurate enough. New paradigms of development (e.g. OO) have appeared and associated reliability models have been proposed posed but not investigated. Hardware models have been extensively investigated but not integrated into a system framework. System reliability modeling is the weakest of the three. NASA engineers need better methods and tools to demonstrate that the products meet NASA requirements for reliability measurement. For the new models for the software component of the last decade, there is a great need to bring them into a form that they can be used on software intensive systems. The Statistical Modeling and Estimation of Reliability Functions for Systems (SMERFS'3) tool is an existing vehicle that may be used to incorporate these new modeling advances. Adapting some existing software reliability modeling changes to accommodate major changes in software development technology may also show substantial improvement in prediction accuracy. With some additional research, the next step is to identify and investigate system reliability. System reliability models could then be incorporated in a tool such as SMERFS'3. This tool with better models would greatly add value in assess in GSFC projects.
Do domestic dogs learn words based on humans' referential behaviour?
Tempelmann, Sebastian; Kaminski, Juliane; Tomasello, Michael
2014-01-01
Some domestic dogs learn to comprehend human words, although the nature and basis of this learning is unknown. In the studies presented here we investigated whether dogs learn words through an understanding of referential actions by humans rather than simple association. In three studies, each modelled on a study conducted with human infants, we confronted four word-experienced dogs with situations involving no spatial-temporal contiguity between the word and the referent; the only available cues were referential actions displaced in time from exposure to their referents. We found that no dogs were able to reliably link an object with a label based on social-pragmatic cues alone in all the tests. However, one dog did show skills in some tests, possibly indicating an ability to learn based on social-pragmatic cues.
Motion data classification on the basis of dynamic time warping with a cloud point distance measure
NASA Astrophysics Data System (ADS)
Switonski, Adam; Josinski, Henryk; Zghidi, Hafedh; Wojciechowski, Konrad
2016-06-01
The paper deals with the problem of classification of model free motion data. The nearest neighbors classifier which is based on comparison performed by Dynamic Time Warping transform with cloud point distance measure is proposed. The classification utilizes both specific gait features reflected by a movements of subsequent skeleton joints and anthropometric data. To validate proposed approach human gait identification challenge problem is taken into consideration. The motion capture database containing data of 30 different humans collected in Human Motion Laboratory of Polish-Japanese Academy of Information Technology is used. The achieved results are satisfactory, the obtained accuracy of human recognition exceeds 90%. What is more, the applied cloud point distance measure does not depend on calibration process of motion capture system which results in reliable validation.
Fifty Years of THERP and Human Reliability Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronald L. Boring
2012-06-01
In 1962 at a Human Factors Society symposium, Alan Swain presented a paper introducing a Technique for Human Error Rate Prediction (THERP). This was followed in 1963 by a Sandia Laboratories monograph outlining basic human error quantification using THERP and, in 1964, by a special journal edition of Human Factors on quantification of human performance. Throughout the 1960s, Swain and his colleagues focused on collecting human performance data for the Sandia Human Error Rate Bank (SHERB), primarily in connection with supporting the reliability of nuclear weapons assembly in the US. In 1969, Swain met with Jens Rasmussen of Risø Nationalmore » Laboratory and discussed the applicability of THERP to nuclear power applications. By 1975, in WASH-1400, Swain had articulated the use of THERP for nuclear power applications, and the approach was finalized in the watershed publication of the NUREG/CR-1278 in 1983. THERP is now 50 years old, and remains the most well known and most widely used HRA method. In this paper, the author discusses the history of THERP, based on published reports and personal communication and interviews with Swain. The author also outlines the significance of THERP. The foundations of human reliability analysis are found in THERP: human failure events, task analysis, performance shaping factors, human error probabilities, dependence, event trees, recovery, and pre- and post-initiating events were all introduced in THERP. While THERP is not without its detractors, and it is showing signs of its age in the face of newer technological applications, the longevity of THERP is a testament of its tremendous significance. THERP started the field of human reliability analysis. This paper concludes with a discussion of THERP in the context of newer methods, which can be seen as extensions of or departures from Swain’s pioneering work.« less
Object Extraction in Cluttered Environments via a P300-Based IFCE
He, Huidong; Xian, Bin; Zeng, Ming; Zhou, Huihui; Niu, Linwei; Chen, Genshe
2017-01-01
One of the fundamental issues for robot navigation is to extract an object of interest from an image. The biggest challenges for extracting objects of interest are how to use a machine to model the objects in which a human is interested and extract them quickly and reliably under varying illumination conditions. This article develops a novel method for segmenting an object of interest in a cluttered environment by combining a P300-based brain computer interface (BCI) and an improved fuzzy color extractor (IFCE). The induced P300 potential identifies the corresponding region of interest and obtains the target of interest for the IFCE. The classification results not only represent the human mind but also deliver the associated seed pixel and fuzzy parameters to extract the specific objects in which the human is interested. Then, the IFCE is used to extract the corresponding objects. The results show that the IFCE delivers better performance than the BP network or the traditional FCE. The use of a P300-based IFCE provides a reliable solution for assisting a computer in identifying an object of interest within images taken under varying illumination intensities. PMID:28740505
10 CFR 712.21 - Office of Hearings and Appeals.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Office of Hearings and Appeals. 712.21 Section 712.21 Energy DEPARTMENT OF ENERGY HUMAN RELIABILITY PROGRAM Establishment of and Procedures for the Human Reliability Program Procedures § 712.21 Office of Hearings and Appeals. (a) The certification review hearing...
Olleros, Maria L; Chavez-Galan, Leslie; Segueni, Noria; Bourigault, Marie L; Vesin, Dominique; Kruglov, Andrey A; Drutskaya, Marina S; Bisig, Ruth; Ehlers, Stefan; Aly, Sahar; Walter, Kerstin; Kuprash, Dmitry V; Chouchkova, Miliana; Kozlov, Sergei V; Erard, François; Ryffel, Bernard; Quesniaux, Valérie F J; Nedospasov, Sergei A; Garcia, Irene
2015-09-01
Tumor necrosis factor (TNF) is an important cytokine for host defense against pathogens but is also associated with the development of human immunopathologies. TNF blockade effectively ameliorates many chronic inflammatory conditions but compromises host immunity to tuberculosis. The search for novel, more specific human TNF blockers requires the development of a reliable animal model. We used a novel mouse model with complete replacement of the mouse TNF gene by its human ortholog (human TNF [huTNF] knock-in [KI] mice) to determine resistance to Mycobacterium bovis BCG and M. tuberculosis infections and to investigate whether TNF inhibitors in clinical use reduce host immunity. Our results show that macrophages from huTNF KI mice responded to BCG and lipopolysaccharide similarly to wild-type macrophages by NF-κB activation and cytokine production. While TNF-deficient mice rapidly succumbed to mycobacterial infection, huTNF KI mice survived, controlling the bacterial burden and activating bactericidal mechanisms. Administration of TNF-neutralizing biologics disrupted the control of mycobacterial infection in huTNF KI mice, leading to an increased bacterial burden and hyperinflammation. Thus, our findings demonstrate that human TNF can functionally replace murine TNF in vivo, providing mycobacterial resistance that could be compromised by TNF neutralization. This new animal model will be helpful for the testing of specific biologics neutralizing human TNF. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
Human reliability in petrochemical industry: an action research.
Silva, João Alexandre Pinheiro; Camarotto, João Alberto
2012-01-01
This paper aims to identify conflicts and gaps between the operators' strategies and actions and the organizational managerial approach for human reliability. In order to achieve these goals, the research approach adopted encompasses literature review, mixing action research methodology and Ergonomic Workplace Analysis in field research. The result suggests that the studied company has a classical and mechanistic point of view focusing on error identification and building barriers through procedures, checklists and other prescription alternatives to improve performance in reliability area. However, it was evident the fundamental role of the worker as an agent of maintenance and construction of system reliability during the action research cycle.
Software reliability models for critical applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pham, H.; Pham, M.
This report presents the results of the first phase of the ongoing EG G Idaho, Inc. Software Reliability Research Program. The program is studying the existing software reliability models and proposes a state-of-the-art software reliability model that is relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault tolerant software reliability models and their related issues, (2) proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the secondmore » place. 407 refs., 4 figs., 2 tabs.« less
Software reliability models for critical applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pham, H.; Pham, M.
This report presents the results of the first phase of the ongoing EG&G Idaho, Inc. Software Reliability Research Program. The program is studying the existing software reliability models and proposes a state-of-the-art software reliability model that is relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault tolerant software reliability models and their related issues, (2) proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the second place.more » 407 refs., 4 figs., 2 tabs.« less
Chang, Pyung Hun; Kang, Sang Hoon
2010-05-30
The basic assumption of stochastic human arm impedance estimation methods is that the human arm and robot behave linearly for small perturbations. In the present work, we have identified the degree of influence of nonlinear friction in robot joints to the stochastic human arm impedance estimation. Internal model based impedance control (IMBIC) is then proposed as a means to make the estimation accurate by compensating for the nonlinear friction. From simulations with a nonlinear Lugre friction model, it is observed that the reliability and accuracy of the estimation are severely degraded with nonlinear friction: below 2 Hz, multiple and partial coherence functions are far less than unity; estimated magnitudes and phases are severely deviated from that of a real human arm throughout the frequency range of interest; and the accuracy is not enhanced with an increase of magnitude of the force perturbations. In contrast, the combined use of stochastic estimation and IMBIC provides with accurate estimation results even with large friction: the multiple coherence functions are larger than 0.9 throughout the frequency range of interest and the estimated magnitudes and phases are well matched with that of a real human arm. Furthermore, the performance of suggested method is independent of human arm and robot posture, and human arm impedance. Therefore, the IMBIC will be useful in measuring human arm impedance with conventional robot, as well as in designing a spatial impedance measuring robot, which requires gearing. (c) 2010 Elsevier B.V. All rights reserved.
Combining human and machine processes (CHAMP)
NASA Astrophysics Data System (ADS)
Sudit, Moises; Sudit, David; Hirsch, Michael
2015-05-01
Machine Reasoning and Intelligence is usually done in a vacuum, without consultation of the ultimate decision-maker. The late consideration of the human cognitive process causes some major problems in the use of automated systems to provide reliable and actionable information that users can trust and depend to make the best Course-of-Action (COA). On the other hand, if automated systems are created exclusively based on human cognition, then there is a danger of developing systems that don't push the barrier of technology and are mainly done for the comfort level of selected subject matter experts (SMEs). Our approach to combining human and machine processes (CHAMP) is based on the notion of developing optimal strategies for where, when, how, and which human intelligence should be injected within a machine reasoning and intelligence process. This combination is based on the criteria of improving the quality of the output of the automated process while maintaining the required computational efficiency for a COA to be actuated in timely fashion. This research addresses the following problem areas: • Providing consistency within a mission: Injection of human reasoning and intelligence within the reliability and temporal needs of a mission to attain situational awareness, impact assessment, and COA development. • Supporting the incorporation of data that is uncertain, incomplete, imprecise and contradictory (UIIC): Development of mathematical models to suggest the insertion of a cognitive process within a machine reasoning and intelligent system so as to minimize UIIC concerns. • Developing systems that include humans in the loop whose performance can be analyzed and understood to provide feedback to the sensors.
The Syrian hamster model of hantavirus pulmonary syndrome
Safronetz, David; Ebihara, Hideki; Feldmann, Heinz; Hooper, Jay W.
2012-01-01
Hantavirus pulmonary syndrome (HPS) is a relatively rare, but frequently fatal disease associated with New World hantaviruses, most commonly Sin Nombre and Andes viruses in North and South America, respectively. It is characterized by fever and the sudden, rapid onset of severe respiratory distress and cardiogenic shock, which can be fatal in up to 50% of cases. Currently there are no approved antiviral therapies or vaccines for the treatment or prevention of HPS. A major obstacle in the development of effective medical countermeasures against highly pathogenic agents like the hantaviruses is recapitulating the human disease as closely as possible in an appropriate and reliable animal model. To date, the only animal model that resembles HPS in humans is the Syrian hamster model. Following infection with Andes virus, hamsters develop HPS-like disease which faithfully mimics the human condition with respect to incubation period and pathophysiology of disease. Perhaps most importantly, the sudden and rapid onset of severe respiratory distress observed in humans also occurs in hamsters. The last several years has seen an increase in studies utilizing the Andes virus hamster model which have provided unique insight into HPS pathogenesis as well as potential therapeutic and vaccine strategies to treat and prevent HPS. The purpose of this article is to review the current understanding of HPS disease progression in Syrian hamsters and discuss the suitability of utilizing this model to evaluate potential medical countermeasures against HPS. PMID:22705798
Evaluating sustainable energy harvesting systems for human implantable sensors
NASA Astrophysics Data System (ADS)
AL-Oqla, Faris M.; Omar, Amjad A.; Fares, Osama
2018-03-01
Achieving most appropriate energy-harvesting technique for human implantable sensors is still challenging for the industry where keen decisions have to be performed. Moreover, the available polymeric-based composite materials are offering plentiful renewable applications that can help sustainable development as being useful for the energy-harvesting systems such as photovoltaic, piezoelectric, thermoelectric devices as well as other energy storage systems. This work presents an expert-based model capable of better evaluating and examining various available renewable energy-harvesting techniques in urban surroundings subject to various technical and economic, often conflicting, criteria. Wide evaluation criteria have been adopted in the proposed model after examining their suitability as well as ensuring the expediency and reliability of the model by worldwide experts' feedback. The model includes establishing an analytic hierarchy structure with simultaneous 12 conflicting factors to establish a systematic road map for designers to better assess such techniques for human implantable medical sensors. The energy-harvesting techniques considered were limited to Wireless, Thermoelectric, Infrared Radiator, Piezoelectric, Magnetic Induction and Electrostatic Energy Harvesters. Results have demonstrated that the best decision was in favour of wireless-harvesting technology for the medical sensors as it is preferable by most of the considered evaluation criteria in the model.
New methods for analyzing semantic graph based assessments in science education
NASA Astrophysics Data System (ADS)
Vikaros, Lance Steven
This research investigated how the scoring of semantic graphs (known by many as concept maps) could be improved and automated in order to address issues of inter-rater reliability and scalability. As part of the NSF funded SENSE-IT project to introduce secondary school science students to sensor networks (NSF Grant No. 0833440), semantic graphs illustrating how temperature change affects water ecology were collected from 221 students across 16 schools. The graphing task did not constrain students' use of terms, as is often done with semantic graph based assessment due to coding and scoring concerns. The graphing software used provided real-time feedback to help students learn how to construct graphs, stay on topic and effectively communicate ideas. The collected graphs were scored by human raters using assessment methods expected to boost reliability, which included adaptations of traditional holistic and propositional scoring methods, use of expert raters, topical rubrics, and criterion graphs. High levels of inter-rater reliability were achieved, demonstrating that vocabulary constraints may not be necessary after all. To investigate a new approach to automating the scoring of graphs, thirty-two different graph features characterizing graphs' structure, semantics, configuration and process of construction were then used to predict human raters' scoring of graphs in order to identify feature patterns correlated to raters' evaluations of graphs' topical accuracy and complexity. Results led to the development of a regression model able to predict raters' scoring with 77% accuracy, with 46% accuracy expected when used to score new sets of graphs, as estimated via cross-validation tests. Although such performance is comparable to other graph and essay based scoring systems, cross-context testing of the model and methods used to develop it would be needed before it could be recommended for widespread use. Still, the findings suggest techniques for improving the reliability and scalability of semantic graph based assessments without requiring constraint of how ideas are expressed.
Site-specific to local-scale shallow landslides triggering zones assessment using TRIGRS
NASA Astrophysics Data System (ADS)
Bordoni, M.; Meisina, C.; Valentino, R.; Bittelli, M.; Chersich, S.
2015-05-01
Rainfall-induced shallow landslides are common phenomena in many parts of the world, affecting cultivation and infrastructure and sometimes causing human losses. Assessing the triggering zones of shallow landslides is fundamental for land planning at different scales. This work defines a reliable methodology to extend a slope stability analysis from the site-specific to local scale by using a well-established physically based model (TRIGRS-unsaturated). The model is initially applied to a sample slope and then to the surrounding 13.4 km2 area in Oltrepo Pavese (northern Italy). To obtain more reliable input data for the model, long-term hydro-meteorological monitoring has been carried out at the sample slope, which has been assumed to be representative of the study area. Field measurements identified the triggering mechanism of shallow failures and were used to verify the reliability of the model to obtain pore water pressure trends consistent with those measured during the monitoring activity. In this way, more reliable trends have been modelled for past landslide events, such as the April 2009 event that was assumed as a benchmark. The assessment of shallow landslide triggering zones obtained using TRIGRS-unsaturated for the benchmark event appears good for both the monitored slope and the whole study area, with better results when a pedological instead of geological zoning is considered at the regional scale. The sensitivity analyses of the influence of the soil input data show that the mean values of the soil properties give the best results in terms of the ratio between the true positive and false positive rates. The scheme followed in this work allows us to obtain better results in the assessment of shallow landslide triggering areas in terms of the reduction in the overestimation of unstable zones with respect to other distributed models applied in the past.
Differentiation of the SH-SY5Y Human Neuroblastoma Cell Line
Shipley, Mackenzie M.; Mangold, Colleen A.; Szpara, Moriah L.
2016-01-01
Having appropriate in vivo and in vitro systems that provide translational models for human disease is an integral aspect of research in neurobiology and the neurosciences. Traditional in vitro experimental models used in neurobiology include primary neuronal cultures from rats and mice, neuroblastoma cell lines including rat B35 and mouse Neuro-2A cells, rat PC12 cells, and short-term slice cultures. While many researchers rely on these models, they lack a human component and observed experimental effects could be exclusive to the respective species and may not occur identically in humans. Additionally, although these cells are neurons, they may have unstable karyotypes, making their use problematic for studies of gene expression and reproducible studies of cell signaling. It is therefore important to develop more consistent models of human neurological disease. The following procedure describes an easy-to-follow, reproducible method to obtain homogenous and viable human neuronal cultures, by differentiating the chromosomally stable human neuroblastoma cell line, SH-SY5Y. This method integrates several previously described methods1-4 and is based on sequential removal of serum from media. The timeline includes gradual serum-starvation, with introduction of extracellular matrix proteins and neurotrophic factors. This allows neurons to differentiate, while epithelial cells are selected against, resulting in a homogeneous neuronal culture. Representative results demonstrate the successful differentiation of SH-SY5Y neuroblastoma cells from an initial epithelial-like cell phenotype into a more expansive and branched neuronal phenotype. This protocol offers a reliable way to generate homogeneous populations of neuronal cultures that can be used for subsequent biochemical and molecular analyses, which provides researchers with a more accurate translational model of human infection and disease. PMID:26967710
Differentiation of the SH-SY5Y Human Neuroblastoma Cell Line.
Shipley, Mackenzie M; Mangold, Colleen A; Szpara, Moriah L
2016-02-17
Having appropriate in vivo and in vitro systems that provide translational models for human disease is an integral aspect of research in neurobiology and the neurosciences. Traditional in vitro experimental models used in neurobiology include primary neuronal cultures from rats and mice, neuroblastoma cell lines including rat B35 and mouse Neuro-2A cells, rat PC12 cells, and short-term slice cultures. While many researchers rely on these models, they lack a human component and observed experimental effects could be exclusive to the respective species and may not occur identically in humans. Additionally, although these cells are neurons, they may have unstable karyotypes, making their use problematic for studies of gene expression and reproducible studies of cell signaling. It is therefore important to develop more consistent models of human neurological disease. The following procedure describes an easy-to-follow, reproducible method to obtain homogenous and viable human neuronal cultures, by differentiating the chromosomally stable human neuroblastoma cell line, SH-SY5Y. This method integrates several previously described methods(1-4) and is based on sequential removal of serum from media. The timeline includes gradual serum-starvation, with introduction of extracellular matrix proteins and neurotrophic factors. This allows neurons to differentiate, while epithelial cells are selected against, resulting in a homogeneous neuronal culture. Representative results demonstrate the successful differentiation of SH-SY5Y neuroblastoma cells from an initial epithelial-like cell phenotype into a more expansive and branched neuronal phenotype. This protocol offers a reliable way to generate homogeneous populations of neuronal cultures that can be used for subsequent biochemical and molecular analyses, which provides researchers with a more accurate translational model of human infection and disease.
Age-dependent Fourier model of the shape of the isolated ex vivo human crystalline lens.
Urs, Raksha; Ho, Arthur; Manns, Fabrice; Parel, Jean-Marie
2010-06-01
To develop an age-dependent mathematical model of the zero-order shape of the isolated ex vivo human crystalline lens, using one mathematical function, that can subsequently be used to facilitate the development of other models for specific purposes such as optical modeling and analytical and numerical modeling of the lens. Profiles of whole isolated human lenses (n=30) aged 20-69 were measured from shadow-photogrammetric images. The profiles were fit to a 10th-order Fourier series consisting of cosine functions in a polar coordinate system that included terms for tilt and decentration. The profiles were corrected using these terms and processed in two ways. In the first, each lens was fit to a 10th-order Fourier series to obtain thickness and diameter, while in the second, all lenses were simultaneously fit to a Fourier series equation that explicitly includes linear terms for age to develop an age-dependent mathematical model for the whole lens shape. Thickness and diameter obtained from the Fourier series fits exhibited high correlation with manual measurements made from shadow-photogrammetric images. The root-mean-square error of the age-dependent fit was 205 µm. The age-dependent equations provide a reliable lens model for ages 20-60 years. The contour of the whole human crystalline lens can be modeled with a Fourier series. The shape obtained from the age-dependent model described in this paper can be used to facilitate the development of other models for specific purposes such as optical modeling and analytical and numerical modeling of the lens. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
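The zero-order fitting step described above reduces to ordinary least squares on a cosine design matrix. Below is a minimal Python sketch under simplifying assumptions: a synthetic contour stands in for the shadow-photogrammetric profiles, and the tilt, decentration, and age terms of the published model are omitted; all names and values are illustrative.

```python
import numpy as np

def fit_cosine_series(theta, r, order=10):
    """Least-squares fit of r(theta) = sum_k a_k * cos(k * theta)."""
    # Design matrix: one cosine column per harmonic (k = 0 is the DC term).
    A = np.column_stack([np.cos(k * theta) for k in range(order + 1)])
    coeffs, *_ = np.linalg.lstsq(A, r, rcond=None)
    return coeffs

def eval_series(coeffs, theta):
    A = np.column_stack([np.cos(k * theta) for k in range(len(coeffs))])
    return A @ coeffs

# Synthetic demonstration: a vaguely lens-like polar contour plus noise.
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
r_true = 4.5 - 1.2 * np.cos(2 * theta) + 0.15 * np.cos(4 * theta)
r_meas = r_true + np.random.default_rng(0).normal(0, 0.02, theta.size)

coeffs = fit_cosine_series(theta, r_meas, order=10)
rmse = np.sqrt(np.mean((eval_series(coeffs, theta) - r_true) ** 2))
print(f"fit RMSE: {rmse:.4f}")
```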
Integrating Reliability Analysis with a Performance Tool
NASA Technical Reports Server (NTRS)
Nicol, David M.; Palumbo, Daniel L.; Ulrey, Michael
1995-01-01
A large number of commercial simulation tools support performance oriented studies of complex computer and communication systems. Reliability of these systems, when desired, must be obtained by remodeling the system in a different tool. This has obvious drawbacks: (1) substantial extra effort is required to create the reliability model; (2) through modeling error the reliability model may not reflect precisely the same system as the performance model; (3) as the performance model evolves one must continuously reevaluate the validity of assumptions made in that model. In this paper we describe an approach, and a tool that implements this approach, for integrating a reliability analysis engine into a production quality simulation based performance modeling tool, and for modeling within such an integrated tool. The integrated tool allows one to use the same modeling formalisms to conduct both performance and reliability studies. We describe how the reliability analysis engine is integrated into the performance tool, describe the extensions made to the performance tool to support the reliability analysis, and consider the tool's performance.
NASA Astrophysics Data System (ADS)
Chen, Kun; Fu, Xing; Dorantes-Gonzalez, Dante J.; Lu, Zimo; Li, Tingting; Li, Yanning; Wu, Sen; Hu, Xiaotang
2014-07-01
Air pollution has been correlated with an increasing number of cases of human skin diseases in recent years. However, the investigation of human skin tissues has received only limited attention, to the point that there are not yet satisfactory modern detection technologies to accurately, noninvasively, and rapidly diagnose human skin at the epidermis and dermis levels. In order to detect and analyze severe skin diseases such as melanoma, a finite element method (FEM) simulation study of the application of the laser-generated surface acoustic wave (LSAW) technique is developed. A three-layer human skin model is built, where LSAWs are generated and propagated, and their effects in the skin medium with melanoma are analyzed. Frequency domain analysis is used as a main tool to investigate such issues as the minimum detectable size of melanoma, filtering spectra from noise and from computational irregularities, as well as how the FEM model meshing size and computational capabilities influence the accuracy of the results. Based on the aforementioned aspects, the analysis of the signals under the scrutiny of the phase velocity dispersion curve is verified to be a reliable, sensitive, and promising approach for detecting and characterizing melanoma in human skin.
Animal models of toxicology testing: the role of pigs.
Helke, Kristi L; Swindle, Marvin Michael
2013-02-01
In regulatory toxicological testing, both a rodent and a non-rodent species are required. Historically, dogs and non-human primates (NHP) have been the species of choice for the non-rodent portion of testing. The pig is an appropriate option for these tests based on the metabolic pathways utilized in xenobiotic biotransformation. This review focuses on the Phase I and Phase II biotransformation pathways in humans and pigs and highlights the similarities and differences of these models. This is a growing field and references are sparse. Numerous breeds of pigs are discussed, along with the known breed-specific differences in these enzymes. While much of the available data is presented, it is grossly incomplete and sometimes contradictory, depending on the methods used. There is no ideal species to use in toxicology. The use of dogs and NHP in xenobiotic testing continues to be the norm. Pigs present a viable and perhaps more reliable model for non-rodent testing.
Ji, Jie; Hedelin, Anna; Malmlöf, Maria; Kessler, Vadim; Seisenbaeva, Gulaim; Gerde, Per; Palmberg, Lena
2017-01-01
Exposure to agents via inhalation is of great concern both in the workplace environment and in daily contact with particles in the ambient air. Reliable human airway exposure systems will most likely replace animal experiments in future toxicity assessment studies of inhaled agents. In this study, we successfully established a combination of an exposure system (XposeALI) with 3D models mimicking both healthy and chronic bronchitis-like mucosa by co-culturing human primary bronchial epithelial cells (PBEC) and fibroblasts at the air-liquid interface (ALI). Light and confocal microscopy, scanning and transmission electron microscopy, transepithelial electrical resistance (TEER) measurement, and RT-PCR were performed to identify how the PBEC differentiated under ALI culture conditions. Both models were exposed to palladium (Pd) nanoparticles sized 6-10 nm, analogous to those released from modern car catalysts, at three different concentrations, utilizing the XposeALI module of the PreciseInhale® exposure system. Exposing the 3D models to Pd nanoparticles induced increased secretion of IL-8, yet the chronic bronchitis-like model released significantly more IL-8 than the normal model. The levels of IL-8 in basal medium (BM) and apical lavage medium (AM) were in the same ranges, but the secretion of MMP-9 was significantly higher in the AM compared to the BM. This combination of relevant human bronchial mucosa models and a sophisticated exposure system can mimic in vivo conditions and serve as a useful alternative to animal testing when studying adverse effects in humans exposed to aerosols, air pollutants or particles in an occupational setting.
Lin, Dexin; Wu, Xianbin; Ji, Xiaoke; Zhang, Qiyu; Lin, YuanWei; Chen, WeiJian; Jin, Wangxun; Deng, Liming; Chen, Yunzhi; Chen, Bicheng; Li, Jianmin
2012-01-01
Current large animal models that closely resemble the typical features of cirrhotic portal hypertension in humans have not been well established. Thus, we aimed to develop and describe a reliable and reproducible canine cirrhosis model of portal hypertension. A total of 30 mongrel dogs were randomly divided into four groups: 1 (control; n = 5), 2 (portal vein stenosis [PVS]; n = 5), 3 (thioacetamide [TAA]; n = 5), and 4 (PVS plus TAA; n = 15). After a 4-month modeling period, liver and spleen CT perfusion, abdominal CT scans, portal hemodynamics, gastroscopy, hepatic function, routine blood tests, and bone marrow, liver, and spleen histology were studied. The animals in group 2 (PVS) developed extrahepatic portosystemic collateral circulation, particularly esophageal varices, without hepatic cirrhosis and portal hypertension. Animals from group 3 (TAA) presented mild cirrhosis and portal hypertension without significant symptoms of esophageal varices and hypersplenism. In contrast, animals from group 4 (PVS + TAA) showed well-developed micronodular and macronodular cirrhosis, associated with significant portal hypertension and hypersplenism. The combination of PVS and TAA represents a novel, reliable, and reproducible canine cirrhosis model of portal hypertension, which is associated with the typical characteristics of portal hypertension, including hypersplenism.
Gude, J.A.; Mitchell, M.S.; Russell, R.E.; Sime, C.A.; Bangs, E.E.; Mech, L.D.; Ream, R.R.
2012-01-01
Reliable analyses can help wildlife managers make good decisions, which are particularly critical for controversial decisions such as wolf (Canis lupus) harvest. Creel and Rotella (2010) recently predicted substantial population declines in Montana wolf populations due to harvest, in contrast to predictions made by Montana Fish, Wildlife and Parks (MFWP). We replicated their analyses considering only those years in which field monitoring was consistent, and we considered the effect of annual variation in recruitment on wolf population growth. Rather than assuming constant rates, we used model selection methods to evaluate and incorporate models of factors driving recruitment and human-caused mortality rates in wolf populations in the Northern Rocky Mountains. Using data from 27 area-years of intensive wolf monitoring, we show that variation in both recruitment and human-caused mortality affect annual wolf population growth rates and that human-caused mortality rates have increased with the sizes of wolf populations. We document that recruitment rates have decreased over time, and we speculate that rates have decreased with increasing population sizes and/or that the ability of current field resources to document recruitment rates has recently become less successful as the number of wolves in the region has increased. Estimates of positive wolf population growth in Montana from our top models are consistent with field observations and estimates previously made by MFWP for 2008-2010, whereas the predictions for declining wolf populations of Creel and Rotella (2010) are not. Familiarity with limitations of raw data, obtained first-hand or through consultation with scientists who collected the data, helps generate more reliable inferences and conclusions in analyses of publicly available datasets. Additionally, development of efficient monitoring methods for wolves is a pressing need, so that analyses such as ours will be possible in future years when fewer resources will be available for monitoring. © 2011 The Wildlife Society.
Andersson, Tommy B
2017-10-01
The pharmaceutical industry urgently needs reliable pre-clinical models to evaluate the efficacy and safety of new chemical entities before they enter clinical trials. Development of in vitro model systems that emulate the functions of the human liver has been an elusive task. Cell lines exhibit a low drug-metabolizing capacity and primary liver cells rapidly dedifferentiate in culture, which restricts their usefulness substantially. Recently, the development of hepatocyte spheroid cultures has shown promising results. The proteome and transcriptome in the spheroids were similar to the liver tissue, and hepatotoxicity of selected substances was detected at in vivo-relevant concentrations. © 2017 Nordic Association for the Publication of BCPT (former Nordic Pharmacological Society).
Supervised interpretation of echocardiograms with a psychological model of expert supervision
NASA Astrophysics Data System (ADS)
Revankar, Shriram V.; Sher, David B.; Shalin, Valerie L.; Ramamurthy, Maya
1993-07-01
We have developed a collaborative scheme that facilitates active human supervision of the binary segmentation of an echocardiogram. The scheme complements the reliability of a human expert with the precision of segmentation algorithms. In the developed system, an expert user compares the computer generated segmentation with the original image in a user friendly graphics environment, and interactively indicates the incorrectly classified regions either by pointing or by circling. The precise boundaries of the indicated regions are computed by studying original image properties at that region, and a human visual attention distribution map obtained from the published psychological and psychophysical research. We use the developed system to extract contours of heart chambers from a sequence of two dimensional echocardiograms. We are currently extending this method to incorporate a richer set of inputs from the human supervisor, to facilitate multi-classification of image regions depending on their functionality. We are integrating into our system the knowledge related constraints that cardiologists use, to improve the capabilities of our existing system. This extension involves developing a psychological model of expert reasoning, functional and relational models of typical views in echocardiograms, and corresponding interface modifications to map the suggested actions to image processing algorithms.
Bayesian model selection validates a biokinetic model for zirconium processing in humans
2012-01-01
Background In radiation protection, biokinetic models for zirconium processing are of crucial importance in dose estimation and further risk analysis for humans exposed to this radioactive substance. They provide limiting values of detrimental effects and build the basis for applications in internal dosimetry, the prediction for radioactive zirconium retention in various organs as well as retrospective dosimetry. Multi-compartmental models are the tool of choice for simulating the processing of zirconium. Although easily interpretable, determining the exact compartment structure and interaction mechanisms is generally daunting. In the context of observing the dynamics of multiple compartments, Bayesian methods provide efficient tools for model inference and selection. Results We are the first to apply a Markov chain Monte Carlo approach to compute Bayes factors for the evaluation of two competing models for zirconium processing in the human body after ingestion. Based on in vivo measurements of human plasma and urine levels we were able to show that a recently published model is superior to the standard model of the International Commission on Radiological Protection. The Bayes factors were estimated by means of the numerically stable thermodynamic integration in combination with a recently developed copula-based Metropolis-Hastings sampler. Conclusions In contrast to the standard model the novel model predicts lower accretion of zirconium in bones. This results in lower levels of noxious doses for exposed individuals. Moreover, the Bayesian approach allows for retrospective dose assessment, including credible intervals for the initially ingested zirconium, in a significantly more reliable fashion than previously possible. All methods presented here are readily applicable to many modeling tasks in systems biology. PMID:22863152
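The evidence computation described above can be sketched compactly. The toy below replaces the multi-compartment biokinetic model with a one-parameter exponential decay and uses a plain Metropolis sampler on a power-posterior ladder (not the paper's copula-based sampler); the log evidence follows by trapezoidal integration of the mean log-likelihood over the inverse-temperature ladder. Repeating the computation for a competing model and differencing the two log evidences gives the log Bayes factor. All data and settings are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data standing in for plasma measurements; a real analysis would use
# multi-compartment ODE predictions, not a one-parameter exponential decay.
t = np.linspace(0.5, 10, 20)
y = np.exp(-0.4 * t) + rng.normal(0, 0.05, t.size)

def loglik(rate, sigma=0.05):
    resid = y - np.exp(-rate * t)
    return -0.5 * np.sum((resid / sigma) ** 2) - t.size * np.log(sigma * np.sqrt(2 * np.pi))

def log_prior(rate):
    return 0.0 if 0.0 < rate < 5.0 else -np.inf   # flat prior on (0, 5)

def mean_loglik_at(beta, n_steps=4000, step=0.1):
    """Metropolis sampling from the power posterior: L(theta)^beta * prior(theta)."""
    rate = 1.0
    cur = beta * loglik(rate) + log_prior(rate)
    trace = []
    for _ in range(n_steps):
        prop = rate + rng.normal(0, step)
        cand = beta * loglik(prop) + log_prior(prop)
        if np.log(rng.uniform()) < cand - cur:
            rate, cur = prop, cand
        trace.append(loglik(rate))
    return np.mean(trace[n_steps // 2:])          # discard burn-in half

# Thermodynamic integration: log Z is the integral of E_beta[log L] over beta.
betas = np.linspace(0, 1, 11) ** 5                # ladder concentrated near 0
means = np.array([mean_loglik_at(b) for b in betas])
log_evidence = np.sum(np.diff(betas) * (means[1:] + means[:-1]) / 2)
print(f"estimated log evidence: {log_evidence:.2f}")
```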
Reliability of reflectance measures in passive filters
NASA Astrophysics Data System (ADS)
Saldiva de André, Carmen Diva; Afonso de André, Paulo; Rocha, Francisco Marcelo; Saldiva, Paulo Hilário Nascimento; Carvalho de Oliveira, Regiani; Singer, Julio M.
2014-08-01
Measurements of optical reflectance in passive filters impregnated with a reactive chemical solution may be transformed to ozone concentrations via a calibration curve and constitute a low cost alternative for environmental monitoring, mainly to estimate human exposure. Given the possibility of errors caused by exposure bias, it is common to consider sets of m filters exposed during a certain period to estimate the latent reflectance on n different sample occasions at a certain location. Mixed models with sample occasions as random effects are useful to analyze data obtained under such setups. The intra-class correlation coefficient of the mean of the m measurements is an indicator of the reliability of the latent reflectance estimates. Our objective is to determine m in order to obtain a pre-specified reliability of the estimates, taking possible outliers into account. To illustrate the procedure, we consider an experiment conducted at the Laboratory of Experimental Air Pollution, University of São Paulo, Brazil (LPAE/FMUSP), where sets of m = 3 filters were exposed during 7 days on n = 9 different occasions at a certain location. The results show that the reliability of the latent reflectance estimates for each occasion obtained under homoskedasticity is k_m = 0.74. A residual analysis suggests that the within-occasion variance for two of the occasions should be different from the others. A refined model with two within-occasion variance components was considered, yielding k_m = 0.56 for these occasions and k_m = 0.87 for the remaining ones. To guarantee that all estimates have a reliability of at least 80% we require measurements on m = 10 filters on each occasion.
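The relationship between single-filter reliability and the reliability of the mean of m filters has the classical Spearman-Brown form, which makes the choice of m a one-line computation. A minimal sketch, assuming exchangeable replicates and homoskedastic within-occasion variance:

```python
import math

def mean_reliability(rho, m):
    """Spearman-Brown form: reliability of the mean of m exchangeable replicates."""
    return m * rho / (1 + (m - 1) * rho)

def filters_needed(rho, target):
    """Smallest m whose mean attains the target reliability."""
    return math.ceil(target * (1 - rho) / (rho * (1 - target)))

# Invert the formula at m = 3 to get the single-filter reliability implied
# by k_m = 0.56 (the noisier occasions), then ask how many filters reach 0.80.
k3 = 0.56
rho = k3 / (3 - 2 * k3)          # inverse of mean_reliability at m = 3
print(f"single-filter rho = {rho:.3f}, m needed = {filters_needed(rho, 0.80)}")
```

With k_m = 0.56 for the noisier occasions this gives a single-filter reliability near 0.30 and m = 10 for a 0.80 target, consistent with the requirement quoted in the abstract.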
Applications of Human Performance Reliability Evaluation Concepts and Demonstration Guidelines
1977-03-15
...ship stops dead in the water and the AN/SQS-26 operator recommends a new heading (000°). At T + 14 minutes, the target ship begins a hard turn to... (Remaining excerpt is table-of-contents residue; recoverable entry titles: ...Various Simulated Conditions; Human Reliability for Each Simulated Operator (Baseline Run); Human and Equipment Availability under...)
Trade-associated pathways of alien forest insect entries in Canada
Denys Yemshanov; Frank H. Koch; Mark Ducey; Klaus Koehler
2012-01-01
Long-distance introductions of new invasive species have often been driven by socioeconomic factors, such that traditional "biological" invasion models may not be capable of estimating spread fully and reliably. In this study we present a new methodology to characterize and predict pathways of human-assisted entries of alien forest insects. We have developed a...
How Haptic Size Sensations Improve Distance Perception
Battaglia, Peter W.; Kersten, Daniel; Schrater, Paul R.
2011-01-01
Determining distances to objects is one of the most ubiquitous perceptual tasks in everyday life. Nevertheless, it is challenging because the information from a single image confounds object size and distance. Though our brains frequently judge distances accurately, the underlying computations employed by the brain are not well understood. Our work illuminates these computations by formulating a family of probabilistic models that encompass a variety of distinct hypotheses about distance and size perception. We compare these models' predictions to a set of human distance judgments in an interception experiment and use Bayesian analysis tools to quantitatively select the best hypothesis on the basis of its explanatory power and robustness over experimental data. The central question is whether, and how, human distance perception incorporates size cues to improve accuracy. Our conclusions are: 1) humans incorporate haptic object size sensations for distance perception, 2) the incorporation of haptic sensations is suboptimal given their reliability, 3) humans use environmentally accurate size and distance priors, 4) distance judgments are produced by perceptual "posterior sampling". In addition, we compared our model's estimated sensory and motor noise parameters with previously reported measurements in the perceptual literature and found good correspondence between them. Taken together, these results represent a major step forward in establishing the computational underpinnings of human distance perception and the role of size information. PMID:21738457
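The "posterior sampling" account lends itself to a compact numerical sketch: a grid posterior over object size and distance given a noisy visual angle, a noisy haptic size measurement, and a distance prior, from which one sample is drawn per trial. The generative assumptions, noise levels, and prior below are illustrative placeholders, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)

# One simulated trial: visual angle is roughly size / distance plus noise,
# and a haptic measurement gives noisy direct access to object size.
true_size, true_dist = 0.06, 2.0                    # meters
angle_obs = true_size / true_dist + rng.normal(0, 0.002)
haptic_obs = true_size + rng.normal(0, 0.005)

# Grid posterior over (size, distance), then draw one posterior sample.
sizes = np.linspace(0.01, 0.12, 200)
dists = np.linspace(0.5, 5.0, 200)
S, D = np.meshgrid(sizes, dists)
loglik = (-(angle_obs - S / D) ** 2 / (2 * 0.002 ** 2)
          - (haptic_obs - S) ** 2 / (2 * 0.005 ** 2))
logprior = -(D - 2.5) ** 2 / (2 * 1.0 ** 2)         # environmental distance prior
post = np.exp(loglik + logprior - (loglik + logprior).max())
post /= post.sum()
idx = rng.choice(post.size, p=post.ravel())
print(f"sampled distance: {D.ravel()[idx]:.2f} m")
```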
Using a Hybrid Model to Forecast the Prevalence of Schistosomiasis in Humans.
Zhou, Lingling; Xia, Jing; Yu, Lijing; Wang, Ying; Shi, Yun; Cai, Shunxiang; Nie, Shaofa
2016-03-23
We previously proposed a hybrid model combining both the autoregressive integrated moving average (ARIMA) and the nonlinear autoregressive neural network (NARNN) models in forecasting schistosomiasis. Our purpose in the current study was to forecast the annual prevalence of human schistosomiasis in Yangxin County, using our ARIMA-NARNN model, thereby further confirming the reliability of our hybrid model. We used the ARIMA, NARNN and ARIMA-NARNN models to fit and forecast the annual prevalence of schistosomiasis. The modeling period covered the annual prevalence from 1956 to 2008, while the testing period covered 2009 to 2012. The mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) were used to measure model performance. We then reconstructed the hybrid model to forecast the annual prevalence from 2013 to 2016. The modeling and testing errors generated by the ARIMA-NARNN model were lower than those obtained from either the single ARIMA or NARNN models. The predicted annual prevalence from 2013 to 2016 demonstrated an initial decreasing trend, followed by an increase. The ARIMA-NARNN model can be well applied to analyze surveillance data for early warning systems for the control and elimination of schistosomiasis.
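The hybrid structure, a linear ARIMA stage whose residuals are modeled by a small autoregressive network, can be sketched as follows; the ARIMA order, lag count, and network size are illustrative rather than the study's fitted configuration, and scikit-learn's MLPRegressor stands in for a NARNN.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

def hybrid_forecast(series, order=(1, 1, 1), lags=3, horizon=4):
    """ARIMA captures the linear structure; a small autoregressive network
    models the ARIMA residuals; the hybrid forecast is their sum."""
    arima = ARIMA(series, order=order).fit()
    resid = np.asarray(arima.resid)
    # Lagged residual matrix: row j holds resid[j..j+lags-1], target resid[j+lags].
    X = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
    y = resid[lags:]
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, y)
    linear = np.asarray(arima.forecast(horizon))
    window, correction = list(resid[-lags:]), []
    for _ in range(horizon):
        nxt = float(net.predict(np.array(window[-lags:], dtype=float)[None, :])[0])
        correction.append(nxt)
        window.append(nxt)
    return linear + np.asarray(correction)

# Synthetic prevalence-like series (fractions), standing in for the annual data.
rng = np.random.default_rng(0)
t = np.arange(53)
series = 0.2 * np.exp(-t / 25) + 0.01 * np.sin(t / 3) + rng.normal(0, 0.005, t.size)
print(hybrid_forecast(series))
```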
Utility of Survival Motor Neuron ELISA for Spinal Muscular Atrophy Clinical and Preclinical Analyses
Kobayashi, Dione T.; Olson, Rory J.; Sly, Laurel; Swanson, Chad J.; Chung, Brett; Naryshkin, Nikolai; Narasimhan, Jana; Bhattacharyya, Anuradha; Mullenix, Michael; Chen, Karen S.
2011-01-01
Objectives Genetic defects leading to the reduction of the survival motor neuron protein (SMN) are a causal factor for Spinal Muscular Atrophy (SMA). While there are a number of therapies under evaluation as potential treatments for SMA, there is a critical lack of a biomarker method for assessing efficacy of therapeutic interventions, particularly those targeting upregulation of SMN protein levels. Towards this end we have engaged in developing an immunoassay capable of accurately measuring SMN protein levels in blood, specifically in peripheral blood mononuclear cells (PBMCs), as a tool for validating SMN protein as a biomarker in SMA. Methods A sandwich enzyme-linked immunosorbent assay (ELISA) was developed and validated for measuring SMN protein in human PBMCs and other cell lysates. Protocols for detection and extraction of SMN from transgenic SMA mouse tissues were also developed. Results The assay sensitivity for human SMN is 50 pg/mL. Initial analysis reveals that PBMCs yield enough SMN to analyze from blood volumes of less than 1 mL, and SMA Type I patients' PBMCs show ∼90% reduction of SMN protein compared to normal adults. The ELISA can reliably quantify SMN protein in human and mouse PBMCs and muscle, as well as brain, and spinal cord from a mouse model of severe SMA. Conclusions This SMN ELISA assay enables the reliable, quantitative and rapid measurement of SMN in healthy human and SMA patient PBMCs, muscle and fibroblasts. SMN was also detected in several tissues in a mouse model of SMA, as well as in wildtype mouse tissues. This SMN ELISA has general translational applicability to both preclinical and clinical research efforts. PMID:21904622
Rouse, Andrew A; Cook, Peter F; Large, Edward W; Reichmuth, Colleen
2016-01-01
Human capacity for entraining movement to external rhythms (i.e., beat keeping) is ubiquitous, but its evolutionary history and neural underpinnings remain a mystery. Recent findings of entrainment to simple and complex rhythms in non-human animals pave the way for a novel comparative approach to assess the origins and mechanisms of rhythmic behavior. The most reliable non-human beat keeper to date is a California sea lion, Ronan, who was trained to match head movements to isochronous repeating stimuli and showed spontaneous generalization of this ability to novel tempos and to the complex rhythms of music. Does Ronan's performance rely on the same neural mechanisms as human rhythmic behavior? In the current study, we presented Ronan with simple rhythmic stimuli at novel tempos. On some trials, we introduced "perturbations," altering either tempo or phase in the middle of a presentation. Ronan quickly adjusted her behavior following all perturbations, recovering her consistent phase and tempo relationships to the stimulus within a few beats. Ronan's performance was consistent with predictions of mathematical models describing coupled oscillation: a model relying solely on phase coupling strongly matched her behavior, and the model was further improved with the addition of period coupling. These findings are the clearest evidence yet for parity in human and non-human beat keeping and support the view that the human ability to perceive and move in time to rhythm may be rooted in broadly conserved neural mechanisms.
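The phase- and period-coupling account admits a compact discrete-time sketch: after each stimulus beat, the observed asynchrony feeds back into both the next response time (phase coupling) and the internal period (tempo coupling). The gains and stimulus timings below are illustrative, not fitted values.

```python
import numpy as np

def beat_keeper(onsets, period0, k_phase=0.5, k_period=0.2):
    """Linear phase/period correction model of beat keeping: each asynchrony
    adjusts the internal period and the timing of the next response."""
    taps, period = [onsets[0]], period0
    for n in range(1, len(onsets)):
        asyn = taps[-1] - onsets[n - 1]                   # positive = responding late
        period -= k_period * asyn                         # period (tempo) coupling
        taps.append(taps[-1] + period - k_phase * asyn)   # phase coupling
    return np.array(taps)

# Isochronous stimulus with a tempo perturbation halfway through.
onsets = np.concatenate([np.arange(0, 10, 0.5), 10 + np.arange(0, 10, 0.4)])
asyn = beat_keeper(onsets, period0=0.5) - onsets
print(np.round(asyn[18:28], 3))   # asynchronies grow at the switch, then decay
```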
NASA Technical Reports Server (NTRS)
Trejo, L. J.; Shensa, M. J.
1999-01-01
This report describes the development and evaluation of mathematical models for predicting human performance from discrete wavelet transforms (DWT) of event-related potentials (ERP) elicited by task-relevant stimuli. The DWT was compared to principal components analysis (PCA) for representation of ERPs in linear regression and neural network models developed to predict a composite measure of human signal detection performance. Linear regression models based on coefficients of the decimated DWT predicted signal detection performance with half as many free parameters as comparable models based on PCA scores. In addition, the DWT-based models were more resistant to model degradation due to over-fitting than PCA-based models. Feed-forward neural networks were trained using the backpropagation algorithm to predict signal detection performance based on raw ERPs, PCA scores, or high-power coefficients of the DWT. Neural networks based on high-power DWT coefficients trained with fewer iterations, generalized to new data better, and were more resistant to overfitting than networks based on raw ERPs. Networks based on PCA scores did not generalize to new data as well as either the DWT network or the raw ERP network. The results show that wavelet expansions represent the ERP efficiently and extract behaviorally important features for use in linear regression or neural network models of human performance. The efficiency of the DWT is discussed in terms of its decorrelation and energy compaction properties. In addition, the DWT models provided evidence that a pattern of low-frequency activity (1 to 3.5 Hz) occurring at specific times and scalp locations is a reliable correlate of human signal detection performance. Copyright 1999 Academic Press.
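The feature-extraction pipeline, a decimated DWT per epoch followed by selection of high-power coefficients and linear regression, can be sketched with PyWavelets and scikit-learn. The synthetic "ERPs", the variance-based power proxy, and all settings below are illustrative stand-ins for the report's measured data and composite performance measure.

```python
import numpy as np
import pywt
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for ERP epochs: a low-frequency bump whose amplitude
# drives a performance score, plus noise.
t = np.linspace(0, 1, 128)
amp = rng.uniform(0.5, 2.0, 200)
erps = amp[:, None] * np.exp(-((t - 0.4) ** 2) / 0.01) + rng.normal(0, 0.3, (200, 128))
score = 0.7 * amp + rng.normal(0, 0.1, 200)

# Decimated DWT per epoch, concatenating all subband coefficients.
C = np.vstack([np.concatenate(pywt.wavedec(e, "db4", level=4)) for e in erps])

# "High-power" coefficient selection, using training-set variance as a
# simple power proxy; the report's exact selection rule may differ.
keep = np.argsort(C[:150].var(axis=0))[::-1][:16]
X = C[:, keep]

model = LinearRegression().fit(X[:150], score[:150])
print(f"held-out R^2: {model.score(X[150:], score[150:]):.2f}")
```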
Incorporating human-water dynamics in a hyper-resolution land surface model
NASA Astrophysics Data System (ADS)
Vergopolan, N.; Chaney, N.; Wanders, N.; Sheffield, J.; Wood, E. F.
2017-12-01
The increasing demand for water, energy, and food is leading to unsustainable groundwater and surface water exploitation. As a result, the human interactions with the environment, through alteration of land and water resources dynamics, need to be reflected in hydrologic and land surface models (LSMs). Advancements in representing human-water dynamics still leave challenges related to the lack of water use data, water allocation algorithms, and modeling scales. This leads to an over-simplistic representation of human water use in large-scale models; this in turn leads to an inability to capture extreme events signatures and to provide reliable information at stakeholder-level spatial scales. The emergence of hyper-resolution models allows one to address these challenges by simulating the hydrological processes and interactions with the human impacts at field scales. We integrated human-water dynamics into HydroBlocks - a hyper-resolution, field-scale resolving LSM. HydroBlocks explicitly solves the field-scale spatial heterogeneity of land surface processes through interacting hydrologic response units (HRUs); and its HRU-based model parallelization allows computationally efficient long-term simulations as well as ensemble predictions. The implemented human-water dynamics include groundwater and surface water abstraction to meet agricultural, domestic and industrial water demands. Furthermore, a supply-demand water allocation scheme based on relative costs helps to determine sectoral water use requirements and tradeoffs. A set of HydroBlocks simulations over the Midwest United States (daily, at 30-m spatial resolution for 30 years) is used to quantify the irrigation impacts on water availability. The model captures large reductions in total soil moisture and water table levels, as well as spatiotemporal changes in evapotranspiration and runoff peaks, with their intensity related to the adopted water management strategy. By incorporating human-water dynamics in a hyper-resolution LSM this work allows for progress on hydrological monitoring and predictions, as well as drought preparedness and water impact assessments at relevant decision-making scales.
Mejia, Amanda F; Nebel, Mary Beth; Barber, Anita D; Choe, Ann S; Pekar, James J; Caffo, Brian S; Lindquist, Martin A
2018-05-15
Reliability of subject-level resting-state functional connectivity (FC) is determined in part by the statistical techniques employed in its estimation. Methods that pool information across subjects to inform estimation of subject-level effects (e.g., Bayesian approaches) have been shown to enhance reliability of subject-level FC. However, fully Bayesian approaches are computationally demanding, while empirical Bayesian approaches typically rely on using repeated measures to estimate the variance components in the model. Here, we avoid the need for repeated measures by proposing a novel measurement error model for FC describing the different sources of variance and error, which we use to perform empirical Bayes shrinkage of subject-level FC towards the group average. In addition, since the traditional intra-class correlation coefficient (ICC) is inappropriate for biased estimates, we propose a new reliability measure denoted the mean squared error intra-class correlation coefficient (ICC_MSE) to properly assess the reliability of the resulting (biased) estimates. We apply the proposed techniques to test-retest resting-state fMRI data on 461 subjects from the Human Connectome Project to estimate connectivity between 100 regions identified through independent components analysis (ICA). We consider both correlation and partial correlation as the measure of FC and assess the benefit of shrinkage for each measure, as well as the effects of scan duration. We find that shrinkage estimates of subject-level FC exhibit substantially greater reliability than traditional estimates across various scan durations, even for the most reliable connections and regardless of connectivity measure. Additionally, we find partial correlation reliability to be highly sensitive to the choice of penalty term, and to be generally worse than that of full correlations except for certain connections and a narrow range of penalty values. This suggests that the penalty needs to be chosen carefully when using partial correlations. Copyright © 2018. Published by Elsevier Inc.
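The shrinkage idea, pulling noisy subject-level estimates toward the group average with a weight set by the noise-to-signal ratio, can be illustrated with a generic James-Stein-style sketch. This is not the paper's measurement-error model, and the variance components that it estimates from the data are passed in as known here.

```python
import numpy as np

def shrink_fc(subject_fc, var_within, var_between):
    """Shrink each subject's connectivity estimates toward the group mean,
    with weight lam = noise variance / (noise + signal variance)."""
    group_mean = subject_fc.mean(axis=0)
    lam = var_within / (var_within + var_between)
    return lam * group_mean + (1 - lam) * subject_fc

# Toy check: 20 subjects x 10 edges; shrinkage lowers error vs the truth.
rng = np.random.default_rng(0)
group = rng.normal(0.3, 0.1, 10)                   # group-level edge means
true_subj = group + rng.normal(0, 0.2, (20, 10))   # subject-level deviations
obs = true_subj + rng.normal(0, 0.3, (20, 10))     # measurement noise
shrunk = shrink_fc(obs, var_within=0.3 ** 2, var_between=0.2 ** 2)
print(np.mean((obs - true_subj) ** 2), np.mean((shrunk - true_subj) ** 2))
```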
Integrative analyses of human reprogramming reveal dynamic nature of induced pluripotency
Cacchiarelli, Davide; Trapnell, Cole; Ziller, Michael J.; Soumillon, Magali; Cesana, Marcella; Karnik, Rahul; Donaghey, Julie; Smith, Zachary D.; Ratanasirintrawoot, Sutheera; Zhang, Xiaolan; Ho Sui, Shannan J.; Wu, Zhaoting; Akopian, Veronika; Gifford, Casey A.; Doench, John; Rinn, John L.; Daley, George Q.; Meissner, Alexander; Lander, Eric S.; Mikkelsen, Tarjei S.
2015-01-01
Summary Induced pluripotency is a promising avenue for disease modeling and therapy, but the molecular principles underlying this process, particularly in human cells, remain poorly understood due to donor-to-donor variability and intercellular heterogeneity. Here we constructed and characterized a clonal, inducible human reprogramming system that provides a reliable source of cells at any stage of the process. This system enabled integrative transcriptional and epigenomic analysis across the human reprogramming timeline at high resolution. We observed distinct waves of gene network activation, including the ordered reactivation of broad developmental regulators followed by early embryonic patterning genes and culminating in the emergence of a signature reminiscent of pre-implantation stages. Moreover, complementary functional analyses allowed us to identify and validate novel regulators of the reprogramming process. Altogether, this study sheds light on the molecular underpinnings of induced pluripotency in human cells and provides a robust cell platform for further studies. PMID:26186193
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mays, S.E.; Poloski, J.P.; Sullivan, W.H.
1982-07-01
This report describes a risk study of the Browns Ferry, Unit 1, nuclear plant. The study is one of four such studies sponsored by the NRC Office of Research, Division of Risk Assessment, as part of its Interim Reliability Evaluation Program (IREP), Phase II. This report is contained in four volumes: a main report and three appendixes. Appendix B provides a description of Browns Ferry, Unit 1, plant systems and the failure evaluation of those systems as they apply to accidents at Browns Ferry. Information is presented concerning front-line system fault analysis; support system fault analysis; human error models and probabilities; and generic control circuit analyses.
Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chassin, David P.; Posse, Christian
2005-09-15
The reliability of electric transmission systems is examined using a scale-free model of network topology and failure propagation. The topologies of the North American eastern and western electric grids are analyzed to estimate their reliability based on the Barabási-Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using standard power engineering methods, and they suggest that scale-free network models are usable to estimate aggregate electric grid reliability.
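A toy version of the topology side of this analysis is easy to set up with networkx: generate a Barabási-Albert graph and estimate a connectivity-based reliability proxy under random node failures. This stand-in uses independent failures and largest-component size rather than the paper's failure-propagation model and power-system reliability index.

```python
import random
import networkx as nx

def connectivity_reliability(n=200, m=2, p_fail=0.05, n_trials=100, seed=0):
    """Mean fraction of nodes left in the largest connected component of a
    Barabasi-Albert graph after independent random node failures."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        g = nx.barabasi_albert_graph(n, m, seed=rng.randrange(10 ** 6))
        g.remove_nodes_from([v for v in list(g) if rng.random() < p_fail])
        largest = max((len(c) for c in nx.connected_components(g)), default=0)
        total += largest / n
    return total / n_trials

print(f"mean surviving fraction: {connectivity_reliability():.3f}")
```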
The isolated perfused human skin flap model: A missing link in skin penetration studies?
Ternullo, Selenia; de Weerd, Louis; Flaten, Gøril Eide; Holsæter, Ann Mari; Škalko-Basnet, Nataša
2017-01-01
Development of effective (trans)dermal drug delivery systems requires reliable skin models to evaluate skin drug penetration. The isolated perfused human skin flap remains metabolically active tissue for up to 6 h during in vitro perfusion. We introduce the isolated perfused human skin flap as a close-to-in vivo skin penetration model. To validate the model's ability to evaluate skin drug penetration, solutions of a hydrophilic (calcein) and a lipophilic (rhodamine) fluorescence marker were applied. The skin flaps were perfused with modified Krebs-Henseleit buffer (pH 7.4). Infrared technology was used to monitor perfusion and to select a well-perfused skin area for administration of the markers. Flap perfusion and physiological parameters were maintained constant during the 6 h experiments and the amount of markers in the perfusate was determined. Calcein was detected in the perfusate, whereas rhodamine was not detectable. Confocal images of skin cross-sections showed that calcein was uniformly distributed through the skin, whereas rhodamine accumulated in the stratum corneum. For comparison, the penetration of both markers was evaluated on ex vivo human skin, pig skin and cellophane membrane. The proposed perfused flap model enabled us to distinguish between the penetration of the two markers and could be a promising close-to-in vivo tool in skin penetration studies and optimization of formulations destined for skin administration. Copyright © 2016 Elsevier B.V. All rights reserved.
Modelling large scale human activity in San Francisco
NASA Astrophysics Data System (ADS)
Gonzalez, Marta
2010-03-01
Diverse groups of people with a wide variety of schedules, activities, and travel needs compose our cities nowadays. This represents a big challenge for modeling travel behaviors in urban environments; those models are of crucial interest for a wide variety of applications such as traffic forecasting, the spreading of viruses, or measuring human exposure to air pollutants. The traditional means of obtaining knowledge about travel behavior is limited to surveys on travel journeys. The information obtained is based on questionnaires that are usually costly to implement, have intrinsic limitations in covering large numbers of individuals, and raise some problems of reliability. Using mobile phone data, we explore the basic characteristics of a model of human travel: the distribution of agents is proportional to the population density of a given region, and each agent has a characteristic trajectory size that contains information on the frequency of visits to different locations. Additionally, we use a complementary data set from smart subway fare cards that provides the exact time and station coordinates at which each passenger enters or exits the subway. This allows us to uncover the temporal aspects of mobility. Since we have the actual time and place of each individual's origin and destination, we can understand the temporal patterns at each visited location in further detail. Integrating the two data sets, we provide a dynamical model of human travel that incorporates the different aspects observed empirically.
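Two of the model ingredients described above, agents placed proportionally to population density and a per-agent characteristic trajectory size, can be sketched in a few lines; the grid, densities, and heavy-tailed parameters below are invented placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Agents are placed proportionally to population density on a toy grid.
density = np.array([[1, 2, 8], [3, 9, 4], [1, 2, 5]], dtype=float)
cells = np.arange(density.size)
homes = rng.choice(cells, size=1000, p=(density / density.sum()).ravel())

# Each agent gets a characteristic trajectory size from a heavy-tailed
# (truncated Pareto) distribution: most agents travel within a small
# radius while a few range widely.
radius = (rng.pareto(1.5, 1000) + 1.0).clip(max=50.0)

print(np.bincount(homes, minlength=density.size))  # agents per cell
print(f"median radius {np.median(radius):.1f}, mean radius {radius.mean():.1f}")
```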
Modeling Reliability Growth in Accelerated Stress Testing
2013-12-01
Dissertation by Jason K. Freels (report no. AFIT-ENS-DS-13-D-02); distribution unlimited.
Hybrid automated reliability predictor integrated work station (HiREL)
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.
1991-01-01
The Hybrid Automated Reliability Predictor (HARP) integrated reliability (HiREL) workstation tool system marks another step toward the goal of producing a totally integrated computer aided design (CAD) workstation design capability. Since a reliability engineer must generally graphically represent a reliability model before he can solve it, the use of a graphical input description language increases productivity and decreases the incidence of error. The captured image displayed on a cathode ray tube (CRT) screen serves as a documented copy of the model and provides the data for automatic input to the HARP reliability model solver. The introduction of dependency gates to a fault tree notation allows the modeling of very large fault tolerant system models using a concise and visually recognizable and familiar graphical language. In addition to aiding in the validation of the reliability model, the concise graphical representation presents company management, regulatory agencies, and company customers a means of expressing a complex model that is readily understandable. The graphical postprocessor computer program HARPO (HARP Output) makes it possible for reliability engineers to quickly analyze huge amounts of reliability/availability data to observe trends due to exploratory design changes.
Human Factors in Financial Trading: An Analysis of Trading Incidents.
Leaver, Meghan; Reader, Tom W
2016-09-01
This study tests the reliability of a system (FINANS) to collect and analyze incident reports in the financial trading domain and is guided by a human factors taxonomy used to describe error in the trading domain. Research indicates the utility of applying human factors theory to understand error in finance, yet empirical research is lacking. We report on the development of the first system for capturing and analyzing human factors-related issues in operational trading incidents. In the first study, 20 incidents are analyzed by an expert user group against a referent standard to establish the reliability of FINANS. In the second study, 750 incidents are analyzed using distribution, mean, pathway, and associative analysis to describe the data. Kappa scores indicate that categories within FINANS can be reliably used to identify and extract data on human factors-related problems underlying trading incidents. Approximately 1% of trades (n = 750) lead to an incident. Slip/lapse (61%), situation awareness (51%), and teamwork (40%) were found to be the most common problems underlying incidents. For the most serious incidents, problems in situation awareness and teamwork were most common. We show that (a) experts in the trading domain can reliably and accurately code human factors in incidents, (b) 1% of trades incur error, and (c) poor teamwork skills and situation awareness underpin the most critical incidents. This research provides data crucial for ameliorating risk within financial trading organizations, with implications for regulation and policy. © 2016, Human Factors and Ergonomics Society.
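The reliability figures reported for FINANS are inter-coder agreement statistics; a plain Cohen's kappa for two coders' category labels can be computed as below (a generic sketch with invented labels, not the study's analysis code).

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' category labels: observed agreement
    corrected for the agreement expected from each coder's label rates."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Invented labels for six incidents coded by two analysts.
a = ["slip", "slip", "teamwork", "sa", "slip", "teamwork"]
b = ["slip", "sa", "teamwork", "sa", "slip", "slip"]
print(round(cohens_kappa(a, b), 3))
```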
Astashkina, Anna; Grainger, David W
2014-04-01
Drug failure due to toxicity indicators remains among the primary reasons for staggering drug attrition rates during clinical studies and post-marketing surveillance. Broader validation and use of next-generation 3-D improved cell culture models are expected to improve predictive power and effectiveness of drug toxicological predictions. However, after decades of promising research significant gaps remain in our collective ability to extract quality human toxicity information from in vitro data using 3-D cell and tissue models. Issues, challenges and future directions for the field to improve drug assay predictive power and reliability of 3-D models are reviewed. Copyright © 2014 Elsevier B.V. All rights reserved.
Are animal models predictive for human postmortem muscle protein degradation?
Ehrenfellner, Bianca; Zissler, Angela; Steinbacher, Peter; Monticelli, Fabio C; Pittner, Stefan
2017-11-01
The most precise possible determination of the postmortem interval (PMI) is a crucial aspect of forensic casework. Although there are diverse approaches available to date, the high heterogeneity of cases together with the respective postmortal changes often limits the validity and sufficiency of many methods. Recently, a novel approach for time since death estimation by the analysis of postmortal changes of muscle proteins was proposed. It is however necessary to improve its reliability and accuracy, especially through analysis of possible influencing factors on protein degradation. This is ideally investigated on standardized animal models, which, however, require legitimization by a comparison of human and animal tissue, in this specific case of protein degradation profiles. Only if protein degradation events occur in comparable fashion within different species can respective findings sufficiently be transferred from the animal model to application in humans. Therefore, samples from two frequently used animal models (mouse and pig), as well as from forensic cases with representative protein profiles of highly differing PMIs, were analyzed. Despite physical and physiological differences between species, western blot analysis revealed similar patterns in most of the investigated proteins. Even most degradation events occurred in comparable fashion. In some other aspects, however, human and animal profiles depicted distinct differences. The results of this experimental series clearly indicate the huge importance of comparative studies whenever animal models are considered. Although animal models could be shown to reflect the basic principles of protein degradation processes in humans, we also gained insight into the difficulties and limitations of the applicability of the developed methodology in different mammalian species regarding protein specificity and methodic functionality.
Luu-The, Van; Duche, Daniel; Ferraris, Corinne; Meunier, Jean-Roch; Leclaire, Jacques; Labrie, Fernand
2009-09-01
Episkin and the full-thickness model from Episkin (FTM) are human skin models obtained from the in vitro growth of keratinocytes into the five typical layers of the epidermis. FTM is a full-thickness reconstructed skin model that also contains fibroblasts seeded in a collagen matrix. To assess whether enzymes involved in chemical detoxification are expressed in Episkin and FTM and how their levels compare with the human epidermis, dermis and total skin. Quantification of the mRNA expression levels of phase 1 and 2 metabolizing enzymes in cultured Episkin and FTM and human epidermis, dermis and total skin using real-time PCR. The data show that the expression profiles of 61 phase 1 and 2 metabolizing enzymes in Episkin, FTM and epidermis are generally similar, with some exceptions. Cytochrome P450-dependent enzymes and flavin monooxygenases are expressed at low levels, while phase 2 metabolizing enzymes are expressed at much higher levels, especially glutathione-S-transferase P1 (GSTP1), catechol-O-methyl transferase (COMT), steroid sulfotransferase (SULT2B1b), and N-acetyl transferase (NAT5). The present study also identifies the presence of many enzymes involved in cholesterol, arachidonic acid, leukotriene, prostaglandin, eicosatrienoic acid, and vitamin D3 metabolism. The present data strongly suggest that Episkin and FTM represent reliable and valuable in vitro human skin models for studying the function of phase 1 and 2 metabolizing enzymes in xenobiotic metabolism. They could be used to replace invasive methods or laboratory animals for skin experiments.
Effect of bulk modulus on deformation of the brain under rotational accelerations
NASA Astrophysics Data System (ADS)
Ganpule, S.; Daphalapurkar, N. P.; Cetingul, M. P.; Ramesh, K. T.
2018-01-01
Traumatic brain injury such as that developed as a consequence of blast is a complex injury with a broad range of symptoms and disabilities. Computational models of brain biomechanics hold promise for illuminating the mechanics of traumatic brain injury and for developing preventive devices. However, reliable material parameters are needed for models to be predictive. Unfortunately, the properties of human brain tissue are difficult to measure, and the bulk modulus of brain tissue in particular is not well characterized. Thus, a wide range of bulk modulus values is used in computational models of brain biomechanics, spanning up to three orders of magnitude in the differences between values. However, the sensitivity of computational predictions to these variations is not known. In this work, we study the sensitivity of a 3D computational human head model to various bulk modulus values. A subject-specific human head model was constructed from T1-weighted MRI images at 2-mm³ voxel resolution. Diffusion tensor imaging provided data on the spatial distribution and orientation of axonal fiber bundles for modeling white matter anisotropy. Non-injurious, full-field brain deformations in a human volunteer were used to assess the simulated predictions. The comparison suggests that a bulk modulus value on the order of GPa gives the best agreement with experimentally measured in vivo deformations in the human brain. Further, simulations of injurious loading suggest that bulk modulus values on the order of GPa provide the closest match with the clinical findings in terms of predicted injured regions and extent of injury.
Bernerd, Francoise; Marionnet, Claire; Duval, Christine
2012-06-01
Cutaneous damage such as sunburn, pigmentation, and photoaging is known to be induced by acute as well as repetitive sun exposure. Not only for basic research, but also for the design of the most efficient photoprotection, it is crucial to understand and identify the early biological events occurring after ultraviolet (UV) exposure. Reconstructed human skin models provide excellent and reliable in vitro tools to study the UV-induced alterations of the different skin cell types, keratinocytes, fibroblasts, and melanocytes in a dose- and time-dependent manner. Using different in vitro human skin models, the effects of UV light (UVB and UVA) were investigated. UVB-induced damage is essentially epidermal, with the typical sunburn cells and DNA lesions, whereas UVA radiation-induced damage is mostly located within the dermal compartment. Pigmentation can also be obtained after solar simulated radiation exposure of a pigmented reconstructed skin model. These models are also well suited to assessing the potential of sunscreens to protect the skin from UV-associated damage, sunburn reaction, photoaging, and pigmentation. The results showed that effective photoprotection is provided by broad-spectrum sunscreens with potent absorption in both the UVB and UVA ranges.
Muccino, Enrico; Porta, Davide; Magli, Francesca; Cigada, Alfredo; Sala, Remo; Gibelli, Daniele; Cattaneo, Cristina
2013-09-01
As the literature is poor in functional synthetic cranial models, in this study, synthetic handmade models of cranial vaults were produced in two different materials (a urethane resin and a self-hardening foam), from multiple bone specimens (eight original cranial vaults: four human and four swine), in order to test how closely they resemble bone in their behavior during fracture formation. All the vaults were mechanically tested with a 2-kg impact weight and filmed with a high-speed camera. Fracture patterns were homogeneous in all swine vaults and heterogeneous in human vaults, with resin fractures more similar to bone fractures. Mean fracture latency times extracted from the videos were 0.75 msec (bone), 1.5 msec (resin), and 5.12 msec (foam) for human vaults, and 0.625 msec (bone), 1.87 msec (resin), and 3.75 msec (foam) for swine vaults. These data showed that resin models are more similar to bone than foam reproductions, but that synthetic material may behave quite differently from bone as concerns fracture latency times. © 2013 American Academy of Forensic Sciences.
A Reliability Estimation in Modeling Watershed Runoff With Uncertainties
NASA Astrophysics Data System (ADS)
Melching, Charles S.; Yen, Ben Chie; Wenzel, Harry G., Jr.
1990-10-01
The reliability of simulation results produced by watershed runoff models is a function of uncertainties in nature, data, model parameters, and model structure. A framework is presented here for using a reliability analysis method (such as first-order second-moment techniques or Monte Carlo simulation) to evaluate the combined effect of the uncertainties on the reliability of output hydrographs from hydrologic models. For a given event the prediction reliability can be expressed in terms of the probability distribution of the estimated hydrologic variable. The peak discharge probability for a watershed in Illinois using the HEC-1 watershed model is given as an example. The study of the reliability of predictions from watershed models provides useful information on the stochastic nature of output from deterministic models subject to uncertainties and identifies the relative contribution of the various uncertainties to unreliability of model predictions.
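The Monte Carlo branch of such a reliability analysis reduces to sampling uncertain inputs, pushing them through the runoff model, and reading exceedance probabilities off the output distribution. The sketch below swaps HEC-1 for a deliberately crude curve-number response function; all distributions and parameter values are invented.

```python
import numpy as np

def peak_discharge(cn, rain_mm, area_km2=50.0):
    """Very simplified SCS curve-number runoff plus a crude peak proxy,
    standing in for a full watershed model such as HEC-1."""
    s = 25400.0 / cn - 254.0                      # potential retention (mm)
    runoff = max(rain_mm - 0.2 * s, 0.0) ** 2 / (rain_mm + 0.8 * s)
    return 0.278 * runoff * area_km2 / 6.0        # peak proxy for a 6 h event (m^3/s)

# Monte Carlo uncertainty propagation: sample uncertain inputs, propagate,
# then read probabilities off the output sample.
rng = np.random.default_rng(0)
cn = rng.normal(75, 5, 10_000).clip(40, 98)       # curve-number uncertainty
rain = rng.lognormal(np.log(80), 0.25, 10_000)    # event rainfall (mm)
peaks = np.array([peak_discharge(c, r) for c, r in zip(cn, rain)])
q90 = np.quantile(peaks, 0.9)
print(f"P(peak > {q90:.0f} m^3/s) is about 0.10")
```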
Estimate of the Reliability in Geological Forecasts for Tunnels: Toward a Structured Approach
NASA Astrophysics Data System (ADS)
Perello, Paolo
2011-11-01
In tunnelling, a reliable geological model often makes it possible to provide an effective design and to face the construction phase without unpleasant surprises. A geological model can be considered reliable when it is a valid support for correctly foreseeing the rock mass behaviour, therefore preventing unexpected events during the excavation. The higher the model reliability, the lower the probability of unforeseen rock mass behaviour. Unfortunately, owing to different reasons, geological models are affected by uncertainties and a fully reliable knowledge of the rock mass is, in most cases, impossible. Therefore, estimating to which degree a geological model is reliable becomes a primary requirement in order to save time and money and to adopt the appropriate construction strategy. The definition of the geological model reliability is often achieved by engineering geologists through an unstructured analytical process and variable criteria. This paper focuses on geological models for projects of linear underground structures and represents an effort to analyse and include in a conceptual framework the factors influencing such models. An empirical parametric procedure is then developed with the aim of obtaining an index called "geological model rating (GMR)", which can be used to provide a more standardised definition of a geological model's reliability.
Sellei, R M; Hingmann, S J; Kobbe, P; Weber, C; Grice, J E; Zimmerman, F; Jeromin, S; Gansslen, A; Hildebrand, F; Pape, H C
2015-01-01
PURPOSE OF THE STUDY Decision-making in the treatment of acute compartment syndrome is based on clinical assessment, supported by invasive monitoring. Thus, an evolving compartment syndrome may require repeated pressure measurements. In suspected cases of potential compartment syndrome, clinical assessment alone appears unreliable. The objective of this study was to investigate the feasibility of a non-invasive application that estimates whole compartmental elasticity by ultrasound, which may improve diagnostic accuracy. MATERIAL AND METHODS In an in-vitro model using an artificial container simulating the dimensions of the human anterior tibial compartment, intracompartmental pressures (p) were raised successively to 80 mm Hg by infusion of saline solution. The compartmental depth (mm) in the cross-sectional view was measured before and after manual probe compression (100 mm Hg) upon the surface, yielding a linear compartmental displacement (Δd). This was repeated at rising compartmental pressures. The resulting displacements were related to the corresponding intra-compartmental pressures simulated in our model. A hypothesized relationship between the pressure-related compartmental displacement and the elasticity at elevated compartment pressures was investigated. RESULTS With rising compartmental pressures, a non-linear, reciprocally proportional relation between the displacement (mm) and the intra-compartmental pressure (mm Hg) occurred. Pearson's coefficient showed a high correlation (r = -0.960). The intra-observer reliability value kappa indicated a statistically high reliability (κ = 0.840). The inter-observer value indicated a fair reliability (κ = 0.640). CONCLUSIONS Our model reveals that a strong correlation occurs between compartmental strain displacements assessed by ultrasound and intra-compartmental pressure changes. Further studies are required to prove whether this assessment is transferable to human muscle tissue. By determining whole compartmental elasticity with ultrasound, this application may improve detection of early signs of potential compartment syndrome. Key words: compartment syndrome, intra-compartmental pressure, non-invasive diagnostic, elasticity measurement, elastography.
Animal models of Helicobacter-induced disease: methods to successfully infect the mouse.
Taylor, Nancy S; Fox, James G
2012-01-01
Animal models of microbial diseases of humans are an essential component for fulfilling Koch's postulates and for determining how the organism causes disease, the host response(s), disease prevention, and treatment. In the case of Helicobacter pylori, establishing an animal model to fulfill Koch's postulates initially proved so challenging that, out of frustration, a human volunteer undertook an experiment to become infected with H. pylori and to monitor disease progression in order to determine whether it did cause gastritis. For the discovery of the organism and his fulfillment of Koch's postulates, he and a colleague were awarded the Nobel Prize in Medicine. After H. pylori was established as a gastric pathogen, it took several years before a model was developed in mice, opening the study of the organism and its pathogenicity to the general scientific community. However, while the model is widely utilized, there are a number of difficulties that can arise and need to be overcome. The purpose of this chapter is to raise awareness of these problems and to offer reliable protocols for successfully establishing the H. pylori mouse model.
Hay, Justin L; Okkerse, Pieter; van Amerongen, Guido; Groeneveld, Geert Jan
2016-04-14
Human pain models are useful for assessing the analgesic effect of drugs, providing information about a drug's pharmacology, and identifying potentially suitable therapeutic populations. The need to use a comprehensive battery of pain models is highlighted by studies in which only a single pain model, thought to relate to the clinical situation, demonstrates a lack of efficacy. No single experimental model can mimic the complex nature of clinical pain. The integrated, multi-modal pain task battery presented here encompasses the electrical stimulation task, the pressure stimulation task, the cold pressor task, the UVB inflammatory model (which includes a thermal task), and a paradigm for inhibitory conditioned pain modulation. These human pain models have been tested for predictive validity and reliability, both in their own right and in combination, and can be used repeatedly, quickly, and in short succession, with minimal burden for the subject and a modest quantity of equipment. This allows a drug to be fully characterized and profiled for analgesic effect, which is especially useful for drugs with a novel or untested mechanism of action.
Reliable generation of induced pluripotent stem cells from human lymphoblastoid cell lines.
Barrett, Robert; Ornelas, Loren; Yeager, Nicole; Mandefro, Berhan; Sahabian, Anais; Lenaeus, Lindsay; Targan, Stephan R; Svendsen, Clive N; Sareen, Dhruv
2014-12-01
Patient-specific induced pluripotent stem cells (iPSCs) hold great promise for many applications, including disease modeling to elucidate mechanisms involved in disease pathogenesis, drug screening, and ultimately regenerative medicine therapies. A frequently used starting source of cells for reprogramming has been dermal fibroblasts isolated from skin biopsies. However, numerous repositories containing lymphoblastoid cell lines (LCLs) generated from a wide array of patients also exist in abundance. To date, this rich bioresource has been severely underused for iPSC generation. We first attempted to create iPSCs from LCLs using two existing methods but were unsuccessful. Here we report a new and more reliable method for LCL reprogramming using episomal plasmids expressing pluripotency factors and p53 shRNA in combination with small molecules. The LCL-derived iPSCs (LCL-iPSCs) exhibited identical characteristics to fibroblast-derived iPSCs (fib-iPSCs), wherein they retained their genotype, exhibited a normal pluripotency profile, and readily differentiated into all three germ-layer cell types. As expected, they also maintained rearrangement of the heavy chain immunoglobulin locus. Importantly, we also show efficient iPSC generation from LCLs of patients with spinal muscular atrophy and inflammatory bowel disease. These LCL-iPSCs retained the disease mutation and could differentiate into neurons, spinal motor neurons, and intestinal organoids, all of which were virtually indistinguishable from differentiated cells derived from fib-iPSCs. This method for reliably deriving iPSCs from patient LCLs paves the way for using invaluable worldwide LCL repositories to generate new human iPSC lines, thus providing an enormous bioresource for disease modeling, drug discovery, and regenerative medicine applications. ©AlphaMed Press.
Weinberger, Dov; Bor-Shavit, Elite; Barliya, Tilda; Dahbash, Mor; Kinrot, Opher; Gaton, Dan D; Nisgav, Yael; Livnat, Tami
2017-11-01
This study aims to evaluate and standardize the reliability of a mobile laser indirect ophthalmoscope for the induction of choroidal neovascularization (CNV) in a mouse model. A diode laser indirect ophthalmoscope was used to induce CNV in pigmented male C57BL/6J mice. Standardization of spot size and laser intensity was determined using different aspheric lenses, with increasing laser intensities applied around the optic disc. Development of CNV was evaluated 1, 5, and 14 days post laser application using fluorescein angiography (FA), histology, and choroidal flat mounts stained for the endothelial marker CD31 and FITC-dextran. The correlation between the number of laser hits and the number and size of developed CNV lesions was determined using flat-mount choroid staining. The ability of intravitreally injected anti-human and anti-mouse VEGF antibodies to inhibit CNV induced by the mobile laser was evaluated. Laser parameters were standardized at 350 mW for 100 msec, using the 90-diopter lens, to achieve the highest incidence of Bruch's membrane rupture. CNV lesion formation was validated on days 5 and 14 post laser injury, though FA showed leakage as early as day 1. The number of laser hits was significantly correlated with the CNV area. CNV growth was successfully inhibited by both anti-human and anti-mouse VEGF antibodies. The mobile laser indirect ophthalmoscope can serve as a feasible and reliable alternative method for CNV induction in a mouse model.
Occurrence and distribution of Indian primates
Karanth, K.K.; Nichols, J.D.; Hines, J.E.
2010-01-01
Global and regional species conservation efforts are hindered by poor distribution data and range maps. Many Indian primates face extinction, but assessments of population status are hindered by a lack of reliable distribution data. We estimated the current occurrence and distribution of 15 Indian primates by applying occupancy models to field data from a country-wide survey of local experts. We modeled species occurrence in relation to ecological and social covariates (protected areas, landscape characteristics, and human influences), which we believe are critical to determining species occurrence in India. We found evidence that protected areas positively influence the occurrence of seven species and for some species are their only refuge. We found evergreen forests to be more critical for some primates, along with temperate and deciduous forests. Elevation negatively influenced the occurrence of three species. Lower human population density was positively associated with the occurrence of five species, and higher cultural tolerance was positively associated with the occurrence of three species. We find that 11 primates occupy less than 15% of the total land area of India. Vulnerable primates with restricted ranges are the golden langur, Arunachal macaque, pig-tailed macaque, stump-tailed macaque, Phayre's leaf monkey, Nilgiri langur, and lion-tailed macaque. Only the Hanuman langur and rhesus macaque are widely distributed. We find occupancy modeling to be useful in determining species ranges, in agreement with current species rankings and IUCN status. In landscapes where monitoring efforts require optimizing cost, effort, and time, ecological and social covariates can be used to reliably estimate species occurrence and focus species conservation efforts. © Elsevier Ltd.
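The sketch below illustrates, under stated assumptions, the core of the occupancy-model likelihood used in this kind of analysis: a site is occupied with probability psi (here a logistic function of one covariate, standing in for the protected-area effect), and an occupied site is detected on each visit with probability p. All data and parameter values are simulated placeholders, not the survey data.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_sites, n_visits = 200, 4
x = rng.integers(0, 2, n_sites)                            # 1 = protected area
z = rng.random(n_sites) < 1 / (1 + np.exp(0.5 - 1.5 * x))  # latent occupancy
y = (rng.random((n_sites, n_visits)) < 0.4) & z[:, None]   # detection histories

def negloglik(theta):
    b0, b1, logit_p = theta
    psi = 1 / (1 + np.exp(-(b0 + b1 * x)))
    p = 1 / (1 + np.exp(-logit_p))
    det = y.sum(axis=1)
    # P(history) = psi * p^detections * (1-p)^misses, plus (1-psi) if never detected
    lik = psi * p**det * (1 - p)**(n_visits - det) + (1 - psi) * (det == 0)
    return -np.log(lik).sum()

fit = minimize(negloglik, x0=[0.0, 0.0, 0.0])
print("b0, b1, logit(p):", fit.x.round(2))
```

Maximizing this likelihood separates imperfect detection from true absence, which is what lets expert-survey data yield occurrence estimates rather than raw detection rates.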
Behavioral biometrics for verification and recognition of malicious software agents
NASA Astrophysics Data System (ADS)
Yampolskiy, Roman V.; Govindaraju, Venu
2008-04-01
Homeland security requires technologies capable of positive and reliable identification of humans for law enforcement, government, and commercial applications. As artificially intelligent agents improve in their abilities and become a part of our everyday life, the possibility of using such programs to undermine homeland security increases. Virtual assistants, shopping bots, and game-playing programs are used daily by millions of people. We propose applying statistical behavior-modeling techniques, developed by us for the recognition of humans, to the identification and verification of intelligent and potentially malicious software agents. Our experimental results demonstrate the feasibility of such methods for both artificial-agent verification and even for recognition purposes.
The Human Brain Project and neuromorphic computing
Calimera, Andrea; Macii, Enrico; Poncino, Massimo
Summary Understanding how the brain manages billions of processing units connected via kilometers of fibers and trillions of synapses, while consuming a few tens of Watts could provide the key to a completely new category of hardware (neuromorphic computing systems). In order to achieve this, a paradigm shift for computing as a whole is needed, which will see it moving away from current “bit precise” computing models and towards new techniques that exploit the stochastic behavior of simple, reliable, very fast, low-power computing devices embedded in intensely recursive architectures. In this paper we summarize how these objectives will be pursued in the Human Brain Project. PMID:24139655
Human gait recognition by pyramid of HOG feature on silhouette images
NASA Astrophysics Data System (ADS)
Yang, Guang; Yin, Yafeng; Park, Jeanrok; Man, Hong
2013-03-01
As an uncommon biometric modality, human gait recognition has the great advantage of identifying people at a distance, without requiring high-resolution images. It has attracted much attention in recent years, especially in the fields of computer vision and remote sensing. In this paper, we propose a human gait recognition framework that consists of a reliable background-subtraction method, followed by pyramid of Histogram of Oriented Gradients (pHOG) feature extraction on the silhouette image, and a Hidden Markov Model (HMM) based classifier. Through background subtraction, the silhouette of the human gait in each frame is extracted and normalized from the raw video sequence. After removing the shadow and noise in each region of interest (ROI), the pHOG feature is computed on the silhouette images. The pHOG features of each gait class are then used to train a corresponding HMM. In the test stage, pHOG features are extracted from each test sequence and used to calculate the posterior probability under each trained HMM. Experimental results on the CASIA Gait Dataset B1 demonstrate that our proposed method achieves a very competitive recognition rate.
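As a rough illustration of the pipeline described above (not the authors' code), the sketch below computes a pyramid-of-HOG descriptor per silhouette frame and scores test sequences against per-class Gaussian HMMs; it assumes scikit-image and hmmlearn, and all parameter choices are guesses.

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from hmmlearn.hmm import GaussianHMM

def phog(silhouette, levels=3):
    """Pyramid-of-HOG descriptor for one binary silhouette frame."""
    feats = []
    for lvl in range(levels):
        size = 64 // (2 ** lvl)
        img = resize(silhouette.astype(float), (size, size), anti_aliasing=True)
        feats.append(hog(img, pixels_per_cell=(8, 8), cells_per_block=(1, 1)))
    return np.concatenate(feats)

def train_class_hmm(sequences, n_states=5):
    """Fit one HMM per gait class; sequences is a list of frame stacks."""
    feats = [np.array([phog(f) for f in seq]) for seq in sequences]
    model = GaussianHMM(n_components=n_states, covariance_type="diag")
    model.fit(np.vstack(feats), lengths=[len(f) for f in feats])
    return model

def classify(test_seq, models):
    """Assign the sequence to the class whose HMM gives the highest likelihood."""
    X = np.array([phog(f) for f in test_seq])
    return max(models, key=lambda label: models[label].score(X))
```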
ORDEM2010 and MASTER-2009 Modeled Small Debris Population Comparison
NASA Technical Reports Server (NTRS)
Krisko, Paula H.; Flegel, S.
2010-01-01
The latest versions of the two premier orbital debris engineering models, NASA's ORDEM2010 and ESA's MASTER-2009, have been publicly released. Both models have gone through significant advancements since inception and now represent the state of the art in the orbital debris knowledge of their respective agencies. The purpose of these models is to provide satellite designers/operators and debris researchers with reliable estimates of the artificial debris environment in near-Earth orbit. The small debris environment within the size range of 1 mm to 1 cm is of particular interest to both human and robotic spacecraft programs. These objects are much more numerous than larger trackable debris but are still large enough to cause significant, if not catastrophic, damage to spacecraft upon impact. They are also small enough to elude routine detection by existing observation systems (radar and telescope). Without reliable detection, the modeling of these populations has always coupled theoretical origins with supporting observational data in varying degrees. This paper details the 1 mm to 1 cm orbital debris populations of both ORDEM2010 and MASTER-2009: their sources (both known and presumed), current supporting data and theory, and methods of population analysis. Fluxes on spacecraft for chosen orbits are also presented and discussed within the context of each model.
Braem, Maya; Asher, Lucy; Furrer, Sibylle; Lechner, Isabel; Würbel, Hanno; Melotti, Luca
2017-01-01
In humans, the personality dimension 'sensory processing sensitivity (SPS)', also referred to as "high sensitivity", involves deeper processing of sensory information, which can be associated with physiological and behavioral overarousal. However, it has not been studied up to now whether this dimension also exists in other species. SPS can influence how people perceive the environment and how this affects them, thus a similar dimension in animals would be highly relevant with respect to animal welfare. We therefore explored whether SPS translates to dogs, one of the primary model species in personality research. A 32-item questionnaire to assess the "highly sensitive dog score" (HSD-s) was developed based on the "highly sensitive person" (HSP) questionnaire. A large-scale, international online survey was conducted, including the HSD questionnaire, as well as questions on fearfulness, neuroticism, "demographic" (e.g. dog sex, age, weight; age at adoption, etc.) and "human" factors (e.g. owner age, sex, profession, communication style, etc.), and the HSP questionnaire. Data were analyzed using linear mixed effect models with forward stepwise selection to test prediction of HSD-s by the above-mentioned factors, with country of residence and dog breed treated as random effects. A total of 3647 questionnaires were fully completed. HSD-, fearfulness, neuroticism and HSP-scores showed good internal consistencies, and HSD-s only moderately correlated with fearfulness and neuroticism scores, paralleling previous findings in humans. Intra- (N = 447) and inter-rater (N = 120) reliabilities were good. Demographic and human factors, including HSP score, explained only a small amount of the variance of HSD-s. A PCA analysis identified three subtraits of SPS, comparable to human findings. Overall, the measured personality dimension in dogs showed good internal consistency, partial independence from fearfulness and neuroticism, and good intra- and inter-rater reliability, indicating good construct validity of the HSD questionnaire. Human and demographic factors only marginally affected the HSD-s suggesting that, as hypothesized for human SPS, a genetic basis may underlie this dimension within the dog species.
Dorninger, Peter; Pfeifer, Norbert
2008-01-01
Three-dimensional city models are necessary to support numerous management applications. For the determination of city models for visualization purposes, several standardized workflows exist. They are based either on photogrammetry or on LiDAR, or on a combination of both data-acquisition techniques. However, the automated determination of reliable and highly accurate city models is still a challenging task, requiring a workflow comprising several processing steps. The most relevant are building detection, building outline generation, building modeling, and finally, building quality analysis. Commercial software tools for building modeling generally require a high degree of human interaction, and most automated approaches described in the literature address the steps of such a workflow individually. In this article, we propose a comprehensive approach for the automated determination of 3D city models from airborne-acquired point cloud data. It is based on the assumption that individual buildings can be modeled properly by a composition of a set of planar faces. Hence, it relies on a reliable 3D segmentation algorithm that detects planar faces in a point cloud. This segmentation is of crucial importance for the outline detection and for the modeling approach. We describe the theoretical background, the segmentation algorithm, the outline detection, and the modeling approach, and we present and discuss several actual projects. PMID:27873931
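The abstract's central dependency is the planar segmentation step. As a generic stand-in for the authors' segmentation algorithm, the sketch below iteratively extracts dominant planes from a point cloud with RANSAC; thresholds and support counts are arbitrary assumptions.

```python
import numpy as np

def ransac_plane(points, n_iter=500, tol=0.05, seed=1):
    """Find the inliers of one dominant plane in an (N, 3) point cloud."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        p = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p[1] - p[0], p[2] - p[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                      # skip degenerate (collinear) samples
        inliers = np.abs((points - p[0]) @ (normal / norm)) < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best

def segment_planes(points, min_support=200):
    """Peel off planar faces (roof/wall candidates) until support runs out."""
    segments = []
    while len(points) >= min_support:
        inl = ransac_plane(points)
        if inl.sum() < min_support:
            break
        segments.append(points[inl])
        points = points[~inl]
    return segments
```

Each extracted segment would then feed the outline-detection and face-composition stages described in the abstract.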
Formal Techniques for Organization Analysis: Task and Resource Management
1984-06-01
[Fragment from a scanned report; only partial text is recoverable.] The typical approach has been to base new entities on stereotypical structures and make changes as problems are recognized. Clearly, this is not an ... human resources; and provide the means to change and track all these parameters as they interact with each other and respond to ... functioning under internal and external change. 3. Data gathering techniques to allow one to efficiently collect reliable modeling parameters from ...
NASA Technical Reports Server (NTRS)
Pena, Joaquin; Hinchey, Michael G.; Sterritt, Roy; Ruiz-Cortes, Antonio; Resinas, Manuel
2006-01-01
Autonomic Computing (AC), self-management based on high level guidance from humans, is increasingly gaining momentum as the way forward in designing reliable systems that hide complexity and conquer IT management costs. Effectively, AC may be viewed as Policy-Based Self-Management. The Model Driven Architecture (MDA) approach focuses on building models that can be transformed into code in an automatic manner. In this paper, we look at ways to implement Policy-Based Self-Management by means of models that can be converted to code using transformations that follow the MDA philosophy. We propose a set of UML-based models to specify autonomic and autonomous features along with the necessary procedures, based on modification and composition of models, to deploy a policy as an executing system.
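As a loose illustration of the policy idea (not the UML models or MDA transformations proposed in the paper), a policy can be thought of as a condition-action pair evaluated by an autonomic manager loop; all names below are invented.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Policy:
    name: str
    condition: Callable[[Dict[str, float]], bool]   # predicate over monitored metrics
    action: Callable[[], None]                      # corrective behavior

policies: List[Policy] = [
    Policy("scale-up", lambda m: m["cpu"] > 0.8,
           lambda: print("spawning replica")),
    Policy("self-heal", lambda m: m["heartbeat_age_s"] > 30,
           lambda: print("restarting component")),
]

def autonomic_manager(metrics: Dict[str, float]) -> None:
    """Fire every policy whose condition holds for the current metrics."""
    for p in policies:
        if p.condition(metrics):
            p.action()

autonomic_manager({"cpu": 0.93, "heartbeat_age_s": 5.0})
```

In an MDA setting, such executable rules would be generated from platform-independent policy models rather than written by hand, which is the transformation the paper targets.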
The Syrian hamster model of hantavirus pulmonary syndrome.
Safronetz, David; Ebihara, Hideki; Feldmann, Heinz; Hooper, Jay W
2012-09-01
Hantavirus pulmonary syndrome (HPS) is a relatively rare but frequently fatal disease associated with New World hantaviruses, most commonly Sin Nombre and Andes viruses in North and South America, respectively. It is characterized by fever and the sudden, rapid onset of severe respiratory distress and cardiogenic shock, which can be fatal in up to 50% of cases. Currently, there are no approved antiviral therapies or vaccines for the treatment or prevention of HPS. A major obstacle in the development of effective medical countermeasures against highly pathogenic agents like the hantaviruses is recapitulating the human disease as closely as possible in an appropriate and reliable animal model. To date, the only animal model that resembles HPS in humans is the Syrian hamster model. Following infection with Andes virus, hamsters develop HPS-like disease that faithfully mimics the human condition with respect to incubation period and pathophysiology. Perhaps most importantly, the sudden and rapid onset of severe respiratory distress observed in humans also occurs in hamsters. The last several years have seen an increase in studies utilizing the Andes virus hamster model, which have provided unique insight into HPS pathogenesis as well as potential therapeutic and vaccine strategies to treat and prevent HPS. The purpose of this article is to review the current understanding of HPS disease progression in Syrian hamsters and to discuss the suitability of this model for evaluating potential medical countermeasures against HPS. Copyright © 2012 Elsevier B.V. All rights reserved.
Wahlin, Karl J; Maruotti, Julien A; Sripathi, Srinivasa R; Ball, John; Angueyra, Juan M; Kim, Catherine; Grebe, Rhonda; Li, Wei; Jones, Bryan W; Zack, Donald J
2017-04-10
The retinal degenerative diseases, which together constitute a leading cause of hereditary blindness worldwide, are largely untreatable. Development of reliable methods to culture complex retinal tissues from human pluripotent stem cells (hPSCs) could offer a means to study human retinal development, provide a platform to investigate the mechanisms of retinal degeneration and screen for neuroprotective compounds, and provide the basis for cell-based therapeutic strategies. In this study, we describe an in vitro method by which hPSCs can be differentiated into 3D retinas with at least some important features reminiscent of a mature retina, including exuberant outgrowth of outer segment-like structures and synaptic ribbons, photoreceptor neurotransmitter expression, and membrane conductances and synaptic vesicle release properties consistent with possible photoreceptor synaptic function. The advanced outer segment-like structures reported here support the notion that 3D retina cups could serve as a model for studying mature photoreceptor development and allow for more robust modeling of retinal degenerative disease in vitro.
Theoretical considerations for mapping activation in human cardiac fibrillation
NASA Astrophysics Data System (ADS)
Rappel, Wouter-Jan; Narayan, Sanjiv M.
2013-06-01
Defining mechanisms for cardiac fibrillation is challenging because, in contrast to other arrhythmias, fibrillation exhibits complex non-repeatability in spatiotemporal activation but paradoxically exhibits conserved spatial gradients in rate, dominant frequency, and electrical propagation. Unlike animal models, in which fibrillation can be mapped at high spatial and temporal resolution using optical dyes or arrays of contact electrodes, mapping of cardiac fibrillation in patients is constrained practically to lower resolutions or smaller fields-of-view. In many animal models, atrial fibrillation is maintained by localized electrical rotors and focal sources. However, until recently, few studies had revealed localized sources in human fibrillation, so that the impact of mapping constraints on the ability to identify rotors or focal sources in humans was not described. Here, we determine the minimum spatial and temporal resolutions theoretically required to detect rigidly rotating spiral waves and focal sources, then extend these requirements for spiral waves in computer simulations. Finally, we apply our results to clinical data acquired during human atrial fibrillation using a novel technique termed focal impulse and rotor mapping (FIRM). Our results provide theoretical justification and clinical demonstration that FIRM meets the spatio-temporal resolution requirements to reliably identify rotors and focal sources for human atrial fibrillation.
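A back-of-envelope version of the resolution argument can be written down from two quantities: the wavelength of the rotating wave (conduction velocity divided by rotation frequency) sets the spatial Nyquist limit on electrode spacing, and the rotation frequency sets the temporal Nyquist limit on sampling rate. The numbers below are order-of-magnitude illustrations, not the paper's derived requirements.

```python
# Illustrative sampling requirements for detecting a rigidly rotating spiral wave.
conduction_velocity_mm_per_s = 500.0   # order of magnitude for atrial tissue
rotor_frequency_hz = 8.0               # typical AF dominant frequency

wavelength_mm = conduction_velocity_mm_per_s / rotor_frequency_hz
max_electrode_spacing_mm = wavelength_mm / 2.0   # spatial Nyquist criterion
min_sampling_rate_hz = 2.0 * rotor_frequency_hz  # temporal Nyquist criterion

print(f"wavelength        ~ {wavelength_mm:.0f} mm")
print(f"electrode spacing < {max_electrode_spacing_mm:.0f} mm")
print(f"sampling rate     > {min_sampling_rate_hz:.0f} Hz (in practice far higher)")
```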
Behavior and neural basis of near-optimal visual search
Ma, Wei Ji; Navalpakkam, Vidhya; Beck, Jeffrey M; van den Berg, Ronald; Pouget, Alexandre
2013-01-01
The ability to search efficiently for a target in a cluttered environment is one of the most remarkable functions of the nervous system. This task is difficult under natural circumstances, as the reliability of sensory information can vary greatly across space and time and is typically a priori unknown to the observer. In contrast, visual-search experiments commonly use stimuli of equal and known reliability. In a target detection task, we randomly assigned high or low reliability to each item on a trial-by-trial basis. An optimal observer would weight the observations by their trial-to-trial reliability and combine them using a specific nonlinear integration rule. We found that humans were near-optimal, regardless of whether distractors were homogeneous or heterogeneous and whether reliability was manipulated through contrast or shape. We present a neural-network implementation of near-optimal visual search based on probabilistic population coding. The network matched human performance. PMID:21552276
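The optimal rule referred to above can be sketched for a simple detection setting: each item yields a per-item log-likelihood ratio whose weight scales with that item's reliability (1/σ²), and the ratios are combined through a specific nonlinearity because the target, if present, occupies exactly one location. The generative assumptions and parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
n_items, target = 8, 1.0

sigma = rng.choice([0.5, 2.0], n_items)     # per-item reliability varies per trial
stim = np.zeros(n_items)
stim[rng.integers(n_items)] = target        # target present at one location
x = stim + rng.normal(0.0, sigma)           # noisy measurements

# Per-item log-likelihood ratio; note the 1/sigma^2 reliability weighting.
llr = (target * x - target**2 / 2.0) / sigma**2

# Optimal nonlinear combination: average the likelihood ratios, not the llrs.
decision_variable = np.log(np.mean(np.exp(llr)))
print("respond 'present'" if decision_variable > 0 else "respond 'absent'")
```

The key point matching the abstract is that a simple sum of evidence is suboptimal here; the near-optimal observer must pass the reliability-weighted evidence through this log-mean-exp combination.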
Reliability models applicable to space telescope solar array assembly system
NASA Technical Reports Server (NTRS)
Patil, S. A.
1986-01-01
A complex system may consist of a number of subsystems with several components in series, in parallel, or in a combination of both. In order to predict how well the system will perform, it is necessary to know the reliabilities of the subsystems and the reliability of the whole system. The objective of the present study is to develop mathematical models of reliability that are applicable to complex systems. The models are determined by assuming k failures out of n components in a subsystem. By taking k = 1 and k = n, these models reduce to parallel and series models; hence, the models can be specialized to parallel, series, and combination systems. The models are developed by assuming the failure rates of the components to be functions of time, and as such they can be applied to processes with or without aging effects. The reliability models are further specialized to the Space Telescope Solar Array (STSA) system. The STSA consists of 20 identical solar panel assemblies (SPAs). The reliabilities of the SPAs are determined by the reliabilities of solar cell strings, interconnects, and diodes. Estimates of the reliability of the system for one to five years are calculated using the reliability estimates of solar cells and interconnects given in ESA documents. Aging effects in relation to breaks in interconnects are discussed.
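A minimal sketch of the k-out-of-n family described above, under the assumption of independent, identical components: taking the subsystem to work when at least k of its n components work reproduces the stated limiting cases (k = 1 parallel, k = n series). The numeric values are hypothetical, not the STSA figures from the ESA documents.

```python
from math import comb, exp

def component_reliability(lam, t):
    """Constant-hazard survival; the study allows general time-varying rates."""
    return exp(-lam * t)

def k_out_of_n_reliability(n, k, p):
    """P(at least k of n i.i.d. components work): k=1 parallel, k=n series."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

p = component_reliability(lam=0.01, t=5.0)   # hypothetical 5-year survival
print(k_out_of_n_reliability(20, 1, p))      # parallel: 1 - (1-p)**20
print(k_out_of_n_reliability(20, 20, p))     # series:  p**20
print(k_out_of_n_reliability(20, 18, p))     # tolerates two failed components
```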
Ji, Jie; Hedelin, Anna; Malmlöf, Maria; Kessler, Vadim; Seisenbaeva, Gulaim; Gerde, Per; Palmberg, Lena
2017-01-01
Background Exposure to agents via inhalation is of great concern both in the workplace environment and in daily contact with particles in the ambient air. Reliable human airway exposure systems will most likely replace animal experiments in future toxicity-assessment studies of inhaled agents. Methods In this study, we successfully established a combination of an exposure system (XposeALI) with 3D models mimicking both healthy and chronic bronchitis-like mucosa by co-culturing human primary bronchial epithelial cells (PBEC) and fibroblasts at the air-liquid interface (ALI). Light and confocal microscopy, scanning and transmission electron microscopy, transepithelial electrical resistance (TEER) measurement, and RT-PCR were performed to characterize how the PBEC differentiated under ALI culture conditions. Both models were exposed to palladium (Pd) nanoparticles sized 6–10 nm, analogous to those released from modern car catalysts, at three different concentrations, utilizing the XposeALI module of the PreciseInhale® exposure system. Results Exposing the 3D models to Pd nanoparticles induced increased secretion of IL-8, and the chronic bronchitis-like model released significantly more IL-8 than the normal model. The levels of IL-8 in basal medium (BM) and apical lavage medium (AM) were in the same range, but the secretion of MMP-9 was significantly higher in the AM compared to the BM. Conclusion This combination of relevant human bronchial mucosa models and a sophisticated exposure system can mimic in vivo conditions and serve as a useful alternative to animal testing when studying adverse effects in humans exposed to aerosols, air pollutants, or particles in an occupational setting. PMID:28107509
Human Rights Attitude Scale: A Validity and Reliability Study
ERIC Educational Resources Information Center
Ercan, Recep; Yaman, Tugba; Demir, Selcuk Besir
2015-01-01
The objective of this study is to develop a valid and reliable attitude scale with sound psychometric properties that can measure secondary school students' attitudes towards human rights. The study group comprises 710 6th-, 7th-, and 8th-grade students who study at 4 secondary schools in the centre of Sivas. The study group…
Butler, Emily E; Saville, Christopher W N; Ward, Robert; Ramsey, Richard
2017-01-01
The human face cues a range of important fitness information, which guides mate selection towards desirable others. Given humans' high investment in the central nervous system (CNS), cues to CNS function should be especially important in social selection. We tested if facial attractiveness preferences are sensitive to the reliability of human nervous system function. Several decades of research suggest an operational measure for CNS reliability is reaction time variability, which is measured by standard deviation of reaction times across trials. Across two experiments, we show that low reaction time variability is associated with facial attractiveness. Moreover, variability in performance made a unique contribution to attractiveness judgements above and beyond both physical health and sex-typicality judgements, which have previously been associated with perceptions of attractiveness. In a third experiment, we empirically estimated the distribution of attractiveness preferences expected by chance and show that the size and direction of our results in Experiments 1 and 2 are statistically unlikely without reference to reaction time variability. We conclude that an operating characteristic of the human nervous system, reliability of information processing, is signalled to others through facial appearance. Copyright © 2016 Elsevier B.V. All rights reserved.
Reliability model generator specification
NASA Technical Reports Server (NTRS)
Cohen, Gerald C.; Mccann, Catherine
1990-01-01
The Reliability Model Generator (RMG), a program that produces reliability models from block diagrams for ASSIST, the interface to the reliability evaluation tool SURE, is described. An account is given of the motivation for RMG, and the implemented algorithms are discussed. The appendices contain the algorithms and two detailed traces of examples.
Fernández Peruchena, Carlos M; Prado-Velasco, Manuel
2010-01-01
Diabetes mellitus (DM) has a growing incidence and prevalence in modern societies, driven by aging populations and changing lifestyles. Despite the huge resources dedicated to improving patients' quality of life and their mortality and morbidity rates, these outcomes are still very poor. In this work, DM pathology is reviewed from clinical and metabolic points of view, as well as mathematical models related to DM, with the aim of justifying an evolution of DM therapies towards the correction of the physiological metabolic loops involved. We analyze the reliability of mathematical models, from the perspective of virtual physiological human (VPH) initiatives, for generating and integrating customized knowledge about patients, which is needed for that evolution. Wearable smart sensors play a key role in this frame, as they provide patients' information to the models. A telehealthcare computational architecture based on distributed smart sensors (first processing layer) and personalized physiological mathematical models integrated in Human Physiological Images (HPI) computational components (second processing layer) is presented. This technology was designed for renal disease telehealthcare in earlier works and promotes crossroads between smart sensors and the VPH initiative. We suggest that it is able to support a truly personalized, preventive, and predictive healthcare model for the delivery of evolved DM therapies. PMID:21625646
A Comparison of Three Multivariate Models for Estimating Test Battery Reliability.
ERIC Educational Resources Information Center
Wood, Terry M.; Safrit, Margaret J.
1987-01-01
A comparison of three multivariate models (canonical reliability model, maximum generalizability model, canonical correlation model) for estimating test battery reliability indicated that the maximum generalizability model showed the least degree of bias, smallest errors in estimation, and the greatest relative efficiency across all experimental…
Terrasso, Ana Paula; Pinto, Catarina; Serra, Margarida; Filipe, Augusto; Almeida, Susana; Ferreira, Ana Lúcia; Pedroso, Pedro; Brito, Catarina; Alves, Paula Marques
2015-07-10
There is an urgent need for new in vitro strategies to identify neurotoxic agents with speed, reliability, and respect for animal welfare. Cell models should include distinct brain cell types and represent the brain microenvironment to attain higher relevance. The main goal of this study was to develop and validate a human 3D neural model containing both neurons and glial cells, applicable to toxicity testing in high-throughput platforms. To achieve this, a scalable bioprocess for neural differentiation of human NTera2/cl.D1 cells in stirred culture systems was developed. Endpoints based on neuron- and astrocyte-specific gene expression and functionality in 3D were implemented in multi-well format and used for toxicity assessment. The prototypical neurotoxicant acrylamide affected primarily neurons, impairing synaptic function; our results suggest that gene expression of the presynaptic marker synaptophysin can be used as a sensitive endpoint. Chloramphenicol, described as a neurotoxicant, affected both cell types, with cytoskeleton marker expression significantly reduced, particularly in astrocytes. In conclusion, a scalable and reproducible process for the production of differentiated neurospheres enriched in mature neurons and functional astrocytes was obtained. This 3D approach allowed efficient production of large numbers of human differentiated neurospheres, which, in combination with gene expression and functional endpoints, form a powerful cell model to evaluate human neuronal and astrocytic toxicity. Copyright © 2014 Elsevier B.V. All rights reserved.
Customer-Driven Reliability Models for Multistate Coherent Systems
1992-01-01
Customer-Driven Reliability Models for Multistate Coherent Systems. Dissertation, Graduate College, University of Oklahoma, Norman, Oklahoma, 1992 (Boedigheimer). [Only the title, institution, author, and year are recoverable from the scanned report documentation page.]
Human Factors in Financial Trading
Leaver, Meghan; Reader, Tom W.
2016-01-01
Objective This study tests the reliability of a system (FINANS) to collect and analyze incident reports in the financial trading domain and is guided by a human factors taxonomy used to describe error in the trading domain. Background Research indicates the utility of applying human factors theory to understand error in finance, yet empirical research is lacking. We report on the development of the first system for capturing and analyzing human factors–related issues in operational trading incidents. Method In the first study, 20 incidents are analyzed by an expert user group against a referent standard to establish the reliability of FINANS. In the second study, 750 incidents are analyzed using distribution, mean, pathway, and associative analysis to describe the data. Results Kappa scores indicate that categories within FINANS can be reliably used to identify and extract data on human factors–related problems underlying trading incidents. Approximately 1% of trades (n = 750) lead to an incident. Slip/lapse (61%), situation awareness (51%), and teamwork (40%) were found to be the most common problems underlying incidents. For the most serious incidents, problems in situation awareness and teamwork were most common. Conclusion We show that (a) experts in the trading domain can reliably and accurately code human factors in incidents, (b) 1% of trades incur error, and (c) poor teamwork skills and situation awareness underpin the most critical incidents. Application This research provides data crucial for ameliorating risk within financial trading organizations, with implications for regulation and policy. PMID:27142394
Animal models of cerebral ischemia
NASA Astrophysics Data System (ADS)
Khodanovich, M. Yu.; Kisel, A. A.
2015-11-01
Cerebral ischemia remains one of the most frequent causes of death and disability worldwide. Animal models are necessary to understand the complex molecular mechanisms of brain damage and to develop new therapies for stroke. This review considers a range of animal models of cerebral ischemia, including several types of focal and global ischemia. Since animal models vary in their specificity for the human disease they reproduce, in the complexity of the surgery, in infarct size, and in the reliability of reproduction for statistical analysis, an adequate model needs to be chosen according to the aim of the study. The reproduction of a particular animal model needs to be evaluated using appropriate tools, including behavioral assessment of injury and non-invasive and post-mortem control of brain damage. These problems are also summarized in the review.
McClellan, Gene; Coleman, Margaret; Crary, David; Thurman, Alec; Thran, Brandolyn
2018-04-25
Military health risk assessors, medical planners, operational planners, and defense system developers require knowledge of human responses to doses of biothreat agents to support force health protection and chemical, biological, radiological, and nuclear (CBRN) defense missions. This article reviews extensive data from 118 human volunteers administered aerosols of the bacterial agent Francisella tularensis, strain Schu S4, which causes tularemia. The data set includes the incidence of early-phase febrile illness following administration of well-characterized inhaled doses of F. tularensis. Supplemental data on human body-temperature profiles over time, available from de-identified case reports, are also presented. A unified, logically consistent model of early-phase febrile illness is described as a lognormal dose-response function for febrile illness linked with a stochastic time profile of fever. Three parameters are estimated from the human data to describe the time profile: the incubation period or onset time for fever; the rise time of fever; and the near-maximum body temperature. Inhaled-dose dependence and variability are characterized for each of the three parameters. These parameters enable a stochastic model of the response of an exposed population, incorporating individual-by-individual variability by drawing random samples from the statistical distributions of these three parameters for each individual. This model provides risk assessors and medical decision-makers with reliable representations of the predicted health impacts of early-phase febrile illness for as long as one week after aerosol exposures of human populations to F. tularensis. © 2018 Society for Risk Analysis.
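The model structure lends itself to a compact simulation: a lognormal dose-response decides who becomes febrile, and each individual gets an incubation period, rise time, and near-maximum temperature drawn from statistical distributions. All parameter values below are invented placeholders, not the estimates from the volunteer data.

```python
import numpy as np
from math import erf, log, sqrt

rng = np.random.default_rng(3)

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def simulate_population(dose_cfu, n=1000):
    """Early-phase febrile illness for an exposed population (illustrative)."""
    p_ill = norm_cdf(log(dose_cfu / 1000.0) / 1.0)   # hypothetical ID50 and slope
    ill = rng.random(n) < p_ill
    incubation_h = rng.lognormal(log(72.0), 0.3, n)  # fever onset time
    rise_h = rng.lognormal(log(8.0), 0.4, n)         # fever rise time
    t_max_c = rng.normal(39.5, 0.5, n)               # near-maximum temperature
    return ill, incubation_h, rise_h, t_max_c

def body_temp(t_h, onset, rise, t_max, baseline=37.0):
    """Piecewise-linear fever profile built from the three sampled parameters."""
    if t_h < onset:
        return baseline
    return baseline + min((t_h - onset) / rise, 1.0) * (t_max - baseline)

ill, onset, rise, t_max = simulate_population(dose_cfu=5000.0)
print("attack rate:", ill.mean())
```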
Watanabe, Naohide; Nogawa, Masayuki; Ishiguro, Mariko; Maruyama, Hitomi; Shiba, Masayuki; Satake, Masahiro; Eto, Koji; Handa, Makoto
2017-08-01
To bridge the gap between in vitro function and clinical efficacy of platelet (PLT) transfusion products, reliable in vivo PLT functional assays for hemostasis and survival in animal models are required. However, there are no standardized methods for assessing the in vivo quality of transfused human PLTs. Plasma-depleted human PLT concentrates (PCs; Day 3, Day 5, Day 7, Day 10, and damaged) were transfused into busulfan-induced rabbits with thrombocytopenia with prolonged bleeding times 1 day after treatment with ethyl palmitate (EP) to block their reticuloendothelial systems. The hemostatic effect of PC transfusion was evaluated by the ear fine vein bleeding time. For the in vivo survival assay, splenectomized EP-treated rabbits were transfused with human PCs, and viability of the human PLTs in the rabbits was determined by flow cytometry using human PLT-specific antibodies and Trucount tubes. The hemostatic effect of PCs was slightly reduced with increasing storage periods for early time points, but more dramatically reduced for later time points. PLT survival was similar after 3 and 7 days of storage, but PLTs stored for 10 days showed significantly poorer survival than those stored only 3 days. Our new and improved protocol for in vivo assessment of transfused PLTs is sufficiently sensitive to detect subtle changes in hemostatic function and viability of human PLTs transfused into rabbit models. This protocol could contribute to preclinical in vivo functional assessment and clinical quality assurance of emerging novel PLT products such as cultured cell-derived human PLTs. © 2017 AABB.
Gupte, Amol; Buolamwini, John K
2009-01-15
3D-QSAR (CoMFA and CoMSIA) studies were performed on human equilibrative nucleoside transporter (hENT1) inhibitors displaying K(i) values ranging from 10,000 to 0.7 nM. Both CoMFA and CoMSIA analyses gave reliable models with q2 values >0.50 and r2 values >0.92. The models have been validated for their stability and robustness using group validation and bootstrapping techniques, and for their predictive abilities using an external test set of nine compounds. The high predictive r2 values of the test set (0.72 for the CoMFA model and 0.74 for the CoMSIA model) reveal that the models can be a useful tool for activity prediction of newly designed nucleoside transporter inhibitors. The CoMFA and CoMSIA contour maps identify features important for good binding affinity at the transporter and can thus serve as a useful guide for the design of potential equilibrative nucleoside transporter inhibitors.
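For readers unfamiliar with the q² statistic quoted above, the sketch below shows how a cross-validated q² is computed for a PLS model of the kind underlying CoMFA/CoMSIA, on synthetic stand-in data (random field columns and activities, not the hENT1 set).

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(5)
X = rng.normal(size=(40, 500))                              # mock field descriptors
y = X[:, :5] @ rng.normal(size=5) + rng.normal(0, 0.3, 40)  # mock activities

pls = PLSRegression(n_components=5).fit(X, y)
r2 = pls.score(X, y)                                        # conventional fit r^2

y_loo = cross_val_predict(pls, X, y, cv=LeaveOneOut()).ravel()
q2 = 1 - ((y - y_loo) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(f"r2 = {r2:.2f}, q2(LOO) = {q2:.2f}")                 # models kept when q2 > 0.5
```

Because q² is computed on held-out predictions, it guards against the overfitting that a high r² alone would hide, which is why both statistics are reported.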
Raasch, Martin; Fritsche, Enrico; Kurtz, Andreas; Bauer, Michael; Mosig, Alexander S
2018-06-14
Complex cell culture models such as microphysiological models (MPS) mimicking human liver functionality in vitro are in the spotlight as alternative to conventional cell culture and animal models. Promising techniques like microfluidic cell culture or micropatterning by 3D bioprinting are gaining increasing importance for the development of MPS to address the needs for more predictivity and cost efficiency. In this context, human induced pluripotent stem cells (hiPSCs) offer new perspectives for the development of advanced liver-on-chip systems by recreating an in vivo like microenvironment that supports the reliable differentiation of hiPSCs to hepatocyte-like cells (HLC). In this review we will summarize current protocols of HLC generation and highlight recently established MPS suitable to resemble physiological hepatocyte function in vitro. In addition, we are discussing potential applications of liver MPS for disease modeling related to systemic or direct liver infections and the use of MPS in testing of new drug candidates. Copyright © 2018. Published by Elsevier B.V.
Advances in Reprogramming-Based Study of Neurologic Disorders
Baldwin, Kristin K.
2015-01-01
The technology to convert adult human non-neural cells into neural lineages, through induced pluripotent stem cells (iPSCs), somatic cell nuclear transfer, and direct lineage reprogramming or transdifferentiation has progressed tremendously in recent years. Reprogramming-based approaches aimed at manipulating cellular identity have enormous potential for disease modeling, high-throughput drug screening, cell therapy, and personalized medicine. Human iPSC (hiPSC)-based cellular disease models have provided proof of principle evidence of the validity of this system. However, several challenges remain before patient-specific neurons produced by reprogramming can provide reliable insights into disease mechanisms or be efficiently applied to drug discovery and transplantation therapy. This review will first discuss limitations of currently available reprogramming-based methods in faithfully and reproducibly recapitulating disease pathology. Specifically, we will address issues such as culture heterogeneity, interline and inter-individual variability, and limitations of two-dimensional differentiation paradigms. Second, we will assess recent progress and the future prospects of reprogramming-based neurologic disease modeling. This includes three-dimensional disease modeling, advances in reprogramming technology, prescreening of hiPSCs and creating isogenic disease models using gene editing. PMID:25749371
Molecular docking and 3D-QSAR studies on inhibitors of DNA damage signaling enzyme human PARP-1.
Fatima, Sabiha; Bathini, Raju; Sivan, Sree Kanth; Manga, Vijjulatha
2012-08-01
Poly(ADP-ribose) polymerase-1 (PARP-1) operates in a DNA damage signaling network. Molecular docking and three-dimensional quantitative structure-activity relationship (3D-QSAR) studies were performed on human PARP-1 inhibitors. The docked conformation obtained for each molecule was used as such for the 3D-QSAR analysis. Molecules were divided into a training set and a test set randomly in four different ways, and partial least squares analysis was performed to obtain QSAR models using comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA). The derived models showed good statistical reliability, as is evident from their r², q²(loo), and r²(pred) values. To obtain a consensus of predictive ability across all the models, the average regression coefficient r²(avg) was calculated; the CoMFA and CoMSIA models showed values of 0.930 and 0.936, respectively. Information obtained from the best 3D-QSAR model was applied to the optimization of the lead molecule and the design of novel potential inhibitors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rugh, John P; Chaney, Larry; Hepokoski, Mark
2015-04-14
Reliable assessment of occupant thermal comfort can be difficult to obtain within automotive environments, especially under transient and asymmetric heating and cooling scenarios. Evaluation of HVAC system performance in terms of comfort commonly requires human subject testing, which may involve multiple repetitions, as well as multiple test subjects. Instrumentation (typically comprised of an array of temperature sensors) is usually only sparsely applied across the human body, significantly reducing the spatial resolution of available test data. Further, since comfort is highly subjective in nature, a single test protocol can yield a wide variation in results, which can only be overcome by increasing the number of test replications and subjects. In light of these difficulties, various types of manikins are finding use in automotive testing scenarios. These manikins can act as human surrogates from which local skin and core temperatures can be obtained, which are necessary for accurately predicting local and whole-body thermal sensation and comfort using a physiology-based comfort model (e.g., the Berkeley Comfort Model). This paper evaluates two different types of manikins: i) an adaptive sweating thermal manikin, which is coupled with a human thermoregulation model running in real time to obtain realistic skin temperatures; and ii) a passive sensor manikin, which is used to measure boundary conditions as they would act on a human, from which skin and core temperatures can be predicted using a thermophysiological model. The simulated physiological responses and comfort obtained from both of these manikin-model coupling schemes are compared to those of a human subject within a vehicle cabin compartment transient heat-up scenario.
NASA Technical Reports Server (NTRS)
Rothmann, Elizabeth; Dugan, Joanne Bechta; Trivedi, Kishor S.; Mittal, Nitin; Bavuso, Salvatore J.
1994-01-01
The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. The Hybrid Automated Reliability Predictor (HARP) tutorial provides insight into HARP modeling techniques and the interactive textual prompting input language via a step-by-step explanation and demonstration of HARP's fault occurrence/repair model and the fault/error handling models. Example applications are worked in their entirety and the HARP tabular output data are presented for each. Simple models are presented at first with each succeeding example demonstrating greater modeling power and complexity. This document is not intended to present the theoretical and mathematical basis for HARP.
A unified model of the excitability of mouse sensory and motor axons.
Makker, Preet G S; Matamala, José Manuel; Park, Susanna B; Lees, Justin G; Kiernan, Matthew C; Burke, David; Moalem-Taylor, Gila; Howells, James
2018-06-19
Non-invasive nerve excitability techniques have provided valuable insight into the understanding of neurological disorders. The widespread use of mice in translational research on peripheral nerve disorders and by pharmaceutical companies during drug development requires valid and reliable models that can be compared to humans. This study established a novel experimental protocol that enables comparative assessment of the excitability properties of motor and sensory axons at the same site in mouse caudal nerve, compared the mouse data to data for motor and sensory axons in human median nerve at the wrist, and constructed a mathematical model of the excitability of mouse axons. In a separate study, ischaemia was employed as an experimental manoeuvre to test the translational utility of this preparation. The patterns of mouse sensory and motor excitability were qualitatively similar to human studies under normal and ischaemic conditions. The most conspicuous differences between mouse and human studies were observed in the recovery cycle and the response to hyperpolarization. Modelling showed that an increase in temperature in mouse axons could account for most of the differences in the recovery cycle. The modelling also suggested a larger hyperpolarization-activated conductance in mouse axons. The kinetics of this conductance appeared to be much slower raising the possibility that an additional or different hyperpolarization-activated cyclic-nucleotide gated (HCN) channel isoform underlies the accommodation to hyperpolarization in mouse axons. Given a possible difference in HCN isoforms, caution should be exercised in extrapolating from studies of mouse motor and sensory axons to human nerve disorders. This article is protected by copyright. All rights reserved.
Invertebrate Models for Coenzyme Q10 Deficiency
Fernández-Ayala, Daniel J.M.; Jiménez-Gancedo, Sandra; Guerra, Ignacio; Navas, Plácido
2014-01-01
The human syndrome of coenzyme Q (CoQ) deficiency is a heterogeneous mitochondrial disease characterized by a diminution of CoQ content in cells and tissues that affects all the electron transport processes CoQ is responsible for, such as electron transfer in mitochondria for respiration and ATP production and the antioxidant capacity that it exerts in membranes and lipoproteins. Supplementation with external CoQ is the main attempt to address these pathologies, but quite variable results have been obtained, ranging from little response to a dramatic recovery. Here, we present the importance of modeling human CoQ deficiencies in animal models to understand the genetics and the pathology of this disease, although the choice of an organism is crucial and can sometimes be controversial. Bacteria and yeast harboring mutations that lead to CoQ deficiency are unable to grow if they have to respire but develop without any problems on media with fermentable carbon sources. The complete lack of CoQ in mammals causes embryonic lethality, whereas other mutations produce tissue-specific diseases as in humans. However, working with transgenic mammals is time- and cost-intensive, with no assurance of obtaining results. Caenorhabditis elegans and Drosophila melanogaster have been used for years as organisms to study embryonic development, biogenesis, degenerative pathologies, and aging because of the genetic facilities and the speed of working with these animal models. In this review, we summarize several attempts to model reliable human CoQ deficiencies in invertebrates, focusing on mutant phenotypes closely resembling those observed in human patients. PMID:25126050
Farkas, Caroline M; Moeller, Michael D; Felder, Frank A; Henderson, Barron H; Carlton, Annmarie G
2016-08-02
On high electricity demand days, when air quality is often poor, regional transmission organizations (RTOs), such as PJM Interconnection, ensure reliability of the grid by employing peak-use electric generating units (EGUs). These "peaking units" are exempt from some federal and state air quality rules. We identify RTO assignment and peaking unit classification for EGUs in the Eastern U.S. and estimate air quality for four emission scenarios with the Community Multiscale Air Quality (CMAQ) model during the July 2006 heat wave. Further, we population-weight ambient values as a surrogate for potential population exposure. Emissions from electricity reliability networks negatively impact air quality in their own region and in neighboring geographic areas. Monitored and controlled PJM peaking units are generally located in economically depressed areas and can contribute up to 87% of hourly maximum PM2.5 mass locally. Potential population exposure to peaking unit PM2.5 mass is highest in the model domain's most populated cities. Average daily temperature and national gross domestic product steer peaking unit heat input. Air quality planning that capitalizes on a priori knowledge of local electricity demand and economics may provide a more holistic approach to protect human health within the context of growing energy needs in a changing world.
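The population-weighting step described above is a simple weighted average of gridded concentrations by gridded population; the sketch below is a minimal illustration of that exposure surrogate (NumPy, with hypothetical toy arrays standing in for CMAQ output and census counts), not the authors' code.

```python
import numpy as np

def population_weighted_mean(conc, pop):
    """Population-weighted mean concentration over a model grid.

    conc : 2D array of grid-cell PM2.5 concentrations (ug/m^3)
    pop  : 2D array of population counts on the same grid
    """
    return float((conc * pop).sum() / pop.sum())

# Toy example: exposure surrogate with and without peaking-unit emissions
conc_base = np.array([[8.0, 12.0], [20.0, 9.0]])    # scenario with peaking units
conc_nopeak = np.array([[7.5, 10.0], [14.0, 8.5]])  # peaking units removed
pop = np.array([[1000, 50000], [200000, 3000]])

delta = (population_weighted_mean(conc_base, pop)
         - population_weighted_mean(conc_nopeak, pop))
print(f"Peaking-unit contribution to potential exposure: {delta:.2f} ug/m^3")
```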
Wilson, Emily K
2012-01-01
Though better recognized for its immediate endeavors in human embryo research, the Carnegie Department of Embryology also employed a breeding colony of rhesus macaques for the purposes of studying human reproduction. This essay follows the course of the first enterprise in maintaining a primate colony for laboratory research and the overlapping scientific, social, and political circumstances that tolerated and cultivated the colony's continued operation from 1925 until 1971. Despite a new-found priority for reproductive sciences in the United States, by the early 1920s an unfertilized human ovum had not yet been seen and even the timing of ovulation remained unresolved. Progress would require an organized research approach that could extend beyond the limitations of working with scant and inherently restrictive human subjects or with common lab mammals like mice. In response, the Department of Embryology, under the Carnegie Institution of Washington (CIW), instituted a novel methodology using a particular primate species as a surrogate in studying normal human reproductive physiology. Over more than 40 years the monkey colony followed an unpremeditated trajectory that would contribute fundamentally to discoveries in human reproduction, early embryo development, reliable birth control methods, and to the establishment of the rhesus macaque as a common model organism.
A 3D Human Lung Tissue Model for Functional Studies on Mycobacterium tuberculosis Infection.
Braian, Clara; Svensson, Mattias; Brighenti, Susanna; Lerm, Maria; Parasa, Venkata R
2015-10-05
Tuberculosis (TB) still poses a major threat to the health of people worldwide, and there is a need for cost-efficient but reliable models to help us understand the disease mechanisms and advance the discovery of new treatment options. In vitro cell cultures of monolayers or co-cultures lack the three-dimensional (3D) environment and tissue responses. Herein, we describe an innovative in vitro model of human lung tissue, which holds promise to be an effective tool for studying the complex events that occur during infection with Mycobacterium tuberculosis (M. tuberculosis). The 3D tissue model consists of tissue-specific epithelial cells and fibroblasts, which are cultured in a matrix of collagen on top of a porous membrane. Upon air exposure, the epithelial cells stratify and secrete mucus at the apical side. By introducing human primary macrophages infected with M. tuberculosis to the tissue model, we have shown that immune cells migrate into the infected tissue and form early stages of TB granuloma. These structures recapitulate the distinct feature of human TB, the granuloma, which is absent or not commonly observed in widely used experimental animal models. This organotypic culture method enables 3D visualization and robust quantitative analysis that provide pivotal information on the spatial and temporal features of host cell-pathogen interactions. Taken together, the lung tissue model provides a physiologically relevant tissue micro-environment for studies on TB. Thus, the lung tissue model has potential implications for both basic mechanistic and applied studies. Importantly, the model allows the addition or manipulation of individual cell types, which widens its use for modelling a variety of infectious diseases that affect the lungs.
The SAM framework: modeling the effects of management factors on human behavior in risk analysis.
Murphy, D M; Paté-Cornell, M E
1996-08-01
Complex engineered systems, such as nuclear reactors and chemical plants, have the potential for catastrophic failure with disastrous consequences. In recent years, human and management factors have been recognized as frequent root causes of major failures in such systems. However, classical probabilistic risk analysis (PRA) techniques do not account for the underlying causes of these errors because they focus on the physical system and do not explicitly address the link between components' performance and organizational factors. This paper describes a general approach for addressing the human and management causes of system failure, called the SAM (System-Action-Management) framework. Beginning with a quantitative risk model of the physical system, SAM expands the scope of analysis to incorporate first the decisions and actions of individuals that affect the physical system. SAM then links management factors (incentives, training, policies and procedures, selection criteria, etc.) to those decisions and actions. The focus of this paper is on four quantitative models of action that describe this last relationship. These models address the formation of intentions for action and their execution as a function of the organizational environment. Intention formation is described by three alternative models: a rational model, a bounded rationality model, and a rule-based model. The execution of intentions is then modeled separately. These four models are designed to assess the probabilities of individual actions from the perspective of management, thus reflecting the uncertainties inherent to human behavior. The SAM framework is illustrated for a hypothetical case of hazardous materials transportation. This framework can be used as a tool to increase the safety and reliability of complex technical systems by modifying the organization, rather than, or in addition to, re-designing the physical system.
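The chaining that SAM formalizes, in which management factors shift the probabilities of operator actions and those actions in turn shift the conditional probability of system failure, can be illustrated with a toy marginalization. All numbers and the action set below are hypothetical, invented purely for the example; they are not taken from the SAM paper.

```python
# Conditional failure probabilities of the physical system given each action
# class (hypothetical values)
p_fail_given_action = {"correct": 1e-4, "shortcut": 5e-3, "omission": 2e-2}

def system_failure_prob(p_action):
    """Marginalize P(failure | action) over the action distribution."""
    return sum(p_action[a] * p_fail_given_action[a] for a in p_action)

# Management factors (e.g. training, incentives) shift the action distribution
baseline = {"correct": 0.90, "shortcut": 0.08, "omission": 0.02}
with_training = {"correct": 0.97, "shortcut": 0.02, "omission": 0.01}

print(system_failure_prob(baseline))       # risk under current management
print(system_failure_prob(with_training))  # risk after a management change
```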
Inum, Reefat; Rana, Md Masud; Shushama, Kamrun Nahar; Quader, Md Anwarul
2018-01-01
A microwave brain imaging system model is envisaged to detect and visualize tumor inside the human brain. A compact and efficient microstrip patch antenna is used in the imaging technique to transmit equivalent signal and receive backscattering signal from the stratified human head model. Electromagnetic band gap (EBG) structure is incorporated on the antenna ground plane to enhance the performance. Rectangular and circular EBG structures are proposed to investigate the antenna performance. Incorporation of circular EBG on the antenna ground plane provides an improvement of 22.77% in return loss, 5.84% in impedance bandwidth, and 16.53% in antenna gain with respect to the patch antenna with rectangular EBG. The simulation results obtained from CST are compared to those obtained from HFSS to validate the design. Specific absorption rate (SAR) of the modeled head tissue for the proposed antenna is determined. Different SAR values are compared with the established standard SAR limit to provide a safety regulation of the imaging system. A monostatic radar-based confocal microwave imaging algorithm is applied to generate the image of tumor inside a six-layer human head phantom model. S-parameter signals obtained from circular EBG loaded patch antenna in different scanning modes are utilized in the imaging algorithm to effectively produce a high-resolution image which reliably indicates the presence of tumor inside human brain. PMID:29623087
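Monostatic confocal microwave imaging is, at its core, delay-and-sum focusing: each recorded backscatter trace is indexed at the round-trip delay to a candidate focal point and the contributions are summed, so a scatterer such as a tumor adds coherently. The sketch below is a generic, simplified version of that algorithm (uniform propagation speed, no artifact-removal preprocessing; the array shapes and parameter names are assumptions), not the authors' implementation.

```python
import numpy as np

def confocal_image(signals, antenna_pos, pixels, fs, v):
    """Monostatic delay-and-sum confocal imaging (simplified).

    signals     : (n_antennas, n_samples) backscattered time traces
    antenna_pos : (n_antennas, 2) antenna coordinates (m)
    pixels      : (n_pixels, 2) focal-point coordinates (m)
    fs          : sampling rate (Hz)
    v           : assumed propagation speed in tissue (m/s)
    """
    image = np.zeros(len(pixels))
    for i, p in enumerate(pixels):
        # Round-trip delay from each antenna to this focal point
        delays = 2 * np.linalg.norm(antenna_pos - p, axis=1) / v
        idx = np.clip(np.round(delays * fs).astype(int), 0, signals.shape[1] - 1)
        # Coherent sum of the delayed samples; its energy is the pixel value
        image[i] = np.sum(signals[np.arange(len(idx)), idx]) ** 2
    return image
```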
Is the encoding of Reward Prediction Error reliable during development?
Keren, Hanna; Chen, Gang; Benson, Brenda; Ernst, Monique; Leibenluft, Ellen; Fox, Nathan A; Pine, Daniel S; Stringaris, Argyris
2018-05-16
Reward Prediction Errors (RPEs), defined as the difference between the expected and received outcomes, are integral to reinforcement learning models and play an important role in development and psychopathology. In humans, RPE encoding can be estimated using fMRI recordings; however, a basic measurement property of RPE signals, their test-retest reliability across different time scales, remains an open question. In this paper, we examine the 3-month and 3-year reliability of RPE encoding in youth (mean age at baseline = 10.6 ± 0.3 years), a period of developmental transitions in reward processing. We show that RPE encoding is differentially distributed by valence, with positive RPEs encoded predominantly in the striatum and negative RPEs primarily in the insula. The encoding of negative RPE values is highly reliable in the right insula, across both the long and the short time intervals. Insula reliability for RPE encoding is the most robust finding, while other regions, such as the striatum, are less consistent. Striatal reliability also reached significance once we covaried for factors that possibly confounded the signal-to-noise ratio. By contrast, task activation during feedback in the striatum is highly reliable across both time intervals. These results demonstrate the valence-dependent differential encoding of RPE signals between the insula and striatum, and the consistency of RPE signals, or lack thereof, during childhood and into adolescence. Characterizing the regions where the RPE signal in BOLD fMRI is a reliable marker is key for estimating reward-processing alterations in longitudinal designs, such as developmental or treatment studies. Copyright © 2018 Elsevier Inc. All rights reserved.
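For readers unfamiliar with the quantity being tested, the RPE in a standard reinforcement-learning model is simply the received minus the expected outcome, and it drives the value update. A minimal sketch follows; the learning rate and reward probabilities are hypothetical illustration values, not parameters from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, value = 0.1, 0.0  # learning rate and initial expected outcome

for trial in range(200):
    reward = rng.choice([0.0, 1.0], p=[0.3, 0.7])  # probabilistic feedback
    rpe = reward - value          # Reward Prediction Error: received - expected
    value += alpha * rpe          # expectation drifts toward the true mean (0.7)

print(round(value, 2))
```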
Using a Hybrid Model to Forecast the Prevalence of Schistosomiasis in Humans
Zhou, Lingling; Xia, Jing; Yu, Lijing; Wang, Ying; Shi, Yun; Cai, Shunxiang; Nie, Shaofa
2016-01-01
Background: We previously proposed a hybrid model combining the autoregressive integrated moving average (ARIMA) and nonlinear autoregressive neural network (NARNN) models for forecasting schistosomiasis. Our purpose in the current study was to forecast the annual prevalence of human schistosomiasis in Yangxin County using our ARIMA-NARNN model, thereby further testing the reliability of our hybrid model. Methods: We used the ARIMA, NARNN and ARIMA-NARNN models to fit and forecast the annual prevalence of schistosomiasis. The modeling period covered the annual prevalence from 1956 to 2008, while the testing period covered 2009 to 2012. The mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) were used to measure model performance. We reconstructed the hybrid model to forecast the annual prevalence from 2013 to 2016. Results: The modeling and testing errors generated by the ARIMA-NARNN model were lower than those obtained from either the single ARIMA or NARNN models. The predicted annual prevalence from 2013 to 2016 demonstrated an initial decreasing trend, followed by an increase. Conclusions: The ARIMA-NARNN model can be well applied to the analysis of surveillance data for early warning systems for the control and elimination of schistosomiasis. PMID:27023573
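The hybrid structure, an ARIMA component for the linear autocorrelation plus a neural network fitted to the nonlinear residual, can be sketched generically as below. This is an illustration on synthetic data with arbitrary order and lag choices; statsmodels and scikit-learn stand in for whatever tooling the authors used.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
t = np.arange(53)  # stand-in for an annual prevalence series (e.g. 1956-2008)
prevalence = 10 * np.exp(-0.05 * t) + np.sin(t / 3) + rng.normal(0, 0.2, 53)

# 1) Linear component: ARIMA captures the autocorrelated trend
arima = ARIMA(prevalence, order=(1, 1, 1)).fit()
resid = arima.resid

# 2) Nonlinear component: a small neural net models the ARIMA residuals
lags = 3
X = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
y = resid[lags:]
nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=1).fit(X, y)

# 3) Hybrid one-step forecast = ARIMA forecast + predicted residual
next_resid = nn.predict(resid[-lags:].reshape(1, -1))[0]
print(arima.forecast(1)[0] + next_resid)
```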
ORDEM 3.0 and MASTER-2009 Modeled Small Debris Population Comparison
NASA Technical Reports Server (NTRS)
Krisko, P. H.; Flegel, S.
2012-01-01
The latest versions of the two premier orbital debris engineering models, NASA's ORDEM 3.0 and ESA's MASTER-2009, have been publicly released within the last year. Both models have gone through significant advancements since inception, and now represent the state-of-the-art in orbital debris knowledge of their respective agencies. The purpose of these models is to provide satellite designers/operators and debris researchers with reliable estimates of the artificial debris environment in near-Earth orbit. The small debris environment within the size range of 1 mm to 1 cm is of particular interest to both human and robotic spacecraft programs. These objects are much more numerous than larger trackable debris but are still large enough to cause significant, if not catastrophic, damage to spacecraft upon impact. They are also small enough to elude routine detection by existing observation systems (radar and telescope). Without reliable detection the modeling of these populations has always coupled theoretical origins with supporting observational data in different degrees. This paper describes the population generation and categorization of both ORDEM 3.0 and MASTER-2009; their sources (both known and presumed), current supporting data and theory, and methods of population verification. Fluxes on spacecraft for chosen orbits are presented and discussed. Future collaborative analysis is noted.
Composite Stress Rupture: A New Reliability Model Based on Strength Decay
NASA Technical Reports Server (NTRS)
Reeder, James R.
2012-01-01
A model is proposed to estimate reliability for stress rupture of composite overwrapped pressure vessels (COPVs) and similar composite structures. This new reliability model is generated by assuming strength degradation (or decay) over time. The model suggests that most of the strength decay occurs late in life. The strength decay model is shown to predict a response similar to that predicted by a traditional reliability model for stress rupture based on tests at a single stress level. In addition, the model predicts that even though there is strength decay due to proof loading, a significant overall increase in reliability is gained by eliminating any weak vessels, which would fail early. The model predicts that there should be significant periods of safe life following proof loading, because time is required for the strength to decay from the proof stress level to the subsequent loading level. Suggestions for testing the strength decay reliability model have been made. If the strength decay reliability model predictions are shown through testing to be accurate, COPVs may be designed to carry a higher level of stress than is currently allowed, which will enable the production of lighter structures.
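A toy Monte Carlo makes the abstract's two qualitative claims concrete: with decay concentrated late in life and a proof test that screens out weak vessels, reliability stays essentially flat for a long safe-life period and only drops near end of life. All forms and parameter values below (Weibull initial strengths, power-law decay, the stress levels) are illustrative assumptions, not the model in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
s0 = rng.weibull(20.0, n)        # initial strengths (Weibull, scale normalized to 1)
proof, service = 0.80, 0.60      # proof stress and subsequent service stress
k = 8                            # large exponent: decay concentrates late in life

def strength(t):
    """Strength after a fraction t of design life (power-law decay)."""
    return s0 * (1 - 0.30 * t ** k)

survivors = s0 > proof           # proof test screens out the weak vessels
for t in (0.25, 0.5, 0.75, 1.0):
    reliability = (strength(t)[survivors] > service).mean()
    print(f"t = {t:.2f} of life: reliability = {reliability:.5f}")
```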
Development of an Integrated Human Factors Toolkit
NASA Technical Reports Server (NTRS)
Resnick, Marc L.
2003-01-01
An effective integration of human abilities and limitations is crucial to the success of all NASA missions. The Integrated Human Factors Toolkit facilitates this integration by assisting system designers and analysts to select the human factors tools that are most appropriate for the needs of each project. The HF Toolkit contains information about a broad variety of human factors tools addressing human requirements in the physical, information processing and human reliability domains. Analysis of each tool includes consideration of the most appropriate design stage, the amount of expertise in human factors that is required, the amount of experience with the tool and the target job tasks that are needed, and other factors that are critical for successful use of the tool. The benefits of the Toolkit include improved safety, reliability and effectiveness of NASA systems throughout the agency. This report outlines the initial stages of development for the Integrated Human Factors Toolkit.
A self-learning camera for the validation of highly variable and pseudorandom patterns
NASA Astrophysics Data System (ADS)
Kelley, Michael
2004-05-01
Reliable and productive manufacturing operations have depended on people to quickly detect and solve problems whenever they appear. Over the last 20 years, more and more manufacturing operations have embraced machine vision systems to increase productivity, reliability and cost-effectiveness, including reducing the number of human operators required. Although machine vision technology has long been capable of solving simple problems, it has still not been broadly implemented. The reason is that until now, no machine vision system has been designed to meet the unique demands of complicated pattern recognition. The ZiCAM family was specifically developed to be the first practical hardware to meet these needs. To be able to address non-traditional applications, the machine vision industry must include smart camera technology that meets its users' demands for lower costs, better performance and the ability to address applications of irregular lighting, patterns and color. The next-generation smart cameras will need to evolve as a fundamentally different kind of sensor, with new technology that behaves like a human but performs like a computer. Neural network based systems, coupled with self-taught, n-space, non-linear modeling, promise to be the enabler of the next generation of machine vision equipment. Image processing technology is now available that enables a system to match an operator's subjectivity. A Zero-Instruction-Set-Computer (ZISC) powered smart camera allows high-speed fuzzy-logic processing without the need for computer programming. This can address applications of validating highly variable and pseudo-random patterns. A hardware-based implementation of a neural network, the Zero-Instruction-Set-Computer, enables a vision system to "think" and "inspect" like a human, with the speed and reliability of a machine.
Space Shuttle Reusable Solid Rocket Motor
NASA Technical Reports Server (NTRS)
Moore, Dennis; Phelps, Jack; Perkins, Fred
2010-01-01
RSRM is a highly reliable human-rated Solid Rocket Motor: a) Largest diameter SRM to achieve flight status; b) Only human-rated SRM. RSRM reliability achieved by: a) Applying special attention to Process Control, Testing, and Postflight; b) Communicating often; c) Identifying and addressing issues in a disciplined approach; d) Identifying and fully dispositioning "out-of-family" conditions; e) Addressing minority opinions; and f) Learning our lessons.
NASA Astrophysics Data System (ADS)
Becker, R.; Usman, M.
2017-12-01
A SWAT (Soil Water Assessment Tool) model is applied in the semi-arid Punjab region in Pakistan. The physically based hydrological model is set up to simulate hydrological processes and water resources demands under future land use, climate change and irrigation management scenarios. In order to run the model successfully, detailed focus is laid on the calibration procedure. The study deals with the following calibration issues: (i) lack of reliable calibration/validation data, (ii) difficulty of accurately modeling a highly managed system with a physically based hydrological model, and (iii) use of alternative and spatially distributed data sets for model calibration. In our study area, field observations are rare, and the entirely human-controlled irrigation system renders central calibration parameters (e.g. runoff/curve number) unsuitable, as it cannot be assumed that they represent the natural behavior of the hydrological system. Principal hydrological processes can, however, still be inferred from evapotranspiration (ET). Usman et al. (2015) derived satellite-based monthly ET data for our study area with SEBAL (Surface Energy Balance Algorithm) and created a reliable ET data set, which we use in this study to calibrate our SWAT model. The initial SWAT model performance is evaluated with respect to the SEBAL results using correlation coefficients, RMSE, Nash-Sutcliffe efficiencies and mean differences. Particular focus is laid on the spatial patterns, investigating the potential of a spatially differentiated parameterization instead of spatially uniform calibration data. A sensitivity analysis reveals the most sensitive parameters with respect to changes in ET, which are then selected for the calibration process. Using the SEBAL ET product, we calibrate the SWAT model for the period 2005-2006 using a dynamically dimensioned global search algorithm to minimize RMSE. The model improvement after the calibration procedure is finally evaluated, based on the previously chosen evaluation criteria, for the period 2007-2008. The study reveals the sensitivity of SWAT model parameters to changes in ET in a semi-arid, human-controlled system and the potential of calibrating those parameters using satellite-derived ET data.
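The calibration loop described above reduces to: propose a parameter set, run the model, score simulated ET against the satellite product, keep the best. A minimal sketch follows; `run_swat` is a hypothetical wrapper around a SWAT run, and plain random search stands in for the dynamically dimensioned search (DDS) algorithm used in the study.

```python
import numpy as np

def rmse(sim, obs):
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def calibrate(run_swat, et_sebal, param_bounds, n_iter=500, seed=0):
    """Search parameter space to minimize RMSE against satellite-derived ET.

    run_swat     : hypothetical callable(params) -> simulated monthly ET array
    et_sebal     : observed (SEBAL) monthly ET array on the same grid/period
    param_bounds : dict of parameter name -> (low, high)
    """
    rng = np.random.default_rng(seed)
    best_params, best_score = None, np.inf
    for _ in range(n_iter):
        # Random search stands in here for the DDS algorithm of the study
        params = {k: rng.uniform(lo, hi) for k, (lo, hi) in param_bounds.items()}
        score = rmse(run_swat(params), et_sebal)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score
```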
DOE Office of Scientific and Technical Information (OSTI.GOV)
Voisin, Nathalie; Hejazi, Mohamad I.; Leung, L. Ruby
To advance understanding of the interactions between human activities and the water cycle, an integrated terrestrial water cycle component has been developed for Earth system models. This includes a land surface model fully coupled to a river routing model and a generic water management model to simulate natural and regulated flows. A global integrated assessment model and its regionalized version for the U.S. are used to simulate water demand consistent with energy technology and socioeconomic scenarios. Human influence on the hydrologic cycle includes regulation and storage from reservoirs, consumptive use and withdrawal from multiple sectors (irrigation and non-irrigation), and overall redistribution of water resources in space and time. As groundwater provides an important source of water supply for irrigation and other uses, the integrated modeling framework has been extended with a simplified representation of groundwater as an additional supply source, and return flow generated from differences between withdrawals and consumptive uses from both groundwater and surface water systems. The groundwater supply and return flow modules are evaluated by analyzing the simulated regulated flow, reservoir storage and supply deficit for irrigation and non-irrigation sectors over major hydrologic regions of the conterminous U.S. The modeling framework is then used to provide insights on the reliability of water resources by isolating the reliability due to return flow and/or groundwater sources of water. Our results show that a high sectoral ratio of withdrawals over consumptive demand adds significant stress on water resources management, which can be alleviated by reservoir storage capacity. The return flow representation therefore exhibits a clear east-west contrast in its hydrologic signature, as well as in its ability to help meet water demand. Groundwater use has a limited hydrologic signature, but its most pronounced signature is in terms of decreasing water supply deficit. The combined return flow and groundwater use signature conserves the east-west contrast, with overall uncertainties due to the groundwater-return flow representation, varying ratios combined with different hydroclimate conditions, storage infrastructures, sectoral water uses and dependence on groundwater. The redistribution of surface water and groundwater by human activities, and the uncertainties in their representation, have important implications for the water and energy balances in the Earth system and land-atmosphere interactions.
NASA Astrophysics Data System (ADS)
Gromek, Katherine Emily
A novel computational and inference framework of the physics-of-failure (PoF) reliability modeling for complex dynamic systems has been established in this research. The PoF-based reliability models are used to perform a real-time simulation of system failure processes, so that system-level reliability modeling would constitute inferences from checking the status of component-level reliability at any given time. The "agent autonomy" concept is applied as a solution method for the system-level probabilistic PoF-based (i.e. PPoF-based) modeling. This concept originated from artificial intelligence (AI) as a leading intelligent computational inference in the modeling of multi-agent systems (MAS). The concept of agent autonomy in the context of reliability modeling was first proposed by M. Azarkhail [1], where a fundamentally new idea of system representation by autonomous intelligent agents for the purpose of reliability modeling was introduced. The contribution of the current work lies in the further development of the agent autonomy concept, particularly the refined agent classification within the scope of PoF-based system reliability modeling, new approaches to the learning and autonomy properties of the intelligent agents, and modeling of interacting failure mechanisms within the dynamic engineering system. The autonomous property of intelligent agents is defined as an agent's ability to self-activate, deactivate or completely redefine its role in the analysis. This property of agents, together with the ability to model interacting failure mechanisms of system elements, makes agent autonomy fundamentally different from all existing methods of probabilistic PoF-based reliability modeling. 1. Azarkhail, M., "Agent Autonomy Approach to Physics-Based Reliability Modeling of Structures and Mechanical Systems", PhD thesis, University of Maryland, College Park, 2007.
NASA Technical Reports Server (NTRS)
Wallace, Dolores R.
2003-01-01
In FY01 we learned that hardware reliability models need substantial changes to account for differences in software, thus making software reliability measurements more effective, accurate, and easier to apply. These reliability models are generally based on familiar distributions or parametric methods. An obvious question is "What new statistical and probability models can be developed using non-parametric and distribution-free methods instead of the traditional parametric methods?" Two approaches to software reliability engineering appear somewhat promising. The first study, begun in FY01, is grounded in hardware reliability, a very well established science that has many aspects that can be applied to software. This research effort has investigated mathematical aspects of hardware reliability and has identified those applicable to software. Currently the research effort is applying and testing these approaches to software reliability measurement. These parametric models require much project data that may be difficult to apply and interpret. Projects at GSFC are often complex in both technology and schedules. Assessing and estimating reliability of the final system is extremely difficult when various subsystems are tested and completed long before others. Parametric and distribution-free techniques may offer a new and accurate way of modeling failure time and other project data to provide earlier and more accurate estimates of system reliability.
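One concrete distribution-free technique of the kind the abstract alludes to is the Kaplan-Meier estimator, which turns failure times (with censoring) into a reliability curve without assuming any parametric form. A minimal sketch with hypothetical test data follows; it is an illustration of the general technique, not a method from the report.

```python
import numpy as np

def kaplan_meier(times, failed):
    """Distribution-free reliability estimate R(t) from failure/censor times.

    times  : time at which each unit failed or was censored
    failed : True if the unit failed at that time (False = censored)
    """
    order = np.argsort(times)
    times, failed = np.asarray(times)[order], np.asarray(failed)[order]
    at_risk, R, curve = len(times), 1.0, []
    for t, f in zip(times, failed):
        if f:
            R *= 1 - 1 / at_risk     # step down at each observed failure
        at_risk -= 1
        curve.append((t, R))
    return curve

# Hypothetical software test data: hours to failure; False = still running
print(kaplan_meier([12, 30, 45, 45, 60, 90],
                   [True, True, False, True, True, False]))
```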
Calculating system reliability with SRFYDO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morzinski, Jerome; Anderson - Cook, Christine M; Klamann, Richard M
2010-01-01
SRFYDO is a process for estimating reliability of complex systems. Using information from all applicable sources, including full-system (flight) data, component test data, and expert (engineering) judgment, SRFYDO produces reliability estimates and predictions. It is appropriate for series systems with possibly several versions of the system which share some common components. It models reliability as a function of age and up to 2 other lifecycle (usage) covariates. Initial output from its Exploratory Data Analysis mode consists of plots and numerical summaries so that the user can check data entry and model assumptions, and help determine a final form for the system model. The System Reliability mode runs a complete reliability calculation using Bayesian methodology. This mode produces results that estimate reliability at the component, sub-system, and system level. The results include estimates of uncertainty, and can predict reliability at some not-too-distant time in the future. This paper presents an overview of the underlying statistical model for the analysis, discusses model assumptions, and demonstrates usage of SRFYDO.
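The Bayesian machinery for a simple series system can be illustrated compactly: give each component a Beta prior, update it with pass/fail test counts, and propagate by Monte Carlo, since series-system reliability is the product of component reliabilities. The sketch below is a generic illustration of that idea with hypothetical test counts, not SRFYDO itself.

```python
import numpy as np

rng = np.random.default_rng(7)

# Component test data: (successes, failures); Beta(1, 1) priors assumed
tests = {"igniter": (48, 2), "valve": (95, 1), "sensor": (29, 0)}

draws = np.empty(20_000)
for i in range(draws.size):
    # Posterior for each component is Beta(1 + successes, 1 + failures)
    r = [rng.beta(1 + s, 1 + f) for s, f in tests.values()]
    draws[i] = np.prod(r)  # series system: product of component reliabilities

print(f"system reliability: median = {np.median(draws):.3f}, "
      f"90% interval = ({np.quantile(draws, 0.05):.3f}, "
      f"{np.quantile(draws, 0.95):.3f})")
```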
The role of in vitro methods as alternatives to animals in toxicity testing.
Anadón, Arturo; Martínez, María Aranzazu; Castellano, Victor; Martínez-Larrañaga, María Rosa
2014-01-01
It is accepted that animal testing should be reduced, refined or replaced as far as is practicably possible. There is also a wide variety of in vitro models, which are used for screening studies and mechanistic investigations. The biomedical reliability of an in vitro assay is essential in pharmaceutical development. Furthermore, it is necessary that cells used in in vitro testing mimic the phenotype of cells within the human target tissue. The focus of this review article is to identify the key points of in vitro assays. In doing so, the authors take into account the chemical agents that are assessed and the integrated in vitro testing strategies. There is a transfer of toxicological data from primary in vivo animal studies to in vitro assays. The key elements for designing an integrated in vitro testing strategy are summarized as follows: exposure modeling of chemical agents for in vitro testing; data gathering, sharing and read-across for testing a class of chemicals; a battery of tests to assemble a broad spectrum of data on different mechanisms of action to predict toxic effects; and applicability of the tests, with flexibility to adjust the integrated in vitro testing strategy to the test substance. While these methods will be invaluable if effective, more studies must be done to ensure the reliability and suitability of these tests for humans.
DOE Office of Scientific and Technical Information (OSTI.GOV)
April M. Whaley; Stacey M. L. Hendrickson; Ronald L. Boring
In response to Staff Requirements Memorandum (SRM) SRM-M061020, the U.S. Nuclear Regulatory Commission (NRC) is sponsoring work to update the technical basis underlying human reliability analysis (HRA) in an effort to improve the robustness of HRA. The ultimate goal of this work is to develop a hybrid of existing methods addressing limitations of current HRA models and in particular issues related to intra- and inter-method variabilities and results. This hybrid method is now known as the Integrated Decision-tree Human Event Analysis System (IDHEAS). Existing HRA methods have looked at elements of the psychological literature, but there has not previously been a systematic attempt to translate the complete span of cognition from perception to action into mechanisms that can inform HRA. Therefore, a first step of this effort was to perform a literature search of psychology, cognition, behavioral science, teamwork, and operating performance to incorporate current understanding of human performance in operating environments, thus affording an improved technical foundation for HRA. However, this literature review went one step further by mining the literature findings to establish causal relationships and explicit links between the different types of human failures, performance drivers and associated performance measures ultimately used for quantification. This is the first of two papers that detail the literature review (paper 1) and its product (paper 2). This paper describes the literature review and the high-level architecture used to organize the literature review, and the second paper (Whaley, Hendrickson, Boring, & Xing, these proceedings) describes the resultant cognitive framework.
Braun, Katharina; Böhnke, Frank; Stark, Thomas
2012-06-01
We present a complete geometric model of the human cochlea, including the segmentation and reconstruction of the fluid-filled chambers scala tympani and scala vestibuli, the lamina spiralis ossea and the vibrating structure (cochlear partition). Future fluid-structure coupled simulations require a reliable geometric model of the cochlea. The aim of this study was to present an anatomical model of the human cochlea, which can be used for further numerical calculations. Using high-resolution micro-computed tomography (µCT), we obtained images of a cut human temporal bone with a spatial resolution of 5.9 µm. Images were manually segmented to obtain the three-dimensional reconstruction of the cochlea. Due to the high resolution of the µCT data, a detailed examination of the geometry of the twisted cochlear partition near the oval and the round window as well as a precise illustration of the helicotrema was possible. After reconstruction of the lamina spiralis ossea, the cochlear partition and the curved geometry of the scala vestibuli and the scala tympani were presented. The obtained data sets were exported as stereolithography (STL) files. These files represent a complete framework for future numerical simulations of mechanical (acoustic) wave propagation on the cochlear partition in the form of mathematical mechanical cochlea models. Additional quantitative information concerning heights, lengths and volumes of the scalae was found and compared with previous results.
Yousefinezhadi, Taraneh; Jannesar Nobari, Farnaz Attar; Goodari, Faranak Behzadi; Arab, Mohammad
2016-01-01
Introduction: In any complex human system, human error is inevitable and cannot be eliminated by blaming wrongdoers. With the aim of improving Intensive Care Unit (ICU) reliability in hospitals, this research therefore tries to identify and analyze ICU process failure modes using a systematic approach to errors. Methods: In this descriptive research, data were gathered qualitatively by observations, document reviews, and Focus Group Discussions (FGDs) with the process owners in two selected ICUs in Tehran in 2014. Data analysis was quantitative, based on the failures' Risk Priority Numbers (RPNs) from the Failure Mode and Effects Analysis (FMEA) method used. In addition, some causes of failures were analyzed with the qualitative Eindhoven Classification Model (ECM). Results: Through the FMEA methodology, 378 potential failure modes from 180 ICU activities in hospital A and 184 potential failures from 99 ICU activities in hospital B were identified and evaluated. Then, with 90% reliability (RPN ≥ 100), 18 failures in hospital A and 42 in hospital B were identified as unacceptable risks, and their causes were analyzed by ECM. Conclusions: Applying modified PFMEA to improve the reliability of processes in two selected ICUs in two different kinds of hospitals shows that this method empowers staff to identify, evaluate, prioritize and analyze all potential failure modes, and also makes them eager to identify causes, recommend corrective actions and even participate in improving processes without feeling blamed by top management. Moreover, by combining FMEA and ECM, team members can easily identify failure causes from a health care perspective. PMID:27157162
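In FMEA, each failure mode's Risk Priority Number is the product of its severity, occurrence, and detectability ratings (each commonly scored 1-10), and modes above a cutoff (RPN ≥ 100 here) are flagged for cause analysis. A minimal sketch follows; the ICU failure modes and their scores are hypothetical examples, not items from the study.

```python
# Hypothetical ICU failure modes scored by an FMEA team:
# (severity, occurrence, detectability), each rated 1-10
modes = {
    "wrong ventilator alarm limits": (8, 4, 5),
    "delayed lab result transcription": (5, 6, 3),
    "mislabeled infusion pump syringe": (9, 3, 6),
}

for name, (sev, occ, det) in modes.items():
    rpn = sev * occ * det  # Risk Priority Number
    flag = "ANALYZE CAUSES" if rpn >= 100 else "acceptable"
    print(f"{name}: RPN={rpn} -> {flag}")
```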
A modeling framework for exposing risks in complex systems.
Sharit, J
2000-08-01
This article introduces and develops a modeling framework for exposing risks in the form of human errors and adverse consequences in high-risk systems. The modeling framework is based on two components: a two-dimensional theory of accidents in systems developed by Perrow in 1984, and the concept of multiple system perspectives. The theory of accidents differentiates systems on the basis of two sets of attributes. One set characterizes the degree to which systems are interactively complex; the other emphasizes the extent to which systems are tightly coupled. The concept of multiple perspectives provides alternative descriptions of the entire system that serve to enhance insight into system processes. The usefulness of these two model components derives from a modeling framework that cross-links them, enabling a variety of work contexts to be exposed and understood that would otherwise be very difficult or impossible to identify. The model components and the modeling framework are illustrated in the case of a large and comprehensive trauma care system. In addition to its general utility in the area of risk analysis, this methodology may be valuable in applications of current methods of human and system reliability analysis in complex and continually evolving high-risk systems.
Measuring human remains in the field: Grid technique, total station, or MicroScribe?
Sládek, Vladimír; Galeta, Patrik; Sosna, Daniel
2012-09-10
Although three-dimensional (3D) coordinates for human intra-skeletal landmarks are among the most important data that anthropologists have to record in the field, little is known about the reliability of the various measuring techniques. We compared the reliability of three techniques used for 3D measurement of human remains in the field: grid technique (GT), total station (TS), and MicroScribe (MS). We measured 365 field osteometric points on 12 skeletal sequences excavated at the Late Medieval/Early Modern churchyard in Všeruby, Czech Republic. We compared intra-observer, inter-observer, and inter-technique variation using mean difference (MD), mean absolute difference (MAD), standard deviation of difference (SDD), and limits of agreement (LA). All three measuring techniques can be used when accepted error ranges can be measured in centimeters. When a range of accepted error measurable in millimeters is needed, MS offers the best solution. TS can achieve the same reliability as MS, but only when the laser beam is accurately pointed into the center of the prism. When the prism is not accurately oriented, TS produces unreliable data. TS is more sensitive to initialization than is MS. GT measures the human skeleton with acceptable reliability for general purposes but is insufficient when highly accurate skeletal data are needed. We observed high inter-technique variation, indicating that just one technique should be used when spatial data from one individual are recorded. Subadults are measured with slightly lower error than are adults. The effect of maximum excavated skeletal length has little practical significance in field recording. When MS is not available, we offer practical suggestions that can help to increase reliability when measuring the human skeleton in the field. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
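The agreement statistics used here are standard Bland-Altman-type quantities: mean difference (bias), mean absolute difference, standard deviation of the differences, and limits of agreement (MD ± 1.96·SDD). A minimal sketch on hypothetical paired coordinates follows.

```python
import numpy as np

# Hypothetical paired measurements (mm) of the same landmarks by two techniques
ts = np.array([102.1, 88.4, 130.9, 75.2, 99.7])   # total station
ms = np.array([101.8, 88.9, 131.4, 74.8, 99.5])   # MicroScribe

d = ts - ms
md  = d.mean()                            # mean difference (bias)
mad = np.abs(d).mean()                    # mean absolute difference
sdd = d.std(ddof=1)                       # SD of differences
la  = (md - 1.96 * sdd, md + 1.96 * sdd)  # limits of agreement

print(f"MD={md:.2f}  MAD={mad:.2f}  SDD={sdd:.2f}  "
      f"LA=({la[0]:.2f}, {la[1]:.2f})")
```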
Accurate reliability analysis method for quantum-dot cellular automata circuits
NASA Astrophysics Data System (ADS)
Cui, Huanqing; Cai, Li; Wang, Sen; Liu, Xiaoqiang; Yang, Xiaokuo
2015-10-01
Probabilistic transfer matrix (PTM) is a widely used model in circuit reliability research. However, the PTM model cannot reflect the impact of input signals on reliability, so it does not fully conform to the mechanism of quantum-dot cellular automata (QCA), a novel field-coupled nanoelectronic device. It is difficult to get accurate results when the PTM model is used to analyze the reliability of QCA circuits. To solve this problem, we present fault tree models of QCA fundamental devices according to different input signals. After that, binary decision diagrams (BDDs) are used to quantitatively investigate the reliability of two QCA XOR gates based on the presented models. By employing the fault tree models, the impact of input signals on reliability can be identified clearly, and the crucial components of a circuit can be found precisely based on the importance values (IVs) of components. This method thus contributes to the construction of reliable QCA circuits.
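To make the input dependence concrete, the sketch below enumerates the fault states of a majority-gate XOR with a hypothetical input-dependent flip probability, which is exactly the kind of dependence a fixed PTM cannot express. Exhaustive enumeration stands in for the paper's BDD evaluation, the error values are invented, and the output gate is treated as ideal for brevity.

```python
from itertools import product

def maj(a, b, c):
    """Three-input majority vote, the QCA primitive gate."""
    return int(a + b + c >= 2)

def p_flip(inputs):
    """Hypothetical input-dependent error: mixed inputs polarize more weakly."""
    return 0.02 if len(set(inputs)) > 1 else 0.005

def xor_reliability():
    """Reliability of XOR(a,b) = MAJ(MAJ(a, !b, 0), MAJ(!a, b, 0), 1),
    averaged over inputs; internal gate faults enumerated exhaustively."""
    ok = 0.0
    for a, b in product((0, 1), repeat=2):
        g1, g2 = (a, 1 - b, 0), (1 - a, b, 0)
        for f1, f2 in product((0, 1), repeat=2):   # fault = output bit flipped
            out = maj(maj(*g1) ^ f1, maj(*g2) ^ f2, 1)
            p = (p_flip(g1) if f1 else 1 - p_flip(g1)) * \
                (p_flip(g2) if f2 else 1 - p_flip(g2))
            ok += 0.25 * p * (out == (a ^ b))
    return ok

print(f"XOR gate reliability: {xor_reliability():.4f}")
```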
A particle swarm model for estimating reliability and scheduling system maintenance
NASA Astrophysics Data System (ADS)
Puzis, Rami; Shirtz, Dov; Elovici, Yuval
2016-05-01
Modifying data and information system components may introduce new errors and deteriorate the reliability of the system. Reliability can be efficiently regained with reliability centred maintenance, which requires reliability estimation for maintenance scheduling. A variant of the particle swarm model is used to estimate reliability of systems implemented according to the model view controller paradigm. Simulations based on data collected from an online system of a large financial institute are used to compare three component-level maintenance policies. Results show that appropriately scheduled component-level maintenance greatly reduces the cost of upholding an acceptable level of reliability by reducing the need in system-wide maintenance.
NASA Astrophysics Data System (ADS)
Taub, Marc Barry
Transdermal drug delivery is an alternative approach to the systemic delivery of pharmaceuticals where drugs are administered through the skin and absorbed percutaneously. This method of delivery offers several advantages over more traditional routes; most notably, the avoidance of the first-pass metabolism of the liver and gut, the ability to offer controlled release rates, and the possibility for novel devices. Pressure sensitive adhesives (PSAs) are used to bond transdermal drug delivery devices to the skin because of their good initial and long-term adhesion, clean removability, and skin and drug compatibility. However, an understanding of the mechanics of adhesion to the dermal layer, together with quantitative and reproducible test methods for measuring adhesion, have been lacking. This study utilizes a mechanics-based approach to quantify the interfacial adhesion of PSAs bonded to selected substrates, including human dermal tissue. The delamination of PSA layers is associated with cavitation in the PSA followed by the formation of an extensive cohesive zone behind the debond tip. A quantitative metrology was developed to assess the adhesion and delamination of PSAs, such that it could be possible to easily distinguish between the adhesive characteristics of different PSA compositions and to provide a quantitative basis from which the reliability of adhesive layers bonded to substrates could be studied. A mechanics-based model was also developed to predict debonding in terms of the relevant energy dissipation mechanisms active during this process. As failure of transdermal devices may occur cohesively within the PSA layer, adhesively at the interface between the PSA and the skin, or cohesively between the corneocytes that comprise the outermost layer of the skin, it was also necessary to explore the mechanical and fracture properties of human skin. The out-of-plane delamination of corneocytes was studied by determining the strain energy release rate during debonding of cantilever-beam specimens containing thin layers of human dermal tissue at their midline. Finally, the interfacial adhesion of PSAs bonded to human skin was studied and the mechanics model that was developed for PSA failure was extended to provide the capability for in vivo reliability predictions for transdermal systems bonded to human skin.
Asher, Lucy; Furrer, Sibylle; Lechner, Isabel; Würbel, Hanno; Melotti, Luca
2017-01-01
In humans, the personality dimension ‘sensory processing sensitivity (SPS)’, also referred to as “high sensitivity”, involves deeper processing of sensory information, which can be associated with physiological and behavioral overarousal. However, it has not been studied up to now whether this dimension also exists in other species. SPS can influence how people perceive the environment and how this affects them, thus a similar dimension in animals would be highly relevant with respect to animal welfare. We therefore explored whether SPS translates to dogs, one of the primary model species in personality research. A 32-item questionnaire to assess the “highly sensitive dog score” (HSD-s) was developed based on the “highly sensitive person” (HSP) questionnaire. A large-scale, international online survey was conducted, including the HSD questionnaire, as well as questions on fearfulness, neuroticism, “demographic” (e.g. dog sex, age, weight; age at adoption, etc.) and “human” factors (e.g. owner age, sex, profession, communication style, etc.), and the HSP questionnaire. Data were analyzed using linear mixed effect models with forward stepwise selection to test prediction of HSD-s by the above-mentioned factors, with country of residence and dog breed treated as random effects. A total of 3647 questionnaires were fully completed. HSD-, fearfulness, neuroticism and HSP-scores showed good internal consistencies, and HSD-s only moderately correlated with fearfulness and neuroticism scores, paralleling previous findings in humans. Intra- (N = 447) and inter-rater (N = 120) reliabilities were good. Demographic and human factors, including HSP score, explained only a small amount of the variance of HSD-s. A PCA analysis identified three subtraits of SPS, comparable to human findings. Overall, the measured personality dimension in dogs showed good internal consistency, partial independence from fearfulness and neuroticism, and good intra- and inter-rater reliability, indicating good construct validity of the HSD questionnaire. Human and demographic factors only marginally affected the HSD-s suggesting that, as hypothesized for human SPS, a genetic basis may underlie this dimension within the dog species. PMID:28520773
NASA Astrophysics Data System (ADS)
Ndu, Obibobi Kamtochukwu
To ensure that estimates of risk and reliability inform design and resource allocation decisions in the development of complex engineering systems, early engagement in the design life cycle is necessary. An unfortunate constraint on the accuracy of such estimates at this stage of concept development is the limited amount of high fidelity design and failure information available on the actual system under development. Applying the human ability to learn from experience and augment our state of knowledge to evolve better solutions mitigates this limitation. However, the challenge lies in formalizing a methodology that takes this highly abstract, but fundamentally human cognitive, ability and extending it to the field of risk analysis while maintaining the tenets of generalization, Bayesian inference, and probabilistic risk analysis. We introduce an integrated framework for inferring the reliability, or other probabilistic measures of interest, of a new system or a conceptual variant of an existing system. Abstractly, our framework is based on learning from the performance of precedent designs and then applying the acquired knowledge, appropriately adjusted based on degree of relevance, to the inference process. This dissertation presents a method for inferring properties of the conceptual variant using a pseudo-spatial model that describes the spatial configuration of the family of systems to which the concept belongs. Through non-metric multidimensional scaling, we formulate the pseudo-spatial model based on rank-ordered subjective expert perception of design similarity between systems that elucidate the psychological space of the family. By a novel extension of Kriging methods for analysis of geospatial data to our "pseudo-space of comparable engineered systems", we develop a Bayesian inference model that allows prediction of the probabilistic measure of interest.
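A minimal sketch of the pipeline, rank-order dissimilarities, then a non-metric MDS embedding, then Kriging (here Gaussian-process regression) over the pseudo-space, is given below using scikit-learn. The dissimilarity matrix and reliability values are hypothetical, and generic GP regression stands in for the geospatial Kriging formalism the dissertation extends.

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical expert-judged dissimilarities: 5 precedent designs + 1 concept
D = np.array([
    [0, 1, 3, 4, 5, 2],
    [1, 0, 2, 4, 5, 2],
    [3, 2, 0, 2, 4, 3],
    [4, 4, 2, 0, 2, 4],
    [5, 5, 4, 2, 0, 5],
    [2, 2, 3, 4, 5, 0],
], dtype=float)

# Non-metric MDS embeds the family into a pseudo-space from rank-order judgments
coords = MDS(n_components=2, metric=False, dissimilarity="precomputed",
             random_state=0).fit_transform(D)

# Kriging (GP regression) over the pseudo-space, trained on precedent systems
rel = np.array([0.92, 0.90, 0.85, 0.80, 0.70])  # known reliabilities (hypothetical)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-3)
gp.fit(coords[:5], rel)

mean, sd = gp.predict(coords[5:], return_std=True)  # inference for the concept
print(f"predicted reliability: {mean[0]:.3f} +/- {sd[0]:.3f}")
```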
Use of limited data to construct Bayesian networks for probabilistic risk assessment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Groth, Katrina M.; Swiler, Laura Painton
2013-03-01
Probabilistic Risk Assessment (PRA) is a fundamental part of safety/quality assurance for nuclear power and nuclear weapons. Traditional PRA very effectively models complex hardware system risks using binary probabilistic models. However, traditional PRA models are not flexible enough to accommodate non-binary soft-causal factors, such as digital instrumentation and control, passive components, aging, common cause failure, and human errors. Bayesian Networks offer the opportunity to incorporate these risks into the PRA framework. This report describes the results of an early career LDRD project titled "Use of Limited Data to Construct Bayesian Networks for Probabilistic Risk Assessment". The goal of the work was to establish the capability to develop Bayesian Networks from sparse data, and to demonstrate this capability by producing a data-informed Bayesian Network for use in Human Reliability Analysis (HRA) as part of nuclear power plant Probabilistic Risk Assessment (PRA). This report summarizes the research goal and major products of the research.
Joo, Kyeung Min; Kim, Jinkuk; Jin, Juyoun; Kim, Misuk; Seol, Ho Jun; Muradov, Johongir; Yang, Heekyoung; Choi, Yoon-La; Park, Woong-Yang; Kong, Doo-Sik; Lee, Jung-Il; Ko, Young-Hyeh; Woo, Hyun Goo; Lee, Jeongwu; Kim, Sunghoon; Nam, Do-Hyun
2013-01-31
Frequent discrepancies between preclinical and clinical results of anticancer agents demand a reliable translational platform that can precisely recapitulate the biology of human cancers. Another critical unmet need is the ability to predict therapeutic responses for individual patients. Toward this goal, we have established a library of orthotopic glioblastoma (GBM) xenograft models using surgical samples from GBM patients. These patient-specific GBM xenograft tumors recapitulate the histopathological properties and maintain the genomic characteristics of parental GBMs in situ. Furthermore, in vivo irradiation, chemotherapy, and targeted therapy of these xenograft tumors mimic the treatment response of parental GBMs. We also found that establishment of orthotopic xenograft models portends poor prognosis for GBM patients, and we identified the gene and pathway signatures associated with the clinical aggressiveness of GBMs. Together, the patient-specific orthotopic GBM xenograft library represents a preclinically and clinically valuable "patient tumor phenocopy" that captures the molecular and functional heterogeneity of GBMs. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
Molecular and clinical implementations of ovarian cancer mouse avatar models.
Zayed, Amira A; Mandrekar, Sumithra J; Haluska, Paul
2015-09-01
Innovation in oncology drug development has been hindered by the lack of preclinical models that reliably predict the clinical activity of novel therapies in cancer patients. The increasing desire for individualized treatment of patients with cancer has led to an increase in the use of patient-derived xenografts (PDX) engrafted into immune-compromised mice for preclinical modeling. Large numbers of tumor-specific PDX models have been established and proved to be powerful tools in preclinical testing. A subset of PDXs, referred to as Avatars, establish tumors in an orthotopic and treatment-naïve fashion that may represent the most clinically relevant model of individual human cancers. This review will discuss ovarian cancer (OC) PDX models, demonstrating the opportunities and limitations of these models in cancer drug development, and describe concepts of clinical trial design in Avatar-guided therapy.
Optical spectroscopy for quantitative sensing in human pancreatic tissues
NASA Astrophysics Data System (ADS)
Wilson, Robert H.; Chandra, Malavika; Lloyd, William; Chen, Leng-Chun; Scheiman, James; Simeone, Diane; McKenna, Barbara; Mycek, Mary-Ann
2011-07-01
Pancreatic adenocarcinoma has a five-year survival rate of only 6%, largely because current diagnostic methods cannot reliably detect the disease in its early stages. Reflectance and fluorescence spectroscopies have the potential to provide quantitative, minimally-invasive means of distinguishing pancreatic adenocarcinoma from normal pancreatic tissue and chronic pancreatitis. The first collection of wavelength-resolved reflectance and fluorescence spectra and time-resolved fluorescence decay curves from human pancreatic tissues was acquired with clinically-compatible instrumentation. Mathematical models of reflectance and fluorescence extracted parameters related to tissue morphology and biochemistry that were statistically significant for distinguishing between pancreatic tissue types. These results suggest that optical spectroscopy has the potential to detect pancreatic disease in a clinical setting.
Boumans, L J; Rodenburg, M; Maas, A J
1983-01-01
The response of the human vestibulo-ocular reflex system to a constant angular acceleration is calculated using a second-order model with an adaptation term. After first reaching a maximum, the peracceleratory response declines. When the stimulus duration is long, the decay is mainly governed by the adaptation time constant Ta, which makes it possible to estimate this time constant reliably. In the postacceleratory period of constant velocity there is a reversal in response. The magnitude and the time course of the per- and postacceleratory responses are calculated for various values of the cupular time constant T1, the adaptation time constant Ta, and the stimulus duration, thus enabling their influence to be assessed.
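A common textbook form of such a model, torsion-pendulum cupula dynamics cascaded with a first-order adaptation operator, writes the transfer function from head angular velocity to slow-phase response as below; this is a standard parameterization offered for orientation, not necessarily the authors' exact equations.

```latex
H(s) \;=\; \underbrace{\frac{T_1 s}{1 + T_1 s}}_{\text{cupula, } T_1}
\cdot
\underbrace{\frac{T_a s}{1 + T_a s}}_{\text{adaptation, } T_a}
```

For a constant angular acceleration the velocity input is a ramp, so the response first rises with the cupula term and then declines as the adaptation term takes over; when the acceleration stops, the accumulated adaptation state drives the reversal seen in the postacceleratory period, consistent with the behavior the abstract describes.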
Evaluation of Reliability Coefficients for Two-Level Models via Latent Variable Analysis
ERIC Educational Resources Information Center
Raykov, Tenko; Penev, Spiridon
2010-01-01
A latent variable analysis procedure for evaluation of reliability coefficients for 2-level models is outlined. The method provides point and interval estimates of group means' reliability, overall reliability of means, and conditional reliability. In addition, the approach can be used to test simple hypotheses about these parameters. The…
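For background, the reliability of an observed group mean in a two-level model is conventionally written in terms of the between-group variance τ², the within-group variance σ², and the group size n_j. The textbook expression, offered as context rather than the authors' exact estimand, is:

```latex
\lambda_j \;=\; \frac{\tau^2}{\tau^2 + \sigma^2 / n_j}
```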
Roadmap to a Sustainable Structured Trusted Employee Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coates, Cameron W; Eisele, Gerhard R
2013-08-01
Organizations (a facility, regulatory agency, or country) have a compelling interest in ensuring that individuals who occupy sensitive positions affording access to chemical, biological, radiological and nuclear (CBRN) materials, facilities and programs are functioning at their highest level of reliability. Human reliability and human performance relate not only to security but also to safety. Reliability has a logical and direct relationship to trustworthiness, for the organization is placing trust in its employees to conduct themselves in a secure, safe, and dependable manner. This document focuses on providing an organization with a roadmap to implementing a successful and sustainable Structured Trusted Employee Program (STEP).
Reliable results from stochastic simulation models
Donald L., Jr. Gochenour; Leonard R. Johnson
1973-01-01
Development of a computer simulation model is usually done without fully considering how long the model should run (e.g. in computer time) before the results are reliable. However, construction of confidence intervals (CI) about critical output parameters from the simulation model makes it possible to determine the point where model results are reliable. If the results are...
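A minimal version of the idea, run the simulation until the confidence interval around a critical output is tight enough, can be sketched as follows. The simulation stub and the thresholds are hypothetical, and in practice batch-means or independent replications would be used to handle correlated output.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_once():
    """Stand-in for one independent replication of the simulation model."""
    return rng.exponential(5.0)

results, z = [], 1.96            # 95% confidence level
target_halfwidth = 0.1
halfwidth = np.inf

while True:
    results.append(simulate_once())
    n = len(results)
    if n >= 30:                  # wait for a minimal sample before testing
        halfwidth = z * np.std(results, ddof=1) / np.sqrt(n)
        if halfwidth <= target_halfwidth:
            break

print(f"stopped after {n} replications; "
      f"mean = {np.mean(results):.2f} +/- {halfwidth:.2f}")
```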
Digital music exposure reliably induces temporary threshold shift in normal-hearing human subjects.
Le Prell, Colleen G; Dell, Shawna; Hensley, Brittany; Hall, James W; Campbell, Kathleen C M; Antonelli, Patrick J; Green, Glenn E; Miller, James M; Guire, Kenneth
2012-01-01
One of the challenges for evaluating new otoprotective agents for potential benefit in human populations is the availability of an established clinical paradigm with real-world relevance. These studies were explicitly designed to develop a real-world digital music exposure that reliably induces temporary threshold shift (TTS) in normal-hearing human subjects. Thirty-three subjects participated in studies that measured effects of digital music player use on hearing. Subjects selected either rock or pop music, which was then presented at 93 to 95 (n = 10), 98 to 100 (n = 11), or 100 to 102 (n = 12) dBA in-ear exposure level for a period of 4 hr. Audiograms and distortion product otoacoustic emissions (DPOAEs) were measured before and after music exposure. Postmusic tests were initiated 15 min, 1 hr 15 min, 2 hr 15 min, and 3 hr 15 min after the exposure ended. Additional tests were conducted the following day and 1 week later. Changes in thresholds after the lowest-level exposure were difficult to distinguish from test-retest variability; however, TTS was reliably detected after higher levels of sound exposure. Changes in audiometric thresholds had a "notch" configuration, with the largest changes observed at 4 kHz (mean = 6.3 ± 3.9 dB; range = 0-14 dB). Recovery was largely complete within the first 4 hr postexposure, and all subjects showed complete recovery of both thresholds and DPOAE measures when tested 1 week postexposure. These data provide insight into the variability of TTS induced by music-player use in a healthy, normal-hearing, young adult population, with music playlist, level, and duration carefully controlled. These data confirm the likelihood of temporary changes in auditory function after digital music-player use. Such data are essential for the development of a human clinical trial protocol that provides a highly powered design for evaluating novel therapeutics in human clinical trials. Care must be taken to fully inform potential subjects in future TTS studies, including protective agent evaluations, that some noise exposures have resulted in neural degeneration in animal models, even when both audiometric thresholds and DPOAE levels returned to pre-exposure values.
Benchmarking Terrestrial Ecosystem Models in the South Central US
NASA Astrophysics Data System (ADS)
Kc, M.; Winton, K.; Langston, M. A.; Luo, Y.
2016-12-01
Ecosystem services and products are the foundation of sustainability for regional and global economies, since we depend directly or indirectly on ecosystem services such as food, livestock, water, air and wildlife. It has been increasingly recognized that conservation problems need to be addressed in the context of entire ecosystems. This approach is even more vital in the 21st century, with a formidably increasing human population and rapid changes in the global environment. This study was conducted to assess the state of the science of ecosystem models in the South-Central region of the US. The ecosystem models were benchmarked using the ILAMB diagnostic package, developed as a result of the International Land Model Benchmarking (ILAMB) project, on four main categories: Ecosystem and Carbon Cycle, Hydrology Cycle, Radiation and Energy Cycle, and Climate Forcings. A cumulative assessment was generated from seven weighted skill-assessment metrics for the ecosystem models. This synthesis of the current state of the science of ecosystem modeling in the South-Central region of the US will be highly useful for coupling these models with climate, agronomic, hydrologic, economic or management models to better represent ecosystem dynamics as affected by climate change and human activities, and hence to obtain more reliable predictions of future ecosystem functions and services in the region. Better understanding of such processes will increase our ability to predict ecosystem responses and feedbacks to environmental and human-induced change in the region, so that decision makers can make informed management decisions about the ecosystem.
Kamimura, Hidetaka; Ito, Satoshi; Chijiwa, Hiroyuki; Okuzono, Takeshi; Ishiguro, Tomohiro; Yamamoto, Yosuke; Nishinoaki, Sho; Ninomiya, Shin-Ichi; Mitsui, Marina; Kalgutkar, Amit S; Yamazaki, Hiroshi; Suemizu, Hiroshi
2017-05-01
1. The partial glucokinase activator N,N-dimethyl-5-((2-methyl-6-((5-methylpyrazin-2-yl)carbamoyl)benzofuran-4-yl)oxy)pyrimidine-2-carboxamide (PF-04937319) is biotransformed in humans to N-methyl-5-((2-methyl-6-((5-methylpyrazin-2-yl)carbamoyl)benzofuran-4-yl)oxy)pyrimidine-2-carboxamide (M1), accounting for ∼65% of total exposure at steady state. 2. As the disproportionately abundant nature of M1 could not be reliably predicted from in vitro metabolism studies, we evaluated a chimeric mouse model with humanized liver on TK-NOG background for its ability to retrospectively predict human disposition of PF-04937319. Since livers of chimeric mice were enlarged by hyperplasia and contained remnant mouse hepatocytes, hepatic intrinsic clearances normalized for liver weight, metabolite formation and liver to plasma concentration ratios were plotted against the replacement index by human hepatocytes and extrapolated to those in the virtual chimeric mouse with 100% humanized liver. 3. Semi-physiological pharmacokinetic analyses using the above parameters revealed that simulated concentration curves of PF-04937319 and M1 were approximately superimposed with the observed clinical data in humans. 4. Finally, qualitative profiling of circulating metabolites in humanized chimeric mice dosed with PF-04937319 or M1 also revealed the presence of a carbinolamide metabolite, identified in the clinical study as a human-specific metabolite. The case study demonstrates that humanized chimeric mice may be potentially useful in preclinical discovery towards studying disproportionate or human-specific metabolism of drug candidates.
NASA Advanced Exploration Systems: Advancements in Life Support Systems
NASA Technical Reports Server (NTRS)
Shull, Sarah A.; Schneider, Walter F.
2016-01-01
The NASA Advanced Exploration Systems (AES) Life Support Systems (LSS) project strives to develop reliable, energy-efficient, and low-mass spacecraft systems to provide environmental control and life support systems (ECLSS) critical to enabling long duration human missions beyond low Earth orbit (LEO). Highly reliable, closed-loop life support systems are among the capabilities required for the longer duration human space exploration missions assessed by NASA’s Habitability Architecture Team.
Allers, Carolina; Sierralta, Walter D; Neubauer, Sonia; Rivera, Francisco; Minguell, José J; Conget, Paulette A
2004-08-27
The use of mesenchymal stem cells (MSC) for cell therapy relies on their capacity to engraft and survive long-term in the appropriate target tissue(s). Animal models have demonstrated that the syngeneic or xenogeneic transplantation of MSC results in donor engraftment into the bone marrow and other tissues of conditioned recipients. However, there are no reliable data showing the fate of human MSC infused into conditioned or unconditioned adult recipients. In the present study, the authors investigated, by using imaging, polymerase chain reaction (PCR), and in situ hybridization, the biodistribution of human bone marrow-derived MSC after intravenous infusion into unconditioned adult nude mice. As assessed by imaging (gamma camera), PCR, and in situ hybridization analysis, the authors' results demonstrate the presence of human MSC in bone marrow, spleen, and mesenchymal tissues of recipient mice. These results suggest that human MSC transplantation into unconditioned recipients represents an option for providing cellular therapy and avoids the complications associated with drugs or radiation conditioning.
NASA Astrophysics Data System (ADS)
Xia, Quan; Wang, Zili; Ren, Yi; Sun, Bo; Yang, Dezhen; Feng, Qiang
2018-05-01
With the rapid development of lithium-ion battery technology in the electric vehicle (EV) industry, the lifetime of the battery cell has increased substantially; however, the reliability of the battery pack is still inadequate. Because of the complexity of the battery pack, a reliability design method for a lithium-ion battery pack that accounts for thermal disequilibrium is proposed in this paper based on cell redundancy. Based on this method, a three-dimensional electric-thermal-flow coupled model, a stochastic degradation model of cells under field dynamic conditions and a multi-state system reliability model of a battery pack are established. The relationships between the multi-physics coupling model, the degradation model and the system reliability model are first constructed to analyze the reliability of the battery pack, followed by analysis examples with different redundancy strategies. By comparing the reliability of battery packs with different redundant cell numbers and configurations, several conclusions about the redundancy strategy are obtained. Most notably, because of thermal disequilibrium effects, reliability does not increase monotonically with the number of redundant cells. In this work, a 6 × 5 parallel-series configuration is the optimal system structure. In addition, the effects of cell arrangement and cooling conditions are investigated.
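The redundancy comparison can be illustrated with a highly simplified Monte Carlo sketch that ignores the paper's electro-thermal coupling and degradation modelling: cells receive independent Weibull lifetimes, a series string works only while all of its cells work, and the pack works while enough strings survive. All numbers are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def pack_reliability(t, n_parallel, n_series, k_required=5,
                     shape=2.0, scale=8.0, trials=100_000):
    """Monte Carlo reliability of an n_parallel x n_series pack at time t (years).

    A series string works only if all its cells work; the pack works if at
    least k_required strings work. Independent Weibull cell lives are an
    assumption; thermal coupling between cells is deliberately ignored.
    """
    lives = rng.weibull(shape, size=(trials, n_parallel, n_series)) * scale
    strings_ok = (lives > t).all(axis=2)          # every cell in the string alive
    return (strings_ok.sum(axis=1) >= k_required).mean()

# Same 30 cells arranged two ways: 5 strings of 6 (no spare string)
# versus 6 strings of 5 (one redundant string).
for n_p, n_s in [(5, 6), (6, 5)]:
    print((n_p, n_s), pack_reliability(t=5.0, n_parallel=n_p, n_series=n_s))
```

Under independence this reproduces the basic trade-off; capturing the non-monotonic effect of added cells reported above would require feeding the thermal model back into the failure rates.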
Atkinson, Samuel F; Sarkar, Sahotra; Aviña, Aldo; Schuermann, Jim A; Williamson, Phillip
2012-11-01
The spatial distribution of Dermacentor variabilis, the most commonly identified vector of the bacterium Rickettsia rickettsii, which causes Rocky Mountain spotted fever (RMSF) in humans, and the spatial distribution of RMSF have not been previously studied in the south central United States of America, particularly in Texas. From an epidemiological perspective, one would tend to hypothesise a high degree of spatial concordance between habitat suitability for the tick and the incidence of the disease. Both maximum-entropy modelling of the tick's habitat suitability and spatially adaptive filter modelling of the human incidence of RMSF provide reliable portrayals of the spatial distributions of these phenomena. Even though rates of human cases of RMSF and rates of Dermacentor ticks infected with Rickettsia bacteria are both relatively low in Texas, the best data currently available give a preliminary indication that the assumption of high spatial concordance would not be correct in Texas (Kappa coefficient of agreement = 0.17). It will take substantially more data to provide conclusive findings and to understand the results reported here, but this study provides an approach to begin understanding the discrepancy.
Brain-Computer Interfaces: A Neuroscience Paradigm of Social Interaction? A Matter of Perspective
Mattout, Jérémie
2012-01-01
A number of recent studies have put human subjects in true social interactions, with the aim of better identifying the psychophysiological processes underlying social cognition. Interestingly, this emerging Neuroscience of Social Interactions (NSI) field brings up challenges which resemble important ones in the field of Brain-Computer Interfaces (BCI). Importantly, these challenges go beyond common objectives such as the eventual use of BCI and NSI protocols in the clinical domain or common interests pertaining to the use of online neurophysiological techniques and algorithms. Common fundamental challenges are now apparent and one can argue that a crucial one is to develop computational models of brain processes relevant to human interactions with an adaptive agent, whether human or artificial. Coupled with neuroimaging data, such models have proved promising in revealing the neural basis and mental processes behind social interactions. Similar models could help BCI to move from well-performing but offline static machines to reliable online adaptive agents. This emphasizes a social perspective to BCI, which is not limited to a computational challenge but extends to all questions that arise when studying the brain in interaction with its environment. PMID:22675291
Challenges in leveraging existing human performance data for quantifying the IDHEAS HRA method
Liao, Huafei N.; Groth, Katrina; Stevens-Adams, Susan
2015-07-29
Our article documents an exploratory study for collecting and using human performance data to inform human error probability (HEP) estimates for a new human reliability analysis (HRA) method, the Integrated Human Event Analysis System (IDHEAS). The method is based on cognitive models and mechanisms underlying human behaviour and employs a framework of 14 crew failure modes (CFMs) to represent failures typical of human performance in nuclear power plant (NPP) internal, at-power events [1]. A decision tree (DT) was constructed for each CFM to assess the probability of the CFM occurring in different contexts. Data needs for IDHEAS quantification are discussed. The data collection framework and process are then described, and the use of the collected data to inform HEP estimation is illustrated with two examples. Next, five major technical challenges are identified for leveraging human performance data for IDHEAS quantification. These challenges reflect data needs specific to IDHEAS, but they also represent general issues with current human performance data and can provide insight for a path forward to support HRA data collection, use, and exchange for HRA method development, implementation, and validation.
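Conceptually, each CFM's decision tree maps a small set of context questions onto an HEP. The toy sketch below shows only the mechanics of such a lookup; the branch factors and end-point probabilities are invented placeholders, not values from IDHEAS.

```python
# Hypothetical decision tree for one crew failure mode (CFM).
# Branch factors and HEPs are illustrative placeholders only.
def hep_for_cfm(time_pressure: bool, indication_clear: bool) -> float:
    if time_pressure:
        return 3e-2 if indication_clear else 1e-1
    return 1e-3 if indication_clear else 1e-2

context = {"time_pressure": True, "indication_clear": False}
print("HEP estimate for this context:", hep_for_cfm(**context))
```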
BioQ: tracing experimental origins in public genomic databases using a novel data provenance model.
Saccone, Scott F; Quan, Jiaxi; Jones, Peter L
2012-04-15
Public genomic databases, which are often used to guide genetic studies of human disease, are now being applied to genomic medicine through in silico integrative genomics. These databases, however, often lack tools for systematically determining the experimental origins of the data. We introduce a new data provenance model that we have implemented in a public web application, BioQ, for assessing the reliability of the data by systematically tracing its experimental origins to the original subjects and biologics. BioQ allows investigators to both visualize data provenance as well as explore individual elements of experimental process flow using precise tools for detailed data exploration and documentation. It includes a number of human genetic variation databases such as the HapMap and 1000 Genomes projects. BioQ is freely available to the public at http://bioq.saclab.net.
Fused cerebral organoids model interactions between brain regions.
Bagley, Joshua A; Reumann, Daniel; Bian, Shan; Lévi-Strauss, Julie; Knoblich, Juergen A
2017-07-01
Human brain development involves complex interactions between different regions, including long-distance neuronal migration or formation of major axonal tracts. Different brain regions can be cultured in vitro within 3D cerebral organoids, but the random arrangement of regional identities limits the reliable analysis of complex phenotypes. Here, we describe a coculture method combining brain regions of choice within one organoid tissue. By fusing organoids of dorsal and ventral forebrain identities, we generate a dorsal-ventral axis. Using fluorescent reporters, we demonstrate CXCR4-dependent GABAergic interneuron migration from ventral to dorsal forebrain and describe methodology for time-lapse imaging of human interneuron migration. Our results demonstrate that cerebral organoid fusion cultures can model complex interactions between different brain regions. Combined with reprogramming technology, fusions should offer researchers the possibility to analyze complex neurodevelopmental defects using cells from neurological disease patients and to test potential therapeutic compounds.
Intraamniotic Zika virus inoculation of pregnant rhesus macaques produces fetal neurologic disease.
Coffey, Lark L; Keesler, Rebekah I; Pesavento, Patricia A; Woolard, Kevin; Singapuri, Anil; Watanabe, Jennifer; Cruzen, Christina; Christe, Kari L; Usachenko, Jodie; Yee, JoAnn; Heng, Victoria A; Bliss-Moreau, Eliza; Reader, J Rachel; von Morgenland, Wilhelm; Gibbons, Anne M; Jackson, Kenneth; Ardeshir, Amir; Heimsath, Holly; Permar, Sallie; Senthamaraikannan, Paranthaman; Presicce, Pietro; Kallapur, Suhas G; Linnen, Jeffrey M; Gao, Kui; Orr, Robert; MacGill, Tracy; McClure, Michelle; McFarland, Richard; Morrison, John H; Van Rompay, Koen K A
2018-06-20
Zika virus (ZIKV) infection of pregnant women can cause fetal microcephaly and other neurologic defects. We describe the development of a non-human primate model to better understand fetal pathogenesis. To reliably induce fetal infection at defined times, four pregnant rhesus macaques are inoculated intravenously and intraamniotically with ZIKV at gestational day (GD) 41, 50, 64, or 90, corresponding to first and second trimester of gestation. The GD41-inoculated animal, experiencing fetal death 7 days later, has high virus levels in fetal and placental tissues, implicating ZIKV as cause of death. The other three fetuses are carried to near term and euthanized; while none display gross microcephaly, all show ZIKV RNA in many tissues, especially in the brain, which exhibits calcifications and reduced neural precursor cells. Given that this model consistently recapitulates neurologic defects of human congenital Zika syndrome, it is highly relevant to unravel determinants of fetal neuropathogenesis and to explore interventions.
Sala, Luca; van Meer, Berend J; Tertoolen, Leon G J; Bakkers, Jeroen; Bellin, Milena; Davis, Richard P; Denning, Chris; Dieben, Michel A E; Eschenhagen, Thomas; Giacomelli, Elisa; Grandela, Catarina; Hansen, Arne; Holman, Eduard R; Jongbloed, Monique R M; Kamel, Sarah M; Koopman, Charlotte D; Lachaud, Quentin; Mannhardt, Ingra; Mol, Mervyn P H; Mosqueira, Diogo; Orlova, Valeria V; Passier, Robert; Ribeiro, Marcelo C; Saleem, Umber; Smith, Godfrey L; Burton, Francis L; Mummery, Christine L
2018-02-02
There are several methods to measure cardiomyocyte and muscle contraction, but these require customized hardware, expensive apparatus, and advanced informatics or can only be used in single experimental models. Consequently, data and techniques have been difficult to reproduce across models and laboratories, analysis is time consuming, and only specialist researchers can quantify data. Here, we describe and validate an automated, open-source software tool (MUSCLEMOTION) adaptable for use with standard laboratory and clinical imaging equipment that enables quantitative analysis of normal cardiac contraction, disease phenotypes, and pharmacological responses. MUSCLEMOTION allowed rapid and easy measurement of movement from high-speed movies in (1) 1-dimensional in vitro models, such as isolated adult and human pluripotent stem cell-derived cardiomyocytes; (2) 2-dimensional in vitro models, such as beating cardiomyocyte monolayers or small clusters of human pluripotent stem cell-derived cardiomyocytes; (3) 3-dimensional multicellular in vitro or in vivo contractile tissues, such as cardiac "organoids," engineered heart tissues, and zebrafish and human hearts. MUSCLEMOTION was effective under different recording conditions (bright-field microscopy with simultaneous patch-clamp recording, phase contrast microscopy, and traction force microscopy). Outcomes were virtually identical to the current gold standards for contraction measurement, such as optical flow, post deflection, edge-detection systems, or manual analyses. Finally, we used the algorithm to quantify contraction in in vitro and in vivo arrhythmia models and to measure pharmacological responses. Using a single open-source method for processing video recordings, we obtained reliable pharmacological data and measures of cardiac disease phenotype in experimental cell, animal, and human models. © 2017 The Authors.
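The core idea of quantifying contraction as bulk pixel movement between video frames can be sketched in a few lines. This frame-differencing toy is a simplified stand-in for the published MUSCLEMOTION algorithm, and the synthetic "movie" is fabricated.

```python
import numpy as np

def contraction_trace(frames):
    """Mean absolute pixel change relative to a reference (relaxed) frame --
    a simplified frame-differencing sketch, not the published algorithm."""
    ref = frames[0].astype(float)
    return np.array([np.abs(f.astype(float) - ref).mean() for f in frames])

# Synthetic movie: a bright disc whose radius oscillates like a beating cell
phases = np.linspace(0, 4 * np.pi, 120)            # two "beats"
yy, xx = np.mgrid[:64, :64]
frames = [((xx - 32) ** 2 + (yy - 32) ** 2 < (10 + 3 * np.sin(p)) ** 2) * 255
          for p in phases]
trace = contraction_trace(frames)

# Crude peak count (grid discretization may add or remove a peak)
peaks = (trace[1:-1] > trace[:-2]) & (trace[1:-1] >= trace[2:])
print("motion peaks detected:", peaks.sum())
```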
Postmortem time estimation using body temperature and a finite-element computer model.
den Hartog, Emiel A; Lotens, Wouter A
2004-09-01
In the Netherlands most murder victims are found 2-24 h after the crime. During this period, body temperature decrease is the most reliable method to estimate the postmortem time (PMT). Recently, two murder cases were analysed in which currently available methods did not provide a sufficiently reliable estimate of the PMT. In both cases a study was performed to verify the statements of suspects. For this purpose a finite-element computer model was developed that simulates a human torso and its clothing. With this model, changes to the body and the environment can also be modelled; this was very relevant in one of the cases, as the body had been in the presence of a small fire. In both cases it was possible to falsify the statements of the suspects by improving the accuracy of the PMT estimate. The estimated PMT in both cases was within the range of Henssge's model. The standard deviation of the PMT estimate was 35 min in the first case and 45 min in the second case, compared to 168 min (2.8 h) in Henssge's model. In conclusion, the model as presented here can have additional value for improving the accuracy of the PMT estimate. In contrast to the simple model of Henssge, the current model allows for increased accuracy when more detailed information is available. Moreover, the sensitivity of the predicted PMT for uncertainty in the circumstances can be studied, which is crucial to the confidence of the judge in the results.
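For context, the double-exponential cooling model attributed to Henssge can be inverted numerically to yield a PMT estimate from a single rectal temperature. The sketch below uses the commonly published constants for ambient temperatures up to about 23 °C; treat them as assumptions, and note that the study above replaces this simple model with a finite-element simulation.

```python
import numpy as np
from scipy.optimize import brentq

def henssge_q(t_hours, mass_kg):
    """Standardized temperature ratio from Henssge's double-exponential
    cooling model (ambient <= 23 C form); constants as commonly published."""
    B = -1.2815 * mass_kg ** -0.625 + 0.0284
    return 1.25 * np.exp(B * t_hours) - 0.25 * np.exp(5 * B * t_hours)

def estimate_pmt(t_rectal, t_ambient, mass_kg, t0=37.2):
    q_obs = (t_rectal - t_ambient) / (t0 - t_ambient)
    # Root-find the time at which the model ratio matches the observed ratio
    return brentq(lambda t: henssge_q(t, mass_kg) - q_obs, 0.1, 48.0)

# Example: 75 kg body, rectal 30 C, ambient 18 C (made-up numbers)
print(f"estimated PMT ~ {estimate_pmt(30.0, 18.0, 75.0):.1f} h")
```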
Software For Computing Reliability Of Other Software
NASA Technical Reports Server (NTRS)
Nikora, Allen; Antczak, Thomas M.; Lyu, Michael
1995-01-01
Computer Aided Software Reliability Estimation (CASRE) computer program developed for use in measuring reliability of other software. Easier for non-specialists in reliability to use than many other currently available programs developed for same purpose. CASRE incorporates mathematical modeling capabilities of public-domain Statistical Modeling and Estimation of Reliability Functions for Software (SMERFS) computer program and runs in Windows software environment. Provides menu-driven command interface; enabling and disabling of menu options guides user through (1) selection of set of failure data, (2) execution of mathematical model, and (3) analysis of results from model. Written in C language.
Developing Reliable Life Support for Mars
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2017-01-01
A human mission to Mars will require highly reliable life support systems. Mars life support systems may recycle water and oxygen using systems similar to those on the International Space Station (ISS). However, achieving sufficient reliability is less difficult for ISS than it will be for Mars. If an ISS system has a serious failure, it is possible to provide spare parts, or directly supply water or oxygen, or if necessary bring the crew back to Earth. Life support for Mars must be designed, tested, and improved as needed to achieve high demonstrated reliability. A quantitative reliability goal should be established and used to guide development. The designers should select reliable components and minimize interface and integration problems. In theory a system can achieve the component-limited reliability, but testing often reveals unexpected failures due to design mistakes or flawed components. Testing should extend long enough to detect any unexpected failure modes and to verify the expected reliability. Iterated redesign and retest may be required to achieve the reliability goal. If the reliability is less than required, it may be improved by providing spare components or redundant systems. The number of spares required to achieve a given reliability goal depends on the component failure rate. If the failure rate is underestimated, the number of spares will be insufficient and the system may fail. If the design is likely to have undiscovered design or component problems, it is advisable to use dissimilar redundancy, even though this multiplies the design and development cost. In the ideal case, a human-tended closed-system operational test should be conducted to gain confidence in operations, maintenance, and repair. The difficulty in achieving high reliability in unproven complex systems may require the use of simpler, more mature, intrinsically higher reliability systems. The limitations of budget, schedule, and technology may suggest accepting lower and less certain expected reliability. A plan to develop reliable life support is needed to achieve the best possible reliability.
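The spares argument can be made concrete with a small Poisson sketch: if component failures arrive at a constant rate, the spare count needed for a reliability goal follows from the Poisson CDF, and an underestimated rate visibly under-provisions the mission. The rates, mission length, and goal below are illustrative assumptions.

```python
from scipy.stats import poisson

def spares_needed(failure_rate_per_year, mission_years, goal):
    """Smallest spare count s such that P(failures <= s) >= goal, treating
    failures as a Poisson process (a simplifying assumption)."""
    mu = failure_rate_per_year * mission_years
    s = 0
    while poisson.cdf(s, mu) < goal:
        s += 1
    return s

# Sensitivity to an underestimated failure rate: same goal, doubled true rate
for rate in (0.5, 1.0):
    print(f"rate {rate}/yr -> spares:",
          spares_needed(rate, mission_years=2.5, goal=0.99))
```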
Making statistical inferences about software reliability
NASA Technical Reports Server (NTRS)
Miller, Douglas R.
1988-01-01
Failure times of software undergoing random debugging can be modelled as order statistics of independent but nonidentically distributed exponential random variables. Using this model, inferences can be made about current reliability and, if debugging continues, future reliability. The model also shows the difficulty inherent in statistical verification of very highly reliable software, such as that used by digital avionics in commercial aircraft.
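The canonical instance of this order-statistics model is the Jelinski-Moranda formulation, in which the i-th interfailure time is exponential with rate phi * (N - i + 1). The sketch below fits it by maximum likelihood and converts the fitted current hazard into a short-horizon reliability statement; the interfailure times and starting values are fabricated.

```python
import numpy as np
from scipy.optimize import minimize

# Observed interfailure times (hypothetical, in CPU hours)
t = np.array([12., 15., 28., 33., 51., 60., 94., 130., 210., 290.])
n = len(t)

def neg_loglik(params):
    """Jelinski-Moranda: i-th interfailure time ~ Exp(phi * (N - i + 1))."""
    N, phi = params
    if N <= n or phi <= 0:
        return np.inf
    rates = phi * (N - np.arange(1, n + 1) + 1)
    return -(np.log(rates) - rates * t).sum()

res = minimize(neg_loglik, x0=[n + 5.0, 0.001], method="Nelder-Mead")
N_hat, phi_hat = res.x
current_rate = phi_hat * (N_hat - n)      # hazard of the next failure
print(f"N = {N_hat:.1f}, phi = {phi_hat:.5f}, "
      f"P(no failure in next 100 h) = {np.exp(-current_rate * 100):.3f}")
```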
Zuo, Xi-Nian; Xu, Ting; Jiang, Lili; Yang, Zhi; Cao, Xiao-Yan; He, Yong; Zang, Yu-Feng; Castellanos, F. Xavier; Milham, Michael P.
2013-01-01
While researchers have extensively characterized functional connectivity between brain regions, the characterization of functional homogeneity within a region of the brain connectome is in early stages of development. Several functional homogeneity measures were proposed previously, among which regional homogeneity (ReHo) was most widely used as a measure to characterize functional homogeneity of resting state fMRI (R-fMRI) signals within a small region (Zang et al., 2004). Despite a burgeoning literature on ReHo in the field of neuroimaging brain disorders, its test–retest (TRT) reliability remains unestablished. Using two sets of public R-fMRI TRT data, we systematically evaluated the TRT reliability of ReHo and further investigated the various factors influencing its reliability, and found: 1) nuisance (head motion, white matter, and cerebrospinal fluid) correction of R-fMRI time series can significantly improve the TRT reliability of ReHo, while additional removal of the global brain signal reduces its reliability, 2) spatial smoothing of R-fMRI time series artificially enhances ReHo intensity and influences its reliability, 3) surface-based R-fMRI computation largely improves the TRT reliability of ReHo, 4) a scan duration of 5 min can achieve reliable estimates of ReHo, and 5) fast sampling rates of R-fMRI dramatically increase the reliability of ReHo. Inspired by these findings and seeking a highly reliable approach to exploratory analysis of the human functional connectome, we established an R-fMRI pipeline to conduct ReHo computations in both 3-dimensions (volume) and 2-dimensions (surface). PMID:23085497
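For readers unfamiliar with the measure: ReHo is Kendall's coefficient of concordance (KCC) computed over the time series of a voxel and its neighbours (Zang et al., 2004). A self-contained sketch with simulated time series follows; the cluster size and noise levels are arbitrary choices.

```python
import numpy as np
from scipy.stats import rankdata

def kendalls_w(ts):
    """Kendall's coefficient of concordance -- the ReHo statistic.

    ts: array of shape (k, n) -- k voxel time series of length n."""
    k, n = ts.shape
    ranks = np.vstack([rankdata(x) for x in ts])   # rank each series over time
    Ri = ranks.sum(axis=0)                          # summed ranks per time point
    S = ((Ri - Ri.mean()) ** 2).sum()
    return 12.0 * S / (k ** 2 * (n ** 3 - n))

rng = np.random.default_rng(0)
shared = rng.standard_normal(200)                            # common fluctuation
cluster = shared + 0.5 * rng.standard_normal((27, 200))      # 3x3x3 neighbourhood
print(f"ReHo (coherent cluster): {kendalls_w(cluster):.2f}")
print(f"ReHo (pure noise):       {kendalls_w(rng.standard_normal((27, 200))):.2f}")
```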
NASA Technical Reports Server (NTRS)
Platt, M. E.; Lewis, E. E.; Boehm, F.
1991-01-01
A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability, applicable to solving very large, highly reliable, fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault-error handling models. This new capability is called MC-HARP; it efficiently solves reliability models with non-constant failure rates (Weibull). Common-mode failure modeling is also a specialty.
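Variance reduction is the key to Monte Carlo on highly reliable systems, since plain sampling almost never sees a failure. The sketch below contrasts naive sampling with a simple importance-sampling change of the Weibull scale for a two-unit parallel system; it is a generic illustration of the variance-reduction idea, not the MC-HARP algorithm, and all numbers are assumed.

```python
import numpy as np

rng = np.random.default_rng(1)
k, lam, T = 1.5, 50.0, 1.0      # Weibull shape/scale, mission time (failure is rare)

def weibull_pdf(x, k, lam):
    return (k / lam) * (x / lam) ** (k - 1) * np.exp(-((x / lam) ** k))

def fails(x1, x2):               # duplex system: fails only if both units fail by T
    return (x1 < T) & (x2 < T)

n = 200_000
# Naive Monte Carlo: with P(fail) ~ 1e-5, few or no hits in n samples
a, b = lam * rng.weibull(k, n), lam * rng.weibull(k, n)
naive = fails(a, b).mean()

# Importance sampling: draw from a Weibull with a much smaller scale (many
# early failures), then reweight each sample by the likelihood ratio.
lam_is = 2.0
a_is, b_is = lam_is * rng.weibull(k, n), lam_is * rng.weibull(k, n)
w = (weibull_pdf(a_is, k, lam) / weibull_pdf(a_is, k, lam_is)) * \
    (weibull_pdf(b_is, k, lam) / weibull_pdf(b_is, k, lam_is))
is_est = (fails(a_is, b_is) * w).mean()
print(f"naive = {naive:.2e}   importance sampling = {is_est:.2e}")
```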
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patton, A.D.; Ayoub, A.K.; Singh, C.
1982-07-01
Existing methods for generating capacity reliability evaluation do not explicitly recognize a number of operating considerations which may have important effects on system reliability performance. Thus, current methods may yield estimates of system reliability which differ appreciably from actual observed reliability. Further, current methods offer no means of accurately studying or evaluating alternatives which may differ in one or more operating considerations. Operating considerations which are considered to be important in generating capacity reliability evaluation include: unit duty cycles as influenced by load cycle shape, reliability performance of other units, unit commitment policy, and operating reserve policy; unit start-up failures distinct from unit running failures; unit start-up times; and unit outage postponability and the management of postponable outages. A detailed Monte Carlo simulation computer model called GENESIS and two analytical models called OPCON and OPPLAN have been developed which are capable of incorporating the effects of many operating considerations, including those noted above. These computer models have been used to study a variety of actual and synthetic systems and are available from EPRI. The new models are shown to produce system reliability indices which differ appreciably from index values computed using traditional models which do not recognize operating considerations.
Social Information Is Integrated into Value and Confidence Judgments According to Its Reliability.
De Martino, Benedetto; Bobadilla-Suarez, Sebastian; Nouguchi, Takao; Sharot, Tali; Love, Bradley C
2017-06-21
How much we like something, whether it be a bottle of wine or a new film, is affected by the opinions of others. However, the social information that we receive can be contradictory and vary in its reliability. Here, we tested whether the brain incorporates these statistics when judging value and confidence. Participants provided value judgments about consumer goods in the presence of online reviews. We found that participants updated their initial value and confidence judgments in a Bayesian fashion, taking into account both the uncertainty of their initial beliefs and the reliability of the social information. Activity in dorsomedial prefrontal cortex tracked the degree of belief update. Analogous to how lower-level perceptual information is integrated, we found that the human brain integrates social information according to its reliability when judging value and confidence. SIGNIFICANCE STATEMENT The field of perceptual decision making has shown that the sensory system integrates different sources of information according to their respective reliability, as predicted by a Bayesian inference scheme. In this work, we hypothesized that a similar coding scheme is implemented by the human brain to process social signals and guide complex, value-based decisions. We provide experimental evidence that the human prefrontal cortex's activity is consistent with a Bayesian computation that integrates social information that differs in reliability and that this integration affects the neural representation of value and confidence. Copyright © 2017 De Martino et al.
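The reported updating behaviour corresponds to standard Gaussian cue integration, in which the posterior weights each source by its precision. A minimal sketch (all numbers arbitrary):

```python
import numpy as np

def bayes_update(mu_prior, sigma_prior, mu_social, sigma_social):
    """Precision-weighted combination of an initial value judgment with
    social information, as in standard Gaussian cue integration."""
    w = sigma_social ** 2 / (sigma_prior ** 2 + sigma_social ** 2)
    mu_post = w * mu_prior + (1 - w) * mu_social
    sigma_post = np.sqrt((sigma_prior ** 2 * sigma_social ** 2) /
                         (sigma_prior ** 2 + sigma_social ** 2))
    return mu_post, sigma_post

# Unreliable reviews (wide sigma) move the judgment less than reliable ones
print(bayes_update(50, 10, 80, 30))   # noisy social signal: small update
print(bayes_update(50, 10, 80, 5))    # reliable social signal: large update
```

The noisier the social signal, the less the initial judgment moves, while posterior uncertainty, the flip side of confidence, always shrinks.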
Úbeda, Yulán; Llorente, Miquel
2015-02-18
We evaluate a sanctuary chimpanzee sample (N = 11) using two adapted human assessment instruments: the Five-Factor Model (FFM) and Eysenck's Psychoticism-Extraversion-Neuroticism (PEN) model. The former has been widely used in studies of animal personality, whereas the latter has never been used to assess chimpanzees. We asked familiar keepers and scientists (N = 28) to rate 38 (FFM) and 12 (PEN) personality items. The personality surveys showed reliability in all of the items for both instruments. These were then analyzed in a principal component analysis and a regularized exploratory factor analysis, which revealed four and three components, respectively. The results indicate that both questionnaires show a clear factor structure, with characteristic factors not just for the species, but also for the sample type. However, due to its brevity, the PEN may be more suitable for assessing personality in a sanctuary, where employees do not have much time to devote to the evaluation process. In summary, both models are sensitive enough to evaluate the personality of a group of chimpanzees housed in a sanctuary.
Do Domestic Dogs Learn Words Based on Humans’ Referential Behaviour?
Tempelmann, Sebastian; Kaminski, Juliane; Tomasello, Michael
2014-01-01
Some domestic dogs learn to comprehend human words, although the nature and basis of this learning is unknown. In the studies presented here we investigated whether dogs learn words through an understanding of referential actions by humans rather than simple association. In three studies, each modelled on a study conducted with human infants, we confronted four word-experienced dogs with situations involving no spatial-temporal contiguity between the word and the referent; the only available cues were referential actions displaced in time from exposure to their referents. We found that no dogs were able to reliably link an object with a label based on social-pragmatic cues alone in all the tests. However, one dog did show skills in some tests, possibly indicating an ability to learn based on social-pragmatic cues. PMID:24646732
The Use of Empirical Data Sources in HRA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bruce Hallbert; David Gertman; Julie Marble
This paper presents a review of available information related to human performance to support Human Reliability Analysis (HRA) performed for nuclear power plants (NPPs). A number of data sources are identified as potentially useful. These include NPP licensee event reports (LERs), augmented inspection team (AIT) reports, operator requalification data, results from the literature in experimental psychology, and the Aviation Safety Reporting System (ASRS). The paper discusses how utilizing such information improves our capability to model and quantify human performance. In particular, the paper discusses how information related to performance shaping factors (PSFs) can be extracted from empirical data to determine their effect sizes, their relative effects, and their interactions. The paper concludes that appropriate use of existing sources can help address some of the important issues we are currently facing in HRA.
El Nino winners and losers declared
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerr, R.A.
Last spring human forecasters thought they saw signs of an imminent warming of the tropical Pacific, a classic El Nino, that could wreak havoc with weather around the globe. Researchers running computer models, on the other hand, saw a slight warming but not enough for an El Nino. The modelers were right. The season for El Ninos has ended and nothing happened. Since the models came online about 5 years ago, there have been two contests to predict El Ninos, which occur every 3 to 7 years, and the models have won both. The models are still experimental, but the general feeling is that they're indicating the right trends. The prospect of having reliable El Nino prediction models is good news beyond the small coterie of tropical Pacific specialists. Worldwide weather patterns are closely tied to El Nino cycles.
Li, Bing-Yan; Sun, Jing; Wei, Hong; Cheng, Yu-Zhi; Xue, Lian; Cheng, Zhi-Hai; Wan, Jian-Mei; Wang, Ai-Qing; Hei, Tom K.; Tong, Jian
2012-01-01
Inhalation exposure to radon and radon progeny is recognized to induce lung cancer. To explore the role of mitochondria in radon-induced carcinogenesis in humans, an in vitro partially depleted mitochondrial DNA (mtDNA) cell line (ρ−) was generated by treatment of human bronchial epithelial (HBE) cells (ρ+) with ethidium bromide (EB). The characterization of ρ− cells indicated the presence of dysfunctional mitochondria; these cells might thus serve as a reliable model for investigating the role of mitochondria. In a gas inhalation chamber, ρ− and ρ+ cells were exposed to radon gas produced by a radium source. Results showed that apoptosis was significantly increased both in ρ− and ρ+ cells irradiated by radon. Moreover, apoptosis in ρ− cells showed a lower level than in ρ+ cells. Radon was further found to depress mitochondrial membrane potential (MMP) of HBE cells with knock-down mtDNA. Production of reactive oxygen species (ROS) was markedly elevated both in ρ− and ρ+ cells exposed to radon. The distribution of phases of cell cycle was different in ρ− compared to ρ+ cells. Radon irradiation induced a rise in G2/M and a decrease in S phase in ρ+ cells. In ρ− cells, G1, G2/M and S populations remained similar to cells exposed to radon. In conclusion, radon-induced changes in ROS generation, MMP and cell cycle are all attributed to reduction of apoptosis, which may trigger and promote cell transformation leading to carcinogenesis. Our study indicates that the use of the ρ− knock-down mtDNA HBE cells may serve as a reliable model to study the role played by mitochondria in carcinogenic diseases. PMID:22891884
Identification of cardiac rhythm features by mathematical analysis of vector fields.
Fitzgerald, Tamara N; Brooks, Dana H; Triedman, John K
2005-01-01
Automated techniques for locating cardiac arrhythmia features are limited, and cardiologists generally rely on isochronal maps to infer patterns in the cardiac activation sequence during an ablation procedure. Velocity vector mapping has been proposed as an alternative method to study cardiac activation in both clinical and research environments. In addition to the visual cues that vector maps can provide, vector fields can be analyzed using mathematical operators such as the divergence and curl. In the current study, conduction features were extracted from velocity vector fields computed from cardiac mapping data. The divergence was used to locate ectopic foci and wavefront collisions, and the curl to identify central obstacles in reentrant circuits. Both operators were applied to simulated rhythms created from a two-dimensional cellular automaton model, to measured data from an in situ experimental canine model, and to complex three-dimensional human cardiac mapping data sets. Analysis of simulated vector fields indicated that the divergence is useful in identifying ectopic foci, with a relatively small number of vectors and with errors of up to 30 degrees in the angle measurements. The curl was useful for identifying central obstacles in reentrant circuits, and the number of velocity vectors needed increased as the rhythm became more complex. The divergence was able to accurately identify canine in situ pacing sites, areas of breakthrough activation, and wavefront collisions. In data from human arrhythmias, the divergence reliably estimated origins of electrical activity and wavefront collisions, but the curl was less reliable at locating central obstacles in reentrant circuits, possibly due to the retrospective nature of data collection. The results indicate that the curl and divergence operators applied to velocity vector maps have the potential to add valuable information in cardiac mapping and can be used to supplement human pattern recognition.
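A compact way to see the operators at work: on a regular grid, divergence flags sources (ectopic foci) and sinks (wavefront collisions), while the z-component of the curl flags rotation about a central obstacle. The sketch below applies finite differences to a synthetic radial field; real mapping data would first be interpolated onto such a grid.

```python
import numpy as np

def divergence_curl(vx, vy, dx=1.0, dy=1.0):
    """Finite-difference divergence and (z-component of) curl of a 2D
    conduction-velocity field sampled on a regular grid."""
    dvx_dx = np.gradient(vx, dx, axis=1)
    dvy_dy = np.gradient(vy, dy, axis=0)
    dvy_dx = np.gradient(vy, dx, axis=1)
    dvx_dy = np.gradient(vx, dy, axis=0)
    return dvx_dx + dvy_dy, dvy_dx - dvx_dy

# Synthetic focal source at the grid center: velocity points radially outward
y, x = np.mgrid[-5:6, -5:6].astype(float)
r = np.hypot(x, y) + 1e-9
div, curl = divergence_curl(x / r, y / r)
iy, ix = np.unravel_index(div.argmax(), div.shape)
print("divergence peaks at", (iy, ix))   # expected: (5, 5), the focal source
```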
NASA Astrophysics Data System (ADS)
Bai, Ou; Lin, Peter; Vorbach, Sherry; Floeter, Mary Kay; Hattori, Noriaki; Hallett, Mark
2008-03-01
We explored whether a reliable, high-performance brain-computer interface (BCI) can be achieved using non-invasive EEG signals associated with human natural motor behavior, without extensive user training. We propose a new BCI method in which users either sustain or stop a motor task, time-locked to a predefined time window. Nine healthy volunteers, one stroke survivor with right-sided hemiparesis and one patient with amyotrophic lateral sclerosis (ALS) participated in this study. Subjects did not receive BCI training before participating. We investigated tasks of both physical movement and motor imagery. The surface Laplacian derivation was used to enhance EEG spatial resolution. A model-free threshold-setting method was used for the classification of motor intentions. The performance of the proposed BCI was validated by an online sequential binary-cursor-control game for two-dimensional cursor movement. Event-related desynchronization and synchronization were observed when subjects sustained or stopped either motor execution or motor imagery. Feature analysis showed that EEG beta-band activity over the sensorimotor area provided the largest discrimination. With simple model-free classification of beta-band EEG activity from a single electrode (with surface Laplacian derivation), the online classification accuracies for motor execution/motor imagery were: >90%/~80% for six healthy volunteers, >80%/~80% for the stroke patient and ~90%/~80% for the ALS patient. The EEG activities of the other three healthy volunteers were not classifiable. The sensorimotor beta rhythm of EEG associated with human natural motor behavior can thus support a reliable, high-performance BCI for both healthy subjects and patients with neurological disorders. Significance: The proposed non-invasive BCI method highlights a practical BCI for clinical applications in which the user does not require extensive training.
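The model-free threshold classification amounts to comparing single-electrode beta-band power against a cut-off. A toy sketch with synthetic signals follows; the frequencies, amplitudes, and midpoint threshold are arbitrary stand-ins for calibration on real EEG.

```python
import numpy as np
from scipy.signal import welch

fs = 256
rng = np.random.default_rng(3)
t = np.arange(2 * fs) / fs

def beta_power(eeg):
    """Power in the 16-30 Hz sensorimotor beta band from a Welch PSD."""
    f, pxx = welch(eeg, fs=fs, nperseg=fs)
    return pxx[(f >= 16) & (f <= 30)].sum()

# Synthetic epochs: sustained movement attenuates beta (event-related
# desynchronization); stopping produces a beta rebound. Amplitudes are arbitrary.
erd_epoch = 0.2 * np.sin(2 * np.pi * 22 * t) + rng.standard_normal(t.size)
ers_epoch = 1.5 * np.sin(2 * np.pi * 22 * t) + rng.standard_normal(t.size)

threshold = 0.5 * (beta_power(erd_epoch) + beta_power(ers_epoch))  # model-free cut
for name, epoch in [("sustain", erd_epoch), ("stop", ers_epoch)]:
    intent = "stop (beta rebound)" if beta_power(epoch) > threshold else "sustain (ERD)"
    print(f"{name!r} epoch classified as: {intent}")
```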
Myatt, Julia P; Crompton, Robin H; Thorpe, Susannah K S
2011-01-01
By relating an animal's morphology to its functional role and the behaviours performed, we can further develop our understanding of the selective factors and constraints acting on the adaptations of great apes. Comparison of muscle architecture between different ape species, however, is difficult because only small sample sizes are ever available. Further, such samples are often comprised of different age–sex classes, so studies have to rely on scaling techniques to remove body mass differences. However, the reliability of such scaling techniques has been questioned. As datasets increase in size, more reliable statistical analysis may eventually become possible. Here we employ geometric and allometric scaling techniques, and ANCOVAs (a form of general linear model, GLM) to highlight and explore the different methods available for comparing functional morphology in the non-human great apes. Our results underline the importance of regressing data against a suitable body size variable to ascertain the relationship (geometric or allometric) and of choosing appropriate exponents by which to scale data. ANCOVA models, while likely to be more robust than scaling for species comparisons when sample sizes are high, suffer from reduced power when sample sizes are low. Therefore, until sample sizes are radically increased it is preferable to include scaling analyses along with ANCOVAs in data exploration. Overall, the results obtained from the different methods show little significant variation, whether in muscle belly mass, fascicle length or physiological cross-sectional area between the different species. This may reflect relatively close evolutionary relationships of the non-human great apes; a universal influence on morphology of generalised orthograde locomotor behaviours or, quite likely, both. PMID:21507000
Luo, Chunyuan; Tong, Min; Maxwell, Donald M; Saxena, Ashima
2008-09-25
Non-human primates are valuable animal models that are used for the evaluation of nerve agent toxicity as well as antidotes, and results from animal experiments are extrapolated to humans. It has been demonstrated that the efficacy of an oxime primarily depends on its ability to reactivate nerve agent-inhibited acetylcholinesterase (AChE). If the in vitro oxime reactivation of nerve agent-inhibited animal AChE is similar to that of human AChE, it is likely that the results of an in vivo animal study will reliably extrapolate to humans. Therefore, the goal of this study was to compare the aging and reactivation of human and different monkey (Rhesus, Cynomolgus, and African Green) AChEs inhibited by GF, GD, and VR. The oximes examined include the traditional oxime 2-PAM, two H-oximes HI-6 and HLo-7, and the new candidate oxime MMB4. Results indicate that oxime reactivation of all three monkey AChEs was very similar to human AChE. The maximum difference in the second-order reactivation rate constant between human and three monkey AChEs or between AChEs from different monkey species was 5-fold. Aging rate constants of GF-, GD-, and VR-inhibited monkey AChEs were very similar to human AChE except for GF-inhibited monkey AChEs, which aged 2-3 times faster than the human enzyme. The results of this study suggest that all three monkey species are suitable animal models for nerve agent antidote evaluation since monkey AChEs possess similar biochemical/pharmacological properties to human AChE.
NASA Technical Reports Server (NTRS)
Washburn, David A.; Rumbaugh, Duane M.
1992-01-01
Nonhuman primates provide useful models for studying a variety of medical, biological, and behavioral topics. Four years of joystick-based automated testing of monkeys using the Language Research Center's Computerized Test System (LRC-CTS) are examined to derive hints and principles for comparable testing with other species - including humans. The results of multiple parametric studies are reviewed, and reliability data are presented to reveal the surprises and pitfalls associated with video-task testing of performance.
NASA Technical Reports Server (NTRS)
Green, Scott; Kouchakdjian, Ara; Basili, Victor; Weidow, David
1990-01-01
This case study analyzes the application of the cleanroom software development methodology to the development of production software at the NASA/Goddard Space Flight Center. The cleanroom methodology emphasizes human discipline in program verification to produce reliable software products that are right the first time. Preliminary analysis of the cleanroom case study shows that the method can be applied successfully in the FDD environment and may increase staff productivity and product quality. Compared to typical Software Engineering Laboratory (SEL) activities, there is evidence of lower failure rates, a more complete and consistent set of inline code documentation, a different distribution of phase effort activity, and a different growth profile in terms of lines of code developed. The major goals of the study were to: (1) assess the process used in the SEL cleanroom model with respect to team structure, team activities, and effort distribution; (2) analyze the products of the SEL cleanroom model and determine the impact on measures of interest, including reliability, productivity, overall life-cycle cost, and software quality; and (3) analyze the residual products in the application of the SEL cleanroom model, such as fault distribution, error characteristics, system growth, and computer usage.
NASA Astrophysics Data System (ADS)
Shirley, Rachel Elizabeth
Nuclear power plant (NPP) simulators are proliferating in academic research institutions and national laboratories in response to the availability of affordable, digital simulator platforms. Accompanying the new research facilities is a renewed interest in using data collected in NPP simulators for Human Reliability Analysis (HRA) research. An experiment conducted in The Ohio State University (OSU) NPP Simulator Facility develops data collection methods and analytical tools to improve use of simulator data in HRA. In the pilot experiment, student operators respond to design basis accidents in the OSU NPP Simulator Facility. Thirty-three undergraduate and graduate engineering students participated in the research. Following each accident scenario, student operators completed a survey about perceived simulator biases and watched a video of the scenario. During the video, they periodically recorded their perceived strength of significant Performance Shaping Factors (PSFs) such as Stress. This dissertation reviews three aspects of simulator-based research using the data collected in the OSU NPP Simulator Facility.

First, a qualitative comparison of student operator performance to computer simulations of expected operator performance generated by the Information Decision Action Crew (IDAC) HRA method. Areas of comparison include procedure steps, timing of operator actions, and PSFs.

Second, development of a quantitative model of the simulator bias introduced by the simulator environment. Two types of bias are defined: Environmental Bias and Motivational Bias. This research examines Motivational Bias--that is, the effect of the simulator environment on an operator's motivations, goals, and priorities. A bias causal map is introduced to model motivational bias interactions in the OSU experiment. Data collected in the OSU NPP Simulator Facility are analyzed using Structural Equation Modeling (SEM). Data include crew characteristics, operator surveys, and time to recognize and diagnose the accident in the scenario. These models estimate how the effects of the scenario conditions are mediated by simulator bias, and demonstrate how to quantify the strength of the simulator bias.

Third, development of a quantitative model of subjective PSFs based on objective data (plant parameters, alarms, etc.) and PSF values reported by student operators. The objective PSF model is based on the PSF network in the IDAC HRA method. The final model is a mixed effects Bayesian hierarchical linear regression model. The subjective PSF model includes three factors: the Environmental PSF, the simulator Bias, and the Context. The Environmental Bias is mediated by an operator sensitivity coefficient that captures the variation in operator reactions to plant conditions.

The data collected in the pilot experiments are not expected to reflect professional NPP operator performance, because the students are still novice operators. However, the models used in this research and the methods developed to analyze them demonstrate how to consider simulator bias in experiment design and how to use simulator data to enhance the technical basis of a complex HRA method. The contributions of the research include a framework for discussing simulator bias, a quantitative method for estimating simulator bias, a method for obtaining operator-reported PSF values, and a quantitative method for incorporating the variability in operator perception into PSF models. The research demonstrates applications of Structural Equation Modeling and hierarchical Bayesian linear regression models in HRA. Finally, the research demonstrates the benefits of using student operators as a test platform for HRA research.
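To give a flavour of the hierarchical PSF model, the sketch below fits a mixed-effects regression in which each operator receives a random sensitivity (slope) linking an objective plant-condition variable to the PSF value they report. It is a frequentist stand-in for the dissertation's Bayesian hierarchical model, with entirely simulated data and invented variable names.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_ops, n_obs = 30, 8
ops = np.repeat(np.arange(n_ops), n_obs)

# Simulated ground truth: per-operator sensitivity and bias (assumptions)
sensitivity = rng.normal(1.0, 0.3, n_ops)[ops]
bias = rng.normal(0.2, 0.1, n_ops)[ops]
context = rng.normal(size=n_ops * n_obs)          # objective plant condition
reported = sensitivity * context + bias + rng.normal(0, 0.2, n_ops * n_obs)

df = pd.DataFrame({"psf": reported, "context": context, "operator": ops})
# Random intercept and random slope on context, grouped by operator
fit = smf.mixedlm("psf ~ context", df, groups="operator",
                  re_formula="~context").fit()
print(fit.summary())
```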
Zhu, Kathy Q; Engrav, Loren H; Armendariz, Rebecca; Muangman, Pornprom; Klein, Matthew B; Carrougher, Gretchen J; Deubner, Heike; Gibran, Nicole S
2005-02-01
Despite decades of research, our understanding of human hypertrophic scar is limited. A reliable animal model could significantly increase our understanding. We previously confirmed similarities between scarring in the female, red, Duroc pig and human hypertrophic scarring. The purpose of this study was to: (1) measure vascular endothelial growth factor (VEGF) and nitric oxide (NO) levels in wounds on the female Duroc; and (2) compare the NO levels to those reported for human hypertrophic scar. Shallow and deep wounds were created on four female Durocs. VEGF levels were measured using ELISA and NO levels with the Griess reagent. VEGF and NO levels were increased in deep wounds at 10 days when compared to shallow wounds (p < 0.05). At 15 weeks, VEGF and NO levels had returned to the level of shallow wounds. At 21 weeks, VEGF and NO levels had declined below baseline levels in deep wounds, and the NO levels were significantly lower (p < 0.01). We found that VEGF and NO exhibit two distinctly different temporal patterns in shallow and deep wounds on the female Durocs. Furthermore, NO is decreased in female Duroc scar as it is in human hypertrophic scar, further validating the usefulness of the model.
Analysis of whisker-toughened CMC structural components using an interactive reliability model
NASA Technical Reports Server (NTRS)
Duffy, Stephen F.; Palko, Joseph L.
1992-01-01
Realizing wider utilization of ceramic matrix composites (CMC) requires the development of advanced structural analysis technologies. This article focuses on the use of interactive reliability models to predict component probability of failure. The deterministic Willam-Warnke failure criterion serves as the theoretical basis for the reliability model presented here. The model has been implemented into a test-bed software program. This computer program has been coupled to a general-purpose finite element program. A simple structural problem is presented to illustrate the reliability model and the computer algorithm.
A novel animal model for hyperdynamic airway collapse.
Tsukada, Hisashi; O'Donnell, Carl R; Garland, Robert; Herth, Felix; Decamp, Malcolm; Ernst, Armin
2010-12-01
Tracheobronchomalacia (TBM) is increasingly recognized as a condition associated with significant pulmonary morbidity. However, treatment is invasive and complex, and because there is no appropriate animal model, novel diagnostic and treatment strategies are difficult to evaluate. We endeavored to develop a reliable airway model to simulate hyperdynamic airway collapse in humans. Seven 20-kg male sheep were enrolled in this study. Tracheomalacia was created by submucosal resection of > 50% of the circumference of 10 consecutive cervical tracheal cartilage rings through a midline cervical incision. A silicone stent was placed in the trachea to prevent airway collapse during recovery. Tracheal collapsibility was assessed at protocol-specific time points by bronchoscopy and multidetector CT imaging while temporarily removing the stent. Esophageal pressure and flow data were collected to assess flow limitation during spontaneous breathing. All animals tolerated the surgical procedure well and were stented without complications. One sheep died at 2 weeks because of respiratory failure related to stent migration. In all sheep, near-total forced inspiratory airway collapse was observed up to 3 months postprocedure. Esophageal manometry demonstrated flow limitation associated with large negative pleural pressure swings during rapid spontaneous inhalation. Hyperdynamic airway collapse can reliably be induced with this technique. It may serve as a model for evaluation of novel diagnostic and therapeutic strategies for TBM.
A Jones matrix formalism for simulating three-dimensional polarized light imaging of brain tissue.
Menzel, M; Michielsen, K; De Raedt, H; Reckfort, J; Amunts, K; Axer, M
2015-10-06
The neuroimaging technique three-dimensional polarized light imaging (3D-PLI) provides a high-resolution reconstruction of nerve fibres in human post-mortem brains. The orientations of the fibres are derived from birefringence measurements of histological brain sections assuming that the nerve fibres—consisting of an axon and a surrounding myelin sheath—are uniaxially birefringent and that the measured optic axis is oriented in the direction of the nerve fibres (macroscopic model). Although experimental studies support this assumption, the molecular structure of the myelin sheath suggests that the birefringence of a nerve fibre can be described more precisely by multiple optic axes oriented radially around the fibre axis (microscopic model). In this paper, we compare the use of the macroscopic and the microscopic model for simulating 3D-PLI by means of the Jones matrix formalism. The simulations show that the macroscopic model ensures a reliable estimation of the fibre orientations as long as the polarimeter does not resolve structures smaller than the diameter of single fibres. In the case of fibre bundles, polarimeters with even higher resolutions can be used without losing reliability. When taking the myelin density into account, the derived fibre orientations are considerably improved. © 2015 The Author(s).
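The macroscopic model can be reproduced with a few Jones matrices: circularly polarized light passes a single uniaxial retarder whose fast axis lies along the fibre, and a rotating linear analyser yields a sinusoidal intensity profile I(ρ) ∝ 1 − sin(2(ρ − φ)) sin δ, from which the in-plane fibre direction φ is recovered. The optical stack below is an idealized equivalent of a 3D-PLI polarimeter, and all parameter values are assumed.

```python
import numpy as np

def rot(a):
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

def retarder(delta, phi):
    """Jones matrix of a linear retarder with retardation delta, fast axis phi."""
    return rot(phi) @ np.diag([np.exp(-1j * delta / 2),
                               np.exp(1j * delta / 2)]) @ rot(-phi)

# Macroscopic model: one optic axis along the fibre direction (values assumed)
phi_fibre, delta = np.deg2rad(35.0), 0.8
E_in = np.array([1.0, 1.0j]) / np.sqrt(2)           # circularly polarized input

rhos = np.deg2rad(np.arange(0, 180, 10))            # analyser rotation angles
I = []
for rho in rhos:
    analyser = np.outer([np.cos(rho), np.sin(rho)],
                        [np.cos(rho), np.sin(rho)])  # linear polarizer at rho
    E_out = analyser @ retarder(delta, phi_fibre) @ E_in
    I.append(abs(E_out[0]) ** 2 + abs(E_out[1]) ** 2)
I = np.asarray(I)

# Recover the in-plane direction from the second harmonic of the profile,
# as in standard Fourier analysis of the light-intensity signal.
a = (I * np.sin(2 * rhos)).sum()
b = (I * np.cos(2 * rhos)).sum()
phi_est = 0.5 * np.arctan2(b, -a)
print(f"recovered direction: {np.rad2deg(phi_est):.1f} deg (true 35.0)")
```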
Walsh, Susan; Lindenbergh, Alexander; Zuniga, Sofia B; Sijen, Titia; de Knijff, Peter; Kayser, Manfred; Ballantyne, Kaye N
2011-11-01
The IrisPlex system consists of a highly sensitive multiplex genotyping assay together with a statistical prediction model, providing users with the ability to predict blue and brown human eye colour from DNA samples with over 90% precision. This 'DNA intelligence' system is expected to aid police investigations by providing phenotypic information on unknown individuals when conventional DNA profiling is not informative. Falling within the new area of forensic DNA phenotyping, this paper describes the developmental validation of the IrisPlex assay following the Scientific Working Group on DNA Analysis Methods (SWGDAM) guidelines for the application of DNA-based eye colour prediction to forensic casework. The IrisPlex assay produces complete SNP genotypes with only 31 pg of DNA, approximately six human diploid cell equivalents, and is therefore more sensitive than commercial STR kits currently used in forensics. Species testing revealed human and primate specificity for a complete SNP profile. The assay is capable of producing accurate results from simulated casework samples such as blood, semen, saliva, hair, and trace DNA samples, including extremely low quantity samples. Due to its design, it can also produce full profiles with highly degraded samples often found in forensic casework. Concordance testing between three independent laboratories displayed reproducible results of consistent levels on varying types of simulated casework samples. With such high levels of sensitivity, specificity, consistency and reliability, this genotyping assay, as a core part of the IrisPlex system, operates in accordance with SWGDAM guidelines. Furthermore, as we demonstrated previously, the IrisPlex eye colour prediction system provides reliable results without the need for knowledge on the bio-geographic ancestry of the sample donor. Hence, the IrisPlex system, with its model-based prediction probability estimation of blue and brown human eye colour, represents a useful tool for immediate application in accredited forensic laboratories, to be used for forensic intelligence in tracing unknown individuals from crime scene samples. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
The biological component of the life support system for a Martian expedition.
Sychev, V N; Levinskikh, M A; Shepelev, Ye Ya
2003-01-01
Ground-based experiments at RF SSC-IBMP RAS (State Science Center of the Russian Federation--Institute of Biomedical Problems of the Russian Academy of Sciences) were aimed at overall studies of a human-unicellular algae-mineralization LSS (life support system) model. The system was 15 m3 in volume. It contained 45 L of algal suspension with a dry substance density of 10-12 g per liter; water volume, including the algal suspension, was 59 L. More sophisticated model systems with partial substitution of unicellular algae by higher plants (crop area of 15 m2) were tested in three experiments from 1.5 to 2 months in duration. The experiments demonstrated that an LSS employing unicellular algae performs not only its macrofunction (regeneration of atmosphere and water) but also carries several other functions (purification of the atmosphere, formation of a microbial cenosis, etc.) that provide an adequate human environment. It is also important that the functional reliability of the algal regenerative subsystem is secured by the huge number of cells, which can, in the event of the death of part of the population, rapidly restore the population size and hence the functionality of the LSS autotrophic component. A Martian crew will be detached from Earth's biosphere for a long period, so the LSS of their vehicle must be highly reliable, robust and redundant. One approach to LSS redundancy is the installation of two systems with different but equally efficient regeneration technologies, i.e. physical-chemical and biological. At best, these two systems should operate in parallel, sharing the function of regenerating the human environment. In case of failure or a sharp deterioration in the performance of one system, the other will, by way of redundancy, increase its throughput to make up for the loss. This LSS design will enable simultaneous handling of a number of critical problems, including adequate satisfaction of human environmental needs. © 2003 COSPAR. Published by Elsevier Science Ltd. All rights reserved.
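The throughput-sharing redundancy argument can be made concrete with the textbook parallel-reliability formula R = 1 - (1 - R1)(1 - R2); a minimal sketch with invented failure rates follows.

```python
import math

def reliability(failure_rate, mission_days):
    """Exponential-lifetime reliability R(t) = exp(-lambda * t)."""
    return math.exp(-failure_rate * mission_days)

# Hypothetical constant failure rates (per day) for the two regeneration loops
# over a ~900-day Mars mission; both numbers are invented for illustration.
R_physchem = reliability(2e-4, 900)   # physical-chemical subsystem
R_bio      = reliability(5e-4, 900)   # biological (algal) subsystem

# Active-parallel redundancy: the LSS fails only if both loops fail.
R_parallel = 1 - (1 - R_physchem) * (1 - R_bio)
print(f"phys-chem alone: {R_physchem:.3f}, biological alone: {R_bio:.3f}, "
      f"in parallel: {R_parallel:.4f}")
```

Even with individually modest subsystems, the parallel arrangement pushes system reliability well above either loop alone, which is the rationale for running both technologies simultaneously.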
Kulakovskiy, Ivan V; Vorontsov, Ilya E; Yevshin, Ivan S; Sharipov, Ruslan N; Fedorova, Alla D; Rumynskiy, Eugene I; Medvedeva, Yulia A; Magana-Mora, Arturo; Bajic, Vladimir B; Papatsenko, Dmitry A; Kolpakov, Fedor A; Makeev, Vsevolod J
2018-01-04
We present a major update of the HOCOMOCO collection that consists of patterns describing DNA binding specificities for human and mouse transcription factors. In this release, we profited from a nearly doubled volume of published in vivo experiments on transcription factor (TF) binding to expand the repertoire of binding models, replace low-quality models previously based on in vitro data only and cover more than a hundred TFs with previously unknown binding specificities. This was achieved by systematic motif discovery from more than five thousand ChIP-Seq experiments uniformly processed within the BioUML framework with several ChIP-Seq peak calling tools and aggregated in the GTRD database. HOCOMOCO v11 contains binding models for 453 mouse and 680 human transcription factors and includes 1302 mononucleotide and 576 dinucleotide position weight matrices, which describe primary binding preferences of each transcription factor and reliable alternative binding specificities. An interactive interface and bulk downloads are available on the web: http://hocomoco.autosome.ru and http://www.cbrc.kaust.edu.sa/hocomoco11. In this release, we complement HOCOMOCO by MoLoTool (Motif Location Toolbox, http://molotool.autosome.ru) that applies HOCOMOCO models for visualization of binding sites in short DNA sequences. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
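For readers unfamiliar with position weight matrices, the sketch below shows how a mononucleotide PWM of the kind HOCOMOCO distributes is applied: a candidate site's score is the sum of per-position weights. The matrix values here are toy numbers; real models are available from the URLs above.

```python
import numpy as np

# Toy mononucleotide PWM for a length-4 motif: rows A, C, G, T
# (log-odds weights; a real HOCOMOCO model would be read from its PWM file).
pwm = np.array([
    [ 1.2, -0.8, -1.5,  0.9],   # A
    [-0.7,  1.0, -0.9, -1.2],   # C
    [-1.1, -0.6,  1.4, -0.5],   # G
    [-0.4, -1.3, -1.0,  0.8],   # T
])
idx = {"A": 0, "C": 1, "G": 2, "T": 3}

def best_hit(sequence, pwm):
    """Slide the PWM over the sequence; return the best score and its offset."""
    width = pwm.shape[1]
    scores = [
        (sum(pwm[idx[b], j] for j, b in enumerate(sequence[i:i + width])), i)
        for i in range(len(sequence) - width + 1)
    ]
    return max(scores)

score, offset = best_hit("GGACATTAGCAT", pwm)
print(f"best motif score {score:.2f} at offset {offset}")
```

Dinucleotide models work the same way, except that the weights are indexed by adjacent base pairs rather than single bases, capturing local dependencies.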
Integrated performance and reliability specification for digital avionics systems
NASA Technical Reports Server (NTRS)
Brehm, Eric W.; Goettge, Robert T.
1995-01-01
This paper describes an automated tool for performance and reliability assessment of digital avionics systems, called the Automated Design Tool Set (ADTS). ADTS is based on an integrated approach to design assessment that unifies the traditional performance and reliability views of system designs, and that addresses interdependencies between performance and reliability behavior via the exchange of parameters and results between mathematical models of each type. A multi-layer tool set architecture has been developed for ADTS that separates the concerns of system specification, model generation, and model solution. Performance and reliability models are generated automatically as a function of candidate system designs, and model results are expressed within the system specification. The layered approach helps deal with the inherent complexity of the design assessment process, and preserves long-term flexibility to accommodate a wide range of models and solution techniques within the tool set structure. ADTS research and development to date has focused on development of a language for specifying system designs as a basis for performance and reliability evaluation. A model generation and solution framework has also been developed for ADTS that will ultimately encompass an integrated set of analytic and simulation-based techniques for performance, reliability, and combined design assessment.
NASA Technical Reports Server (NTRS)
Venkatapathy, Ethiraj; Gage, Peter; Wright, Michael J.
2017-01-01
Mars Sample Return is our Grand Challenge for the coming decade. TPS (Thermal Protection System) nominal performance is not the key challenge. The main difficulty for designers is the need to verify unprecedented reliability for the entry system: current guidelines for prevention of backward contamination require that the probability of spores larger than 1 micron in diameter escaping into the Earth environment be lower than 1 in 1,000,000 for the entire system, and the allocation to the TPS would be more stringent still. For reference, the reliability allocation for the Orion TPS is closer to 1 in 1,000, and the demonstrated reliability of previous human Earth return systems was closer to 1 in 100. Improving reliability by more than three orders of magnitude is a grand challenge indeed. The TPS community must embrace the possibility of new architectures that prioritize reliability above thermal performance and mass efficiency. The MSR (Mars Sample Return) EEV (Earth Entry Vehicle) will be exposed to MMOD (Micrometeoroid and Orbital Debris) impacts prior to reentry. A chute-less aero-shell design with a self-righting shape was baselined in prior MSR studies, on the assumption that a passive system maximizes EEV robustness. The aero-shell, along with the TPS, must therefore survive ground impact without breaking apart. System verification will require testing to establish ablative performance and thermal failure limits, but also testing of MMOD damage and of structural performance at ground impact. Mission requirements will demand analysis, testing and verification focused on establishing the reliability of the design. In this proposed talk, we will focus on the grand challenge of the MSR EEV TPS and the need for innovative approaches to the challenges in modeling, testing, manufacturing and verification.
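The scale of the verification problem can be illustrated with the standard zero-failure binomial demonstration formula, n >= ln(1 - C) / ln(R): demonstrating reliability R at confidence C with no observed failures requires n consecutive successful trials. This is a textbook result, not an MSR verification plan.

```python
import math

def zero_failure_tests(reliability, confidence=0.9):
    """Successful trials needed to demonstrate `reliability` at
    `confidence` when zero failures are observed."""
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

# Failure probabilities of 1/100, 1/1,000 and 1/1,000,000 respectively.
for r in (1 - 1e-2, 1 - 1e-3, 1 - 1e-6):
    print(f"R = {r}: {zero_failure_tests(r):,} tests")
```

The last case requires millions of tests, which is why reliability at the 1-in-1,000,000 level cannot be demonstrated by testing alone and must instead rest on validated models and analysis, as the abstract argues.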
Soleimani, Mohammad Ali; Bahrami, Nasim; Yaghoobzadeh, Ameneh; Banihashemi, Hedieh; Nia, Hamid Sharif; Haghdoost, Ali Akbar
2016-01-01
Due to increasing recognition of the importance of death anxiety for understanding human nature, it is important that researchers who investigate death anxiety have reliable and valid instruments with which to measure it. The purpose of this study was to evaluate the validity and reliability of the Persian version of the Templer Death Anxiety Scale (TDAS) in family caregivers of cancer patients. A sample of 326 caregivers of cancer patients completed the 15-item questionnaire. Principal components analysis (PCA) followed by varimax rotation was used to assess the factor structure of the scale. Construct validity was assessed using exploratory and confirmatory factor analyses; convergent and discriminant validity were also examined. Reliability was assessed with Cronbach's alpha coefficients and construct reliability. Based on the results of the PCA and consideration of the meaning of the items, a three-factor solution explaining 60.38% of the variance was identified. A confirmatory factor analysis (CFA) then supported the adequacy of the three-domain structure. Goodness-of-fit indices showed an acceptable overall fit for the full model: χ²(df = 61) = 262.32; χ²/df (CMIN/DF) = 2.048; adjusted goodness-of-fit index (AGFI) = 0.922; parsimonious comparative fit index (PCFI) = 0.703; normed fit index (NFI) = 0.912; root mean square error of approximation (RMSEA) = 0.055. Convergent and discriminant validity criteria were fulfilled. Cronbach's alpha and construct reliability were both greater than 0.70. The findings show that the Persian version of the TDAS has a three-factor structure and acceptable validity and reliability.
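For reference, the Cronbach's alpha statistic reported above can be computed directly from an item-response matrix; a minimal sketch on simulated data (not the study's data) follows.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of item scores.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(326, 1))                          # shared latent factor
responses = latent + rng.normal(scale=0.8, size=(326, 15))  # 15 TDAS-like items
print(f"alpha = {cronbach_alpha(responses):.2f}")  # > 0.70 = acceptable reliability
```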
Reliable and energy-efficient communications for wireless biomedical implant systems.
Ntouni, Georgia D; Lioumpas, Athanasios S; Nikita, Konstantina S
2014-11-01
Implant devices are used to measure biological parameters and transmit their results to remote off-body devices. As implants are characterized by strict requirements on size, reliability, and power consumption, applying the concept of cooperative communications to wireless body area networks offers several benefits. In this paper, we aim to minimize the power consumption of the implant device by utilizing on-body wearable devices, while providing the necessary reliability in terms of outage probability and bit error rate. Taking into account realistic power considerations and wireless propagation environments based on the IEEE P802.15 channel model, an exact theoretical analysis is conducted for evaluating several communication scenarios with respect to the position of the wearable device and the motion of the human body. The derived closed-form expressions are employed to minimize the required transmission power subject to a minimum quality-of-service requirement. In this way, complexity and power consumption are transferred from the implant device to the on-body relay, an efficient approach since on-body devices can be replaced easily, in contrast to in-body implants.
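As a rough illustration of the kind of closed-form power minimization described (not the paper's exact derivation), assume a Rayleigh-fading implant-to-relay link, for which the outage probability P_out = 1 - exp(-gamma_th * N / (P * g)) inverts neatly for the minimum transmit power. All link-budget numbers below are invented.

```python
import math

def min_power_rayleigh(gamma_th, noise, gain, outage_target):
    """Smallest transmit power P (watts) such that
    1 - exp(-gamma_th * noise / (P * gain)) <= outage_target
    on a Rayleigh-fading implant-to-relay link."""
    return gamma_th * noise / (gain * -math.log(1 - outage_target))

# Invented link budget: 10 dB SNR threshold, -100 dBm noise, 60 dB path loss.
gamma_th = 10 ** (10 / 10)
noise = 10 ** ((-100 - 30) / 10)        # dBm -> W
gain = 10 ** (-60 / 10)
p = min_power_rayleigh(gamma_th, noise, gain, outage_target=1e-3)
print(f"required implant transmit power: {p * 1e6:.1f} microwatts")
```

Placing the relay closer to the implant raises the channel gain and lowers the required power, which is exactly the benefit the cooperative architecture exploits.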
Integrated Safety Risk Reduction Approach to Enhancing Human-Rated Spaceflight Safety
NASA Astrophysics Data System (ADS)
Mikula, J. F. Kip
2005-12-01
This paper explores and defines the currently accepted concept and philosophy of safety improvement based on reliability enhancement (called here Reliability Enhancement Based Safety Theory [REBST]). In this theory, a reliability calculation is used as the measure of the safety achieved on the program. The calculation may be based on a math model or Fault Tree Analysis (FTA) of the system, or on an Event Tree Analysis (ETA) of the system's operational mission sequence. In each case, the numbers used in the calculation are hardware failure rates gleaned from past similar programs. As part of this paper, a fictional but representative case study is provided that helps to illustrate the problems and inaccuracies of this approach to safety determination. A safety determination and enhancement approach based on hazard analysis, worst-case analysis, and safety risk determination (called here Worst Case Based Safety Theory [WCBST]) is then defined and detailed using the same case study. In the end it is concluded that an approach combining the two theories works best to reduce safety risk.
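To make the REBST-style calculation concrete, here is a minimal fault-tree evaluation with invented failure probabilities; the paper's point is precisely that such numbers inherit the limitations of the historical hardware data behind them.

```python
# Minimal fault-tree evaluation, REBST-style: the top event (loss of mission)
# occurs if the thruster fails OR both redundant controllers fail.
# All probabilities are invented for illustration.
p_thruster = 1e-4
p_controller = 1e-3

p_and = p_controller * p_controller            # AND gate (independent redundancy)
p_top = 1 - (1 - p_thruster) * (1 - p_and)     # OR gate via inclusion-exclusion
print(f"top-event (loss-of-mission) probability: {p_top:.2e}")
```

A WCBST-style critique would note that this figure says nothing about unmodeled hazards or worst-case conditions outside the historical failure-rate data.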
NASA Astrophysics Data System (ADS)
Korre, Anna; Manzoor, Saba; Simperler, Alexandra
2015-04-01
Post-combustion CO2 capture (PCCC) in power plants, using amines as the solvent, is one of the reduction technologies employed to combat escalating levels of CO2 in the atmosphere. However, the amine solvents used for capturing CO2 give rise to harmful secondary emissions, such as nitrosamines and nitramines, which are suspected to be potent carcinogens. It is therefore essential to assess the fate of these amine emissions in the atmosphere by studying their atmospheric chemistry, their dispersion and transport pathways away from the source, and their deposition in the environment, so that the risk posed to human health and the natural environment can be assessed accurately. Until recently, an important knowledge gap has been the consideration of the atmospheric chemistry of these amine emissions simultaneously with dispersion and deposition studies, as required for reliable human health and environmental risk assessments. The authors have developed a methodology to assess the distribution of such emissions away from a post-combustion facility by studying the atmospheric chemistry of monoethanolamine, the most commonly used solvent for CO2 capture, and of the resulting degradation amines, methylamine and dimethylamine, coupled with dispersion modelling calculations (Manzoor et al., 2014; Manzoor et al., 2015). Rate coefficients describing the full atmospheric chemistry schemes of the amines studied were evaluated using quantum chemical and kinetic modelling calculations. These coefficients were used to solve the advection-dispersion-chemistry equation with an atmospheric dispersion model, ADMS 5. The methodology is applicable to a power plant of any size at any geographical location. In this paper, the human health risk assessment is integrated into the modelling study. The methodology is demonstrated in a case study of the UK's largest capture pilot plant, Ferrybridge CCPilot 100+, estimating the dispersion, chemical transformation and transport pathways of the amines and their degradation products away from the emitting facilities for the worst-case scenario. The results obtained are used to calculate cancer risks based on the oral cancer slope factor (CSF), the risk-specific dose (RSD) and the tolerable risk level of these chemical discharges. According to the CSF-RSD relationship (WQSA, 2011), a high CSF corresponds to a small RSD, i.e. a highly potent carcinogen. The health risk assessment follows the US EPA method (USEPA, 1992), which considers the atmospheric concentrations of the pollutants (mg m-3, evaluated by the dispersion model), daily intake through inhalation (mg kg-1 d-1), inhalation rate (m3 d-1), body weight (kg), averaging time (d), exposure time (d), exposure frequency (d), absorption factor and retention factor. Deterministic and probabilistic estimations of the human health risks caused by exposure to these chemical pollutant discharges are conducted as well. The findings of this study suggest that the developed methodology is reliable in determining the risk that amine emissions from PCCC technology pose to human health. With this reliable and universal approach it is possible to assess the fate of amine emissions, which remains a key issue to address for large-scale CCS implementation.
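The US EPA exposure arithmetic referred to above reduces to a chronic daily intake multiplied by a slope factor; a minimal sketch with purely illustrative values follows (in the study itself, the concentrations come from the ADMS 5 dispersion runs).

```python
def chronic_daily_intake(conc, inhalation_rate, exposure_freq, exposure_dur,
                         body_weight, averaging_time,
                         absorption=1.0, retention=1.0):
    """Inhalation intake in mg/kg/day in the style of the US EPA (1992)
    formulation: (C * IR * EF * ED * AF * RF) / (BW * AT)."""
    return (conc * inhalation_rate * exposure_freq * exposure_dur
            * absorption * retention) / (body_weight * averaging_time)

# Illustrative values only; none are taken from the Ferrybridge study.
cdi = chronic_daily_intake(
    conc=1e-6,                # nitrosamine air concentration, mg/m3
    inhalation_rate=20,       # m3/day
    exposure_freq=350,        # days/year
    exposure_dur=30,          # years
    body_weight=70,           # kg
    averaging_time=70 * 365,  # days (lifetime averaging for carcinogens)
)
csf = 51                      # hypothetical cancer slope factor, (mg/kg/day)^-1
print(f"CDI = {cdi:.2e} mg/kg/day, lifetime cancer risk = {cdi * csf:.2e}")
```

The probabilistic variant of the assessment replaces these point values with distributions and propagates them, for example by Monte Carlo sampling, to obtain a risk distribution rather than a single estimate.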
A Model for Estimating the Reliability and Validity of Criterion-Referenced Measures.
ERIC Educational Resources Information Center
Edmonston, Leon P.; Randall, Robert S.
A decision model designed to determine the reliability and validity of criterion referenced measures (CRMs) is presented. General procedures which pertain to the model are discussed as to: Measures of relationship, Reliability, Validity (content, criterion-oriented, and construct validation), and Item Analysis. The decision model is presented in…
Non-Traditional Displays for Mission Monitoring
NASA Technical Reports Server (NTRS)
Trujillo, Anna C.; Schutte, Paul C.
1999-01-01
Advances in automation capability and reliability have changed the role of humans from operating and controlling processes to simply monitoring them for anomalies. However, humans are notoriously poor at monitoring highly reliable systems over time; the human is thus assigned a task for which they are ill-equipped. We believe that this has led to the dominance of human error in process control activities such as operating transportation systems (aircraft and trains), monitoring patient health in the medical industry, and controlling plant operations. Research has shown, though, that an automated monitor can assist humans in recognizing and dealing with failures. One possible solution to this predicament is a polar-star display that shows deviations from normal states based on the parameters most indicative of mission health.
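A minimal sketch of the polar-star idea: each normalized parameter is drawn on its own spoke, so nominal operation traces a regular polygon and any deviation visibly distorts the star. The parameter names and readings below are invented.

```python
import numpy as np
import matplotlib.pyplot as plt

params = ["cabin O2", "cabin CO2", "bus voltage", "coolant temp", "tank pressure"]
nominal = np.ones(len(params))           # each parameter scaled so nominal == 1.0
current = np.array([1.0, 1.35, 0.98, 1.02, 0.72])   # invented off-nominal readings

angles = np.linspace(0, 2 * np.pi, len(params), endpoint=False)
ang = np.append(angles, angles[0])       # close the polygons so outlines join up
ax = plt.subplot(projection="polar")
ax.plot(ang, np.append(nominal, nominal[0]), "g--", label="nominal")
ax.plot(ang, np.append(current, current[0]), "r-", label="current")
ax.set_xticks(angles)
ax.set_xticklabels(params)
ax.legend(loc="lower right")
plt.show()
```

The appeal of the format is perceptual: a monitor does not need to read any number to notice that the red outline is no longer a regular polygon.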