Sample records for fundamental attribution error

  1. Attributional Errors and Gender Stereotypes: Perceptions of Male and Female Experts on Sex-Typed Material

    ERIC Educational Resources Information Center

    Peturson, Elizabeth D.; Cramer, Kenneth M.; Pomerleau, Chantal M.

    2011-01-01

    Observers frequently commit the fundamental attribution error by failing to make adequate allowance for contextual influences in favour of dispositional explanations. The present experiment tested whether people would attribute a quizmaster's knowledge of the quiz topic to personal factors (personally knowing the answers) or to situational factors…

  2. Spontaneous mentalizing predicts the fundamental attribution error.

    PubMed

    Moran, Joseph M; Jolly, Eshin; Mitchell, Jason P

    2014-03-01

    When explaining the reasons for others' behavior, perceivers often overemphasize underlying dispositions and personality traits over the power of the situation, a tendency known as the fundamental attribution error. One possibility is that this bias results from the spontaneous processing of others' mental states, such as their momentary feelings or more enduring personality characteristics. Here, we use fMRI to test this hypothesis. Participants read a series of stories that described a target's ambiguous behavior in response to a specific social situation and later judged whether that act was attributable to the target's internal dispositions or to external situational factors. Neural regions consistently associated with mental state inference, especially the medial pFC, strongly predicted whether participants later made dispositional attributions. These results suggest that the spontaneous engagement of mentalizing may underlie the biased tendency to attribute behavior to dispositional over situational forces.

  3. Broadening our understanding of clinical quality: from attribution error to situated cognition.

    PubMed

    Artino, A R; Durning, S J; Waechter, D M; Leary, K L; Gilliland, W R

    2012-02-01

    The tendency to overestimate the influence of personal characteristics on outcomes, and to underestimate the influence of situational factors, is known as the fundamental attribution error. We argue that medical-education researchers and policy makers may be guilty of this error in their quest to understand clinical quality. We suggest that to truly understand clinical quality, they must examine situational factors, which often have a strong influence on the quality of clinical encounters.

  4. Importance of understanding landscape biases in USGS gage locations: Implications and solutions for managers

    USGS Publications Warehouse

    Wagner, Tyler; DeWeber, Jefferson Tyrell; Tsang, Yin-Phan; Krueger, Damon; Whittier, Joanna B.; Infante, Dana M.; Whelan, Gary

    2014-01-01

    Flow and water temperature are fundamental properties of stream ecosystems upon which many freshwater resource management decisions are based. U.S. Geological Survey (USGS) gages are the most important source of streamflow and water temperature data available nationwide, but the degree to which gages represent landscape attributes of the larger population of streams has not been thoroughly evaluated. We identified substantial biases for seven landscape attributes in one or more regions across the conterminous United States. Streams with small watersheds (<10 km2) and at high elevations were often underrepresented, and biases were greater for water temperature gages and in arid regions. Biases can fundamentally alter management decisions and at a minimum this potential for error must be acknowledged accurately and transparently. We highlight three strategies that seek to reduce bias or limit errors arising from bias and illustrate how one strategy, supplementing USGS data, can greatly reduce bias.

  5. Stressing The Person: Legal and Everyday Person Attributions Under Stress

    PubMed Central

    Kubota, Jennifer T.; Mojdehbakhsh, Rachel; Raio, Candace; Brosch, Tobias; Uleman, Jim S.; Phelps, Elizabeth A.

    2014-01-01

    When determining the cause of a person's behavior, perceivers often overweigh dispositional explanations and underweigh situational explanations, an error known as the Fundamental Attribution Error (FAE). The FAE occurs in part because dispositional explanations are relatively automatic, whereas considering the situation requires additional cognitive effort. Stress is known to impair the prefrontal cortex and executive functions important for the attribution process. We investigated whether stress increases dispositional attributions in commonplace and legal situations. Experiencing a physiological stressor increased participants' cortisol, dispositional attributions of common everyday behaviors, and negative evaluations. When determining whether a crime was due to the defendant's disposition or the mitigating situation, self-reported stress correlated with increased dispositional judgments of the defendant's behavior. These findings indicate that stress may make people more likely to commit the FAE and less favorable in their evaluations of others, both in daily life and when making socially consequential judicial decisions. PMID:25175000

  6. Bias in the Counseling Process: How to Recognize and Avoid It.

    ERIC Educational Resources Information Center

    Morrow, Kelly A.; Deidan, Cecilia T.

    1992-01-01

    Notes that counselors' vulnerability to inferential bias during counseling process may result in misdiagnosis and improper interventions. Discusses these inferential biases: availability and representativeness heuristics; fundamental attribution error; anchoring, prior knowledge, and labeling; confirmatory hypothesis testing; and reconstructive…

  7. Use of attribute association error probability estimates to evaluate quality of medical record geocodes.

    PubMed

    Klaus, Christian A; Carrasco, Luis E; Goldberg, Daniel W; Henry, Kevin A; Sherman, Recinda L

    2015-09-15

    The utility of patient attributes associated with the spatiotemporal analysis of medical records lies not just in their values but also in the strength of association between them. Estimating the extent to which a hierarchy of conditional probability exists between patient attribute associations such as patient identifying fields, patient and date of diagnosis, and patient and address at diagnosis is fundamental to estimating the strength of association between patient and geocode, and patient and enumeration area. We propose a hierarchy for the attribute associations within medical records that enable spatiotemporal relationships. We also present a set of metrics that store attribute association error probability (AAEP), to estimate error probability for all attribute associations upon which certainty in a patient geocode depends. A series of experiments was undertaken to understand how error estimation could be operationalized within health data and what levels of AAEP reveal themselves in real data using these methods. Specifically, the goals of this evaluation were to (1) assess if the concept of our error assessment techniques could be implemented by a population-based cancer registry; (2) apply the techniques to real data from a large health data agency and characterize the observed levels of AAEP; and (3) demonstrate how detected AAEP might impact spatiotemporal health research. We present an evaluation of AAEP metrics generated for cancer cases in a North Carolina county. We show examples of how we estimated AAEP for selected attribute associations and circumstances. We demonstrate the distribution of AAEP in our case sample across attribute associations, and demonstrate ways in which disease registry specific operations influence the prevalence of AAEP estimates for specific attribute associations. The effort to detect and store estimates of AAEP is worthwhile because of the increase in confidence fostered by the attribute association level approach to the assessment of uncertainty in patient geocodes, relative to existing geocoding related uncertainty metrics.

  8. Eliminating the Blame Game

    ERIC Educational Resources Information Center

    Swanson, Kristen; Allen, Gayle; Mancabelli, Rob

    2015-01-01

    Even mentioning data analysis puts many educators on edge; they fear that in data discussions, their performance will be judged. And, the authors note, it's a human trait to look for the source of a problem in the behavior of people involved rather than the system surrounding those people--what some call the Fundamental Attribution Error. When…

  9. Bad Apples or Sour Pickles? Fundamental Attribution Error and the Columbine Massacre. The Cutting Edge

    ERIC Educational Resources Information Center

    Clabaugh, Gary K.; Clabaugh, Alison A.

    2005-01-01

    A painstaking investigative report by the Washington Post describes pre-massacre Columbine as filled with social vinegar. The high school was dominated by a "cult of the athlete." In this distorted environment, a coterie of favored jocks, who wore white hats to set themselves apart, consistently bullied, hazed, and sexually harassed their…

  10. Digital hum filtering

    USGS Publications Warehouse

    Knapp, R.W.; Anderson, N.L.

    1994-01-01

    Data may be overprinted by a steady-state cyclical noise (hum). Steady-state indicates that the noise is invariant with time; its attributes (frequency, amplitude, and phase) do not change with time. Hum recorded on seismic data usually is powerline noise and associated higher harmonics; leakage from full-waveform rectified cathodic protection devices that contain the odd higher harmonics of powerline frequencies; or vibrational noise from mechanical devices. The fundamental frequency of powerline hum may be removed during data acquisition with the use of notch filters. Unfortunately, notch filters do not discriminate signal and noise, attenuating both. They also distort adjacent frequencies by phase shifting. Finally, they attenuate only the fundamental mode of the powerline noise; higher harmonics and frequencies other than that of powerlines are not removed. Digital notch filters, applied during processing, have many of the same problems as analog filters applied in the field. The method described here removes hum of a particular frequency. Hum attributes are measured by discrete Fourier analysis, and the hum is canceled from the data by subtraction. Errors are slight and the result of the presence of (random) noise in the window or asynchrony of the hum and data sampling. Error is minimized by increasing window size or by resampling to a finer interval. Errors affect the degree of hum attenuation, not the signal. The residual is steady-state hum of the same frequency. © 1994.
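
    A minimal sketch of the measurement-and-subtraction approach described above, in Python with NumPy: the hum frequency is assumed known (here 60 Hz), its amplitude and phase are estimated by a single-bin discrete Fourier analysis of the trace, and the reconstructed sinusoid is subtracted. The sample rate, window length, and signal are illustrative, not values from the paper.

    ```python
    import numpy as np

    def remove_hum(trace, fs, f_hum):
        """Estimate and subtract a steady-state sinusoid of frequency f_hum."""
        t = np.arange(len(trace)) / fs
        # Single-frequency DFT coefficient gives the hum amplitude and phase.
        coeff = np.sum(trace * np.exp(-2j * np.pi * f_hum * t)) * 2.0 / len(trace)
        amplitude, phase = np.abs(coeff), np.angle(coeff)
        return trace - amplitude * np.cos(2 * np.pi * f_hum * t + phase)

    # Synthetic example: a 7 Hz signal contaminated by 60 Hz hum and random noise.
    rng = np.random.default_rng(0)
    fs = 1000.0
    t = np.arange(0, 2.0, 1 / fs)
    signal = np.sin(2 * np.pi * 7.0 * t)
    noisy = signal + 0.5 * np.cos(2 * np.pi * 60.0 * t + 0.8) + 0.05 * rng.standard_normal(t.size)

    cleaned = remove_hum(noisy, fs, 60.0)
    print("rms error before:", np.sqrt(np.mean((noisy - signal) ** 2)))
    print("rms error after: ", np.sqrt(np.mean((cleaned - signal) ** 2)))
    ```

    As the abstract notes, the residual grows if the analysis window does not hold an integer number of hum cycles or if random noise leaks into the measurement; lengthening the window reduces both effects.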

  11. Fundamental(ist) attribution error: Protestants are dispositionally focused.

    PubMed

    Li, Yexin Jessica; Johnson, Kathryn A; Cohen, Adam B; Williams, Melissa J; Knowles, Eric D; Chen, Zhansheng

    2012-02-01

    Attribution theory has long enjoyed a prominent role in social psychological research, yet religious influences on attribution have not been well studied. We theorized and tested the hypothesis that Protestants would endorse internal attributions to a greater extent than would Catholics, because Protestantism focuses on the inward condition of the soul. In Study 1, Protestants made more internal, but not external, attributions than did Catholics. This effect survived controlling for Protestant work ethic, need for structure, and intrinsic and extrinsic religiosity. Study 2 showed that the Protestant-Catholic difference in internal attributions was significantly mediated by Protestants' greater belief in a soul. In Study 3, priming religion increased belief in a soul for Protestants but not for Catholics. Finally, Study 4 found that experimentally strengthening belief in a soul increased dispositional attributions among Protestants but did not change situational attributions. These studies expand the understanding of cultural differences in attributions by demonstrating a distinct effect of religion on dispositional attributions.

  12. Agent-specific learning signals for self–other distinction during mentalising

    PubMed Central

    Ereira, Sam; Dolan, Raymond J.; Kurth-Nelson, Zeb

    2018-01-01

    Humans have a remarkable ability to simulate the minds of others. How the brain distinguishes between mental states attributed to self and mental states attributed to someone else is unknown. Here, we investigated how fundamental neural learning signals are selectively attributed to different agents. Specifically, we asked whether learning signals are encoded in agent-specific neural patterns or whether a self–other distinction depends on encoding agent identity separately from this learning signal. To examine this, we tasked subjects to learn continuously 2 models of the same environment, such that one was selectively attributed to self and the other was selectively attributed to another agent. Combining computational modelling with magnetoencephalography (MEG) enabled us to track neural representations of prediction errors (PEs) and beliefs attributed to self, and of simulated PEs and beliefs attributed to another agent. We found that the representational pattern of a PE reliably predicts the identity of the agent to whom the signal is attributed, consistent with a neural self–other distinction implemented via agent-specific learning signals. Strikingly, subjects exhibiting a weaker neural self–other distinction also had a reduced behavioural capacity for self–other distinction and displayed more marked subclinical psychopathological traits. The neural self–other distinction was also modulated by social context, evidenced in a significantly reduced decoding of agent identity in a nonsocial control task. Thus, we show that self–other distinction is realised through an encoding of agent identity intrinsic to fundamental learning signals. The observation that the fidelity of this encoding predicts psychopathological traits is of interest as a potential neurocomputational psychiatric biomarker. PMID:29689053

  13. Agent-specific learning signals for self-other distinction during mentalising.

    PubMed

    Ereira, Sam; Dolan, Raymond J; Kurth-Nelson, Zeb

    2018-04-01

    Humans have a remarkable ability to simulate the minds of others. How the brain distinguishes between mental states attributed to self and mental states attributed to someone else is unknown. Here, we investigated how fundamental neural learning signals are selectively attributed to different agents. Specifically, we asked whether learning signals are encoded in agent-specific neural patterns or whether a self-other distinction depends on encoding agent identity separately from this learning signal. To examine this, we tasked subjects to learn continuously 2 models of the same environment, such that one was selectively attributed to self and the other was selectively attributed to another agent. Combining computational modelling with magnetoencephalography (MEG) enabled us to track neural representations of prediction errors (PEs) and beliefs attributed to self, and of simulated PEs and beliefs attributed to another agent. We found that the representational pattern of a PE reliably predicts the identity of the agent to whom the signal is attributed, consistent with a neural self-other distinction implemented via agent-specific learning signals. Strikingly, subjects exhibiting a weaker neural self-other distinction also had a reduced behavioural capacity for self-other distinction and displayed more marked subclinical psychopathological traits. The neural self-other distinction was also modulated by social context, evidenced in a significantly reduced decoding of agent identity in a nonsocial control task. Thus, we show that self-other distinction is realised through an encoding of agent identity intrinsic to fundamental learning signals. The observation that the fidelity of this encoding predicts psychopathological traits is of interest as a potential neurocomputational psychiatric biomarker.

  14. Persistent dispositionalism in interactionist clothing: fundamental attribution error in explaining prison abuse.

    PubMed

    Haney, Craig; Zimbardo, Philip G

    2009-06-01

    The Stanford Prison Experiment demonstrated some important lessons about the power of social situations, settings, and structures to shape and transform behavior. At the time the study was done, the authors scrupulously addressed the issue of whether and how the dispositions or personality traits of the participants might have affected the results. Here the authors renew and reaffirm their original interpretation of the results and apply this perspective to some recent socially and politically significant events.

  15. An Agent-Based Intervention to Assist Drivers Under Stereotype Threat: Effects of In-Vehicle Agents' Attributional Error Feedback.

    PubMed

    Joo, Yeon Kyoung; Lee-Won, Roselyn J

    2016-10-01

    For members of a group negatively stereotyped in a domain, making mistakes can aggravate the influence of stereotype threat because negative stereotypes often blame target individuals and attribute the outcome to their lack of ability. Virtual agents offering real-time error feedback may influence performance under stereotype threat by shaping the performers' attributional perception of errors they commit. We explored this possibility with female drivers, considering the prevalence of the "women-are-bad-drivers" stereotype. Specifically, we investigated how in-vehicle voice agents offering error feedback based on responsibility attribution (internal vs. external) and outcome attribution (ability vs. effort) influence female drivers' performance under stereotype threat. In addressing this question, we conducted an experiment in a virtual driving simulation environment that provided moment-to-moment error feedback messages. Participants performed a challenging driving task and made mistakes preprogrammed to occur. Results showed that the agent's error feedback with outcome attribution moderated the stereotype threat effect on driving performance. Participants under stereotype threat had a smaller number of collisions when the errors were attributed to effort than to ability. In addition, outcome attribution feedback moderated the effect of responsibility attribution on driving performance. Implications of these findings are discussed.

  16. Applying lessons from social psychology to transform the culture of error disclosure.

    PubMed

    Han, Jason; LaMarra, Denise; Vapiwala, Neha

    2017-10-01

    The ability to carry out prompt and effective error disclosure has been described in the literature as an essential skill among physicians that can lead to improved patient satisfaction, staff well-being and hospital outcomes. However, few studies have addressed the social psychology principles that may influence physician behaviour. The authors provide an overview of recent administrative measures designed to encourage physicians to disclose error, but note that deliberate practice, buttressed with lessons from social psychology, is needed to implement further productive behavioural changes. Two main cognitive biases that may hinder error disclosure are identified, namely: fundamental attribution error, and forecasting error. Strategies to overcome these maladaptive cognitive patterns are discussed. The authors note that interactions with standardised patients (SPs) can be used to simulate hospital encounters and help teach important behavioural considerations. Virtual reality is introduced as an immersive, realistic and easily scalable technology that can supplement traditional curricula. Lastly, the authors highlight the importance of establishing a professional standard of competence, potentially by incorporating difficult patient encounters, including disclosure of error, into medical licensing examinations that assess clinical skills. Existing curricula that cover physician error disclosure may benefit from reviewing the social psychology literature. These lessons, incorporated into SP programmes and emerging technological platforms, may improve training and evaluative methods for all medical trainees. © 2017 John Wiley & Sons Ltd and The Association for the Study of Medical Education.

  17. An accurate ab initio quartic force field for ammonia

    NASA Technical Reports Server (NTRS)

    Martin, J. M. L.; Lee, Timothy J.; Taylor, Peter R.

    1992-01-01

    The quartic force field of ammonia is computed using basis sets of spdf/spd and spdfg/spdf quality and an augmented coupled cluster method. After correcting for Fermi resonance, the computed fundamentals and ν4 overtones agree on average to better than 3 cm⁻¹ with the experimental ones except for ν2. The discrepancy for ν2 is principally due to higher-order anharmonicity effects. The computed ω1, ω3, and ω4 confirm the recent experimental determination by Lehmann and Coy (1988) but are associated with smaller error bars. The discrepancy between the computed and experimental ω2 is far outside the expected error range, which is also attributed to higher-order anharmonicity effects not accounted for in the experimental determination. Spectroscopic constants are predicted for a number of symmetric and asymmetric top isotopomers of NH3.

  18. A methodology for translating positional error into measures of attribute error, and combining the two error sources

    Treesearch

    Yohay Carmel; Curtis Flather; Denis Dean

    2006-01-01

    This paper summarizes our efforts to investigate the nature, behavior, and implications of positional error and attribute error in spatiotemporal datasets. Estimating the combined influence of these errors on map analysis has been hindered by the fact that these two error types are traditionally expressed in different units (distance units, and categorical units,...

  19. Producing good font attribute determination using error-prone information

    NASA Astrophysics Data System (ADS)

    Cooperman, Robert

    1997-04-01

    A method is presented to provide estimates of font attributes in an OCR system using detectors of individual attributes that are error-prone. For an OCR system to preserve the appearance of a scanned document, it needs accurate detection of font attributes. However, OCR environments have noise and other sources of errors, tending to make font attribute detection unreliable. Certain assumptions about font use can greatly enhance accuracy. Attributes such as boldness and italics are more likely to change between neighboring words, while attributes such as serifness are less likely to change within the same paragraph. Furthermore, the document as a whole tends to have a limited number of sets of font attributes. These assumptions allow better use of context than the raw data alone, or than simpler methods that would oversmooth the data.
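
    A simplified sketch of the contextual-smoothing idea described above (not the paper's actual algorithm): a stable attribute such as serifness is decided by a paragraph-level majority vote over noisy per-word detections, while volatile attributes such as boldness and italics are kept per word. The data structure and example words are invented.

    ```python
    from collections import Counter

    def smooth_serifness(paragraph_words):
        """Replace noisy per-word serif guesses with the paragraph-level consensus."""
        votes = Counter(w["serif"] for w in paragraph_words)
        consensus = votes.most_common(1)[0][0]
        return [dict(w, serif=consensus) for w in paragraph_words]

    words = [
        {"text": "Quick", "bold": True,  "italic": False, "serif": True},
        {"text": "brown", "bold": False, "italic": False, "serif": True},
        {"text": "fox",   "bold": False, "italic": False, "serif": False},  # likely detector error
        {"text": "jumps", "bold": False, "italic": True,  "serif": True},
    ]
    print(smooth_serifness(words))
    ```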

  20. Addition and subtraction by students with Down syndrome

    NASA Astrophysics Data System (ADS)

    Noda Herrera, Aurelia; Bruno, Alicia; González, Carina; Moreno, Lorenzo; Sanabria, Hilda

    2011-01-01

    We present a research report on addition and subtraction conducted with Down syndrome students between the ages of 12 and 31. We interviewed a group of students with Down syndrome who executed algorithms and solved problems using specific materials and paper and pencil. The results show that students with Down syndrome progress through the same procedural levels as those without disabilities though they have difficulties in reaching the most abstract level (numerical facts). The use of fingers or concrete representations (balls) appears as a fundamental process among these students. As for errors, these vary widely depending on the students, and can be attributed mostly to an incomplete knowledge of the decimal number system.

  1. Reader error, object recognition, and visual search

    NASA Astrophysics Data System (ADS)

    Kundel, Harold L.

    2004-05-01

    Small abnormalities such as hairline fractures, lung nodules and breast tumors are missed by competent radiologists with sufficient frequency to make them a matter of concern to the medical community; not only because they lead to litigation but also because they delay patient care. It is very easy to attribute misses to incompetence or inattention. To do so may be placing an unjustified stigma on the radiologists involved and may allow other radiologists to continue a false optimism that it can never happen to them. This review presents some of the fundamentals of visual system function that are relevant to understanding the search for and the recognition of small targets embedded in complicated but meaningful backgrounds like chests and mammograms. It presents a model for visual search that postulates a pre-attentive global analysis of the retinal image followed by foveal checking fixations and eventually discovery scanning. The model will be used to differentiate errors of search, recognition and decision making. The implications for computer aided diagnosis and for functional workstation design are discussed.

  2. Relationship between Attributional Errors and At-Risk Behaviors among Juvenile Delinquents.

    ERIC Educational Resources Information Center

    Daley, Christine E.; Onwuegbuzie, Anthony J.

    The purpose of this study was to determine whether at-risk behaviors (e.g., substance abuse, gun ownership, sexual activity, and gang membership) are associated with violence attribution errors, as measured by Daley and Onwuegbuzie's (1995) Violence Attribution Survey, among 82 incarcerated male juvenile delinquents. Analysis revealed that the…

  3. The Social Explanatory Styles Questionnaire: Assessing Moderators of Basic Social-Cognitive Phenomena Including Spontaneous Trait Inference, the Fundamental Attribution Error, and Moral Blame

    PubMed Central

    Gill, Michael J.; Andreychik, Michael R.

    2014-01-01

    Why is he poor? Why is she failing academically? Why is he so generous? Why is she so conscientious? Answers to such everyday questions—social explanations—have powerful effects on relationships at the interpersonal and societal levels. How do people select an explanation in particular cases? We suggest that, often, explanations are selected based on the individual's pre-existing general theories of social causality. More specifically, we suggest that over time individuals develop general beliefs regarding the causes of social events. We refer to these beliefs as social explanatory styles. Our goal in the present article is to offer and validate a measure of individual differences in social explanatory styles. Accordingly, we offer the Social Explanatory Styles Questionnaire (SESQ), which measures three independent dimensions of social explanatory style: Dispositionism, historicism, and controllability. Studies 1–3 examine basic psychometric properties of the SESQ and provide positive evidence regarding internal consistency, factor structure, and both convergent and divergent validity. Studies 4–6 examine predictive validity for each subscale: Does each explanatory dimension moderate an important phenomenon of social cognition? Results suggest that they do. In Study 4, we show that SESQ dispositionism moderates the tendency to make spontaneous trait inferences. In Study 5, we show that SESQ historicism moderates the tendency to commit the Fundamental Attribution Error. Finally, in Study 6 we show that SESQ controllability predicts polarization of moral blame judgments: Heightened blaming toward controllable stigmas (assimilation), and attenuated blaming toward uncontrollable stigmas (contrast). Decades of research suggest that explanatory style regarding the self is a powerful predictor of self-functioning. We think it is likely that social explanatory styles—perhaps comprising interactive combinations of the basic dimensions tapped by the SESQ—will be similarly potent predictors of social functioning. We hope the SESQ will be a useful tool for exploring that possibility. PMID:25007152

  4. CME Velocity and Acceleration Error Estimates Using the Bootstrap Method

    NASA Technical Reports Server (NTRS)

    Michalek, Grzegorz; Gopalswamy, Nat; Yashiro, Seiji

    2017-01-01

    The bootstrap method is used to determine errors of basic attributes of coronal mass ejections (CMEs) visually identified in images obtained by the Solar and Heliospheric Observatory (SOHO) mission's Large Angle and Spectrometric Coronagraph (LASCO) instruments. The basic parameters of CMEs are stored, among others, in a database known as the SOHO/LASCO CME catalog and are widely employed for many research studies. The basic attributes of CMEs (e.g. velocity and acceleration) are obtained from manually generated height-time plots. The subjective nature of manual measurements introduces random errors that are difficult to quantify. In many studies the impact of such measurement errors is overlooked. In this study we present a new possibility to estimate measurement errors in the basic attributes of CMEs. This approach is a computer-intensive method because it requires repeating the original data analysis procedure several times using replicate datasets. This is also commonly called the bootstrap method in the literature. We show that the bootstrap approach can be used to estimate the errors of the basic attributes of CMEs having moderately large numbers of height-time measurements. The velocity errors are, in the vast majority of cases, small and depend mostly on the number of height-time points measured for a particular event. In the case of acceleration, the errors are significant, and for more than half of all CMEs, they are larger than the acceleration itself.
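
    A minimal sketch of the bootstrap idea described above: height-time points are resampled with replacement, velocity (linear fit) and acceleration (quadratic fit) are recomputed for each replicate, and the spread of the refitted values serves as the error estimate. The height-time data below are synthetic; the catalog's actual fitting conventions may differ.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic height-time points: h = h0 + v*t + 0.5*a*t^2 plus measurement noise (km).
    t = np.linspace(0.0, 3600.0, 12)
    h_obs = 2.0e5 + 450.0 * t + 0.5 * 0.01 * t ** 2 + rng.normal(0.0, 2.0e4, t.size)

    def fit(tt, hh):
        v = np.polyfit(tt, hh, 1)[0]          # km/s from the linear fit
        a = 2.0 * np.polyfit(tt, hh, 2)[0]    # km/s^2 from the quadratic fit
        return v, a

    def bootstrap_errors(tt, hh, n_boot=2000):
        refits = [fit(tt[idx], hh[idx])
                  for idx in (rng.integers(0, tt.size, tt.size) for _ in range(n_boot))]
        return np.std(refits, axis=0)         # 1-sigma spread of the replicate fits

    v, a = fit(t, h_obs)
    v_err, a_err = bootstrap_errors(t, h_obs)
    print(f"v = {v:.0f} +/- {v_err:.0f} km/s,  a = {a * 1e3:.1f} +/- {a_err * 1e3:.1f} m/s^2")
    ```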

  5. Sensitivities of simulated satellite views of clouds to subgrid-scale overlap and condensate heterogeneity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hillman, Benjamin R.; Marchand, Roger T.; Ackerman, Thomas P.

    Satellite simulators are often used to account for limitations in satellite retrievals of cloud properties in comparisons between models and satellite observations. The purpose of the simulator framework is to enable more robust evaluation of model cloud properties, so that differences between models and observations can more confidently be attributed to model errors. However, these simulators are subject to uncertainties themselves. A fundamental uncertainty exists in connecting the spatial scales at which cloud properties are retrieved with those at which clouds are simulated in global models. In this study, we create a series of sensitivity tests using 4 km global model output from the Multiscale Modeling Framework to evaluate the sensitivity of simulated satellite retrievals when applied to climate models whose grid spacing is many tens to hundreds of kilometers. In particular, we examine the impact of cloud and precipitation overlap and of condensate spatial variability. We find the simulated retrievals are sensitive to these assumptions. Specifically, using maximum-random overlap with homogeneous cloud and precipitation condensate, which is often used in global climate models, leads to large errors in MISR and ISCCP-simulated cloud cover and in CloudSat-simulated radar reflectivity. To correct for these errors, an improved treatment of unresolved clouds and precipitation is implemented for use with the simulator framework and is shown to substantially reduce the identified errors.
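
    For readers unfamiliar with the overlap assumption named above, here is a minimal sketch of the standard maximum-random diagnostic for total cloud cover: vertically adjacent cloudy layers overlap maximally, while layers separated by clear air overlap randomly. The layer cloud fractions are illustrative.

    ```python
    def total_cloud_cover_max_random(fractions):
        """Total cloud cover from layer cloud fractions (ordered top to bottom)."""
        clear, prev = 1.0, 0.0
        for c in fractions:
            # Adjacent layers overlap maximally; clear gaps reset to random overlap.
            clear *= (1.0 - max(c, prev)) / (1.0 - prev) if prev < 1.0 else 0.0
            prev = c
        return 1.0 - clear

    print(total_cloud_cover_max_random([0.0, 0.3, 0.4, 0.0, 0.2]))  # -> 0.52
    ```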

  6. Misinterpretations of the Second Fundamental Theorem of Welfare Economics: Barriers to Better Economic Education.

    ERIC Educational Resources Information Center

    Bryant, William D. A.

    1994-01-01

    Asserts that errors frequently are made in teaching about the second fundamental theorem of welfare economics. Describes how this issue usually is taught in undergraduate economics courses. Discusses how this interpretation contains errors and may hinder students' analysis of public policy regarding welfare systems. (CFR)

  7. Investigating industrial investigation: examining the impact of a priori knowledge and tunnel vision education.

    PubMed

    Maclean, Carla L; Brimacombe, C A Elizabeth; Lindsay, D Stephen

    2013-12-01

    The current study addressed tunnel vision in industrial incident investigation by experimentally testing how a priori information and a human bias (generated via the fundamental attribution error or correspondence bias) affected participants' investigative behavior as well as the effectiveness of a debiasing intervention. Undergraduates and professional investigators engaged in a simulated industrial investigation exercise. We found that participants' judgments were biased by knowledge about the safety history of either a worker or piece of equipment and that a human bias was evident in participants' decision making. However, bias was successfully reduced with "tunnel vision education." Professional investigators demonstrated a greater sophistication in their investigative decision making compared to undergraduates. The similarities and differences between these two populations are discussed. (c) 2013 APA, all rights reserved

  8. Reduction of Errors during Practice Facilitates Fundamental Movement Skill Learning in Children with Intellectual Disabilities

    ERIC Educational Resources Information Center

    Capio, C. M.; Poolton, J. M.; Sit, C. H. P.; Eguia, K. F.; Masters, R. S. W.

    2013-01-01

    Background: Children with intellectual disabilities (ID) have been found to have inferior motor proficiencies in fundamental movement skills (FMS). This study examined the effects of training the FMS of overhand throwing by manipulating the amount of practice errors. Methods: Participants included 39 children with ID aged 4-11 years who were…

  9. Peripheral Quantitative Computed Tomography: Measurement Sensitivity in Persons With and Without Spinal Cord Injury

    PubMed Central

    Shields, Richard K.; Dudley-Javoroski, Shauna; Boaldin, Kathryn M.; Corey, Trent A.; Fog, Daniel B.; Ruen, Jacquelyn M.

    2012-01-01

    Objectives To determine (1) the error attributable to external tibia-length measurements by using peripheral quantitative computed tomography (pQCT) and (2) the effect these errors have on scan location and tibia trabecular bone mineral density (BMD) after spinal cord injury (SCI). Design Blinded comparison and criterion standard in matched cohorts. Setting Primary care university hospital. Participants Eight able-bodied subjects underwent tibia length measurement. A separate cohort of 7 men with SCI and 7 able-bodied age-matched male controls underwent pQCT analysis. Interventions Not applicable. Main Outcome Measures The projected worst-case tibia-length–measurement error translated into a pQCT slice placement error of ±3 mm. We collected pQCT slices at the distal 4% tibia site, 3 mm proximal and 3 mm distal to that site, and then quantified BMD error attributable to slice placement. Results Absolute BMD error was greater for able-bodied than for SCI subjects (5.87 mg/cm³ vs 4.5 mg/cm³). However, the percentage error in BMD was larger for SCI than able-bodied subjects (4.56% vs 2.23%). Conclusions During cross-sectional studies of various populations, BMD differences up to 5% may be attributable to variation in limb-length–measurement error. PMID:17023249
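
    A small worked check of the percentage figures above, assuming percentage error is simply the absolute BMD error divided by each group's mean trabecular BMD; the implied group means are back-calculated here for illustration only and are not reported values.

    ```python
    # percentage error = absolute BMD error / group mean BMD (values from the abstract)
    for group, abs_err, pct_err in [("able-bodied", 5.87, 2.23), ("SCI", 4.50, 4.56)]:
        implied_mean_bmd = abs_err / (pct_err / 100.0)      # mg/cm^3, back-calculated
        print(f"{group:12s} implied mean trabecular BMD ~ {implied_mean_bmd:.0f} mg/cm^3")
    # The same slice-placement error is a larger fraction of BMD after SCI because
    # trabecular BMD is much lower in that group.
    ```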

  10. Religious Fundamentalism Modulates Neural Responses to Error-Related Words: The Role of Motivation Toward Closure.

    PubMed

    Kossowska, Małgorzata; Szwed, Paulina; Wyczesany, Miroslaw; Czarnek, Gabriela; Wronka, Eligiusz

    2018-01-01

    Examining the relationship between brain activity and religious fundamentalism, this study explores whether fundamentalist religious beliefs increase responses to error-related words among participants intolerant to uncertainty (i.e., high in the need for closure) in comparison to those who have a high degree of toleration for uncertainty (i.e., those who are low in the need for closure). We examine a negative-going event-related brain potential occurring 400 ms after stimulus onset (the N400) due to its well-understood association with reactions to emotional conflict. Religious fundamentalism and tolerance of uncertainty were measured on self-report measures, and electroencephalographic neural reactivity was recorded as participants were performing an emotional Stroop task. In this task, participants read neutral words and words related to uncertainty, errors, and pondering, while being asked to name the color of the ink with which the word is written. The results confirm that among people who are intolerant of uncertainty (i.e., those high in the need for closure), religious fundamentalism is associated with an increased N400 on error-related words compared with people who tolerate uncertainty well (i.e., those low in the need for closure).

  11. Toward isolating the role of dopamine in the acquisition of incentive salience attribution.

    PubMed

    Chow, Jonathan J; Nickell, Justin R; Darna, Mahesh; Beckmann, Joshua S

    2016-10-01

    Stimulus-reward learning has been heavily linked to the reward-prediction error learning hypothesis and dopaminergic function. However, some evidence suggests dopaminergic function may not strictly underlie reward-prediction error learning, but may be specific to incentive salience attribution. Utilizing a Pavlovian conditioned approach procedure consisting of two stimuli that were equally reward-predictive (both undergoing reward-prediction error learning) but functionally distinct in regard to incentive salience (levers that elicited sign-tracking and tones that elicited goal-tracking), we tested the differential role of D1 and D2 dopamine receptors and nucleus accumbens dopamine in the acquisition of sign- and goal-tracking behavior and their associated conditioned reinforcing value within individuals. Overall, the results revealed that both D1 and D2 inhibition disrupted performance of sign- and goal-tracking. However, D1 inhibition specifically prevented the acquisition of sign-tracking to a lever, instead promoting goal-tracking and decreasing its conditioned reinforcing value, while neither D1 nor D2 signaling was required for goal-tracking in response to a tone. Likewise, nucleus accumbens dopaminergic lesions disrupted acquisition of sign-tracking to a lever, while leaving goal-tracking in response to a tone unaffected. Collectively, these results are the first evidence of an intraindividual dissociation of dopaminergic function in incentive salience attribution from reward-prediction error learning, indicating that incentive salience, reward-prediction error, and their associated dopaminergic signaling exist within individuals and are stimulus-specific. Thus, individual differences in incentive salience attribution may be reflective of a differential balance in dopaminergic function that may bias toward the attribution of incentive salience, relative to reward-prediction error learning only. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Parents' versus physicians' values for clinical outcomes in young febrile children.

    PubMed

    Kramer, M S; Etezadi-Amoli, J; Ciampi, A; Tange, S M; Drummond, K N; Mills, E L; Bernstein, M L; Leduc, D G

    1994-05-01

    To compare how parents and physicians value potential clinical outcomes in young children who have a fever but no focus of bacterial infection. Cross-sectional study of 100 parents of well children aged 3 to 24 months, 61 parents of febrile children aged 3 to 24 months, and 56 attending staff physicians working in a children's hospital emergency department. A pretested visual analog scale was used to assess values on a 0-to-1 scale (where 0 is the value of the worst possible outcome, and 1 is the value for the best) for 22 scenarios, grouped in three categories according to severity. Based on the three or four common attributes comprising the scenarios in a given group, each respondent's value function was estimated statistically based on multiattribute utility theory. For outcomes in group 1 (rapidly resolving viral infection with one or more diagnostic tests), no significant group differences were observed. For outcomes in groups 2 (acute infections without long-term sequelae) and 3 (long-term sequelae of urinary tract infection or bacterial meningitis), parents of well children and parents of febrile children had values that were similar to each other but significantly lower than physicians' values for pneumonia with delayed diagnosis, false-positive diagnosis of urinary tract infection, viral meningitis, and unilateral hearing loss. For bacterial meningitis with or without delay, however, the reverse pattern was observed; physicians' values were lower than parents'. In arriving at their judgment for group 2 and 3 scenarios, parents gave significantly greater weight to attributes involving the pain and discomfort of diagnostic tests and to diagnostic error, whereas physicians gave significantly greater weight to attributes involving both short- and long-term morbidity and long-term worry and inconvenience. Parents were significantly more likely to be risk-seeking in the way they weighted the attributes comprising group 2 and 3 scenarios than physicians, ie, they were more willing to risk rare but severe morbidity to avoid the short-term adverse effects of testing. Parents and physicians show fundamental value differences concerning diagnostic testing, diagnostic error, and short- and long-term morbidity; these differences have important implications for diagnostic decision making in the young febrile child.

  13. Determining the importance of fundamental hearing aid attributes.

    PubMed

    Meister, Hartmut; Lausberg, Isabel; Kiessling, Juergen; Walger, Martin; von Wedel, Hasso

    2002-07-01

    To determine the importance of fundamental hearing aid attributes and to elicit measures of satisfaction and dissatisfaction. A prospective study based on a survey using a decompositional approach of preference measurement (conjoint analysis). Ear, nose, and throat university hospitals in Cologne and Giessen; various branches of hearing aid dispensers. A random sample of 175 experienced hearing aid users aged 20 to 91 years (mean age, 61 yr) recruited at two different sites. Relative importance of different hearing aid attributes, satisfaction and dissatisfaction with hearing aid attributes. Of the six fundamental hearing aid attributes assessed by the hearing aid users, the two features concerning speech perception attained the highest relative importance (25% speech in quiet, 27% speech in noise). The remaining four attributes (sound quality, handling, feedback, localization) had significantly lower values in a narrow range of 10 to 12%. Comparison of different subgroups of hearing aid wearers based on sociodemographic and user-specific data revealed a large interindividual scatter of the preferences for the attributes. A similar examination with 25 clinicians revealed overestimation of the importance of the attributes commonly associated with problems. Moreover, examination of satisfaction showed that speech in noise was the most frequent source of dissatisfaction (30% of all statements), whereas the subjects were satisfied with speech in quiet. The results emphasize the high importance of attributes related to speech perception. Speech discrimination in noise was the most important but also the most frequent source of negative statements. This attribute will be the outstanding parameter of future developments. Appropriate handling becomes an important factor for elderly subjects. However, because of the large interindividual scatter of data, the preferences of different hearing aid users were hardly predictable, giving evidence of multifactorial influences.
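
    A minimal sketch of how relative importance is conventionally derived in conjoint analysis (the study's exact estimation procedure is not reproduced here): each attribute's part-worth utility range is divided by the sum of ranges across all attributes. The part-worth values below are invented for illustration.

    ```python
    part_worths = {
        "speech in quiet": {"poor": -1.0, "good": 1.0},
        "speech in noise": {"poor": -1.1, "good": 1.1},
        "sound quality":   {"poor": -0.4, "good": 0.4},
        "handling":        {"hard": -0.45, "easy": 0.45},
    }

    ranges = {attr: max(levels.values()) - min(levels.values())
              for attr, levels in part_worths.items()}
    total = sum(ranges.values())
    for attr, rng in ranges.items():
        print(f"{attr:16s} relative importance {100.0 * rng / total:5.1f}%")
    ```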

  14. Concepts of Life in the Contexts of Mars

    NASA Technical Reports Server (NTRS)

    Des Marais, D. J.

    2014-01-01

    The search for habitable environments and life requires a working concept of life's fundamental attributes. This concept helps to identify the "services" that an environment must provide to sustain life. We must consider the possibility that extraterrestrial life might differ fundamentally from our own, but it is still worthwhile to begin by hypothesizing attributes of life that might be universal versus ones that reflect local solutions to survival on Earth.

  15. Verifying and Postprocessing the Ensemble Spread-Error Relationship

    NASA Astrophysics Data System (ADS)

    Hopson, Tom; Knievel, Jason; Liu, Yubao; Roux, Gregory; Wu, Wanli

    2013-04-01

    With the increased utilization of ensemble forecasts in weather and hydrologic applications, there is a need to verify their benefit over less expensive deterministic forecasts. One such potential benefit of ensemble systems is their capacity to forecast their own forecast error through the ensemble spread-error relationship. The paper begins by revisiting the limitations of the Pearson correlation alone in assessing this relationship. Next, we introduce two new metrics to consider in assessing the utility of an ensemble's varying dispersion. We argue there are two aspects of an ensemble's dispersion that should be assessed. First, and perhaps more fundamentally: is there enough variability in the ensemble's dispersion to justify the maintenance of an expensive ensemble prediction system (EPS), irrespective of whether the EPS is well-calibrated or not? To diagnose this, the factor that controls the theoretical upper limit of the spread-error correlation can be useful. Secondly, does the variable dispersion of an ensemble relate to variable expectation of forecast error? Representing the spread-error correlation in relation to its theoretical limit can provide a simple diagnostic of this attribute. A context for these concepts is provided by assessing two operational ensembles: 30-member Western US temperature forecasts for the U.S. Army Test and Evaluation Command and 51-member Brahmaputra River flow forecasts of the Climate Forecast and Applications Project for Bangladesh. Both of these systems utilize a postprocessing technique based on quantile regression (QR) under a step-wise forward selection framework leading to ensemble forecasts with both good reliability and sharpness. In addition, the methodology utilizes the ensemble's ability to self-diagnose forecast instability to produce calibrated forecasts with informative skill-spread relationships. We will describe both ensemble systems briefly, review the steps used to calibrate the ensemble forecast, and present verification statistics using error-spread metrics, along with figures from operational ensemble forecasts before and after calibration.
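
    A minimal sketch of the basic spread-error diagnostic discussed above: for each forecast case, the ensemble standard deviation (spread) is paired with the absolute error of the ensemble mean and a Pearson correlation is computed. The paper's additional metrics, including the theoretical upper limit of this correlation, are not reproduced here; the data are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_cases, n_members = 500, 30

    # Simulate cases whose true forecast-error scale varies from case to case.
    true_sigma = rng.uniform(0.5, 3.0, n_cases)
    obs = rng.normal(0.0, true_sigma)                                 # verifying observations
    ens = rng.normal(0.0, true_sigma[:, None], (n_cases, n_members))  # ensemble members

    spread = ens.std(axis=1, ddof=1)
    abs_error = np.abs(ens.mean(axis=1) - obs)
    print("spread-error Pearson correlation:",
          round(float(np.corrcoef(spread, abs_error)[0, 1]), 2))
    ```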

  16. Relationships of Measurement Error and Prediction Error in Observed-Score Regression

    ERIC Educational Resources Information Center

    Moses, Tim

    2012-01-01

    The focus of this paper is assessing the impact of measurement errors on the prediction error of an observed-score regression. Measures are presented and described for decomposing the linear regression's prediction error variance into parts attributable to the true score variance and the error variances of the dependent variable and the predictor…

  17. The Role of Perceptual Similarity, Context, and Situation When Selecting Attributes: Considerations Made by 5-6-Year-Olds in Data Modeling Environments

    ERIC Educational Resources Information Center

    Leavy, Aisling; Hourigan, Mairead

    2018-01-01

    Classroom data modeling involves posing questions, identifying attributes of phenomena, measuring and structuring these attributes, and then composing, revising, and communicating the outcomes. Selecting attributes is a fundamental component of data modeling, and the considerations made when selecting attributes is the focus of this paper. A…

  18. Is the Speech Transmission Index (STI) a robust measure of sound system speech intelligibility performance?

    NASA Astrophysics Data System (ADS)

    Mapp, Peter

    2002-11-01

    Although RaSTI is a good indicator of the speech intelligibility capability of auditoria and similar spaces, during the past 2-3 years it has been shown that RaSTI is not a robust predictor of sound system intelligibility performance. Instead, it is now recommended, within both national and international codes and standards, that full STI measurement and analysis be employed. However, new research is reported that indicates STI is not as flawless or robust as many believe. The paper highlights a number of potential error mechanisms. It is shown that the measurement technique and signal excitation stimulus can have a significant effect on the overall result and accuracy, particularly where DSP-based equipment is employed. It is also shown that in its current state of development, STI is not capable of appropriately accounting for a number of fundamental speech and system attributes, including typical sound system frequency response variations and anomalies. This is particularly shown to be the case when a system is operating under reverberant conditions. Comparisons between actual system measurements and corresponding word score data are reported, with errors of up to 50%. The implications for VA and PA system performance verification will be discussed.

  19. Measurement Error and Equating Error in Power Analysis

    ERIC Educational Resources Information Center

    Phillips, Gary W.; Jiang, Tao

    2016-01-01

    Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…
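
    A small sketch of one way measurement error feeds into power, assuming the classical attenuation relation d_obs = d_true * sqrt(reliability) for a two-group comparison; the numbers are illustrative, and the paper's treatment also covers equating error.

    ```python
    from statsmodels.stats.power import TTestIndPower

    d_true, n_per_group = 0.5, 64
    power_calc = TTestIndPower()

    for reliability in (1.0, 0.9, 0.8, 0.7):
        d_obs = d_true * reliability ** 0.5            # attenuated standardized effect
        power = power_calc.power(effect_size=d_obs, nobs1=n_per_group, alpha=0.05)
        print(f"reliability {reliability:.1f} -> observed d {d_obs:.2f}, power {power:.2f}")
    ```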

  20. Contribution of stimulus attributes to errors in duration and distance judgments--a developmental study.

    PubMed

    Matsuda, F; Lan, W C; Tanimura, R

    1999-02-01

    In Matsuda's 1996 study, 4- to 11-yr.-old children (N = 133) watched two cars running on two parallel tracks on a CRT display and judged whether their durations and distances were equal and, if not, which was larger. In the present paper, the relative contributions of the four critical stimulus attributes (whether temporal starting points, temporal stopping points, spatial starting points, and spatial stopping points were the same or different between two cars) to the production of errors were quantitatively estimated based on the data for rates of errors obtained by Matsuda. The present analyses made it possible not only to understand numerically the findings about qualitative characteristics of the critical attributes described by Matsuda, but also to add more detailed findings about them.

  1. Proximal antecedents and correlates of adopted error approach: a self-regulatory perspective.

    PubMed

    Van Dyck, Cathy; Van Hooft, Edwin; De Gilder, Dick; Liesveld, Lillian

    2010-01-01

    The current study aims to further investigate earlier established advantages of an error mastery approach over an error aversion approach. The two main purposes of the study relate to (1) self-regulatory traits (i.e., goal orientation and action-state orientation) that may predict which error approach (mastery or aversion) is adopted, and (2) proximal, psychological processes (i.e., self-focused attention and failure attribution) that relate to adopted error approach. In the current study participants' goal orientation and action-state orientation were assessed, after which they worked on an error-prone task. Results show that learning goal orientation related to error mastery, while state orientation related to error aversion. Under a mastery approach, error occurrence did not result in cognitive resources "wasted" on self-consciousness. Rather, attention went to internal-unstable, thus controllable, improvement oriented causes of error. Participants that had adopted an aversion approach, in contrast, experienced heightened self-consciousness and attributed failure to internal-stable or external causes. These results imply that when working on an error-prone task, people should be stimulated to take on a mastery rather than an aversion approach towards errors.

  2. Understanding Why Students Do What They Do: Using Attribution Theory to Help Students Succeed Academically

    ERIC Educational Resources Information Center

    Gaier, Scott E.

    2015-01-01

    According to attribution theory, people seek to make sense of their environment through ascribing causality to their behavior and the behavior of others and these attributions impact future behavior (Jones et al., 1972). In essence, people seek to answer and understand why. This fundamental concept associated with attribution theory is important…

  3. Dopamine prediction error responses integrate subjective value from different reward dimensions

    PubMed Central

    Lak, Armin; Stauffer, William R.; Schultz, Wolfram

    2014-01-01

    Prediction error signals enable us to learn through experience. These experiences include economic choices between different rewards that vary along multiple dimensions. Therefore, an ideal way to reinforce economic choice is to encode a prediction error that reflects the subjective value integrated across these reward dimensions. Previous studies demonstrated that dopamine prediction error responses reflect the value of singular reward attributes that include magnitude, probability, and delay. Obviously, preferences between rewards that vary along one dimension are completely determined by the manipulated variable. However, it is unknown whether dopamine prediction error responses reflect the subjective value integrated from different reward dimensions. Here, we measured the preferences between rewards that varied along multiple dimensions, and as such could not be ranked according to objective metrics. Monkeys chose between rewards that differed in amount, risk, and type. Because their choices were complete and transitive, the monkeys chose “as if” they integrated different rewards and attributes into a common scale of value. The prediction error responses of single dopamine neurons reflected the integrated subjective value inferred from the choices, rather than the singular reward attributes. Specifically, amount, risk, and reward type modulated dopamine responses exactly to the extent that they influenced economic choices, even when rewards were vastly different, such as liquid and food. This prediction error response could provide a direct updating signal for economic values. PMID:24453218
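
    A schematic sketch of the common-scale idea above: reward attributes are first mapped onto one subjective value, and a delta-rule prediction error is computed on that scale. The value function, weights, and trials are invented for illustration and are not fitted to the study's data.

    ```python
    def subjective_value(amount, risk, reward_type):
        type_bonus = {"juice": 0.0, "food": 0.2}       # illustrative type preference
        return amount - 0.5 * risk + type_bonus[reward_type]

    expected, alpha = 0.0, 0.2
    trials = [{"amount": 1.0, "risk": 0.2, "reward_type": "juice"},
              {"amount": 0.6, "risk": 0.0, "reward_type": "food"},
              {"amount": 1.2, "risk": 0.8, "reward_type": "juice"}]

    for reward in trials:
        value = subjective_value(**reward)
        prediction_error = value - expected            # common-scale prediction error
        expected += alpha * prediction_error
        print(f"value {value:+.2f}  prediction error {prediction_error:+.2f}  expectation {expected:+.2f}")
    ```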

  4. 29 CFR 4211.13 - Modifications to the direct attribution method.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Changes Not Subject to PBGC Approval § 4211.13 Modifications to the direct attribution method. (a) Error in direct attribution method. The unfunded vested benefits allocated to a withdrawing employer under...

  5. Did I Do That? Expectancy Effects of Brain Stimulation on Error-related Negativity and Sense of Agency.

    PubMed

    Hoogeveen, Suzanne; Schjoedt, Uffe; van Elk, Michiel

    2018-06-19

    This study examines the effects of expected transcranial stimulation on the error(-related) negativity (Ne or ERN) and the sense of agency in participants who perform a cognitive control task. Placebo transcranial direct current stimulation was used to elicit expectations of transcranially induced cognitive improvement or impairment. The improvement/impairment manipulation affected both the Ne/ERN and the sense of agency (i.e., whether participants attributed errors to themselves or to the brain stimulation device): Expected improvement increased the ERN in response to errors compared with both impairment and control conditions. Expected impairment made participants falsely attribute errors to the transcranial stimulation. This decrease in sense of agency was correlated with a reduced ERN amplitude. These results show that expectations about transcranial stimulation impact users' neural response to self-generated errors and the attribution of responsibility, especially when actions lead to negative outcomes. We discuss our findings in relation to predictive processing theory, according to which the effect of prior expectations on the ERN reflects the brain's attempt to generate predictive models of incoming information. By demonstrating that induced expectations about transcranial stimulation can have effects at a neural level, that is, beyond mere demand characteristics, our findings highlight the potential for placebo brain stimulation as a promising tool for research.

  6. Fundamental limits in heat-assisted magnetic recording and methods to overcome it with exchange spring structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suess, D.; Abert, C.; Bruckner, F.

    2015-04-28

    The switching probability of magnetic elements for heat-assisted recording with pulsed laser heating was investigated. It was found that FePt elements with a diameter of 5 nm and a height of 10 nm show, at a field of 0.5 T, thermally written-in errors of 12%, which is significantly too large for bit-patterned magnetic recording. Thermally written-in errors can be decreased if larger head fields are applied. However, larger fields lead to an increase in the fundamental thermal jitter. This leads to a dilemma between thermally written-in errors and fundamental thermal jitter. This dilemma can be partly relaxed by increasing the thickness of the FePt film up to 30 nm. For realistic head fields, it is found that the fundamental thermal jitter is of the same order of magnitude as the fundamental thermal jitter in conventional recording, which is about 0.5–0.8 nm. Composite structures consisting of a high-Curie-temperature top layer and FePt as a hard magnetic storage layer can reduce the thermally written-in errors to below 10⁻⁴ if the damping constant is increased in the soft layer. Large damping may be realized by doping with rare earth elements. As with single FePt grains, in the composite structure an increase in switching probability is traded against an increase in thermal jitter. Structures utilizing first-order phase transitions that break the thermal jitter and writability dilemma are discussed.

  7. The model for Fundamentals of Endovascular Surgery (FEVS) successfully defines the competent endovascular surgeon.

    PubMed

    Duran, Cassidy; Estrada, Sean; O'Malley, Marcia; Sheahan, Malachi G; Shames, Murray L; Lee, Jason T; Bismuth, Jean

    2015-12-01

    Fundamental skills testing is now required for certification in general surgery. No model for assessing fundamental endovascular skills exists. Our objective was to develop a model that tests the fundamental endovascular skills and differentiates competent from noncompetent performance. The Fundamentals of Endovascular Surgery model was developed in silicone and virtual-reality versions. Twenty individuals (with a range of experience) performed four tasks on each model in three separate sessions. Tasks on the silicone model were performed under fluoroscopic guidance, and electromagnetic tracking captured motion metrics for catheter tip position. Image processing captured tool tip position and motion on the virtual model. Performance was evaluated using a global rating scale, blinded video assessment of error metrics, and catheter tip movement and position. Motion analysis was based on derivations of speed and position that define proficiency of movement (spectral arc length, duration of submovement, and number of submovements). Performance was significantly different between competent and noncompetent interventionalists for the three performance measures of motion metrics, error metrics, and global rating scale. The mean error metric score was 6.83 for noncompetent individuals and 2.51 for the competent group (P < .0001). Median global rating scores were 2.25 for the noncompetent group and 4.75 for the competent users (P < .0001). The Fundamentals of Endovascular Surgery model successfully differentiates competent and noncompetent performance of fundamental endovascular skills based on a series of objective performance measures. This model could serve as a platform for skills testing for all trainees. Copyright © 2015 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.

  8. Biomass and carbon attributes of downed woody materials in forests of the United States

    Treesearch

    C.W. Woodall; B.F. Walters; S.N. Oswalt; G.M. Domke; C. Toney; A.N. Gray

    2013-01-01

    Due to burgeoning interest in the biomass/carbon attributes of forest downed and dead woody materials (DWMs), attributable to their fundamental role in the carbon cycle, stand structure/diversity, bioenergy resources, and fuel loadings, the U.S. Department of Agriculture has conducted a nationwide field-based inventory of DWM. Using the national DWM inventory, attributes…

  9. You Are the Real Terrorist and We Are Just Your Puppet: Using Individual and Group Factors to Explain Indonesian Muslims’ Attributions of Causes of Terrorism

    PubMed Central

    Mashuri, Ali; Akhrani, Lusy Asa; Zaduqisti, Esti

    2016-01-01

    The current study investigates the role of individual and intergroup factors in predicting Muslims’ tendency to attribute domestic terrorism in Indonesia to an external cause (i.e., The West) or an internal cause (i.e., radical Islamist groups). The results (N = 308) showed that intergroup factors of symbolic threat and realistic threat directly increased the external attribution and conversely decreased the internal attribution. Within the context of the current research, symbolic threat refers to Muslims’ perception that the norms and values of the West undermine Islamic identity. Realistic threat denotes Muslims’ perception that the economy and technology of the West undermine Islamic power. The individual factor of Islamic fundamentalism, which has to do with Muslims’ belief in the literal interpretation of and strict guidelines to Islamic doctrines, indirectly predicted both external attribution and internal attribution of terrorism as hypothesized, via the extent to which Muslims perceived the West as posing a symbolic threat, but not a realistic threat to Islamic existence. Uncertainty avoidance, a cultural dimension that describes the extent to which people view clear instructions as a pivotal source of concern to deal with societal problems, also significantly increased perceived symbolic threat and realistic threat, and this cultural dimension mediated the effect of Islamic fundamentalism on each of the intergroup threats. Finally, we found that the level of Islamic fundamentalism was dependent upon cognitive response, but not emotional response to mortality salience. The cognitive response to mortality salience denotes what Muslims are thinking about in coping with their own death whereas the emotional response denotes what Muslims are feeling about such issue. In particular, we found the cognitive response, but not the emotional response to mortality salience significantly gave rise to Muslims’ Islamic fundamentalism. These findings shed light on the importance of combining individual factors and group factors in explicating the dynamics of Muslims’ tendency to make attributions of causes of domestic terrorism. We discuss theoretical implications and study limitations, as well as practical actions policy makers could conduct to deal with Muslims’ Islamic fundamentalism and reduce the extent to which this particular group perceives the West as threatening their existence. PMID:27247694

  10. Assessing Effective Attributes of Followers in a Leadership Process

    ERIC Educational Resources Information Center

    Antelo, Absael; Prilipko, Evgenia V.; Sheridan-Pereira, Margaret

    2010-01-01

    Followership, being an understudied concept, raises fundamental questions: How did followership develop? Why do people submit to becoming followers? The developmental trajectory of individual follower attributes is as yet uncharted. The current study provides an overview of assessed attributes of followers, as proposed by Antelo (2010)…

  11. Effects of Contextual Sight-Singing and Aural Skills Training on Error-Detection Abilities.

    ERIC Educational Resources Information Center

    Sheldon, Deborah A.

    1998-01-01

    Examines the effects of contextual sight-singing and ear training on pitch and rhythm error detection abilities among undergraduate instrumental music education majors. Shows that additional training produced better error detection, particularly with rhythm errors and in one-part examples. Maintains that differences attributable to texture were…

  12. Interpreting the Latitudinal Structure of Differences Between Modeled and Observed Temperature Trends (Invited)

    NASA Astrophysics Data System (ADS)

    Santer, B. D.; Mears, C. A.; Gleckler, P. J.; Solomon, S.; Wigley, T.; Arblaster, J.; Cai, W.; Gillett, N. P.; Ivanova, D. P.; Karl, T. R.; Lanzante, J.; Meehl, G. A.; Stott, P.; Taylor, K. E.; Thorne, P.; Wehner, M. F.; Zou, C.

    2010-12-01

    We perform the most comprehensive comparison to date of simulated and observed temperature trends. Comparisons are made for different latitude bands, timescales, and temperature variables, using information from a multi-model archive and a variety of observational datasets. Our focus is on temperature changes in the lower troposphere (TLT), the mid- to upper troposphere (TMT), and at the sea surface (SST). For SST, TLT, and TMT, trend comparisons over the satellite era (1979 to 2009) always yield closest agreement in mid-latitudes of the Northern Hemisphere. There are pronounced discrepancies in the tropics and in the Southern Hemisphere: in both regions, the multi-model average warming is consistently larger than observed. At high latitudes in the Northern Hemisphere, the observed tropospheric warming exceeds multi-model average trends. The similarity in the latitudinal structure of this discrepancy pattern across different temperature variables and observational data sets suggests that these trend differences are real, and are not due to residual inhomogeneities in the observations. The interpretation of these results is hampered by the fact that the CMIP-3 multi-model archive analyzed here convolves errors in key external forcings with errors in the model response to forcing. Under a "forcing error" interpretation, model-average temperature trends in the Southern Hemisphere extratropics are biased warm because many models neglect (and/or inaccurately specify) changes in stratospheric ozone and the indirect effects of aerosols. An alternative "response error" explanation for the model trend errors is that there are fundamental problems with model clouds and ocean heat uptake over the Southern Ocean. When SST changes are compared over the longer period 1950 to 2009, there is close agreement between simulated and observed trends poleward of 50°S. This result is difficult to reconcile with the hypothesis that the trend discrepancies over 1979 to 2009 are primarily attributable to response errors. Our results suggest that biases in multi-model average temperature trends over the satellite era can be plausibly linked to forcing errors. Better partitioning of the forcing and response components of model errors will require a systematic program of numerical experimentation, with a focus on exploring the climate response to uncertainties in key historical forcings.

  13. Sudden Possibilities: Porpoises, Eggcorns, and Error

    ERIC Educational Resources Information Center

    Crovitz, Darren

    2011-01-01

    This article discusses how amusing mistakes can make for serious language instruction. The notion that close analysis of language errors can yield insight into how one thinks and learns seems fundamentally obvious. Yet until relatively recently, language errors were primarily treated as indicators of learner deficiency rather than opportunities to…

  14. Strategies for Detecting and Correcting Errors in Accounting Problems.

    ERIC Educational Resources Information Center

    James, Marianne L.

    2003-01-01

    Reviews common errors in accounting tests that students commit resulting from deficiencies in fundamental prior knowledge, ineffective test taking, and inattention to detail and provides solutions to the problems. (JOW)

  15. There's no team in I: How observers perceive individual creativity in a team setting.

    PubMed

    Kay, Min B; Proudfoot, Devon; Larrick, Richard P

    2018-04-01

    Creativity is highly valued in organizations as an important source of innovation. As most creative projects require the efforts of groups of individuals working together, it is important to understand how creativity is perceived for team products, including how observers attribute creative ability to focal actors who worked as part of a creative team. Evidence from three experiments suggests that observers commit the fundamental attribution error-systematically discounting the contribution of the group when assessing the creative ability of a single group representative, particularly when the group itself is not visually salient. In a pilot study, we found that, in the context of the design team at Apple, a target group member visually depicted alone is perceived to have greater personal creative ability than when he is visually depicted with his team. In Study 1, using a sample of managers, we conceptually replicated this finding and further observed that, when shown alone, a target member of a group that produced a creative product is perceived to be as creative as an individual described as working alone on the same output. In Study 2, we replicated the findings of Study 1 and also observed that a target group member depicted alone, rather than with his team, is also attributed less creative ability for uncreative group output. Findings are discussed in light of how overattribution of individual creative ability can harm organizations in the long run. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  16. Classification Model for Forest Fire Hotspot Occurrences Prediction Using ANFIS Algorithm

    NASA Astrophysics Data System (ADS)

    Wijayanto, A. K.; Sani, O.; Kartika, N. D.; Herdiyeni, Y.

    2017-01-01

    This study proposed the application of a data mining technique, the Adaptive Neuro-Fuzzy Inference System (ANFIS), to forest fire hotspot data to develop classification models for hotspot occurrence in Central Kalimantan. A hotspot is a point indicated as the location of a fire. In this study, hotspots are categorized as true alarms or false alarms. ANFIS is a soft computing method in which a given input-output data set is expressed in a fuzzy inference system (FIS). The FIS implements a nonlinear mapping from its input space to the output space. The method classified hotspots as target objects by correlating spatial attribute data, using three-fold cross-validation of the ANFIS algorithm to obtain the best model. The best result, obtained from the 3rd fold, provided low training error (0.0093676) and low testing error (0.0093676). Distance to road was the most influential attribute in determining the probability of a true versus false alarm, reflecting the higher level of human activity captured by this attribute. This classification model can be used to develop an early warning system for forest fires.
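
    The sketch below illustrates the evaluation workflow the abstract describes (three-fold cross-validation of a hotspot true/false-alarm classifier on spatial attributes) rather than the paper's method: ANFIS itself is not implemented here, a gradient-boosting classifier stands in for it, and the attribute names and synthetic data are assumptions made only for illustration.

```python
# Minimal sketch (not the paper's ANFIS implementation): three-fold
# cross-validation of a hotspot true/false-alarm classifier on spatial
# attributes, with a gradient-boosting model standing in for ANFIS.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n = 600
# Hypothetical spatial attributes for each hotspot record.
X = np.column_stack([
    rng.uniform(0, 10_000, n),   # distance to road (m)
    rng.uniform(0, 5_000, n),    # distance to settlement (m)
    rng.uniform(0, 100, n),      # land-cover index
])
# Synthetic labels: closer to roads -> more likely a true alarm.
y = (rng.uniform(size=n) < 1.0 / (1.0 + X[:, 0] / 2_000)).astype(int)

errors = []
for train_idx, test_idx in KFold(n_splits=3, shuffle=True, random_state=0).split(X):
    model = GradientBoostingClassifier().fit(X[train_idx], y[train_idx])
    errors.append(1.0 - model.score(X[test_idx], y[test_idx]))
print("per-fold test error:", np.round(errors, 4))  # keep the best fold's model
```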

  17. Error model for the SAO 1969 standard earth.

    NASA Technical Reports Server (NTRS)

    Martin, C. F.; Roy, N. A.

    1972-01-01

    A method is developed for estimating an error model for geopotential coefficients using satellite tracking data. A single station's apparent timing error for each pass is attributed to geopotential errors. The root sum of the residuals for each station also depends on the geopotential errors, and these are used to select an error model. The model chosen is 1/4 of the difference between the SAO M1 and the APL 3.5 geopotential.
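
    The arithmetic of the adopted error model (one quarter of the difference between two coefficient sets) is simple enough to sketch directly; the arrays below are placeholders, not the actual SAO M1 or APL 3.5 harmonic coefficients.

```python
# Sketch of the arithmetic described above: take the error model for the
# geopotential coefficients to be 1/4 of the difference between two published
# coefficient sets. Values are placeholders for illustration only.
import numpy as np

sao_m1 = np.array([-484.17e-6, 0.96e-6, 0.54e-6])   # placeholder coefficients
apl_35 = np.array([-484.20e-6, 0.94e-6, 0.57e-6])   # placeholder coefficients

sigma_c = 0.25 * np.abs(sao_m1 - apl_35)             # assumed coefficient errors
print(sigma_c)
```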

  18. Exploring Reactions to Pilot Reliability Certification and Changing Attitudes on the Reduction of Errors

    ERIC Educational Resources Information Center

    Boedigheimer, Dan

    2010-01-01

    Approximately 70% of aviation accidents are attributable to human error. The greatest opportunity for further improving aviation safety is found in reducing human errors in the cockpit. The purpose of this quasi-experimental, mixed-method research was to evaluate whether there was a difference in pilot attitudes toward reducing human error in the…

  19. Call to Adopt a Nominal Set of Astrophysical Parameters and Constants to Improve the Accuracy of Fundamental Physical Properties of Stars

    NASA Astrophysics Data System (ADS)

    Harmanec, Petr; Prša, Andrej

    2011-08-01

    The increasing precision of astronomical observations of stars and stellar systems is gradually getting to a level where the use of slightly different values of the solar mass, radius, and luminosity, as well as different values of fundamental physical constants, can lead to measurable systematic differences in the determination of basic physical properties. An equivalent issue with an inconsistent value of the speed of light was resolved by adopting a nominal value that is constant and has no error associated with it. Analogously, we suggest that the systematic error in stellar parameters may be eliminated by (1) replacing the solar radius R⊙ and luminosity L⊙ by nominal values that are by definition exact and expressed in SI units; (2) computing stellar masses in terms of M⊙ by noting that the measurement error of the product GM⊙ is 5 orders of magnitude smaller than the error in G; (3) computing stellar masses and temperatures in SI units by using the corresponding derived nominal values; and (4) clearly stating the reference for the values of the fundamental physical constants used. We discuss the need and demonstrate the advantages of such a paradigm shift.
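
    The reasoning behind point (2) can be made explicit with a short worked relation; the symbols and the error-budget form below are a sketch of the argument, not a quotation from the paper.

```latex
% Sketch of point (2): dynamical analyses yield the mass ratio q = M_*/M_\odot
% (equivalently GM_*), so a stellar mass can be carried in units of the
% precisely known product GM_\odot, and only the optional conversion to
% kilograms involves the far less precise constant G.
\[
  G M_* = q \,(G M_\odot), \qquad
  M_*\,[\mathrm{kg}] = \frac{q\,(G M_\odot)}{G},
\]
% so the relative error budget is approximately
\[
  \frac{\sigma_{M_*}}{M_*} \approx
  \sqrt{\left(\frac{\sigma_q}{q}\right)^{2}
      + \left(\frac{\sigma_{GM_\odot}}{GM_\odot}\right)^{2}
      + \left(\frac{\sigma_{G}}{G}\right)^{2}},
\]
% where the GM_\odot term is several orders of magnitude smaller than the
% G term, as the abstract notes.
```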

  20. Measuring Systematic Error with Curve Fits

    ERIC Educational Resources Information Center

    Rupright, Mark E.

    2011-01-01

    Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model. In this paper I give three examples in which my students use popular curve-fitting software…
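
    The abstract refers to examples done with popular curve-fitting software; the sketch below is an illustrative stand-in using SciPy, with a synthetic free-fall data set and an assumed constant offset. It shows the general idea: a systematic error that would bias a parameter can instead be absorbed and measured by adding it to the fit model.

```python
# Illustrative sketch (assumed data, not the paper's examples): synthetic
# free-fall distance data with a constant systematic offset. Fitting a model
# that includes an offset term recovers the systematic error as a fit
# parameter instead of biasing the estimate of g.
import numpy as np
from scipy.optimize import curve_fit

g_true, offset_true = 9.81, 0.05          # m/s^2, m (assumed values)
t = np.linspace(0.1, 1.0, 20)
rng = np.random.default_rng(1)
d = 0.5 * g_true * t**2 + offset_true + rng.normal(0, 0.002, t.size)

def no_offset(t, g):
    return 0.5 * g * t**2

def with_offset(t, g, d0):
    return 0.5 * g * t**2 + d0

(g_biased,), _ = curve_fit(no_offset, t, d)
(g_fit, d0_fit), _ = curve_fit(with_offset, t, d)
print(f"g without offset term: {g_biased:.3f}  (biased)")
print(f"g with offset term:    {g_fit:.3f},  offset estimate {d0_fit:.3f} m")
```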

  1. Reduction of errors during practice facilitates fundamental movement skill learning in children with intellectual disabilities.

    PubMed

    Capio, C M; Poolton, J M; Sit, C H P; Eguia, K F; Masters, R S W

    2013-04-01

    Children with intellectual disabilities (ID) have been found to have inferior motor proficiencies in fundamental movement skills (FMS). This study examined the effects of training the FMS of overhand throwing by manipulating the amount of practice errors. Participants included 39 children with ID aged 4-11 years who were allocated into either an error-reduced (ER) training programme or a more typical programme in which errors were frequent (error-strewn, ES). Throwing movement form, throwing accuracy, and throwing frequency during free play were evaluated. The ER programme improved movement form, and increased throwing activity during free play to a greater extent than the ES programme. Furthermore, ER learners were found to be capable of engaging in a secondary cognitive task while manifesting robust throwing accuracy performance. The findings support the use of movement skills training programmes that constrain practice errors in children with ID, suggesting that such approach results in improved performance and heightened movement engagement in free play. © 2012 The Authors. Journal of Intellectual Disability Research © 2012 Blackwell Publishing Ltd.

  2. Fundamental frequency perturbation indicates perceived health and age in male and female speakers

    NASA Astrophysics Data System (ADS)

    Feinberg, David R.

    2004-05-01

    There is strong support for the idea that healthy vocal cords are able to produce fundamental frequencies (F0) with minimal perturbation. Measures of F0 perturbation have been shown to discriminate pathological versus healthy populations. In addition to measuring vocal cord health, F0 perturbation is a correlate of real and perceived age. Here, the role of jitter (periodic variation in F0) and shimmer (periodic variation in amplitude of F0) in perceived health and age in a young adult (males aged 18-33, females aged 18-26), nondysphonic population was investigated. Voices were assessed for health and age by peer-aged, opposite-sex raters. Jitter and shimmer were measured with Praat software (www.praat.org) using various algorithms (jitter: DDP, local, local absolute, PPQ5, and RAP; shimmer: DDA, local, local absolute, APQ3, APQ5, APQ11) to reduce measurement error, and to ascertain the robustness of the findings. Male and female voices were analyzed separately. In both sexes, ratings of health and age were significantly correlated. Measures of jitter and shimmer correlated negatively with perceived health, and positively with perceived age. Further analysis revealed that these effects were independent in male voices. Implications of this finding are that attributions of vocal health and age may reflect the actual underlying condition.
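
    For concreteness, the "local" variants of jitter and shimmer mentioned above are the mean absolute cycle-to-cycle difference in period (or peak amplitude) divided by the mean period (or amplitude). The sketch below computes them from period and amplitude sequences that are assumed to come from a separate pitch-extraction step; the numerical values are made up.

```python
# Local jitter and shimmer: mean absolute difference between consecutive
# glottal periods (or peak amplitudes) divided by the mean period (amplitude).
# The input sequences are assumed outputs of a separate pitch tracker.
import numpy as np

def local_jitter(periods):
    periods = np.asarray(periods, dtype=float)
    return np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def local_shimmer(amplitudes):
    amplitudes = np.asarray(amplitudes, dtype=float)
    return np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)

# Example with made-up cycle-to-cycle values (seconds and arbitrary units):
periods = [0.00830, 0.00833, 0.00828, 0.00835, 0.00831]
amps    = [0.71, 0.69, 0.72, 0.70, 0.71]
print(f"jitter (local):  {100 * local_jitter(periods):.2f} %")
print(f"shimmer (local): {100 * local_shimmer(amps):.2f} %")
```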

  3. Automated contouring error detection based on supervised geometric attribute distribution models for radiation therapy: A general strategy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Hsin-Chen; Tan, Jun; Dolly, Steven

    2015-02-15

    Purpose: One of the most critical steps in radiation therapy treatment is accurate tumor and critical organ-at-risk (OAR) contouring. Both manual and automated contouring processes are prone to errors and to a large degree of inter- and intraobserver variability. These are often due to the limitations of imaging techniques in visualizing human anatomy as well as to inherent anatomical variability among individuals. Physicians/physicists have to reverify all the radiation therapy contours of every patient before using them for treatment planning, which is tedious, laborious, and still not an error-free process. In this study, the authors developed a general strategy based on novel geometric attribute distribution (GAD) models to automatically detect radiation therapy OAR contouring errors and facilitate the current clinical workflow. Methods: Considering the radiation therapy structures’ geometric attributes (centroid, volume, and shape), the spatial relationship of neighboring structures, as well as anatomical similarity of individual contours among patients, the authors established GAD models to characterize the interstructural centroid and volume variations, and the intrastructural shape variations of each individual structure. The GAD models are scalable and deformable, and constrained by their respective principal attribute variations calculated from training sets with verified OAR contours. A new iterative weighted GAD model-fitting algorithm was developed for contouring error detection. Receiver operating characteristic (ROC) analysis was employed in a unique way to optimize the model parameters to satisfy clinical requirements. A total of forty-four head-and-neck patient cases, each of which includes nine critical OAR contours, were utilized to demonstrate the proposed strategy. Twenty-nine out of these forty-four patient cases were utilized to train the inter- and intrastructural GAD models. These training data and the remaining fifteen testing data sets were separately employed to test the effectiveness of the proposed contouring error detection strategy. Results: An evaluation tool was implemented to illustrate how the proposed strategy automatically detects the radiation therapy contouring errors for a given patient and provides 3D graphical visualization of error detection results as well. The contouring error detection results were achieved with an average sensitivity of 0.954/0.906 and an average specificity of 0.901/0.909 on the centroid/volume related contouring errors of all the tested samples. As for the detection results on structural shape related contouring errors, an average sensitivity of 0.816 and an average specificity of 0.94 on all the tested samples were obtained. The promising results indicated the feasibility of the proposed strategy for the detection of contouring errors with a low false detection rate. Conclusions: The proposed strategy can reliably identify contouring errors based upon inter- and intrastructural constraints derived from clinically approved contours. It holds great potential for improving the radiation therapy workflow. ROC and box plot analyses allow for analytical tuning of the system parameters to satisfy clinical requirements. Future work will focus on the improvement of strategy reliability by utilizing more training sets and additional geometric attribute constraints.
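
    A much-simplified stand-in for the GAD-model idea (not the authors' iterative weighted model fitting) is sketched below: flag a test contour whose volume or centroid coordinates fall too far outside the distribution learned from verified training contours. The structure, data, and 3-sigma threshold are illustrative assumptions.

```python
# Simplified stand-in for the GAD-model idea: learn the distribution of
# contour attributes from verified training contours, then flag test contours
# whose attributes are statistical outliers. Names, data, and the threshold
# are assumptions for illustration.
import numpy as np

def fit_attribute_model(train_attrs):
    """train_attrs: (n_patients, n_features) array of verified-contour attributes."""
    return train_attrs.mean(axis=0), train_attrs.std(axis=0, ddof=1)

def flag_contour(test_attrs, mean, std, n_sigma=3.0):
    z = np.abs((test_attrs - mean) / std)
    return z > n_sigma, z            # per-attribute flags and z-scores

rng = np.random.default_rng(2)
# Columns: centroid x, y, z (mm) and volume (cc) of, say, a parotid contour.
train = rng.normal([10.0, -40.0, 55.0, 25.0], [3.0, 3.0, 4.0, 4.0], size=(29, 4))
mean, std = fit_attribute_model(train)

suspect = np.array([11.0, -39.0, 57.0, 60.0])   # implausibly large volume
flags, z = flag_contour(suspect, mean, std)
print("flagged attributes:", flags, "z-scores:", np.round(z, 1))
```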

  4. Skeletal maturation, fundamental motor skills and motor coordination in children 7-10 years.

    PubMed

    Freitas, Duarte L; Lausen, Berthold; Maia, José António; Lefevre, Johan; Gouveia, Élvio Rúbio; Thomis, Martine; Antunes, António Manuel; Claessens, Albrecht L; Beunen, Gaston; Malina, Robert M

    2015-01-01

    Relationships between skeletal maturation and fundamental motor skills and gross motor coordination were evaluated in 429 children (213 boys and 216 girls) 7-10 years. Skeletal age was assessed (Tanner-Whitehouse 2 method), and stature, body mass, motor coordination (Körperkoordinations Test für Kinder, KTK) and fundamental motor skills (Test of Gross Motor Development, TGMD-2) were measured. Relationships among chronological age, skeletal age (expressed as the standardised residual of skeletal age on chronological age) and body size and fundamental motor skills and motor coordination were analysed with hierarchical multiple regression. Standardised residual of skeletal age on chronological age interacting with stature and body mass explained a maximum of 7.0% of the variance in fundamental motor skills and motor coordination over that attributed to body size per se. Standardised residual of skeletal age on chronological age alone accounted for a maximum of 9.0% of variance in fundamental motor skills, and motor coordination over that attributed to body size per se and interactions between standardised residual of skeletal age on chronological age and body size. In conclusion, skeletal age alone or interacting with body size has a negligible influence on fundamental motor skills and motor coordination in children 7-10 years.

  5. Psychrometric Measurement of Leaf Water Potential: Lack of Error Attributable to Leaf Permeability.

    PubMed

    Barrs, H D

    1965-07-02

    A report that low permeability could cause gross errors in psychrometric determinations of water potential in leaves has not been confirmed. No measurable error from this source could be detected for either of two types of thermocouple psychrometer tested on four species, each at four levels of water potential. No source of error other than tissue respiration could be demonstrated.

  6. Feedback control of one's own action: Self-other sensory attribution in motor control.

    PubMed

    Asai, Tomohisa

    2015-12-15

    The sense of agency, the subjective experience of controlling one's own action, has an important function in motor control. When we move our own body or even external tools, we attribute that movement to ourselves and utilize that sensory information in order to correct "our own" movement in theory. The dynamic relationship between conscious self-other attribution and feedback control, however, is still unclear. Participants were required to make a sinusoidal reaching movement and received its visual feedback (i.e., cursor). When participants received a fake movement that was spatio-temporally close to their actual movement, illusory self-attribution of the fake movement was observed. In this situation, since participants tried to control the cursor but it was impossible to do so, the movement error was increased (Experiment 1). However, when the visual feedback was reduced to make self-other attribution difficult, there was no further increase in the movement error (Experiment 2). These results indicate that conscious self-other sensory attribution might coordinate sensory input and motor output. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Repeated Speech Errors: Evidence for Learning

    ERIC Educational Resources Information Center

    Humphreys, Karin R.; Menzies, Heather; Lake, Johanna K.

    2010-01-01

    Three experiments elicited phonological speech errors using the SLIP procedure to investigate whether there is a tendency for speech errors on specific words to reoccur, and whether this effect can be attributed to implicit learning of an incorrect mapping from lemma to phonology for that word. In Experiment 1, when speakers made a phonological…

  8. Dopaminergic dysfunction in schizophrenia: salience attribution revisited.

    PubMed

    Heinz, Andreas; Schlagenhauf, Florian

    2010-05-01

    A dysregulation of the mesolimbic dopamine system in schizophrenia patients may lead to aberrant attribution of incentive salience and contribute to the emergence of psychopathological symptoms like delusions. The dopaminergic signal has been conceptualized to represent a prediction error that indicates the difference between received and predicted reward. The incentive salience hypothesis states that dopamine mediates the attribution of "incentive salience" to conditioned cues that predict reward. This hypothesis was initially applied in the context of drug addiction and then transferred to schizophrenic psychosis. It was hypothesized that increased firing (chaotic or stress associated) of dopaminergic neurons in the striatum of schizophrenia patients attributes incentive salience to otherwise irrelevant stimuli. Here, we review recent neuroimaging studies directly addressing this hypothesis. They suggest that neuronal functions associated with dopaminergic signaling, such as the attribution of salience to reward-predicting stimuli and the computation of prediction errors, are indeed altered in schizophrenia patients and that this impairment appears to contribute to delusion formation.

  9. A Practical Methodology for Quantifying Random and Systematic Components of Unexplained Variance in a Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Deloach, Richard; Obara, Clifford J.; Goodman, Wesley L.

    2012-01-01

    This paper documents a check standard wind tunnel test conducted in the Langley 0.3-Meter Transonic Cryogenic Tunnel (0.3M TCT) that was designed and analyzed using the Modern Design of Experiments (MDOE). The test was designed to partition the unexplained variance of typical wind tunnel data samples into two constituent components, one attributable to ordinary random error, and one attributable to systematic error induced by covariate effects. Covariate effects in wind tunnel testing are discussed, with examples. The impact of systematic (non-random) unexplained variance on the statistical independence of sequential measurements is reviewed. The corresponding correlation among experimental errors is discussed, as is the impact of such correlation on experimental results generally. The specific experiment documented herein was organized as a formal test for the presence of unexplained variance in representative samples of wind tunnel data, in order to quantify the frequency with which such systematic error was detected, and its magnitude relative to ordinary random error. Levels of systematic and random error reported here are representative of those quantified in other facilities, as cited in the references.
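
    The partitioning idea can be sketched with a simple one-way variance-component decomposition over replicated measurement blocks; this is a hedged illustration on synthetic data, not the MDOE analysis used in the paper.

```python
# Hedged sketch of partitioning unexplained variance: repeat the same
# check-standard set point in several blocks, then split the scatter into a
# within-block (ordinary random error) component and a between-block
# (systematic, covariate-induced) component. Data and block structure are
# synthetic assumptions.
import numpy as np

rng = np.random.default_rng(3)
n_blocks, n_reps = 6, 8
sigma_random, sigma_systematic = 0.010, 0.020       # assumed "truth"
block_shift = rng.normal(0, sigma_systematic, n_blocks)
data = block_shift[:, None] + rng.normal(0, sigma_random, (n_blocks, n_reps))

grand = data.mean()
ms_within = data.var(axis=1, ddof=1).mean()                       # random error
ms_between = n_reps * ((data.mean(axis=1) - grand) ** 2).sum() / (n_blocks - 1)
var_systematic = max(0.0, (ms_between - ms_within) / n_reps)      # ANOVA estimator

print(f"random std dev estimate:     {np.sqrt(ms_within):.4f}")
print(f"systematic std dev estimate: {np.sqrt(var_systematic):.4f}")
```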

  10. Intransparent German number words complicate transcoding - a translingual comparison with Japanese.

    PubMed

    Moeller, Korbinian; Zuber, Julia; Olsen, Naoko; Nuerk, Hans-Christoph; Willmes, Klaus

    2015-01-01

    Superior early numerical competencies of children in several Asian countries have (amongst others) been attributed to the higher transparency of their number word systems. Here, we directly investigated this claim by evaluating whether Japanese children's transcoding performance when writing numbers to dictation (e.g., "twenty five" → 25) was less error prone than that of German-speaking children - both in general as well as when considering language-specific attributes of the German number word system such as the inversion property, in particular. In line with this hypothesis we observed that German-speaking children committed more transcoding errors in general than their Japanese peers. Moreover, their error pattern reflected the specific inversion intransparency of the German number-word system. Inversion errors in transcoding represented the most prominent error category in German-speaking children, but were almost absent in Japanese-speaking children. We conclude that the less transparent German number-word system complicates the acquisition of the correspondence between symbolic Arabic numbers and their respective verbal number words.

  11. Development of algorithms for building inventory compilation through remote sensing and statistical inferencing

    NASA Astrophysics Data System (ADS)

    Sarabandi, Pooya

    Building inventories are one of the core components of disaster vulnerability and loss estimation models, and as such, play a key role in providing decision support for risk assessment, disaster management and emergency response efforts. In many parts of the world, inclusive building inventories suitable for use in catastrophe models cannot be found. Furthermore, there are serious shortcomings in the existing building inventories that include incomplete or out-dated information on critical attributes as well as missing or erroneous values for attributes. In this dissertation a set of methodologies for updating spatial and geometric information of buildings from single and multiple high-resolution optical satellite images is presented. Basic concepts, terminologies and fundamentals of 3-D terrain modeling from satellite images are first introduced. Different sensor projection models are then presented and sources of optical noise such as lens distortions are discussed. An algorithm for extracting height and creating 3-D building models from a single high-resolution satellite image is formulated. The proposed algorithm is a semi-automated supervised method capable of extracting attributes such as longitude, latitude, height, square footage, perimeter, irregularity index, etc. The associated errors due to the interactive nature of the algorithm are quantified and solutions for minimizing the human-induced errors are proposed. The height extraction algorithm is validated against independent survey data and results are presented. The validation results show that an average height modeling accuracy of 1.5% can be achieved using this algorithm. Furthermore, the concept of cross-sensor data fusion for the purpose of 3-D scene reconstruction using quasi-stereo images is developed in this dissertation. The developed algorithm utilizes two or more single satellite images acquired from different sensors and provides the means to construct 3-D building models in a more economical way. A terrain-dependent-search algorithm is formulated to facilitate the search for correspondences in a quasi-stereo pair of images. The calculated heights for sample buildings using the cross-sensor data fusion algorithm show an average coefficient of variation of 1.03%. In order to infer structural-type and occupancy-type, i.e. engineering attributes, of buildings from spatial and geometric attributes of 3-D models, a statistical data analysis framework is formulated. Applications of "Classification Trees" and "Multinomial Logistic Models" in modeling the marginal probabilities of class-membership of engineering attributes are investigated. Adaptive statistical models that incorporate different spatial and geometric attributes of buildings while inferring the engineering attributes are developed in this dissertation. The inferred engineering attributes in conjunction with the spatial and geometric attributes derived from the imagery can be used to augment regional building inventories and therefore enhance the result of catastrophe models. In the last part of the dissertation, a set of empirically-derived motion-damage relationships based on the correlation of observed building performance with measured ground-motion parameters from the 1994 Northridge and 1999 Chi-Chi Taiwan earthquakes is developed.
Fragility functions in the form of cumulative lognormal distributions and damage probability matrices for several classes of buildings (wood, steel and concrete), as well as a number of ground-motion intensity measures, are developed and compared to currently-used motion-damage relationships.
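
    A minimal sketch of the multinomial-logistic inference step described above is given below: predicting a building's structural type from geometric attributes extracted from imagery. The feature names, class labels, and data are illustrative assumptions, not the dissertation's inventory or fitted model.

```python
# Minimal sketch of inferring an engineering attribute (structural type) from
# geometric attributes via a multinomial logistic model. Features, labels, and
# data are synthetic assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 400
height = rng.uniform(3, 120, n)            # m
footprint = rng.uniform(80, 3000, n)       # m^2
irregularity = rng.uniform(0, 1, n)
X = np.column_stack([height, footprint, irregularity])

# Synthetic rule just to create learnable labels: taller -> steel/concrete.
y = np.where(height > 60, "steel", np.where(height > 15, "concrete", "wood"))

model = LogisticRegression(max_iter=1000).fit(X, y)   # multinomial with lbfgs
probs = model.predict_proba([[25.0, 600.0, 0.2]])[0]  # marginal class probabilities
print(dict(zip(model.classes_, np.round(probs, 2))))
```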

  12. Linking performance decline to choking: players' perceptions in basketball.

    PubMed

    Fryer, Ashley Marie; Tenenbaum, Gershon; Chow, Graig M

    2018-02-01

    This study was aimed at examining how basketball players view unexpected performance errors in basketball, and under what conditions they perceive them as choking. Fifty-three basketball players were randomly assigned to 2 groups (game half) to evaluate the linkage between performance decline and choking as a function of game-time, score gap and game half. Within each group, players viewed 8 scenario clips, each featuring a different player committing an error, and subsequently rated the extent of performance decline, the instance of choking and the salience of various performance attributions regarding the error. The analysis revealed that choking was most salient in the 2nd half of the game, but an error was perceived as choking more saliently in the beginning of the 2nd half. This trend was also shown for players' perception of performance decline. Players' ratings of the attributions assigned to errors, however, revealed that during the end of the 2nd half, time pressure and lack of concentration were the causes of errors. Overall, the results provide evidence towards a conceptual framework linking performance decline to the perception of choking, and that errors committed by players are perceived as choking when there is no salient reason to suggest their occurrence.

  13. The Sensitivity of Adverse Event Cost Estimates to Diagnostic Coding Error

    PubMed Central

    Wardle, Gavin; Wodchis, Walter P; Laporte, Audrey; Anderson, Geoffrey M; Baker, Ross G

    2012-01-01

    Objective: To examine the impact of diagnostic coding error on estimates of hospital costs attributable to adverse events. Data Sources: Original and reabstracted medical records of 9,670 complex medical and surgical admissions at 11 hospital corporations in Ontario from 2002 to 2004. Patient-specific costs, not including physician payments, were retrieved from the Ontario Case Costing Initiative database. Study Design: Adverse events were identified among the original and reabstracted records using ICD10-CA (Canadian adaptation of ICD10) codes flagged as postadmission complications. Propensity score matching and multivariate regression analysis were used to estimate the cost of the adverse events and to determine the sensitivity of cost estimates to diagnostic coding error. Principal Findings: Estimates of the cost of the adverse events ranged from $16,008 (metabolic derangement) to $30,176 (upper gastrointestinal bleeding). Coding errors caused the total cost attributable to the adverse events to be underestimated by 16 percent. The impact of coding error on adverse event cost estimates was highly variable at the organizational level. Conclusions: Estimates of adverse event costs are highly sensitive to coding error. Adverse event costs may be significantly underestimated if the likelihood of error is ignored. PMID:22091908

  14. Eliciting the Functional Processes of Apologizing for Errors in Health Care

    PubMed Central

    Prothero, Marie M.; Morse, Janice M.

    2017-01-01

    The purpose of this article was to analyze the concept development of apology in the context of errors in health care, the administrative response, policy and format/process of the subsequent apology. Using pragmatic utility and a systematic review of the literature, 29 articles and one book provided attributes involved in apologizing. Analytic questions were developed to guide the data synthesis, and types of apologies used in different circumstances were identified. The antecedents, attributes, and outcomes of apologizing were identified. A model was constructed illustrating the components of a complete apology, other types of apologies, and ramifications/outcomes of each. Clinical implications of developing formal policies for correcting medical errors through apologies are recommended. Defining the essential elements of apology is the first step in establishing a just culture in health care. Respect for patient-centered care reduces the retaliatory consequences following an error, and may even restore the physician-patient relationship. PMID:28540337

  15. Reducing errors benefits the field-based learning of a fundamental movement skill in children.

    PubMed

    Capio, C M; Poolton, J M; Sit, C H P; Holmstrom, M; Masters, R S W

    2013-03-01

    Proficient fundamental movement skills (FMS) are believed to form the basis of more complex movement patterns in sports. This study examined the development of the FMS of overhand throwing in children through either an error-reduced (ER) or error-strewn (ES) training program. Students (n = 216), aged 8-12 years (M = 9.16, SD = 0.96), practiced overhand throwing in either a program that reduced errors during practice (ER) or one that was ES. ER program reduced errors by incrementally raising the task difficulty, while the ES program had an incremental lowering of task difficulty. Process-oriented assessment of throwing movement form (Test of Gross Motor Development-2) and product-oriented assessment of throwing accuracy (absolute error) were performed. Changes in performance were examined among children in the upper and lower quartiles of the pretest throwing accuracy scores. ER training participants showed greater gains in movement form and accuracy, and performed throwing more effectively with a concurrent secondary cognitive task. Movement form improved among girls, while throwing accuracy improved among children with low ability. Reduced performance errors in FMS training resulted in greater learning than a program that did not restrict errors. Reduced cognitive processing costs (effective dual-task performance) associated with such approach suggest its potential benefits for children with developmental conditions. © 2011 John Wiley & Sons A/S.

  16. Spectrum-averaged Harmonic Path (SHAPA) algorithm for non-contact vital sign monitoring with ultra-wideband (UWB) radar.

    PubMed

    Van Nguyen; Javaid, Abdul Q; Weitnauer, Mary Ann

    2014-01-01

    We introduce the Spectrum-averaged Harmonic Path (SHAPA) algorithm for estimation of heart rate (HR) and respiration rate (RR) with Impulse Radio Ultrawideband (IR-UWB) radar. Periodic movement of the human torso caused by respiration and heartbeat induces fundamental frequencies and their harmonics at the respiration and heart rates. IR-UWB enables capture of these spectral components, and frequency-domain processing enables a low-cost implementation. Most existing methods identify the fundamental component in either the frequency or the time domain to estimate the HR and/or RR, and lead to significant error if the fundamental is distorted or cancelled by interference. The SHAPA algorithm (1) takes advantage of the HR harmonics, where there is less interference, and (2) exploits the information in previous spectra to achieve more reliable and robust estimation of the fundamental frequency in the spectrum under consideration. Example experimental results for HR estimation demonstrate how our algorithm eliminates errors caused by interference and produces 16% to 60% more valid estimates.
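
    The harmonic idea can be sketched in a few lines: score each candidate heart-rate frequency by averaging the spectral magnitude at its first few harmonics, so a distorted or cancelled fundamental does not dominate the estimate. This is a simplified illustration, not the published SHAPA algorithm (in particular, the averaging over previous spectra is omitted), and the signal and parameters are synthetic.

```python
# Simplified sketch of harmonic scoring for heart-rate estimation: a candidate
# fundamental is scored by the mean spectral magnitude at its harmonics.
# Signal and parameters are synthetic assumptions.
import numpy as np

fs = 20.0                                   # slow-time sampling rate (Hz), assumed
t = np.arange(0, 30, 1 / fs)
hr_true = 1.2                               # 72 beats per minute
# Torso displacement: weak HR fundamental, stronger harmonic, plus respiration.
x = (0.05 * np.sin(2 * np.pi * hr_true * t)
     + 0.15 * np.sin(2 * np.pi * 2 * hr_true * t)
     + 1.00 * np.sin(2 * np.pi * 0.25 * t))

spec = np.abs(np.fft.rfft(x * np.hanning(x.size)))
freqs = np.fft.rfftfreq(x.size, 1 / fs)

def harmonic_score(f0, n_harmonics=3):
    idx = [np.argmin(np.abs(freqs - k * f0)) for k in range(1, n_harmonics + 1)]
    return spec[idx].mean()

candidates = np.arange(0.8, 2.5, 0.01)      # 48-150 bpm search grid
best = candidates[np.argmax([harmonic_score(f) for f in candidates])]
print(f"estimated heart rate: {best * 60:.0f} bpm")
```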

  17. Evaluation of Acoustic Doppler Current Profiler measurements of river discharge

    USGS Publications Warehouse

    Morlock, S.E.

    1996-01-01

    The standard deviations of the ADCP measurements ranged from approximately 1 to 6 percent and were generally higher than the measurement errors predicted by error-propagation analysis of ADCP instrument performance. These error-prediction methods assume that the largest component of ADCP discharge measurement error is instrument related. The larger standard deviations indicate that substantial portions of measurement error may be attributable to sources unrelated to ADCP electronics or signal processing and are functions of the field environment.

  18. Team Attributes, Processes, and Values: A Pedagogical Framework

    ERIC Educational Resources Information Center

    Keyton, Joann; Beck, Stephenson J.

    2008-01-01

    This article proposes a pedagogical framework to help students analyze their group and team interactions. Intersecting five fundamental group attributes (group size, group goal, group member interdependence, group structure, and group identity) with three overarching group processes (leadership, decision making, and conflict management) creates an…

  19. Why leaders don't learn from success.

    PubMed

    Gino, Francesca; Pisano, Gary P

    2011-04-01

    What causes so many companies that once dominated their industries to slide into decline? In this article, two Harvard Business School professors argue that such firms lose their touch because success breeds failure by impeding learning at both the individual and organizational levels. When we succeed, we assume that we know what we are doing, but it could be that we just got lucky. We make what psychologists call fundamental attribution errors, giving too much credit to our talents and strategy and too little to environmental factors and random events. We develop an overconfidence bias, becoming so self-assured that we think we don't need to change anything. We also experience the failure-to-ask-why syndrome and neglect to investigate the causes of good performance. To overcome these three learning impediments, executives should examine successes with the same scrutiny they apply to failures. Companies should implement systematic after-action reviews to understand all the factors that led to a win, and test their theories by conducting experiments even if "it ain't broke."

  20. Chemical Reaction Rates from Ring Polymer Molecular Dynamics: Zero Point Energy Conservation in Mu + H2 → MuH + H.

    PubMed

    Pérez de Tudela, Ricardo; Aoiz, F J; Suleimanov, Yury V; Manolopoulos, David E

    2012-02-16

    A fundamental issue in the field of reaction dynamics is the inclusion of the quantum mechanical (QM) effects such as zero point energy (ZPE) and tunneling in molecular dynamics simulations, and in particular in the calculation of chemical reaction rates. In this work we study the chemical reaction between a muonium atom and a hydrogen molecule. The recently developed ring polymer molecular dynamics (RPMD) technique is used, and the results are compared with those of other methods. For this reaction, the thermal rate coefficients calculated with RPMD are found to be in excellent agreement with the results of an accurate QM calculation. The very minor discrepancies are within the convergence error even at very low temperatures. This exceptionally good agreement can be attributed to the dominant role of ZPE in the reaction, which is accounted for extremely well by RPMD. Tunneling only plays a minor role in the reaction.

  1. Metonymy and reference-point errors in novice programming

    NASA Astrophysics Data System (ADS)

    Miller, Craig S.

    2014-07-01

    When learning to program, students often mistakenly refer to an element that is structurally related to the element that they intend to reference. For example, they may indicate the attribute of an object when their intention is to reference the whole object. This paper examines these reference-point errors through the context of metonymy. Metonymy is a rhetorical device where the speaker states a referent that is structurally related to the intended referent. For example, the following sentence states an office bureau but actually refers to a person working at the bureau: The tourist asked the travel bureau for directions to the museum. Drawing upon previous studies, I discuss how student reference errors may be consistent with the use of metonymy. In particular, I hypothesize that students are more likely to reference an identifying element even when a structurally related element is intended. I then present two experiments, which produce results consistent with this analysis. In both experiments, students are more likely to produce reference-point errors that involve identifying attributes than descriptive attributes. Given these results, I explore the possibility that students are relying on habits of communication rather than the mechanistic principles needed for successful programming. Finally I discuss teaching interventions using live examples and how metonymy may be presented to non-computing students as pedagogy for computational thinking.
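
    A toy example of the reference-point error the abstract describes is shown below: the student means to pass the whole object but names one of its identifying attributes instead. The class and function names are invented for this illustration.

```python
# Toy illustration of a reference-point (metonymy-like) error: referencing an
# identifying attribute when the whole object is intended. Names are invented.
class Student:
    def __init__(self, name, gpa):
        self.name = name
        self.gpa = gpa

def print_report(student):
    print(f"{student.name}: GPA {student.gpa}")

alice = Student("Alice", 3.7)

# Intended call: reference the whole object.
print_report(alice)

# Reference-point error: naming the identifying attribute instead of the
# object. This raises AttributeError, because a str has no .name or .gpa.
try:
    print_report(alice.name)
except AttributeError as exc:
    print("reference-point error:", exc)
```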

  2. Attribution and social cognitive neuroscience: a new approach for the "online-assessment" of causality ascriptions and their emotional consequences.

    PubMed

    Terbeck, Sylvia; Chesterman, Paul; Fischmeister, Florian Ph S; Leodolter, Ulrich; Bauer, Herbert

    2008-08-15

    Attribution theory plays a central role in understanding cognitive processes that have emotional consequences; however, there has been very limited attention to its neural basis. After reviewing classical studies in social psychology in which attribution has been experimentally manipulated, we developed a new approach that allows the investigation of state attributions and emotional consequences using neuroscience methodologies. Participants responded to the Eriksen flanker task, but, in order to maintain the participant's beliefs about the nature of the task and to produce a significant number of error responses, an adaptive algorithm tuned the available time to respond such that, dependent on the subject's current performance, the negative feedback rate was held at chance level. In order to initiate variation in attribution, participants were informed that one and the same task was either easy or difficult. As a result of these two different instructions, the two groups differed significantly in error attribution only on the locus of causality dimension. Additionally, attributions were found to be stable over a large number of trials, while accuracy and reaction time remained the same. Thus, the new paradigm is particularly suitable for cognitive neuroscience research that evaluates brain-behaviour relationships of higher-order processes in 'simulated achievement settings'.
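
    The abstract only states that an adaptive algorithm tuned the response deadline so that negative feedback stayed near chance level; the simple up/down staircase below is one plausible sketch of such a scheme, with every parameter (step size, starting deadline, simulated response times) assumed rather than taken from the paper.

```python
# Plausible sketch (not the published algorithm) of an adaptive response
# deadline that holds the negative-feedback rate near 50%: tighten the
# deadline after timely responses, relax it after failures.
import random

random.seed(0)
deadline = 0.600          # s, starting response deadline (assumed)
step = 0.020              # s, adjustment per trial (assumed)
feedback = []

for trial in range(200):
    rt = random.gauss(0.450, 0.080)        # simulated response time
    correct_in_time = rt <= deadline
    feedback.append(correct_in_time)
    # 1-up/1-down adjustment converges toward ~50% negative feedback.
    deadline += -step if correct_in_time else step

print(f"final deadline: {deadline * 1000:.0f} ms, "
      f"negative feedback rate: {1 - sum(feedback) / len(feedback):.2f}")
```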

  3. Error and attack tolerance of complex networks

    NASA Astrophysics Data System (ADS)

    Albert, Réka; Jeong, Hawoong; Barabási, Albert-László

    2000-07-01

    Many complex systems display a surprising degree of tolerance against errors. For example, relatively simple organisms grow, persist and reproduce despite drastic pharmaceutical or environmental interventions, an error tolerance attributed to the robustness of the underlying metabolic network. Complex communication networks display a surprising degree of robustness: although key components regularly malfunction, local failures rarely lead to the loss of the global information-carrying ability of the network. The stability of these and other complex systems is often attributed to the redundant wiring of the functional web defined by the systems' components. Here we demonstrate that error tolerance is not shared by all redundant systems: it is displayed only by a class of inhomogeneously wired networks, called scale-free networks, which include the World-Wide Web, the Internet, social networks and cells. We find that such networks display an unexpected degree of robustness, the ability of their nodes to communicate being unaffected even by unrealistically high failure rates. However, error tolerance comes at a high price in that these networks are extremely vulnerable to attacks (that is, to the selection and removal of a few nodes that play a vital role in maintaining the network's connectivity). Such error tolerance and attack vulnerability are generic properties of communication networks.
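
    The error-versus-attack contrast described above is easy to reproduce in a compact simulation; the sketch below removes 20% of the nodes of a scale-free (Barabási-Albert) graph either at random ("errors") or in decreasing order of degree ("attack") and compares the largest connected component. Graph size and removal fraction are illustrative choices, not values from the paper.

```python
# Compact sketch of error vs. attack tolerance on a scale-free graph: compare
# the giant component (as a fraction of the original 2000 nodes) after random
# node failures versus degree-targeted removal. Parameters are illustrative.
import random
import networkx as nx

random.seed(0)
G = nx.barabasi_albert_graph(n=2000, m=2, seed=0)
n_remove = int(0.2 * G.number_of_nodes())

def giant_fraction(graph):
    return max(len(c) for c in nx.connected_components(graph)) / 2000

failures = G.copy()
failures.remove_nodes_from(random.sample(list(failures.nodes), n_remove))

attack = G.copy()
by_degree = sorted(attack.degree, key=lambda kv: kv[1], reverse=True)
attack.remove_nodes_from([node for node, _ in by_degree[:n_remove]])

print(f"giant component after random failures: {giant_fraction(failures):.2f}")
print(f"giant component after targeted attack: {giant_fraction(attack):.2f}")
```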

  4. Medication errors: the role of the patient.

    PubMed

    Britten, Nicky

    2009-06-01

    1. Patients and their carers will usually be the first to notice any observable problems resulting from medication errors. They will probably be unable to distinguish between medication errors, adverse drug reactions, or 'side effects'. 2. Little is known about how patients understand drug related problems or how they make attributions of adverse effects. Some research suggests that patients' cognitive models of adverse drug reactions bear a close relationship to models of illness perception. 3. Attributions of adverse drug reactions are related to people's previous experiences and to their level of education. The evidence suggests that on the whole patients' reports of adverse drug reactions are accurate. However, patients do not report all the problems they perceive and are more likely to report those that they do perceive as severe. Patients may not report problems attributed to their medications if they are fearful of doctors' reactions. Doctors may respond inappropriately to patients' concerns, for example by ignoring them. Some authors have proposed the use of a symptom checklist to elicit patients' reports of suspected adverse drug reactions. 4. Many patients want information about adverse drug effects, and the challenge for the professional is to judge how much information to provide and the best way of doing so. Professionals' inappropriate emphasis on adherence may be dangerous when a medication error has occurred. 5. Recent NICE guidelines recommend that professionals should ask patients if they have any concerns about their medicines, and this approach is likely to yield information conducive to the identification of medication errors.

  5. Behind Human Error: Cognitive Systems, Computers and Hindsight

    DTIC Science & Technology

    1994-12-01

    [Fragments from the report's front matter and table of contents: CSERIAC is a Department of Defense Information Analysis Center sponsored by the Defense…; it organizes and/or conducts workshops and conferences. Listed section headings include "Neutral Observer Criteria," "Error Analysis as Causal Judgment," "Error as Information," and "A Fundamental Surprise." The text cites (Kahneman, 1974) and risk analysis (Dougherty and Fragola, 1990), and notes that the discussions have continued in a wide variety of forums.]

  6. Artificial Neural Networks applied to estimate permeability, porosity and intrinsic attenuation using seismic attributes and well-log data

    NASA Astrophysics Data System (ADS)

    Iturrarán-Viveros, Ursula; Parra, Jorge O.

    2014-08-01

    Permeability and porosity are two fundamental reservoir properties which relate to the amount of fluid contained in a reservoir and its ability to flow. The intrinsic attenuation is another important parameter since it is related to porosity, permeability, oil and gas saturation and these parameters significantly affect the seismic signature of a reservoir. We apply Artificial Neural Network (ANN) models to predict permeability (k) and porosity (ϕ) for a carbonate aquifer in southeastern Florida and to predict intrinsic attenuation (1/Q) for a sand-shale oil reservoir in northeast Texas. In this study, the Gamma test (a revolutionary estimator of the noise in a data set) has been used as a mathematically non-parametric nonlinear smooth modeling tool to choose the best input combination of seismic attributes to estimate k and ϕ, and the best combination of well-logs to estimate 1/Q. This saves time during the construction and training of ANN models and also sets a lower bound for the mean squared error to prevent over-training. The Neural Network method successfully delineates a highly permeable zone that corresponds to a high water production in the aquifer. The Gamma test found nonlinear relations that were not visible to linear regression allowing us to generalize the ANN estimations of k, ϕ and 1/Q for their respective sets of patterns that were not used during the learning phase.
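
    A minimal version of the Gamma test used above for input selection can be sketched as follows (a simplified reading of the published method, run on synthetic data): for k = 1..K nearest neighbours, pair the mean squared input distance delta(k) with half the mean squared output difference gamma(k); the intercept of the regression of gamma on delta estimates the output noise variance, i.e. the lowest mean squared error any smooth model of those inputs could reach.

```python
# Simplified Gamma-test sketch on synthetic "seismic attribute" data: the
# regression intercept estimates the noise variance in the target.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(5)
n, noise_std = 1000, 0.1
X = rng.uniform(-1, 1, size=(n, 3))                 # stand-in input attributes
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + rng.normal(0, noise_std, n)

K = 10
nbrs = NearestNeighbors(n_neighbors=K + 1).fit(X)
dist, idx = nbrs.kneighbors(X)                      # column 0 is the point itself

delta = (dist[:, 1:] ** 2).mean(axis=0)             # delta(k), k = 1..K
gamma = 0.5 * ((y[idx[:, 1:]] - y[:, None]) ** 2).mean(axis=0)

slope, intercept = np.polyfit(delta, gamma, 1)
print(f"Gamma (noise variance) estimate: {intercept:.4f}  "
      f"(true noise variance: {noise_std**2:.4f})")
```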

  7. Controlling the non-linear intracavity dynamics of large He-Ne laser gyroscopes

    NASA Astrophysics Data System (ADS)

    Cuccato, D.; Beghi, A.; Belfi, J.; Beverini, N.; Ortolan, A.; Di Virgilio, A.

    2014-02-01

    A model based on Lamb's theory of gas lasers is applied to a He-Ne ring laser (RL) gyroscope to estimate and remove the laser dynamics contribution from the rotation measurements. The intensities of the counter-propagating laser beams exiting one cavity mirror are continuously observed together with a monitor of the laser population inversion. These observables, once properly calibrated with a dedicated procedure, allow us to estimate cold cavity and active medium parameters driving the main part of the non-linearities of the system. The quantitative estimation of intrinsic non-reciprocal effects due to cavity and active medium non-linear coupling plays a key role in testing fundamental symmetries of space-time with RLs. The parameter identification and noise subtraction procedure has been verified by means of a Monte Carlo study of the system, and experimentally tested on the G-PISA RL oriented with the normal to the ring plane almost parallel to the Earth's rotation axis. In this configuration the Earth's rotation rate provides the maximum Sagnac effect while the contribution of the orientation error is reduced to a minimum. After the subtraction of laser dynamics by a Kalman filter, the relative systematic errors of G-PISA reduce from 50 to 5 parts in 10³ and can be attributed to the residual uncertainties on geometrical scale factor and orientation of the ring.

  8. Back to basics: The effects of block vs. interleaved trial administration on pro- and anti-saccade performance

    PubMed Central

    Zeligman, Liran; Zivotofsky, Ari Z.

    2017-01-01

    The pro- and anti-saccade task (PAT) is a widely used tool in the study of overt and covert attention, with a promising potential role in neurocognitive and psychiatric assessment. However, specific PAT protocols can vary significantly between labs, potentially resulting in large variations in findings across studies. In light of recent calls for standardization of the PAT, the current study's objective was to systematically and purposely evaluate the effects of block vs. interleaved administration—a fundamental consideration—on PAT measures in a within-subject design. Additionally, this study evaluated whether measures of a Posner-type cueing paradigm parallel measures of the PAT paradigm. As hypothesized, results indicate that PAT performance is highly susceptible to administration mode. Interleaved administration resulted in larger error rates not only for anti-saccades (blocks: M = 22%; interleaved: M = 42%) but also for pro-saccades (blocks: M = 5%; interleaved: M = 12%). This difference between block and interleaved administration was significantly larger for anti-saccades than for pro-saccades and cannot be attributed to a 'speed/accuracy tradeoff'. Interleaved administration produced larger pro/anti-saccade differences in error rates, while block administration produced larger latency differences. The results question the purely reflexive nature of pro-saccades. These results are discussed and compared to previous studies that included within-subject data from block and interleaved trials. PMID:28222173

  9. Minimizing Accidents and Risks in High Adventure Outdoor Pursuits.

    ERIC Educational Resources Information Center

    Meier, Joel

    The fundamental dilemma in adventure programming is eliminating unreasonable risks to participants without also reducing levels of excitement, challenge, and stress. Most accidents are caused by a combination of unsafe conditions, unsafe acts, and errors in judgment. The best and only way to minimize critical human error in adventure programs is…

  10. Credit Assignment in a Motor Decision Making Task Is Influenced by Agency and Not Sensory Prediction Errors.

    PubMed

    Parvin, Darius E; McDougle, Samuel D; Taylor, Jordan A; Ivry, Richard B

    2018-05-09

    Failures to obtain reward can occur from errors in action selection or action execution. Recently, we observed marked differences in choice behavior when the failure to obtain a reward was attributed to errors in action execution compared with errors in action selection (McDougle et al., 2016). Specifically, participants appeared to solve this credit assignment problem by discounting outcomes in which the absence of reward was attributed to errors in action execution. Building on recent evidence indicating relatively direct communication between the cerebellum and basal ganglia, we hypothesized that cerebellar-dependent sensory prediction errors (SPEs), a signal indicating execution failure, could attenuate value updating within a basal ganglia-dependent reinforcement learning system. Here we compared the SPE hypothesis to an alternative, "top-down" hypothesis in which changes in choice behavior reflect participants' sense of agency. In two experiments with male and female human participants, we manipulated the strength of SPEs, along with the participants' sense of agency in the second experiment. The results showed that, whereas the strength of SPE had no effect on choice behavior, participants were much more likely to discount the absence of rewards under conditions in which they believed the reward outcome depended on their ability to produce accurate movements. These results provide strong evidence that SPEs do not directly influence reinforcement learning. Instead, a participant's sense of agency appears to play a significant role in modulating choice behavior when unexpected outcomes can arise from errors in action execution. SIGNIFICANCE STATEMENT When learning from the outcome of actions, the brain faces a credit assignment problem: Failures of reward can be attributed to poor choice selection or poor action execution. Here, we test a specific hypothesis that execution errors are implicitly signaled by cerebellar-based sensory prediction errors. We evaluate this hypothesis and compare it with a more "top-down" hypothesis in which the modulation of choice behavior from execution errors reflects participants' sense of agency. We find that sensory prediction errors have no significant effect on reinforcement learning. Instead, instructions influencing participants' belief of causal outcomes appear to be the main factor influencing their choice behavior. Copyright © 2018 the authors 0270-6474/18/384521-10$15.00/0.

  11. Eliciting the Functional Processes of Apologizing for Errors in Health Care: Developing an Explanatory Model of Apology.

    PubMed

    Prothero, Marie M; Morse, Janice M

    2017-01-01

    The purpose of this article was to analyze the development of the concept of apology in the context of errors in health care, including the administrative response, policy, and the format/process of the subsequent apology. Using pragmatic utility and a systematic review of the literature, 29 articles and one book provided attributes involved in apologizing. Analytic questions were developed to guide the data synthesis, and the types of apologies used in different circumstances were identified. The antecedents, attributes, and outcomes of apologizing were identified. A model was constructed illustrating the components of a complete apology, other types of apologies, and the ramifications/outcomes of each. The development of formal policies for correcting medical errors through apologies is recommended. Defining the essential elements of apology is the first step in establishing a just culture in health care. Respect for patient-centered care reduces retaliatory consequences following an error and may even help restore the physician-patient relationship.

  12. Verification results for the Spectral Ocean Wave Model (SOWM) by means of significant wave height measurements made by the GEOS-3 spacecraft

    NASA Technical Reports Server (NTRS)

    Pierson, W. J.; Salfi, R. E.

    1978-01-01

    Significant wave heights estimated from the shape of the return-pulse waveform of the altimeter on GEOS-3 for forty-four orbit segments obtained during 1975 and 1976 are compared with the significant wave heights specified by the spectral ocean wave model (SOWM), which is the presently operational numerical wave forecasting model at the Fleet Numerical Weather Central. Except for a number of orbit segments with poor agreement and larger errors, the SOWM specifications tended to be biased from 0.5 to 1.0 meters too low and to have RMS errors of 1.0 to 1.4 meters. The less frequent, larger errors can be attributed to poor wind data for some parts of the Northern Hemisphere oceans. The bias can be attributed to the somewhat too light winds used to generate the waves in the model. Other sources of error are identified in the equatorial and trade wind areas.
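
    The bias and RMS error figures quoted above are the standard paired statistics between model and altimeter significant wave heights; a minimal sketch follows, with invented sample values.

```python
# How the bias and RMS error of model wave heights against altimeter data are
# typically computed; the arrays here are invented for illustration.
import numpy as np

hs_altimeter = np.array([2.1, 3.4, 1.8, 4.0, 2.7])   # GEOS-3 significant wave height, m
hs_model     = np.array([1.5, 2.6, 1.2, 3.1, 2.0])   # SOWM specification, m

bias = np.mean(hs_model - hs_altimeter)              # negative => model biased low
rmse = np.sqrt(np.mean((hs_model - hs_altimeter) ** 2))
print(f"bias = {bias:.2f} m, RMS error = {rmse:.2f} m")
```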

  13. Privacy-Preserving Evaluation of Generalization Error and Its Application to Model and Attribute Selection

    NASA Astrophysics Data System (ADS)

    Sakuma, Jun; Wright, Rebecca N.

    Privacy-preserving classification is the task of learning or training a classifier on the union of privately distributed datasets without sharing the datasets. The emphasis of existing studies in privacy-preserving classification has primarily been on the design of privacy-preserving versions of particular data mining algorithms. However, in classification problems, preprocessing and postprocessing steps, such as model selection or attribute selection, play a prominent role in achieving higher classification accuracy. In this paper, we show that the generalization error of classifiers in privacy-preserving classification can be securely evaluated without sharing prediction results. Our main technical contribution is a new generalized Hamming distance protocol that is universally applicable to the preprocessing and postprocessing of various privacy-preserving classification problems, such as model selection for support vector machines and attribute selection in naive Bayes classification.

  14. CAUSES: Attribution of Surface Radiation Biases in NWP and Climate Models near the U.S. Southern Great Plains

    DOE PAGES

    Van Weverberg, K.; Morcrette, C. J.; Petch, J.; ...

    2018-02-28

    Many Numerical Weather Prediction (NWP) and climate models exhibit too warm lower tropospheres near the midlatitude continents. The warm bias has been shown to coincide with important surface radiation biases that likely play a critical role in the inception or the growth of the warm bias. This paper presents an attribution study on the net radiation biases in nine model simulations, performed in the framework of the CAUSES project (Clouds Above the United States and Errors at the Surface). Contributions from deficiencies in the surface properties, clouds, water vapor, and aerosols are quantified, using an array of radiation measurement stations near the Atmospheric Radiation Measurement Southern Great Plains site. Furthermore, an in-depth analysis is shown to attribute the radiation errors to specific cloud regimes. The net surface shortwave radiation is overestimated in all models throughout most of the simulation period. Cloud errors are shown to contribute most to this overestimation, although nonnegligible contributions from the surface albedo exist in most models. Missing deep cloud events and/or simulating deep clouds with too weak cloud radiative effects dominate in the cloud-related radiation errors. Some models have compensating errors between excessive occurrence of deep cloud but largely underestimating their radiative effect, while other models miss deep cloud events altogether. Surprisingly, even the latter models tend to produce too much and too frequent afternoon surface precipitation. This suggests that rather than issues with the triggering of deep convection, cloud radiative deficiencies are related to too weak convective cloud detrainment and too large precipitation efficiencies.

  15. CAUSES: Attribution of Surface Radiation Biases in NWP and Climate Models near the U.S. Southern Great Plains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Weverberg, K.; Morcrette, C. J.; Petch, J.

    Many Numerical Weather Prediction (NWP) and climate models exhibit too warm lower tropospheres near the midlatitude continents. The warm bias has been shown to coincide with important surface radiation biases that likely play a critical role in the inception or the growth of the warm bias. This paper presents an attribution study on the net radiation biases in nine model simulations, performed in the framework of the CAUSES project (Clouds Above the United States and Errors at the Surface). Contributions from deficiencies in the surface properties, clouds, water vapor, and aerosols are quantified, using an array of radiation measurement stations near the Atmospheric Radiation Measurement Southern Great Plains site. Furthermore, an in-depth analysis is shown to attribute the radiation errors to specific cloud regimes. The net surface shortwave radiation is overestimated in all models throughout most of the simulation period. Cloud errors are shown to contribute most to this overestimation, although nonnegligible contributions from the surface albedo exist in most models. Missing deep cloud events and/or simulating deep clouds with too weak cloud radiative effects dominate in the cloud-related radiation errors. Some models have compensating errors between excessive occurrence of deep cloud but largely underestimating their radiative effect, while other models miss deep cloud events altogether. Surprisingly, even the latter models tend to produce too much and too frequent afternoon surface precipitation. This suggests that rather than issues with the triggering of deep convection, cloud radiative deficiencies are related to too weak convective cloud detrainment and too large precipitation efficiencies.

  16. CAUSES: Attribution of Surface Radiation Biases in NWP and Climate Models near the U.S. Southern Great Plains

    NASA Astrophysics Data System (ADS)

    Van Weverberg, K.; Morcrette, C. J.; Petch, J.; Klein, S. A.; Ma, H.-Y.; Zhang, C.; Xie, S.; Tang, Q.; Gustafson, W. I.; Qian, Y.; Berg, L. K.; Liu, Y.; Huang, M.; Ahlgrimm, M.; Forbes, R.; Bazile, E.; Roehrig, R.; Cole, J.; Merryfield, W.; Lee, W.-S.; Cheruy, F.; Mellul, L.; Wang, Y.-C.; Johnson, K.; Thieman, M. M.

    2018-04-01

    Many Numerical Weather Prediction (NWP) and climate models exhibit too warm lower tropospheres near the midlatitude continents. The warm bias has been shown to coincide with important surface radiation biases that likely play a critical role in the inception or the growth of the warm bias. This paper presents an attribution study on the net radiation biases in nine model simulations, performed in the framework of the CAUSES project (Clouds Above the United States and Errors at the Surface). Contributions from deficiencies in the surface properties, clouds, water vapor, and aerosols are quantified, using an array of radiation measurement stations near the Atmospheric Radiation Measurement Southern Great Plains site. Furthermore, an in-depth analysis is shown to attribute the radiation errors to specific cloud regimes. The net surface shortwave radiation is overestimated in all models throughout most of the simulation period. Cloud errors are shown to contribute most to this overestimation, although nonnegligible contributions from the surface albedo exist in most models. Missing deep cloud events and/or simulating deep clouds with too weak cloud radiative effects dominate in the cloud-related radiation errors. Some models have compensating errors between excessive occurrence of deep cloud but largely underestimating their radiative effect, while other models miss deep cloud events altogether. Surprisingly, even the latter models tend to produce too much and too frequent afternoon surface precipitation. This suggests that rather than issues with the triggering of deep convection, cloud radiative deficiencies are related to too weak convective cloud detrainment and too large precipitation efficiencies.

  17. A critical eye: praise directed toward traits increases children's eye fixations on errors and decreases motivation.

    PubMed

    Zentall, Shannon R; Morris, Bradley J

    2012-12-01

    Although there is evidence that praise of different types (i.e., generic vs. nongeneric) influences motivation, it is unclear how this occurs. Generic praise (e.g., "You are smart") conveys that a child possesses a trait responsible for their performance, whereas nongeneric praise (e.g., "You worked hard") conveys that performance is effort-based. Because praise conveys the basis for success, praise may change the interpretation and salience of errors. Specifically, generic praise may highlight the threatening nature of error (i.e., the child does not possess this trait). Because attention is drawn to threats in the environment, we expected generic praise to increase attention to error. We used eyetracking to measure implicit responses to errors (i.e., visual attention: fixation counts and durations) in order to determine the relation between visual attention and verbal reports of motivation (persistence and self-evaluations) in 30 four- to seven-year-old children. Children first saw pictures attributed to them, for which they received either generic or nongeneric praise. The children then saw pictures attributed to them that contained errors--that is, missing features. As a pretest and posttest, the children saw pictures that were "drawn by other children," half of which contained errors. The results indicated that children who received generic praise ("you are a good drawer") produced more and longer fixations on errors, both their "own" and on "other children's," than did children who received nongeneric praise ("you did a good job drawing"). More fixations on errors were related to lower persistence and lower self-evaluations. These results suggest that generic praise increases attention to errors because error threatens the possession of a positive trait.

  18. Comparison of cluster-based and source-attribution methods for estimating transmission risk using large HIV sequence databases.

    PubMed

    Le Vu, Stéphane; Ratmann, Oliver; Delpech, Valerie; Brown, Alison E; Gill, O Noel; Tostevin, Anna; Fraser, Christophe; Volz, Erik M

    2018-06-01

    Phylogenetic clustering of HIV sequences from a random sample of patients can reveal epidemiological transmission patterns, but interpretation is hampered by limited theoretical support, and the statistical properties of clustering analysis remain poorly understood. Alternatively, source attribution methods allow fitting of HIV transmission models and thereby quantify aspects of disease transmission. A simulation study was conducted to assess error rates of clustering methods for detecting transmission risk factors. We modeled HIV epidemics among men who have sex with men and generated phylogenies comparable to those that can be obtained from HIV surveillance data in the UK. Clustering and source attribution approaches were applied to evaluate their ability to identify patient attributes as transmission risk factors. We find that commonly used methods show a misleading association between cluster size or odds of clustering and covariates that are correlated with time since infection, regardless of their influence on transmission. Clustering methods usually have higher error rates and lower sensitivity than the source-attribution method for identifying transmission risk factors, but neither method provides robust estimates of transmission risk ratios. Source attribution can alleviate the drawbacks of phylogenetic clustering, but formal population genetic modeling may be required to estimate quantitative transmission risk factors. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  19. Is Causal Attribution of Sexual Deviance the Source of Thinking Errors?

    ERIC Educational Resources Information Center

    Paulauskas, Roland

    2013-01-01

    Adult and juvenile offenders exhibit a number of cognitive distortions related to sexually offending behaviors. The latter may be attributed to their developmental deficiencies, the result of operant conditioning, psychological self-defense mechanisms and biases, influence of negative environmental factors or criminal subculture. A group of…

  20. QUANTIFYING UNCERTAINTY IN NET PRIMARY PRODUCTION MEASUREMENTS

    EPA Science Inventory

    Net primary production (NPP, e.g., g m-2 yr-1), a key ecosystem attribute, is estimated from a combination of other variables, e.g. standing crop biomass at several points in time, each of which is subject to errors in their measurement. These errors propagate as the variables a...
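
    A minimal Monte Carlo sketch of the error propagation described here, assuming NPP is estimated as the difference of two biomass measurements with independent errors; all values are invented.

```python
# Sketch of how measurement errors propagate into an NPP estimate formed from
# biomass measured at two points in time; values and error levels are invented.
import numpy as np

rng = np.random.default_rng(0)
b1_mean, b2_mean = 950.0, 1130.0        # standing crop biomass, g m^-2
b_sd = 40.0                             # assumed measurement standard deviation

b1 = rng.normal(b1_mean, b_sd, 10_000)  # Monte Carlo draws for each measurement
b2 = rng.normal(b2_mean, b_sd, 10_000)
npp = b2 - b1                           # simplest NPP estimator, g m^-2 yr^-1

print(f"NPP = {npp.mean():.0f} +/- {npp.std():.0f} g m^-2 yr^-1")
# With independent errors the spread is sqrt(2) * b_sd, larger than either input error.
```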

  1. Measurement variability error for estimates of volume change

    Treesearch

    James A. Westfall; Paul L. Patterson

    2007-01-01

    Using quality assurance data, measurement variability distributions were developed for attributes that affect tree volume prediction. Random deviations from the measurement variability distributions were applied to 19381 remeasured sample trees in Maine. The additional error due to measurement variation and measurement bias was estimated via a simulation study for...
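
    A sketch of the simulation idea, assuming a made-up volume equation and made-up measurement-variability distributions: random deviations are added to the measured attributes, and the resulting spread of the volume-change estimate is examined.

```python
# Sketch of the simulation idea: draw random measurement deviations for the
# attributes entering a volume equation and see how they inflate the error of
# a volume-change estimate. The volume equation and error levels are invented.
import numpy as np

rng = np.random.default_rng(0)

def volume(dbh_cm, ht_m):
    # hypothetical combined-variable volume equation, m^3
    return 0.00006 * dbh_cm ** 2 * ht_m

dbh1, ht1 = 25.0, 18.0          # first measurement
dbh2, ht2 = 27.5, 19.5          # remeasurement five years later
true_change = volume(dbh2, ht2) - volume(dbh1, ht1)

# assumed measurement variability: s.d. of 0.3 cm on diameter, 1.0 m on height
sims = []
for _ in range(10_000):
    v1 = volume(dbh1 + rng.normal(0, 0.3), ht1 + rng.normal(0, 1.0))
    v2 = volume(dbh2 + rng.normal(0, 0.3), ht2 + rng.normal(0, 1.0))
    sims.append(v2 - v1)

sims = np.array(sims)
print(f"true change {true_change:.4f} m^3, "
      f"simulated s.d. due to measurement variation {sims.std():.4f} m^3")
```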

  2. Demonstration of a quantum error detection code using a square lattice of four superconducting qubits

    PubMed Central

    Córcoles, A.D.; Magesan, Easwar; Srinivasan, Srikanth J.; Cross, Andrew W.; Steffen, M.; Gambetta, Jay M.; Chow, Jerry M.

    2015-01-01

    The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code. PMID:25923200

  3. Demonstration of a quantum error detection code using a square lattice of four superconducting qubits.

    PubMed

    Córcoles, A D; Magesan, Easwar; Srinivasan, Srikanth J; Cross, Andrew W; Steffen, M; Gambetta, Jay M; Chow, Jerry M

    2015-04-29

    The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code.
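
    The following classical simulation (mine, not the experiment's code) illustrates the principle behind the protocol: on a Bell state, an X-type error flips the ZZ parity, a Z-type error flips the XX parity, and a Y error flips both, so the pair of parity checks detects an arbitrary single-qubit error.

```python
# Numerical illustration (not the experiment's code) of why ZZ and XX parity
# checks on a Bell state detect an arbitrary single-qubit error: X-type errors
# flip the ZZ parity, Z-type errors flip the XX parity, and Y flips both.
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
ZZ, XX = np.kron(Z, Z), np.kron(X, X)

for name, E in [("no error", I), ("X on qubit 1", X), ("Z on qubit 1", Z), ("Y on qubit 1", Y)]:
    state = np.kron(E, I) @ bell
    zz = np.real(state.conj() @ ZZ @ state)
    xx = np.real(state.conj() @ XX @ state)
    print(f"{name:14s}  <ZZ> = {zz:+.0f}  <XX> = {xx:+.0f}")
```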

  4. Presentation of nursing diagnosis content in fundamentals of nursing textbooks.

    PubMed

    Mahon, S M; Spies, M A; Aukamp, V; Barrett, J T; Figgins, M J; Meyer, G A; Young, V K

    1997-01-01

    The technique and rationale for the use of nursing diagnosis generally are introduced early in the undergraduate curriculum. The three purposes of this descriptive study were to describe the general characteristics and presentation of content on nursing diagnosis in fundamentals of nursing textbooks; describe how the content from the theoretical chapter(s) in nursing diagnosis is carried through in the clinical chapters; and describe how content on diagnostic errors is presented. Although most of the textbooks presented content on nursing diagnosis in a similar fashion, the clinical chapters of the books did not follow the same pattern. Content on diagnostic errors was inconsistent. Educators may find this an effective methodology for reviewing textbooks.

  5. Acoustic sensor for real-time control for the inductive heating process

    DOEpatents

    Kelley, John Bruce; Lu, Wei-Yang; Zutavern, Fred J.

    2003-09-30

    Disclosed is a system and method for providing closed-loop control of the heating of a workpiece by an induction heating machine, including generating an acoustic wave in the workpiece with a pulsed laser; optically measuring displacements of the surface of the workpiece in response to the acoustic wave; calculating a sub-surface material property by analyzing the measured surface displacements; creating an error signal by comparing an attribute of the calculated sub-surface material properties with a desired attribute; and reducing the error signal below an acceptable limit by adjusting, in real-time, as often as necessary, the operation of the inductive heating machine.
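
    A schematic of the claimed closed loop, with an invented plant response and a simple proportional adjustment standing in for the real controller: the measured sub-surface attribute is compared with the desired value to form an error signal, and the heating power is adjusted until the error falls below an acceptable limit.

```python
# Sketch of the described closed loop (values and plant model are invented):
# measure a sub-surface attribute acoustically, form an error signal against the
# desired value, and adjust induction power until the error is within tolerance.
desired_hardness = 55.0          # desired sub-surface attribute (arbitrary units)
power = 10.0                     # induction heating power setting
gain = 0.05                      # simple proportional controller gain
tolerance = 0.2

def measured_hardness(power):
    # stand-in for "fire pulsed laser, measure surface displacement, invert for
    # the sub-surface property"; here just a made-up monotone response
    return 30.0 + 2.0 * power

error = desired_hardness - measured_hardness(power)
while abs(error) > tolerance:
    power += gain * error                     # adjust the heating machine
    error = desired_hardness - measured_hardness(power)

print(f"converged: power = {power:.2f}, residual error = {error:.3f}")
```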

  6. Causes and Prevention of Laparoscopic Bile Duct Injuries

    PubMed Central

    Way, Lawrence W.; Stewart, Lygia; Gantert, Walter; Liu, Kingsway; Lee, Crystine M.; Whang, Karen; Hunter, John G.

    2003-01-01

    Objective To apply human performance concepts in an attempt to understand the causes of and prevent laparoscopic bile duct injury. Summary Background Data Powerful conceptual advances have been made in understanding the nature and limits of human performance. Applying these findings in high-risk activities, such as commercial aviation, has allowed the work environment to be restructured to substantially reduce human error. Methods The authors analyzed 252 laparoscopic bile duct injuries according to the principles of the cognitive science of visual perception, judgment, and human error. The injury distribution was class I, 7%; class II, 22%; class III, 61%; and class IV, 10%. The data included operative radiographs, clinical records, and 22 videotapes of original operations. Results The primary cause of error in 97% of cases was a visual perceptual illusion. Faults in technical skill were present in only 3% of injuries. Knowledge and judgment errors were contributory but not primary. Sixty-four injuries (25%) were recognized at the index operation; the surgeon identified the problem early enough to limit the injury in only 15 (6%). In class III injuries the common duct, erroneously believed to be the cystic duct, was deliberately cut. This stemmed from an illusion of object form due to a specific uncommon configuration of the structures and the heuristic nature (unconscious assumptions) of human visual perception. The videotapes showed the persuasiveness of the illusion, and many operative reports described the operation as routine. Class II injuries resulted from a dissection too close to the common hepatic duct. Fundamentally an illusion, it was contributed to in some instances by working too deep in the triangle of Calot. Conclusions These data show that errors leading to laparoscopic bile duct injuries stem principally from misperception, not errors of skill, knowledge, or judgment. The misperception was so compelling that in most cases the surgeon did not recognize a problem. Even when irregularities were identified, corrective feedback did not occur, which is characteristic of human thinking under firmly held assumptions. These findings illustrate the complexity of human error in surgery while simultaneously providing insights. They demonstrate that automatically attributing technical complications to behavioral factors that rely on the assumption of control is likely to be wrong. Finally, this study shows that there are only a few points within laparoscopic cholecystectomy where the complication-causing errors occur, which suggests that focused training to heighten vigilance might be able to decrease the incidence of bile duct injury. PMID:12677139

  7. Factors associated with disclosure of medical errors by housestaff.

    PubMed

    Kronman, Andrea C; Paasche-Orlow, Michael; Orlander, Jay D

    2012-04-01

    Attributes of the organisational culture of residency training programmes may impact patient safety. Training environments are complex, composed of clinical teams, residency programmes, and clinical units. We examined the relationship between residents' perceptions of their training environment and disclosure of or apology for their worst error. Anonymous, self-administered surveys were distributed to Medicine and Surgery residents at Boston Medical Center in 2005. Surveys asked residents to describe their worst medical error, and to answer selected questions from validated surveys measuring elements of working environments that promote learning from error. Subscales measured the microenvironments of the clinical team, residency programme, and clinical unit. Univariate and bivariate statistical analyses examined relationships between trainee characteristics, their perceived learning environment(s), and their responses to the error. Out of 109 surveys distributed to residents, 99 surveys were returned (91% overall response rate), two incomplete surveys were excluded, leaving 97: 61% internal medicine, 39% surgery, 59% male residents. While 31% reported apologising for the situation associated with the error, only 17% reported disclosing the error to patients and/or family. More male residents disclosed the error than female residents (p=0.04). Surgery residents scored higher on the subscales of safety culture pertaining to the residency programme (p=0.02) and managerial commitment to safety (p=0.05). Our Medical Culture Summary score was positively associated with disclosure (p=0.04) and apology (p=0.05). Factors in the learning environments of residents are associated with responses to medical errors. Organisational safety culture can be measured, and used to evaluate environmental attributes of clinical training that are associated with disclosure of, and apology for, medical error.

  8. The Importance of the Assumption of Uncorrelated Errors in Psychometric Theory

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.; Patelis, Thanos

    2015-01-01

    A critical discussion of the assumption of uncorrelated errors in classical psychometric theory and its applications is provided. It is pointed out that this assumption is essential for a number of fundamental results and underlies the concept of parallel tests, the Spearman-Brown's prophecy and the correction for attenuation formulas as well as…
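
    For reference, the two results named above can be stated in their standard classical test theory form (standard formulas, not quoted from the article); both derivations rest on the uncorrelated-errors assumption.

```latex
% Spearman-Brown prophecy: reliability of a test lengthened by a factor k,
% given the reliability rho_xx' of the original test
\rho_{kk'} = \frac{k\,\rho_{xx'}}{1 + (k-1)\,\rho_{xx'}}

% Correction for attenuation: correlation between true scores, recovered from
% the observed correlation and the two reliabilities
\rho_{T_X T_Y} = \frac{\rho_{XY}}{\sqrt{\rho_{XX'}\,\rho_{YY'}}}
```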

  9. Analysis of nonlocal thermodynamic equilibrium CO 4.7 μm fundamental, isotopic, and hot band emissions measured by the Michelson Interferometer for Passive Atmospheric Sounding on Envisat

    NASA Astrophysics Data System (ADS)

    Funke, B.; López-Puertas, M.; Bermejo-Pantaleón, D.; von Clarmann, T.; Stiller, G. P.; Höpfner, M.; Grabowski, U.; Kaufmann, M.

    2007-06-01

    Nonlocal thermodynamic equilibrium (non-LTE) simulations of the 12C16O(1 → 0) fundamental band, the 12C16O(2 → 1) hot band, and the isotopic 13C16O(1 → 0) band performed with the Generic Radiative Transfer and non-LTE population Algorithm (GRANADA) and the Karlsruhe Optimized and Precise Radiative Transfer Algorithm (KOPRA) have been compared to spectrally resolved 4.7 μm radiances measured by the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS). The performance of the non-LTE simulation has been assessed in terms of band radiance ratios in order to avoid a compensation of possible non-LTE model errors by retrieval errors in the CO abundances inferred from MIPAS data with the same non-LTE algorithms. The agreement with the measurements is within 5% for the fundamental band and within 10% for the hot band. Simulated 13C16O radiances agree with the measurements within the instrumental noise error. Solar reflectance at the surface or clouds has been identified as an important additional excitation mechanism for the CO(2) state. The study represents a thorough validation of the non-LTE scheme used in the retrieval of CO abundances from MIPAS data.

  10. [Errors in Peruvian medical journals references].

    PubMed

    Huamaní, Charles; Pacheco-Romero, José

    2009-01-01

    References are fundamental to our studies; an adequate selection is as important as an adequate description. The objective was to determine the number of errors in a sample of references found in Peruvian medical journals. We reviewed 515 references from scientific papers, selected by systematic randomized sampling, and corroborated the reference information against the original document or its citation in PubMed, LILACS or SciELO-Peru. We found errors in 47.6% (245) of the references, identifying 372 errors of various types; the most frequent were errors in presentation style (120), authorship (100) and title (100), mainly due to spelling mistakes (91). The percentage of reference errors was high, and the errors were varied and often multiple. We suggest systematic revision of references during the editorial process, as well as extending the discussion of this theme. Keywords: references, periodicals, research, bibliometrics.

  11. Error analysis of mathematical problems on TIMSS: A case of Indonesian secondary students

    NASA Astrophysics Data System (ADS)

    Priyani, H. A.; Ekawati, R.

    2018-01-01

    Indonesian students' competence in solving mathematical problems is still considered weak, as shown by the results of international assessments such as TIMSS. This might be caused by the various types of errors students make. Hence, this study aimed at identifying students' errors in solving TIMSS mathematical problems on the topic of numbers, which is considered a fundamental concept in mathematics. The study applied descriptive qualitative analysis. The subjects were the three students with the most errors on the test indicators, drawn from a group of 34 eighth graders. Data were obtained through a paper-and-pencil test and student interviews. The error analysis indicated that, in solving Applying-level problems, the errors students made were operational errors. For Reasoning-level problems, three types of errors were made: conceptual errors, operational errors and principle errors. Analysis of the causes of students' errors showed that students did not comprehend the mathematical problems given.

  12. 26 CFR 301.6404-2 - Abatement of interest.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... taxes) attributable in whole or in part to any unreasonable error or delay by an officer or employee of... is attributable to an officer or employee of the IRS (acting in an official capacity) being erroneous... taxpayer and the IRS. Before the notice of deficiency is prepared and reviewed, a clerical employee...

  13. Metonymy and Reference-Point Errors in Novice Programming

    ERIC Educational Resources Information Center

    Miller, Craig S.

    2014-01-01

    When learning to program, students often mistakenly refer to an element that is structurally related to the element that they intend to reference. For example, they may indicate the attribute of an object when their intention is to reference the whole object. This paper examines these reference-point errors through the context of metonymy.…
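
    A small illustrative example (mine, not from the article) of such a reference-point error in Python: the student passes an attribute of the object where the whole object was intended.

```python
# Illustration (not from the article) of a reference-point error: the student
# intends to operate on the whole object but refers to one of its attributes
# instead, a metonymic substitution of the part for the whole.
class Account:
    def __init__(self, owner, balance):
        self.owner = owner
        self.balance = balance

def close(account):
    print(f"closing account of {account.owner}")

acct = Account("Ada", 100.0)

try:
    close(acct.balance)   # reference-point error: the attribute, not the object
except AttributeError as e:
    print("error:", e)

close(acct)               # the intended reference: the whole object
```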

  14. Variability in Stepping Direction Explains the Veering Behavior of Blind Walkers

    ERIC Educational Resources Information Center

    Kallie, Christopher S.; Schrater, Paul R.; Legge, Gordon E.

    2007-01-01

    Walking without vision results in veering, an inability to maintain a straight path that has important consequences for blind pedestrians. In this study, the authors addressed whether the source of veering in the absence of visual and auditory feedback is better attributed to errors in perceptual encoding or undetected motor error. Three…

  15. Active control of fan noise from a turbofan engine

    NASA Technical Reports Server (NTRS)

    Thomas, Russell H.; Burdisso, Ricardo A.; Fuller, Christopher R.; O'Brien, Walter F.

    1993-01-01

    A three-channel active control system is applied to an operational turbofan engine in order to reduce tonal noise produced by both the fan and the high-pressure compressor. The control approach is the feedforward filtered-x least-mean-square algorithm implemented on a digital signal processing board. Reference transducers mounted on the engine case provide blade-passing and harmonic frequency information to the controller. Error information is provided by large-area microphones placed in the acoustic far field. In order to minimize the error signal, the controller actuates loudspeakers mounted on the inlet to produce destructive interference. The sound pressure level of the fundamental tone of the fan was reduced using the three-channel controller by up to 16 dB over a 60 deg angle about the engine axis. A single-channel controller could produce reduction over a 30 deg angle. The experimental results show the control to be robust. Simultaneous control of two tones is done with parallel controllers. The fundamental and the first harmonic tones of the fan were controlled simultaneously with reductions of 12 dBA and 5 dBA, respectively, measured on the engine axis. Simultaneous control was also demonstrated for the fan fundamental and the high-pressure compressor fundamental tones.
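
    A minimal single-channel sketch of the filtered-x LMS algorithm named above, not the flight hardware implementation: the tonal reference is adaptively filtered and sent to the loudspeaker so that it destructively interferes with the fan tone at the error microphone. The tone frequency, secondary-path response, and step size are invented.

```python
# Minimal single-channel filtered-x LMS sketch (not the experimental system):
# a tonal reference at the blade-passing frequency is adaptively filtered and
# played through the secondary path so it destructively interferes with the
# fan tone at the error microphone.
import numpy as np

fs, f0, N = 8000, 400, 8000          # sample rate, hypothetical fan tone, samples
n = np.arange(N)
d = np.sin(2 * np.pi * f0 * n / fs)              # fan tone at the error microphone
x = np.sin(2 * np.pi * f0 * n / fs + 0.3)        # reference from the case transducer

s_hat = np.array([0.0, 0.8, 0.3])    # assumed (known) secondary-path impulse response
L, mu = 8, 0.01                      # adaptive filter length, step size
w = np.zeros(L)
xbuf = np.zeros(L)                   # reference history
ybuf = np.zeros(len(s_hat))          # control-output history (through secondary path)
xf = np.convolve(x, s_hat)[:N]       # reference filtered by the secondary-path estimate
xfbuf = np.zeros(L)

err = np.zeros(N)
for i in range(N):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[i]
    y = w @ xbuf                                 # anti-noise sent to the loudspeaker
    ybuf = np.roll(ybuf, 1); ybuf[0] = y
    err[i] = d[i] + s_hat @ ybuf                 # residual at the error microphone
    xfbuf = np.roll(xfbuf, 1); xfbuf[0] = xf[i]
    w -= mu * err[i] * xfbuf                     # FxLMS weight update

print("residual power, first vs last 1000 samples:",
      np.mean(err[:1000] ** 2), np.mean(err[-1000:] ** 2))
```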

  16. Selection of Optimal Auxiliary Soil Nutrient Variables for Cokriging Interpolation

    PubMed Central

    Song, Genxin; Zhang, Jing; Wang, Ke

    2014-01-01

    In order to explore the selection of the best auxiliary variables (BAVs) when using the Cokriging method for soil attribute interpolation, this paper investigated the selection of BAVs from terrain parameters, soil trace elements, and soil nutrient attributes when applying Cokriging interpolation to soil nutrients (organic matter, total N, available P, and available K). In total, 670 soil samples were collected in Fuyang, and the nutrient and trace element attributes of the soil samples were determined. Based on the spatial autocorrelation of soil attributes, the Digital Elevation Model (DEM) data for Fuyang were combined with these measurements to explore the relationships among terrain parameters, trace elements, and soil nutrient attributes. Variables with a high correlation to soil nutrient attributes were selected as BAVs for Cokriging interpolation of soil nutrients, and variables with poor correlation were selected as poor auxiliary variables (PAVs). The results of Cokriging interpolations using BAVs and PAVs were then compared. The results indicated that Cokriging interpolation with BAVs yielded more accurate results than Cokriging interpolation with PAVs (the mean absolute errors of the BAV interpolation results for organic matter, total N, available P, and available K were 0.020, 0.002, 7.616, and 12.4702, respectively, and the mean absolute errors of the PAV interpolation results were 0.052, 0.037, 15.619, and 0.037, respectively). The results indicated that Cokriging interpolation with BAVs can significantly improve the accuracy of Cokriging interpolation for soil nutrient attributes. This study provides meaningful guidance and reference for the selection of auxiliary parameters for the application of Cokriging interpolation to soil nutrient attributes. PMID:24927129

  17. Discovery of error-tolerant biclusters from noisy gene expression data.

    PubMed

    Gupta, Rohit; Rao, Navneet; Kumar, Vipin

    2011-11-24

    An important analysis performed on microarray gene-expression data is to discover biclusters, which denote groups of genes that are coherently expressed for a subset of conditions. Various biclustering algorithms have been proposed to find different types of biclusters from these real-valued gene-expression data sets. However, these algorithms suffer from several limitations such as inability to explicitly handle errors/noise in the data; difficulty in discovering small biclusters due to their top-down approach; inability of some of the approaches to find overlapping biclusters, which is crucial as many genes participate in multiple biological processes. Association pattern mining also produces biclusters as its result and can naturally address some of these limitations. However, traditional association mining only finds exact biclusters, which limits its applicability in real-life data sets where the biclusters may be fragmented due to random noise/errors. Moreover, as these methods only work with binary or boolean attributes, their application to gene-expression data requires transforming real-valued attributes to binary attributes, which often results in loss of information. Many past approaches have tried to address the issue of noise and handling real-valued attributes independently, but there is no systematic approach that addresses both of these issues together. In this paper, we first propose a novel error-tolerant biclustering model, 'ET-bicluster', and then propose a bottom-up heuristic-based mining algorithm to sequentially discover error-tolerant biclusters directly from real-valued gene-expression data. The efficacy of our proposed approach is illustrated by comparing it with a recent approach RAP in the context of two biological problems: discovery of functional modules and discovery of biomarkers. For the first problem, two real-valued S.Cerevisiae microarray gene-expression data sets are used to demonstrate that the biclusters obtained from the ET-bicluster approach not only recover a larger set of genes than those obtained from the RAP approach but also have higher functional coherence as evaluated using GO-based functional enrichment analysis. The statistical significance of the discovered error-tolerant biclusters, as estimated using two randomization tests, reveals that they are indeed biologically meaningful and statistically significant. For the second problem of biomarker discovery, we used four real-valued Breast Cancer microarray gene-expression data sets and evaluated the biomarkers obtained using MSigDB gene sets. The results obtained for both problems, functional module discovery and biomarker discovery, clearly signify the usefulness of the proposed ET-bicluster approach and illustrate the importance of explicitly incorporating noise/errors in discovering coherent groups of genes from gene-expression data.

  18. Darwinian demons, evolutionary complexity, and information maximization.

    PubMed

    Krakauer, David C

    2011-09-01

    Natural selection is shown to be an extended instance of a Maxwell's demon device. A demonic selection principle is introduced that states that organisms cannot exceed the complexity of their selective environment. Thermodynamic constraints on error repair impose a fundamental limit to the rate that information can be transferred from the environment (via the selective demon) to the genome. Evolved mechanisms of learning and inference can overcome this limitation, but remain subject to the same fundamental constraint, such that plastic behaviors cannot exceed the complexity of reward signals. A natural measure of evolutionary complexity is provided by mutual information, and niche construction activity--the organismal contribution to the construction of selection pressures--might in principle lead to its increase, bounded by thermodynamic free energy required for error correction.

  19. Beyond the Total Score: A Preliminary Investigation into the Types of Phonological Awareness Errors Made by First Graders

    ERIC Educational Resources Information Center

    Hayward, Denyse V.; Annable, Caitlin D.; Fung, Jennifer E.; Williamson, Robert D.; Lovell-Johnston, Meridith A.; Phillips, Linda M.

    2017-01-01

    Current phonological awareness assessment procedures consider only the total score a child achieves. Such an approach may result in children who achieve the same total score receiving the same instruction even though the configuration of their errors represent fundamental knowledge differences. The purpose of this study was to develop a tool for…

  20. Associative Processes in Intuitive Judgment

    PubMed Central

    Morewedge, Carey K.; Kahneman, Daniel

    2014-01-01

    Dual-system models of reasoning attribute errors of judgment to two failures. The automatic operations of a “System 1” generate a faulty intuition, which the controlled operations of a “System 2” fail to detect and correct. We identify System 1 with the automatic operations of associative memory and draw on research in the priming paradigm to describe how it operates. We explain how three features of associative memory—associative coherence, attribute substitution, and processing fluency—give rise to major biases of intuitive judgment. Our article highlights both the ability of System 1 to create complex and skilled judgments and the role of the system as a source of judgment errors. PMID:20696611

  1. Comparing Parameter Estimation Techniques for an Electrical Power Transformer Oil Temperature Prediction Model

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry

    1999-01-01

    This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
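
    The sketch below contrasts the two estimation approaches on an invented first-order top-oil temperature-rise model (not MIT's actual model): equation-error least squares regresses the noisy measurement onto itself and is biased by measurement noise, whereas the output-error approach simulates the model and fits its output to the measurements.

```python
# Sketch contrasting equation-error least squares with output-error estimation
# for a first-order top-oil temperature-rise model; the model, data, and noise
# levels here are invented for illustration.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
dt, tau_true, K_true = 1.0, 20.0, 30.0
N = 400
u = (rng.random(N) > 0.5).astype(float) + 0.5           # per-unit load input (arbitrary)
T = np.zeros(N)
for k in range(N - 1):                                   # true top-oil rise dynamics
    T[k + 1] = T[k] + dt / tau_true * (K_true * u[k] - T[k])
Tm = T + rng.normal(scale=1.0, size=N)                   # noisy measurements

# Equation-error least squares: regress Tm[k+1] on Tm[k] and u[k] (biased by noise)
A = np.column_stack([Tm[:-1], u[:-1]])
a, b = np.linalg.lstsq(A, Tm[1:], rcond=None)[0]
tau_ls, K_ls = dt / (1 - a), b / (1 - a)

# Output error: simulate the model and fit its *output* to the measurements
def simulate(params):
    tau, K = params
    Ts = np.zeros(N)
    for k in range(N - 1):
        Ts[k + 1] = Ts[k] + dt / tau * (K * u[k] - Ts[k])
    return Ts

res = least_squares(lambda p: simulate(p) - Tm, x0=[10.0, 10.0],
                    bounds=([1.0, 1.0], [100.0, 100.0]))
print("equation error:", tau_ls, K_ls)
print("output error  :", *res.x)
print("true values   :", tau_true, K_true)
```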

  2. Foreign Military Advisor Proficiency: The Need for Screening, Selection and Qualification

    DTIC Science & Technology

    2009-06-12

    ...all Soldiers are suitable for advisor duty. Personnel selection is a fundamental requirement. Despite the importance of the mission, there is... A fundamental issue arises from the literature: unique Soldier attributes and leadership may be required to attain and maintain cohesion on foreign military

  3. The effects of training on errors of perceived direction in perspective displays

    NASA Technical Reports Server (NTRS)

    Tharp, Gregory K.; Ellis, Stephen R.

    1990-01-01

    An experiment was conducted to determine the effects of training on the characteristic direction errors that are observed when subjects estimate exocentric directions on perspective displays. Changes in five subjects' perceptual errors were measured during a training procedure designed to eliminate the error. The training was provided by displaying to each subject both the sign and the direction of his judgment error. The feedback provided by the error display was found to decrease but not eliminate the error. A lookup table model of the source of the error was developed in which the judgment errors were attributed to overestimates of both the pitch and the yaw of the viewing direction used to produce the perspective projection. The model predicts the quantitative characteristics of the data somewhat better than previous models did. A mechanism is proposed for the observed learning, and further tests of the model are suggested.

  4. Estimating two-way tables based on forest surveys

    Treesearch

    Charles T. Scott

    2000-01-01

    Forest survey analysts usually are interested in tables of values rather than single point estimates. A common error is to include only plots on which nonzero values of the attribute were observed when computing the variance of a mean. Similarly, analysts often exclude nonforest plots from the analysis. The development of the correct estimates of forest area, attribute...
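
    A small numerical illustration of the common error described above: dropping the zero-valued (e.g. nonforest) plots changes both the estimated mean and the variance of that mean. The plot values are invented.

```python
# Numerical illustration of the error described above: dropping the zero
# (e.g. nonforest) plots when computing the variance of a per-plot mean
# understates the sample size and misstates both the mean and its variance.
import numpy as np

plot_values = np.array([0, 0, 12.5, 0, 8.1, 0, 0, 21.0, 0, 4.4])  # attribute per plot

correct_mean = plot_values.mean()
correct_se = plot_values.std(ddof=1) / np.sqrt(len(plot_values))

nonzero = plot_values[plot_values > 0]
wrong_mean = nonzero.mean()
wrong_se = nonzero.std(ddof=1) / np.sqrt(len(nonzero))

print(f"all plots    : mean = {correct_mean:.2f}, SE = {correct_se:.2f}")
print(f"nonzero only : mean = {wrong_mean:.2f}, SE = {wrong_se:.2f}  (biased)")
```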

  5. Estimating the Uncertainty In Diameter Growth Model Predictions and Its Effects On The Uncertainty of Annual Inventory Estimates

    Treesearch

    Ronald E. McRoberts; Veronica C. Lessard

    2001-01-01

    Uncertainty in diameter growth predictions is attributed to three general sources: measurement error or sampling variability in predictor variables, parameter covariances, and residual or unexplained variation around model expectations. Using measurement error and sampling variability distributions obtained from the literature and Monte Carlo simulation methods, the...

  6. The Errors of Our Ways

    ERIC Educational Resources Information Center

    Kane, Michael

    2011-01-01

    Errors don't exist in our data, but they serve a vital function. Reality is complicated, but our models need to be simple in order to be manageable. We assume that attributes are invariant over some conditions of observation, and once we do that we need some way of accounting for the variability in observed scores over these conditions of…

  7. Reversal of photon-scattering errors in atomic qubits.

    PubMed

    Akerman, N; Kotler, S; Glickman, Y; Ozeri, R

    2012-09-07

    Spontaneous photon scattering by an atomic qubit is a notable example of environment-induced error and is a fundamental limit to the fidelity of quantum operations. In the scattering process, the qubit loses its distinctive and coherent character owing to its entanglement with the photon. Using a single trapped ion, we show that by utilizing the information carried by the photon, we are able to coherently reverse this process and correct for the scattering error. We further used quantum process tomography to characterize the photon-scattering error and its correction scheme and demonstrate a correction fidelity greater than 85% whenever a photon was measured.

  8. Limits of Expertise: Rethinking Pilot Error and the Causes of Airline Accidents. CRM/HF Conference, Held in Denver, Colorado on April 16-17, 2006

    NASA Technical Reports Server (NTRS)

    Dismukes, Key; Berman, Ben; Loukopoulos, Loukisa

    2007-01-01

    Reviewed NTSB reports of the 19 U.S. airline accidents between 1991 and 2000 attributed primarily to crew error. Asked: why might any airline crew in the situation of the accident crew--knowing only what they knew--be vulnerable? One can never know with certainty why an accident crew made specific errors, but one can determine why the population of pilots is vulnerable. Considers the variability of expert performance as a function of the interplay of multiple factors.

  9. Culture, attribution and automaticity: a social cognitive neuroscience view

    PubMed Central

    Morris, Michael W.

    2010-01-01

    A fundamental challenge facing social perceivers is identifying the cause underlying other people’s behavior. Evidence indicates that East Asian perceivers are more likely than Western perceivers to reference the social context when attributing a cause to a target person’s actions. One outstanding question is whether this reflects a culture’s influence on automatic or on controlled components of causal attribution. After reviewing behavioral evidence that culture can shape automatic mental processes as well as controlled reasoning, we discuss the evidence in favor of cultural differences in automatic and controlled components of causal attribution more specifically. We contend that insights emerging from social cognitive neuroscience research can inform this debate. After introducing an attribution framework popular among social neuroscientists, we consider findings relevant to the automaticity of attribution, before speculating how one could use a social neuroscience approach to clarify whether culture affects automatic, controlled or both types of attribution processes. PMID:20460302

  10. Developing an eLearning tool formalizing in YAWL the guidelines used in a transfusion medicine service.

    PubMed

    Russo, Paola; Piazza, Miriam; Leonardi, Giorgio; Roncoroni, Layla; Russo, Carlo; Spadaro, Salvatore; Quaglini, Silvana

    2012-01-01

    Blood transfusion is a complex activity subject to a high risk of potentially fatal errors. The development and application of computer-based systems could help reduce the error rate, playing a fundamental role in the improvement of the quality of care. This poster presents an eLearning tool, currently under development, that formalizes the guidelines of the transfusion process. The system, implemented in YAWL (Yet Another Workflow Language), will be used to train personnel in order to improve the efficiency of care and to reduce errors.

  11. [Errors in medicine. Causes, impact and improvement measures to improve patient safety].

    PubMed

    Waeschle, R M; Bauer, M; Schmidt, C E

    2015-09-01

    The guarantee of quality of care and patient safety is of major importance in hospitals even though increased economic pressure and work intensification are ubiquitously present. Nevertheless, adverse events still occur in 3-4 % of hospital stays and of these 25-50 % are estimated to be avoidable. The identification of possible causes of error and the development of measures for the prevention of medical errors are essential for patient safety. The implementation and continuous development of a constructive culture of error tolerance are fundamental. The origins of errors can be differentiated into systemic latent and individual active causes; components of both categories are typically involved when an error occurs. Systemic causes are, for example, outdated structural environments, lack of clinical standards and low personnel density. These causes arise far away from the patient, e.g. in management decisions, and can remain unrecognized for a long time. Individual causes include, e.g. confirmation bias, fixation errors and prospective memory failure. These causes have a direct impact on patient care and can result in immediate injury to patients. Stress, unclear information, complex systems and a lack of professional experience can promote individual causes. Awareness of possible causes of error is a fundamental precondition to establishing appropriate countermeasures. Error prevention should include actions directly affecting the causes of error and includes checklists and standard operating procedures (SOP) to avoid fixation and prospective memory failure and team resource management to improve communication and the generation of collective mental models. Critical incident reporting systems (CIRS) provide the opportunity to learn from previous incidents without resulting in injury to patients. Information technology (IT) support systems, such as the computerized physician order entry system, assist in the prevention of medication errors by providing information on dosage, pharmacological interactions, side effects and contraindications of medications. The major challenges for quality and risk management, for the heads of departments and the executive board, are the implementation and support of the described actions and sustained guidance of the staff involved in the change management process. The global trigger tool is suitable for improving transparency and for objectively measuring the frequency of medical errors.

  12. High-Resolution X-Ray Telescopes

    NASA Technical Reports Server (NTRS)

    ODell, Stephen L.; Brissenden, Roger J.; Davis, William; Elsner, Ronald F.; Elvis, Martin; Freeman, Mark; Gaetz, Terry; Gorenstein, Paul; Gubarev, Mikhail V.

    2010-01-01

    Fundamental needs for future x-ray telescopes: a) Sharp images => excellent angular resolution. b) High throughput => large aperture areas. Generation-X optics technical challenges: a) High resolution => precision mirrors & alignment. b) Large apertures => lots of lightweight mirrors. Innovation needed for technical readiness: a) 4 top-level error terms contribute to image size. b) There are approaches to controlling those errors. Innovation needed for manufacturing readiness. Programmatic issues are comparably challenging.

  13. Issues central to a useful image understanding environment

    NASA Astrophysics Data System (ADS)

    Beveridge, J. Ross; Draper, Bruce A.; Hanson, Allen R.; Riseman, Edward M.

    1992-04-01

    A recent DARPA initiative has sparked interested in software environments for computer vision. The goal is a single environment to support both basic research and technology transfer. This paper lays out six fundamental attributes such a system must possess: (1) support for both C and Lisp, (2) extensibility, (3) data sharing, (4) data query facilities tailored to vision, (5) graphics, and (6) code sharing. The first three attributes fundamentally constrain the system design. Support for both C and Lisp demands some form of database or data-store for passing data between languages. Extensibility demands that system support facilities, such as spatial retrieval of data, be readily extended to new user-defined datatypes. Finally, data sharing demands that data saved by one user, including data of a user-defined type, must be readable by another user.

  14. Design and analysis of a sub-aperture scanning machine for the transmittance measurements of large-aperture optical system

    NASA Astrophysics Data System (ADS)

    He, Yingwei; Li, Ping; Feng, Guojin; Cheng, Li; Wang, Yu; Wu, Houping; Liu, Zilong; Zheng, Chundi; Sha, Dingguo

    2010-11-01

    For measuring the transmittance of a large-aperture optical system, a novel sub-aperture scanning machine with double rotating arms (SSMDA) was designed to obtain a sub-aperture beam spot. Full-aperture transmittance measurements of the optical system can be achieved by applying sub-aperture beam-spot scanning technology. A mathematical model of the SSMDA based on a homogeneous coordinate transformation matrix is established to develop a detailed methodology for analyzing the beam-spot scanning errors. The error analysis methodology considers two fundamental sources of scanning errors, namely (1) the systematic length errors and (2) the systematic rotational errors. With the systematic errors of the parameters given beforehand, the computed scanning errors lie between -0.007 mm and 0.028 mm for scanning radii not larger than 400.000 mm. The results offer a theoretical and data basis for research on the transmission characteristics of large optical systems.
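
    To make the homogeneous-coordinate error analysis concrete, the following minimal sketch propagates assumed systematic length and rotation errors through two serially rotating arms modelled with 2-D homogeneous transformation matrices. The arm lengths, angle grid and error magnitudes are illustrative assumptions, not the SSMDA's actual parameters.

        import numpy as np

        def transform(theta, length):
            """2-D homogeneous transform: rotate by theta, then translate along x by length."""
            c, s = np.cos(theta), np.sin(theta)
            return np.array([[c, -s, length * c],
                             [s,  c, length * s],
                             [0,  0, 1.0]])

        def spot_position(theta1, theta2, l1, l2):
            """Beam-spot position produced by two rotating arms in series."""
            T = transform(theta1, l1) @ transform(theta2, l2)
            return T[:2, 2]  # x, y of the end point

        # Illustrative (assumed) nominal geometry and systematic errors
        l1, l2 = 250.0, 150.0          # arm lengths in mm (assumed)
        dl1, dl2 = 0.005, 0.005        # systematic length errors in mm (assumed)
        dth = np.deg2rad(0.001)        # systematic rotational error in rad (assumed)

        errors = []
        for th1 in np.linspace(0, 2 * np.pi, 73):
            for th2 in np.linspace(0, 2 * np.pi, 73):
                nominal = spot_position(th1, th2, l1, l2)
                perturbed = spot_position(th1 + dth, th2 + dth, l1 + dl1, l2 + dl2)
                errors.append(np.linalg.norm(perturbed - nominal))

        print(f"scanning error range: {min(errors):.4f} to {max(errors):.4f} mm")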

  15. Accuracy of emotion labeling in children of parents diagnosed with bipolar disorder.

    PubMed

    Hanford, Lindsay C; Sassi, Roberto B; Hall, Geoffrey B

    2016-04-01

    Emotion labeling deficits have been posited as an endophenotype for bipolar disorder (BD) as they have been observed in both patients and their first-degree relatives. It remains unclear whether these deficits exist secondary to the development of psychiatric symptoms or whether they can be attributed to risk for psychopathology. To explore this, we investigated emotion processing in symptomatic and asymptomatic high-risk bipolar offspring (HRO) and healthy children of healthy parents (HCO). Symptomatic (n:18, age: 13.8 ± 2.6 years, 44% female) and asymptomatic (n:12, age: 12.8 ± 3.0 years, 42% female) HRO and age- and sex-matched HCO (n:20, age: 13.3 ± 2.5 years, 45% female) performed an emotion-labeling task. Total number of errors, emotion category and intensity of emotion error scores were compared. Correlations between total error scores and symptom severity were also investigated. Compared to HCO, both HRO groups made more errors on the adult face task (pcor=0.014). The HRO group were 2.3 times [90%CI:0.9-6.3] more likely and 4.3 times [90%CI:1.3-14.3] more likely to make errors on sad and angry faces, respectively. With the exception of sad face type errors, we observed no significant differences in error patterns between symptomatic and asymptomatic HRO, and no correlations between symptom severity and total number of errors. This study was cross-sectional in design, limiting our ability to infer trajectories or heritability of these deficits. This study provides further support for emotion labeling deficits as a candidate endophenotype for BD. Our study also suggests these deficits are not attributable to the presence of psychiatric symptoms. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Including crystal structure attributes in machine learning models of formation energies via Voronoi tessellations

    NASA Astrophysics Data System (ADS)

    Ward, Logan; Liu, Ruoqian; Krishna, Amar; Hegde, Vinay I.; Agrawal, Ankit; Choudhary, Alok; Wolverton, Chris

    2017-07-01

    While high-throughput density functional theory (DFT) has become a prevalent tool for materials discovery, it is limited by the relatively large computational cost. In this paper, we explore using DFT data from high-throughput calculations to create faster, surrogate models with machine learning (ML) that can be used to guide new searches. Our method works by using decision tree models to map DFT-calculated formation enthalpies to a set of attributes consisting of two distinct types: (i) composition-dependent attributes of elemental properties (as have been used in previous ML models of DFT formation energies), combined with (ii) attributes derived from the Voronoi tessellation of the compound's crystal structure. The ML models created using this method have half the cross-validation error and similar training and evaluation speeds to models created with the Coulomb matrix and partial radial distribution function methods. For a dataset of 435 000 formation energies taken from the Open Quantum Materials Database (OQMD), our model achieves a mean absolute error of 80 meV/atom in cross validation, which is lower than the approximate error between DFT-computed and experimentally measured formation enthalpies and below 15% of the mean absolute deviation of the training set. We also demonstrate that our method can accurately estimate the formation energy of materials outside of the training set and be used to identify materials with especially large formation enthalpies. We propose that our models can be used to accelerate the discovery of new materials by identifying the most promising materials to study with DFT at little additional computational cost.
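
    The workflow described above can be sketched in a few lines: a decision-tree ensemble is fit to an attribute matrix and scored by cross-validated mean absolute error. The sketch below uses a random forest on synthetic placeholder data; the real attribute set (composition-based elemental-property statistics plus Voronoi-tessellation features) and the OQMD formation energies are not reproduced here.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import cross_val_score

        # Synthetic stand-in for the attribute matrix: each row would hold
        # composition-based and Voronoi-derived structural attributes, and each
        # target a DFT formation energy (eV/atom). Counts and the toy relation
        # below are placeholders, not the paper's data.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(1500, 50))
        y = 0.1 * X[:, :5].sum(axis=1) + rng.normal(scale=0.05, size=1500)

        model = RandomForestRegressor(n_estimators=100, random_state=0, n_jobs=-1)
        mae = -cross_val_score(model, X, y, cv=5,
                               scoring="neg_mean_absolute_error").mean()
        print(f"cross-validated MAE: {1000 * mae:.0f} meV/atom")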

  17. Who Believes in the Storybook Image of the Scientist?

    PubMed

    Veldkamp, Coosje L S; Hartgerink, Chris H J; van Assen, Marcel A L M; Wicherts, Jelte M

    2017-01-01

    Do lay people and scientists themselves recognize that scientists are human and therefore prone to human fallibilities such as error, bias, and even dishonesty? In a series of three experimental studies and one correlational study (total N = 3,278) we found that the "storybook image of the scientist" is pervasive: American lay people and scientists from over 60 countries attributed considerably more objectivity, rationality, open-mindedness, intelligence, integrity, and communality to scientists than to other highly-educated people. Moreover, scientists perceived even larger differences than lay people did. Some groups of scientists also differentiated between different categories of scientists: established scientists attributed higher levels of the scientific traits to established scientists than to early-career scientists and Ph.D. students, and higher levels to Ph.D. students than to early-career scientists. Female scientists attributed considerably higher levels of the scientific traits to female scientists than to male scientists. A strong belief in the storybook image and the (human) tendency to attribute higher levels of desirable traits to people in one's own group than to people in other groups may decrease scientists' willingness to adopt recently proposed practices to reduce error, bias and dishonesty in science.

  18. Who Believes in the Storybook Image of the Scientist?

    PubMed Central

    Veldkamp, Coosje L. S.; Hartgerink, Chris H. J.; van Assen, Marcel A. L. M.; Wicherts, Jelte M.

    2017-01-01

    ABSTRACT Do lay people and scientists themselves recognize that scientists are human and therefore prone to human fallibilities such as error, bias, and even dishonesty? In a series of three experimental studies and one correlational study (total N = 3,278) we found that the “storybook image of the scientist” is pervasive: American lay people and scientists from over 60 countries attributed considerably more objectivity, rationality, open-mindedness, intelligence, integrity, and communality to scientists than to other highly-educated people. Moreover, scientists perceived even larger differences than lay people did. Some groups of scientists also differentiated between different categories of scientists: established scientists attributed higher levels of the scientific traits to established scientists than to early-career scientists and Ph.D. students, and higher levels to Ph.D. students than to early-career scientists. Female scientists attributed considerably higher levels of the scientific traits to female scientists than to male scientists. A strong belief in the storybook image and the (human) tendency to attribute higher levels of desirable traits to people in one’s own group than to people in other groups may decrease scientists’ willingness to adopt recently proposed practices to reduce error, bias and dishonesty in science. PMID:28001440

  19. Over-Distribution in Source Memory

    PubMed Central

    Brainerd, C. J.; Reyna, V. F.; Holliday, R. E.; Nakamura, K.

    2012-01-01

    Semantic false memories are confounded with a second type of error, over-distribution, in which items are attributed to contradictory episodic states. Over-distribution errors have proved to be more common than false memories when the two are disentangled. We investigated whether over-distribution is prevalent in another classic false memory paradigm: source monitoring. It is. Conventional false memory responses (source misattributions) were predominantly over-distribution errors, but unlike semantic false memory, over-distribution also accounted for more than half of true memory responses (correct source attributions). Experimental control of over-distribution was achieved via a series of manipulations that affected either recollection of contextual details or item memory (concreteness, frequency, list-order, number of presentation contexts, and individual differences in verbatim memory). A theoretical model (conjoint process dissociation) was used to analyze the data; it predicts that (a) over-distribution is directly proportional to item memory but inversely proportional to recollection and (b) item memory is not a necessary precondition for recollection of contextual details. The results were consistent with both predictions. PMID:21942494

  20. Synchronization Design and Error Analysis of Near-Infrared Cameras in Surgical Navigation.

    PubMed

    Cai, Ken; Yang, Rongqian; Chen, Huazhou; Huang, Yizhou; Wen, Xiaoyan; Huang, Wenhua; Ou, Shanxing

    2016-01-01

    The accuracy of optical tracking systems is important to scientists. With the improvements reported in this regard, such systems have been applied to an increasing number of operations. To enhance the accuracy of these systems further and to reduce the effect of synchronization and visual field errors, this study introduces a field-programmable gate array (FPGA)-based synchronization control method, a method for measuring synchronous errors, and an error distribution map in field of view. Synchronization control maximizes the parallel processing capability of FPGA, and synchronous error measurement can effectively detect the errors caused by synchronization in an optical tracking system. The distribution of positioning errors can be detected in field of view through the aforementioned error distribution map. Therefore, doctors can perform surgeries in areas with few positioning errors, and the accuracy of optical tracking systems is considerably improved. The system is analyzed and validated in this study through experiments that involve the proposed methods, which can eliminate positioning errors attributed to asynchronous cameras and different fields of view.

  1. Heuristic-driven graph wavelet modeling of complex terrain

    NASA Astrophysics Data System (ADS)

    Cioacǎ, Teodor; Dumitrescu, Bogdan; Stupariu, Mihai-Sorin; Pǎtru-Stupariu, Ileana; Nǎpǎrus, Magdalena; Stoicescu, Ioana; Peringer, Alexander; Buttler, Alexandre; Golay, François

    2015-03-01

    We present a novel method for building a multi-resolution representation of large digital surface models. The surface points coincide with the nodes of a planar graph which can be processed using a critically sampled, invertible lifting scheme. To drive the lazy wavelet node partitioning, we employ an attribute aware cost function based on the generalized quadric error metric. The resulting algorithm can be applied to multivariate data by storing additional attributes at the graph's nodes. We discuss how the cost computation mechanism can be coupled with the lifting scheme and examine the results by evaluating the root mean square error. The algorithm is experimentally tested using two multivariate LiDAR sets representing terrain surface and vegetation structure with different sampling densities.

  2. Architectural elements of hybrid navigation systems for future space transportation

    NASA Astrophysics Data System (ADS)

    Trigo, Guilherme F.; Theil, Stephan

    2018-06-01

    The fundamental limitations of inertial navigation, currently employed by most launchers, have raised interest in GNSS-aided solutions. Combining inertial measurements and GNSS outputs allows inertial calibration online, solving the issue of inertial drift. However, many challenges and design options unfold. In this work we analyse several architectural elements and design aspects of a hybrid GNSS/INS navigation system conceived for space transportation. The most fundamental architectural features, such as coupling depth, modularity between filter and inertial propagation, and the open-/closed-loop nature of the configuration, are discussed in the light of the envisaged application. The importance of the inertial propagation algorithm and of the sensor class in the overall system is investigated, and the handling of the sensor errors and uncertainties that arise with lower-grade sensors is also considered. In terms of GNSS outputs we consider receiver solutions (position and velocity) and raw measurements (pseudorange, pseudorange-rate and time-difference carrier phase). Receiver clock error handling options and atmospheric error correction schemes for these measurements are analysed under flight conditions. System performance with different GNSS measurements is estimated through covariance analysis, with the differences between loose and tight coupling emphasized through partial outage simulation. Finally, we discuss options for filter algorithm robustness against non-linearities and system/measurement errors. A possible scheme for fault detection, isolation and recovery is also proposed.
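
    One of the architectural choices discussed above, loose coupling, amounts to using the GNSS receiver's position/velocity solution as a direct measurement of part of the inertially propagated state. A minimal Kalman measurement-update sketch of this idea follows; the state layout, covariances and numbers are assumptions for illustration, not the filter design of the paper.

        import numpy as np

        def loose_gnss_update(x, P, z_gnss, R):
            """Kalman measurement update for a loosely coupled GNSS/INS filter.

            x      : (6,) inertially propagated state [position (3), velocity (3)]
            P      : (6, 6) state covariance
            z_gnss : (6,) GNSS receiver solution [position, velocity]
            R      : (6, 6) GNSS measurement noise covariance
            """
            H = np.eye(6)                      # GNSS observes the full state in this toy layout
            S = H @ P @ H.T + R                # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
            x_new = x + K @ (z_gnss - H @ x)
            P_new = (np.eye(6) - K @ H) @ P
            return x_new, P_new

        # Illustrative (assumed) numbers: a drifted INS state corrected by one GNSS fix.
        x = np.array([100.0, 200.0, -50.0, 1.0, 0.0, 0.0])
        P = np.diag([25.0, 25.0, 25.0, 0.25, 0.25, 0.25])
        z = np.array([103.0, 198.0, -49.0, 1.05, -0.02, 0.01])
        R = np.diag([100.0, 100.0, 100.0, 0.01, 0.01, 0.01])
        x, P = loose_gnss_update(x, P, z, R)
        print(np.round(x, 3))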

  3. Simulations in site error estimation for direction finders

    NASA Astrophysics Data System (ADS)

    López, Raúl E.; Passi, Ranjit M.

    1991-08-01

    The performance of an algorithm for the recovery of site-specific errors of direction finder (DF) networks is tested under controlled simulated conditions. The simulations show that the algorithm has some inherent shortcomings for the recovery of site errors from the measured azimuth data. These limitations are fundamental to the problem of site error estimation using azimuth information. Several ways for resolving or ameliorating these basic complications are tested by means of simulations. From these it appears that for the effective implementation of the site error determination algorithm, one should design the networks with at least four DFs, improve the alignment of the antennas, and increase the gain of the DFs as much as it is compatible with other operational requirements. The use of a nonzero initial estimate of the site errors when working with data from networks of four or more DFs also improves the accuracy of the site error recovery. Even for networks of three DFs, reasonable site error corrections could be obtained if the antennas could be well aligned.

  4. Adverse Drug Events caused by Serious Medication Administration Errors

    PubMed Central

    Sawarkar, Abhivyakti; Keohane, Carol A.; Maviglia, Saverio; Gandhi, Tejal K; Poon, Eric G

    2013-01-01

    OBJECTIVE To determine how often serious or life-threatening medication administration errors with the potential to cause patient harm (or potential adverse drug events) result in actual patient harm (or adverse drug events (ADEs)) in the hospital setting. DESIGN Retrospective chart review of clinical events that transpired following observed medication administration errors. BACKGROUND Medication errors are common at the medication administration stage for hospitalized patients. While many of these errors are considered capable of causing patient harm, it is not clear how often patients are actually harmed by these errors. METHODS In a previous study in which 14,041 medication administrations in an acute-care hospital were directly observed, investigators discovered 1271 medication administration errors, of which 133 had the potential to cause serious or life-threatening harm to patients and were considered serious or life-threatening potential ADEs. In the current study, clinical reviewers conducted detailed chart reviews of cases where a serious or life-threatening potential ADE occurred to determine whether an actual ADE developed following the potential ADE. Reviewers further assessed the severity of the ADE and its attribution to the administration error. RESULTS Ten (7.5% [95% C.I. 6.98, 8.01]) actual adverse drug events or ADEs resulted from the 133 serious and life-threatening potential ADEs, of which six resulted in significant, three in serious, and one in life-threatening injury. Therefore, four (3% [95% C.I. 2.12, 3.6]) serious and life-threatening potential ADEs led to serious or life-threatening ADEs. Half of the ten actual ADEs were caused by dosage or monitoring errors for anti-hypertensives. The life-threatening ADE was caused by an error that was both a transcription and a timing error. CONCLUSION Potential ADEs at the medication administration stage can cause serious patient harm. Given previous estimates of serious or life-threatening potential ADEs of 1.33 per 100 medication doses administered, in a hospital where 6 million doses are administered per year, about 4000 preventable ADEs would be attributable to medication administration errors annually. PMID:22791691

  5. Multi-Task Learning with Low Rank Attribute Embedding for Multi-Camera Person Re-Identification.

    PubMed

    Su, Chi; Yang, Fan; Zhang, Shiliang; Tian, Qi; Davis, Larry Steven; Gao, Wen

    2018-05-01

    We propose Multi-Task Learning with Low Rank Attribute Embedding (MTL-LORAE) to address the problem of person re-identification across multiple cameras. Re-identifications on different cameras are considered as related tasks, which allows the shared information among different tasks to be explored to improve re-identification accuracy. The MTL-LORAE framework integrates low-level features with mid-level attributes as the descriptions for persons. To improve the accuracy of such descriptions, we introduce a low-rank attribute embedding, which maps original binary attributes into a continuous space by utilizing the correlative relationship between each pair of attributes. In this way, inaccurate attributes are rectified and missing attributes are recovered. The resulting objective function is constructed with an attribute embedding error and a quadratic loss concerning class labels. It is solved by an alternating optimization strategy. The proposed MTL-LORAE is tested on four datasets and is validated to outperform existing methods by significant margins.

  6. Human Error: A Concept Analysis

    NASA Technical Reports Server (NTRS)

    Hansen, Frederick D.

    2007-01-01

    Human error is the subject of research in almost every industry and profession of our times. This term is part of our daily language and intuitively understood by most people; however, it would be premature to assume that everyone's understanding of human error is the same. For example, human error is used to describe the outcome or consequence of human action, the causal factor of an accident, deliberate violations, and the actual action taken by a human being. As a result, researchers rarely agree on either a specific definition or how to prevent human error. The purpose of this article is to explore the specific concept of human error using Concept Analysis as described by Walker and Avant (1995). The concept of human error is examined as currently used in the literature of a variety of industries and professions. Defining attributes and examples of model, borderline, and contrary cases are described. The antecedents and consequences of human error are also discussed, and a definition of human error is offered.

  7. Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS)

    NASA Technical Reports Server (NTRS)

    Alexander, Tiffaney Miller

    2017-01-01

    Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As a part of safety within space exploration ground processing operations, underlying contributors and causes of human error must be identified and/or classified in order to manage human error. This research provides a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and the Human Factor Analysis and Classification System (HFACS) as an analysis tool to identify contributing factors, their impact on human error events, and to predict the Human Error Probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of Launch Vehicle related mishap data. This modifiable framework can be used and followed by other space and similar complex operations.

  8. Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS)

    NASA Technical Reports Server (NTRS)

    Alexander, Tiffaney Miller

    2017-01-01

    Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As a part of safety within space exploration ground processing operations, underlying contributors and causes of human error must be identified and/or classified in order to manage human error. This research provides a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and the Human Factor Analysis and Classification System (HFACS) as an analysis tool to identify contributing factors, their impact on human error events, and to predict the Human Error Probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of Launch Vehicle related mishap data. This modifiable framework can be used and followed by other space and similar complex operations.

  9. Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS)

    NASA Technical Reports Server (NTRS)

    Alexander, Tiffaney Miller

    2017-01-01

    Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As a part of quality within space exploration ground processing operations, underlying contributors and causes of human error must be identified and/or classified in order to manage human error. This presentation provides a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and the Human Factor Analysis and Classification System (HFACS) as an analysis tool to identify contributing factors, their impact on human error events, and to predict the Human Error Probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of Launch Vehicle related mishap data. This modifiable framework can be used and followed by other space and similar complex operations.

  10. Error analysis of multi-needle Langmuir probe measurement technique.

    PubMed

    Barjatya, Aroh; Merritt, William

    2018-04-01

    Multi-needle Langmuir probe is a fairly new instrument technique that has been flown on several recent sounding rockets and is slated to fly on a subset of QB50 CubeSat constellation. This paper takes a fundamental look into the data analysis procedures used for this instrument to derive absolute electron density. Our calculations suggest that while the technique remains promising, the current data analysis procedures could easily result in errors of 50% or more. We present a simple data analysis adjustment that can reduce errors by at least a factor of five in typical operation.

  11. Error analysis of multi-needle Langmuir probe measurement technique

    NASA Astrophysics Data System (ADS)

    Barjatya, Aroh; Merritt, William

    2018-04-01

    Multi-needle Langmuir probe is a fairly new instrument technique that has been flown on several recent sounding rockets and is slated to fly on a subset of QB50 CubeSat constellation. This paper takes a fundamental look into the data analysis procedures used for this instrument to derive absolute electron density. Our calculations suggest that while the technique remains promising, the current data analysis procedures could easily result in errors of 50% or more. We present a simple data analysis adjustment that can reduce errors by at least a factor of five in typical operation.

  12. A theoretical basis for the analysis of multiversion software subject to coincident errors

    NASA Technical Reports Server (NTRS)

    Eckhardt, D. E., Jr.; Lee, L. D.

    1985-01-01

    Fundamental to the development of redundant software techniques (known as fault-tolerant software) is an understanding of the impact of multiple joint occurrences of errors, referred to here as coincident errors. A theoretical basis for the study of redundant software is developed which: (1) provides a probabilistic framework for empirically evaluating the effectiveness of a general multiversion strategy when component versions are subject to coincident errors, and (2) permits an analytical study of the effects of these errors. An intensity function, called the intensity of coincident errors, has a central role in this analysis. This function describes the propensity of programmers to introduce design faults in such a way that software components fail together when executing in the application environment. A condition under which a multiversion system is a better strategy than relying on a single version is given.

  13. The Forbidden World of Off the Record Negotiating for Successful Air Force Media Engagements

    DTIC Science & Technology

    2012-02-15

    reporters choose OTR as a first option. Since all parties perform a mental calculus of risks versus rewards before agreeing to OTR, the author...experience, calling OTR "the most often misunderstood" attribution category. One widely used journalism textbook says only that OTR is information... textbook, used at Columbia Journalism School, pays scant discussion to such a fundamental topic. In the 596-page textbook, discussion about attribution

  14. Is Single-Port Laparoscopy More Precise and Faster with the Robot?

    PubMed

    Fransen, Sofie A F; van den Bos, Jacqueline; Stassen, Laurents P S; Bouvy, Nicole D

    2016-11-01

    Single-port laparoscopy is a step toward nearly scarless surgery. Concern has been raised that single-incision laparoscopic surgery (SILS) is technically more challenging because of the lack of triangulation and the clashing of instruments. Robotic single-incision laparoscopic surgery (RSILS) in a chopstick setting might overcome these problems. This study evaluated the outcome in time and errors of two tasks of the Fundamentals of Laparoscopic Surgery on a dry platform, in two settings: SILS versus RSILS. Nine experienced laparoscopic surgeons performed two tasks, peg transfer and a suturing task, on a standard box trainer. All participants practiced each task three times in both settings, SILS and RSILS, and the assessment scores (time and errors) were recorded. For the first task, peg transfer, RSILS was significantly better in time (124 versus 230 seconds, P = .0004) and errors (0.80 errors versus 2.60 errors, P = .024) at the first run, compared to the SILS setting. At the third and final run, RSILS still proved to be significantly better in errors (0.10 errors versus 0.80 errors, P = .025) compared to the SILS group. RSILS was faster in the third run, but not significantly so (116 versus 157 seconds, P = .08). For the second task, a suturing task, only 3 participants of the SILS group were able to perform this task within the set time frame of 600 seconds. There was no significant difference in time in the three runs between SILS and RSILS for the 3 participants who fulfilled both tasks within the 600 seconds. This study shows that robotic single-port surgery seems to make basic tasks of the Fundamentals of Laparoscopic Surgery easier, faster, and more precise to perform. For the more complex task of suturing, only the single-port robotic setting enabled all participants to fulfill this task within the set time frame.

  15. Quantitative Assessment of Liver Fat with Magnetic Resonance Imaging and Spectroscopy

    PubMed Central

    Reeder, Scott B.; Cruite, Irene; Hamilton, Gavin; Sirlin, Claude B.

    2011-01-01

    Hepatic steatosis is characterized by abnormal and excessive accumulation of lipids within hepatocytes. It is an important feature of diffuse liver disease, and the histological hallmark of non-alcoholic fatty liver disease (NAFLD). Other conditions associated with steatosis include alcoholic liver disease, viral hepatitis, HIV and genetic lipodystrophies, cystic fibrosis liver disease, and hepatotoxicity from various therapeutic agents. Liver biopsy, the current clinical gold standard for assessment of liver fat, is invasive and has sampling errors, and is not optimal for screening, monitoring, clinical decision making, or well-suited for many types of research studies. Non-invasive methods that accurately and objectively quantify liver fat are needed. Ultrasound (US) and computed tomography (CT) can be used to assess liver fat but have limited accuracy as well as other limitations. Magnetic resonance (MR) techniques can decompose the liver signal into its fat and water signal components and therefore assess liver fat more directly than CT or US. Most magnetic resonance (MR) techniques measure the signal fat-fraction (the fraction of the liver MR signal attributable to liver fat), which may be confounded by numerous technical and biological factors and may not reliably reflect fat content. By addressing the factors that confound the signal fat-fraction, advanced MR techniques measure the proton density fat-fraction (the fraction of the liver proton density attributable to liver fat), which is a fundamental tissue property and a direct measure of liver fat content. These advanced techniques show promise for accurate fat quantification and are likely to be commercially available soon. PMID:22025886

  16. Perceptions of variability in facial emotion influence beliefs about the stability of psychological characteristics.

    PubMed

    Weisbuch, Max; Grunberg, Rebecca L; Slepian, Michael L; Ambady, Nalini

    2016-10-01

    Beliefs about the malleability versus stability of traits (incremental vs. entity lay theories) have a profound impact on social cognition and self-regulation, shaping phenomena that range from the fundamental attribution error and group-based stereotyping to academic motivation and achievement. Less is known about the causes than the effects of these lay theories, and in the current work the authors examine the perception of facial emotion as a causal influence on lay theories. Specifically, they hypothesized that (a) within-person variability in facial emotion signals within-person variability in traits and (b) social environments replete with within-person variability in facial emotion encourage perceivers to endorse incremental lay theories. Consistent with Hypothesis 1, Study 1 participants were more likely to attribute dynamic (vs. stable) traits to a person who exhibited several different facial emotions than to a person who exhibited a single facial emotion across multiple images. Hypothesis 2 suggests that social environments support incremental lay theories to the extent that they include many people who exhibit within-person variability in facial emotion. Consistent with Hypothesis 2, participants in Studies 2-4 were more likely to endorse incremental theories of personality, intelligence, and morality after exposure to multiple individuals exhibiting within-person variability in facial emotion than after exposure to multiple individuals exhibiting a single emotion several times. Perceptions of within-person variability in facial emotion-rather than perceptions of simple diversity in facial emotion-were responsible for these effects. Discussion focuses on how social ecologies shape lay theories. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  17. 'Proactive' use of cue-context congruence for building reinforcement learning's reward function.

    PubMed

    Zsuga, Judit; Biro, Klara; Tajti, Gabor; Szilasi, Magdolna Emma; Papp, Csaba; Juhasz, Bela; Gesztelyi, Rudolf

    2016-10-28

    Reinforcement learning is a fundamental form of learning that may be formalized using the Bellman equation. Accordingly, an agent determines the state value as the sum of the immediate reward and the discounted value of future states. Thus the value of a state is determined by agent-related attributes (action set, policy, discount factor) and by the agent's knowledge of the environment, embodied by the reward function and by hidden environmental factors given by the transition probability. The central objective of reinforcement learning is to solve these two functions outside the agent's control, either using or not using a model. In the present paper, using the proactive model of reinforcement learning, we offer insight into how the brain creates simplified representations of the environment, and how these representations are organized to support the identification of relevant stimuli and actions. Furthermore, we identify neurobiological correlates of our model by suggesting that the reward and policy functions, attributes of the Bellman equation, are built by the orbitofrontal cortex (OFC) and the anterior cingulate cortex (ACC), respectively. Based on this we propose that the OFC assesses cue-context congruence to activate the most relevant context frame. Furthermore, given the bidirectional neuroanatomical link between the OFC and model-free structures, we suggest that model-based input is incorporated into the reward prediction error (RPE) signal, and conversely that the RPE signal may be used to update the reward-related information of context frames and the policy underlying action selection in the OFC and ACC, respectively. Finally, clinical implications for cognitive behavioral interventions are discussed.
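
    For reference, the state-value form of the Bellman equation that the abstract builds on can be written in standard notation (not taken from the paper) as:

        % Bellman equation for the state value under a policy \pi
        V^{\pi}(s) = \sum_{a} \pi(a \mid s) \sum_{s'} P(s' \mid s, a)
                     \left[ R(s, a, s') + \gamma \, V^{\pi}(s') \right]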

  18. Genomic dark matter: the reliability of short read mapping illustrated by the genome mappability score.

    PubMed

    Lee, Hayan; Schatz, Michael C

    2012-08-15

    Genome resequencing and short read mapping are two of the primary tools of genomics and are used for many important applications. The current state-of-the-art in mapping uses the quality values and mapping quality scores to evaluate the reliability of the mapping. These attributes, however, are assigned to individual reads and do not directly measure the problematic repeats across the genome. Here, we present the Genome Mappability Score (GMS) as a novel measure of the complexity of resequencing a genome. The GMS is a weighted probability that any read could be unambiguously mapped to a given position and thus measures the overall composition of the genome itself. We have developed the Genome Mappability Analyzer to compute the GMS of every position in a genome. It leverages the parallelism of cloud computing to analyze large genomes, and enabled us to identify the 5-14% of the human, mouse, fly and yeast genomes that are difficult to analyze with short reads. We examined the accuracy of the widely used BWA/SAMtools polymorphism discovery pipeline in the context of the GMS, and found discovery errors are dominated by false negatives, especially in regions with poor GMS. These errors are fundamental to the mapping process and cannot be overcome by increasing coverage. As such, the GMS should be considered in every resequencing project to pinpoint the 'dark matter' of the genome, including known clinically relevant variations in these regions. The source code and profiles of several model organisms are available at http://gma-bio.sourceforge.net
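
    As a drastically simplified illustration of the mappability idea (not the GMS formula from the paper, which weights reads by base-call and mapping-quality scores), the toy sketch below scores each position of a tiny sequence by the fraction of covering read-length windows that occur exactly once in that sequence.

        from collections import Counter

        def toy_mappability(genome, read_len):
            """Toy per-position mappability: fraction of read-length windows covering a
            position that occur exactly once in the sequence. A drastic simplification
            of the GMS (no sequencing errors, no quality weighting)."""
            windows = [genome[i:i + read_len] for i in range(len(genome) - read_len + 1)]
            counts = Counter(windows)
            scores = []
            for pos in range(len(genome)):
                covering = [w for start, w in enumerate(windows)
                            if start <= pos < start + read_len]
                unique = sum(1 for w in covering if counts[w] == 1)
                scores.append(unique / len(covering) if covering else 0.0)
            return scores

        genome = "ACGTACGTTTGACGTACGTA"   # toy sequence containing a repeat
        print([round(s, 2) for s in toy_mappability(genome, read_len=5)])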

  19. ["Second victim" - error, crises and how to get out of it].

    PubMed

    von Laue, N; Schwappach, D; Hochreutener, M

    2012-06-01

    Medical errors do not only harm patients ("first victims"). Almost all health care professionals become a so-called "second victim" at some point in their career by being involved in a medical error. Studies show that error involvement can have a tremendous impact on health care workers, leading to burnout, depression and professional crisis. Moreover, persons involved in errors show a decline in job performance and therefore jeopardize patient safety. Blaming the person is one of the typical psychological reactions after an error has happened, as attribution theory tells us. Self-esteem is stabilized if we can put the blame on someone and pick out a scapegoat. But standing alone makes the emotional situation even worse. A vicious circle can evolve, with tragic effects for the individual and negative implications for patient safety and the health care setting.

  20. CAUSES: Attribution of Surface Radiation Biases in NWP and Climate Models near the U.S. Southern Great Plains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Weverberg, K.; Morcrette, C. J.; Petch, J.

    Many numerical weather prediction (NWP) and climate models exhibit too warm lower tropospheres near the mid-latitude continents. This warm bias has been extensively studied before, but evidence about its origin remains inconclusive. Some studies point to deficiencies in the deep convective or low clouds. Other studies found an important contribution from errors in the land surface properties. The warm bias has been shown to coincide with important surface radiation biases that likely play a critical role in the inception or the growth of the warm bias. Documenting these radiation errors is hence an important step towards understanding and alleviating the warm bias. This paper presents an attribution study to quantify the net radiation biases in 9 model simulations, performed in the framework of the CAUSES project (Clouds Above the United States and Errors at the Surface). Contributions from deficiencies in the surface properties, clouds, integrated water vapor (IWV) and aerosols are quantified, using an array of radiation measurement stations near the ARM SGP site. Furthermore, an in-depth analysis is shown to attribute the radiation errors to specific cloud regimes. The net surface SW radiation is overestimated (LW underestimated) in all models throughout most of the simulation period. Cloud errors are shown to contribute most to this overestimation in all but one model, which has a dominant albedo issue. Using a cloud regime analysis, it was shown that missing deep cloud events and/or simulating deep clouds with too weak cloud-radiative effects account for most of these cloud-related radiation errors. Some models have compensating errors, producing an excessive occurrence of deep cloud while largely underestimating its radiative effect, while other models miss deep cloud events altogether. Surprisingly, however, even the latter models tend to produce too much and too frequent afternoon surface precipitation. This suggests that rather than issues with the triggering of deep convection, the deep cloud problem in many models could be related to too weak convective cloud detrainment and too large precipitation efficiencies. This does not rule out that previously documented issues with the evaporative fraction contribute to the warm bias as well, since the majority of the models underestimate the surface rain rates overall, as they miss the observed large nocturnal precipitation peak.

  1. Low-stress bicycling and network connectivity.

    DOT National Transportation Integrated Search

    2012-05-01

    For a bicycling network to attract the widest possible segment of the population, its most fundamental attribute should be low-stress connectivity, that is, providing routes between people's origins and destinations that do not require cyclists to ...

  2. Improving the quality of cognitive screening assessments: ACEmobile, an iPad-based version of the Addenbrooke's Cognitive Examination-III.

    PubMed

    Newman, Craig G J; Bevins, Adam D; Zajicek, John P; Hodges, John R; Vuillermoz, Emil; Dickenson, Jennifer M; Kelly, Denise S; Brown, Simona; Noad, Rupert F

    2018-01-01

    Ensuring reliable administration and reporting of cognitive screening tests is fundamental to establishing good clinical practice and research. This study captured the rate and type of errors in clinical practice, using the Addenbrooke's Cognitive Examination-III (ACE-III), and then the reduction in error rate using a computerized alternative, the ACEmobile app. In study 1, we evaluated ACE-III assessments completed in National Health Service (NHS) clinics (n = 87) for administrator error. In study 2, ACEmobile and ACE-III were then evaluated for their ability to capture accurate measurement. In study 1, 78% of clinically administered ACE-IIIs were either scored incorrectly or had arithmetical errors. In study 2, error rates seen in the ACE-III were reduced by 85%-93% using ACEmobile. Error rates are ubiquitous in routine clinical use of cognitive screening tests and the ACE-III. ACEmobile provides a framework for reducing administration, scoring, and arithmetical errors during cognitive screening.

  3. Simulating a transmon implementation of the surface code, Part I

    NASA Astrophysics Data System (ADS)

    Tarasinski, Brian; O'Brien, Thomas; Rol, Adriaan; Bultink, Niels; Dicarlo, Leo

    Current experimental efforts aim to realize Surface-17, a distance-3 surface-code logical qubit, using transmon qubits in a circuit QED architecture. Following experimental proposals for this device, and currently achieved fidelities on physical qubits, we define a detailed error model that takes experimentally relevant error sources into account, such as amplitude and phase damping, imperfect gate pulses, and coherent errors due to low-frequency flux noise. Using the GPU-accelerated software package 'quantumsim', we simulate the density matrix evolution of the logical qubit under this error model. Combining the simulation results with a minimum-weight matching decoder, we obtain predictions for the error rate of the resulting logical qubit when used as a quantum memory, and estimate the contribution of different error sources to the logical error budget. Research funded by the Foundation for Fundamental Research on Matter (FOM), the Netherlands Organization for Scientific Research (NWO/OCW), IARPA, an ERC Synergy Grant, the China Scholarship Council, and Intel Corporation.

  4. Efficient Variational Quantum Simulator Incorporating Active Error Minimization

    NASA Astrophysics Data System (ADS)

    Li, Ying; Benjamin, Simon C.

    2017-04-01

    One of the key applications for quantum computers will be the simulation of other quantum systems that arise in chemistry, materials science, etc., in order to accelerate the process of discovery. It is important to ask the following question: Can this simulation be achieved using near-future quantum processors, of modest size and under imperfect control, or must it await the more distant era of large-scale fault-tolerant quantum computing? Here, we propose a variational method involving closely integrated classical and quantum coprocessors. We presume that all operations in the quantum coprocessor are prone to error. The impact of such errors is minimized by boosting them artificially and then extrapolating to the zero-error case. In comparison to a more conventional optimized Trotterization technique, we find that our protocol is efficient and appears to be fundamentally more robust against error accumulation.
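
    The boost-and-extrapolate idea can be sketched in a few lines: evaluate the same expectation value at several artificially amplified noise levels and fit back to the zero-noise limit. The sketch below uses a synthetic noisy-expectation placeholder and a first-order (linear) extrapolation; the actual protocol and error model of the paper are not reproduced.

        import numpy as np

        rng = np.random.default_rng(1)

        def noisy_expectation(scale, exact=0.73, slope=-0.12):
            """Placeholder for a measured expectation value whose error grows
            roughly linearly with the artificial noise-boost factor `scale`,
            plus a little shot noise. Not a real quantum measurement."""
            return exact + slope * scale + rng.normal(scale=0.005)

        scales = np.array([1.0, 1.5, 2.0, 3.0])           # assumed noise-boost factors
        values = np.array([noisy_expectation(s) for s in scales])

        # First-order (linear) extrapolation back to the zero-noise limit.
        coeffs = np.polyfit(scales, values, deg=1)
        zero_noise_estimate = np.polyval(coeffs, 0.0)
        print(f"value at nominal noise (scale 1): {values[0]:.4f}")
        print(f"extrapolated zero-noise estimate: {zero_noise_estimate:.4f}")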

  5. Subversion: The Neglected Aspect of Computer Security.

    DTIC Science & Technology

    1980-06-01

    fundamentally flawed. Recall from mathematics that it is sufficient to disprove a proposition (e.g., that a system is secure) by showing only one example where...made. This lack of protection is one of the fundamental reasons why the subversion of computer systems can be so effective. Later chapters will amplify...an area of code that will not be liable to revision. Operating system software, as pointed out earlier, is often riddled with design errors or subject

  6. Active control of fan noise from a turbofan engine

    NASA Technical Reports Server (NTRS)

    Thomas, Russell H.; Burdisso, Ricardo A.; Fuller, Christopher R.; O'Brien, Walter F.

    1994-01-01

    A three-channel active control system is applied to an operational turbofan engine to reduce tonal noise produced by both the fan and the high-pressure compressor. The control approach is the feedforward filtered-x least-mean-square algorithm implemented on a digital signal processing board. Reference transducers mounted on the engine case provide blade passing and harmonics frequency information to the controller. Error information is provided by large area microphones placed in the acoustic far field. To minimize the error signal, the controller actuates loudspeakers mounted on the inlet to produce destructive interference. The sound pressure level of the fundamental tone of the fan was reduced using the three-channel controller by up to 16 dB over a +/- 30-deg angle about the engine axis. A single-channel controller could produce reduction over a +/- 15-deg angle. The experimental results show the control to be robust. Outside of the areas controlled, the levels of the tone actually increased due to the generation of radial modes by the control sources. Simultaneous control of two tones is achieved with parallel controllers. The fundamental and the first harmonic tones of the fan were controlled simultaneously with reductions of 12 and 5 dBA, respectively, measured on the engine axis. Simultaneous control was also demonstrated for the fan fundamental and the high-pressure compressor fundamental tones.
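
    A single-channel version of the feedforward filtered-x LMS update mentioned above can be sketched as follows: the reference tone is filtered through an estimate of the secondary (loudspeaker-to-error-microphone) path before being used to adapt the control filter. The path models, tone frequency and step size below are illustrative assumptions, not the engine-test values.

        import numpy as np

        fs = 8000.0                                   # sample rate in Hz (assumed)
        f_tone = 700.0                                # assumed blade-passing frequency (Hz)
        n = 20000
        t = np.arange(n) / fs
        x = np.sin(2 * np.pi * f_tone * t)            # reference signal (tachometer-like tone)

        primary = np.array([0.0, 0.6, 0.3, 0.1])      # assumed primary (fan-to-microphone) path
        secondary = np.array([0.5, 0.25])             # assumed secondary (speaker-to-microphone) path
        sec_hat = secondary.copy()                    # assume a perfect secondary-path estimate

        L, mu = 32, 0.01                              # control filter length and step size
        w = np.zeros(L)                               # adaptive FIR control filter
        x_buf = np.zeros(L)                           # reference history for the control filter
        fx_buf = np.zeros(L)                          # filtered-reference history
        y_buf = np.zeros(len(secondary))              # control-output history
        xs_buf = np.zeros(len(sec_hat))               # reference history for the path estimate

        d = np.convolve(x, primary)[:n]               # tonal disturbance at the error microphone
        e_hist = np.zeros(n)
        for k in range(n):
            x_buf = np.roll(x_buf, 1); x_buf[0] = x[k]
            y = w @ x_buf                             # anti-noise sample from the control filter
            y_buf = np.roll(y_buf, 1); y_buf[0] = y
            e = d[k] + secondary @ y_buf              # residual at the error microphone
            xs_buf = np.roll(xs_buf, 1); xs_buf[0] = x[k]
            fx_buf = np.roll(fx_buf, 1); fx_buf[0] = sec_hat @ xs_buf
            w -= mu * e * fx_buf                      # filtered-x LMS weight update
            e_hist[k] = e

        print(f"mean |error|, first 1000 samples: {np.abs(e_hist[:1000]).mean():.3f}")
        print(f"mean |error|, last 1000 samples:  {np.abs(e_hist[-1000:]).mean():.3f}")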

  7. Error, blame, and the law in health care--an antipodean perspective.

    PubMed

    Runciman, William B; Merry, Alan F; Tito, Fiona

    2003-06-17

    Patients are frequently harmed by problems arising from the health care process itself. Addressing these problems requires understanding the role of errors, violations, and system failures in their genesis. Problem-solving is inhibited by a tendency to blame those involved, often inappropriately. This has been aggravated by the need to attribute blame before compensation can be obtained through tort and the human failing of attributing blame simply because there has been a serious outcome. Blaming and punishing for errors that are made by well-intentioned people working in the health care system drives the problem of iatrogenic harm underground and alienates people who are best placed to prevent such problems from recurring. On the other hand, failure to assign blame when it is due is also undesirable and erodes trust in the medical profession. Understanding the distinction between blameworthy behavior and inevitable human errors and appreciating the systemic factors that underlie most failures in complex systems are essential for the response to a harmed patient to be informed, fair, and effective in improving safety. It is important to meet society's needs to blame and exact retribution when appropriate. However, this should not be a prerequisite for compensation, which should be appropriately structured, fair, timely, and, ideally, properly funded as an intrinsic part of health care and social security systems.

  8. Residue-Specific α-Helix Propensities from Molecular Simulation

    PubMed Central

    Best, Robert B.; de Sancho, David; Mittal, Jeetain

    2012-01-01

    Formation of α-helices is a fundamental process in protein folding and assembly. By studying helix formation in molecular simulations of a series of alanine-based peptides, we obtain the temperature-dependent α-helix propensities of all 20 naturally occurring residues with two recent additive force fields, Amber ff03w and Amber ff99SB∗. Encouragingly, we find that the overall helix propensity of many residues is captured well by both energy functions, with Amber ff99SB∗ being more accurate. Nonetheless, there are some residues that deviate considerably from experiment, which can be attributed to two aspects of the energy function: i), variations of the charge model used to determine the atomic partial charges, with residues whose backbone charges differ most from alanine tending to have the largest error; ii), side-chain torsion potentials, as illustrated by the effect of modifications to the torsion angles of I, L, D, N. We find that constrained refitting of residue charges for charged residues in Amber ff99SB∗ significantly improves their helix propensity. The resulting parameters should more faithfully reproduce helix propensities in simulations of protein folding and disordered proteins. PMID:22455930

  9. Fault Modeling of Extreme Scale Applications Using Machine Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vishnu, Abhinav; Dam, Hubertus van; Tallent, Nathan R.

    Faults are commonplace in large scale systems. These systems experience a variety of faults such as transient, permanent and intermittent faults. Multi-bit faults are typically not corrected by the hardware, resulting in an error. This paper attempts to answer an important question: given a multi-bit fault in main memory, will it result in an application error, in which case a recovery algorithm should be invoked, or can it be safely ignored? We propose an application fault modeling methodology to answer this question. Given a fault signature (a set of attributes comprising system and application state), we use machine learning to create a model which predicts whether a multi-bit permanent/transient main memory fault will likely result in an error. We present the design elements, such as the fault injection methodology for covering important data structures, the application and system attributes which should be used for learning the model, the supervised learning algorithms (and potentially ensembles), and important metrics. Lastly, we use three applications, NWChem, LULESH and SVM, as examples to demonstrate the effectiveness of the proposed fault modeling methodology.

  10. Fault Modeling of Extreme Scale Applications Using Machine Learning

    DOE PAGES

    Vishnu, Abhinav; Dam, Hubertus van; Tallent, Nathan R.; ...

    2016-05-01

    Faults are commonplace in large scale systems. These systems experience a variety of faults such as transient, permanent and intermittent faults. Multi-bit faults are typically not corrected by the hardware, resulting in an error. This paper attempts to answer an important question: given a multi-bit fault in main memory, will it result in an application error, in which case a recovery algorithm should be invoked, or can it be safely ignored? We propose an application fault modeling methodology to answer this question. Given a fault signature (a set of attributes comprising system and application state), we use machine learning to create a model which predicts whether a multi-bit permanent/transient main memory fault will likely result in an error. We present the design elements, such as the fault injection methodology for covering important data structures, the application and system attributes which should be used for learning the model, the supervised learning algorithms (and potentially ensembles), and important metrics. Lastly, we use three applications, NWChem, LULESH and SVM, as examples to demonstrate the effectiveness of the proposed fault modeling methodology.
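
    The core modeling step described in these two records, learning a classifier that maps a fault signature to a predicted outcome, can be sketched with any off-the-shelf supervised learner. The sketch below uses gradient-boosted trees on synthetic placeholder signatures; the real attributes and injection campaigns (NWChem, LULESH, SVM) are not reproduced here.

        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.model_selection import cross_val_score

        # Synthetic fault signatures: columns stand in for attributes such as which
        # data structure was hit, how many bits flipped, and the application phase
        # at injection time (all placeholders, not the paper's attribute set).
        rng = np.random.default_rng(0)
        X = rng.normal(size=(5000, 12))
        # Placeholder ground truth: whether the injected fault produced an error.
        y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=5000) > 0).astype(int)

        clf = GradientBoostingClassifier(random_state=0)
        auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
        print(f"cross-validated AUC of the fault-outcome predictor: {auc:.3f}")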

  11. Counteracting structural errors in ensemble forecast of influenza outbreaks.

    PubMed

    Pei, Sen; Shaman, Jeffrey

    2017-10-13

    For influenza forecasts generated using dynamical models, forecast inaccuracy is partly attributable to the nonlinear growth of error. As a consequence, quantification of the nonlinear error structure in current forecast models is needed so that this growth can be corrected and forecast skill improved. Here, we inspect the error growth of a compartmental influenza model and find that a robust error structure arises naturally from the nonlinear model dynamics. By counteracting these structural errors, diagnosed using error breeding, we develop a new forecast approach that combines dynamical error correction and statistical filtering techniques. In retrospective forecasts of historical influenza outbreaks for 95 US cities from 2003 to 2014, overall forecast accuracy for outbreak peak timing, peak intensity and attack rate, are substantially improved for predicted lead times up to 10 weeks. This error growth correction method can be generalized to improve the forecast accuracy of other infectious disease dynamical models.Inaccuracy of influenza forecasts based on dynamical models is partly due to nonlinear error growth. Here the authors address the error structure of a compartmental influenza model, and develop a new improved forecast approach combining dynamical error correction and statistical filtering techniques.

  12. Sensor Technologies for Particulate Detection and Characterization

    NASA Technical Reports Server (NTRS)

    Greenberg, Paul S.

    2008-01-01

    Planned Lunar missions have resulted in renewed attention to problems attributable to fine particulates. While the difficulties experienced during the sequence of Apollo missions did not prove critical in all cases, the comparatively long duration of impending missions may present a different situation. This situation creates the need for a spectrum of particulate sensing technologies. From a fundamental perspective, an improved understanding of the properties of the dust fraction is required. Described here is laboratory-based reference instrumentation for the measurement of fundamental particle size distribution (PSD) functions from 2.5 nanometers to 20 micrometers. Concomitant efforts for separating samples into fractional size bins are also presented. A requirement also exists for developing mission compatible sensors. Examples include provisions for air quality monitoring in spacecraft and remote habitation modules. Required sensor attributes such as low mass, volume, and power consumption, autonomy of operation, and extended reliability cannot be accommodated by existing technologies.

  13. Hessian matrix approach for determining error field sensitivity to coil deviations

    NASA Astrophysics Data System (ADS)

    Zhu, Caoxiang; Hudson, Stuart R.; Lazerson, Samuel A.; Song, Yuntao; Wan, Yuanxi

    2018-05-01

    The presence of error fields has been shown to degrade plasma confinement and drive instabilities. Error fields can arise from many sources, but are predominantly attributed to deviations in the coil geometry. In this paper, we introduce a Hessian matrix approach for determining error field sensitivity to coil deviations. A primary cost function used for designing stellarator coils, the surface integral of normalized normal field errors, was adopted to evaluate the deviation of the generated magnetic field from the desired magnetic field. The FOCUS code (Zhu et al 2018 Nucl. Fusion 58 016008) is utilized to provide fast and accurate calculations of the Hessian. The sensitivities of error fields to coil displacements are then determined by the eigenvalues of the Hessian matrix. A proof-of-principle example is given on a CNT-like configuration. We anticipate that this new method could provide information to avoid dominant coil misalignments and simplify coil designs for stellarators.
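
    The final step described above, ranking coil-deviation directions by sensitivity, reduces to an eigen-decomposition of the Hessian of the error-field cost. The sketch below does this for a random symmetric placeholder matrix; a real Hessian would come from a FOCUS calculation, not from this code.

        import numpy as np

        rng = np.random.default_rng(0)

        # Placeholder Hessian of the normalized normal-field error with respect to
        # N coil-geometry parameters (a FOCUS run would supply the real matrix).
        N = 24
        A = rng.normal(size=(N, N))
        H = A + A.T                               # symmetric stand-in for d2f / dx_i dx_j

        eigvals, eigvecs = np.linalg.eigh(H)      # eigenvalues in ascending order
        order = np.argsort(np.abs(eigvals))[::-1]

        print("most sensitive direction (largest |eigenvalue|):")
        print("  eigenvalue:", round(float(eigvals[order[0]]), 3))
        print("  coil-parameter combination:", np.round(eigvecs[:, order[0]], 2))
        print("least sensitive direction (smallest |eigenvalue|):")
        print("  eigenvalue:", round(float(eigvals[order[-1]]), 3))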

  14. Risk behaviours for organism transmission in health care delivery-A two month unstructured observational study.

    PubMed

    Lindberg, Maria; Lindberg, Magnus; Skytt, Bernice

    2017-05-01

    Errors in infection control practices put patient safety at risk, and the probability of errors can increase when care practices become more multifaceted. It is therefore fundamental to track risk behaviours and potential errors in various care situations. The aim of this study was to describe care situations involving risk behaviours for organism transmission that could lead to subsequent healthcare-associated infections. Unstructured nonparticipant observations were performed at three medical wards. Healthcare personnel (n=27) were shadowed, for a total of 39 h, on randomly selected weekdays between 7:30 am and 12 noon. Content analysis was used to inductively categorize activities into tasks and, based on their character, into groups. Risk behaviours for organism transmission were deductively classified into types of errors. A multiple response crosstabs procedure was used to visualize the number and proportion of errors in tasks. One-way ANOVA with Bonferroni post hoc tests was used to determine differences among the three groups of activities. The qualitative findings give an understanding that risk behaviours for organism transmission go beyond the five moments of hand hygiene and also include the handling and placement of materials and equipment. The tasks with the highest percentage of errors were 'personal hygiene', 'elimination' and 'dressing/wound care'. The most common types of errors in all identified tasks were 'hand disinfection', 'glove usage', and 'placement of materials'. Significantly more errors (p<0.0001) were observed the more multifaceted (single, combined or interrupted) the activity was. The numbers and types of errors, as well as the character of activities performed in care situations described in this study, confirm the need to improve current infection control practices. It is fundamental that healthcare personnel practice good hand hygiene; however, effective preventive hygiene is complex in healthcare activities due to the multifaceted care situations, especially when activities are interrupted. A deeper understanding of infection control practices that goes beyond the sense of security provided by hand disinfection and glove use is needed, as materials and surfaces in the care environment might be contaminated and thus pose a risk for organism transmission. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Assessing student understanding of measurement and uncertainty

    NASA Astrophysics Data System (ADS)

    Jirungnimitsakul, S.; Wattanakasiwich, P.

    2017-09-01

    The objectives of this study were to develop a test of, and to assess, student understanding of measurement and uncertainty. A test was adapted and translated from the Laboratory Data Analysis Instrument (LDAI); it consists of 25 questions focused on three topics: measures of central tendency, experimental errors and uncertainties, and fitting regression lines. Its content validity was evaluated by three experts in physics laboratory teaching. In the pilot study, the Thai LDAI was administered to 93 freshmen enrolled in a fundamental physics laboratory course. The final draft of the test was administered to three groups: 45 freshmen taking fundamental physics laboratory, 16 sophomores taking intermediate physics laboratory, and 21 juniors taking advanced physics laboratory at Chiang Mai University. We found that the freshmen had difficulties with experimental errors and uncertainties. Most students had problems with fitting regression lines. These results will be used to improve the teaching and learning of laboratory physics for physics students in the department.
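
    A minimal sketch of the "fitting regression lines" topic covered by the test, using made-up measurement data: a least-squares straight-line fit with parameter uncertainties taken from the fit covariance matrix.

    ```python
    import numpy as np

    # Hypothetical laboratory measurements (e.g. y vs. x from a linear relation)
    x = np.array([0.32, 0.45, 0.55, 0.63, 0.71])
    y = np.array([0.64, 0.91, 1.09, 1.27, 1.41])

    # Least-squares straight-line fit y = m*x + c with parameter uncertainties
    (m, c), cov = np.polyfit(x, y, deg=1, cov=True)
    m_err, c_err = np.sqrt(np.diag(cov))

    print(f"slope     m = {m:.3f} +/- {m_err:.3f}")
    print(f"intercept c = {c:.3f} +/- {c_err:.3f}")
    ```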

  16. Correlation techniques to determine model form in robust nonlinear system realization/identification

    NASA Technical Reports Server (NTRS)

    Stry, Greselda I.; Mook, D. Joseph

    1991-01-01

    The fundamental challenge in identification of nonlinear dynamic systems is determining the appropriate form of the model. A robust technique is presented which essentially eliminates this problem for many applications. The technique is based on the Minimum Model Error (MME) optimal estimation approach. A detailed literature review is included in which fundamental differences between the current approach and previous work are described. The most significant feature is the ability to identify nonlinear dynamic systems without prior assumptions regarding the form of the nonlinearities, in contrast to existing nonlinear identification approaches, which usually require detailed assumptions about the nonlinearities. Model form is determined via statistical correlation of the MME optimal state estimates with the MME optimal model error estimates. The example illustrations indicate that the method is robust with respect to prior ignorance of the model, and with respect to measurement noise, measurement frequency, and measurement record length.
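
    A minimal sketch of the model-form idea described above, assuming that state and model-error estimates are already available (here they are simulated stand-ins, not output of the MME estimator): the candidate basis function most strongly correlated with the model-error estimate suggests the functional form of the missing nonlinearity.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Suppose an estimator has produced state estimates x(t) and model-error
    # estimates d(t) for a system whose true unmodeled term is -0.5*x**3.
    t = np.linspace(0.0, 10.0, 500)
    x = np.sin(t) + 0.3 * np.sin(3 * t)                       # stand-in state estimates
    d = -0.5 * x**3 + 0.02 * rng.standard_normal(t.size)      # stand-in model-error estimates

    # Candidate basis functions for the unknown nonlinearity
    candidates = {"x": x, "x^2": x**2, "x^3": x**3, "sin(x)": np.sin(x), "|x|": np.abs(x)}

    # The candidate most strongly correlated with the model-error estimate
    # indicates the likely form of the missing dynamics.
    for name, g in candidates.items():
        r = np.corrcoef(d, g)[0, 1]
        print(f"corr(d, {name:6s}) = {r:+.3f}")
    ```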

  17. Rank and independence in contingency table

    NASA Astrophysics Data System (ADS)

    Tsumoto, Shusaku

    2004-04-01

    A contingency table summarizes the conditional frequencies of two attributes and shows how these two attributes are dependent on each other. Thus, this table is a fundamental tool for pattern discovery with conditional probabilities, such as rule discovery. In this paper, a contingency table is interpreted from the viewpoint of statistical independence and granular computing. The first important observation is that a contingency table compares two attributes with respect to the number of equivalence classes. For example, an n × n table compares two attributes with the same granularity, while an m × n (m ≥ n) table compares two attributes with different granularities. The second important observation is that matrix algebra is a key tool for the analysis of this table. In particular, the rank of the table plays a very important role in evaluating the degree of statistical independence. Relations between rank and the degree of dependence are also investigated.
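
    A minimal sketch of the rank observation: under exact statistical independence the table of joint frequencies is the outer product of its marginals and therefore has rank 1, whereas a dependent table has higher rank. The tables below are made up.

    ```python
    import numpy as np

    # Joint frequency table of two attributes under exact independence:
    # each cell is (row total) * (column total) / N, so the matrix has rank 1.
    row_marginal = np.array([30, 50, 20])
    col_marginal = np.array([40, 60])
    independent = np.outer(row_marginal, col_marginal) / 100.0

    # A dependent table: frequencies are not an outer product of the marginals.
    dependent = np.array([[25,  5],
                          [10, 40],
                          [ 5, 15]])

    for name, table in [("independent", independent), ("dependent", dependent)]:
        r = np.linalg.matrix_rank(table)
        print(f"{name} table: rank = {r} (rank 1 indicates statistical independence)")
    ```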

  18. Air Force Operational Test and Evaluation Center, Volume 2, Number 2

    DTIC Science & Technology

    1988-01-01

    the special class of attributes are recorded, cost or benefit. In place of the normalization (1), we propose the following normalization ... comprehensive set of modular test tools designed to provide flexible data reduction, with basic data flow to meet requirements at test start, then building to ... where possible, a combination of the two position error measurement techniques is used. SLR is a method of fitting a linear model to accumulate a position error

  19. Situation and person attributions under spontaneous and intentional instructions: an fMRI study

    PubMed Central

    Kestemont, Jenny; Vandekerckhove, Marie; Ma, Ning; Van Hoeck, Nicole

    2013-01-01

    This functional magnetic resonance imaging (fMRI) study explores how observers form causal attributions of an event in terms of the person or the situation. Thirty-four participants read short descriptions of social events that implied either the person or the situation as the cause. Half of them were explicitly instructed to judge whether the event was caused by something about the person or the situation (intentional inferences), whereas the other half were instructed simply to read the material carefully (spontaneous inferences). The results showed common activation in areas related to mentalizing across all types of causes and instructions (posterior superior temporal sulcus, temporo-parietal junction, precuneus). However, the medial prefrontal cortex was activated only under spontaneous instructions, not under intentional instructions. This suggests a bias toward person attributions (e.g. the fundamental attribution bias). Complementary to this, intentional situation attributions activated a stronger and more extended network than intentional person attributions, suggesting that situation attributions require more controlled, extended and broader processing of the information. PMID:22345370

  20. Simulating a transmon implementation of the surface code, Part II

    NASA Astrophysics Data System (ADS)

    O'Brien, Thomas; Tarasinski, Brian; Rol, Adriaan; Bultink, Niels; Fu, Xiang; Criger, Ben; Dicarlo, Leonardo

    The majority of quantum error correcting circuit simulations use Pauli error channels, as they can be efficiently calculated. This raises two questions: what is the effect of more complicated physical errors on the logical qubit error rate, and how much more efficient can decoders become when accounting for realistic noise? To answer these questions, we design a minimum-weight perfect-matching decoder parametrized by a physically motivated noise model and test it on the full density matrix simulation of Surface-17, a distance-3 surface code. We compare performance against other decoders for a range of physical parameters. Particular attention is paid to realistic sources of error for transmon qubits in a circuit QED architecture, and to the requirements for real-time decoding via an FPGA. Research funded by the Foundation for Fundamental Research on Matter (FOM), the Netherlands Organization for Scientific Research (NWO/OCW), IARPA, an ERC Synergy Grant, the China Scholarship Council, and Intel Corporation.
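
    A minimal sketch of the minimum-weight perfect-matching primitive that such decoders rely on, using a toy one-dimensional chain of syndrome defects and networkx. It is not the paper's density-matrix simulation, and the edge weights here are plain distances rather than the physically motivated weights described in the record.

    ```python
    import networkx as nx

    # Syndrome defects (flipped stabilizers) at 1-D positions along a repetition-code
    # style chain; the decoder pairs them with minimum total weight, where the weight
    # of a pair is the number of data qubits between the two defects.
    defects = [1, 4, 6, 9]

    G = nx.Graph()
    for i, a in enumerate(defects):
        for j, b in enumerate(defects):
            if i < j:
                # networkx maximizes weight, so negative distances give a
                # minimum-weight perfect matching.
                G.add_edge(i, j, weight=-abs(a - b))

    matching = nx.max_weight_matching(G, maxcardinality=True)
    correction = [(defects[min(i, j)], defects[max(i, j)]) for i, j in matching]
    print("matched defect pairs (correction applied between them):", sorted(correction))
    ```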

  1. Dissipative quantum error correction and application to quantum sensing with trapped ions.

    PubMed

    Reiter, F; Sørensen, A S; Zoller, P; Muschik, C A

    2017-11-28

    Quantum-enhanced measurements hold the promise to improve high-precision sensing ranging from the definition of time standards to the determination of fundamental constants of nature. However, quantum sensors lose their sensitivity in the presence of noise. To protect them, the use of quantum error-correcting codes has been proposed. Trapped ions are an excellent technological platform for both quantum sensing and quantum error correction. Here we present a quantum error correction scheme that harnesses dissipation to stabilize a trapped-ion qubit. In our approach, always-on couplings to an engineered environment protect the qubit against spin-flips or phase-flips. Our dissipative error correction scheme operates in a continuous manner without the need to perform measurements or feedback operations. We show that the resulting enhanced coherence time translates into a significantly enhanced precision for quantum measurements. Our work constitutes a stepping stone towards the paradigm of self-correcting quantum information processing.

  2. Correcting For Seed-Particle Lag In LV Measurements

    NASA Technical Reports Server (NTRS)

    Jones, Gregory S.; Gartrell, Luther R.; Kamemoto, Derek Y.

    1994-01-01

    Two experiments were conducted to evaluate the effects of seed-particle size on errors in LV measurements of mean flows. Both theoretical and conventional experimental methods were used to evaluate the errors. The first experiment focused on measurement of the decelerating stagnation streamline of low-speed flow around a circular cylinder with a two-dimensional afterbody. The second was performed in transonic flow and involved measurement of the decelerating stagnation streamline of a hemisphere with a cylindrical afterbody. It was concluded that mean-quantity LV measurements are subject to large errors directly attributable to particle size. Predictions of particle-response theory showed good agreement with experimental results, indicating that the velocity-error-correction technique used in the study is viable for increasing the accuracy of laser velocimetry measurements. The technique is simple and useful in any research facility in which flow velocities are measured.

  3. Using wide area differential GPS to improve total system error for precision flight operations

    NASA Astrophysics Data System (ADS)

    Alter, Keith Warren

    Total System Error (TSE) refers to an aircraft's total deviation from the desired flight path. TSE can be divided into Navigational System Error (NSE), the error attributable to the aircraft's navigation system, and Flight Technical Error (FTE), the error attributable to pilot or autopilot control. Improvement in either NSE or FTE reduces TSE and leads to the capability to fly more precise flight trajectories. The Federal Aviation Administration's Wide Area Augmentation System (WAAS) became operational for non-safety critical applications in 2000 and will become operational for safety critical applications in 2002. This navigation service will provide precise 3-D positioning (demonstrated to better than 5 meters horizontal and vertical accuracy) for civil aircraft in the United States. Perhaps more importantly, this navigation system, which provides continuous operation across large regions, enables new flight instrumentation concepts which allow pilots to fly aircraft significantly more precisely, both for straight and curved flight paths. This research investigates the capabilities of some of these new concepts, including the Highway-In-The Sky (HITS) display, which not only improves FTE but also reduces pilot workload when compared to conventional flight instrumentation. Augmentation to the HITS display, including perspective terrain and terrain alerting, improves pilot situational awareness. Flight test results from demonstrations in Juneau, AK, and Lake Tahoe, CA, provide evidence of the overall feasibility of integrated, low-cost flight navigation systems based on these concepts. These systems, requiring no more computational power than current-generation low-end desktop computers, have immediate applicability to general aviation flight from Cessnas to business jets and can support safer and ultimately more economical flight operations. Commercial airlines may also, over time, benefit from these new technologies.

  4. Generalized site occupancy models allowing for false positive and false negative errors

    USGS Publications Warehouse

    Royle, J. Andrew; Link, W.A.

    2006-01-01

    Site occupancy models have been developed that allow for imperfect species detection or 'false negative' observations. Such models have become widely adopted in surveys of many taxa. The most fundamental assumption underlying these models is that 'false positive' errors are not possible. That is, one cannot detect a species where it does not occur. However, such errors are possible in many sampling situations for a number of reasons, and even low false positive error rates can induce extreme bias in estimates of site occupancy when they are not accounted for. In this paper, we develop a model for site occupancy that allows for both false negative and false positive error rates. This model can be represented as a two-component finite mixture model and can be easily fitted using freely available software. We provide an analysis of avian survey data using the proposed model and present results of a brief simulation study evaluating the performance of the maximum-likelihood estimator and the naive estimator in the presence of false positive errors.
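
    A minimal sketch of the two-component mixture likelihood described above, fitted by maximum likelihood on simulated detection histories. It assumes, as is usual, that the true-positive detection rate exceeds the false-positive rate so the mixture is identifiable; parameter values and data are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import binom

    rng = np.random.default_rng(1)

    # Simulated detection histories: y_i detections out of J visits at each site.
    n_sites, J = 200, 5
    psi_true, p11_true, p10_true = 0.6, 0.7, 0.1   # occupancy, detection, false-positive rates
    occupied = rng.random(n_sites) < psi_true
    y = np.where(occupied,
                 rng.binomial(J, p11_true, n_sites),
                 rng.binomial(J, p10_true, n_sites))

    def neg_log_lik(params):
        """Two-component mixture: detections arise at rate p11 at occupied sites
        and at rate p10 (false positives) at unoccupied sites."""
        psi, p11, p10 = 1.0 / (1.0 + np.exp(-params))   # logit scale -> probabilities
        lik = psi * binom.pmf(y, J, p11) + (1.0 - psi) * binom.pmf(y, J, p10)
        return -np.sum(np.log(lik + 1e-300))

    # Start near p11 > p10, the usual constraint that identifies the mixture.
    x0 = np.array([0.0, 0.85, -2.2])
    fit = minimize(neg_log_lik, x0, method="Nelder-Mead")
    psi_hat, p11_hat, p10_hat = 1.0 / (1.0 + np.exp(-fit.x))
    print(f"psi = {psi_hat:.2f}, p11 = {p11_hat:.2f}, p10 = {p10_hat:.2f}")
    ```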

  5. Error-tradeoff and error-disturbance relations for incompatible quantum measurements.

    PubMed

    Branciard, Cyril

    2013-04-23

    Heisenberg's uncertainty principle is one of the main tenets of quantum theory. Nevertheless, and despite its fundamental importance for our understanding of quantum foundations, there has been some confusion in its interpretation: Although Heisenberg's first argument was that the measurement of one observable on a quantum state necessarily disturbs another incompatible observable, standard uncertainty relations typically bound the indeterminacy of the outcomes when either one or the other observable is measured. In this paper, we quantify precisely Heisenberg's intuition. Even if two incompatible observables cannot be measured together, one can still approximate their joint measurement, at the price of introducing some errors with respect to the ideal measurement of each of them. We present a tight relation characterizing the optimal tradeoff between the error on one observable vs. the error on the other. As a particular case, our approach allows us to characterize the disturbance of an observable induced by the approximate measurement of another one; we also derive a stronger error-disturbance relation for this scenario.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elliott, C.J.; McVey, B.; Quimby, D.C.

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  7. Relative entropy as a universal metric for multiscale errors

    NASA Astrophysics Data System (ADS)

    Chaimovich, Aviel; Shell, M. Scott

    2010-06-01

    We show that the relative entropy, Srel , suggests a fundamental indicator of the success of multiscale studies, in which coarse-grained (CG) models are linked to first-principles (FP) ones. We demonstrate that Srel inherently measures fluctuations in the differences between CG and FP potential energy landscapes, and develop a theory that tightly and generally links it to errors associated with coarse graining. We consider two simple case studies substantiating these results, and suggest that Srel has important ramifications for evaluating and designing coarse-grained models.

  8. Relative entropy as a universal metric for multiscale errors.

    PubMed

    Chaimovich, Aviel; Shell, M Scott

    2010-06-01

    We show that the relative entropy, Srel, suggests a fundamental indicator of the success of multiscale studies, in which coarse-grained (CG) models are linked to first-principles (FP) ones. We demonstrate that Srel inherently measures fluctuations in the differences between CG and FP potential energy landscapes, and develop a theory that tightly and generally links it to errors associated with coarse graining. We consider two simple case studies substantiating these results, and suggest that Srel has important ramifications for evaluating and designing coarse-grained models.
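
    A minimal sketch of the relative-entropy idea on a toy discrete system: Boltzmann-weighted first-principles (FP) and coarse-grained (CG) ensembles are compared via S_rel = sum p_FP ln(p_FP / p_CG), which grows with the fluctuations in the FP-CG energy differences. The energies are synthetic, and the one-to-one state mapping is an assumption of the sketch.

    ```python
    import numpy as np

    beta = 1.0
    rng = np.random.default_rng(2)

    # Toy system: FP energies of 1000 microstates, and a CG model whose energies
    # differ from FP by configuration-dependent errors dU.
    U_fp = rng.normal(0.0, 1.0, 1000)
    dU = 0.3 * rng.normal(0.0, 1.0, 1000)     # CG - FP energy differences
    U_cg = U_fp + dU

    def boltzmann(U):
        w = np.exp(-beta * (U - U.min()))
        return w / w.sum()

    p_fp, p_cg = boltzmann(U_fp), boltzmann(U_cg)

    # Relative entropy of the FP ensemble with respect to the CG ensemble
    S_rel = np.sum(p_fp * np.log(p_fp / p_cg))
    print(f"S_rel = {S_rel:.4f}")
    ```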

  9. Implementation and simulations of the sphere solution in FAST

    NASA Astrophysics Data System (ADS)

    Murgolo, F. P.; Schirone, M. G.; Lattanzi, M.; Bernacca, P. L.

    1989-06-01

    The details of the implementation of the sphere solution software in the Fundamental Astronomy by Space Techniques (FAST) consortium, are described. The simulation results for realistic data sets, both with and without grid-step errors are given. Expected errors on the astrometric parameters of the primary stars and the precision of the reference great circle zero points, are provided as a function of mission duration. The design matrix, the diagrams of the context processor and the processors experimental results are given.

  10. 41 CFR 101-26.310 - Ordering errors.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... Exceptions may be granted on a case-by-case basis when GSA is in need of the material and extenuating.... However, when the condition is attributable to carrier negligence, subsequent credit allowed by GSA will...

  11. Executive Council lists and general practitioner files

    PubMed Central

    Farmer, R. D. T.; Knox, E. G.; Cross, K. W.; Crombie, D. L.

    1974-01-01

    An investigation of the accuracy of general practitioner and Executive Council files was approached by a comparison of the two. High error rates were found, including both file errors and record errors. On analysis it emerged that file error rates could not be satisfactorily expressed except in a time-dimensioned way, and we were unable to do this within the context of our study. Record error rates and field error rates were expressible as proportions of the number of records on both the lists; 79·2% of all records exhibited non-congruencies and particular information fields had error rates ranging from 0·8% (assignation of sex) to 68·6% (assignation of civil state). Many of the errors, both field errors and record errors, were attributable to delayed updating of mutable information. It is concluded that the simple transfer of Executive Council lists to a computer filing system would not solve all the inaccuracies and would not in itself permit Executive Council registers to be used for any health care applications requiring high accuracy. For this it would be necessary to design and implement a purpose designed health care record system which would include, rather than depend upon, the general practitioner remuneration system. PMID:4816588

  12. Misremembrance of options past: source monitoring and choice.

    PubMed

    Mather, M; Shafir, E; Johnson, M K

    2000-03-01

    This study reveals that when remembering past decisions, people engage in choice-supportive memory distortion. When asked to make memory attributions of options' features, participants made source-monitoring errors that supported their decisions. They tended to attribute, both correctly and incorrectly, more positive features to the option they had selected than to its competitor. In addition, they sometimes attributed, both correctly and incorrectly, more negative features to the nonselected option. This pattern of distortion may be beneficial to people's general well-being, reducing regret for options not taken. At the same time, it is problematic for memory accuracy, for accountability, and for learning from past experience.

  13. A better way to deliver bad news.

    PubMed

    Manzoni, Jean-François

    2002-09-01

    In an ideal world, a subordinate would accept critical feedback from a manager with an open mind. He or she would ask a few clarifying questions, promise to work on certain performance areas, and show signs of improvement over time. But things don't always turn out that way. Such conversations can be unpleasant. Emotions can run high; tempers can flare. Fearing that the employee will become angry and defensive, the boss all too often inadvertently sabotages the meeting by preparing for it in a way that stifles honest discussion. This unintentional--indeed, unconscious--stress-induced habit makes it difficult to deliver corrective feedback effectively. Insead professor Jean-François Manzoni says that by changing the mind-set with which they develop and deliver negative feedback, managers can increase their odds of having productive conversations without damaging relationships. Manzoni describes two behavioral phenomena that color the feedback process--the fundamental attribution error and the false consensus effect--and uses real-world examples to demonstrate how bosses' critiques can go astray. Managers tend to frame difficult situations and decisions in a way that is narrow (alternatives aren't considered) and binary (there are only two possible outcomes--win or lose). And during the feedback discussion, managers' framing of the issues often remains frozen, regardless of the direction the conversation takes. Manzoni advises managers not to just settle on the first acceptable explanation for a behavior or situation they've witnessed. Bosses also need to consider an employee's circumstances rather than just attributing weak performance to a person's disposition. In short, delivering more effective feedback requires an open-minded approach, one that will convince employees that the process is fair and that the boss is ready for an honest conversation.

  14. Evolutionary Design of Controlled Structures

    NASA Technical Reports Server (NTRS)

    Masters, Brett P.; Crawley, Edward F.

    1997-01-01

    Basic physical concepts of structural delay and transmissibility are provided for simple rod and beam structures. Investigations show the sensitivity of these concepts to differing controlled-structures variables, and to rational system modeling effects. An evolutionary controls/structures design method is developed. The basis of the method is an accurate model formulation for dynamic compensator optimization and Genetic Algorithm based updating of sensor/actuator placement and structural attributes. One and three dimensional examples from the literature are used to validate the method. Frequency domain interpretation of these controlled structure systems provide physical insight as to how the objective is optimized and consequently what is important in the objective. Several disturbance rejection type controls-structures systems are optimized for a stellar interferometer spacecraft application. The interferometric designs include closed loop tracking optics. Designs are generated for differing structural aspect ratios, differing disturbance attributes, and differing sensor selections. Physical limitations in achieving performance are given in terms of average system transfer function gains and system phase loss. A spacecraft-like optical interferometry system is investigated experimentally over several different optimized controlled structures configurations. Configurations represent common and not-so-common approaches to mitigating pathlength errors induced by disturbances of two different spectra. Results show that an optimized controlled structure for low frequency broadband disturbances achieves modest performance gains over a mass equivalent regular structure, while an optimized structure for high frequency narrow band disturbances is four times better in terms of root-mean-square pathlength. These results are predictable given the nature of the physical system and the optimization design variables. Fundamental limits on controlled performance are discussed based on the measured and fit average system transfer function gains and system phase loss.

  15. Problems and pitfalls in cardiac drug therapy.

    PubMed

    Stone, S M; Rai, N; Nei, J

    2001-01-01

    Medical errors in the care of patients may account for 44,000 to 98,000 deaths per year, and 7,000 deaths per year are attributed to medication errors alone. Increasing awareness among health care providers of potential errors is a critical step toward improving the safety of medical care. Because today's medications are increasingly complex, approved at an accelerated rate, and often have a narrow therapeutic window with only a small margin of safety, patient and provider education is critical in assuring optimal therapeutic outcomes. Providers can use electronic resources such as Web sites to keep informed on drug-drug, drug-food, and drug-nutritional supplements interactions.

  16. Which Personality Attributes Are Most Important in the Workplace?

    PubMed

    Sackett, Paul R; Walmsley, Philip T

    2014-09-01

    Employees face a variety of work demands that place a premium on personal attributes, such as the degree to which they can be depended on to work independently, deal with stress, and interact positively with coworkers and customers. We examine evidence for the importance of these personality attributes using research strategies intended to answer three fundamental questions, including (a) how well does employees' standing on these attributes predict job performance?, (b) what types of attributes do employers seek to evaluate in interviews when considering applicants?, and (c) what types of attributes are rated as important for performance in a broad sampling of occupations across the U.S. economy? We summarize and integrate results from these three strategies using the Big Five personality dimensions as our organizing framework. Our findings indicate that personal attributes related to Conscientiousness and Agreeableness are important for success across many jobs, spanning across low to high levels of job complexity, training, and experience necessary to qualify for employment. The strategies lead to differing conclusions about the relative importance of Emotional Stability and Extraversion. We note implications for job seekers, for interventions aimed at changing standing on these attributes, and for employers. © The Author(s) 2014.

  17. Observations concerning the generation and propagation of Type III solar bursts

    NASA Technical Reports Server (NTRS)

    Kellogg, P. J.

    1986-01-01

    A number of Type III bursts were observed during the Helios missions in which the burst exciter passed over the spacecraft, as evidenced by strong electric field fluctuations near the plasma frequency. Six of these were suitable for detailed study. Of the six events, one was ambiguous, one showed what is interpreted as a switchover from harmonic to fundamental, and the rest all generated fundamental at onset. This would be expected if both fundamental and harmonic are generated, as, at a fixed frequency, the fundamental will be generated earlier. For the event which seems to show both fundamental and harmonic emission, the frequency ratio is not exactly 2. This is explained in terms of a time delay of the fundamental, due to scattering and diffusion in the source region. A time delay of the order of 600 seconds at 1 AU and 20 kHz, and inversely proportional to frequency, is required to explain the observations. Crude estimates show that delay times at least this long may be attributed to trapping and scattering.

  18. Neural markers of errors as endophenotypes in neuropsychiatric disorders

    PubMed Central

    Manoach, Dara S.; Agam, Yigal

    2013-01-01

    Learning from errors is fundamental to adaptive human behavior. It requires detecting errors, evaluating what went wrong, and adjusting behavior accordingly. These dynamic adjustments are at the heart of behavioral flexibility and accumulating evidence suggests that deficient error processing contributes to maladaptively rigid and repetitive behavior in a range of neuropsychiatric disorders. Neuroimaging and electrophysiological studies reveal highly reliable neural markers of error processing. In this review, we evaluate the evidence that abnormalities in these neural markers can serve as sensitive endophenotypes of neuropsychiatric disorders. We describe the behavioral and neural hallmarks of error processing, their mediation by common genetic polymorphisms, and impairments in schizophrenia, obsessive-compulsive disorder, and autism spectrum disorders. We conclude that neural markers of errors meet several important criteria as endophenotypes including heritability, established neuroanatomical and neurochemical substrates, association with neuropsychiatric disorders, presence in syndromally-unaffected family members, and evidence of genetic mediation. Understanding the mechanisms of error processing deficits in neuropsychiatric disorders may provide novel neural and behavioral targets for treatment and sensitive surrogate markers of treatment response. Treating error processing deficits may improve functional outcome since error signals provide crucial information for flexible adaptation to changing environments. Given the dearth of effective interventions for cognitive deficits in neuropsychiatric disorders, this represents a potentially promising approach. PMID:23882201

  19. Neural markers of errors as endophenotypes in neuropsychiatric disorders.

    PubMed

    Manoach, Dara S; Agam, Yigal

    2013-01-01

    Learning from errors is fundamental to adaptive human behavior. It requires detecting errors, evaluating what went wrong, and adjusting behavior accordingly. These dynamic adjustments are at the heart of behavioral flexibility and accumulating evidence suggests that deficient error processing contributes to maladaptively rigid and repetitive behavior in a range of neuropsychiatric disorders. Neuroimaging and electrophysiological studies reveal highly reliable neural markers of error processing. In this review, we evaluate the evidence that abnormalities in these neural markers can serve as sensitive endophenotypes of neuropsychiatric disorders. We describe the behavioral and neural hallmarks of error processing, their mediation by common genetic polymorphisms, and impairments in schizophrenia, obsessive-compulsive disorder, and autism spectrum disorders. We conclude that neural markers of errors meet several important criteria as endophenotypes including heritability, established neuroanatomical and neurochemical substrates, association with neuropsychiatric disorders, presence in syndromally-unaffected family members, and evidence of genetic mediation. Understanding the mechanisms of error processing deficits in neuropsychiatric disorders may provide novel neural and behavioral targets for treatment and sensitive surrogate markers of treatment response. Treating error processing deficits may improve functional outcome since error signals provide crucial information for flexible adaptation to changing environments. Given the dearth of effective interventions for cognitive deficits in neuropsychiatric disorders, this represents a potentially promising approach.

  20. Probabilistic confidence for decisions based on uncertain reliability estimates

    NASA Astrophysics Data System (ADS)

    Reid, Stuart G.

    2013-05-01

    Reliability assessments are commonly carried out to provide a rational basis for risk-informed decisions concerning the design or maintenance of engineering systems and structures. However, calculated reliabilities and associated probabilities of failure often have significant uncertainties associated with the possible estimation errors relative to the 'true' failure probabilities. For uncertain probabilities of failure, a measure of 'probabilistic confidence' has been proposed to reflect the concern that uncertainty about the true probability of failure could result in a system or structure that is unsafe and could subsequently fail. The paper describes how the concept of probabilistic confidence can be applied to evaluate and appropriately limit the probabilities of failure attributable to particular uncertainties such as design errors that may critically affect the dependability of risk-acceptance decisions. This approach is illustrated with regard to the dependability of structural design processes based on prototype testing with uncertainties attributable to sampling variability.
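
    One operational reading of 'probabilistic confidence' (a generic illustration, not necessarily the paper's exact definition) is the probability that the true failure probability does not exceed an acceptable target, given an assumed uncertainty distribution on the estimated failure probability. The numbers and the lognormal uncertainty model below are illustrative assumptions.

    ```python
    import numpy as np
    from scipy import stats

    # Uncertain failure probability: best estimate 1e-4 with a lognormal uncertainty
    # (factor-of-3 logarithmic standard deviation), e.g. reflecting possible
    # estimation or design errors.
    pf_median = 1e-4
    sigma_ln = np.log(3.0)
    pf_target = 5e-4          # maximum failure probability acceptable for the decision

    pf_dist = stats.lognorm(s=sigma_ln, scale=pf_median)

    # "Probabilistic confidence" in this reading: probability that the true failure
    # probability does not exceed the target, given the estimation uncertainty.
    confidence = pf_dist.cdf(pf_target)
    print(f"P(true pf <= {pf_target:g}) = {confidence:.3f}")
    ```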

  1. Unassigned MS/MS Spectra: Who Am I?

    PubMed

    Pathan, Mohashin; Samuel, Monisha; Keerthikumar, Shivakumar; Mathivanan, Suresh

    2017-01-01

    Recent advances in high-resolution tandem mass spectrometry (MS) have resulted in the accumulation of high-quality data. In parallel with these advances in instrumentation, bioinformatics software has been developed to analyze such datasets. In spite of these advances, data analysis in mass spectrometry remains critical for protein identification. In addition, the complexity of the generated MS/MS spectra, the unpredictable nature of peptide fragmentation, sequence annotation errors, and posttranslational modifications have impeded the protein identification process. In a typical MS data analysis, about 60% of the MS/MS spectra remain unassigned. While some of these can be attributed to the low quality of the MS/MS spectra, a proportion can be classified as high quality. Further analysis may reveal how much of the unassigned MS spectra can be attributed to search space, sequence annotation errors, mutations, and/or posttranslational modifications. In this chapter, the tools used to identify proteins and ways to assign unassigned tandem MS spectra are discussed.

  2. Examination of fundamental traffic characteristics and implications to ITS

    DOT National Transportation Integrated Search

    1998-01-01

    The purpose of this paper is to study and analyze the essential traffic characteristics of a 5.4-mile stretch of I-64-40 located within the St. Louis metropolitan area. The freeway experiences heavy congestion during peak periods attributed to factor...

  3. Exploring the social brain in schizophrenia: left prefrontal underactivation during mental state attribution.

    PubMed

    Russell, T A; Rubia, K; Bullmore, E T; Soni, W; Suckling, J; Brammer, M J; Simmons, A; Williams, S C; Sharma, T

    2000-12-01

    Evidence suggests that patients with schizophrenia have a deficit in "theory of mind," i.e., interpretation of the mental state of others. The authors used functional magnetic resonance imaging (MRI) to investigate the hypothesis that patients with schizophrenia have a dysfunction in brain regions responsible for mental state attribution. Mean brain activation in five male patients with schizophrenia was compared to that in seven comparison subjects during performance of a task involving attribution of mental state. During performance of the mental state attribution task, the patients made more errors and showed less blood-oxygen-level-dependent signal in the left inferior frontal gyrus. To the authors' knowledge, this is the first functional MRI study to show a deficit in the left prefrontal cortex in schizophrenia during a socioemotional task.

  4. Report of the 1988 2-D Intercomparison Workshop, chapter 3

    NASA Technical Reports Server (NTRS)

    Jackman, Charles H.; Brasseur, Guy; Soloman, Susan; Guthrie, Paul D.; Garcia, Rolando; Yung, Yuk L.; Gray, Lesley J.; Tung, K. K.; Ko, Malcolm K. W.; Isaken, Ivar

    1989-01-01

    Several factors contribute to the errors encountered. With the exception of the line-by-line model, all of the models employ simplifying assumptions that place fundamental limits on their accuracy and range of validity. For example, all 2-D modeling groups use the diffusivity factor approximation. This approximation produces little error in tropospheric H2O and CO2 cooling rates, but can produce significant errors in CO2 and O3 cooling rates at the stratopause. All models suffer from fundamental uncertainties in shapes and strengths of spectral lines. Thermal flux algorithms being used in 2-D tracer transport models produce cooling rates that differ by as much as 40 percent for the same input model atmosphere. Disagreements of this magnitude are important since the thermal cooling rates must be subtracted from the almost-equal solar heating rates to derive the net radiative heating rates and the 2-D model diabatic circulation. For much of the annual cycle, the net radiative heating rates are comparable in magnitude to the cooling rate differences described. Many of the models underestimate the cooling rates in the middle and lower stratosphere. The consequences of these errors for the net heating rates and the diabatic circulation will depend on their meridional structure, which was not tested here. Other models underestimate the cooling near 1 mbar. Such errors pose potential problems for future interactive ozone assessment studies, since they could produce artificially-high temperatures and increased O3 destruction at these levels. These concerns suggest that a great deal of work is needed to improve the performance of thermal cooling rate algorithms used in the 2-D tracer transport models.

  5. How Prediction Errors Shape Perception, Attention, and Motivation

    PubMed Central

    den Ouden, Hanneke E. M.; Kok, Peter; de Lange, Floris P.

    2012-01-01

    Prediction errors (PE) are a central notion in theoretical models of reinforcement learning, perceptual inference, decision-making and cognition, and prediction error signals have been reported across a wide range of brain regions and experimental paradigms. Here, we will make an attempt to see the forest for the trees and consider the commonalities and differences of reported PE signals in light of recent suggestions that the computation of PE forms a fundamental mode of brain function. We discuss where different types of PE are encoded, how they are generated, and the different functional roles they fulfill. We suggest that while encoding of PE is a common computation across brain regions, the content and function of these error signals can be very different and are determined by the afferent and efferent connections within the neural circuitry in which they arise. PMID:23248610

  6. [Risk Management: concepts and chances for public health].

    PubMed

    Palm, Stefan; Cardeneo, Margareta; Halber, Marco; Schrappe, Matthias

    2002-01-15

    Errors are a common problem in medicine and occur as a result of a complex process involving many contributing factors. Medical errors significantly reduce the safety margin for the patient and contribute additional costs in health care delivery. In most cases adverse events cannot be attributed to a single underlying cause. Therefore an effective risk management strategy must follow a system approach, which is based on counting and analysis of near misses. The development of defenses against the undesired effects of errors should be the main focus rather than asking the question "Who blundered?". Analysis of near misses (which in this context can be compared to indicators) offers several methodological advantages as compared to the analysis of errors and adverse events. Risk management is an integral element of quality management.

  7. Autobiographical memory conjunction errors in younger and older adults: Evidence for a role of inhibitory ability

    PubMed Central

    Devitt, Aleea L.; Tippett, Lynette; Schacter, Daniel L.; Addis, Donna Rose

    2016-01-01

    Because of its reconstructive nature, autobiographical memory (AM) is subject to a range of distortions. One distortion involves the erroneous incorporation of features from one episodic memory into another, forming what are known as memory conjunction errors. Healthy aging has been associated with an enhanced susceptibility to conjunction errors for laboratory stimuli, yet it is unclear whether these findings translate to the autobiographical domain. We investigated the impact of aging on vulnerability to AM conjunction errors, and explored potential cognitive processes underlying the formation of these errors. An imagination recombination paradigm was used to elicit AM conjunction errors in young and older adults. Participants also completed a battery of neuropsychological tests targeting relational memory and inhibition ability. Consistent with findings using laboratory stimuli, older adults were more susceptible to AM conjunction errors than younger adults. However, older adults were not differentially vulnerable to the inflating effects of imagination. Individual variation in AM conjunction error vulnerability was attributable to inhibitory capacity. An inability to suppress the cumulative familiarity of individual AM details appears to contribute to the heightened formation of AM conjunction errors with age. PMID:27929343

  8. [Surveillance of health care errors. An overview of the published data in Argentina].

    PubMed

    Codermatz, Marcela A; Trillo, Carolina; Berenstein, Graciela; Ortiz, Zulma

    2006-01-01

    In the last decades, public health surveillance has extended its scope to new fields, such as medical errors, in order to improve patient safety. This study aimed to review all the evidence produced in Argentina on the surveillance of medical errors. An exhaustive literature search was performed. A total of 4656 abstracts were assessed (150 from MEDLINE, 145 from LILACS, and 4361 from hand searching). Of them, 52 were analysed and 8 were considered relevant for health care error surveillance. Different approaches were used to study medical errors. Some of them focused on patient safety and others on medical malpractice. There is still a need to improve the surveillance of this type of event. In particular, the quality of reporting of study design and surveillance attributes was unclear. A critical appraisal and synthesis of all relevant studies on health care errors may help to understand not only the state of the art, but also to define research priorities.

  9. Wavefront-aberration measurement and systematic-error analysis of a high numerical-aperture objective

    NASA Astrophysics Data System (ADS)

    Liu, Zhixiang; Xing, Tingwen; Jiang, Yadong; Lv, Baobin

    2018-02-01

    A two-dimensional (2-D) shearing interferometer based on an amplitude chessboard grating was designed to measure the wavefront aberration of a high numerical-aperture (NA) objective. Chessboard gratings offer better diffraction efficiencies and fewer disturbing diffraction orders than traditional cross gratings. The wavefront aberration of the tested objective was retrieved from the shearing interferogram using the Fourier transform and differential Zernike polynomial-fitting methods. Grating manufacturing errors, including the duty-cycle and pattern-deviation errors, were analyzed with the Fourier transform method. Then, according to the relation between the spherical pupil and planar detector coordinates, the influence of the distortion of the pupil coordinates was simulated. Finally, the systematic error attributable to grating alignment errors was deduced through the geometrical ray-tracing method. Experimental results indicate that the measuring repeatability (3σ) of the wavefront aberration of an objective with NA 0.4 was 3.4 mλ. The systematic-error results were consistent with previous analyses. Thus, the correct wavefront aberration can be obtained after calibration.
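
    A generic least-squares Zernike-coefficient fit of sampled wavefront data on a unit pupil is sketched below. It illustrates Zernike polynomial fitting only, not the paper's differential-Zernike reconstruction from shearing interferograms; the coefficients and noise level are made up.

    ```python
    import numpy as np

    # Random sample points on the unit pupil
    rng = np.random.default_rng(3)
    r = np.sqrt(rng.random(2000))
    theta = 2 * np.pi * rng.random(2000)

    def zernike_basis(r, theta):
        """A few low-order Zernike terms evaluated at polar pupil coordinates."""
        return np.column_stack([
            np.ones_like(r),                              # piston
            2 * r * np.cos(theta),                        # tilt x
            2 * r * np.sin(theta),                        # tilt y
            np.sqrt(3) * (2 * r**2 - 1),                  # defocus
            np.sqrt(6) * r**2 * np.cos(2 * theta),        # astigmatism 0/90
            np.sqrt(6) * r**2 * np.sin(2 * theta),        # astigmatism 45
            np.sqrt(8) * (3 * r**3 - 2 * r) * np.cos(theta),  # coma x
        ])

    Z = zernike_basis(r, theta)
    true_coeffs = np.array([0.0, 0.02, -0.01, 0.05, 0.03, 0.0, -0.02])   # in waves
    wavefront = Z @ true_coeffs + 0.005 * rng.standard_normal(r.size)    # noisy samples

    # Least-squares Zernike coefficient fit
    coeffs, *_ = np.linalg.lstsq(Z, wavefront, rcond=None)
    print(np.round(coeffs, 3))
    ```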

  10. Hessian matrix approach for determining error field sensitivity to coil deviations.

    DOE PAGES

    Zhu, Caoxiang; Hudson, Stuart R.; Lazerson, Samuel A.; ...

    2018-03-15

    The presence of error fields has been shown to degrade plasma confinement and drive instabilities. Error fields can arise from many sources, but are predominantly attributed to deviations in the coil geometry. In this paper, we introduce a Hessian matrix approach for determining error field sensitivity to coil deviations. A primary cost function used for designing stellarator coils, the surface integral of normalized normal field errors, was adopted to evaluate the deviation of the generated magnetic field from the desired magnetic field. The FOCUS code [Zhu et al., Nucl. Fusion 58(1):016008 (2018)] is utilized to provide fast and accurate calculations of the Hessian. The sensitivities of error fields to coil displacements are then determined by the eigenvalues of the Hessian matrix. A proof-of-principle example is given on a CNT-like configuration. We anticipate that this new method could provide information to avoid dominant coil misalignments and simplify coil designs for stellarators.

  11. Hessian matrix approach for determining error field sensitivity to coil deviations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Caoxiang; Hudson, Stuart R.; Lazerson, Samuel A.

    The presence of error fields has been shown to degrade plasma confinement and drive instabilities. Error fields can arise from many sources, but are predominantly attributed to deviations in the coil geometry. In this paper, we introduce a Hessian matrix approach for determining error field sensitivity to coil deviations. A primary cost function used for designing stellarator coils, the surface integral of normalized normal field errors, was adopted to evaluate the deviation of the generated magnetic field from the desired magnetic field. The FOCUS code [Zhu et al., Nucl. Fusion 58(1):016008 (2018)] is utilized to provide fast and accurate calculations of the Hessian. The sensitivities of error fields to coil displacements are then determined by the eigenvalues of the Hessian matrix. A proof-of-principle example is given on a CNT-like configuration. We anticipate that this new method could provide information to avoid dominant coil misalignments and simplify coil designs for stellarators.

  12. Elementary Concepts and Fundamental Laws of the Theory of Heat

    NASA Astrophysics Data System (ADS)

    de Oliveira, Mário J.

    2018-06-01

    The elementary concepts and fundamental laws concerning the science of heat are examined from the point of view of its development with special attention to its theoretical structure. The development is divided into four periods, each one characterized by the concept that was attributed to heat. The transition from one to the next period was marked by the emergence of new concepts and new laws, and by singular events. We point out that thermodynamics, as it emerged, is founded on the elementary concepts of temperature and adiabatic wall, and on the fundamental laws: Mayer-Joule principle, or law of conservation of energy; Carnot principle, which leads to the definition of entropy; and the Clausius principle, or law of increase in entropy.

  13. Elementary Concepts and Fundamental Laws of the Theory of Heat

    NASA Astrophysics Data System (ADS)

    de Oliveira, Mário J.

    2018-03-01

    The elementary concepts and fundamental laws concerning the science of heat are examined from the point of view of its development with special attention to its theoretical structure. The development is divided into four periods, each one characterized by the concept that was attributed to heat. The transition from one to the next period was marked by the emergence of new concepts and new laws, and by singular events. We point out that thermodynamics, as it emerged, is founded on the elementary concepts of temperature and adiabatic wall, and on the fundamental laws: Mayer-Joule principle, or law of conservation of energy; Carnot principle, which leads to the definition of entropy; and the Clausius principle, or law of increase in entropy.

  14. Increased hippocampal activation in ApoE-4 carriers and non-carriers with amnestic mild cognitive impairment.

    PubMed

    Tran, Tammy T; Speck, Caroline L; Pisupati, Aparna; Gallagher, Michela; Bakker, Arnold

    2017-01-01

    Increased fMRI activation in the hippocampus is recognized as a signature characteristic of the amnestic mild cognitive impairment (aMCI) stage of Alzheimer's disease (AD). Previous work has localized this increased activation to the dentate gyrus/CA3 subregion of the hippocampus and showed a correlation with memory impairments in those patients. Increased hippocampal activation has also been reported in carriers of the ApoE-4 allelic variation independently of mild cognitive impairment although these findings were not localized to a hippocampal subregion. To assess the ApoE-4 contribution to increased hippocampal fMRI activation, patients with aMCI genotyped for ApoE-4 status and healthy age-matched control participants completed a high-resolution fMRI scan while performing a memory task designed to tax hippocampal subregion specific functions. Consistent with previous reports, patients with aMCI showed increased hippocampal activation in the left dentate gyrus/CA3 region of the hippocampus as well as memory task errors attributable to this subregion. However, this increased fMRI activation in the hippocampus did not differ between ApoE-4 carriers and ApoE-4 non-carriers and the proportion of memory errors attributable to dentate gyrus/CA3 function did not differ between ApoE-4 carriers and ApoE-4 non-carriers. These results indicate that increased fMRI activation of the hippocampus observed in patients with aMCI is independent of ApoE-4 status and that ApoE-4 does not contribute to the dysfunctional hippocampal activation or the memory errors attributable to this subregion in these patients.

  15. Humans recognize emotional arousal in vocalizations across all classes of terrestrial vertebrates: evidence for acoustic universals.

    PubMed

    Filippi, Piera; Congdon, Jenna V; Hoang, John; Bowling, Daniel L; Reber, Stephan A; Pašukonis, Andrius; Hoeschele, Marisa; Ocklenburg, Sebastian; de Boer, Bart; Sturdy, Christopher B; Newen, Albert; Güntürkün, Onur

    2017-07-26

    Writing over a century ago, Darwin hypothesized that vocal expression of emotion dates back to our earliest terrestrial ancestors. If this hypothesis is true, we should expect to find cross-species acoustic universals in emotional vocalizations. Studies suggest that acoustic attributes of aroused vocalizations are shared across many mammalian species, and that humans can use these attributes to infer emotional content. But do these acoustic attributes extend to non-mammalian vertebrates? In this study, we asked human participants to judge the emotional content of vocalizations of nine vertebrate species representing three different biological classes-Amphibia, Reptilia (non-aves and aves) and Mammalia. We found that humans are able to identify higher levels of arousal in vocalizations across all species. This result was consistent across different language groups (English, German and Mandarin native speakers), suggesting that this ability is biologically rooted in humans. Our findings indicate that humans use multiple acoustic parameters to infer relative arousal in vocalizations for each species, but mainly rely on fundamental frequency and spectral centre of gravity to identify higher arousal vocalizations across species. These results suggest that fundamental mechanisms of vocal emotional expression are shared among vertebrates and could represent a homologous signalling system. © 2017 The Author(s).
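
    A minimal sketch of the two acoustic parameters highlighted above, computed on a synthetic harmonic "vocalization": the spectral centre of gravity (spectral centroid) from the magnitude spectrum, and the fundamental frequency from the autocorrelation peak. The signal, sample rate, and F0 are made-up values.

    ```python
    import numpy as np

    fs = 16000                                   # sample rate (Hz)
    t = np.arange(0, 0.2, 1 / fs)
    f0_true = 220.0                              # synthetic harmonic signal
    signal = sum((1 / k) * np.sin(2 * np.pi * k * f0_true * t) for k in range(1, 6))

    # Spectral centre of gravity (spectral centroid)
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, 1 / fs)
    centroid = np.sum(freqs * spectrum) / np.sum(spectrum)

    # Fundamental frequency from the autocorrelation peak (excluding very short lags)
    ac = np.correlate(signal, signal, mode="full")[signal.size - 1:]
    min_lag = int(fs / 1000)                     # ignore lags corresponding to > 1000 Hz
    peak_lag = min_lag + np.argmax(ac[min_lag:])
    f0_est = fs / peak_lag

    print(f"spectral centroid ~ {centroid:.0f} Hz, estimated F0 ~ {f0_est:.1f} Hz")
    ```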

  16. Comparison of precision and speed in laparoscopic and robot-assisted surgical task performance.

    PubMed

    Zihni, Ahmed; Gerull, William D; Cavallo, Jaime A; Ge, Tianjia; Ray, Shuddhadeb; Chiu, Jason; Brunt, L Michael; Awad, Michael M

    2018-03-01

    Robotic platforms have the potential advantage of providing additional dexterity and precision to surgeons while performing complex laparoscopic tasks, especially for those in training. Few quantitative evaluations of surgical task performance comparing laparoscopic and robotic platforms among surgeons of varying experience levels have been done. We compared measures of quality and efficiency of Fundamentals of Laparoscopic Surgery task performance on these platforms in novices and experienced laparoscopic and robotic surgeons. Fourteen novices, 12 expert laparoscopic surgeons (>100 laparoscopic procedures performed, no robotics experience), and five expert robotic surgeons (>25 robotic procedures performed) performed three Fundamentals of Laparoscopic Surgery tasks on both laparoscopic and robotic platforms: peg transfer (PT), pattern cutting (PC), and intracorporeal suturing. All tasks were repeated three times by each subject on each platform in a randomized order. Mean completion times and mean errors per trial (EPT) were calculated for each task on both platforms. Results were compared using Student's t-test (P < 0.05 considered statistically significant). Among novices, greater errors were noted during laparoscopic PC (Lap 2.21 versus Robot 0.88 EPT, P < 0.001). Among expert laparoscopists, greater errors were noted during laparoscopic PT compared with robotic (PT: Lap 0.14 versus Robot 0.00 EPT, P = 0.04). Among expert robotic surgeons, greater errors were noted during laparoscopic PC compared with robotic (Lap 0.80 versus Robot 0.13 EPT, P = 0.02). Among expert laparoscopists, task performance was slower on the robotic platform compared with laparoscopy. In comparisons of expert laparoscopists performing tasks on the laparoscopic platform and expert robotic surgeons performing tasks on the robotic platform, expert robotic surgeons demonstrated fewer errors during the PC task (P = 0.009). Robotic assistance provided a reduction in errors at all experience levels for some laparoscopic tasks, but no benefit in the speed of task performance. Robotic assistance may provide some benefit in precision of surgical task performance. Copyright © 2017 Elsevier Inc. All rights reserved.
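
    The per-task platform comparison described above reduces to an independent-samples t-test on errors per trial. A minimal sketch with made-up error counts (not the study data):

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical errors per trial for one task (e.g. pattern cutting) in one group
    lap_errors   = np.array([2, 3, 1, 2, 4, 2, 3, 1, 2, 3])
    robot_errors = np.array([1, 0, 1, 1, 0, 2, 1, 0, 1, 1])

    t, p = stats.ttest_ind(lap_errors, robot_errors)
    verdict = "significant" if p < 0.05 else "n.s."
    print(f"mean EPT: lap {lap_errors.mean():.2f} vs robot {robot_errors.mean():.2f}, "
          f"t = {t:.2f}, p = {p:.4f} ({verdict})")
    ```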

  17. Error Estimation of Pathfinder Version 5.3 SST Level 3C Using Three-way Error Analysis

    NASA Astrophysics Data System (ADS)

    Saha, K.; Dash, P.; Zhao, X.; Zhang, H. M.

    2017-12-01

    One of the essential climate variables for monitoring, detecting and attributing climate change is Sea Surface Temperature (SST). A long-term record of global SSTs is available, with observations ranging from ship-based measurements in the early days to modern in-situ and space-based (satellite/aircraft) sensors. There are inaccuracies associated with satellite-derived SSTs that can be attributed to errors in spacecraft navigation, sensor calibration, sensor noise, retrieval algorithms, and leakage due to residual clouds. It is therefore important to estimate the errors in satellite-derived SST products accurately in order to obtain the desired results in their applications. Generally, for validation purposes, satellite-derived SST products are compared against in-situ SSTs, which have their own inaccuracies as well as spatio-temporal inhomogeneity relative to the satellite measurements. The standard deviation of their difference fields therefore has contributions from both the satellite and the in-situ measurements. A real validation of any geophysical variable would require knowledge of the 'true' value of that variable. A one-to-one comparison of satellite-based SST with in-situ data thus does not truly provide the real error in the satellite SST, and there will be ambiguity due to errors in the in-situ measurements and their collocation differences. Triple collocation (TC), or three-way error analysis using three mutually independent error-prone measurements, can be used to estimate the root-mean-square error (RMSE) associated with each of the measurements with a high level of accuracy, without treating any one system as a perfectly observed 'truth'. In this study we estimate the absolute random errors associated with the Pathfinder Version 5.3 Level-3C SST Climate Data Record. Along with the in-situ SST data, the third dataset used for this analysis is the AATSR Reprocessing for Climate (ARC) dataset for the corresponding period. All three SST observations are collocated, and statistics of the differences between each pair are estimated. Instead of a traditional TC analysis, we implement the Extended Triple Collocation (ETC) approach to estimate the correlation coefficient of each measurement system with respect to the unknown target variable, along with their RMSEs.
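
    A minimal sketch of the covariance form of triple collocation on simulated collocated SSTs. It shows the classical TC random-error RMSE only, under the usual assumption of mutually independent, zero-mean errors, and does not reproduce the full ETC correlation estimates; all data values are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 50000

    truth  = 15 + 2 * rng.standard_normal(n)           # "true" SST (deg C)
    sat    = truth + 0.40 * rng.standard_normal(n)     # satellite SST
    insitu = truth + 0.25 * rng.standard_normal(n)     # in-situ SST
    arc    = truth + 0.30 * rng.standard_normal(n)     # third independent estimate

    def tc_rmse(x, y, z):
        """Covariance-form triple collocation: random-error RMSE of x, assuming the
        three systems have mutually independent, zero-mean errors."""
        x, y, z = x - x.mean(), y - y.mean(), z - z.mean()
        return np.sqrt(np.mean(x * x) - np.mean(x * y) * np.mean(x * z) / np.mean(y * z))

    for name, a, b, c in [("satellite", sat, insitu, arc),
                          ("in-situ", insitu, sat, arc),
                          ("ARC", arc, sat, insitu)]:
        print(f"{name:9s} RMSE ~ {tc_rmse(a, b, c):.3f}")
    ```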

  18. NASA Fundamental Remote Sensing Science Research Program

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The NASA Fundamental Remote Sensing Research Program is described. The program provides a dynamic scientific base which is continually broadened and from which future applied research and development can draw support. In particular, the overall objectives and current studies of the scene radiation and atmospheric effect characterization (SRAEC) project are reviewed. The SRAEC research can be generically structured into four types of activities including observation of phenomena, empirical characterization, analytical modeling, and scene radiation analysis and synthesis. The first three activities are the means by which the goal of scene radiation analysis and synthesis is achieved, and thus are considered priority activities during the early phases of the current project. Scene radiation analysis refers to the extraction of information describing the biogeophysical attributes of the scene from the spectral, spatial, and temporal radiance characteristics of the scene including the atmosphere. Scene radiation synthesis is the generation of realistic spectral, spatial, and temporal radiance values for a scene with a given set of biogeophysical attributes and atmospheric conditions.

  19. GPS-Based Precision Orbit Determination for a New Era of Altimeter Satellites: Jason-1 and ICESat

    NASA Technical Reports Server (NTRS)

    Luthcke, Scott B.; Rowlands, David D.; Lemoine, Frank G.; Zelensky, Nikita P.; Williams, Teresa A.

    2003-01-01

    Accurate positioning of the satellite center of mass is necessary to meet an altimeter mission's science goals. The fundamental science observation is an altimetrically derived topographic height, and errors in positioning the satellite's center of mass directly impact this observation. Orbit error is therefore a critical component in the error budget of altimeter satellites. With the launch of the Jason-1 radar altimeter (Dec. 2001) and the ICESat laser altimeter (Jan. 2003), a new era of satellite altimetry has begun. Both missions pose several challenges for precision orbit determination (POD). The Jason-1 radial orbit accuracy goal is 1 cm, while ICESat (600 km), at a much lower altitude than Jason-1 (1300 km), has a radial orbit accuracy requirement of less than 5 cm. Fortunately, Jason-1 and ICESat POD can rely on near-continuous tracking data from the dual-frequency codeless BlackJack GPS receiver and Satellite Laser Ranging. Analysis of current GPS-based solution performance indicates the 1-cm radial orbit accuracy goal is being met for Jason-1, while the radial orbit accuracy for ICESat is well below the 5-cm mission requirement. A brief overview of the GPS precision orbit determination methodology and results for both Jason-1 and ICESat is presented.

  20. Variational Bayesian Inversion of Quasi-Localized Seismic Attributes for the Spatial Distribution of Geological Facies

    NASA Astrophysics Data System (ADS)

    Nawaz, Muhammad Atif; Curtis, Andrew

    2018-04-01

    We introduce a new Bayesian inversion method that estimates the spatial distribution of geological facies from attributes of seismic data, showing how the usual probabilistic inverse problem can be solved within an optimization framework while still providing full probabilistic results. Our mathematical model treats the seismic attributes as observed data, which are assumed to have been generated by the geological facies. The method infers the post-inversion (posterior) probability density of the facies, plus some other unknown model parameters, from the seismic attributes and geological prior information. Most previous research in this domain is based on the localized-likelihoods assumption, whereby the seismic attributes at a location are assumed to depend only on the facies at that location. Such an assumption is unrealistic because of imperfect seismic data acquisition and processing, and fundamental limitations of seismic imaging methods. In this paper, we relax this assumption: we allow probabilistic dependence between the seismic attributes at a location and the facies in any neighbourhood of that location through a spatial filter. We term such likelihoods quasi-localized.

  1. The Experimental Probe of Inflationary Cosmology: A Mission Concept Study for NASA's Einstein Inflation Probe

    NASA Technical Reports Server (NTRS)

    2008-01-01

    When we began our study we sought to answer five fundamental implementation questions: 1) can foregrounds be measured and subtracted to a sufficiently low level?; 2) can systematic errors be controlled?; 3) can we develop optics with sufficiently large throughput, low polarization, and frequency coverage from 30 to 300 GHz?; 4) is there a technical path to realizing the sensitivity and systematic error requirements?; and 5) what are the specific mission architecture parameters, including cost? Detailed answers to these questions are contained in this report.

  2. Aviation safety : information on FAA's data on operational errors at air traffic control towers

    DOT National Transportation Integrated Search

    2003-09-23

    A fundamental principle of aviation safety is the need to maintain adequate separation between aircraft and to ensure that aircraft maintain a safe distance from terrain, obstructions, and airspace that is not designated for routine air travel. Air t...

  3. The Quantum Socket: Wiring for Superconducting Qubits - Part 3

    NASA Astrophysics Data System (ADS)

    Mariantoni, M.; Bejianin, J. H.; McConkey, T. G.; Rinehart, J. R.; Bateman, J. D.; Earnest, C. T.; McRae, C. H.; Rohanizadegan, Y.; Shiri, D.; Penava, B.; Breul, P.; Royak, S.; Zapatka, M.; Fowler, A. G.

    The implementation of a quantum computer requires quantum error correction codes, which make it possible to correct errors occurring on physical quantum bits (qubits). Ensembles of physical qubits will be grouped to form a logical qubit with a lower error rate. Reaching low error rates will necessitate a large number of physical qubits; thus, a scalable qubit architecture must be developed. Superconducting qubits have been used to realize error correction, but a truly scalable qubit architecture has yet to be demonstrated. A critical step towards scalability is the realization of a wiring method that allows qubits to be addressed densely and accurately. A quantum socket that serves this purpose has been designed and tested at microwave frequencies. In this talk, we show results where the socket is used at millikelvin temperatures to measure an on-chip superconducting resonator. The control electronics are another fundamental element for scalability. We will present a proposal based on the quantum socket for interconnecting classical control hardware with superconducting qubit hardware, where both are operated at millikelvin temperatures.

  4. Structure and functioning of dryland ecosystems in a changing world.

    PubMed

    Maestre, Fernando T; Eldridge, David J; Soliveres, Santiago; Kéfi, Sonia; Delgado-Baquerizo, Manuel; Bowker, Matthew A; García-Palacios, Pablo; Gaitán, Juan; Gallardo, Antonio; Lázaro, Roberto; Berdugo, Miguel

    2016-11-01

    Understanding how drylands respond to ongoing environmental change is extremely important for global sustainability. Here we review how biotic attributes, climate, grazing pressure, land cover change and nitrogen deposition affect the functioning of drylands at multiple spatial scales. Our synthesis highlights the importance of biotic attributes (e.g. species richness) in maintaining fundamental ecosystem processes such as primary productivity, illustrates how N deposition and grazing pressure are impacting ecosystem functioning in drylands worldwide, and highlights the importance of the traits of woody species as drivers of their expansion in former grasslands. We also emphasize the role of attributes such as species richness and abundance in controlling the responses of ecosystem functioning to climate change. This knowledge is essential to guide conservation and restoration efforts in drylands, as biotic attributes can be actively managed at the local scale to increase ecosystem resilience to global change.

  5. Structure and functioning of dryland ecosystems in a changing world

    PubMed Central

    Maestre, Fernando T.; Eldridge, David J.; Soliveres, Santiago; Kéfi, Sonia; Delgado-Baquerizo, Manuel; Bowker, Matthew A.; García-Palacios, Pablo; Gaitán, Juan; Gallardo, Antonio; Lázaro, Roberto; Berdugo, Miguel

    2017-01-01

    Understanding how drylands respond to ongoing environmental change is extremely important for global sustainability. Here we review how biotic attributes, climate, grazing pressure, land cover change and nitrogen deposition affect the functioning of drylands at multiple spatial scales. Our synthesis highlights the importance of biotic attributes (e.g. species richness) in maintaining fundamental ecosystem processes such as primary productivity, illustrates how N deposition and grazing pressure are impacting ecosystem functioning in drylands worldwide, and highlights the importance of the traits of woody species as drivers of their expansion in former grasslands. We also emphasize the role of attributes such as species richness and abundance in controlling the responses of ecosystem functioning to climate change. This knowledge is essential to guide conservation and restoration efforts in drylands, as biotic attributes can be actively managed at the local scale to increase ecosystem resilience to global change. PMID:28239303

  6. Application of Consider Covariance to the Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Lundberg, John B.

    1996-01-01

    The extended Kalman filter (EKF) is the basis for many applications of filtering theory to real-time problems where estimates of the state of a dynamical system are to be computed based upon some set of observations. The form of the EKF may vary somewhat from one application to another, but the fundamental principles are typically unchanged among these various applications. As is the case in many filtering applications, models of the dynamical system (differential equations describing the state variables) and models of the relationship between the observations and the state variables are created. These models typically employ a set of constants whose values are established by means of theory or experimental procedure. Since the estimates of the state are formed assuming that the models are perfect, any modeling errors will affect the accuracy of the computed estimates. Note that the modeling errors may be errors of commission (errors in terms included in the model) or omission (errors in terms excluded from the model). Consequently, it becomes imperative, when evaluating the performance of real-time filters, to assess the effect of modeling errors on the estimates of the state.
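
    As a toy illustration of the point above, the sketch below runs a one-dimensional extended Kalman filter whose dynamics use an assumed model constant, first matching and then differing from the true one, so the modeling error shows up as a larger estimation error. The model, parameters, and noise levels are invented for illustration.

    ```python
    # 1-D EKF sketch: a model constant enters the dynamics, so an error in its
    # assumed value degrades the state estimate. Purely illustrative.
    import numpy as np

    def run_ekf(c_model, zs, dt=0.1, q=1e-4, r=0.25):
        """EKF for the decay model x' = -c*x, observing x directly (h(x) = x, H = 1)."""
        x, P, xs = 1.0, 1.0, []
        for z in zs:
            x_pred = x + dt * (-c_model * x)      # predict with the assumed constant
            F = 1.0 - dt * c_model                # state Jacobian df/dx
            P_pred = F * P * F + q
            K = P_pred / (P_pred + r)             # Kalman gain for a direct observation
            x = x_pred + K * (z - x_pred)
            P = (1.0 - K) * P_pred
            xs.append(x)
        return np.array(xs)

    rng = np.random.default_rng(1)
    c_true, dt, n = 0.5, 0.1, 200
    truth = (1.0 - c_true * dt) ** np.arange(1, n + 1)        # true trajectory
    zs = truth + rng.normal(0.0, np.sqrt(0.25), n)            # noisy observations

    for c_model in (0.5, 0.3):                                # correct vs. erroneous constant
        err = np.abs(run_ekf(c_model, zs) - truth).mean()
        print(f"assumed c = {c_model}: mean |estimation error| = {err:.3f}")
    ```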

  7. Ebola, team communication, and shame: but shame on whom?

    PubMed

    Shannon, Sarah E

    2015-01-01

    Examined as an isolated situation, and through the lens of a rare and feared disease, Mr. Duncan's case seems ripe for second-guessing the physicians and nurses who cared for him. But viewed from the perspective of what we know about errors and team communication, his case is all too common. Nearly 440,000 patient deaths in the U.S. each year may be attributable to medical errors. Breakdowns in communication among health care teams contribute to the majority of these errors. The culture of health care does not seem to foster functional, effective communication between and among professionals. Why? And more importantly, why do we not do something about it?

  8. Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis

    NASA Technical Reports Server (NTRS)

    Hoffman, Ross N.; Nehrkorn, Thomas; Grassotti, Christopher

    1996-01-01

    We study a novel characterization of errors for numerical weather predictions. In its simplest form we decompose the error into a part attributable to phase errors and a remainder. The phase error is represented in the same fashion as a velocity field and will be required to vary slowly and smoothly with position. A general distortion representation allows for the displacement and a bias correction of forecast anomalies. In brief, the distortion is determined by minimizing the objective function by varying the displacement and bias correction fields. In the present project we use a global or hemispheric domain, and spherical harmonics to represent these fields. In this project we are initially focusing on the assessment application, restricted to a realistic but univariate 2-dimensional situation. Specifically we study the forecast errors of the 500 hPa geopotential height field for forecasts of the short and medium range. The forecasts are those of the Goddard Earth Observing System data assimilation system. Results presented show that the methodology works, that a large part of the total error may be explained by a distortion limited to triangular truncation at wavenumber 10, and that the remaining residual error contains mostly small spatial scales.

  9. Managing human fallibility in critical aerospace situations

    NASA Astrophysics Data System (ADS)

    Tew, Larry

    2014-11-01

    Human fallibility is pervasive in the aerospace industry, with over 50% of errors attributed to human causes. Consider the benefits to any organization if those errors were significantly reduced. Aerospace manufacturing involves high-value, high-profile systems with significant complexity and often repetitive build, assembly, and test operations. In spite of extensive analysis, planning, training, and detailed procedures, human factors can cause unexpected errors. Handling such errors requires extensive cause and corrective action analysis and invariably brings schedule slips and cost growth. We will discuss success stories, including those associated with electro-optical systems, where very significant reductions in human fallibility errors were achieved after adapted and specialized training. In the eyes of company and customer leadership, the steps used to achieve these results led to a major culture change in both the workforce and the supporting management organization. This approach has proven effective in other industries such as medicine, firefighting, law enforcement, and aviation. The roadmap to success and the steps to minimize human error are known. They can be used by any organization willing to accept human fallibility and take a proactive approach to managing and minimizing error.

  10. Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.

    PubMed

    Samoli, Evangelia; Butland, Barbara K

    2017-12-01

    Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parametric bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider, at the design phase, the requirements of the measurement error correction method to be applied later, while methodological advances are needed in the multi-pollutant setting.
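
    A minimal sketch of one of the reviewed correction methods, regression calibration, on simulated data: the error-prone exposure attenuates the naive effect estimate, and calibrating the exposure against a small validation subset recovers it. Variable names and effect sizes are assumptions, not taken from any of the reviewed studies.

    ```python
    # Regression calibration sketch on simulated exposure data; illustrative only.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 5000
    x_true = rng.normal(10.0, 2.0, n)              # true long-term exposure
    w = x_true + rng.normal(0.0, 2.0, n)           # error-prone modeled exposure
    y = 0.2 * x_true + rng.normal(0.0, 1.0, n)     # health outcome driven by the truth

    naive_slope = np.polyfit(w, y, 1)[0]           # attenuated by measurement error

    # Regression calibration: fit E[X | W] on a validation subset where the true
    # exposure is known, then substitute the calibrated exposure in the health model.
    idx = rng.choice(n, 500, replace=False)
    slope_cal, intercept_cal = np.polyfit(w[idx], x_true[idx], 1)
    x_hat = intercept_cal + slope_cal * w
    corrected_slope = np.polyfit(x_hat, y, 1)[0]

    print(f"true effect 0.200 | naive {naive_slope:.3f} | calibrated {corrected_slope:.3f}")
    ```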

  11. Promote Number Sense

    ERIC Educational Resources Information Center

    Gurganus, Susan

    2004-01-01

    "Number sense" is "an intuition about numbers that is drawn from all varied meanings of number" (NCTM, 1989, p. 39). Students with number sense understand that numbers are representative of objects, magnitudes, relationships, and other attributes; that numbers can be operated on, compared, and used for communication. It is fundamental knowledge…

  12. Physiological attributes of 11 Northwest conifer species

    Treesearch

    Ronni L. Korol

    2001-01-01

    The quantitative description and simulation of the fundamental processes that characterize forest growth are increasing in importance in forestry research. Predicting future forest growth, however, is compounded by the various combinations of temperature, humidity, precipitation, and atmospheric carbon dioxide concentration that may occur. One method of integrating new...

  13. Is Human-Computer Interaction Social or Parasocial?

    ERIC Educational Resources Information Center

    Sundar, S. Shyam

    Conducted in the attribution-research paradigm of social psychology, a study examined whether human-computer interaction is fundamentally social (as in human-human interaction) or parasocial (as in human-television interaction). All 30 subjects (drawn from an undergraduate class on communication) were exposed to an identical interaction with…

  14. Collaborative project to identify direct and distant pedigree relationships in apple

    USDA-ARS?s Scientific Manuscript database

    Pedigree information is fundamentally important in breeding programs, enabling breeders to know the source of valuable attributes and underlying alleles and to enlarge genetic diversity in a directed way. Many apple cultivars are related to each other through both recent and distant common ancestors...

  15. Learning Outcomes in Professional Contexts in Higher Education

    ERIC Educational Resources Information Center

    Prøitz, Tine S.; Havnes, Anton; Briggs, Mary; Scott, Ian

    2017-01-01

    With the policy of developing a transparent and competitive European higher education sector, learning outcomes (LOs) are attributed a foundation stone role in policy and curriculum development. A premise for their implementation is that they bear fundamental similarities across national, institutional or professional/disciplinary contexts. In…

  16. The Social Conquest of General Education

    ERIC Educational Resources Information Center

    Helfand, David J.

    2013-01-01

    The dominant model of university education featuring one-way communication in a desocialized setting that celebrates competition is fundamentally at odds with the evolutionarily attuned attributes of the human brain. Quest University Canada represents an alternative of engaged, collaborative learning that builds on the eusocial nature of our…

  17. On Valuing Peers: Theories of Learning and Intercultural Competence

    ERIC Educational Resources Information Center

    Cajander, Asa; Daniels, Mats; McDermott, Roger

    2012-01-01

    This paper investigates the links between the "contributing student pedagogy" and other forms of peer-mediated learning models, e.g. "open-ended group projects" and "communities of practice." We find that a fundamental concern in each of these models is the attribution of "value"; specifically, recognition…

  18. Study of responses of 64-story Rincon Building to Napa, Fremont, Piedmont, San Ramon earthquakes and ambient motions

    USGS Publications Warehouse

    Çelebi, Mehmet; Hooper, John; Klemencic, Ron

    2017-01-01

    We analyze the recorded responses of a 64-story, instrumented, concrete core shear wall building in San Francisco, California, equipped with tuned sloshing liquid dampers (TSDs) and buckling restraining braces (BRBs). Previously, only ambient data from the 72-channel array in the building were studied (Çelebi et al. 2013). Recently, the 24 August 2014 Mw 6.0 Napa and three other earthquakes were recorded. The peak accelerations of ambient and the larger Napa earthquake responses at the basement are 0.12 cm/s/s and 5.2 cm/s/s respectively—a factor of ~42. At the 61st level, they are 0.30 cm/s/s (ambient) and 16.8 cm/s/s (Napa), respectively—a factor of ~56. Fundamental frequencies (NS ~ 0.3, EW ~ 0.27 Hz) from earthquake responses vary within an insignificant frequency band of ~0.02–0.03 Hz when compared to those from ambient data. In the absence of soil-structure interaction (SSI), these small and insignificant differences may be attributed to (1) identification errors, (2) any nonlinear behavior, and (3) shaking levels that are not large enough to activate the BRBs and TSDs to make significant shifts in frequencies and increase damping.

  19. Preventing Unintended Disclosure of Personally Identifiable Data Following Anonymisation.

    PubMed

    Smith, Chris

    2017-01-01

    Errors and anomalies during the capture and processing of health data have the potential to place personally identifiable values into attributes of a dataset that are expected to contain non-identifiable values. Anonymisation focuses on those attributes that have been judged to enable identification of individuals. Attributes that are judged to contain non-identifiable values are not considered, but may be included in datasets that are shared by organisations. Consequently, organisations are at risk of sharing datasets that unintendedly disclose personally identifiable values through these attributes. This would have ethical and legal implications for organisations and privacy implications for individuals whose personally identifiable values are disclosed. In this paper, we formulate the problem of unintended disclosure following anonymisation, describe the necessary steps to address this problem, and discuss some key challenges to applying these steps in practice.

  20. Dopamine prediction errors in reward learning and addiction: from theory to neural circuitry

    PubMed Central

    Keiflin, Ronald; Janak, Patricia H.

    2015-01-01

    Midbrain dopamine (DA) neurons are proposed to signal reward prediction error (RPE), a fundamental parameter in associative learning models. This RPE hypothesis provides a compelling theoretical framework for understanding DA function in reward learning and addiction. New studies support a causal role for DA-mediated RPE activity in promoting learning about natural reward; however, this question has not been explicitly tested in the context of drug addiction. In this review, we integrate theoretical models with experimental findings on the activity of DA systems, and on the causal role of specific neuronal projections and cell types, to provide a circuit-based framework for probing DA-RPE function in addiction. By examining error-encoding DA neurons in the neural network in which they are embedded, hypotheses regarding circuit-level adaptations that possibly contribute to pathological error-signaling and addiction can be formulated and tested. PMID:26494275
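
    A toy temporal-difference sketch of the reward prediction error (RPE) described above, showing how the error shrinks as the predicted value converges on the delivered reward. The learning rate and task structure are arbitrary illustrative choices.

    ```python
    # Minimal RPE / value-update loop; purely illustrative parameters.
    alpha, n_trials = 0.1, 30
    value = 0.0                     # learned value of a reward-predicting cue
    for trial in range(n_trials):
        reward = 1.0                # the cue is always followed by reward here
        rpe = reward - value        # prediction error: delta = r - V(cue)
        value += alpha * rpe        # value update driven by the RPE
        if trial in (0, 9, 29):
            print(f"trial {trial + 1:2d}: V = {value:.3f}  RPE = {rpe:.3f}")
    # As learning proceeds, the RPE at reward delivery shrinks toward zero,
    # mirroring the classic shift of dopamine responses from reward to cue.
    ```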

  1. A two-factor error model for quantitative steganalysis

    NASA Astrophysics Data System (ADS)

    Böhme, Rainer; Ker, Andrew D.

    2006-02-01

    Quantitative steganalysis refers to the exercise not only of detecting the presence of hidden stego messages in carrier objects, but also of estimating the secret message length. This problem is well studied, with many detectors proposed but only a sparse analysis of errors in the estimators. A deep understanding of the error model, however, is a fundamental requirement for the assessment and comparison of different detection methods. This paper presents a rationale for a two-factor model for sources of error in quantitative steganalysis, and shows evidence from a dedicated large-scale nested experimental set-up with a total of more than 200 million attacks. Apart from general findings about the distribution functions found in both classes of errors, their respective weight is determined, and implications for statistical hypothesis tests in benchmarking scenarios or regression analyses are demonstrated. The results are based on a rigorous comparison of five different detection methods under many different external conditions, such as size of the carrier, previous JPEG compression, and colour channel selection. We include analyses demonstrating the effects of local variance and cover saturation on the different sources of error, as well as presenting the case for a relative bias model for between-image error.

  2. Measurement error affects risk estimates for recruitment to the Hudson River stock of striped bass.

    PubMed

    Dunning, Dennis J; Ross, Quentin E; Munch, Stephan B; Ginzburg, Lev R

    2002-06-07

    We examined the consequences of ignoring the distinction between measurement error and natural variability in an assessment of risk to the Hudson River stock of striped bass posed by entrainment at the Bowline Point, Indian Point, and Roseton power plants. Risk was defined as the probability that recruitment of age-1+ striped bass would decline by 80% or more, relative to the equilibrium value, at least once during the time periods examined (1, 5, 10, and 15 years). Measurement error, estimated using two abundance indices from independent beach seine surveys conducted on the Hudson River, accounted for 50% of the variability in one index and 56% of the variability in the other. If a measurement error of 50% was ignored and all of the variability in abundance was attributed to natural causes, the risk that recruitment of age-1+ striped bass would decline by 80% or more after 15 years was 0.308 at the current level of entrainment mortality (11%). However, the risk decreased almost tenfold (0.032) if a measurement error of 50% was considered. The change in risk attributable to decreasing the entrainment mortality rate from 11 to 0% was very small (0.009) and similar in magnitude to the change in risk associated with an action proposed in Amendment #5 to the Interstate Fishery Management Plan for Atlantic striped bass (0.006)--an increase in the instantaneous fishing mortality rate from 0.33 to 0.4. The proposed increase in fishing mortality was not considered an adverse environmental impact, which suggests that potentially costly efforts to reduce entrainment mortality on the Hudson River stock of striped bass are not warranted.
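
    The sketch below illustrates the paper's central point with invented numbers: a Monte Carlo estimate of the probability of an 80% decline is much larger when all observed variability is treated as natural than when a measurement-error share is first removed. The variability level, error share, and decline threshold are assumptions for illustration, not the Hudson River values.

    ```python
    # Monte Carlo sketch: risk of an 80% decline with and without removing a
    # measurement-error share from the observed variability. Illustrative only.
    import numpy as np

    rng = np.random.default_rng(3)
    total_cv = 0.6          # total observed interannual variability (CV of the index)
    meas_share = 0.5        # fraction of variance attributed to measurement error
    years, n_sim = 15, 100_000

    def risk_of_decline(process_sd):
        # log-normal interannual deviations around an equilibrium of 1.0;
        # "risk" = probability the index falls to <= 0.2 at least once in `years`
        logs = rng.normal(-0.5 * process_sd ** 2, process_sd, size=(n_sim, years))
        return np.mean((np.exp(logs) <= 0.2).any(axis=1))

    sd_all = np.sqrt(np.log(1 + total_cv ** 2))                     # all variance "natural"
    sd_nat = np.sqrt(np.log(1 + total_cv ** 2) * (1 - meas_share))  # measurement error removed

    print(f"risk, measurement error ignored : {risk_of_decline(sd_all):.3f}")
    print(f"risk, measurement error removed : {risk_of_decline(sd_nat):.3f}")
    ```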

  3. The hidden KPI registration accuracy.

    PubMed

    Shorrosh, Paul

    2011-09-01

    Determining the registration accuracy rate is fundamental to improving revenue cycle key performance indicators. A registration quality assurance (QA) process allows errors to be corrected before bills are sent and helps registrars learn from their mistakes. Tools are available to help patient access staff who perform registration QA manually.

  4. Quantifying Errors in TRMM-Based Multi-Sensor QPE Products Over Land in Preparation for GPM

    NASA Technical Reports Server (NTRS)

    Peters-Lidard, Christa D.; Tian, Yudong

    2011-01-01

    Determining uncertainties in satellite-based multi-sensor quantitative precipitation estimates over land is of fundamental importance to both data producers and hydroclimatological applications. Evaluating TRMM-era products also lays the groundwork and sets the direction for algorithm and applications development for future missions including GPM. QPE uncertainties result mostly from the interplay of systematic errors and random errors. In this work, we will synthesize our recent results quantifying the error characteristics of satellite-based precipitation estimates. Both systematic errors and total uncertainties have been analyzed for six different TRMM-era precipitation products (3B42, 3B42RT, CMORPH, PERSIANN, NRL and GSMap). For systematic errors, we devised an error decomposition scheme to separate errors in precipitation estimates into three independent components: hit biases, missed precipitation, and false precipitation. This decomposition scheme reveals hydroclimatologically-relevant error features and provides a better link to the error sources than conventional analysis, because in the latter these error components tend to cancel one another when aggregated or averaged in space or time. For the random errors, we calculated the measurement spread from the ensemble of these six quasi-independent products, and thus produced a global map of measurement uncertainties. The map yields a global view of the error characteristics and their regional and seasonal variations, reveals many undocumented error features over areas with no validation data available, and provides better guidance to global assimilation of satellite-based precipitation data. Insights gained from these results and how they could help with GPM will be highlighted.
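
    A minimal sketch of the three-component systematic error decomposition named above (hit bias, missed precipitation, false precipitation), applied to synthetic satellite and reference rain fields; the components sum exactly to the total bias. The rain statistics are invented for illustration.

    ```python
    # Hit-bias / missed / false decomposition on synthetic rain fields.
    import numpy as np

    rng = np.random.default_rng(4)
    ref = np.clip(rng.gamma(0.3, 5.0, 10_000) - 1.0, 0, None)     # "reference" rain (mm/h)
    sat = np.clip(ref * 1.1 + rng.normal(0, 0.5, ref.size), 0, None)
    sat[rng.random(ref.size) < 0.05] = 0.0                        # some rain missed entirely

    hit   = (ref > 0) & (sat > 0)
    miss  = (ref > 0) & (sat == 0)
    false = (ref == 0) & (sat > 0)

    hit_bias   = np.sum(sat[hit] - ref[hit])
    missed_p   = -np.sum(ref[miss])          # negative: rain present but not estimated
    false_p    = np.sum(sat[false])          # positive: rain estimated but not present
    total_bias = np.sum(sat - ref)

    print(f"hit bias {hit_bias:.1f}  missed {missed_p:.1f}  false {false_p:.1f}")
    print(f"sum of components {hit_bias + missed_p + false_p:.1f}  vs  total bias {total_bias:.1f}")
    ```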

  5. A comparison of protocols and observer precision for measuring physical stream attributes

    USGS Publications Warehouse

    Whitacre, H.W.; Roper, B.B.; Kershner, J.L.

    2007-01-01

    Stream monitoring programs commonly measure physical attributes to assess the effect of land management on stream habitat. Variability associated with the measurement of these attributes has been linked to a number of factors, but few studies have evaluated variability due to differences in protocols. We compared six protocols, five used by the U.S. Department of Agriculture Forest Service and one by the U.S. Environmental Protection Agency, on six streams in Oregon and Idaho to determine whether differences in protocol affect values for 10 physical stream attributes. Results from Oregon and Idaho were combined for groups participating in both states, with significant differences in attribute means for 9 out of the 10 stream attributes. Significant differences occurred in 5 of 10 in Idaho, and 10 of 10 in Oregon. Coefficients of variation, signal-to-noise ratio, and root mean square error were used to evaluate measurement precision. There were differences among protocols for all attributes when states were analyzed separately and as a combined dataset. Measurement differences were influenced by choice of instruments, measurement method, measurement location, attribute definitions, and training approach. Comparison of data gathered by observers using different protocols will be difficult unless a core set of protocols for commonly measured stream attributes can be standardized among monitoring programs.
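
    A small sketch of the precision statistics mentioned above (coefficient of variation, signal-to-noise ratio, and root mean square error) computed from repeated observer measurements of a single stream attribute; the widths and observer counts are invented for illustration.

    ```python
    # Observer-precision statistics for one stream attribute; illustrative data.
    import numpy as np

    # rows = streams, columns = observers measuring e.g. bankfull width (m)
    width = np.array([
        [4.1, 4.4, 3.9, 4.6],
        [7.8, 8.4, 7.5, 8.1],
        [2.9, 3.2, 3.0, 3.4],
        [6.0, 5.6, 6.3, 5.9],
    ])

    stream_mean = width.mean(axis=1, keepdims=True)
    within = width - stream_mean                      # observer (noise) deviations

    cv = (width.std(axis=1, ddof=1) / stream_mean.ravel()).mean()   # mean coefficient of variation
    signal_var = width.mean(axis=1).var(ddof=1)                     # among-stream variance
    noise_var = within.var(ddof=1)                                  # within-stream variance
    s2n = signal_var / noise_var
    rmse = np.sqrt((within ** 2).mean())

    print(f"mean CV = {cv:.3f}, signal-to-noise = {s2n:.1f}, RMSE = {rmse:.2f} m")
    ```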

  6. Topographical gradients of semantics and phonology revealed by temporal lobe stimulation.

    PubMed

    Miozzo, Michele; Williams, Alicia C; McKhann, Guy M; Hamberger, Marla J

    2017-02-01

    Word retrieval is a fundamental component of oral communication, and it is well established that this function is supported by left temporal cortex. Nevertheless, the specific temporal areas mediating word retrieval and the particular linguistic processes these regions support have not been well delineated. Toward this end, we analyzed over 1000 naming errors induced by left temporal cortical stimulation in epilepsy surgery patients. Errors were primarily semantic (lemon → "pear"), phonological (horn → "corn"), non-responses, and delayed responses (correct responses after a delay), and each error type appeared predominantly in a specific region: semantic errors in mid-middle temporal gyrus (TG), phonological errors and delayed responses in middle and posterior superior TG, and non-responses in anterior inferior TG. To the extent that semantic errors, phonological errors and delayed responses reflect disruptions in different processes, our results imply topographical specialization of semantic and phonological processing. Specifically, results revealed an inferior-to-superior gradient, with more superior regions associated with phonological processing. Further, errors were increasingly semantically related to targets toward posterior temporal cortex. We speculate that detailed semantic input is needed to support phonological retrieval, and thus, the specificity of semantic input increases progressively toward posterior temporal regions implicated in phonological processing. Hum Brain Mapp 38:688-703, 2017. © 2016 Wiley Periodicals, Inc.

  7. An assessment of the cultivated cropland class of NLCD 2006 using a multi-source and multi-criteria approach

    USGS Publications Warehouse

    Danielson, Patrick; Yang, Limin; Jin, Suming; Homer, Collin G.; Napton, Darrell

    2016-01-01

    We developed a method that analyzes the quality of the cultivated cropland class mapped in the USA National Land Cover Database (NLCD) 2006. The method integrates multiple geospatial datasets and a Multi Index Integrated Change Analysis (MIICA) change detection method that captures spectral changes to identify the spatial distribution and magnitude of potential commission and omission errors for the cultivated cropland class in NLCD 2006. The majority of the commission and omission errors in NLCD 2006 are in areas where cultivated cropland is not the most dominant land cover type. The errors are primarily attributed to the less accurate training dataset derived from the National Agricultural Statistics Service Cropland Data Layer dataset. In contrast, error rates are low in areas where cultivated cropland is the dominant land cover. Agreement between model-identified commission errors and independently interpreted reference data was high (79%). Agreement was low (40%) for omission error comparison. The majority of the commission errors in the NLCD 2006 cultivated crops were confused with low-intensity developed classes, while the majority of omission errors were from herbaceous and shrub classes. Some errors were caused by inaccurate land cover change from misclassification in NLCD 2001 and the subsequent land cover post-classification process.
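
    A minimal sketch of how commission and omission error rates for a single map class are computed from a confusion matrix of mapped versus reference labels; the class list and counts are invented, not the NLCD 2006 assessment values.

    ```python
    # Commission/omission error rates from a confusion matrix; illustrative counts.
    import numpy as np

    classes = ["cultivated", "developed", "herbaceous", "shrub"]
    # rows = mapped class, columns = reference class
    cm = np.array([
        [820,  40,  90,  50],    # mapped cultivated
        [ 30, 600,  10,  10],    # mapped developed
        [ 60,   5, 700,  40],    # mapped herbaceous
        [ 25,   5,  60, 455],    # mapped shrub
    ])

    i = classes.index("cultivated")
    commission = 1 - cm[i, i] / cm[i, :].sum()   # mapped cultivated, but reference disagrees
    omission   = 1 - cm[i, i] / cm[:, i].sum()   # reference cultivated, but mapped otherwise
    print(f"commission error = {commission:.1%}, omission error = {omission:.1%}")
    ```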

  8. Fundamental Attributes of Exemplary State Special Education Dispute Resolution Systems

    ERIC Educational Resources Information Center

    Center for Appropriate Dispute Resolution in Special Education (CADRE), 2013

    2013-01-01

    Between Fall 2008 and Summer 2010, the Center for Appropriate Dispute Resolution in Special Education (CADRE) analyzed state special education dispute resolution systems and their components, with the objective of identifying particularly effective systems and creating a resource that other states could draw on when considering improvement…

  9. Using Questioning to Stimulate Mathematical Thinking

    ERIC Educational Resources Information Center

    Way, Jenni

    2008-01-01

    Good questioning techniques have long been regarded as a fundamental tool of effective teachers and research has found that "differences in students' thinking and reasoning could be attributed to the type of questions that teachers asked" (Wood, 2002). Past research shows that 93% of teacher questions were "lower order" knowledge-based questions…

  10. Using the "Zone" to Help Reach Every Learner

    ERIC Educational Resources Information Center

    Silver, Debbie

    2011-01-01

    Basically everything associated with maximizing student engagement, achievement, optimal learning environment, learning zone, and the like can be attributed to the work of Lev Vygotsky (1978). A Russian psychologist and social constructivist, Vygotsky (1896-1934) proposed a concept so fundamental to the theory of motivation that it undergirds…

  11. Polarity at Many Levels

    ERIC Educational Resources Information Center

    Flannery, Maura C.

    2004-01-01

    An attempt is made to find how polarity arises and is maintained, which is a central issue in development. It is a fundamental attribute of living things and cellular polarity is also important in the development of multicellular organisms and controversial new work indicates that polarization in mammals may occur much earlier than previously…

  12. An Empirical Assessment of the Form of Utility Functions

    ERIC Educational Resources Information Center

    Kirby, Kris N.

    2011-01-01

    Utility functions, which relate subjective value to physical attributes of experience, are fundamental to most decision theories. Seven experiments were conducted to test predictions of the most widely assumed mathematical forms of utility (power, log, and negative exponential), and a function proposed by Rachlin (1992). For pairs of gambles for…

  13. Understanding Economic Change in the Gilded Age.

    ERIC Educational Resources Information Center

    Campbell, Ballard C.

    1999-01-01

    Addresses the impediments involved in teaching the economic history of the Gilded Age. Presents six attributes of industrialization to use when teaching about the Gilded Age that concentrate on the fundamental components of economic change: (1) technology; (2) railroads; (3) corporations; (4) finance capitalism; (5) labor; and (6) retailing. (CMK)

  14. Population trends influence species ability to track climate change

    Treesearch

    Joel Ralston; William V. DeLuca; Richard E. Feldman; David I. King

    2016-01-01

    Shifts of distributions have been attributed to species tracking their fundamental climate niches through space. However, several studies have now demonstrated that niche tracking is imperfect, that species' climate niches may vary with population trends, and that geographic distributions may lag behind rapid climate change. These reports of imperfect niche...

  15. Estimating Surface Area of Sponges and Marine Gorgonians as Indicators of Habitat Availability on Caribbean Coral Reefs

    EPA Science Inventory

    Surface area and topographical complexity are fundamental attributes of shallow tropical coral reefs and can be used to estimate habitat for fish and invertebrates. This study presents empirical methods for estimating surface area provided by sponges and gorgonians in the Central...

  16. School Leadership in Times of Crisis

    ERIC Educational Resources Information Center

    Smith, Larry; Riley, Dan

    2012-01-01

    The leadership attributes and skills required of school leaders in times of crisis are fundamentally different from those generally required as part of the "normal" school environment. Strong school leadership generally is about positioning the school for the future, and about supporting and empowering staff and students in the pursuit of teaching…

  17. The Web: Can We Make It Easier To Find Information?

    ERIC Educational Resources Information Center

    Maddux, Cleborne D.

    1999-01-01

    Reviews problems with the World Wide Web that can be attributed to human error or ineptitude, and provides suggestions for improvement. Discusses poor Web design, poor use of search engines, and poor quality control by search engines and directories. (AEF)

  18. Communicating Uncertain Experimental Evidence

    ERIC Educational Resources Information Center

    Davis, Alexander L.; Fischhoff, Baruch

    2014-01-01

    Four experiments examined when laypeople attribute unexpected experimental outcomes to error, in foresight and in hindsight, along with their judgments of whether the data should be published. Participants read vignettes describing hypothetical experiments, along with the result of the initial observation, considered as either a possibility…

  19. Intrinsic fundamental frequency of vowels is moderated by regional dialect

    PubMed Central

    Jacewicz, Ewa; Fox, Robert Allen

    2015-01-01

    There has been a long-standing debate whether the intrinsic fundamental frequency (IF0) of vowels is an automatic consequence of articulation or whether it is independently controlled by speakers to perceptually enhance vowel contrasts along the height dimension. This paper provides evidence from regional variation in American English that IF0 difference between high and low vowels is, in part, controlled and varies across dialects. The sources of this F0 control are socio-cultural and cannot be attributed to differences in the vowel inventory size. The socially motivated enhancement was found only in prosodically prominent contexts. PMID:26520352

  20. Prediction, Error, and Adaptation during Online Sentence Comprehension

    ERIC Educational Resources Information Center

    Fine, Alex Brabham

    2013-01-01

    A fundamental challenge for human cognition is perceiving and acting in a world in which the statistics that characterize available sensory data are non-stationary. This thesis focuses on this problem specifically in the domain of sentence comprehension, where linguistic variability poses computational challenges to the processes underlying…

  1. Understanding Place Value

    ERIC Educational Resources Information Center

    Cooper, Linda L.; Tomayko, Ming C.

    2011-01-01

    Developing an understanding of place value and the base-ten number system is considered a fundamental goal of the early primary grades. For years, teachers have anecdotally reported that students struggle with place-value concepts. Among the common errors cited are misreading such numbers as 26 and 62 by seeing them as identical in meaning,…

  2. Fraction Operations: An Examination of Prospective Teachers' Errors Confidence, and Bias

    ERIC Educational Resources Information Center

    Young, Elaine; Zientek, Linda

    2011-01-01

    Fractions are important in young students' understanding of rational numbers and proportional reasoning. The teacher is fundamental in developing student understanding and competency in working with fractions. The present study spanned five years and investigated prospective teachers' competency and confidence with fraction operations as they…

  3. Attributing uncertainty in streamflow simulations due to variable inputs via the Quantile Flow Deviation metric

    NASA Astrophysics Data System (ADS)

    Shoaib, Syed Abu; Marshall, Lucy; Sharma, Ashish

    2018-06-01

    Every model to characterise a real world process is affected by uncertainty. Selecting a suitable model is a vital aspect of engineering planning and design. Observation or input errors make the prediction of modelled responses more uncertain. By way of a recently developed attribution metric, this study is aimed at developing a method for analysing variability in model inputs together with model structure variability to quantify their relative contributions in typical hydrological modelling applications. The Quantile Flow Deviation (QFD) metric is used to assess these alternate sources of uncertainty. The Australian Water Availability Project (AWAP) precipitation data for four different Australian catchments is used to analyse the impact of spatial rainfall variability on simulated streamflow variability via the QFD. The QFD metric attributes the variability in flow ensembles to uncertainty associated with the selection of a model structure and input time series. For the case study catchments, the relative contribution of input uncertainty due to rainfall is higher than that due to potential evapotranspiration, and overall input uncertainty is significant compared to model structure and parameter uncertainty. Overall, this study investigates the propagation of input uncertainty in a daily streamflow modelling scenario and demonstrates how input errors manifest across different streamflow magnitudes.

  4. The late posterior negativity in ERP studies of episodic memory: action monitoring and retrieval of attribute conjunctions.

    PubMed

    Johansson, Mikael; Mecklinger, Axel

    2003-10-01

    The focus of the present paper is a late posterior negative slow wave (LPN) that has frequently been reported in event-related potential (ERP) studies of memory. An overview of these studies suggests that two broad classes of experimental conditions tend to elicit this component: (a) item recognition tasks associated with enhanced action monitoring demands arising from response conflict and (b) memory tasks that require the binding of items with contextual information specifying the study episode. A combined stimulus- and response-locked analysis of data from two studies mapping onto these classes allowed a temporal and functional decomposition of the LPN. While only the LPN observed in the item recognition task could be attributed to the involvement of a posteriorly distributed response-locked error-related negativity (or error negativity; ERN/Ne) occurring immediately after the response, the source-memory task was associated with a stimulus-locked negative slow wave occurring prior and during response execution that was evident when data were matched for response latencies. We argue that the presence of the former reflects action monitoring due to high levels of response conflict, whereas the latter reflects retrieval processes that may act to reconstruct the prior study episode when task-relevant attribute conjunctions are not readily recovered or need continued evaluation.

  5. Extension of sonic anemometry to high subsonic Mach number flows

    NASA Astrophysics Data System (ADS)

    Otero, R.; Lowe, K. T.; Ng, W. F.

    2017-03-01

    In the literature, the application of sonic anemometry has been limited to low subsonic Mach number, near-incompressible flow conditions. To the best of the authors' knowledge, this paper represents the first time a sonic anemometry approach has been used to characterize flow velocity beyond Mach 0.3. Using a high speed jet, flow velocity was measured using a modified sonic anemometry technique in flow conditions up to Mach 0.83. A numerical study was conducted to identify the effects of microphone placement on the accuracy of the measured velocity. Based on estimated error strictly due to uncertainty in acoustic time-of-flight, a random error of +/- 4 m/s was identified for the configuration used in this experiment. Comparison with measurements from a Pitot probe indicated a velocity RMS error of +/- 9 m/s. The discrepancy in error is attributed to a systematic error which may be calibrated out in future work. Overall, the experimental results from this preliminary study support the use of acoustics for high subsonic flow characterization.
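
    The sketch below shows the basic bidirectional time-of-flight relation that underlies sonic anemometry in general (a textbook relation, not the paper's modified high-Mach technique): flow speed and sound speed both follow from the reciprocal upstream and downstream transit times, which is also why timing uncertainty maps directly into velocity error. The path length and flow conditions are assumed values.

    ```python
    # Time-of-flight sonic anemometry relation; assumed geometry and flow values.
    L = 0.15                       # acoustic path length (m), assumed
    c, v = 340.0, 250.0            # sound speed and flow speed (m/s), roughly Mach 0.74

    t_down = L / (c + v)           # propagation with the flow
    t_up   = L / (c - v)           # propagation against the flow

    v_est = 0.5 * L * (1.0 / t_down - 1.0 / t_up)
    c_est = 0.5 * L * (1.0 / t_down + 1.0 / t_up)
    print(f"recovered v = {v_est:.1f} m/s, c = {c_est:.1f} m/s")

    # A timing uncertainty dt maps to a velocity error on the order of
    # 0.5 * L * dt * (1/t_down**2 + 1/t_up**2), which is how microphone placement
    # and time-of-flight uncertainty drive an error of a few m/s.
    ```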

  6. Apollo 15 mission report: Apollo 15 guidance, navigation, and control system performance analysis report (supplement 1)

    NASA Technical Reports Server (NTRS)

    1972-01-01

    This report contains the results of additional studies which were conducted to confirm the conclusions of the MSC Mission Report and contains analyses which were not completed in time to meet the mission report deadline. The LM IMU data were examined during the lunar descent and ascent phases. Most of the PGNCS descent absolute velocity error was caused by platform misalignments. PGNCS radial velocity divergence from AGS during the early part of descent was partially caused by PGNCS gravity computation differences from AGS. The remainder of the differences between PGNCS and AGS velocities was easily attributable to attitude reference alignment differences and tolerable instrument errors. For ascent, the PGNCS radial velocity error at insertion was examined. The total error of 10.8 ft/sec was well within mission constraints but larger than expected. Of the total error, 2.30 ft/sec was PIPA bias error, which was suspected to exist pre-lunar liftoff. The remaining 8.5 ft/sec is most probably accounted for by a large pre-liftoff platform misalignment.

  7. Prediction skill of tropical synoptic scale transients from ECMWF and NCEP ensemble prediction systems

    DOE PAGES

    Taraphdar, S.; Mukhopadhyay, P.; Leung, L. Ruby; ...

    2016-12-05

    The prediction skill of tropical synoptic scale transients (SSTR) such as monsoon lows and depressions during the boreal summer of 2007–2009 is assessed using high-resolution ECMWF and NCEP TIGGE forecast data. By analyzing 246 forecasts for lead times up to 10 days, it is found that the models have good skills in forecasting the planetary scale means but the skills of SSTR remain poor, with the latter showing no skill beyond 2 days for the global tropics and Indian region. Consistent forecast skills among precipitation, velocity potential, and vorticity provide evidence that convection is the primary process responsible for precipitation. The poor skills of SSTR can be attributed to the larger random error in the models as they fail to predict the locations and timings of SSTR. Strong correlation between the random error and synoptic precipitation suggests that the former starts to develop from regions of convection. As the NCEP model has larger biases of synoptic scale precipitation, it has a tendency to generate more random error that ultimately reduces the prediction skill of synoptic systems in that model. Finally, the larger biases in NCEP may be attributed to the model moist physics and/or coarser horizontal resolution compared to ECMWF.

  8. Prediction skill of tropical synoptic scale transients from ECMWF and NCEP ensemble prediction systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taraphdar, S.; Mukhopadhyay, P.; Leung, L. Ruby

    The prediction skill of tropical synoptic scale transients (SSTR) such as monsoon lows and depressions during the boreal summer of 2007–2009 is assessed using high-resolution ECMWF and NCEP TIGGE forecast data. By analyzing 246 forecasts for lead times up to 10 days, it is found that the models have good skills in forecasting the planetary scale means but the skills of SSTR remain poor, with the latter showing no skill beyond 2 days for the global tropics and Indian region. Consistent forecast skills among precipitation, velocity potential, and vorticity provide evidence that convection is the primary process responsible for precipitation. The poor skills of SSTR can be attributed to the larger random error in the models as they fail to predict the locations and timings of SSTR. Strong correlation between the random error and synoptic precipitation suggests that the former starts to develop from regions of convection. As the NCEP model has larger biases of synoptic scale precipitation, it has a tendency to generate more random error that ultimately reduces the prediction skill of synoptic systems in that model. Finally, the larger biases in NCEP may be attributed to the model moist physics and/or coarser horizontal resolution compared to ECMWF.

  9. Precipitation and Latent Heating Distributions from Satellite Passive Microwave Radiometry. Part II: Evaluation of Estimates Using Independent Data

    NASA Technical Reports Server (NTRS)

    Yang, Song; Olson, William S.; Wang, Jian-Jian; Bell, Thomas L.; Smith, Eric A.; Kummerow, Christian D.

    2006-01-01

    Rainfall rate estimates from spaceborne microwave radiometers are generally accepted as reliable by a majority of the atmospheric science community. One of the Tropical Rainfall Measuring Mission (TRMM) facility rain-rate algorithms is based upon passive microwave observations from the TRMM Microwave Imager (TMI). In Part I of this series, improvements of the TMI algorithm that are required to introduce latent heating as an additional algorithm product are described. Here, estimates of surface rain rate, convective proportion, and latent heating are evaluated using independent ground-based estimates and satellite products. Instantaneous, 0.5°-resolution estimates of surface rain rate over ocean from the improved TMI algorithm are well correlated with independent radar estimates (r approx. 0.88 over the Tropics), but bias reduction is the most significant improvement over earlier algorithms. The bias reduction is attributed to the greater breadth of cloud-resolving model simulations that support the improved algorithm and the more consistent and specific convective/stratiform rain separation method utilized. The bias of monthly 2.5°-resolution estimates is similarly reduced, with comparable correlations to radar estimates. Although the amount of independent latent heating data is limited, TMI-estimated latent heating profiles compare favorably with instantaneous estimates based upon dual-Doppler radar observations, and time series of surface rain-rate and heating profiles are generally consistent with those derived from rawinsonde analyses. Still, some biases in profile shape are evident, and these may be resolved with (a) additional contextual information brought to the estimation problem and/or (b) physically consistent and representative databases supporting the algorithm. A model of the random error in instantaneous 0.5°-resolution rain-rate estimates appears to be consistent with the levels of error determined from TMI comparisons with collocated radar. Error model modifications for nonraining situations will be required, however. Sampling error represents only a portion of the total error in monthly 2.5°-resolution TMI estimates; the remaining error is attributed to random and systematic algorithm errors arising from the physical inconsistency and/or nonrepresentativeness of cloud-resolving-model-simulated profiles that support the algorithm.

  10. Neural interface methods and apparatus to provide artificial sensory capabilities to a subject

    DOEpatents

    Buerger, Stephen P.; Olsson, III, Roy H.; Wojciechowski, Kenneth E.; Novick, David K.; Kholwadwala, Deepesh K.

    2017-01-24

    Embodiments of neural interfaces according to the present invention comprise sensor modules for sensing environmental attributes beyond the natural sensory capability of a subject, and communicating the attributes wirelessly to an external (ex-vivo) portable module attached to the subject. The ex-vivo module encodes and communicates the attributes via a transcutaneous inductively coupled link to an internal (in-vivo) module implanted within the subject. The in-vivo module converts the attribute information into electrical neural stimuli that are delivered to a peripheral nerve bundle within the subject, via an implanted electrode. Methods and apparatus according to the invention incorporate implantable batteries to power the in-vivo module allowing for transcutaneous bidirectional communication of low voltage (e.g. on the order of 5 volts) encoded signals as stimuli commands and neural responses, in a robust, low-error rate, communication channel with minimal effects to the subjects' skin.

  11. A comparative analysis of errors in long-term econometric forecasts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tepel, R.

    1986-04-01

    The growing body of literature that documents forecast accuracy falls generally into two parts. The first is prescriptive and is carried out by modelers who use simulation analysis as a tool for model improvement. These studies are ex post, that is, they make use of known values for exogenous variables and generate an error measure wholly attributable to the model. The second type of analysis is descriptive and seeks to measure errors, identify patterns among errors and variables and compare forecasts from different sources. Most descriptive studies use an ex ante approach, that is, they evaluate model outputs based on estimated (or forecasted) exogenous variables. In this case, it is the forecasting process, rather than the model, that is under scrutiny. This paper uses an ex ante approach to measure errors in forecast series prepared by Data Resources Incorporated (DRI), Wharton Econometric Forecasting Associates (Wharton), and Chase Econometrics (Chase) and to determine if systematic patterns of errors can be discerned between services, types of variables (by degree of aggregation), length of forecast and time at which the forecast is made. Errors are measured as the percent difference between actual and forecasted values for the historical period of 1971 to 1983.
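
    A minimal sketch of the ex ante error measure used in such comparisons: the percent difference between actual and forecasted values, tabulated by forecast service and forecast horizon. The service names and numbers are hypothetical, not DRI, Wharton, or Chase data.

    ```python
    # Percent forecast error by service and horizon; hypothetical values.
    import numpy as np

    actual = np.array([100.0, 104.0, 109.0, 115.0])          # realized series
    forecasts = {                                            # hypothetical services
        "Service A": np.array([101.0, 106.0, 112.0, 121.0]),
        "Service B": np.array([ 99.0, 102.0, 105.0, 109.0]),
    }

    for name, fc in forecasts.items():
        pct_err = 100.0 * (actual - fc) / actual             # % error at each horizon
        mape = np.abs(pct_err).mean()
        print(f"{name}: errors by horizon {np.round(pct_err, 1)}  MAPE = {mape:.1f}%")
    ```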

  12. A clinic-based study of refractive errors, strabismus, and amblyopia in pediatric age-group.

    PubMed

    Al-Tamimi, Elham R; Shakeel, Ayisha; Yassin, Sanaa A; Ali, Syed I; Khan, Umar A

    2015-01-01

    The purpose of this cross-sectional observational study was to determine the distribution and patterns of refractive errors, strabismus, and amblyopia in children seen at a pediatric eye care. The study was conducted in a Private Hospital in Dammam, Kingdom of Saudi Arabia, from March to July 2013. During this period, a total of 1350 children, aged 1-15 years were seen at this Center's Pediatric Ophthalmology Unit. All the children underwent complete ophthalmic examination with cycloplegic refraction. Refractive errors accounted for 44.4% of the cases, the predominant refractive error being hypermetropia which represented 83%. Strabismus and amblyopia were present in 38% and 9.1% of children, respectively. In this clinic-based study, the focus was on the frequency of refractive errors, strabismus, and amblyopia which were considerably high. Hypermetropia was the predominant refractive error in contrast to other studies in which myopia was more common. This could be attributed to the criteria for sample selection since it was clinic-based rather than a population-based study. However, it is important to promote public education on the significance of early detection of refractive errors, and have periodic screening in schools.

  13. Inspection error and its adverse effects - A model with implications for practitioners

    NASA Technical Reports Server (NTRS)

    Collins, R. D., Jr.; Case, K. E.; Bennett, G. K.

    1978-01-01

    Inspection error has clearly been shown to have adverse effects upon the results desired from a quality assurance sampling plan. These effects upon performance measures have been well documented from a statistical point of view. However, little work has been presented to convince the QC manager of the unfavorable cost consequences resulting from inspection error. This paper develops a very general, yet easily used, mathematical cost model. The basic format of the well-known Guthrie-Johns model is used; however, it is modified as required to assess the effects of attributes sampling errors of the first and second kind. The economic results, under different yet realistic conditions, will no doubt be of interest to QC practitioners who face similar problems daily. Sampling inspection plans are optimized to minimize economic losses due to inspection error. Unfortunately, any error at all results in some economic loss which cannot be compensated for by sampling plan design; however, improvements over plans which neglect the presence of inspection error are possible. Implications for human performance improvement programs are apparent, as are trade-offs between sampling plan modification and the economics of inspection and training improvements.
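
    As a rough illustration of how attributes sampling errors of the first and second kind enter a sampling-plan calculation, the Python sketch below computes lot acceptance probabilities with and without inspection error. The parameters and the single-sampling plan are hypothetical, and the cost structure of the Guthrie-Johns model is not reproduced:

      # Hypothetical illustration of attributes sampling with inspection error.
      # e1: probability a good item is classified defective (error of the first kind)
      # e2: probability a defective item is classified good (error of the second kind)
      from math import comb

      def apparent_defect_rate(p, e1, e2):
          # Probability an inspected item is *classified* defective.
          return p * (1.0 - e2) + (1.0 - p) * e1

      def acceptance_probability(n, c, p_apparent):
          # Probability of accepting the lot under a single sampling plan (n, c).
          return sum(comb(n, k) * p_apparent**k * (1 - p_apparent)**(n - k)
                     for k in range(c + 1))

      p, e1, e2 = 0.02, 0.01, 0.05   # hypothetical process fraction defective and error rates
      n, c = 80, 2                   # hypothetical sampling plan
      pa_ideal = acceptance_probability(n, c, p)
      pa_error = acceptance_probability(n, c, apparent_defect_rate(p, e1, e2))
      print(f"P(accept) with error-free inspection: {pa_ideal:.3f}")
      print(f"P(accept) with inspection error:      {pa_error:.3f}")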

  14. Prediction models for Arabica coffee beverage quality based on aroma analyses and chemometrics.

    PubMed

    Ribeiro, J S; Augusto, F; Salva, T J G; Ferreira, M M C

    2012-11-15

    In this work, soft modeling based on chemometric analyses of coffee beverage sensory data and the chromatographic profiles of volatile roasted coffee compounds is proposed to predict the scores of acidity, bitterness, flavor, cleanliness, body, and overall quality of the coffee beverage. A partial least squares (PLS) regression method was used to construct the models. The ordered predictor selection (OPS) algorithm was applied to select the compounds for the regression model of each sensory attribute in order to take only significant chromatographic peaks into account. The prediction errors of these models, using 4 or 5 latent variables, were 0.28, 0.33, 0.35, 0.33, 0.34, and 0.41 for the six attributes, respectively, and were compatible with the errors of the mean scores of the experts. Thus, the results demonstrate the feasibility of using a similar methodology in on-line or routine applications to predict the sensory quality of Brazilian Arabica coffee. Copyright © 2012 Elsevier B.V. All rights reserved.
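
    A minimal Python sketch of the PLS modeling step described above, using scikit-learn. The data are random placeholders, and the simple correlation-based variable filter merely stands in for (and is not) the OPS algorithm:

      # Minimal sketch of PLS regression of sensory scores on chromatographic peaks.
      # X: peak areas of volatile compounds (rows = coffee samples); y: sensory scores
      # for one attribute. Data are random placeholders, not the study's data.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_predict

      rng = np.random.default_rng(0)
      X = rng.normal(size=(58, 120))                                 # hypothetical peak table
      y = X[:, :5].sum(axis=1) + rng.normal(scale=0.3, size=58)      # synthetic scores

      # Crude stand-in for variable selection: keep peaks most correlated with y.
      corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
      selected = np.argsort(corr)[-30:]

      pls = PLSRegression(n_components=5)
      y_pred = cross_val_predict(pls, X[:, selected], y, cv=10).ravel()
      rmsecv = np.sqrt(np.mean((y - y_pred) ** 2))
      print(f"Cross-validated prediction error (RMSECV): {rmsecv:.2f}")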

  15. The Clinical Assessment in the Legal Field: An Empirical Study of Bias and Limitations in Forensic Expertise

    PubMed Central

    Iudici, Antonio; Salvini, Alessandro; Faccio, Elena; Castelnuovo, Gianluca

    2015-01-01

    According to the literature, psychological assessment in forensic contexts is one of the most controversial application areas for clinical psychology. This paper presents a review of systematic judgment errors in the forensic field. Forty-six psychological reports written by psychologists serving as court consultants were analyzed with content analysis to identify typical judgment errors related to the following areas: (a) distortions in the attribution of causality, (b) inferential errors, and (c) epistemological inconsistencies. Results indicated that systematic errors of judgment, of the kind usually attributed to "the man in the street," are widely present in the forensic evaluations of specialist consultants. Clinical and practical implications are taken into account. This article could lead to significant benefits for clinical psychologists who want to deal with this sensitive issue and are interested in improving the quality of their contribution to the justice system. PMID:26648892

  16. Blind Braille readers mislocate tactile stimuli.

    PubMed

    Sterr, Annette; Green, Lisa; Elbert, Thomas

    2003-05-01

    In a previous experiment, we observed that blind Braille readers produce errors when asked to identify on which finger of one hand a light tactile stimulus had occurred. With the present study, we aimed to specify the characteristics of this perceptual error in blind and sighted participants. The experiment confirmed that blind Braille readers mislocalised tactile stimuli more often than sighted controls, and that the localisation errors occurred significantly more often at the right reading hand than at the non-reading hand. Most importantly, we discovered that the reading fingers showed the smallest error frequency, but the highest rate of stimulus attribution. The dissociation of perceiving and locating tactile stimuli in the blind suggests altered tactile information processing. Neuroplasticity, changes in tactile attention mechanisms as well as the idea that blind persons may employ different strategies for tactile exploration and object localisation are discussed as possible explanations for the results obtained.

  17. Headaches associated with refractive errors: myth or reality?

    PubMed

    Gil-Gouveia, R; Martins, I P

    2002-04-01

    Headache and refractive errors are very common conditions in the general population, and those with headache often attribute their pain to a visual problem. The International Headache Society (IHS) criteria for the classification of headache includes an entity of headache associated with refractive errors (HARE), but indicates that its importance is widely overestimated. To compare overall headache frequency and HARE frequency in healthy subjects with uncorrected or miscorrected refractive errors and a control group. We interviewed 105 individuals with uncorrected refractive errors and a control group of 71 subjects (with properly corrected or without refractive errors) regarding their headache history. We compared the occurrence of headache and its diagnosis in both groups and assessed its relation to their habits of visual effort and type of refractive errors. Headache frequency was similar in both subjects and controls. Headache associated with refractive errors was the only headache type significantly more common in subjects with refractive errors than in controls (6.7% versus 0%). It was associated with hyperopia and was unrelated to visual effort or to the severity of visual error. With adequate correction, 72.5% of the subjects with headache and refractive error reported improvement in their headaches, and 38% had complete remission of headache. Regardless of the type of headache present, headache frequency was significantly reduced in these subjects (t = 2.34, P =.02). Headache associated with refractive errors was rarely identified in individuals with refractive errors. In those with chronic headache, proper correction of refractive errors significantly improved headache complaints and did so primarily by decreasing the frequency of headache episodes.

  18. Nonlinear analysis and dynamic compensation of stylus scanning measurement with wide range

    NASA Astrophysics Data System (ADS)

    Hui, Heiyang; Liu, Xiaojun; Lu, Wenlong

    2011-12-01

    Surface topography is an important geometrical feature of a workpiece that influences its quality and functions such as friction, wear, lubrication, and sealing. Precision measurement of surface topography is fundamental for characterizing and assuring product quality. The stylus scanning technique is a widely used method for surface topography measurement, and it is also regarded as the international standard method for 2-D surface characterization. Usually, surface topography, including primary profile, waviness, and roughness, can be measured precisely and efficiently by this method. However, when the stylus scanning method is used to measure curved surface topography, a nonlinear error is unavoidable because the horizontal position of the actual measured point differs from the given sampling point and because the transformation from vertical displacement of the stylus tip to angular displacement of the stylus arm is nonlinear; this error increases with the measuring range. In this paper, a wide-range stylus scanning measurement system based on the cylindrical grating interference principle is constructed, the origins of the nonlinear error are analyzed, an error model is established, and a solution is proposed in which the error of the collected data is dynamically compensated.
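
    A minimal geometric sketch of this kind of nonlinearity and its compensation, assuming a rigid stylus arm of length L that pivots about a fixed point and a sensor reading proportional to the arm angle. This simplified model is illustrative only and is not the error model developed in the paper:

      # Minimal geometric sketch of the arc-motion nonlinearity and its compensation.
      # Assumption (not the paper's exact model): reading r is proportional to the
      # arm angle, the true height is L*sin(theta), and the stylus tip shifts
      # laterally by L*(1 - cos(theta)) from the nominal sampling position.
      import numpy as np

      L = 60.0  # hypothetical stylus arm length, mm

      def compensate(readings, x_nominal):
          theta = readings / L                       # arm angle recovered from reading
          z = L * np.sin(theta)                      # compensated height
          x = x_nominal - L * (1.0 - np.cos(theta))  # compensated lateral position
          return x, z

      x_nom = np.linspace(0.0, 10.0, 6)                   # nominal positions, mm
      r = np.array([0.0, 1.0, 2.5, 4.0, 5.5, 6.5])        # hypothetical readings, mm
      x_corr, z_corr = compensate(r, x_nom)
      print(np.round(x_corr, 3), np.round(z_corr, 3))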

  19. Current State of Economic Returns from Education in China's Ethnic Regions and Explorations into Ways of Improvement

    ERIC Educational Resources Information Center

    Lijun, Zhang; Fei, Wang

    2010-01-01

    Economic development and social progress in China's ethnic minority regions depend on improvements in population attributes brought about by education. Developing education in China's ethnic regions is a project of fundamental significance for realizing sustainable economic and social development in the ethnic regions. Improving the economic…

  20. Fundamental Ideas of Social Psychology. SSEC Publication No. 149.

    ERIC Educational Resources Information Center

    Weatherley, Donald

    Core ideas in the discipline of social psychology are examined in this publication. Social psychology is the study of social behavior based upon individual psychological attributes or personality. An individual's personality can be thought of as inner states of readiness which predispose a person to respond in certain ways in social situations. A…

  1. Science in the News: An Evaluation of Students' Scientific Literacy

    ERIC Educational Resources Information Center

    Murcia, Karen

    2009-01-01

    Understanding and evaluating reports of science in the media is frequently stated as an attribute of a scientifically literate person, with some researchers suggesting it should be fundamental to any study of scientific literacy. Constructive engagement with science news briefs requires individuals to understand the terms used, take a critical…

  2. Student-Centered Learning: Functional Requirements for Integrated Systems to Optimize Learning

    ERIC Educational Resources Information Center

    Glowa, Liz; Goodell, Jim

    2016-01-01

    The realities of the 21st-century learner require that schools and educators fundamentally change their practice. "Educators must produce college- and career-ready graduates that reflect the future these students will face. And, they must facilitate learning through means that align with the defining attributes of this generation of…

  3. Organize Your School for Improvement

    ERIC Educational Resources Information Center

    Truby, William F.

    2017-01-01

    W. Edwards Deming has suggested 96% of organization performance is a function of the organization's structure. He contends only about 4% of an organization's performance is attributable to the people. This is a fundamental difference as most school leaders work with the basic assumption that 80% of a school's performance is related to staff and…

  4. Measuring the Level of Complexity of Scientific Inquiries: The LCSI Index

    ERIC Educational Resources Information Center

    Eilam, Efrat

    2015-01-01

    The study developed and applied an index for measuring the level of complexity of full authentic scientific inquiry. Complexity is a fundamental attribute of real life scientific research. The level of complexity is an overall reflection of complex cognitive and metacognitive processes which are required for navigating the authentic inquiry…

  5. Respect in Principal-Teacher Relations at Primary Schools in Turkey

    ERIC Educational Resources Information Center

    Güngör, Sabri; Aydin, Inayet; Memduhoglu, Hasan Basri; Oguz, Ebru

    2013-01-01

    Respect means consideration of actions and requests of others and confirmation of values of others. School is a social institution where students are equipped with knowledge and skills, as well as fundamental character attributes. Respect among students, teachers, administrators, parents, and other staff at schools is of importance. In this study,…

  6. Comparability of [0-Level] GCE Grades in 1968 and 1973.

    ERIC Educational Resources Information Center

    Backhouse, John K.

    1978-01-01

    Willmott's comparison of General Certificate of Education (GCE) scores in 1968 and 1973 is reexamined. The trend toward an increasing percentage of students who pass is confirmed, but estimates of standard errors indicate that subtest differences may be attributed to the sampling plan. (CP)

  7. Research Measures for Dyscalculia: A Validity and Reliability Study.

    ERIC Educational Resources Information Center

    Geiman, R. M.

    1986-01-01

    This study sought to evaluate a measure of dyscalculia to determine its validity and reliability. It also tested use of the instrument with seventh graders and ascertained whether errors attributed to dyscalculia were also present in an average sample of seventh graders. Results varied. (MNS)

  8. An Empirical Examination of Weiner's Critique of Attribution Research.

    ERIC Educational Resources Information Center

    Covington, Martin V.; Omelich, Carol L.

    1984-01-01

    Weiner's allegations of errors in testing his theory (presumed detrimental effects of investigating a restricted range of variables, use of expectancy changes as a mediating variable, and presumed inappropriateness of classroom performance as a dependent variable) are evaluated. Disconfirmation of Weiner's predictions occurs irrespective of…

  9. Motive attribution asymmetry for love vs. hate drives intractable conflict.

    PubMed

    Waytz, Adam; Young, Liane L; Ginges, Jeremy

    2014-11-04

    Five studies across cultures involving 661 American Democrats and Republicans, 995 Israelis, and 1,266 Palestinians provide previously unidentified evidence of a fundamental bias, what we term the "motive attribution asymmetry," driving seemingly intractable human conflict. These studies show that in political and ethnoreligious intergroup conflict, adversaries tend to attribute their own group's aggression to ingroup love more than outgroup hate and to attribute their outgroup's aggression to outgroup hate more than ingroup love. Study 1 demonstrates that American Democrats and Republicans attribute their own party's involvement in conflict to ingroup love more than outgroup hate but attribute the opposing party's involvement to outgroup hate more than ingroup love. Studies 2 and 3 demonstrate this biased attributional pattern for Israelis and Palestinians evaluating their own group and the opposing group's involvement in the current regional conflict. Study 4 demonstrates in an Israeli population that this bias increases beliefs and intentions associated with conflict intractability toward Palestinians. Finally, study 5 demonstrates, in the context of American political conflict, that offering Democrats and Republicans financial incentives for accuracy in evaluating the opposing party can mitigate this bias and its consequences. Although people find it difficult to explain their adversaries' actions in terms of love and affiliation, we suggest that recognizing this attributional bias and how to reduce it can contribute to reducing human conflict on a global scale.

  10. Motive attribution asymmetry for love vs. hate drives intractable conflict

    PubMed Central

    Waytz, Adam; Young, Liane L.; Ginges, Jeremy

    2014-01-01

    Five studies across cultures involving 661 American Democrats and Republicans, 995 Israelis, and 1,266 Palestinians provide previously unidentified evidence of a fundamental bias, what we term the “motive attribution asymmetry,” driving seemingly intractable human conflict. These studies show that in political and ethnoreligious intergroup conflict, adversaries tend to attribute their own group’s aggression to ingroup love more than outgroup hate and to attribute their outgroup’s aggression to outgroup hate more than ingroup love. Study 1 demonstrates that American Democrats and Republicans attribute their own party’s involvement in conflict to ingroup love more than outgroup hate but attribute the opposing party’s involvement to outgroup hate more than ingroup love. Studies 2 and 3 demonstrate this biased attributional pattern for Israelis and Palestinians evaluating their own group and the opposing group’s involvement in the current regional conflict. Study 4 demonstrates in an Israeli population that this bias increases beliefs and intentions associated with conflict intractability toward Palestinians. Finally, study 5 demonstrates, in the context of American political conflict, that offering Democrats and Republicans financial incentives for accuracy in evaluating the opposing party can mitigate this bias and its consequences. Although people find it difficult to explain their adversaries’ actions in terms of love and affiliation, we suggest that recognizing this attributional bias and how to reduce it can contribute to reducing human conflict on a global scale. PMID:25331879

  11. Planetary Transmission Diagnostics

    NASA Technical Reports Server (NTRS)

    Lewicki, David G. (Technical Monitor); Samuel, Paul D.; Conroy, Joseph K.; Pines, Darryll J.

    2004-01-01

    This report presents a methodology for detecting and diagnosing gear faults in the planetary stage of a helicopter transmission. This diagnostic technique is based on the constrained adaptive lifting algorithm. The lifting scheme, developed by Wim Sweldens of Bell Labs, is a time domain, prediction-error realization of the wavelet transform that allows for greater flexibility in the construction of wavelet bases. Classic lifting analyzes a given signal using wavelets derived from a single fundamental basis function. A number of researchers have proposed techniques for adding adaptivity to the lifting scheme, allowing the transform to choose from a set of fundamental bases the basis that best fits the signal. This characteristic is desirable for gear diagnostics as it allows the technique to tailor itself to a specific transmission by selecting a set of wavelets that best represent vibration signals obtained while the gearbox is operating under healthy-state conditions. However, constraints on certain basis characteristics are necessary to enhance the detection of local wave-form changes caused by certain types of gear damage. The proposed methodology analyzes individual tooth-mesh waveforms from a healthy-state gearbox vibration signal that was generated using the vibration separation (synchronous signal-averaging) algorithm. Each waveform is separated into analysis domains using zeros of its slope and curvature. The bases selected in each analysis domain are chosen to minimize the prediction error, and constrained to have the same-sign local slope and curvature as the original signal. The resulting set of bases is used to analyze future-state vibration signals and the lifting prediction error is inspected. The constraints allow the transform to effectively adapt to global amplitude changes, yielding small prediction errors. However, local wave-form changes associated with certain types of gear damage are poorly adapted, causing a significant change in the prediction error. The constrained adaptive lifting diagnostic algorithm is validated using data collected from the University of Maryland Transmission Test Rig and the results are discussed.
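
    For orientation, the sketch below implements one classic linear lifting step (split, predict, update), the building block referred to above; it is not the constrained adaptive variant developed in the report:

      # Minimal sketch of one lifting step (split / predict / update), the building
      # block of the lifting-scheme wavelet transform. This is the standard
      # linear-prediction lifting step, not the constrained adaptive algorithm.
      import numpy as np

      def lifting_step(signal):
          even, odd = signal[0::2].astype(float), signal[1::2].astype(float)
          # Predict: estimate odd samples from neighboring even samples.
          prediction = 0.5 * (even + np.roll(even, -1))
          detail = odd - prediction                     # prediction error (detail coefficients)
          # Update: adjust even samples so the coarse signal preserves the running mean.
          approx = even + 0.25 * (detail + np.roll(detail, 1))
          return approx, detail

      x = np.sin(np.linspace(0, 4 * np.pi, 64)) + 0.05 * np.random.default_rng(1).normal(size=64)
      approx, detail = lifting_step(x)
      print("prediction-error energy:", float(np.sum(detail**2)))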

  12. Paediatric in-patient prescribing errors in Malaysia: a cross-sectional multicentre study.

    PubMed

    Khoo, Teik Beng; Tan, Jing Wen; Ng, Hoong Phak; Choo, Chong Ming; Bt Abdul Shukor, Intan Nor Chahaya; Teh, Siao Hean

    2017-06-01

    Background: There is a lack of large comprehensive studies in developing countries on paediatric in-patient prescribing errors in different settings. Objectives: To determine the characteristics of in-patient prescribing errors among paediatric patients. Setting: General paediatric wards, neonatal intensive care units and paediatric intensive care units in government hospitals in Malaysia. Methods: This is a cross-sectional multicentre study involving 17 participating hospitals. Drug charts were reviewed in each ward to identify the prescribing errors. All prescribing errors identified were further assessed for their potential clinical consequences, likely causes and contributing factors. Main outcome measures: Incidence, types, potential clinical consequences, causes and contributing factors of the prescribing errors. Results: The overall prescribing error rate was 9.2% out of 17,889 prescribed medications. There was no significant difference in the prescribing error rates between different types of hospitals or wards. The use of electronic prescribing had a higher prescribing error rate than manual prescribing (16.9 vs 8.2%, p < 0.05). Twenty-eight (1.7%) prescribing errors were deemed to have serious potential clinical consequences and 2 (0.1%) were judged to be potentially fatal. Most of the errors were attributed to human factors, i.e. performance or knowledge deficit. The most common contributing factors were lack of supervision or lack of knowledge. Conclusions: Although electronic prescribing may potentially improve safety, it may conversely cause prescribing errors due to suboptimal interfaces and cumbersome work processes. Junior doctors need specific training in paediatric prescribing and close supervision to reduce prescribing errors in paediatric in-patients.

  13. Enterprise Systems Analysis

    DTIC Science & Technology

    2016-03-14

    flows, or continuous state changes, with feedback loops and lags modeled in the flow system. Agent based simulations operate using a discrete event... DeLand, S. M., Rutherford, B. M., Diegert, K. V., & Alvin, K. F. (2002). Error and uncertainty in modeling and simulation. Reliability Engineering... intrinsic complexity of the underlying social systems fundamentally limits the ability to make

  14. Natural Language Techniques for Decision Support Based on Patient Complaints

    ERIC Educational Resources Information Center

    ElMessiry, Adel Magdi

    2016-01-01

    Complaining is a fundamental human characteristic that has prevailed throughout the ages. We normally complain about something that went wrong. Patient complaints are no exception; they focus on problems that occurred during the episode of care. The Institute of Medicine estimated that each year thousands of patients die due to medical errors. The…

  15. Two-voice fundamental frequency estimation

    NASA Astrophysics Data System (ADS)

    de Cheveigné, Alain

    2002-05-01

    An algorithm is presented that estimates the fundamental frequencies of two concurrent voices or instruments. The algorithm models each voice as a periodic function of time, and jointly estimates both periods by cancellation according to a previously proposed method [de Cheveigné and Kawahara, Speech Commun. 27, 175-185 (1999)]. The new algorithm improves on the old in several respects: it allows an unrestricted search range, effectively avoids harmonic and subharmonic errors, is more accurate (it uses two-dimensional parabolic interpolation), and is computationally less costly. It remains subject to unavoidable errors when periods are in certain simple ratios and the task is inherently ambiguous. The algorithm is evaluated on a small database including speech, singing voice, and instrumental sounds. It can be extended in several ways: to decide the number of voices, to handle amplitude variations, and to estimate more than two voices (at the expense of increased processing cost and decreased reliability). It makes no use of instrument models, learned or otherwise, although it could usefully be combined with such models. [Work supported by the Cognitique programme of the French Ministry of Research and Technology.]
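
    A naive Python sketch of joint two-period estimation by cancellation: two comb cancellation filters are cascaded and the residual power is minimized over pairs of integer lags. The parabolic interpolation, unrestricted search range, and the mechanisms that avoid harmonic and subharmonic errors in the published algorithm are omitted, and the search range here is chosen (hypothetically) to bracket the expected periods:

      # Naive joint two-period estimation by cascaded comb cancellation.
      # Exhaustive search over integer lags only; the search range is chosen to
      # bracket the expected periods and to exclude period multiples.
      import numpy as np

      def cancel(x, period):
          # Comb cancellation filter: y[n] = x[n] - x[n - period]
          return x[period:] - x[:-period]

      def estimate_two_periods(x, min_lag=40, max_lag=90):
          best, best_pair = np.inf, (None, None)
          for t1 in range(min_lag, max_lag):
              r1 = cancel(x, t1)
              for t2 in range(t1, max_lag):
                  power = np.mean(cancel(r1, t2) ** 2)   # residual after both filters
                  if power < best:
                      best, best_pair = power, (t1, t2)
          return best_pair

      fs = 8000
      n = np.arange(4000)
      mix = np.sin(2 * np.pi * 110 * n / fs) + 0.8 * np.sin(2 * np.pi * 147 * n / fs)
      t1, t2 = estimate_two_periods(mix)
      print("estimated F0s (Hz):", fs / t1, fs / t2)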

  16. Errors detected in pediatric oral liquid medication doses prepared in an automated workflow management system.

    PubMed

    Bledsoe, Sarah; Van Buskirk, Alex; Falconer, R James; Hollon, Andrew; Hoebing, Wendy; Jokic, Sladan

    2018-02-01

    This study evaluated the effectiveness of barcode-assisted medication preparation (BCMP) technology in detecting oral liquid dose preparation errors. From June 1, 2013, through May 31, 2014, a total of 178,344 oral doses were processed at Children's Mercy, a 301-bed pediatric hospital, through an automated workflow management system. Doses containing errors detected by the system's barcode scanning system or classified as rejected by the pharmacist were further reviewed. Errors intercepted by the barcode-scanning system were classified as (1) expired product, (2) incorrect drug, (3) incorrect concentration, and (4) technological error. Pharmacist-rejected doses were categorized into 6 categories based on the root cause of the preparation error: (1) expired product, (2) incorrect concentration, (3) incorrect drug, (4) incorrect volume, (5) preparation error, and (6) other. Of the 178,344 doses examined, 3,812 (2.1%) errors were detected by either the barcode-assisted scanning system (1.8%, n = 3,291) or a pharmacist (0.3%, n = 521). The 3,291 errors prevented by the barcode-assisted system were classified most commonly as technological error and incorrect drug, followed by incorrect concentration and expired product. Errors detected by pharmacists were also analyzed. These 521 errors were most often classified as incorrect volume, preparation error, expired product, other, incorrect drug, and incorrect concentration. BCMP technology detected errors in 1.8% of pediatric oral liquid medication doses prepared in an automated workflow management system, with errors being most commonly attributed to technological problems or incorrect drugs. Pharmacists rejected an additional 0.3% of studied doses. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  17. Computational Thermochemistry: Scale Factor Databases and Scale Factors for Vibrational Frequencies Obtained from Electronic Model Chemistries.

    PubMed

    Alecu, I M; Zheng, Jingjing; Zhao, Yan; Truhlar, Donald G

    2010-09-14

    Optimized scale factors for calculating vibrational harmonic and fundamental frequencies and zero-point energies have been determined for 145 electronic model chemistries, including 119 based on approximate functionals depending on occupied orbitals, 19 based on single-level wave function theory, three based on the neglect-of-diatomic-differential-overlap, two based on doubly hybrid density functional theory, and two based on multicoefficient correlation methods. Forty of the scale factors are obtained from large databases, which are also used to derive two universal scale factor ratios that can be used to interconvert between scale factors optimized for various properties, enabling the derivation of three key scale factors at the effort of optimizing only one of them. A reduced scale factor optimization model is formulated in order to further reduce the cost of optimizing scale factors, and the reduced model is illustrated by using it to obtain 105 additional scale factors. Using root-mean-square errors from the values in the large databases, we find that scaling reduces errors in zero-point energies by a factor of 2.3 and errors in fundamental vibrational frequencies by a factor of 3.0, but it reduces errors in harmonic vibrational frequencies by only a factor of 1.3. It is shown that, upon scaling, the balanced multicoefficient correlation method based on coupled cluster theory with single and double excitations (BMC-CCSD) can lead to very accurate predictions of vibrational frequencies. With a polarized, minimally augmented basis set, the density functionals with zero-point energy scale factors closest to unity are MPWLYP1M (1.009), τHCTHhyb (0.989), BB95 (1.012), BLYP (1.013), BP86 (1.014), B3LYP (0.986), MPW3LYP (0.986), and VSXC (0.986).
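
    Applying a scale factor is itself a one-line operation; the sketch below shows scaled fundamentals and a scaled zero-point energy for a set of hypothetical harmonic frequencies. The ZPE factor is the B3LYP value quoted in the abstract, while the fundamental-frequency factor is purely illustrative:

      # Minimal sketch of applying scale factors to computed harmonic frequencies.
      # Frequencies are hypothetical; zpe_scale is the B3LYP ZPE factor quoted in
      # the abstract, while fund_scale is an illustrative value, not from the paper.
      harmonic_cm1 = [3050.0, 1650.0, 1200.0, 750.0]   # hypothetical harmonic frequencies, cm^-1

      zpe_scale = 0.986      # B3LYP zero-point-energy scale factor (from the abstract)
      fund_scale = 0.960     # illustrative fundamental-frequency scale factor

      scaled_fundamentals = [fund_scale * w for w in harmonic_cm1]
      zpe_cm1 = 0.5 * zpe_scale * sum(harmonic_cm1)    # scaled ZPE = (1/2) * lambda * sum(omega_i)
      print(scaled_fundamentals)
      print(f"scaled zero-point energy: {zpe_cm1:.1f} cm^-1")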

  18. Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators

    NASA Astrophysics Data System (ADS)

    Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.

    2018-03-01

    We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ(2)-interaction based quantum computation in multimode Fock bases: the χ(2) parity-check code, the χ(2) embedded error-correcting code, and the χ(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m-1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N^2) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ(2) parity-check code and the χ(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.

  19. Neuromotor Noise Is Malleable by Amplifying Perceived Errors

    PubMed Central

    Zhang, Zhaoran; Abe, Masaki O.; Sternad, Dagmar

    2016-01-01

    Variability in motor performance results from the interplay of error correction and neuromotor noise. This study examined whether visual amplification of error, previously shown to improve performance, affects not only error correction, but also neuromotor noise, typically regarded as inaccessible to intervention. Seven groups of healthy individuals, with six participants in each group, practiced a virtual throwing task for three days until reaching a performance plateau. Over three more days of practice, six of the groups received different magnitudes of visual error amplification; three of these groups also had noise added. An additional control group was not subjected to any manipulations for all six practice days. The results showed that the control group did not improve further after the first three practice days, but the error amplification groups continued to decrease their error under the manipulations. Analysis of the temporal structure of participants’ corrective actions based on stochastic learning models revealed that these performance gains were attained by reducing neuromotor noise and, to a considerably lesser degree, by increasing the size of corrective actions. Based on these results, error amplification presents a promising intervention to improve motor function by decreasing neuromotor noise after performance has reached an asymptote. These results are relevant for patients with neurological disorders and the elderly. More fundamentally, these results suggest that neuromotor noise may be accessible to practice interventions. PMID:27490197

  20. Fundamental role of the fostriecin unsaturated lactone and implications for selective protein phosphatase inhibition.

    PubMed

    Buck, Suzanne B; Hardouin, Christophe; Ichikawa, Satoshi; Soenen, Danielle R; Gauss, C-M; Hwang, Inkyu; Swingle, Mark R; Bonness, Kathy M; Honkanen, Richard E; Boger, Dale L

    2003-12-24

    Key derivatives and analogues of fostriecin were prepared and examined that revealed a fundamental role for the unsaturated lactone and confirmed the essential nature of the phosphate monoester. Thus, an identical 200-fold reduction in protein phosphatase 2A (PP2A) inhibition is observed with either the saturated lactone (7) or with an analogue that lacks the entire lactone (15). This 200-fold increase in PP2A inhibition attributable to the unsaturated lactone potentially may be due to reversible C269 alkylation within the PP beta12-beta13 active site loop accounting for PP2A/4 potency and selectivity.

  1. 75 FR 17604 - Federal Motor Vehicle Safety Standards; Roof Crush Resistance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-07

    ... Safety Analysis & Forensic Engineering, LLC (SAFE) brought to our attention errors in the preamble that incorrectly attributed to it the comments of another organization, Safety Analysis, Inc. Both of these... Safety Analysis, Inc. SAFE noted that there is no affiliation between SAFE and Safety Analysis, Inc. and...

  2. Visual Attention and Quantifier-Spreading in Heritage Russian Bilinguals

    ERIC Educational Resources Information Center

    Sekerina, Irina A.; Sauermann, Antje

    2015-01-01

    It is well established in language acquisition research that monolingual children and adult second language learners misinterpret sentences with the universal quantifier "every" and make quantifier-spreading errors that are attributed to a preference for a match in number between two sets of objects. The present Visual World eye-tracking…

  3. Genetic and Environmental Contributions to Educational Attainment in Australia.

    ERIC Educational Resources Information Center

    Miller, Paul; Mulvey, Charles; Martin, Nick

    2001-01-01

    Data from a large sample of Australian twins indicate that 50 to 65 percent of variance in educational attainments can be attributed to genetic endowments. Only about 25 to 40 percent may be due to environmental factors, depending on adjustments for measurement error and assortative mating. (Contains 51 references.) (MLH)

  4. Descriptive Statistical Attributes of Special Education Data Sets

    ERIC Educational Resources Information Center

    Felder, Valerie

    2013-01-01

    Micceri (1989) examined the distributional characteristics of 440 large-sample achievement and psychometric measures. All the distributions were found to be nonnormal at alpha = 0.01. Micceri indicated three factors that might contribute to a non-Gaussian error distribution in the population. The first factor is subpopulations within a target…

  5. Kernel K-Means Sampling for Nyström Approximation.

    PubMed

    He, Li; Zhang, Hong

    2018-05-01

    A fundamental problem in Nyström-based kernel matrix approximation is the sampling method by which the training set is built. In this paper, we suggest using kernel k-means sampling, which is shown in our work to minimize the upper bound of the matrix approximation error. We first propose a unified kernel matrix approximation framework, which is able to describe most existing Nyström approximations under many popular kernels, including the Gaussian kernel and the polynomial kernel. We then show that the matrix approximation error upper bound, in terms of the Frobenius norm, is equal to the k-means error of data points in kernel space plus a constant. Thus, the k-means centers of data in kernel space, or the kernel k-means centers, are the optimal representative points with respect to the Frobenius-norm error upper bound. Experimental results, with both Gaussian kernel and polynomial kernel, on real-world data sets and image segmentation tasks show the superiority of the proposed method over state-of-the-art methods.
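
    A minimal Python sketch of Nyström approximation with k-means landmarks under a Gaussian kernel. Here the k-means centers are computed in input space and used directly as landmark points, a simplification of the kernel k-means sampling proposed in the paper; data and parameters are placeholders:

      # Minimal sketch of Nystrom kernel-matrix approximation with k-means landmarks.
      # Simplification: k-means is run in input space, not kernel space as in the paper.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.metrics.pairwise import rbf_kernel

      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 10))        # hypothetical data set
      gamma, m = 0.1, 50                    # kernel width and number of landmarks

      landmarks = KMeans(n_clusters=m, n_init=10, random_state=0).fit(X).cluster_centers_

      K_nm = rbf_kernel(X, landmarks, gamma=gamma)        # n x m cross-kernel
      K_mm = rbf_kernel(landmarks, landmarks, gamma=gamma)
      K_approx = K_nm @ np.linalg.pinv(K_mm) @ K_nm.T     # Nystrom approximation

      K_full = rbf_kernel(X, X, gamma=gamma)
      err = np.linalg.norm(K_full - K_approx, "fro") / np.linalg.norm(K_full, "fro")
      print(f"relative Frobenius-norm approximation error: {err:.4f}")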

  6. MERIT DEM: A new high-accuracy global digital elevation model and its merit to global hydrodynamic modeling

    NASA Astrophysics Data System (ADS)

    Yamazaki, D.; Ikeshima, D.; Neal, J. C.; O'Loughlin, F.; Sampson, C. C.; Kanae, S.; Bates, P. D.

    2017-12-01

    Digital Elevation Models (DEM) are fundamental data for flood modelling. While precise airborne DEMs are available in developed regions, most parts of the world rely on spaceborne DEMs which include non-negligible height errors. Here we show the most accurate global DEM to date at 90m resolution by eliminating major error components from the SRTM and AW3D DEMs. Using multiple satellite data and multiple filtering techniques, we addressed absolute bias, stripe noise, speckle noise and tree height bias from spaceborne DEMs. After the error removal, significant improvements were found in flat regions where height errors were larger than topography variability, and landscapes features such as river networks and hill-valley structures became clearly represented. We found the topography slope of the previous DEMs was largely distorted in most of world major floodplains (e.g. Ganges, Nile, Niger, Mekong) and swamp forests (e.g. Amazon, Congo, Vasyugan). The developed DEM will largely reduce the uncertainty in both global and regional flood modelling.

  7. Infrared line intensity measurements in the v = 0-1 band of the ClO radical

    NASA Technical Reports Server (NTRS)

    Burkholder, James B.; Howard, Carleton J.; Hammer, Philip D.; Goldman, Aaron

    1989-01-01

    Integrated line intensity measurements in the ClO-radical fundamental vibrational v = 0-1 band were carried out using a high-resolution Fourier transform spectrometer coupled to a long-path-length absorption cell. The results of a series of measurements designed to minimize systematic errors yielded a value for the fundamental IR band intensity of the ClO radical of 9.68 ± 1.45 cm^-2 atm^-1 at 296 K. This result is consistent with all the earlier published results, with the exception of measurements reported by Kostiuk et al. (1986) and Lang et al. (1988).

  8. The use of fundamental frequency for lexical segmentation in listeners with cochlear implants.

    PubMed

    Spitzer, Stephanie; Liss, Julie; Spahr, Tony; Dorman, Michael; Lansford, Kaitlin

    2009-06-01

    Fundamental frequency (F0) variation is one of a number of acoustic cues normal hearing listeners use for guiding lexical segmentation of degraded speech. This study examined whether F0 contour facilitates lexical segmentation by listeners fitted with cochlear implants (CIs). Lexical boundary error patterns elicited under unaltered and flattened F0 conditions were compared across three groups: listeners with conventional CI, listeners with CI and preserved low-frequency acoustic hearing, and normal hearing listeners subjected to CI simulations. Results indicate that all groups attended to syllabic stress cues to guide lexical segmentation, and that F0 contours facilitated performance for listeners with low-frequency hearing.

  9. Pilot age and error in air taxi crashes.

    PubMed

    Rebok, George W; Qiang, Yandong; Baker, Susan P; Li, Guohua

    2009-07-01

    The associations of pilot error with the type of flight operations and basic weather conditions are well documented. The correlation between pilot characteristics and error is less clear. This study aims to examine whether pilot age is associated with the prevalence and patterns of pilot error in air taxi crashes. Investigation reports from the National Transportation Safety Board for crashes involving non-scheduled Part 135 operations (i.e., air taxis) in the United States between 1983 and 2002 were reviewed to identify pilot error and other contributing factors. Crash circumstances and the presence and type of pilot error were analyzed in relation to pilot age using Chi-square tests. Of the 1751 air taxi crashes studied, 28% resulted from mechanical failure, 25% from loss of control at landing or takeoff, 7% from visual flight rule conditions into instrument meteorological conditions, 7% from fuel starvation, 5% from taxiing, and 28% from other causes. Crashes among older pilots were more likely to occur during the daytime rather than at night and off airport than on airport. The patterns of pilot error in air taxi crashes were similar across age groups. Of the errors identified, 27% were flawed decisions, 26% were inattentiveness, 23% mishandled aircraft kinetics, 15% mishandled wind and/or runway conditions, and 11% were others. Pilot age is associated with crash circumstances but not with the prevalence and patterns of pilot error in air taxi crashes. Lack of age-related differences in pilot error may be attributable to the "safe worker effect."

  10. Failure of endodontic treatment: The usual suspects.

    PubMed

    Tabassum, Sadia; Khan, Farhan Raza

    2016-01-01

    Inappropriate mechanical debridement, persistence of bacteria in the canals and apex, poor obturation quality, over and under extension of the root canal filling, and coronal leakage are some of the commonly attributable causes of failure. Despite the high success rate of endodontic treatment, failures do occur in a large number of cases and most of the times can be attributed to the already stated causes. With an ever increasing number of endodontic treatments being done each day, it has become imperative to avoid or minimize the most fundamental of reasons leading to endodontic failure. This paper reviews the most common causes of endodontic failure along with radiographic examples.

  11. Failure of endodontic treatment: The usual suspects

    PubMed Central

    Tabassum, Sadia; Khan, Farhan Raza

    2016-01-01

    Inappropriate mechanical debridement, persistence of bacteria in the canals and apex, poor obturation quality, over and under extension of the root canal filling, and coronal leakage are some of the commonly attributable causes of failure. Despite the high success rate of endodontic treatment, failures do occur in a large number of cases and most of the times can be attributed to the already stated causes. With an ever increasing number of endodontic treatments being done each day, it has become imperative to avoid or minimize the most fundamental of reasons leading to endodontic failure. This paper reviews the most common causes of endodontic failure along with radiographic examples. PMID:27011754

  12. An Improved Unsupervised Image Segmentation Evaluation Approach Based on Under- and Over-Segmentation Aware

    NASA Astrophysics Data System (ADS)

    Su, Tengfei

    2018-04-01

    In this paper, an unsupervised evaluation scheme for remote sensing image segmentation is developed. Building on a method called under- and over-segmentation aware (UOA), the new approach overcomes a defect in the component that estimates over-segmentation error. Two error-prone cases arising from this defect are listed, and edge strength is employed to devise a solution to the issue. Two subsets of high resolution remote sensing images were used to test the proposed algorithm, and the experimental results indicate its superior performance, which is attributed to its improved over-segmentation error detection model.

  13. Examples of Nonconservatism in the CARE 3 Program

    NASA Technical Reports Server (NTRS)

    Dotson, Kelly J.

    1988-01-01

    This paper presents parameter regions in the CARE 3 (Computer-Aided Reliability Estimation version 3) computer program where the program overestimates the reliability of a modeled system without warning the user. Five simple models of fault-tolerant computer systems are analyzed; and, the parameter regions where reliability is overestimated are given. The source of the error in the reliability estimates for models which incorporate transient fault occurrences was not readily apparent. However, the source of much of the error for models with permanent and intermittent faults can be attributed to the choice of values for the run-time parameters of the program.

  14. Fundamental Analysis of the Linear Multiple Regression Technique for Quantification of Water Quality Parameters from Remote Sensing Data. Ph.D. Thesis - Old Dominion Univ.

    NASA Technical Reports Server (NTRS)

    Whitlock, C. H., III

    1977-01-01

    Constituents whose radiance varies linearly with concentration may be quantified from signals that contain nonlinear atmospheric and surface reflection effects, for both homogeneous and non-homogeneous water bodies, provided accurate data can be obtained and the nonlinearities are constant with wavelength. Statistical parameters must be used which give an indication of bias as well as total squared error to ensure that an equation with an optimum combination of bands is selected. It is concluded that the effect of error in upwelled radiance measurements is to reduce the accuracy of the least squares fitting process and to increase the number of points required to obtain a satisfactory fit. The problem of obtaining a multiple regression equation that is extremely sensitive to error is discussed.
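
    A minimal Python sketch of the regression diagnostics discussed above: fit a multiple linear regression of constituent concentration on several band radiances and report both a bias indication and the total squared error. The data are synthetic placeholders, not remote sensing measurements:

      # Minimal sketch of multiple linear regression of concentration on band
      # radiances, reporting bias and total squared error. Data are synthetic.
      import numpy as np

      rng = np.random.default_rng(2)
      n_samples, n_bands = 40, 4
      radiance = rng.normal(size=(n_samples, n_bands))          # band radiances
      true_coeffs = np.array([2.0, -1.0, 0.5, 0.0])
      concentration = radiance @ true_coeffs + rng.normal(scale=0.2, size=n_samples)

      # Ordinary least squares with an intercept term.
      A = np.column_stack([np.ones(n_samples), radiance])
      coeffs, *_ = np.linalg.lstsq(A, concentration, rcond=None)
      predicted = A @ coeffs

      bias = np.mean(predicted - concentration)                 # indication of bias
      total_sq_error = np.sum((predicted - concentration) ** 2) # total squared error
      print(f"bias: {bias:.4f}, total squared error: {total_sq_error:.4f}")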

  15. The dynamics of error processing in the human brain as reflected by high-gamma activity in noninvasive and intracranial EEG.

    PubMed

    Völker, Martin; Fiederer, Lukas D J; Berberich, Sofie; Hammer, Jiří; Behncke, Joos; Kršek, Pavel; Tomášek, Martin; Marusič, Petr; Reinacher, Peter C; Coenen, Volker A; Helias, Moritz; Schulze-Bonhage, Andreas; Burgard, Wolfram; Ball, Tonio

    2018-06-01

    Error detection in motor behavior is a fundamental cognitive function heavily relying on local cortical information processing. Neural activity in the high-gamma frequency band (HGB) closely reflects such local cortical processing, but little is known about its role in error processing, particularly in the healthy human brain. Here we characterize the error-related response of the human brain based on data obtained with noninvasive EEG optimized for HGB mapping in 31 healthy subjects (15 females, 16 males), and additional intracranial EEG data from 9 epilepsy patients (4 females, 5 males). Our findings reveal a multiscale picture of the global and local dynamics of error-related HGB activity in the human brain. On the global level as reflected in the noninvasive EEG, the error-related response started with an early component dominated by anterior brain regions, followed by a shift to parietal regions, and a subsequent phase characterized by sustained parietal HGB activity. This phase lasted for more than 1 s after the error onset. On the local level reflected in the intracranial EEG, a cascade of both transient and sustained error-related responses involved an even more extended network, spanning beyond frontal and parietal regions to the insula and the hippocampus. HGB mapping appeared especially well suited to investigate late, sustained components of the error response, possibly linked to downstream functional stages such as error-related learning and behavioral adaptation. Our findings establish the basic spatio-temporal properties of HGB activity as a neural correlate of error processing, complementing traditional error-related potential studies. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  16. Star centroiding error compensation for intensified star sensors.

    PubMed

    Jiang, Jie; Xiong, Kun; Yu, Wenbo; Yan, Jinyun; Zhang, Guangjun

    2016-12-26

    A star sensor provides high-precision attitude information by capturing a stellar image; however, the traditional star sensor has poor dynamic performance, which is attributed to its low sensitivity. In the intensified star sensor, an image intensifier is utilized to improve the sensitivity, thereby further improving the dynamic performance of the star sensor. However, the introduction of the image intensifier decreases star centroiding accuracy, which in turn degrades the attitude measurement precision of the star sensor. A star centroiding error compensation method for intensified star sensors is proposed in this paper to reduce these influences. First, the imaging model of the intensified detector, which includes the deformation parameter of the optical fiber panel, is established based on orthographic projection through an analysis of the errors introduced by the image intensifier. Thereafter, the position errors at the target points based on the model are obtained by using the Levenberg-Marquardt (LM) optimization method. Last, the nearest trigonometric interpolation method is presented to compensate for the arbitrary centroiding error of the image plane. Laboratory calibration results and a night sky experiment show that the compensation method effectively eliminates the error introduced by the image intensifier, thus remarkably improving the precision of intensified star sensors.
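
    A minimal Python sketch of fitting image-plane position errors by Levenberg-Marquardt optimization, in the spirit of the calibration step described above. The three-parameter radial-plus-offset distortion model and the data are illustrative assumptions, not the fiber-optic-panel deformation model of the paper:

      # Minimal sketch of fitting an image-plane distortion model by Levenberg-
      # Marquardt optimization. The distortion model and data are illustrative.
      import numpy as np
      from scipy.optimize import least_squares

      rng = np.random.default_rng(3)
      ideal = rng.uniform(-1.0, 1.0, size=(200, 2))            # ideal target positions

      def distort(xy, p):
          x, y = xy[:, 0], xy[:, 1]
          r2 = x**2 + y**2
          # Simple radial + offset distortion: hypothetical 3-parameter model.
          return np.column_stack([x * (1 + p[0] * r2) + p[1],
                                  y * (1 + p[0] * r2) + p[2]])

      true_p = np.array([0.05, 0.01, -0.02])
      observed = distort(ideal, true_p) + rng.normal(scale=1e-3, size=ideal.shape)

      def residuals(p):
          return (distort(ideal, p) - observed).ravel()

      fit = least_squares(residuals, x0=np.zeros(3), method="lm")
      print("recovered distortion parameters:", np.round(fit.x, 4))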

  17. The Error Structure of the SMAP Single and Dual Channel Soil Moisture Retrievals

    NASA Astrophysics Data System (ADS)

    Dong, Jianzhi; Crow, Wade T.; Bindlish, Rajat

    2018-01-01

    Knowledge of the temporal error structure for remotely sensed surface soil moisture retrievals can improve our ability to exploit them for hydrologic and climate studies. This study employs a triple collocation analysis to investigate both the total variance and temporal autocorrelation of errors in Soil Moisture Active and Passive (SMAP) products generated from two separate soil moisture retrieval algorithms, the vertically polarized brightness temperature-based single-channel algorithm (SCA-V, the current baseline SMAP algorithm) and the dual-channel algorithm (DCA). A key assumption made in SCA-V is that real-time vegetation opacity can be accurately captured using only a climatology for vegetation opacity. Results demonstrate that while SCA-V generally outperforms DCA, SCA-V can produce larger total errors when this assumption is significantly violated by interannual variability in vegetation health and biomass. Furthermore, larger autocorrelated errors in SCA-V retrievals are found in areas with relatively large vegetation opacity deviations from climatological expectations. This implies that a significant portion of the autocorrelated error in SCA-V is attributable to the violation of its vegetation opacity climatology assumption and suggests that utilizing a real (as opposed to climatological) vegetation opacity time series in the SCA-V algorithm would reduce the magnitude of autocorrelated soil moisture retrieval errors.
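
    A minimal Python sketch of the triple collocation estimate of error variance that underlies analyses like this one: given three collocated products with mutually independent errors and a common scaling, the error variance of the first product follows from sample covariances. The data are synthetic:

      # Minimal sketch of the triple collocation error-variance estimate, assuming
      # three products with mutually uncorrelated errors and a shared scaling.
      # Data are synthetic, not SMAP retrievals.
      import numpy as np

      rng = np.random.default_rng(4)
      truth = 0.25 + 0.05 * rng.standard_normal(1000)          # synthetic soil moisture signal
      x = truth + 0.02 * rng.standard_normal(1000)             # e.g., retrieval product 1
      y = truth + 0.03 * rng.standard_normal(1000)             # e.g., retrieval product 2
      z = truth + 0.04 * rng.standard_normal(1000)             # e.g., model product

      def tc_error_variance(a, b, c):
          # err_var(a) = var(a) - cov(a,b) * cov(a,c) / cov(b,c)
          C = np.cov(np.vstack([a, b, c]))
          return C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]

      print("estimated error std of x:", np.sqrt(tc_error_variance(x, y, z)))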

  18. Unraveling the Mystery of the Origin of Mathematical Problems: Using a Problem-Posing Framework with Prospective Mathematics Teachers

    ERIC Educational Resources Information Center

    Contreras, Jose

    2007-01-01

    In this article, I model how a problem-posing framework can be used to enhance our abilities to systematically generate mathematical problems by modifying the attributes of a given problem. The problem-posing model calls for the application of the following fundamental mathematical processes: proving, reversing, specializing, generalizing, and…

  19. On Processing Chinese Ideographs and English Words: Some Implications from Stroop-Test Results.

    ERIC Educational Resources Information Center

    Biederman, Irving; Tsao, Yao-Chung

    1979-01-01

    When Chinese adults tried to name the color of characters which represented conflicting color words, they showed greater interference than did English speaking readers of the same task in English. This effect cannot be attributed to bilingualism. There may be fundamental differences in the perceptual demands of reading Chinese and English.…

  20. Human factors evaluation of remote afterloading brachytherapy: Human error and critical tasks in remote afterloading brachytherapy and approaches for improved system performance. Volume 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Callan, J.R.; Kelly, R.T.; Quinn, M.L.

    1995-05-01

    Remote Afterloading Brachytherapy (RAB) is a medical process used in the treatment of cancer. RAB uses a computer-controlled device to remotely insert and remove radioactive sources close to a target (or tumor) in the body. Some RAB problems affecting the radiation dose to the patient have been reported and attributed to human error. To determine the root cause of human error in the RAB system, a human factors team visited 23 RAB treatment sites in the US. The team observed RAB treatment planning and delivery, interviewed RAB personnel, and performed walk-throughs, during which staff demonstrated the procedures and practices used in performing RAB tasks. Factors leading to human error in the RAB system were identified. The impact of those factors on the performance of RAB was then evaluated and prioritized in terms of safety significance. Finally, the project identified and evaluated alternative approaches for resolving the safety significant problems related to human error.

  1. Adult age differences in unconscious transference: source confusion or identity blending?

    PubMed

    Perfect, Timothy J; Harris, Lucy J

    2003-06-01

    Eyewitnesses are known often to falsely identify a familiar but innocent bystander when asked to pick out a perpetrator from a lineup. Such unconscious transference errors have been attributed to either identity confusions at encoding or source retrieval errors. Three experiments contrasted younger and older adults in their susceptibility to such misidentifications. Participants saw photographs of perpetrators, then a series of mug shots of innocent bystanders. A week later, they saw lineups containing bystanders (and others containing perpetrators in Experiment 3) and were asked whether any of the perpetrators were present. When younger faces were used as stimuli (Experiments 1 and 3), older adults showed higher rates of transference errors. When older faces were used as stimuli (Experiments 2 and 3), no such age effects in rates of unconscious transference were apparent. In addition, older adults in Experiment 3 showed an own-age bias effect for correct identification of targets. Unconscious transference errors were found to be due to both source retrieval errors and identity confusions, but age-related increases were found only in the latter.

  2. Observation of sum-frequency-generation-induced cascaded four-wave mixing using two crossing femtosecond laser pulses in a 0.1 mm beta-barium-borate crystal.

    PubMed

    Liu, Weimin; Zhu, Liangdong; Fang, Chong

    2012-09-15

    We demonstrate the simultaneous generation of multicolor femtosecond laser pulses spanning the wavelength range from UV to near IR in a 0.1 mm Type I beta-barium borate crystal from 800 nm fundamental and weak IR super-continuum white light (SCWL) pulses. The multicolor broadband laser pulses observed are attributed to two concomitant cascaded four-wave mixing (CFWM) processes as corroborated by calculation: (1) directly from the two incident laser pulses; (2) by the sum-frequency generation (SFG) induced CFWM process (SFGFWM). The latter signal arises from the interaction between the frequency-doubled fundamental pulse (400 nm) and the SFG pulse generated in between the fundamental and IR-SCWL pulses. The versatility and simplicity of this spatially dispersed multicolor self-compressed laser pulse generation offer compact and attractive methods to conduct femtosecond stimulated Raman spectroscopy and time-resolved multicolor spectroscopy.

  3. Binary Hypothesis Testing With Byzantine Sensors: Fundamental Tradeoff Between Security and Efficiency

    NASA Astrophysics Data System (ADS)

    Ren, Xiaoqiang; Yan, Jiaqi; Mo, Yilin

    2018-03-01

    This paper studies binary hypothesis testing based on measurements from a set of sensors, a subset of which can be compromised by an attacker. The measurements from a compromised sensor can be manipulated arbitrarily by the adversary. The asymptotic exponential rate with which the probability of error goes to zero is adopted to indicate the detection performance of a detector. In practice, we expect the attack on sensors to be sporadic, and therefore the system may operate with all the sensors being benign for extended periods of time. This motivates us to consider the trade-off between the detection performance of a detector, i.e., the probability of error, when the attacker is absent (defined as efficiency) and the worst-case detection performance when the attacker is present (defined as security). We first provide the fundamental limits of this trade-off, and then propose a detection strategy that achieves these limits. We then consider a special case, where there is no trade-off between security and efficiency; in other words, our detection strategy can achieve the maximal efficiency and the maximal security simultaneously. Two extensions of the secure hypothesis testing problem are also studied, and fundamental limits and achievability results are provided: 1) a subset of sensors, namely "secure" sensors, are assumed to be equipped with better security countermeasures and hence are guaranteed to be benign; 2) detection performance with an unknown number of compromised sensors. Numerical examples are given to illustrate the main results.
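
    As a small illustration of the efficiency metric used here, the Monte Carlo sketch below estimates how the error probability of a simple fused test over benign Gaussian sensors decays as the number of sensors grows; it illustrates the error-exponent concept only and does not implement the paper's secure detection strategy:

      # Monte Carlo illustration of the error exponent of a binary hypothesis test
      # fusing n benign Gaussian sensor measurements (equal priors). This does not
      # model the adversarial sensors considered in the paper.
      import numpy as np

      rng = np.random.default_rng(5)
      mu0, mu1, sigma = 0.0, 1.0, 1.0
      trials = 200_000

      for n in (2, 4, 8, 16):
          h = rng.integers(0, 2, size=trials)                     # true hypothesis per trial
          means = np.where(h[:, None] == 1, mu1, mu0)
          obs = means + sigma * rng.standard_normal((trials, n))
          # With equal priors and equal variances, the optimal test compares the
          # sample mean to the midpoint between the two hypothesis means.
          decision = (obs.mean(axis=1) > 0.5 * (mu0 + mu1)).astype(int)
          p_err = np.mean(decision != h)
          print(f"n={n:2d}  P(error)={p_err:.5f}  -log(P)/n={-np.log(p_err)/n:.3f}")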

  4. Modelling psychiatric and cultural possession phenomena with suggestion and fMRI.

    PubMed

    Deeley, Quinton; Oakley, David A; Walsh, Eamonn; Bell, Vaughan; Mehta, Mitul A; Halligan, Peter W

    2014-04-01

    Involuntary movements occur in a variety of neuropsychiatric disorders and culturally influenced dissociative states (e.g., delusions of alien control and attributions of spirit possession). However, the underlying brain processes are poorly understood. We combined suggestion and fMRI in 15 highly hypnotically susceptible volunteers to investigate changes in brain activity accompanying different experiences of loss of self-control of movement. Suggestions of external personal control and internal personal control over involuntary movements modelled delusions of control and spirit possession respectively. A suggestion of impersonal control by a malfunctioning machine modelled technical delusions of control, where involuntary movements are attributed to the influence of machines. We found that (i) brain activity and/or connectivity significantly varied with different experiences and attributions of loss of agency; (ii) compared to the impersonal control condition, both external and internal personal alien control were associated with increased connectivity between primary motor cortex (M1) and brain regions involved in attribution of mental states and representing the self in relation to others; (iii) compared to both personal alien control conditions, impersonal control of movement was associated with increased activity in brain regions involved in error detection and object imagery; (iv) there were no significant differences in brain activity, and minor differences in M1 connectivity, between the external and internal personal alien control conditions. Brain networks supporting error detection and object imagery, together with representation of self and others, are differentially recruited to support experiences of impersonal and personal control of involuntary movements. However, similar brain systems underpin attributions and experiences of external and internal alien control of movement. Loss of self-agency for movement can therefore accompany different kinds of experience of alien control supported by distinct brain mechanisms. These findings caution against generalization about single cognitive processes or brain systems underpinning different experiences of loss of self-control of movement. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Photometric Characterization of the Dark Energy Camera

    DOE PAGES

    Bernstein, G. M.; Abbott, T. M. C.; Armstrong, R.; ...

    2018-04-02

    We characterize the variation in photometric response of the Dark Energy Camera (DECam) across its 520 Mpix science array during 4 years of operation. These variations are measured using high signal-to-noise aperture photometry of >10^7 stellar images in thousands of exposures of a few selected fields, with the telescope dithered to move the sources around the array. A calibration procedure based on these results brings the rms variation in aperture magnitudes of bright stars on cloudless nights down to 2–3 mmag, with <1 mmag of correlated photometric errors for stars separated by ≥20''. On cloudless nights, any departures of the exposure zeropoints from a secant airmass law exceeding 1 mmag are plausibly attributable to spatial/temporal variations in aperture corrections. These variations can be inferred and corrected by measuring the fraction of stellar light in an annulus between 6'' and 8'' diameter. Key elements of this calibration include: correction of amplifier nonlinearities; distinguishing pixel-area variations and stray light from quantum-efficiency variations in the flat fields; field-dependent color corrections; and the use of an aperture-correction proxy. The DECam response pattern across the 2° field drifts over months by up to ±9 mmag, in a nearly wavelength-independent low-order pattern. Here, we find no fundamental barriers to pushing global photometric calibrations toward mmag accuracy.
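
    The "secant airmass law" referred to in this and the following records is the standard linear extinction model, zeropoint(X) = z0 - k*X with X = sec(zenith angle). The sketch below fits that law to synthetic zeropoints and reports the residual scatter in mmag; the coefficients and data are invented, and the code illustrates only the law, not the DECam pipeline.

        # Hedged illustration of a secant airmass law fit; all numbers are synthetic.
        import numpy as np

        rng = np.random.default_rng(4)
        airmass = np.linspace(1.0, 2.0, 25)              # X = sec(zenith angle)
        z0_true, k_true = 25.30, 0.08                    # assumed zeropoint (mag), extinction (mag/airmass)
        zeropoints = z0_true - k_true * airmass + rng.normal(0, 0.002, airmass.size)

        # Linear least-squares fit of z(X) = z0 - k * X.
        A = np.column_stack([np.ones_like(airmass), -airmass])
        (z0_fit, k_fit), *_ = np.linalg.lstsq(A, zeropoints, rcond=None)
        residuals_mmag = 1e3 * (zeropoints - (z0_fit - k_fit * airmass))

        print(f"fitted z0 = {z0_fit:.3f} mag, k = {k_fit:.3f} mag/airmass")
        print(f"rms residual = {residuals_mmag.std():.2f} mmag")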

  6. Astrometric Calibration and Performance of the Dark Energy Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernstein, G. M.; Armstrong, R.; Plazas, A. A.

    2017-05-30

    We characterize the variation in photometric response of the Dark Energy Camera (DECam) across its 520 Mpix science array during 4 years of operation. These variations are measured using high signal-to-noise aperture photometry of >10^7 stellar images in thousands of exposures of a few selected fields, with the telescope dithered to move the sources around the array. A calibration procedure based on these results brings the RMS variation in aperture magnitudes of bright stars on cloudless nights down to 2–3 mmag, with <1 mmag of correlated photometric errors for stars separated by ≥20". On cloudless nights, any departures of the exposure zeropoints from a secant airmass law exceeding 1 mmag are plausibly attributable to spatial/temporal variations in aperture corrections. These variations can be inferred and corrected by measuring the fraction of stellar light in an annulus between 6" and 8" diameter. Key elements of this calibration include: correction of amplifier nonlinearities; distinguishing pixel-area variations and stray light from quantum-efficiency variations in the flat fields; field-dependent color corrections; and the use of an aperture-correction proxy. The DECam response pattern across the 2-degree field drifts over months by up to ±7 mmag, in a nearly wavelength-independent low-order pattern. We find no fundamental barriers to pushing global photometric calibrations toward mmag accuracy.

  7. Photometric Characterization of the Dark Energy Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernstein, G. M.; Abbott, T. M. C.; Armstrong, R.

    We characterize the variation in photometric response of the Dark Energy Camera (DECam) across its 520 Mpix science array during 4 years of operation. These variations are measured using high signal-to-noise aperture photometry of >10^7 stellar images in thousands of exposures of a few selected fields, with the telescope dithered to move the sources around the array. A calibration procedure based on these results brings the rms variation in aperture magnitudes of bright stars on cloudless nights down to 2–3 mmag, with <1 mmag of correlated photometric errors for stars separated by ≥20''. On cloudless nights, any departures of the exposure zeropoints from a secant airmass law exceeding 1 mmag are plausibly attributable to spatial/temporal variations in aperture corrections. These variations can be inferred and corrected by measuring the fraction of stellar light in an annulus between 6'' and 8'' diameter. Key elements of this calibration include: correction of amplifier nonlinearities; distinguishing pixel-area variations and stray light from quantum-efficiency variations in the flat fields; field-dependent color corrections; and the use of an aperture-correction proxy. The DECam response pattern across the 2° field drifts over months by up to ±9 mmag, in a nearly wavelength-independent low-order pattern. Here, we find no fundamental barriers to pushing global photometric calibrations toward mmag accuracy.

  8. Photometric Characterization of the Dark Energy Camera

    NASA Astrophysics Data System (ADS)

    Bernstein, G. M.; Abbott, T. M. C.; Armstrong, R.; Burke, D. L.; Diehl, H. T.; Gruendl, R. A.; Johnson, M. D.; Li, T. S.; Rykoff, E. S.; Walker, A. R.; Wester, W.; Yanny, B.

    2018-05-01

    We characterize the variation in photometric response of the Dark Energy Camera (DECam) across its 520 Mpix science array during 4 years of operation. These variations are measured using high signal-to-noise aperture photometry of >10^7 stellar images in thousands of exposures of a few selected fields, with the telescope dithered to move the sources around the array. A calibration procedure based on these results brings the rms variation in aperture magnitudes of bright stars on cloudless nights down to 2–3 mmag, with <1 mmag of correlated photometric errors for stars separated by ≥20″. On cloudless nights, any departures of the exposure zeropoints from a secant airmass law exceeding 1 mmag are plausibly attributable to spatial/temporal variations in aperture corrections. These variations can be inferred and corrected by measuring the fraction of stellar light in an annulus between 6″ and 8″ diameter. Key elements of this calibration include: correction of amplifier nonlinearities; distinguishing pixel-area variations and stray light from quantum-efficiency variations in the flat fields; field-dependent color corrections; and the use of an aperture-correction proxy. The DECam response pattern across the 2° field drifts over months by up to ±9 mmag, in a nearly wavelength-independent low-order pattern. We find no fundamental barriers to pushing global photometric calibrations toward mmag accuracy.

  9. Source Attribution of Cyanides Using Anionic Impurity Profiling, Stable Isotope Ratios, Trace Elemental Analysis and Chemometrics.

    PubMed

    Mirjankar, Nikhil S; Fraga, Carlos G; Carman, April J; Moran, James J

    2016-02-02

    Chemical attribution signatures (CAS) for chemical threat agents (CTAs), such as cyanides, are being investigated to provide an evidentiary link between CTAs and specific sources to support criminal investigations and prosecutions. Herein, stocks of KCN and NaCN were analyzed for trace anions by high performance ion chromatography (HPIC), carbon stable isotope ratio (δ(13)C) by isotope ratio mass spectrometry (IRMS), and trace elements by inductively coupled plasma optical emission spectroscopy (ICP-OES). The collected analytical data were evaluated using hierarchical cluster analysis (HCA), Fisher-ratio (F-ratio), interval partial least-squares (iPLS), genetic algorithm-based partial least-squares (GAPLS), partial least-squares discriminant analysis (PLSDA), K nearest neighbors (KNN), and support vector machines discriminant analysis (SVMDA). HCA of anion impurity profiles from multiple cyanide stocks from six reported countries of origin resulted in cyanide samples clustering into three groups, independent of the associated alkali metal (K or Na). The three groups were independently corroborated by HCA of cyanide elemental profiles and corresponded to countries each having one known solid cyanide factory: Czech Republic, Germany, and United States. Carbon stable isotope measurements resulted in two clusters: Germany and United States (the single Czech stock grouped with United States stocks). Classification errors for two validation studies using anion impurity profiles collected over five years on different instruments were as low as zero for KNN and SVMDA, demonstrating the excellent reliability associated with using anion impurities for matching a cyanide sample to its factory using our current cyanide stocks. Variable selection methods reduced errors for those classification methods having errors greater than zero; iPLS-forward selection and F-ratio typically provided the lowest errors. Finally, using anion profiles to classify cyanides to a specific stock or stock group for a subset of United States stocks resulted in cross-validation errors ranging from 0 to 5.3%.
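
    As a hedged illustration of the hierarchical cluster analysis (HCA) step described above, the sketch below clusters synthetic anion-impurity profiles with Ward linkage. The impurity set, factory means, and noise level are all invented for the example and do not reflect the study's data or its full chemometric workflow.

        # Illustrative sketch only: HCA of synthetic anion-impurity profiles.
        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster

        rng = np.random.default_rng(1)

        # Hypothetical data: 12 cyanide stocks x 5 anion impurities, drawn around
        # three assumed "factory" mean profiles.
        factory_means = np.array([[5, 1, 0.2, 3, 0.1],
                                  [1, 4, 1.5, 0.5, 0.3],
                                  [2, 2, 0.1, 1, 2.0]])
        labels_true = np.repeat([0, 1, 2], 4)
        profiles = factory_means[labels_true] + rng.normal(0, 0.2, (12, 5))

        # Autoscale each impurity (zero mean, unit variance), a common chemometric choice.
        z = (profiles - profiles.mean(0)) / profiles.std(0)

        # Ward linkage on the scaled profiles, cut into three clusters.
        tree = linkage(z, method="ward")
        labels_pred = fcluster(tree, t=3, criterion="maxclust")

        print("true factories:", labels_true)
        print("HCA clusters:  ", labels_pred)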

  10. Avoidance of the Real and Anxiety about the Unreal: Attachment Style and Video-Gaming

    ERIC Educational Resources Information Center

    Coulson, Mark; Oskis, Andrea; Gould, Rebecca L

    2017-01-01

    In this article, the authors discuss the light and dark side of attachments and attachment style in physical and digital worlds. They argue that many games offer opportunities for the generation of new and meaningful attachments to both physical and digital others. They discuss two "fundamental attachment errors" and show how these can…

  11. Corrigendum to "Fundamental neutron physics beamline at the spallation neutron source at ORNL" [Nucl. Instrum. Methods Phys. Res. A 773 (2015) 45-51

    NASA Astrophysics Data System (ADS)

    Fomin, N.; Greene, G. L.; Allen, R. R.; Cianciolo, V.; Crawford, C.; Ito, T. M.; Huffman, P. R.; Iverson, E. B.; Mahurin, R.; Snow, W. M.

    2015-07-01

    The authors regret that there was an error in the author list of the original publication. The name of author Dr. Ito was misspelled. The correct author list is as above. The authors would like to apologise for any inconvenience caused.

  12. The Rhetoric of Masculinity: Origins, Institutions, and the Myth of the Self-Made Man.

    ERIC Educational Resources Information Center

    Catano, James V.

    1990-01-01

    Argues that the myth of the self-made man commits a fundamental error by downplaying the importance of social definition and equating masculine growth with an escape from the boundaries of origins (race, class, sex) and institutions. Discusses the myth in terms of its history and writing pedagogies employed in some classrooms. (TB)

  13. Measurement of Knock Characteristics in Spark-ignition Engines

    NASA Technical Reports Server (NTRS)

    Schutz, R

    1940-01-01

    This paper presents a discussion of three potential sources of error in recording engine knocking which are: the natural oscillation of the membrane, the shock process between test contacts, and the danger of burned contacts. Following this discussion, the paper calls attention to various results which make the bouncing-pin indicator appear fundamentally unsuitable for recording knock phenomena.

  14. Identifying the needs of elderly, hearing-impaired persons: the importance and utility of hearing aid attributes.

    PubMed

    Meister, Hartmut; Lausberg, Isabel; Kiessling, Juergen; von Wedel, Hasso; Walger, Martin

    2002-11-01

    Older patients represent the majority of hearing-aid users. The needs of elderly, hearing-impaired subjects are not entirely identified. The present study aims to determine the importance of fundamental hearing-aid attributes and to elicit the utility of associated hypothetical hearing aids for older patients. This was achieved using a questionnaire-based conjoint analysis--a decompositional approach to preference measurement offering a realistic study design. A random sample of 200 experienced hearing-aid users participated in the study. Though three out of the six examined attributes revealed age-related dependencies, the only significant effect was found for the attribute "handling", which was considerably more important for older than younger hearing-aid users. A trend of decreasing importance of speech intelligibility in noise and increasing significance of speech in quiet was observed for subjects older than 70 years. In general, the utility of various hypothetical hearing aids was similar for older and younger subjects. Apart from the attribute "handling", older and younger subjects have comparable needs regarding hearing-aid features. On the basis of the examined attributes, there is no requirement for hearing aids designed specifically for elderly hearing-aid users, provided that ergonomic features are considered and the benefits of modern technology are made fully available for older patients.

  15. Selective impairment of living things and musical instruments on a verbal 'Semantic Knowledge Questionnaire' in a case of apperceptive visual agnosia.

    PubMed

    Masullo, Carlo; Piccininni, Chiara; Quaranta, Davide; Vita, Maria Gabriella; Gaudino, Simona; Gainotti, Guido

    2012-10-01

    Semantic memory was investigated in a patient (MR) affected by a severe apperceptive visual agnosia due to an ischemic cerebral lesion bilaterally affecting the infero-mesial parts of the temporo-occipital cortices. The study was conducted by means of a Semantic Knowledge Questionnaire (Laiacona, Barbarotto, Trivelli, & Capitani, 1993), which separately takes into account four categories of living beings (animals, fruits, vegetables and body parts) and four of artefacts (furniture, tools, vehicles and musical instruments), does not require visual analysis, and allows errors concerning super-ordinate categorization, perceptual features and functional/encyclopedic knowledge to be distinguished. When the total number of errors obtained on all the categories of living and non-living beings was considered, a non-significant trend toward a higher number of errors on living stimuli was observed. This difference, however, became significant when body parts and musical instruments were excluded from the analysis. Furthermore, the number of errors obtained on the musical instruments was similar to that obtained on the living categories of animals, fruits and vegetables, and significantly higher than that obtained in the other artefact categories. This difference was still significant when familiarity, frequency of use and prototypicality of each stimulus were entered into a logistic regression analysis. On the other hand, a separate analysis of errors obtained on questions exploring super-ordinate categorization, perceptual features and functional/encyclopedic attributes showed that the differences between living and non-living stimuli, and between musical instruments and other artefact categories, were mainly due to errors obtained on questions exploring perceptual features. All these data are at variance with the 'domains of knowledge' hypothesis, which assumes that the breakdown of different categories of living and non-living things respects the distinction between biological entities and artefacts, and support models assuming that 'category-specific semantic disorders' are the by-product of the differential weighting that visual-perceptual and functional (or action-related) attributes have in the construction of different biological and artefact categories. Copyright © 2012 Elsevier Inc. All rights reserved.

  16. Precipitation and Latent Heating Distributions from Satellite Passive Microwave Radiometry. Part 2; Evaluation of Estimates Using Independent Data

    NASA Technical Reports Server (NTRS)

    Yang, Song; Olson, William S.; Wang, Jian-Jian; Bell, Thomas L.; Smith, Eric A.; Kummerow, Christian D.

    2004-01-01

    Rainfall rate estimates from space-borne instruments are generally accepted as reliable by a majority of the atmospheric science community. One of the Tropical Rainfall Measuring Mission (TRMM) facility rain rate algorithms is based upon passive microwave observations from the TRMM Microwave Imager (TMI). Part I of this study describes improvements in the TMI algorithm that are required to introduce cloud latent heating and drying as additional algorithm products. Here, estimates of surface rain rate, convective proportion, and latent heating are evaluated using independent ground-based estimates and satellite products. Instantaneous, 0.5°-resolution estimates of surface rain rate over ocean from the improved TMI algorithm are well correlated with independent radar estimates (r approx. 0.88 over the Tropics), but bias reduction is the most significant improvement over forerunning algorithms. The bias reduction is attributed to the greater breadth of cloud-resolving model simulations that support the improved algorithm, and the more consistent and specific convective/stratiform rain separation method utilized. The bias of monthly, 2.5°-resolution estimates is similarly reduced, with comparable correlations to radar estimates. Although independent latent heating data are limited, TMI-estimated latent heating profiles compare favorably with instantaneous estimates based upon dual-Doppler radar observations, and time series of surface rain rate and heating profiles are generally consistent with those derived from rawinsonde analyses. Still, some biases in profile shape are evident, and these may be resolved with (a) additional contextual information brought to the estimation problem, and/or (b) physically consistent and representative databases supporting the algorithm. A model of the random error in instantaneous, 0.5°-resolution rain rate estimates appears to be consistent with the levels of error determined from TMI comparisons to collocated radar. Error model modifications for non-raining situations will be required, however. Sampling error appears to represent only a fraction of the total error in monthly, 2.5°-resolution TMI estimates; the remaining error is attributed to physical inconsistency or non-representativeness of the cloud-resolving model simulated profiles supporting the algorithm.

  17. Identifying and attributing common data quality problems: temperature and precipitation observations in Bolivia and Peru

    NASA Astrophysics Data System (ADS)

    Hunziker, Stefan; Gubler, Stefanie; Calle, Juan; Moreno, Isabel; Andrade, Marcos; Velarde, Fernando; Ticona, Laura; Carrasco, Gualberto; Castellón, Yaruska; Oria Rojas, Clara; Brönnimann, Stefan; Croci-Maspoli, Mischa; Konzelmann, Thomas; Rohrer, Mario

    2016-04-01

    Assessing climatological trends and extreme events requires high-quality data. However, for many regions of the world, observational data of the desired quality are not available. In order to eliminate errors in the data, quality control (QC) should be applied before data analysis. If the data still contain undetected errors and quality problems after QC, the results may be misleading or erroneous. A region which is seriously affected by observational data quality problems is the Central Andes. At the same time, climatological information on ongoing climate change and climate risks is of utmost importance in this area due to its vulnerability to meteorological extreme events and climatic changes. Besides data quality issues, the lack of metadata and the low station network density complicate quality control and assessment, and hence appropriate application of the data. Errors and data problems may occur at any point of the data-generation chain, e.g. due to unsuitable station configuration or siting, poor station maintenance, erroneous instrument reading, or inaccurate data digitization and post-processing. Different measurement conditions in the predominantly conventional station networks in Bolivia and Peru, compared to the mostly automated networks in, e.g., Europe or North America, may cause different types of errors. Hence, applying QC methods used on state-of-the-art networks to Bolivian and Peruvian climate observations may not be suitable or sufficient. A comprehensive set of Bolivian and Peruvian maximum and minimum temperature and precipitation in-situ measurements was analyzed to detect and describe common data quality problems. Furthermore, station visits and reviews of the original documents were carried out. Some of the errors could be attributed to a specific source. Such information is of great importance for data users, since it allows them to decide for which applications the data can still be used. In ideal cases, it may even allow the error to be corrected. Strategies on how to deal with data from the Central Andes will be suggested. However, the approach may be applicable to networks from other countries where the conditions of climate observations are comparable.

  18. Pulse oximetry: fundamentals and technology update.

    PubMed

    Nitzan, Meir; Romem, Ayal; Koppel, Robert

    2014-01-01

    Oxygen saturation in the arterial blood (SaO2) provides information on the adequacy of respiratory function. SaO2 can be assessed noninvasively by pulse oximetry, which is based on photoplethysmographic pulses in two wavelengths, generally in the red and infrared regions. The calibration of the measured photoplethysmographic signals is performed empirically for each type of commercial pulse-oximeter sensor, utilizing in vitro measurement of SaO2 in extracted arterial blood by means of co-oximetry. Due to the discrepancy between the measurement of SaO2 by pulse oximetry and the invasive technique, the former is denoted as SpO2. Manufacturers of pulse oximeters generally claim an accuracy of 2%, evaluated by the standard deviation (SD) of the differences between SpO2 and SaO2, measured simultaneously in healthy subjects. However, an SD of 2% reflects an expected error of 4% (two SDs) or more in 5% of the examinations, which is in accordance with an error of 3%-4%, reported in clinical studies. This level of accuracy is sufficient for the detection of a significant decline in respiratory function in patients, and pulse oximetry has been accepted as a reliable technique for that purpose. The accuracy of SpO2 measurement is insufficient in several situations, such as critically ill patients receiving supplemental oxygen, and can be hazardous if it leads to elevated values of oxygen partial pressure in blood. In particular, preterm newborns are vulnerable to retinopathy of prematurity induced by high oxygen concentration in the blood. The low accuracy of SpO2 measurement in critically ill patients and newborns can be attributed to the empirical calibration process, which is performed on healthy volunteers. Other limitations of pulse oximetry include the presence of dyshemoglobins, which has been addressed by multiwavelength pulse oximetry, as well as low perfusion and motion artifacts that are partially rectified by sophisticated algorithms and also by reflection pulse oximetry.
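
    The two-wavelength principle described above is usually summarized by the "ratio of ratios". The sketch below computes it from synthetic photoplethysmographic traces and converts it to SpO2 with a textbook linear approximation (SpO2 ~ 110 - 25R); real devices rely on the empirical, sensor-specific calibration curves discussed in the abstract, so every number here is purely illustrative.

        # Illustrative only: the "ratio of ratios" used in two-wavelength pulse oximetry,
        # with a hypothetical linear calibration standing in for an empirical curve.
        import numpy as np

        def ratio_of_ratios(ppg_red, ppg_ir):
            """R = (AC_red/DC_red) / (AC_ir/DC_ir) from two photoplethysmographic traces."""
            ac_red = ppg_red.max() - ppg_red.min()
            ac_ir = ppg_ir.max() - ppg_ir.min()
            dc_red, dc_ir = ppg_red.mean(), ppg_ir.mean()
            return (ac_red / dc_red) / (ac_ir / dc_ir)

        def spo2_estimate(r):
            """Assumed linear calibration; real devices use empirically fitted curves."""
            return 110.0 - 25.0 * r

        # Synthetic one-beat PPG pulses (arbitrary units).
        t = np.linspace(0, 1, 200)
        ppg_ir = 10.0 + 0.5 * np.sin(2 * np.pi * t)
        ppg_red = 8.0 + 0.3 * np.sin(2 * np.pi * t)

        r = ratio_of_ratios(ppg_red, ppg_ir)
        print(f"R = {r:.2f}, estimated SpO2 = {spo2_estimate(r):.1f}%")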

  19. Lexicality Effects in Word and Nonword Recall of Semantic Dementia and Progressive Nonfluent Aphasia

    PubMed Central

    Reilly, Jamie; Troche, Joshua; Chatel, Alison; Park, Hyejin; Kalinyak-Fliszar, Michelene; Antonucci, Sharon M.; Martin, Nadine

    2012-01-01

    Background Verbal working memory is an essential component of many language functions, including sentence comprehension and word learning. As such, working memory has emerged as a domain of intense research interest both in aphasiology and in the broader field of cognitive neuroscience. The integrity of verbal working memory encoding relies on a fluid interaction between semantic and phonological processes. That is, we encode verbal detail using many cues related to both the sound and meaning of words. Lesion models can provide an effective means of parsing the contributions of phonological or semantic impairment to recall performance. Methods and Procedures We employed the lesion model approach here by contrasting the nature of lexicality errors incurred during recall of word and nonword sequences by 3 individuals with progressive nonfluent aphasia (a phonological dominant impairment) compared to that of 2 individuals with semantic dementia (a semantic dominant impairment). We focused on psycholinguistic attributes of correctly recalled stimuli relative to those that elicited a lexicality error (i.e., nonword → word or word → nonword). Outcomes and results Patients with semantic dementia showed greater sensitivity to phonological attributes (e.g., phoneme length, wordlikeness) of the target items relative to semantic attributes (e.g., familiarity). Patients with PNFA showed the opposite pattern, marked by sensitivity to word frequency, age of acquisition, familiarity, and imageability. Conclusions We interpret these results in favor of a processing strategy such that in the context of a focal phonological impairment patients revert to an over-reliance on preserved semantic processing abilities. In contrast, a focal semantic impairment forces both reliance upon and hypersensitivity to phonological attributes of target words. We relate this interpretation to previous hypotheses about the nature of verbal short-term memory in progressive aphasia. PMID:23486736

  20. Visual impairment attributable to uncorrected refractive error and other causes in the Ghanaian youth: The University of Cape Coast Survey.

    PubMed

    Abokyi, Samuel; Ilechie, Alex; Nsiah, Peter; Darko-Takyi, Charles; Abu, Emmanuel Kwasi; Osei-Akoto, Yaw Jnr; Youfegan-Baanam, Mathurin

    2016-01-01

    To determine the prevalence of visual impairment attributable to refractive error and other causes in a youthful Ghanaian population. A prospective survey of all consecutive visits by first-year tertiary students to the Optometry clinic between August, 2013 and April, 2014. Of the 4378 first-year students aged 16-39 years enumerated, 3437 (78.5%) underwent the eye examination. The examination protocol included presenting visual acuity (PVA), ocular motility, and slit-lamp examination of the external eye, anterior segment and media, and non-dilated fundus examination. Pinhole acuity and fundus examination were performed when the PVA ≤ 6/12 in one or both eyes to determine the principal cause of the vision loss. The mean age of participants was 21.86 years (95% CI: 21.72-21.99). The prevalence of bilateral visual impairment (BVI; PVA in the better eye ≤6/12) and unilateral visual impairment (UVI; PVA in the worse eye ≤6/12) were 3.08% (95% CI: 2.56-3.72) and 0.79% (95% CI: 0.54-1.14), respectively. Among 106 participants with BVI, refractive error (96.2%) and corneal opacity (3.8%) were the causes. Of the 27 participants with UVI, refractive error (44.4%), maculopathy (18.5%) and retinal disease (14.8%) were the major causes. There was unequal distribution of BVI in the different age groups, with those above 20 years having a lesser burden. Eye screening and provision of affordable spectacle correction to the youth could be timely to eliminate visual impairment. Copyright © 2014 Spanish General Council of Optometry. Published by Elsevier Espana. All rights reserved.

  1. Visual impairment attributable to uncorrected refractive error and other causes in the Ghanaian youth: The University of Cape Coast Survey

    PubMed Central

    Abokyi, Samuel; Ilechie, Alex; Nsiah, Peter; Darko-Takyi, Charles; Abu, Emmanuel Kwasi; Osei-Akoto, Yaw Jnr; Youfegan-Baanam, Mathurin

    2015-01-01

    Purpose To determine the prevalence of visual impairment attributable to refractive error and other causes in a youthful Ghanaian population. Methods A prospective survey of all consecutive visits by first-year tertiary students to the Optometry clinic between August, 2013 and April, 2014. Of the 4378 first-year students aged 16–39 years enumerated, 3437 (78.5%) underwent the eye examination. The examination protocol included presenting visual acuity (PVA), ocular motility, and slit-lamp examination of the external eye, anterior segment and media, and non-dilated fundus examination. Pinhole acuity and fundus examination were performed when the PVA ≤ 6/12 in one or both eyes to determine the principal cause of the vision loss. Results The mean age of participants was 21.86 years (95% CI: 21.72–21.99). The prevalence of bilateral visual impairment (BVI; PVA in the better eye ≤6/12) and unilateral visual impairment (UVI; PVA in the worse eye ≤6/12) were 3.08% (95% CI: 2.56–3.72) and 0.79% (95% CI: 0.54–1.14), respectively. Among 106 participants with BVI, refractive error (96.2%) and corneal opacity (3.8%) were the causes. Of the 27 participants with UVI, refractive error (44.4%), maculopathy (18.5%) and retinal disease (14.8%) were the major causes. There was unequal distribution of BVI in the different age groups, with those above 20 years having a lesser burden. Conclusion Eye screening and provision of affordable spectacle correction to the youth could be timely to eliminate visual impairment. PMID:26025809

  2. Mitigating Photon Jitter in Optical PPM Communication

    NASA Technical Reports Server (NTRS)

    Moision, Bruce

    2008-01-01

    A theoretical analysis of photon-arrival jitter in an optical pulse-position-modulation (PPM) communication channel has been performed, and now constitutes the basis of a methodology for designing receivers to compensate so that errors attributable to photon-arrival jitter would be minimized or nearly minimized. Photon-arrival jitter is an uncertainty in the estimated time of arrival of a photon relative to the boundaries of a PPM time slot. Photon-arrival jitter is attributable to two main causes: (1) receiver synchronization error [error in the receiver operation of partitioning time into PPM slots] and (2) random delay between the time of arrival of a photon at a detector and the generation, by the detector circuitry, of a pulse in response to the photon. For channels with sufficiently long time slots, photon-arrival jitter is negligible. However, as durations of PPM time slots are reduced in efforts to increase throughputs of optical PPM communication channels, photon-arrival jitter becomes a significant source of error, leading to significant degradation of performance if not taken into account in design. For the purpose of the analysis, a receiver was assumed to operate in a photon-starved regime, in which photon counts follow a Poisson distribution. The analysis included derivation of exact equations for symbol likelihoods in the presence of photon-arrival jitter. These equations describe what is well known in the art as a matched filter for a channel containing Gaussian noise. These equations would yield an optimum receiver if they could be implemented in practice. Because the exact equations may be too complex to implement in practice, approximations that would yield suboptimal receivers were also derived.
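
    The sketch below is a hedged, simplified illustration of why slot-level jitter matters for PPM: signal photons are drawn from a Poisson distribution, smeared by Gaussian arrival jitter, and decoded with a naive maximum-count rule. It is not the receiver design analyzed above, and every parameter (PPM order, photon counts, jitter width) is an assumption made for the example.

        # Hypothetical sketch: effect of photon-arrival jitter on hard-decision PPM
        # detection with Poisson signal/background counts; not the paper's receiver.
        import numpy as np

        rng = np.random.default_rng(2)
        M, slot = 16, 1.0                 # PPM order and slot duration (arbitrary units)
        ns, nb = 5.0, 0.2                 # mean signal photons/pulse, background photons/slot
        jitter_sigma = 0.4 * slot         # std of photon-arrival jitter (assumed)
        symbols = 20000

        def transmit(sym):
            """Return per-slot photon counts for one PPM symbol with arrival jitter."""
            counts = rng.poisson(nb, M)                        # background in every slot
            k = rng.poisson(ns)                                # signal photons in the pulsed slot
            arrivals = (sym + 0.5) * slot + rng.normal(0, jitter_sigma, k)
            slots = np.floor(arrivals / slot).astype(int) % M  # jittered photons can spill over
            np.add.at(counts, slots, 1)
            return counts

        errors = 0
        for _ in range(symbols):
            sym = rng.integers(M)
            counts = transmit(sym)
            errors += np.argmax(counts) != sym                 # naive max-count decision
        print(f"symbol error rate with jitter sigma = {jitter_sigma}: {errors / symbols:.3f}")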

  3. Fundamental Bounds for Sequence Reconstruction from Nanopore Sequencers.

    PubMed

    Magner, Abram; Duda, Jarosław; Szpankowski, Wojciech; Grama, Ananth

    2016-06-01

    Nanopore sequencers are emerging as promising new platforms for high-throughput sequencing. As with other technologies, sequencer errors pose a major challenge for their effective use. In this paper, we present a novel information theoretic analysis of the impact of insertion-deletion (indel) errors in nanopore sequencers. In particular, we consider the following problems: (i) for given indel error characteristics and rate, what is the probability of accurate reconstruction as a function of sequence length; (ii) using replicated extrusion (the process of passing a DNA strand through the nanopore), what is the number of replicas needed to accurately reconstruct the true sequence with high probability? Our results provide a number of important insights: (i) the probability of accurate reconstruction of a sequence from a single sample in the presence of indel errors tends quickly (i.e., exponentially) to zero as the length of the sequence increases; and (ii) replicated extrusion is an effective technique for accurate reconstruction. We show that for typical distributions of indel errors, the required number of replicas is a slow function (polylogarithmic) of sequence length - implying that through replicated extrusion, we can sequence large reads using nanopore sequencers. Moreover, we show that in certain cases, the required number of replicas can be related to information-theoretic parameters of the indel error distributions.
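
    Claim (i) above can be illustrated with a one-line calculation: if each base independently suffers an indel with probability p, the chance that a single read is reconstructed without error is (1 - p)^n, which vanishes exponentially in the read length n. The error probability used below is an assumption chosen only for illustration.

        # Back-of-envelope illustration of claim (i); p is hypothetical.
        p = 0.05                      # assumed per-base indel probability
        for n in (100, 500, 1000, 5000):
            print(f"length {n:5d}: P(error-free single read) = {(1 - p) ** n:.2e}")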

  4. Axioms of adaptivity

    PubMed Central

    Carstensen, C.; Feischl, M.; Page, M.; Praetorius, D.

    2014-01-01

    This paper aims first at a simultaneous axiomatic presentation of the proof of optimal convergence rates for adaptive finite element methods and second at some refinements of particular questions like the avoidance of (discrete) lower bounds, inexact solvers, inhomogeneous boundary data, or the use of equivalent error estimators. Solely four axioms guarantee the optimality in terms of the error estimators. Compared to the state of the art in the contemporary literature, the improvements of this article can be summarized as follows: First, a general framework is presented which covers the existing literature on optimality of adaptive schemes. The abstract analysis covers linear as well as nonlinear problems and is independent of the underlying finite element or boundary element method. Second, efficiency of the error estimator is neither needed to prove convergence nor quasi-optimal convergence behavior of the error estimator. In this paper, efficiency exclusively characterizes the approximation classes involved in terms of the best-approximation error and data resolution, and so the upper bound on the optimal marking parameters does not depend on the efficiency constant. Third, some general quasi-Galerkin orthogonality is not only sufficient, but also necessary for the R-linear convergence of the error estimator, which is a fundamental ingredient in the current quasi-optimality analysis due to Stevenson 2007. Finally, the general analysis allows for equivalent error estimators and inexact solvers as well as different non-homogeneous and mixed boundary conditions. PMID:25983390

  5. Structural interpretation in composite systems using powder X-ray diffraction: applications of error propagation to the pair distribution function.

    PubMed

    Moore, Michael D; Shi, Zhenqi; Wildfong, Peter L D

    2010-12-01

    To develop a method for drawing statistical inferences from differences between multiple experimental pair distribution function (PDF) transforms of powder X-ray diffraction (PXRD) data. The appropriate treatment of initial PXRD error estimates using traditional error propagation algorithms was tested using Monte Carlo simulations on amorphous ketoconazole. An amorphous felodipine:polyvinyl pyrrolidone:vinyl acetate (PVPva) physical mixture was prepared to define an error threshold. Co-solidified products of felodipine:PVPva and terfenadine:PVPva were prepared using a melt-quench method and subsequently analyzed using PXRD and PDF. Differential scanning calorimetry (DSC) was used as an additional characterization method. The appropriate manipulation of initial PXRD error estimates through the PDF transform was confirmed using the Monte Carlo simulations for amorphous ketoconazole. The felodipine:PVPva physical mixture PDF analysis determined ±3σ to be an appropriate error threshold. Using the PDF and error propagation principles, the felodipine:PVPva co-solidified product was determined to be completely miscible, and the terfenadine:PVPva co-solidified product, although having the appearance of an amorphous molecular solid dispersion by DSC, was determined to be phase-separated. Statistically based inferences were successfully drawn from PDF transforms of PXRD patterns obtained from composite systems. The principles applied herein may be universally adapted to many different systems and provide a fundamentally sound basis for drawing structural conclusions from PDF studies.
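
    The Monte Carlo treatment of measurement errors mentioned above can be sketched generically: perturb the measured pattern within its per-point uncertainties, push each realization through the transform, and take the spread of the results as the propagated error band. The toy transform and synthetic pattern below are stand-ins chosen for brevity, not the authors' PDF algorithm.

        # Generic Monte Carlo error-propagation sketch (stand-in transform, synthetic data).
        import numpy as np

        rng = np.random.default_rng(3)
        q = np.linspace(1, 20, 400)                          # momentum-transfer grid (1/Å)
        intensity = 100 + 50 * np.exp(-(q - 3) ** 2)         # synthetic diffraction pattern
        sigma_i = np.sqrt(intensity)                         # Poisson-like counting error

        def toy_transform(i_q):
            """Stand-in for the PDF transform: a sine-weighted integral over q."""
            r = np.linspace(0.5, 10, 200)
            return np.trapz(i_q * np.sin(np.outer(r, q)), q, axis=1)

        draws = np.array([toy_transform(rng.normal(intensity, sigma_i)) for _ in range(500)])
        mean_pdf = draws.mean(axis=0)
        band = 3 * draws.std(axis=0)                         # +/-3 sigma threshold, as in the study
        print("max half-width of the 3-sigma band:", band.max())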

  6. Establishment of one-axis vibration test system for measurement of biodynamic response of human hand-arm system.

    PubMed

    Shibata, Nobuyuki; Hosoya, Naoki; Maeda, Setsuo

    2008-12-01

    Prolonged exposure to hand-arm vibration (HAV) due to use of hand-held power tools leads to an increased occurrence of symptoms of disorders in the vascular, neurological, and osteo-articular systems of the upper limbs, collectively called hand-arm vibration syndrome (HAVS). Biodynamic responses of the hand-arm system to vibration are suggestive parameters that allow better assessment of exposure to HAV and provide fundamental data for the design of low-vibration-exposure power tools. Recently, a single-axis hand-arm vibration test system was installed at the Japan National Institute of Occupational Safety and Health (NIOSH). The aims of this study were to obtain the fundamental dynamic characteristics of an instrumented handle and to validate the performance and measurement accuracy of the system applied to dynamic response measurement. A pseudo-random vibration signal with a frequency range of 5-1,250 Hz and a power spectral density of 1.0 (m/s²)²/Hz was used in this study. First, the dynamic response of the instrumented handle without any weight was measured; the measurement was then repeated with weights mounted on the handle. The apparent mass of each weight itself was obtained using the mass cancellation method, and the mass of the measuring cap on the instrumented handle was well compensated by the same method. Based on a 10% error tolerance, the handle can reliably measure the dynamic response, represented by an apparent mass, for a minimum weight of 2.0 g over the frequency range 10.0 to 1,000 Hz. A marked increase in the apparent-mass (AM) magnitude of the 15 g and 20 g weights above 800 Hz is attributed not to the fundamental resonance frequency of the handle with weights, but to the fixation of the weight to the measuring cap. This peak in the AM magnitude can therefore be reduced and should not be an obstacle to biodynamic response measurement of the human hand-arm system. On the basis of the results obtained in this study, we conclude that this hand-arm vibration test system can be used to measure biodynamic response parameters of the human hand-arm system.
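
    The mass cancellation mentioned above amounts to treating the apparent mass as the force/acceleration transfer function and subtracting the contribution of the unloaded measuring cap. The sketch below shows that arithmetic on synthetic, single-amplitude spectra; the cap and weight masses are assumptions made for the example, not values from the study.

        # Hedged sketch of the mass-cancellation idea on synthetic spectra.
        import numpy as np

        def apparent_mass(force_spectrum, accel_spectrum):
            """Complex apparent mass M(f) = F(f) / a(f) at each frequency bin."""
            return force_spectrum / accel_spectrum

        # Hypothetical spectra at a few frequencies (complex amplitudes, N and m/s^2).
        freqs = np.array([10.0, 100.0, 1000.0])
        accel = np.array([1.0 + 0j, 1.0 + 0j, 1.0 + 0j])    # unit acceleration drive
        cap_mass_kg = 0.012                                  # measuring-cap mass (assumed)
        weight_kg = 0.015                                    # mounted test weight (assumed)

        force_unloaded = cap_mass_kg * accel                 # cap alone
        force_loaded = (cap_mass_kg + weight_kg) * accel     # cap + weight

        am_cancelled = apparent_mass(force_loaded, accel) - apparent_mass(force_unloaded, accel)
        print("recovered weight apparent mass [g]:", np.round(np.abs(am_cancelled) * 1e3, 2))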

  7. Does Unit Analysis Help Students Construct Equations?

    ERIC Educational Resources Information Center

    Reed, Stephen K.

    2006-01-01

    Previous research has shown that students construct equations for word problems in which many of the terms have no referents. Experiment 1 attempted to eliminate some of these errors by providing instruction on canceling units. The failure of this method was attributed to the cognitive overload (Sweller, 2003) imposed by adding units to the…

  8. A Multi-Modal Active Learning Experience for Teaching Social Categorization

    ERIC Educational Resources Information Center

    Schwarzmueller, April

    2011-01-01

    This article details a multi-modal active learning experience to help students understand elements of social categorization. Each student in a group dynamics course observed two groups in conflict and identified examples of in-group bias, double-standard thinking, out-group homogeneity bias, law of small numbers, group attribution error, ultimate…

  9. ACCOUNTING FOR ERROR PROPAGATION IN THE DEVELOPMENT OF A LEAF AREA INDEX (LAI) REFERENCE MAP TO ASSESS THE MODIS MOD15A LAI PRODUCT

    EPA Science Inventory

    The ability to effectively use remotely sensed data for environmental spatial analysis is dependent on understanding the underlying procedures and associated variances attributed to the data processing and image analysis technique. Equally important, also, is understanding the er...

  10. On the Equivalence of Constructed-Response and Multiple-Choice Tests.

    ERIC Educational Resources Information Center

    Traub, Ross E.; Fisher, Charles W.

    Two sets of mathematical reasoning and two sets of verbal comprehension items were cast into each of three formats--constructed response, standard multiple-choice, and Coombs multiple-choice--in order to assess whether tests with indentical content but different formats measure the same attribute, except for possible differences in error variance…

  11. The Connection between Teaching Methods and Attribution Errors

    ERIC Educational Resources Information Center

    Wieman, Carl; Welsh, Ashley

    2016-01-01

    We collected data at a large, very selective public university on what math and science instructors felt was the biggest barrier to their students' learning. We also determined the extent of each instructor's use of research-based effective teaching methods. Instructors using fewer effective methods were more likely to say the greatest barrier to…

  12. Isolating Component Processes of Posterror Slowing with the Psychological Refractory Period Paradigm

    ERIC Educational Resources Information Center

    Steinhauser, Marco; Ernst, Benjamin; Ibald, Kevin W.

    2017-01-01

    Posterror slowing (PES) refers to an increased response time following errors. While PES has traditionally been attributed to control adjustments, recent evidence suggested that PES reflects interference. The present study investigated the hypothesis that control and interference represent 2 components of PES that differ with respect to their time…

  13. Is There a Lexical Bias Effect in Comprehension Monitoring?

    ERIC Educational Resources Information Center

    Severens, Els; Hartsuiker, Robert J.

    2009-01-01

    Event-related potentials were used to investigate if there is a lexical bias effect in comprehension monitoring. The lexical bias effect in language production (the tendency of phonological errors to result in existing words rather than nonwords) has been attributed to an internal self-monitoring system, which uses the comprehension system, and…

  14. The minimal local-asperity hypothesis of early retinal lateral inhibition.

    PubMed

    Balboa, R M; Grzywacz, N M

    2000-07-01

    Recently, we found that the theories related to information theory that exist in the literature cannot explain the behavior of the extent of the lateral inhibition mediated by retinal horizontal cells as a function of background light intensity. These theories can explain the fall of the extent from intermediate to high intensities, but not its rise from dim to intermediate intensities. We propose an alternate hypothesis that accounts for the extent's bell-shaped behavior. This hypothesis proposes that the lateral-inhibition adaptation in the early retina is part of a system to extract several image attributes, such as occlusion borders and contrast. To do so, this system would use prior probabilistic knowledge about the biological processing and relevant statistics in natural images. A key novel statistic used here is the probability of the presence of an occlusion border as a function of local contrast. Using this probabilistic knowledge, the retina would optimize the spatial profile of lateral inhibition to minimize attribute-extraction error. The two significant errors that this minimization process must reduce are due to the quantal noise in photoreceptors and the straddling of occlusion borders by lateral inhibition.

  15. Credit assignment between body and object probed by an object transportation task.

    PubMed

    Kong, Gaiqing; Zhou, Zhihao; Wang, Qining; Kording, Konrad; Wei, Kunlin

    2017-10-17

    It has been proposed that learning from movement errors involves a credit assignment problem: did I misestimate properties of the object or those of my body? For example, an overestimate of arm strength and an underestimate of the weight of a coffee cup can both lead to coffee spills. Though previous studies have found signs of simultaneous learning of the object and of the body during object manipulation, there is little behavioral evidence about their quantitative relation. Here we employed a novel weight-transportation task, in which participants lifted a first cup filled with liquid while we assessed their learning from errors. Specifically, we examined the transfer of learning when they switched to the contralateral hand, to a second identical cup, or to both a different hand and a different cup. By comparing these transfer behaviors, we found that 25% of the learning was attributed to the object (simply because of the use of the same cup) and 58% of the learning was attributed to the body (simply because of the use of the same hand). The nervous system thus seems to partition the learning of object manipulation between the object and the body.

  16. An evaluation of the underlying mechanisms of bloodstain pattern analysis error.

    PubMed

    Behrooz, Nima; Hulse-Smith, Lee; Chandra, Sanjeev

    2011-09-01

    An experiment was designed to explore the underlying mechanisms of blood disintegration and its subsequent effect on area of origin (AO) calculations. Blood spatter patterns were created through the controlled application of pressurized air (20-80 kPa) for 0.1 msec onto suspended blood droplets (2.7-3.2 mm diameter). The resulting disintegration process was captured using high-speed photography. Straight-line triangulation resulted in a 50% height overestimation, whereas using the lowest calculated height for each spatter pattern reduced this error to 8%. Incorporation of projectile motion resulted in a 28% height underestimation. The AO xy-coordinate was found to be very accurate with a maximum offset of only 4 mm, while AO size calculations were found to be two- to fivefold greater than expected. Subsequently, reverse triangulation analysis revealed the rotational offset for 26% of stains could not be attributed to measurement error, suggesting that some portion of error is inherent in the disintegration process. © 2011 American Academy of Forensic Sciences.
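
    For context, the straight-line triangulation evaluated above computes an impact angle from the stain's width-to-length ratio and back-projects along a straight line to the convergence point; because the droplet actually follows a curved trajectory, this tends to overestimate height, as the experiment found. The sketch below shows the standard arithmetic with made-up stain dimensions, not data from the study.

        # Standard straight-line triangulation used in bloodstain pattern analysis,
        # shown for illustration with synthetic numbers. It ignores projectile motion,
        # which is why it tends to overestimate the height of the area of origin.
        import math

        def impact_angle_deg(width_mm, length_mm):
            """Impact angle from the stain's ellipse: alpha = asin(width / length)."""
            return math.degrees(math.asin(width_mm / length_mm))

        def straight_line_height(distance_to_convergence_m, width_mm, length_mm):
            """Back-project the stain along a straight line to the convergence point."""
            alpha = math.radians(impact_angle_deg(width_mm, length_mm))
            return distance_to_convergence_m * math.tan(alpha)

        # A stain 4 mm wide and 8 mm long, 0.60 m from the point of convergence:
        print(f"angle  = {impact_angle_deg(4, 8):.1f} deg")
        print(f"height = {straight_line_height(0.60, 4, 8):.2f} m (straight-line estimate)")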

  17. ASME B89.4.19 Performance Evaluation Tests and Geometric Misalignments in Laser Trackers

    PubMed Central

    Muralikrishnan, B.; Sawyer, D.; Blackburn, C.; Phillips, S.; Borchardt, B.; Estler, W. T.

    2009-01-01

    Small and unintended offsets, tilts, and eccentricity of the mechanical and optical components in laser trackers introduce systematic errors in the measured spherical coordinates (angles and range readings) and possibly in the calculated lengths of reference artifacts. It is desirable that the tests described in the ASME B89.4.19 Standard [1] be sensitive to these geometric misalignments so that any resulting systematic errors are identified during performance evaluation. In this paper, we present some analysis, using error models and numerical simulation, of the sensitivity of the length measurement system tests and two-face system tests in the B89.4.19 Standard to misalignments in laser trackers. We highlight key attributes of the testing strategy adopted in the Standard and propose new length measurement system tests that demonstrate improved sensitivity to some misalignments. Experimental results with a tracker that is not properly error corrected for the effects of the misalignments validate claims regarding the proposed new length tests. PMID:27504211

  18. Spatial serial order processing in schizophrenia.

    PubMed

    Fraser, David; Park, Sohee; Clark, Gina; Yohanna, Daniel; Houk, James C

    2004-10-01

    The aim of this study was to examine serial order processing deficits in 21 schizophrenia patients and 16 age- and education-matched healthy controls. In a spatial serial order working memory task, one to four spatial targets were presented in a randomized sequence. Subjects were required to remember the locations and the order in which the targets were presented. Patients showed a marked deficit in ability to remember the sequences compared with controls. Increasing the number of targets within a sequence resulted in poorer memory performance for both control and schizophrenia subjects, but the effect was much more pronounced in the patients. Targets presented at the end of a long sequence were more vulnerable to memory error in schizophrenia patients. Performance deficits were not attributable to motor errors, but to errors in target choice. The results support the idea that the memory errors seen in schizophrenia patients may be due to saturating the working memory network at relatively low levels of memory load.

  19. Bathymetric surveying with GPS and heave, pitch, and roll compensation

    USGS Publications Warehouse

    Work, P.A.; Hansen, M.; Rogers, W.E.

    1998-01-01

    Field and laboratory tests of a shipborne hydrographic survey system were conducted. The system consists of two 12-channel GPS receivers (one on-board, one fixed on shore), a digital acoustic fathometer, and a digital heave-pitch-roll (HPR) recorder. Laboratory tests of the HPR recorder and fathometer are documented. Results of field tests of the isolated GPS system and then of the entire suite of instruments are presented. A method for data reduction is developed to account for vertical errors introduced by roll and pitch of the survey vessel, which can be substantial (decimeters). The GPS vertical position data are found to be reliable to 2-3 cm and the fathometer to 5 cm in the laboratory. The field test of the complete system in shallow water (<2 m) indicates absolute vertical accuracy of 10-20 cm. Much of this error is attributed to the fathometer. Careful surveying and equipment setup can minimize systematic error and yield much smaller average errors.
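
    A minimal sketch of the kind of attitude correction described above is given below: the sounding is scaled by cos(roll) * cos(pitch) because a tilted beam reads long, and heave (taken positive upward here) is removed. Lever-arm offsets, refraction, and the GPS datum transfer that the paper handles are deliberately omitted, and the numbers are invented.

        # Hedged first-order reduction of a sounding for vessel attitude.
        import math

        def reduce_sounding(raw_depth_m, roll_deg, pitch_deg, heave_m):
            """Depth below the mean water surface, assuming heave is positive upward."""
            tilt = math.cos(math.radians(roll_deg)) * math.cos(math.radians(pitch_deg))
            return raw_depth_m * tilt - heave_m   # tilted beam reads long; heave lifts the transducer

        # Example: a 1.80 m sounding taken with 5 deg roll, 3 deg pitch and +0.10 m heave.
        print(f"reduced depth = {reduce_sounding(1.80, 5.0, 3.0, 0.10):.3f} m")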

  20. Error in Radar-Derived Soil Moisture due to Roughness Parameterization: An Analysis Based on Synthetical Surface Profiles

    PubMed Central

    Lievens, Hans; Vernieuwe, Hilde; Álvarez-Mozos, Jesús; De Baets, Bernard; Verhoest, Niko E.C.

    2009-01-01

    In the past decades, many studies on soil moisture retrieval from SAR demonstrated a poor correlation between the top layer soil moisture content and observed backscatter coefficients, which mainly has been attributed to difficulties involved in the parameterization of surface roughness. The present paper describes a theoretical study, performed on synthetical surface profiles, which investigates how errors on roughness parameters are introduced by standard measurement techniques, and how they will propagate through the commonly used Integral Equation Model (IEM) into a corresponding soil moisture retrieval error for some of the currently most used SAR configurations. Key aspects influencing the error on the roughness parameterization and consequently on soil moisture retrieval are: the length of the surface profile, the number of profile measurements, the horizontal and vertical accuracy of profile measurements and the removal of trends along profiles. Moreover, it is found that soil moisture retrieval with C-band configuration generally is less sensitive to inaccuracies in roughness parameterization than retrieval with L-band configuration. PMID:22399956

  1. Elucidating the underlying components of food valuation in the human orbitofrontal cortex.

    PubMed

    Suzuki, Shinsuke; Cross, Logan; O'Doherty, John P

    2017-12-01

    The valuation of food is a fundamental component of our decision-making. Yet little is known about how value signals for food and other rewards are constructed by the brain. Using a food-based decision task in human participants, we found that subjective values can be predicted from beliefs about constituent nutritive attributes of food: protein, fat, carbohydrates and vitamin content. Multivariate analyses of functional MRI data demonstrated that, while food value is represented in patterns of neural activity in both medial and lateral parts of the orbitofrontal cortex (OFC), only the lateral OFC represents the elemental nutritive attributes. Effective connectivity analyses further indicate that information about the nutritive attributes represented in the lateral OFC is integrated within the medial OFC to compute an overall value. These findings provide a mechanistic account for the construction of food value from its constituent nutrients.

  2. Insight into biases and sequencing errors for amplicon sequencing with the Illumina MiSeq platform.

    PubMed

    Schirmer, Melanie; Ijaz, Umer Z; D'Amore, Rosalinda; Hall, Neil; Sloan, William T; Quince, Christopher

    2015-03-31

    With read lengths of currently up to 2 × 300 bp, high throughput, and low sequencing costs, Illumina's MiSeq is becoming one of the most utilized sequencing platforms worldwide. The platform is manageable and affordable even for smaller labs. This enables quick turnaround on a broad range of applications such as targeted gene sequencing, metagenomics, small genome sequencing and clinical molecular diagnostics. However, Illumina error profiles are still poorly understood and programs are therefore not designed for the idiosyncrasies of Illumina data. A better knowledge of the error patterns is essential for sequence analysis and vital if we are to draw valid conclusions. Studying true genetic variation in a population sample is fundamental for understanding diseases, evolution and origin. We conducted a large study on the error patterns for the MiSeq based on 16S rRNA amplicon sequencing data. We tested state-of-the-art library preparation methods for amplicon sequencing and showed that the library preparation method and the choice of primers are the most significant sources of bias and cause distinct error patterns. Furthermore we tested the efficiency of various error correction strategies and identified quality trimming (Sickle) combined with error correction (BayesHammer) followed by read overlapping (PANDAseq) as the most successful approach, reducing substitution error rates on average by 93%. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  3. Global Precipitation Measurement (GPM) Ground Validation: Plans and Preparations

    NASA Technical Reports Server (NTRS)

    Schwaller, M.; Bidwell, S.; Durning, F. J.; Smith, E.

    2004-01-01

    The Global Precipitation Measurement (GPM) program is an international partnership led by the National Aeronautics and Space Administration (NASA) and the Japan Aerospace Exploration Agency (JAXA). GPM will improve climate, weather, and hydro-meteorological forecasts through more frequent and more accurate measurement of precipitation across the globe. This paper describes the concept, the planning, and the preparations for Ground Validation within the GPM program. Ground Validation (GV) plays an important role in the program by investigating and quantitatively assessing the errors within the satellite retrievals. These quantitative estimates of retrieval errors will assist the scientific community by bounding the errors within their research products. The two fundamental requirements of the GPM Ground Validation program are: (1) error characterization of the precipitation retrievals and (2) continual improvement of the satellite retrieval algorithms. These two driving requirements determine the measurements, instrumentation, and location for ground observations. This paper outlines GV plans for estimating the systematic and random components of retrieval error and for characterizing the spatial and temporal structure of the error, and plans for algorithm improvement in which error models are developed and experimentally explored to uncover the physical causes of errors within the retrievals. This paper discusses NASA locations for GV measurements as well as anticipated locations from international GPM partners. NASA's primary locations for validation measurements are an oceanic site at Kwajalein Atoll in the Republic of the Marshall Islands and a continental site in north-central Oklahoma at the U.S. Department of Energy's Atmospheric Radiation Measurement Program site.

  4. Preparations for Global Precipitation Measurement (GPM) Ground Validation

    NASA Technical Reports Server (NTRS)

    Bidwell, S. W.; Bibyk, I. K.; Durning, J. F.; Everett, D. F.; Smith, E. A.; Wolff, D. B.

    2004-01-01

    The Global Precipitation Measurement (GPM) program is an international partnership led by the National Aeronautics and Space Administration (NASA) and the Japan Aerospace Exploration Agency (JAXA). GPM will improve climate, weather, and hydro-meteorological forecasts through more frequent and more accurate measurement of precipitation across the globe. This paper describes the concept and the preparations for Ground Validation within the GPM program. Ground Validation (GV) plays a critical role in the program by investigating and quantitatively assessing the errors within the satellite retrievals. These quantitative estimates of retrieval errors will assist the scientific community by bounding the errors within their research products. The two fundamental requirements of the GPM Ground Validation program are: (1) error characterization of the precipitation retrievals and (2) continual improvement of the satellite retrieval algorithms. These two driving requirements determine the measurements, instrumentation, and location for ground observations. This paper describes GV plans for estimating the systematic and random components of retrieval error and for characterizing the spatial and temporal structure of the error. This paper describes the GPM program for algorithm improvement in which error models are developed and experimentally explored to uncover the physical causes of errors within the retrievals. GPM will ensure that information gained through Ground Validation is applied to future improvements in the spaceborne retrieval algorithms. This paper discusses the potential locations for validation measurement and research, the anticipated contributions of GPM's international partners, and the interaction of Ground Validation with other GPM program elements.

  5. Satellite Sampling and Retrieval Errors in Regional Monthly Rain Estimates from TMI AMSR-E, SSM/I, AMSU-B and the TRMM PR

    NASA Technical Reports Server (NTRS)

    Fisher, Brad; Wolff, David B.

    2010-01-01

    Passive and active microwave rain sensors onboard earth-orbiting satellites estimate monthly rainfall from the instantaneous rain statistics collected during satellite overpasses. It is well known that climate-scale rain estimates from meteorological satellites incur sampling errors resulting from the process of discrete temporal sampling and statistical averaging. Sampling and retrieval errors ultimately become entangled in the estimation of the mean monthly rain rate. The sampling component of the error budget effectively introduces statistical noise into climate-scale rain estimates that obscures the error component associated with the instantaneous rain retrieval. Estimating the accuracy of the retrievals on monthly scales therefore necessitates a decomposition of the total error budget into sampling and retrieval error quantities. This paper presents results from a statistical evaluation of the sampling and retrieval errors for five different space-borne rain sensors on board nine orbiting satellites. Using an error decomposition methodology developed by one of the authors, sampling and retrieval errors were estimated at 0.25° resolution within 150 km of ground-based weather radars located at Kwajalein, Marshall Islands and Melbourne, Florida. Error and bias statistics were calculated according to the land, ocean and coast classifications of the surface terrain mask developed for the Goddard Profiling (GPROF) rain algorithm. Variations in the comparative error statistics are attributed to various factors related to differences in the swath geometry of each rain sensor, the orbital and instrument characteristics of the satellite and the regional climatology. The most significant result from this study found that each of the satellites incurred negative long-term oceanic retrieval biases of 10 to 30%.

  6. A Generic Probabilistic Model and a Hierarchical Solution for Sensor Localization in Noisy and Restricted Conditions

    NASA Astrophysics Data System (ADS)

    Ji, S.; Yuan, X.

    2016-06-01

    A generic probabilistic model, based on the fundamental Bayes' rule and the Markov assumption, is introduced to describe the localization of a mobile platform with optical sensors. From this model, three relatively independent solutions, bundle adjustment, Kalman filtering and particle filtering, are derived under different additional restrictions. We aim to show, first, that Kalman filtering may be a better supplier of initial values for bundle adjustment than traditional relative orientation in irregular strips and networks or when tie-point extraction fails. Second, in highly noisy conditions, particle filtering can act as a bridge across gaps when a large number of gross errors cause a Kalman filter or a bundle adjustment to fail. Third, both filtering methods, which help reduce error propagation and eliminate gross errors, safeguard a global and static bundle adjustment, which requires the strictest initial values and control conditions. The main innovation lies in the integrated treatment of stochastic and gross errors in sensor observations, and in the integration of the three most widely used solutions, bundle adjustment, Kalman filtering and particle filtering, into a generic probabilistic localization model. Tests in noisy and restricted situations are designed and examined to support these claims.
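    As a concrete illustration of the predict/update cycle that such a Bayes-Markov model specializes to, the sketch below implements a minimal one-dimensional constant-velocity Kalman filter; the state model, noise levels and measurements are illustrative assumptions, not the authors' sensor configuration.

```python
import numpy as np

# Minimal 1D constant-velocity Kalman filter, illustrating the predict/update
# cycle that a generic Bayes-Markov localization model specializes to.
# All matrices and noise levels here are illustrative assumptions.

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (position, velocity)
H = np.array([[1.0, 0.0]])               # we observe position only
Q = 0.01 * np.eye(2)                     # process noise covariance
R = np.array([[0.5]])                    # measurement noise covariance

x = np.array([0.0, 0.0])                 # initial state estimate
P = np.eye(2)                            # initial covariance

def kalman_step(x, P, z):
    # Predict: propagate the state and covariance through the motion model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: fuse the prediction with the new measurement z.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (np.array([z]) - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

for z in [1.1, 2.0, 2.9, 4.2, 5.0]:      # noisy position observations
    x, P = kalman_step(x, P, z)
print(x)  # filtered position and velocity estimate
```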

  7. Report on Automated Semantic Analysis of Scientific and Engineering Codes

    NASA Technical Reports Server (NTRS)

    Stewart, Mark E. M.; Follen, Greg (Technical Monitor)

    2001-01-01

    The loss of the Mars Climate Orbiter due to a software error reveals what insiders know: software development is difficult and risky because, in part, current practices do not readily handle the complex details of software. Yet, for scientific software development, the MCO mishap represents the tip of the iceberg; few errors are so public, and many errors are avoided with a combination of expertise, care, and testing during development and modification. Further, this effort consumes valuable time and resources even when hardware costs and execution time continually decrease. Software development could use better tools! This lack of tools has motivated the semantic analysis work explained in this report. However, this work has a distinguishing emphasis; the tool focuses on automated recognition of the fundamental mathematical and physical meaning of scientific code. Further, its comprehension is measured by quantitatively evaluating overall recognition with practical codes. This emphasis is necessary if software errors, like the MCO error, are to be quickly and inexpensively avoided in the future. This report evaluates the progress made with this problem. It presents recommendations and describes the approach, the tool's status, the challenges, related research, and a development strategy.

  8. Measurement uncertainty relations: characterising optimal error bounds for qubits

    NASA Astrophysics Data System (ADS)

    Bullock, T.; Busch, P.

    2018-07-01

    In standard formulations of the uncertainty principle, two fundamental features are typically cast as impossibility statements: two noncommuting observables cannot in general both be sharply defined (for the same state), nor can they be measured jointly. The pioneers of quantum mechanics were acutely aware of and puzzled by this fact, and it motivated Heisenberg to seek a mitigation, which he formulated in his seminal paper of 1927. He provided intuitive arguments to show that the values of, say, the position and momentum of a particle can at least be unsharply defined, and they can be measured together provided some approximation errors are allowed. Only now, nine decades later, is a working theory of approximate joint measurements taking shape, leading to rigorous and experimentally testable formulations of associated error tradeoff relations. Here we briefly review this new development, explaining the concepts and steps taken in the construction of optimal joint approximations of pairs of incompatible observables. As a case study, we deduce measurement uncertainty relations for qubit observables using two distinct error measures. We provide an operational interpretation of the error bounds and discuss some of the first experimental tests of such relations.

  9. [The metaphysical dimension of animal ethics].

    PubMed

    Walz, Norbert

    2008-01-01

    Utilitarian ethics recognises animals as moral objects, but it does not attribute an absolute value to human or non-human individuals. Animal ethics according to Regan defines the non-human individual as an inherent value, but concedes that humans should be given precedence over animals if a situation involves a decision between life and death. Such life and death decisions relate to the fundamental structures of biological nature. To individuals these fundamental structures (the paradox of life and death) will necessarily appear absurd. The metaphysical dimension of animal ethics tries to shed light on the connections between life and death, body and mind that underlie ethical discussions, and searches for alternatives to the natural organisation of life.

  10. Insufficient Hartree–Fock Exchange in Hybrid DFT Functionals Produces Bent Alkynyl Radical Structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oyeyemi, Victor B.; Keith, John A.; Pavone, Michele

    2012-01-11

    Density functional theory (DFT) is often used to determine the electronic and geometric structures of molecules. While studying alkynyl radicals, we discovered that DFT exchange-correlation (XC) functionals containing less than ~22% Hartree–Fock (HF) exchange led to qualitatively different structures than those predicted from ab initio HF and post-HF calculations or DFT XCs containing 25% or more HF exchange. We attribute this discrepancy to rehybridization at the radical center due to electron delocalization across the triple bonds of the alkynyl groups, which itself is an artifact of self-interaction and delocalization errors. Inclusion of sufficient exact exchange reduces these errors and suppresses this erroneous delocalization; we find that a threshold amount is needed for accurate structure determinations. Finally, below this threshold, significant errors in predicted alkyne thermochemistry emerge as a consequence.

  11. Educational agenda for diagnostic error reduction

    PubMed Central

    Trowbridge, Robert L; Dhaliwal, Gurpreet; Cosby, Karen S

    2013-01-01

    Diagnostic errors are a major patient safety concern. Although the majority of diagnostic errors are partially attributable to cognitive mistakes, the most effective means of improving clinician cognition in order to achieve gains in diagnostic reliability are unclear. We propose a tripartite educational agenda for improving diagnostic performance among students, residents and practising physicians. This agenda includes strengthening the metacognitive abilities of clinicians, fostering intuitive reasoning and increasing awareness of the role of systems in the diagnostic process. The evidence supporting initiatives in each of these realms is reviewed and a course of future implementation and study is proposed. The barriers to designing and implementing this agenda are substantial and include limited evidence supporting these initiatives and the challenges of changing the practice patterns of practising physicians. Implementation will need to be accompanied by rigorous evaluation. PMID:23764435

  12. Feature-binding errors after eye movements and shifts of attention.

    PubMed

    Golomb, Julie D; L'heureux, Zara E; Kanwisher, Nancy

    2014-05-01

    When people move their eyes, the eye-centered (retinotopic) locations of objects must be updated to maintain world-centered (spatiotopic) stability. Here, we demonstrated that the attentional-updating process temporarily distorts the fundamental ability to bind object locations with their features. Subjects were simultaneously presented with four colors after a saccade, one in a precued spatiotopic target location, and were instructed to report the target's color using a color wheel. Subjects' reports were systematically shifted in color space toward the color of the distractor in the retinotopic location of the cue. Probabilistic modeling exposed both crude swapping errors and subtler feature mixing (as if the retinotopic color had blended into the spatiotopic percept). Additional experiments conducted without saccades revealed that the two types of errors stemmed from different attentional mechanisms (attention shifting vs. splitting). Feature mixing not only reflects a new perceptual phenomenon, but also provides novel insight into how attention is remapped across saccades.

  13. Density-matrix simulation of small surface codes under current and projected experimental noise

    NASA Astrophysics Data System (ADS)

    O'Brien, T. E.; Tarasinski, B.; DiCarlo, L.

    2017-09-01

    We present a density-matrix simulation of the quantum memory and computing performance of the distance-3 logical qubit Surface-17, following a recently proposed quantum circuit and using experimental error parameters for transmon qubits in a planar circuit QED architecture. We use this simulation to optimize components of the QEC scheme (e.g., trading off stabilizer measurement infidelity for reduced cycle time) and to investigate the benefits of feedback harnessing the fundamental asymmetry of relaxation-dominated error in the constituent transmons. A lower-order approximate calculation extends these predictions to the distance-5 Surface-49. These results clearly indicate error rates below the fault-tolerance threshold of the surface code, and the potential for Surface-17 to perform beyond the break-even point of quantum memory. However, Surface-49 is required to surpass the break-even point of computation at state-of-the-art qubit relaxation times and readout speeds.

  14. Cerebellar and prefrontal cortex contributions to adaptation, strategies, and reinforcement learning.

    PubMed

    Taylor, Jordan A; Ivry, Richard B

    2014-01-01

    Traditionally, motor learning has been studied as an implicit learning process, one in which movement errors are used to improve performance in a continuous, gradual manner. The cerebellum figures prominently in this literature given well-established ideas about the role of this system in error-based learning and the production of automatized skills. Recent developments have brought into focus the relevance of multiple learning mechanisms for sensorimotor learning. These include processes involving repetition, reinforcement learning, and strategy utilization. We examine these developments, considering their implications for understanding cerebellar function and how this structure interacts with other neural systems to support motor learning. Converging lines of evidence from behavioral, computational, and neuropsychological studies suggest a fundamental distinction between processes that use error information to improve action execution or action selection. While the cerebellum is clearly linked to the former, its role in the latter remains an open question. © 2014 Elsevier B.V. All rights reserved.

  15. Cerebellar and Prefrontal Cortex Contributions to Adaptation, Strategies, and Reinforcement Learning

    PubMed Central

    Taylor, Jordan A.; Ivry, Richard B.

    2014-01-01

    Traditionally, motor learning has been studied as an implicit learning process, one in which movement errors are used to improve performance in a continuous, gradual manner. The cerebellum figures prominently in this literature given well-established ideas about the role of this system in error-based learning and the production of automatized skills. Recent developments have brought into focus the relevance of multiple learning mechanisms for sensorimotor learning. These include processes involving repetition, reinforcement learning, and strategy utilization. We examine these developments, considering their implications for understanding cerebellar function and how this structure interacts with other neural systems to support motor learning. Converging lines of evidence from behavioral, computational, and neuropsychological studies suggest a fundamental distinction between processes that use error information to improve action execution or action selection. While the cerebellum is clearly linked to the former, its role in the latter remains an open question. PMID:24916295

  16. Optimal joint measurements of complementary observables by a single trapped ion

    NASA Astrophysics Data System (ADS)

    Xiong, T. P.; Yan, L. L.; Ma, Z. H.; Zhou, F.; Chen, L.; Yang, W. L.; Feng, M.; Busch, P.

    2017-06-01

    The uncertainty relations, pioneered by Werner Heisenberg nearly 90 years ago, set a fundamental limitation on the joint measurability of complementary observables. This limitation has long been a subject of debate, which has been reignited recently due to new proposed forms of measurement uncertainty relations. The present work is associated with a new error trade-off relation for compatible observables approximating two incompatible observables, in keeping with the spirit of Heisenberg's original ideas of 1927. We report the first direct test and confirmation of the tight bounds prescribed by such an error trade-off relation, based on an experimental realisation of optimal joint measurements of complementary observables using a single ultracold ⁴⁰Ca⁺ ion trapped in a harmonic potential. Our work provides a prototypical determination of ultimate joint measurement error bounds with potential applications in quantum information science for high-precision measurement and information security.

  17. Achieving the Heisenberg limit in quantum metrology using quantum error correction.

    PubMed

    Zhou, Sisi; Zhang, Mengzhen; Preskill, John; Jiang, Liang

    2018-01-08

    Quantum metrology has many important applications in science and technology, ranging from frequency spectroscopy to gravitational wave detection. Quantum mechanics imposes a fundamental limit on measurement precision, called the Heisenberg limit, which can be achieved for noiseless quantum systems, but is not achievable in general for systems subject to noise. Here we study how measurement precision can be enhanced through quantum error correction, a general method for protecting a quantum system from the damaging effects of noise. We find a necessary and sufficient condition for achieving the Heisenberg limit using quantum probes subject to Markovian noise, assuming that noiseless ancilla systems are available, and that fast, accurate quantum processing can be performed. When the sufficient condition is satisfied, a quantum error-correcting code can be constructed that suppresses the noise without obscuring the signal; the optimal code, achieving the best possible precision, can be found by solving a semidefinite program.

  18. Acceptance sampling for attributes via hypothesis testing and the hypergeometric distribution

    NASA Astrophysics Data System (ADS)

    Samohyl, Robert Wayne

    2017-10-01

    This paper questions some aspects of attribute acceptance sampling in light of the original concepts of hypothesis testing from Neyman and Pearson (NP). Attribute acceptance sampling in industry, as developed by Dodge and Romig (DR), generally follows the international standards of ISO 2859, and similarly the Brazilian standards NBR 5425 to NBR 5427 and the United States Standards ANSI/ASQC Z1.4. The paper evaluates and extends the area of acceptance sampling in two directions. First, by suggesting the use of the hypergeometric distribution to calculate the parameters of sampling plans, avoiding the unnecessary use of approximations such as the binomial or Poisson distributions. We show that, under usual conditions, discrepancies can be large. The conclusion is that the hypergeometric distribution, ubiquitously available in commonly used software, is more appropriate than other distributions for acceptance sampling. Second, and more importantly, we elaborate the theory of acceptance sampling in terms of hypothesis testing, rigorously following the original concepts of NP. By offering a common theoretical structure, hypothesis testing from NP can produce a better understanding of applications even beyond the usual areas of industry and commerce, such as public health and political polling. With the new procedures, both sample size and sample error can be reduced. What is unclear in traditional acceptance sampling is the necessity of linking the acceptable quality limit (AQL) exclusively to the producer and the lot tolerance percent defective (LTPD) exclusively to the consumer. In reality, the consumer should also be preoccupied with a value of AQL, as should the producer with LTPD. Furthermore, we can also question why type I error is always uniquely associated with the producer as producer risk, and likewise the same question arises with consumer risk, which is necessarily associated with type II error. The resolution of these questions is new to the literature. The article presents R code throughout.
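    A minimal Python sketch of the exact hypergeometric calculation (using scipy.stats.hypergeom, with an assumed lot size, sample size, acceptance number, AQL and LTPD) shows how producer and consumer risks follow directly from a single-sampling plan without binomial or Poisson approximations; it is illustrative, not the R code presented in the article.

```python
from scipy.stats import hypergeom

# Illustrative attribute acceptance sampling with the exact hypergeometric
# distribution. Plan: draw n items from a lot of N, accept if at most c
# defectives are found. N, n, c and the quality levels below are assumptions.
N, n, c = 500, 50, 2          # lot size, sample size, acceptance number

def accept_probability(defectives_in_lot):
    """P(accept) = P(X <= c), where X ~ Hypergeometric(N, defectives, n)."""
    return hypergeom.cdf(c, N, defectives_in_lot, n)

aql_defectives = int(0.01 * N)    # lot at the acceptable quality limit (1%)
ltpd_defectives = int(0.10 * N)   # lot at the limiting quality (10%)

producer_risk = 1 - accept_probability(aql_defectives)   # type I error (alpha)
consumer_risk = accept_probability(ltpd_defectives)      # type II error (beta)
print(f"alpha = {producer_risk:.3f}, beta = {consumer_risk:.3f}")
```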

  19. Pilot Age and Error in Air-Taxi Crashes

    PubMed Central

    Rebok, George W.; Qiang, Yandong; Baker, Susan P.; Li, Guohua

    2010-01-01

    Introduction The associations of pilot error with the type of flight operations and basic weather conditions are well documented. The correlation between pilot characteristics and error is less clear. This study aims to examine whether pilot age is associated with the prevalence and patterns of pilot error in air-taxi crashes. Methods Investigation reports from the National Transportation Safety Board for crashes involving non-scheduled Part 135 operations (i.e., air taxis) in the United States between 1983 and 2002 were reviewed to identify pilot error and other contributing factors. Crash circumstances and the presence and type of pilot error were analyzed in relation to pilot age using Chi-square tests. Results Of the 1751 air-taxi crashes studied, 28% resulted from mechanical failure, 25% from loss of control at landing or takeoff, 7% from visual flight rule conditions into instrument meteorological conditions, 7% from fuel starvation, 5% from taxiing, and 28% from other causes. Crashes among older pilots were more likely to occur during the daytime rather than at night and off airport than on airport. The patterns of pilot error in air-taxi crashes were similar across age groups. Of the errors identified, 27% were flawed decisions, 26% were inattentiveness, 23% mishandled aircraft kinetics, 15% mishandled wind and/or runway conditions, and 11% were others. Conclusions Pilot age is associated with crash circumstances but not with the prevalence and patterns of pilot error in air-taxi crashes. Lack of age-related differences in pilot error may be attributable to the “safe worker effect.” PMID:19601508

  20. Effects of minute misregistrations of prefabricated markers for image-guided dental implant surgery: an analytical evaluation.

    PubMed

    Rußig, Lorenz L; Schulze, Ralf K W

    2013-12-01

    The goal of the present study was to develop a theoretical analysis of errors in implant position, which can occur owing to minute registration errors of a reference marker in a cone beam computed tomography volume when inserting an implant with a surgical stent. A virtual dental-arch model was created using anatomic data derived from the literature. Basic trigonometry was used to compute the effects of defined minute registration errors of only voxel size. The errors occurring at the implant's neck and apex, in both the horizontal and vertical directions, were computed for mean ±95%-confidence intervals of jaw width and length and typical implant lengths (8, 10 and 12 mm). The largest errors occur in the vertical direction for larger voxel sizes and for greater arch dimensions. For a 10 mm implant in the frontal region, these can amount to a mean of 0.716 mm (range: 0.201-1.533 mm). Horizontal errors at the neck are negligible, with a mean overall deviation of 0.009 mm (range: 0.001-0.034 mm). Errors increase with distance to the registration marker and voxel size and are affected by implant length. Our study shows that minute and realistic errors occurring in the automated registration of a reference object have an impact on the implant's position and angulation. These errors occur in the fundamental initial step in the long planning chain; thus, they are critical, and users of these systems should be made aware of them. © 2012 John Wiley & Sons A/S.
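    The simplified geometric sketch below illustrates how a one-voxel registration offset can propagate into displacement at the implant neck and apex; the marker span, distances and small-rotation model are assumptions for illustration and are not the authors' exact virtual dental-arch model.

```python
import math

# Simplified geometric sketch (assumptions, not the authors' exact model):
# a one-voxel registration offset across a reference marker of span s tilts
# the registered frame by theta = atan(voxel / s); a point at distance d from
# the marker is then displaced by roughly d * tan(theta), and the implant apex
# (a further implant_length away) by (d + implant_length) * tan(theta).

def apex_displacement(voxel_mm, marker_span_mm, distance_mm, implant_length_mm):
    theta = math.atan(voxel_mm / marker_span_mm)        # induced angular error
    neck = distance_mm * math.tan(theta)                # displacement at the neck
    apex = (distance_mm + implant_length_mm) * math.tan(theta)
    return theta, neck, apex

# Hypothetical numbers: 0.3 mm voxel, 10 mm marker span, frontal implant 40 mm
# from the marker, 10 mm implant length.
theta, neck, apex = apex_displacement(0.3, 10.0, 40.0, 10.0)
print(f"tilt = {math.degrees(theta):.2f} deg, neck = {neck:.2f} mm, apex = {apex:.2f} mm")
```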

  1. Exploring Reticence in Research Methods: The Experience of Studying Psychological Research Methods in Higher Education

    ERIC Educational Resources Information Center

    Kingsley, Barbara E.; Robertson, Julia M.

    2017-01-01

    As a fundamental element of any psychology degree, the teaching and learning of research methods is repeatedly brought into sharp focus, and it is often regarded as a real challenge by undergraduate students. The reasons for this are complex, but frequently attributed to an aversion of maths. To gain a more detailed understanding of students'…

  2. Enhancing Collaborative Peer-to-Peer Systems Using Resource Aggregation and Caching: A Multi-Attribute Resource and Query Aware Approach

    ERIC Educational Resources Information Center

    Bandara, H. M. N. Dilum

    2012-01-01

    Resource-rich computing devices, decreasing communication costs, and Web 2.0 technologies are fundamentally changing the way distributed applications communicate and collaborate. With these changes, we envision Peer-to-Peer (P2P) systems that will allow for the integration and collaboration of peers with diverse capabilities to a virtual community…

  3. The method of fundamental solutions for computing acoustic interior transmission eigenvalues

    NASA Astrophysics Data System (ADS)

    Kleefeld, Andreas; Pieronek, Lukas

    2018-03-01

    We analyze the method of fundamental solutions (MFS) in two different versions with focus on the computation of approximate acoustic interior transmission eigenvalues in 2D for homogeneous media. Our approach is mesh- and integration free, but suffers in general from the ill-conditioning effects of the discretized eigenoperator, which we could then successfully balance using an approved stabilization scheme. Our numerical examples cover many of the common scattering objects and prove to be very competitive in accuracy with the standard methods for PDE-related eigenvalue problems. We finally give an approximation analysis for our framework and provide error estimates, which bound interior transmission eigenvalue deviations in terms of some generalized MFS output.

  4. Astrophysical properties of star clusters in the Magellanic Clouds homogeneously estimated by ASteCA

    NASA Astrophysics Data System (ADS)

    Perren, G. I.; Piatti, A. E.; Vázquez, R. A.

    2017-06-01

    Aims: We seek to produce a homogeneous catalog of astrophysical parameters of 239 resolved star clusters, located in the Small and Large Magellanic Clouds, observed in the Washington photometric system. Methods: The cluster sample was processed with the recently introduced Automated Stellar Cluster Analysis (ASteCA) package, which ensures both an automatized and a fully reproducible treatment, together with a statistically based analysis of their fundamental parameters and associated uncertainties. The fundamental parameters determined for each cluster with this tool, via a color-magnitude diagram (CMD) analysis, are metallicity, age, reddening, distance modulus, and total mass. Results: We generated a homogeneous catalog of structural and fundamental parameters for the studied cluster sample and performed a detailed internal error analysis along with a thorough comparison with values taken from 26 published articles. We studied the distribution of cluster fundamental parameters in both Clouds and obtained their age-metallicity relationships. Conclusions: The ASteCA package can be applied to an unsupervised determination of fundamental cluster parameters, which is a task of increasing relevance as more data becomes available through upcoming surveys. A table with the estimated fundamental parameters for the 239 clusters analyzed is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/602/A89

  5. The use of a contextual, modal and psychological classification of medication errors in the emergency department: a retrospective descriptive study.

    PubMed

    Cabilan, C J; Hughes, James A; Shannon, Carl

    2017-12-01

    To describe the contextual, modal and psychological classification of medication errors in the emergency department in order to identify the factors associated with the reported medication errors. The causes of medication errors are unique in every clinical setting; hence, error minimisation strategies are not always effective. For this reason, it is fundamental to understand the causes specific to the emergency department so that targeted strategies can be implemented. Retrospective analysis of reported medication errors in the emergency department. All voluntarily staff-reported medication-related incidents from 2010-2015 from the hospital's electronic incident management system were retrieved for analysis. Contextual classification covered the time, place and type of medications involved. Modal classification pertained to the stage and issue (e.g. wrong medication, wrong patient). Psychological classification categorised the errors in planning (knowledge-based and rule-based errors) and skill (slips and lapses). There were 405 errors reported. Most errors occurred in the acute care area, short-stay unit and resuscitation area, during the busiest shifts (0800-1559, 1600-2259). Half of the errors involved high-alert medications. Many of the errors occurred during administration (62·7%), prescribing (28·6%) and commonly during both stages (18·5%). Wrong dose, wrong medication and omission were the issues that dominated. Knowledge-based errors characterised the errors that occurred in prescribing and administration. The highest proportion of slips (79·5%) and lapses (76·1%) occurred during medication administration. It is likely that some of the errors occurred due to the lack of adherence to safety protocols. Technology such as computerised prescribing, barcode medication administration and reminder systems could potentially decrease the medication errors in the emergency department. There was a possibility that some of the errors could be prevented if safety protocols were adhered to, which highlights the need to also address clinicians' attitudes towards safety. Technology can be implemented to help minimise errors in the ED, but this must be coupled with efforts to enhance the culture of safety. © 2017 John Wiley & Sons Ltd.

  6. A Numerical Optimization Approach for Tuning Fuzzy Logic Controllers

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Garg, Devendra P.

    1998-01-01

    This paper develops a method to tune fuzzy controllers using numerical optimization. The main attribute of this approach is that it allows fuzzy logic controllers to be tuned to achieve global performance requirements. Furthermore, this approach allows design constraints to be implemented during the tuning process. The method tunes the controller by parameterizing the membership functions for error, change-in-error and control output. The resulting parameters form a design vector which is iteratively changed to minimize an objective function; minimizing the objective function yields optimal system performance. A spacecraft-mounted science instrument line-of-sight pointing control problem is used to demonstrate the results.
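    A minimal sketch of the idea, assuming a toy first-order plant and a three-term rule base rather than the paper's spacecraft pointing model: the membership-function widths and an output gain form the design vector, and scipy.optimize.minimize searches for the vector that minimizes the integrated squared tracking error.

```python
import numpy as np
from scipy.optimize import minimize

def tri(x, c, w):
    """Triangular membership function centred at c with half-width w."""
    return max(0.0, 1.0 - abs(x - c) / w)

CENTERS = [-1.0, 0.0, 1.0]   # negative / zero / positive linguistic terms

def controller(e, de, p):
    we, wde, gain = p                                   # the design vector
    num = den = 0.0
    for ce in CENTERS:
        for cd in CENTERS:
            f = tri(e, ce, we) * tri(de, cd, wde)       # rule firing strength
            u_rule = max(-1.0, min(1.0, ce + 0.5 * cd)) # singleton rule output
            num += f * u_rule
            den += f
    return gain * num / den if den > 0 else 0.0         # weighted-average defuzzification

def cost(p):
    # Simulate an assumed first-order plant tracking a unit step; the cost is
    # the integrated squared error, which the optimizer drives down.
    y, e_prev, J, dt = 0.0, 1.0, 0.0, 0.05
    for _ in range(200):
        e = 1.0 - y
        u = controller(e, (e - e_prev) / dt, p)
        y += dt * (-y + u)
        J += dt * e * e
        e_prev = e
    return J

res = minimize(cost, x0=[1.0, 1.0, 2.0],
               bounds=[(0.2, 5.0), (0.2, 5.0), (0.1, 10.0)], method="L-BFGS-B")
print(res.x, res.fun)   # tuned membership widths, gain, and achieved cost
```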

  7. Online Tools for Uncovering Data Quality (DQ) Issues in Satellite-Based Global Precipitation Products

    NASA Technical Reports Server (NTRS)

    Liu, Zhong; Heo, Gil

    2015-01-01

    Data quality (DQ) has many attributes or facets (e.g., errors, biases, systematic differences, uncertainties, benchmarks, false trends, false alarm ratio). The sources of these issues can be complicated (measurements, environmental conditions, surface types, algorithms, etc.) and difficult to identify, especially for multi-sensor and multi-satellite products with bias correction (TMPA, IMERG, etc.). Two practical questions arise: how to obtain DQ information quickly and easily, especially quantified information for a region of interest, beyond what existing parameters (e.g., random error), the literature, or do-it-yourself analysis provide; and how to apply that knowledge in research and applications. Here, we focus on online systems for integration of products and parameters, visualization and analysis, as well as investigation and extraction of DQ information.

  8. 'Systemic Failures' and 'Human Error' in Canadian TSB Aviation Reports Between 1996 and 2002

    NASA Technical Reports Server (NTRS)

    Holloway, C. M.; Johnson, C. W.

    2004-01-01

    This paper describes the results of an independent analysis of the primary and contributory causes of aviation accidents in Canada between 1996 and 2003. The purpose of the study was to assess the comparative frequency of a range of causal factors in the reporting of these adverse events. Our results suggest that the majority of these high consequence accidents were attributed to human error. A large number of reports also mentioned wider systemic issues, including the managerial and regulatory context of aviation operations. These issues are more likely to appear as contributory rather than primary causes in this set of accident reports.

  9. Lexical morphology and its role in the writing process: evidence from a case of acquired dysgraphia.

    PubMed

    Badecker, W; Hillis, A; Caramazza, A

    1990-06-01

    A case of acquired dysgraphia is presented in which the deficit is attributed to an impairment at the level of the Graphemic Output Buffer. It is argued that this patient's performance can be used to identify the representational character of the processing units that are stored in the Orthographic Output Lexicon. In particular, it is argued that the distribution of spelling errors and the types of lexical items which affect error rates indicate that the lexical representations passed from the lexical output system to the Graphemic Output Buffer correspond to the productive morphemes of the language.

  10. Underestimates of sensible heat flux due to vertical velocity measurement errors in non-orthogonal sonic anemometers

    Treesearch

    John M. Frank; William J. Massman; Brent E. Ewers

    2013-01-01

    Sonic thermometry and anemometry are fundamental to all eddy-covariance studies of surface energy balance. Recent studies have suggested that sonic anemometers with non-orthogonal transducers can underestimate vertical wind velocity (w) and sensible heat flux (H) when compared to orthogonal designs. In this study we tested whether a non-orthogonal sonic anemometer (...

  11. MAGNIFICENT MAGNIFICATION: EXPLOITING THE OTHER HALF OF THE LENSING SIGNAL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huff, Eric M.; Graves, Genevieve J.

    2014-01-10

    We describe a new method for measuring galaxy magnification due to weak gravitational lensing. Our method makes use of a tight scaling relation between galaxy properties that are modified by gravitational lensing, such as apparent size, and other properties that are not, such as surface brightness. In particular, we use a version of the well-known fundamental plane relation for early-type galaxies. This modified "photometric fundamental plane" uses only photometric galaxy properties, eliminating the need for spectroscopic data. We present the first detection of magnification using this method by applying it to photometric catalogs from the Sloan Digital Sky Survey. This analysis shows that the derived magnification signal is within a factor of three of that available from conventional methods using gravitational shear. We suppress the dominant sources of systematic error and discuss modest improvements that may further enhance the lensing signal-to-noise available with this method. Moreover, some of the dominant sources of systematic error are substantially different from those of shear-based techniques. With this new technique, magnification becomes a useful measurement tool for the coming era of large ground-based surveys intending to measure gravitational lensing.

  12. Evolution of Altimetry Calibration and Future Challenges

    NASA Technical Reports Server (NTRS)

    Fu, Lee-Lueng; Haines, Bruce J.

    2012-01-01

    Over the past 20 years, altimetry calibration has evolved from an engineering-oriented exercise to a multidisciplinary endeavor driving the state of the art. This evolution has been spurred by the developing promise of altimetry to capture the large-scale, but small-amplitude, changes of the ocean surface containing the expression of climate change. The scope of altimeter calibration/validation programs has expanded commensurately. Early efforts focused on determining a constant range bias and verifying basic compliance of the data products with mission requirements. Contemporary investigations capture, with increasing accuracies, the spatial and temporal characteristics of errors in all elements of the measurement system. Dedicated calibration sites still provide the fundamental service of estimating absolute bias, but also enable long-term monitoring of the sea-surface height and constituent measurements. The use of a network of island and coastal tide gauges has provided the best perspective on the measurement stability, and revealed temporal variations of altimeter measurement system drift. The cross-calibration between successive missions provided fundamentally new information on the performance of altimetry systems. Spatially and temporally correlated errors pose challenges for future missions, underscoring the importance of cross-calibration of new measurements against the established record.

  13. Fundamental frequency estimation of singing voice

    NASA Astrophysics Data System (ADS)

    de Cheveigné, Alain; Henrich, Nathalie

    2002-05-01

    A method of fundamental frequency (F0) estimation recently developed for speech [de Cheveigné and Kawahara, J. Acoust. Soc. Am. (to be published)] was applied to singing voice. An electroglottograph signal recorded together with the microphone provided a reference by which estimates could be validated. Using standard parameter settings as for speech, error rates were low despite the wide range of F0s (about 100 to 1600 Hz). Most "errors" were due to irregular vibration of the vocal folds, a sharp formant resonance that reduced the waveform to a single harmonic, or fast F0 changes such as in high-amplitude vibrato. Our database (18 singers from baritone to soprano) included examples of diphonic singing for which melody is carried by variations of the frequency of a narrow formant rather than F0. By varying a parameter (the ratio of inharmonic to total power), the algorithm could be tuned to follow either frequency. Although the method has not been formally tested on a wide range of instruments, it seems appropriate for musical applications because it is accurate, accepts a wide range of F0s, and can be implemented with low latency for interactive applications. [Work supported by the Cognitique programme of the French Ministry of Research and Technology.]
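    The sketch below implements a YIN-style estimator in the spirit of the cited method, using the difference function with cumulative mean normalization and a simple threshold; the parameter values, the synthetic test tone and the omission of refinements such as parabolic interpolation are simplifications, not the published algorithm in full.

```python
import numpy as np

def estimate_f0(frame, fs, fmin=100.0, fmax=1600.0, threshold=0.1):
    """Rough YIN-style F0 estimate for one analysis frame (illustrative only)."""
    max_lag = int(fs / fmin)
    min_lag = int(fs / fmax)
    d = np.zeros(max_lag + 1)
    for tau in range(1, max_lag + 1):
        diff = frame[:-tau] - frame[tau:]
        d[tau] = np.sum(diff * diff)                 # difference function
    # Cumulative mean normalized difference function.
    cmnd = np.ones_like(d)
    running = 0.0
    for tau in range(1, max_lag + 1):
        running += d[tau]
        cmnd[tau] = d[tau] * tau / running if running > 0 else 1.0
    # First dip below the threshold within the allowed lag range.
    for tau in range(min_lag, max_lag + 1):
        if cmnd[tau] < threshold:
            return fs / tau
    return fs / (np.argmin(cmnd[min_lag:]) + min_lag)  # fall back to the global minimum

fs = 16000
t = np.arange(2048) / fs
frame = np.sin(2 * np.pi * 440.0 * t)                 # synthetic 440 Hz tone
print(estimate_f0(frame, fs))                         # roughly 440 Hz
```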

  14. MEDICAL ERROR: CIVIL AND LEGAL ASPECT.

    PubMed

    Buletsa, S; Drozd, O; Yunin, O; Mohilevskyi, L

    2018-03-01

    The scientific article focuses on research into the notion of medical error; its medical and legal aspects have been considered. The necessity of legislative consolidation of the notion of «medical error» and of criteria for its legal assessment has been substantiated. In writing the article, we used the empirical method as well as general scientific and comparative legal methods. A comparison of the concept of medical error in its civil and legal aspects was made from the point of view of Ukrainian, European and American scholars. It is noted that the problem of medical errors has been known since ancient times and exists throughout the world; regardless of the level of development of medicine, there is no country where doctors never make errors. According to the statistics, medical errors are among the first five causes of death worldwide. At the same time, the provision of medical services concerns practically all people. As a person and his life and health are recognized in Ukraine as the highest social values, medical services must be of high quality and effective. The provision of poor-quality medical services causes harm to health, and sometimes to people's lives; it may result in injury or even death. The right to health protection is one of the fundamental human rights guaranteed by the Constitution of Ukraine; therefore, the issue of medical errors and liability for them is extremely relevant. The authors conclude that the definition of the notion of «medical error» must be given legal consolidation. In addition, the legal assessment of medical errors must be based on uniform principles enshrined in legislation and confirmed by judicial practice.

  15. Estimating Building Age with 3D GIS

    NASA Astrophysics Data System (ADS)

    Biljecki, F.; Sindram, M.

    2017-10-01

    Building datasets (e.g. footprints in OpenStreetMap and 3D city models) are becoming increasingly available worldwide. However, the thematic (attribute) aspect is not always given attention, as many such datasets lack complete attributes. A prominent attribute of buildings is the year of construction, which is useful for some applications, but its availability may be scarce. This paper explores the potential of estimating the year of construction (or age) of buildings from other attributes using random forest regression. The developed method has a two-fold benefit: enriching datasets and quality control (verification of existing attributes). Experiments are carried out on a semantically rich LOD1 dataset of Rotterdam in the Netherlands using 9 attributes. The results are mixed: the accuracy in the estimation of building age depends on the available information used in the regression model. In the best scenario we have achieved predictions with an RMSE of 11 years, but in more realistic situations with limited knowledge about buildings the error is much larger (RMSE = 26 years). Hence the main conclusion of the paper is that inferring building age with 3D city models is possible to a certain extent because it reveals the approximate period of construction, but precise estimations remain a difficult task.
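    A minimal sketch of the regression setup, assuming synthetic building attributes rather than the Rotterdam LOD1 dataset: a random forest is fitted to predict the construction year from other attributes and evaluated by RMSE, mirroring the 11-26 year range reported in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Illustrative data: feature names and values are assumptions, not the paper's
# nine Rotterdam attributes.
rng = np.random.default_rng(0)
n = 2000
footprint_area = rng.uniform(40, 400, n)          # m^2
height = rng.uniform(3, 60, n)                    # m
num_dwellings = rng.integers(1, 20, n)
# Synthetic "year of construction" loosely tied to the attributes plus noise.
year = (1900 + 0.2 * height + 0.05 * footprint_area
        + 2.0 * num_dwellings + rng.normal(0, 15, n)).round()

X = np.column_stack([footprint_area, height, num_dwellings])
X_train, X_test, y_train, y_test = train_test_split(X, year, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)
rmse = mean_squared_error(y_test, pred) ** 0.5
print(f"RMSE = {rmse:.1f} years")
```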

  16. ChromatoGate: A Tool for Detecting Base Mis-Calls in Multiple Sequence Alignments by Semi-Automatic Chromatogram Inspection

    PubMed Central

    Alachiotis, Nikolaos; Vogiatzi, Emmanouella; Pavlidis, Pavlos; Stamatakis, Alexandros

    2013-01-01

    Automated DNA sequencers generate chromatograms that contain raw sequencing data. They also generate data that translates the chromatograms into molecular sequences of A, C, G, T, or N (undetermined) characters. Since chromatogram translation programs frequently introduce errors, a manual inspection of the generated sequence data is required. As sequence numbers and lengths increase, visual inspection and manual correction of chromatograms and corresponding sequences on a per-peak and per-nucleotide basis becomes an error-prone, time-consuming, and tedious process. Here, we introduce ChromatoGate (CG), an open-source software that accelerates and partially automates the inspection of chromatograms and the detection of sequencing errors for bidirectional sequencing runs. To provide users full control over the error correction process, a fully automated error correction algorithm has not been implemented. Initially, the program scans a given multiple sequence alignment (MSA) for potential sequencing errors, assuming that each polymorphic site in the alignment may be attributed to a sequencing error with a certain probability. The guided MSA assembly procedure in ChromatoGate detects chromatogram peaks of all characters in an alignment that lead to polymorphic sites, given a user-defined threshold. The threshold value represents the sensitivity of the sequencing error detection mechanism. After this pre-filtering, the user only needs to inspect a small number of peaks in every chromatogram to correct sequencing errors. Finally, we show that correcting sequencing errors is important, because population genetic and phylogenetic inferences can be misled by MSAs with uncorrected mis-calls. Our experiments indicate that estimates of population mutation rates can be affected two- to three-fold by uncorrected errors. PMID:24688709

  17. ChromatoGate: A Tool for Detecting Base Mis-Calls in Multiple Sequence Alignments by Semi-Automatic Chromatogram Inspection.

    PubMed

    Alachiotis, Nikolaos; Vogiatzi, Emmanouella; Pavlidis, Pavlos; Stamatakis, Alexandros

    2013-01-01

    Automated DNA sequencers generate chromatograms that contain raw sequencing data. They also generate data that translates the chromatograms into molecular sequences of A, C, G, T, or N (undetermined) characters. Since chromatogram translation programs frequently introduce errors, a manual inspection of the generated sequence data is required. As sequence numbers and lengths increase, visual inspection and manual correction of chromatograms and corresponding sequences on a per-peak and per-nucleotide basis becomes an error-prone, time-consuming, and tedious process. Here, we introduce ChromatoGate (CG), an open-source software that accelerates and partially automates the inspection of chromatograms and the detection of sequencing errors for bidirectional sequencing runs. To provide users full control over the error correction process, a fully automated error correction algorithm has not been implemented. Initially, the program scans a given multiple sequence alignment (MSA) for potential sequencing errors, assuming that each polymorphic site in the alignment may be attributed to a sequencing error with a certain probability. The guided MSA assembly procedure in ChromatoGate detects chromatogram peaks of all characters in an alignment that lead to polymorphic sites, given a user-defined threshold. The threshold value represents the sensitivity of the sequencing error detection mechanism. After this pre-filtering, the user only needs to inspect a small number of peaks in every chromatogram to correct sequencing errors. Finally, we show that correcting sequencing errors is important, because population genetic and phylogenetic inferences can be misled by MSAs with uncorrected mis-calls. Our experiments indicate that estimates of population mutation rates can be affected two- to three-fold by uncorrected errors.

  18. Synchrotron far-infrared spectroscopy of the two lowest fundamental modes of 1,1-difluoroethane

    NASA Astrophysics Data System (ADS)

    Wong, Andy; Thompson, Christopher D.; Appadoo, Dominique R. T.; Plathe, Ruth; Roy, Pascale; Manceron, Laurent; Barros, Joanna; McNaughton, Don

    2013-08-01

    The far-infrared (FIR) spectrum (50-600 cm⁻¹) of 1,1-difluoroethane was recorded using the high-resolution infrared AILES beamline at the Soleil synchrotron. A ro-vibrational assignment was performed on the lowest-wavenumber, low-intensity 18¹₀ and 17¹₀ modes, yielding band centres of 224.241903 (10) cm⁻¹ and 384.252538 (13) cm⁻¹, respectively. A total of 965 and 2031 FIR transitions were assigned to the 18¹₀ and 17¹₀ fundamentals, respectively. Previously measured pure rotational transitions from the upper states were included in the respective fits to yield improved rotational and centrifugal distortion constants. The 18²₁ hot band was observed within the fundamental band, with 369 FIR transitions assigned and co-fitted with the fundamental to give a band centre of 431.956502 (39) cm⁻¹ for ν₁₈ = 2. The 18²₀ overtone was observed with 586 transitions assigned and fitted to give a band centre of 431.952763 (23) cm⁻¹ for ν₁₈ = 2. The difference in energy is attributed to a torsional splitting of 0.003740 (45) cm⁻¹ in the ν₁₈ = 2 state. Two hot bands originating from the ν₁₈ = 1 and ν₁₇ = 1 states were observed within the 17¹₀ fundamental.

  19. Decision support system for determining the contact lens for refractive errors patients with classification ID3

    NASA Astrophysics Data System (ADS)

    Situmorang, B. H.; Setiawan, M. P.; Tosida, E. T.

    2017-01-01

    Refractive errors are abnormalities of the refraction of light in which images do not focus precisely on the retina, resulting in blurred vision [1]. Refractive errors require the patient to wear glasses or contact lenses in order for eyesight to return to normal. The appropriate glasses or contact lenses differ from person to person and are influenced by patient age, the amount of tear production, the vision prescription, and astigmatism. Because the eye is an organ of the human body that is very important for sight, accuracy in determining which glasses or contact lenses will be used is required. This research aims to develop a decision support system that can produce the right contact-lens recommendation for patients with refractive errors with 100% accuracy. The Iterative Dichotomiser 3 (ID3) classification method generates gain and entropy values for attributes that include the sample code, patient age, astigmatism, the rate of tear production, and the vision prescription, together with the classes that determine the outcome of the decision tree. The eye specialist's evaluation of the training data gave an accuracy rate of 96.7% and an error rate of 3.3%; the test using a confusion matrix gave an accuracy rate of 96.1% and an error rate of 3.1%; and the testing data gave an accuracy rate of 100% and an error rate of 0%.
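    The entropy and information-gain computations at the heart of ID3 can be sketched in a few lines; the toy records below are illustrative and are not the study's clinical data.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in Counter(labels).values())

def information_gain(records, attribute, target):
    """Entropy reduction obtained by splitting the records on one attribute."""
    base = entropy([r[target] for r in records])
    remainder = 0.0
    for value in {r[attribute] for r in records}:
        subset = [r[target] for r in records if r[attribute] == value]
        remainder += len(subset) / len(records) * entropy(subset)
    return base - remainder

# Toy records (hypothetical): ID3 splits on the attribute with the highest
# gain, then recurses on each branch until the leaves are pure.
records = [
    {"age": "young",  "tears": "reduced", "lens": "none"},
    {"age": "young",  "tears": "normal",  "lens": "soft"},
    {"age": "senior", "tears": "reduced", "lens": "none"},
    {"age": "senior", "tears": "normal",  "lens": "hard"},
]
for attr in ("age", "tears"):
    print(attr, information_gain(records, attr, "lens"))
```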

  20. Seasonal variability of stratospheric methane: implications for constraining tropospheric methane budgets using total column observations

    NASA Astrophysics Data System (ADS)

    Saad, Katherine M.; Wunch, Debra; Deutscher, Nicholas M.; Griffith, David W. T.; Hase, Frank; De Mazière, Martine; Notholt, Justus; Pollard, David F.; Roehl, Coleen M.; Schneider, Matthias; Sussmann, Ralf; Warneke, Thorsten; Wennberg, Paul O.

    2016-11-01

    Global and regional methane budgets are markedly uncertain. Conventionally, estimates of methane sources are derived by bridging emissions inventories with atmospheric observations employing chemical transport models. The accuracy of this approach requires correctly simulating advection and chemical loss such that modeled methane concentrations scale with surface fluxes. When total column measurements are assimilated into this framework, modeled stratospheric methane introduces additional potential for error. To evaluate the impact of such errors, we compare Total Carbon Column Observing Network (TCCON) and GEOS-Chem total and tropospheric column-averaged dry-air mole fractions of methane. We find that the model's stratospheric contribution to the total column is insensitive to perturbations to the seasonality or distribution of tropospheric emissions or loss. In the Northern Hemisphere, we identify disagreement between the measured and modeled stratospheric contribution, which increases as the tropopause altitude decreases, and a temporal phase lag in the model's tropospheric seasonality driven by transport errors. Within the context of GEOS-Chem, we find that the errors in tropospheric advection partially compensate for the stratospheric methane errors, masking inconsistencies between the modeled and measured tropospheric methane. These seasonally varying errors alias into source attributions resulting from model inversions. In particular, we suggest that the tropospheric phase lag error leads to large misdiagnoses of wetland emissions in the high latitudes of the Northern Hemisphere.

  1. Noise management to achieve superiority in quantum information systems

    NASA Astrophysics Data System (ADS)

    Nemoto, Kae; Devitt, Simon; Munro, William J.

    2017-06-01

    Quantum information systems are expected to exhibit superiority compared with their classical counterparts. This superiority arises from the quantum coherences present in these quantum systems, which are obviously absent in classical ones. To exploit such quantum coherences, it is essential to control the phase information in the quantum state. The phase is analogue in nature, rather than binary. This makes quantum information technology fundamentally different from our classical digital information technology. In this paper, we analyse error sources and illustrate how these errors must be managed for the system to achieve the required fidelity and a quantum superiority. This article is part of the themed issue 'Quantum technology for the 21st century'.

  2. Studies of contamination of three broiler breeder houses with Salmonella enteritidis before and after cleansing and disinfection.

    PubMed

    Davies, R H; Wray, C

    1996-01-01

    Three broiler breeder houses on three different sites were sampled before and after cleansing and disinfection. None of the farms achieved total elimination of Salmonella enteritidis from the poultry house environment but substantial improvements were seen when errors in the cleansing and disinfection protocol in the first house had been corrected. Fundamental errors such as over-dilution and inconsistent application of disinfectants were observed despite supervision of the process by technical advisors. In each of the three poultry units failure to eliminate a mouse population that was infected with S. enteritidis was likely to be the most important hazard for the next flock.

  3. Noise management to achieve superiority in quantum information systems.

    PubMed

    Nemoto, Kae; Devitt, Simon; Munro, William J

    2017-08-06

    Quantum information systems are expected to exhibit superiority compared with their classical counterparts. This superiority arises from the quantum coherences present in these quantum systems, which are obviously absent in classical ones. To exploit such quantum coherences, it is essential to control the phase information in the quantum state. The phase is analogue in nature, rather than binary. This makes quantum information technology fundamentally different from our classical digital information technology. In this paper, we analyse error sources and illustrate how these errors must be managed for the system to achieve the required fidelity and a quantum superiority. This article is part of the themed issue 'Quantum technology for the 21st century'. © 2017 The Author(s).

  4. Teamwork as an Essential Component of High-Reliability Organizations

    PubMed Central

    Baker, David P; Day, Rachel; Salas, Eduardo

    2006-01-01

    Organizations are increasingly becoming dynamic and unstable. This evolution has given rise to greater reliance on teams and increased complexity in terms of team composition, skills required, and degree of risk involved. High-reliability organizations (HROs) are those that exist in such hazardous environments where the consequences of errors are high, but the occurrence of error is extremely low. In this article, we argue that teamwork is an essential component of achieving high reliability particularly in health care organizations. We describe the fundamental characteristics of teams, review strategies in team training, demonstrate the criticality of teamwork in HROs and finally, identify specific challenges the health care community must address to improve teamwork and enhance reliability. PMID:16898980

  5. Skin-deep diagnosis: affective bias and zebra retreat complicating the diagnosis of systemic sclerosis.

    PubMed

    Miller, Chad S

    2013-01-01

    Nearly half of medical errors can be attributed to an error of clinical reasoning or decision making. It is estimated that the correct diagnosis is missed or delayed in between 5% and 14% of acute hospital admissions. Through understanding why and how physicians make these errors, it is hoped that strategies can be developed to decrease the number of these errors. In the present case, a patient presented with dyspnea, gastrointestinal symptoms and weight loss; the diagnosis was initially missed when the treating physicians took mental shortcuts and relied on heuristics. Heuristics have an inherent bias that can lead to faulty reasoning or conclusions, especially in complex or difficult cases. Affective bias, which is the overinvolvement of emotion in clinical decision making, limited the available information for diagnosis because of the hesitancy to acquire a full history and perform a complete physical examination in this patient. Zebra retreat, another type of bias, occurs when a rare diagnosis figures prominently on the differential diagnosis but the physician retreats from it for various reasons. Zebra retreat also factored into the delayed diagnosis. Through the description of these clinical reasoning errors in an actual case, it is hoped that future errors can be prevented and that additional research in this area will be inspired.

  6. Study of style effects on OCR errors in the MEDLINE database

    NASA Astrophysics Data System (ADS)

    Garrison, Penny; Davis, Diane L.; Andersen, Tim L.; Barney Smith, Elisa H.

    2005-01-01

    The National Library of Medicine has developed a system for the automatic extraction of data from scanned journal articles to populate the MEDLINE database. Although the 5-engine OCR system used in this process exhibits good performance overall, it does make errors in character recognition that must be corrected in order for the process to achieve the requisite accuracy. The correction process works by feeding words that have characters with less than 100% confidence (as determined automatically by the OCR engine) to a human operator who then must manually verify the word or correct the error. The majority of these errors are contained in the affiliation information zone where the characters are in italics or small fonts. Therefore only affiliation information data is used in this research. This paper examines the correlation between OCR errors and various character attributes in the MEDLINE database, such as font size, italics, bold, etc. and OCR confidence levels. The motivation for this research is that if a correlation between the character style and types of errors exists it should be possible to use this information to improve operator productivity by increasing the probability that the correct word option is presented to the human editor. We have determined that this correlation exists, in particular for the case of characters with diacritics.
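
    The correlation the authors describe can be probed with a simple conditional error-rate comparison. The sketch below is illustrative only: it assumes a hypothetical table of operator-verified characters with boolean style flags and an error flag, and none of the column names come from the actual MEDLINE system.

    ```python
    # Sketch: OCR error rate conditioned on character-style attributes.
    # The DataFrame columns ('italic', 'diacritic', 'ocr_error') are hypothetical.
    import pandas as pd

    chars = pd.DataFrame({
        "italic":    [True, True, False, False, True, False],
        "diacritic": [False, True, False, True, False, False],
        "ocr_error": [True, True, False, True, False, False],  # OCR disagreed with verified text
    })

    # Compare the error rate for characters with a style attribute vs. without it.
    for attr in ["italic", "diacritic"]:
        with_attr = chars.loc[chars[attr], "ocr_error"].mean()
        without_attr = chars.loc[~chars[attr], "ocr_error"].mean()
        print(f"{attr}: error rate {with_attr:.2f} with vs {without_attr:.2f} without")
    ```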

  7. Perceived barriers to medical-error reporting: an exploratory investigation.

    PubMed

    Uribe, Claudia L; Schweikhart, Sharon B; Pathak, Dev S; Dow, Merrell; Marsh, Gail B

    2002-01-01

    Medical-error reporting is an essential component for patient safety enhancement. Unfortunately, medical errors are largely underreported across healthcare institutions. This problem can be attributed to different factors and barriers present at organizational and individual levels that ultimately prevent individuals from generating the report. This study explored the factors that affect medical-error reporting among physicians and nurses at a large academic medical center located in the midwest United States. A nominal group session was conducted to identify the most relevant factors that act as barriers for error reporting. These factors were then used to design a questionnaire that explored the likelihood of the factors to act as barriers and their likelihood to be modified. Using these two parameters, the results were analyzed and combined into a Factor Relevance Matrix. The matrix identifies the factors for which immediate actions should be undertaken to improve medical-error reporting (immediate action factors). It also identifies factors that require long-term strategies (long-term strategy factors) as well as factors that the organization should be aware of but that are of lower priority (awareness factors). The strategies outlined in this study may assist healthcare organizations in improving medical-error reporting, as part of the efforts toward patient-safety enhancement. Although factors affecting medical-error reporting may vary between different organizations, the process used in identifying the factors and the Factor Relevance Matrix developed in this study are easily adaptable to any organizational setting.

  8. Semantic, Lexical, and Phonological Influences on the Production of Verb Inflections in Agrammatic Aphasia

    ERIC Educational Resources Information Center

    Faroqi-Shah, Yasmeen; Thompson, Cynthia K.

    2004-01-01

    Verb inflection errors, often seen in agrammatic aphasic speech, have been attributed to either impaired encoding of diacritical features that specify tense and aspect, or to impaired affixation during phonological encoding. In this study we examined the effect of semantic markedness, word form frequency and affix frequency, as well as accuracy…

  9. Psychological Effects of Technological/Human-Caused Environmental Disasters: Examination of the Navajo and Uranium

    ERIC Educational Resources Information Center

    Markstrom, Carol A.; Charley, Perry H.

    2003-01-01

    Disasters can be defined as catastrophic events that challenge the normal range of human coping ability. The technological/human-caused disaster, a classification of interest in this article, is attributable to human error or misjudgment. Lower socioeconomic status and race intersect in the heightened risk for technological/human-caused disasters…

  10. Interpreting Variance Components as Evidence for Reliability and Validity.

    ERIC Educational Resources Information Center

    Kane, Michael T.

    The reliability and validity of measurement is analyzed by a sampling model based on generalizability theory. A model for the relationship between a measurement procedure and an attribute is developed from an analysis of how measurements are used and interpreted in science. The model provides a basis for analyzing the concept of an error of…

  11. Model-based mean square error estimators for k-nearest neighbour predictions and applications using remotely sensed data for forest inventories

    Treesearch

    Steen Magnussen; Ronald E. McRoberts; Erkki O. Tomppo

    2009-01-01

    New model-based estimators of the uncertainty of pixel-level and areal k-nearest neighbour (knn) predictions of attribute Y from remotely-sensed ancillary data X are presented. Non-parametric functions predict Y from scalar 'Single Index Model' transformations of X. Variance functions generated...
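
    A minimal sketch of the pixel-level kNN prediction step described above, with synthetic stand-ins for the field plots and the remotely sensed ancillary data X; the model-based variance estimators that are the paper's actual contribution are not reproduced here.

    ```python
    # k-nearest-neighbour prediction of a forest attribute Y from ancillary data X.
    # All data below are synthetic placeholders for field plots and image pixels.
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(0)
    X_plots = rng.normal(size=(200, 4))              # e.g. spectral bands at field plots
    y_plots = X_plots @ np.array([2.0, -1.0, 0.5, 0.0]) + rng.normal(scale=0.5, size=200)

    knn = KNeighborsRegressor(n_neighbors=5).fit(X_plots, y_plots)

    X_pixels = rng.normal(size=(10, 4))              # target pixels without field data
    y_hat = knn.predict(X_pixels)                    # pixel-level knn predictions of Y
    print(y_hat.round(2))
    ```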

  12. Measurement Error Correction Formula for Cluster-Level Group Differences in Cluster Randomized and Observational Studies

    ERIC Educational Resources Information Center

    Cho, Sun-Joo; Preacher, Kristopher J.

    2016-01-01

    Multilevel modeling (MLM) is frequently used to detect cluster-level group differences in cluster randomized trial and observational studies. Group differences on the outcomes (posttest scores) are detected by controlling for the covariate (pretest scores) as a proxy variable for unobserved factors that predict future attributes. The pretest and…

  13. On the nature of cavities on protein surfaces: application to the identification of drug-binding sites.

    PubMed

    Nayal, Murad; Honig, Barry

    2006-06-01

    In this article we introduce a new method for the identification and the accurate characterization of protein surface cavities. The method is encoded in the program SCREEN (Surface Cavity REcognition and EvaluatioN). As a first test of the utility of our approach we used SCREEN to locate and analyze the surface cavities of a nonredundant set of 99 proteins cocrystallized with drugs. We find that this set of proteins has on average about 14 distinct cavities per protein. In all cases, a drug is bound at one (and sometimes more than one) of these cavities. Using cavity size alone as a criterion for predicting drug-binding sites yields a high balanced error rate of 15.7%, with only 71.7% coverage. Here we characterize each surface cavity by computing a comprehensive set of 408 physicochemical, structural, and geometric attributes. By applying modern machine learning techniques (Random Forests) we were able to develop a classifier that can identify drug-binding cavities with a balanced error rate of 7.2% and coverage of 88.9%. Only 18 of the 408 cavity attributes had a statistically significant role in the prediction. Of these 18 important attributes, almost all involved size and shape rather than physicochemical properties of the surface cavity. The implications of these results are discussed. A SCREEN Web server is available at http://interface.bioc.columbia.edu/screen. 2006 Wiley-Liss, Inc.
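
    The classification step lends itself to a short sketch: a random forest over cavity-attribute vectors scored with the balanced error rate (one minus the mean per-class recall). The 408 real SCREEN attributes are replaced here by synthetic features, so the numbers are not comparable to those reported.

    ```python
    # Random-forest classification of "drug-binding" vs "other" cavities, scored
    # with the balanced error rate. Features and labels are synthetic placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import balanced_accuracy_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    X = rng.normal(size=(1400, 20))                                  # cavity attribute vectors
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1400) > 1.5).astype(int)  # binding flag

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

    # Balanced error rate = 1 - balanced accuracy (mean recall over the two classes).
    ber = 1.0 - balanced_accuracy_score(y_te, clf.predict(X_te))
    print(f"balanced error rate: {ber:.3f}")
    ```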

  14. A circadian rhythm in skill-based errors in aviation maintenance.

    PubMed

    Hobbs, Alan; Williamson, Ann; Van Dongen, Hans P A

    2010-07-01

    In workplaces where activity continues around the clock, human error has been observed to exhibit a circadian rhythm, with a characteristic peak in the early hours of the morning. Errors are commonly distinguished by the nature of the underlying cognitive failure, particularly the level of intentionality involved in the erroneous action. The Skill-Rule-Knowledge (SRK) framework of Rasmussen is used widely in the study of industrial errors and accidents. The SRK framework describes three fundamental types of error, according to whether behavior is under the control of practiced sensori-motor skill routines with minimal conscious awareness; is guided by implicit or explicit rules or expertise; or where the planning of actions requires the conscious application of domain knowledge. Up to now, examinations of circadian patterns of industrial errors have not distinguished between different types of error. Consequently, it is not clear whether all types of error exhibit the same circadian rhythm. A survey was distributed to aircraft maintenance personnel in Australia. Personnel were invited to anonymously report a safety incident and were prompted to describe, in detail, the human involvement (if any) that contributed to it. A total of 402 airline maintenance personnel reported an incident, providing 369 descriptions of human error in which the time of the incident was reported and sufficient detail was available to analyze the error. Errors were categorized using a modified version of the SRK framework, in which errors are categorized as skill-based, rule-based, or knowledge-based, or as procedure violations. An independent check confirmed that the SRK framework had been applied with sufficient consistency and reliability. Skill-based errors were the most common form of error, followed by procedure violations, rule-based errors, and knowledge-based errors. The frequency of errors was adjusted for the estimated proportion of workers present at work/each hour of the day, and the 24 h pattern of each error type was examined. Skill-based errors exhibited a significant circadian rhythm, being most prevalent in the early hours of the morning. Variation in the frequency of rule-based errors, knowledge-based errors, and procedure violations over the 24 h did not reach statistical significance. The results suggest that during the early hours of the morning, maintenance technicians are at heightened risk of "absent minded" errors involving failures to execute action plans as intended.

  15. Design of experiments-based monitoring of critical quality attributes for the spray-drying process of insulin by NIR spectroscopy.

    PubMed

    Maltesen, Morten Jonas; van de Weert, Marco; Grohganz, Holger

    2012-09-01

    Moisture content and aerodynamic particle size are critical quality attributes for spray-dried protein formulations. In this study, spray-dried insulin powders intended for pulmonary delivery were produced applying design of experiments methodology. Near infrared spectroscopy (NIR) in combination with preprocessing and multivariate analysis in the form of partial least squares projections to latent structures (PLS) were used to correlate the spectral data with moisture content and aerodynamic particle size measured by a time of flight principle. PLS models predicting the moisture content were based on the chemical information of the water molecules in the NIR spectrum. Models yielded prediction errors (RMSEP) between 0.39% and 0.48% with thermal gravimetric analysis used as reference method. The PLS models predicting the aerodynamic particle size were based on baseline offset in the NIR spectra and yielded prediction errors between 0.27 and 0.48 μm. The morphology of the spray-dried particles had a significant impact on the predictive ability of the models. Good predictive models could be obtained for spherical particles with a calibration error (RMSECV) of 0.22 μm, whereas wrinkled particles resulted in much less robust models with a Q² of 0.69. Based on the results in this study, NIR is a suitable tool for process analysis of the spray-drying process and for control of moisture content and particle size, in particular for smooth and spherical particles.
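
    A hedged sketch of the chemometric step: PLS regression from spectra to moisture content, with RMSEP computed on a held-out set. The spectra, the number of latent components, and the reference values below are synthetic placeholders rather than the study's data.

    ```python
    # PLS regression from (synthetic) NIR spectra to moisture content, with RMSEP.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(2)
    spectra = rng.normal(size=(120, 400))                   # absorbance at 400 wavelengths
    moisture = 3 * spectra[:, 150:160].mean(axis=1) + 2 + rng.normal(scale=0.1, size=120)

    X_tr, X_te, y_tr, y_te = train_test_split(spectra, moisture, random_state=0)
    pls = PLSRegression(n_components=5).fit(X_tr, y_tr)

    # Root mean squared error of prediction on the held-out spectra.
    rmsep = np.sqrt(np.mean((pls.predict(X_te).ravel() - y_te) ** 2))
    print(f"RMSEP: {rmsep:.3f}")
    ```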

  16. The Benefits of Maximum Likelihood Estimators in Predicting Bulk Permeability and Upscaling Fracture Networks

    NASA Astrophysics Data System (ADS)

    Emanuele Rizzo, Roberto; Healy, David; De Siena, Luca

    2016-04-01

    The success of any predictive model is largely dependent on the accuracy with which its parameters are known. When characterising fracture networks in fractured rock, one of the main issues is accurately scaling the parameters governing the distribution of fracture attributes. Optimal characterisation and analysis of fracture attributes (lengths, apertures, orientations and densities) is fundamental to the estimation of permeability and fluid flow, which are of primary importance in a number of contexts including: hydrocarbon production from fractured reservoirs; geothermal energy extraction; and deeper Earth systems, such as earthquakes and ocean floor hydrothermal venting. Our work links outcrop fracture data to modelled fracture networks in order to numerically predict bulk permeability. We collected outcrop data from a highly fractured upper Miocene biosiliceous mudstone formation, cropping out along the coastline north of Santa Cruz (California, USA). Using outcrop fracture networks as analogues for subsurface fracture systems has several advantages, because key fracture attributes such as spatial arrangements and lengths can be effectively measured only on outcrops [1]. However, a limitation when dealing with outcrop data is the relative sparseness of natural data due to the intrinsic finite size of the outcrops. We make use of a statistical approach for the overall workflow, starting from data collection with the Circular Windows Method [2]. Then we analyse the data statistically using Maximum Likelihood Estimators, which provide greater accuracy compared to the more commonly used Least Squares linear regression when investigating distribution of fracture attributes. Finally, we estimate the bulk permeability of the fractured rock mass using Oda's tensorial approach [3]. The higher quality of this statistical analysis is fundamental: better statistics of the fracture attributes means more accurate permeability estimation, since the fracture attributes feed directly into the permeability calculations. The application of Maximum Likelihood Estimators can have important consequences, especially when we aim to predict the tendency of fracture attributes towards smaller and larger scales than those observed, in order to build consistent, useable models from outcrop observations. The procedures presented here aim to understand whether the average permeability of a fracture network can be predicted, reducing its uncertainties; and if outcrop measurements of fracture attributes can be used directly to generate statistically identical fracture network models, which can then be easily up-scaled into larger areas or volumes. Gale et al. "Natural Fracture in shale: A review and new observations", AAPG Bulletin 98.11 (2014). Mauldon et al. "Circular scanlines and circular windows: new tools for characterizing the geometry of fracture traces", Journal of Structural Geology, 23 (2001). Oda "Permeability tensor for discontinuous rock masses", Geotechnique 35.4 (1985).
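
    The contrast between the two estimators can be made concrete for a power-law model of fracture lengths, one commonly assumed distribution form; the sketch below compares the maximum-likelihood (Hill) estimate of the exponent with a least-squares fit to the log-log empirical CCDF. The field workflow itself (circular windows, Oda's permeability tensor) is not reproduced.

    ```python
    # Maximum-likelihood vs. log-log least-squares fit of a power-law exponent
    # for synthetic fracture lengths (Pareto-distributed samples).
    import numpy as np

    rng = np.random.default_rng(3)
    alpha_true, x_min = 2.5, 0.1
    lengths = x_min * (1 - rng.uniform(size=2000)) ** (-1 / (alpha_true - 1))

    # Maximum-likelihood (Hill) estimator of the exponent.
    alpha_mle = 1 + len(lengths) / np.sum(np.log(lengths / x_min))

    # Least-squares fit of the slope of the empirical CCDF on log-log axes.
    x_sorted = np.sort(lengths)
    ccdf = 1 - np.arange(1, len(x_sorted) + 1) / len(x_sorted)
    mask = ccdf > 0
    slope, _ = np.polyfit(np.log(x_sorted[mask]), np.log(ccdf[mask]), 1)
    alpha_ls = 1 - slope                  # CCDF slope is -(alpha - 1) for a power law

    print(f"MLE: {alpha_mle:.2f}, least squares: {alpha_ls:.2f}, true: {alpha_true}")
    ```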

  17. A fundamental model of quasi-static wheelchair biomechanics.

    PubMed

    Leary, M; Gruijters, J; Mazur, M; Subic, A; Burton, M; Fuss, F K

    2012-11-01

    The performance of a wheelchair system is a function of user anatomy, including arm segment lengths and muscle parameters, and wheelchair geometry, in particular, seat position relative to the wheel hub. To quantify performance, researchers have proposed a number of predictive models. In particular, the model proposed by Richter is extremely useful for providing initial analysis as it is simple to apply and provides insight into the peak and transient joint torques required to achieve a given angular velocity. The work presented in this paper identifies and corrects a critical error; specifically that the Richter model incorrectly predicts that shoulder torque is due to an anteflexing muscle moment. This identified error was confirmed analytically, graphically and numerically. The authors have developed a corrected, fundamental model which identifies that the shoulder anteflexes only in the first half of the push phase and retroflexes in the second half. The fundamental model has been extended by the authors to obtain novel data on joint and net power as a function of push progress. These outcomes indicate that shoulder power is positive in the first half of the push phase (concentrically contracting anteflexors) and negative in the second half (eccentrically contracting retroflexors). As the eccentric contraction introduces adverse negative power, these considerations are essential when optimising wheelchair design in terms of the user's musculoskeletal system. The proposed fundamental model was applied to assess the effect of vertical seat position on joint torques and power. Increasing the seat height increases the peak positive (concentric) shoulder and elbow torques while reducing the associated (eccentric) peak negative torque. Furthermore, the transition from positive to negative shoulder torque (as well as from positive to negative power) occurs later in the push phase with increasing seat height. These outcomes will aid in the optimisation of manual wheelchair propulsion biomechanics by minimising adverse negative muscle power, and allow joint torques to be manipulated as required to minimise injury or aid in rehabilitation. Copyright © 2012. Published by Elsevier Ltd.

  18. Forecasting of monsoon heavy rains: challenges in NWP

    NASA Astrophysics Data System (ADS)

    Sharma, Kuldeep; Ashrit, Raghavendra; Iyengar, Gopal; Bhatla, R.; Rajagopal, E. N.

    2016-05-01

    The last decade has seen a tremendous improvement in the forecasting skill of numerical weather prediction (NWP) models. This is attributed to increased sophistication in NWP models, which resolve complex physical processes, advanced data assimilation, increased grid resolution and satellite observations. However, prediction of heavy rains is still a challenge, since the models exhibit large errors in amounts as well as in spatial and temporal distribution. Two state-of-the-art NWP models have been investigated over the Indian monsoon region to assess their ability to predict heavy rainfall events. The unified model operational at the National Center for Medium Range Weather Forecasting (NCUM) and the unified model operational at the Australian Bureau of Meteorology (Australian Community Climate and Earth-System Simulator -- Global (ACCESS-G)) are used in this study. The recent (JJAS 2015) Indian monsoon season witnessed 6 depressions and 2 cyclonic storms which resulted in heavy rains and flooding. The CRA method of verification allows the decomposition of forecast errors in terms of error in the rainfall volume, pattern and location. The case-by-case study using the CRA technique shows that the contributions to rainfall error from pattern and displacement errors are large, while the contribution from error in the predicted rainfall volume is smallest.
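
    A rough sketch of a CRA-style decomposition on synthetic rain fields: the total squared error is split into a displacement part (removed by the best rigid shift of the forecast), a volume part (error in the field means), and a residual pattern part. The brute-force shift search and the periodic np.roll boundary are simplifications of the published method.

    ```python
    # Illustrative CRA-style decomposition of forecast error into
    # displacement, volume and pattern components (synthetic fields).
    import numpy as np

    def cra_decomposition(forecast, observed, max_shift=5):
        mse_total = np.mean((forecast - observed) ** 2)
        # Best rigid shift of the forecast (brute force over small displacements).
        mse_shifted = mse_total
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                shifted = np.roll(np.roll(forecast, dy, axis=0), dx, axis=1)
                mse_shifted = min(mse_shifted, np.mean((shifted - observed) ** 2))
        displacement = mse_total - mse_shifted             # error removed by relocation
        volume = (forecast.mean() - observed.mean()) ** 2  # error in mean rain amount
        pattern = mse_shifted - volume                     # what remains after both
        return displacement, volume, pattern

    rng = np.random.default_rng(4)
    obs = rng.gamma(2.0, 2.0, size=(40, 40))
    fcst = 1.2 * np.roll(obs, 3, axis=1) + rng.normal(scale=0.5, size=(40, 40))
    print([round(v, 2) for v in cra_decomposition(fcst, obs)])
    ```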

  19. Reduced discretization error in HZETRN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slaba, Tony C., E-mail: Tony.C.Slaba@nasa.gov; Blattnig, Steve R., E-mail: Steve.R.Blattnig@nasa.gov; Tweed, John, E-mail: jtweed@odu.edu

    2013-02-01

    The deterministic particle transport code HZETRN is an efficient analysis tool for studying the effects of space radiation on humans, electronics, and shielding materials. In a previous work, numerical methods in the code were reviewed, and new methods were developed that further improved efficiency and reduced overall discretization error. It was also shown that the remaining discretization error could be attributed to low energy light ions (A < 4) with residual ranges smaller than the physical step-size taken by the code. Accurately resolving the spectrum of low energy light particles is important in assessing risk associated with astronaut radiation exposure. In this work, modifications to the light particle transport formalism are presented that accurately resolve the spectrum of low energy light ion target fragments. The modified formalism is shown to significantly reduce overall discretization error and allows a physical approximation to be removed. For typical step-sizes and energy grids used in HZETRN, discretization errors for the revised light particle transport algorithms are shown to be less than 4% for aluminum and water shielding thicknesses as large as 100 g/cm² exposed to both solar particle event and galactic cosmic ray environments.

  20. Quantifying individual variation in the propensity to attribute incentive salience to reward cues.

    PubMed

    Meyer, Paul J; Lovic, Vedran; Saunders, Benjamin T; Yager, Lindsay M; Flagel, Shelly B; Morrow, Jonathan D; Robinson, Terry E

    2012-01-01

    If reward-associated cues acquire the properties of incentive stimuli they can come to powerfully control behavior, and potentially promote maladaptive behavior. Pavlovian incentive stimuli are defined as stimuli that have three fundamental properties: they are attractive, they are themselves desired, and they can spur instrumental actions. We have found, however, that there is considerable individual variation in the extent to which animals attribute Pavlovian incentive motivational properties ("incentive salience") to reward cues. The purpose of this paper was to develop criteria for identifying and classifying individuals based on their propensity to attribute incentive salience to reward cues. To do this, we conducted a meta-analysis of a large sample of rats (N = 1,878) subjected to a classic Pavlovian conditioning procedure. We then used the propensity of animals to approach a cue predictive of reward (one index of the extent to which the cue was attributed with incentive salience), to characterize two behavioral phenotypes in this population: animals that approached the cue ("sign-trackers") vs. others that approached the location of reward delivery ("goal-trackers"). This variation in Pavlovian approach behavior predicted other behavioral indices of the propensity to attribute incentive salience to reward cues. Thus, the procedures reported here should be useful for making comparisons across studies and for assessing individual variation in incentive salience attribution in small samples of the population, or even for classifying single animals.

  1. Sleep quality, posttraumatic stress, depression, and human errors in train drivers: a population-based nationwide study in South Korea.

    PubMed

    Jeon, Hong Jin; Kim, Ji-Hae; Kim, Bin-Na; Park, Seung Jin; Fava, Maurizio; Mischoulon, David; Kang, Eun-Ho; Roh, Sungwon; Lee, Dongsoo

    2014-12-01

    Human error is defined as an unintended error that is attributable to humans rather than machines, and that is important to avoid to prevent accidents. We aimed to investigate the association between sleep quality and human errors among train drivers. Design: cross-sectional. Setting: population-based. Participants: a sample of 5,480 subjects who were actively working as train drivers were recruited in South Korea; 4,634 drivers completed all questionnaires (response rate 84.6%). Interventions: none. Measurements: the Pittsburgh Sleep Quality Index (PSQI), the Center for Epidemiologic Studies Depression Scale (CES-D), the Impact of Event Scale-Revised (IES-R), the State-Trait Anxiety Inventory (STAI), and the Korean Occupational Stress Scale (KOSS). Results: of 4,634 train drivers, 349 (7.5%) showed more than one human error per 5 y. Human errors were associated with poor sleep quality, higher PSQI total scores, short sleep duration at night, and longer sleep latency. Among train drivers with poor sleep quality, those who experienced severe posttraumatic stress showed a significantly higher number of human errors than those without. Multiple logistic regression analysis showed that human errors were significantly associated with poor sleep quality and posttraumatic stress, whereas there were no significant associations with depression, trait and state anxiety, and work stress after adjusting for age, sex, education years, marital status, and career duration. Poor sleep quality was found to be associated with more human errors in train drivers, especially in those who experienced severe posttraumatic stress. © 2014 Associated Professional Sleep Societies, LLC.
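
    The reported multiple logistic regression can be mirrored with a small synthetic example; the variable names, effect sizes, and reduced adjustment set below are placeholders rather than the study's data.

    ```python
    # Logistic regression of human error (0/1) on sleep quality and posttraumatic
    # stress, adjusting for age. All values are simulated for illustration.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    n = 1000
    psqi = rng.integers(0, 21, size=n)         # Pittsburgh Sleep Quality Index
    ies_r = rng.integers(0, 60, size=n)        # Impact of Event Scale-Revised
    age = rng.integers(25, 60, size=n)

    logit = -4 + 0.10 * psqi + 0.03 * ies_r + 0.0 * age
    error = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

    X = sm.add_constant(np.column_stack([psqi, ies_r, age]))
    fit = sm.Logit(error, X).fit(disp=False)
    print(fit.params)                          # log-odds per unit of each predictor
    ```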

  2. Estimates of fetch-induced errors in Bowen-ratio energy-budget measurements of evapotranspiration from a prairie wetland, Cottonwood Lake Area, North Dakota, USA

    USGS Publications Warehouse

    Stannard, David L.; Rosenberry, Donald O.; Winter, Thomas C.; Parkhurst, Renee S.

    2004-01-01

    Micrometeorological measurements of evapotranspiration (ET) often are affected to some degree by errors arising from limited fetch. A recently developed model was used to estimate fetch-induced errors in Bowen-ratio energy-budget measurements of ET made at a small wetland with fetch-to-height ratios ranging from 34 to 49. Estimated errors were small, averaging −1.90%±0.59%. The small errors are attributed primarily to the near-zero lower sensor height, and the negative bias reflects the greater Bowen ratios of the drier surrounding upland. Some of the variables and parameters affecting the error were not measured, but instead are estimated. A sensitivity analysis indicates that the uncertainty arising from these estimates is small. In general, fetch-induced error in measured wetland ET increases with decreasing fetch-to-height ratio, with increasing aridity and with increasing atmospheric stability over the wetland. Occurrence of standing water at a site is likely to increase the appropriate time step of data integration, for a given level of accuracy. Occurrence of extensive open water can increase accuracy or decrease the required fetch by allowing the lower sensor to be placed at the water surface. If fetch is highly variable and fetch-induced errors are significant, the variables affecting fetch (e.g., wind direction, water level) need to be measured. Fetch-induced error during the non-growing season may be greater or smaller than during the growing season, depending on how seasonal changes affect both the wetland and upland at a site.

  3. Quantifying Data Quality for Clinical Trials Using Electronic Data Capture

    PubMed Central

    Nahm, Meredith L.; Pieper, Carl F.; Cunningham, Maureen M.

    2008-01-01

    Background Historically, only partial assessments of data quality have been performed in clinical trials, for which the most common method of measuring database error rates has been to compare the case report form (CRF) to database entries and count discrepancies. Importantly, errors arising from medical record abstraction and transcription are rarely evaluated as part of such quality assessments. Electronic Data Capture (EDC) technology has had a further impact, as paper CRFs typically leveraged for quality measurement are not used in EDC processes. Methods and Principal Findings The National Institute on Drug Abuse Treatment Clinical Trials Network has developed, implemented, and evaluated methodology for holistically assessing data quality on EDC trials. We characterize the average source-to-database error rate (14.3 errors per 10,000 fields) for the first year of use of the new evaluation method. This error rate was significantly lower than the average of published error rates for source-to-database audits, and was similar to CRF-to-database error rates reported in the published literature. We attribute this largely to an absence of medical record abstraction on the trials we examined, and to an outpatient setting characterized by less acute patient conditions. Conclusions Historically, medical record abstraction is the most significant source of error by an order of magnitude, and should be measured and managed during the course of clinical trials. Source-to-database error rates are highly dependent on the amount of structured data collection in the clinical setting and on the complexity of the medical record, dependencies that should be considered when developing data quality benchmarks. PMID:18725958

  4. Occupational safety management: the role of causal attribution.

    PubMed

    Gyekye, Seth Ayim

    2010-12-01

    The paper addresses the causal attribution theory, an old and well-established theme in social psychology which denotes the everyday, commonsense explanations that people use to explain events and the world around them. The attribution paradigm is considered one of the most appropriate analytical tools for exploratory and descriptive studies in social psychology and organizational literature. It affords the possibility of describing accident processes as objectively as possible and with as much detail as possible. Causal explanations are vital to the formal analysis of workplace hazards and accidents, as they determine how organizations act to prevent accident recurrence. Accordingly, they are regarded as fundamental and prerequisite elements for safety management policies. The paper focuses primarily on the role of causal attributions in occupational and industrial accident analyses and implementation of safety interventions. It thus serves as a review of the contribution of attribution theory to occupational and industrial accidents. It comprises six sections. The first section presents an introduction to the classic attribution theories, and the second an account of the various ways in which the attribution paradigm has been applied in organizational settings. The third and fourth sections review the literature on causal attributions and demographic and organizational variables respectively. The sources of attributional biases in social psychology and how they manifest and are identified in the causal explanations for industrial and occupational accidents are treated in the fifth section. Finally, conclusion and recommendations are presented. The recommendations are particularly important for the reduction of workplace accidents and associated costs. The paper touches on the need for unbiased causal analyses, belief in the preventability of accidents, and the imperative role of management in occupational safety management.

  5. Conceptualising computerized adaptive testing for measurement of latent variables associated with physical objects

    NASA Astrophysics Data System (ADS)

    Camargo, F. R.; Henson, B.

    2015-02-01

    The notion that more or less of a physical feature affects, to different degrees, users' impressions of an underlying attribute of a product has frequently been applied in affective engineering. However, those attributes exist only as a premise that cannot be measured directly, and inferences based on their assessment are therefore error-prone. To establish and improve measurement of latent attributes, this paper presents the concept of a stochastic framework using the Rasch model for a wide range of independent variables referred to as an item bank. Based on an item bank, computerized adaptive testing (CAT) can be developed. A CAT system converges on a sequence of items that bracket, and thus convey information at, a user's particular endorsement level. It is through item banking and CAT that the financial benefits of using the Rasch model in affective engineering can be realised.
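
    A minimal sketch of what Rasch-based adaptive item selection looks like in practice: at each step the unanswered item whose difficulty lies closest to the provisional ability estimate carries the most Fisher information. The item bank, the fixed responses, and the crude one-step ability update below are illustrative assumptions, not the authors' implementation.

    ```python
    # Rasch-model item selection for a toy CAT loop.
    import numpy as np

    def rasch_prob(theta, b):
        """Probability of endorsing an item of difficulty b at ability theta."""
        return 1.0 / (1.0 + np.exp(-(theta - b)))

    def next_item(theta, difficulties, asked):
        """Pick the unasked item with maximal information p*(1-p)."""
        p = rasch_prob(theta, difficulties)
        info = p * (1 - p)
        info[list(asked)] = -np.inf
        return int(np.argmax(info))

    difficulties = np.linspace(-3, 3, 30)      # item bank difficulties
    theta, asked = 0.0, set()
    for _ in range(5):
        i = next_item(theta, difficulties, asked)
        asked.add(i)
        response = 1                           # stand-in for the user's answer
        # Crude one-step ascent of the log-likelihood (real CATs use ML/EAP updates).
        theta += 0.5 * (response - rasch_prob(theta, difficulties[i]))
    print(round(theta, 2), sorted(asked))
    ```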

  6. An image understanding system using attributed symbolic representation and inexact graph-matching

    NASA Astrophysics Data System (ADS)

    Eshera, M. A.; Fu, K.-S.

    1986-09-01

    A powerful image understanding system using a semantic-syntactic representation scheme consisting of attributed relational graphs (ARGs) is proposed for the analysis of the global information content of images. A multilayer graph transducer scheme performs the extraction of ARG representations from images, with ARG nodes representing the global image features, and the relations between features represented by the attributed branches between corresponding nodes. An efficient dynamic programming technique is employed to derive the distance between two ARGs and the inexact matching of their respective components. Noise, distortion and ambiguity in real-world images are handled through modeling in the transducer mapping rules and through the appropriate cost of error-transformation for the inexact matching of the representation. The system is demonstrated for the case of locating objects in a scene composed of complex overlapped objects, and the case of target detection in noisy and distorted synthetic aperture radar image.
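
    The node-matching idea behind inexact ARG comparison can be sketched with a bipartite assignment over attribute-transformation costs. This is a deliberate simplification: the dynamic-programming distance described above also scores attributed branches and error transformations, which are omitted here.

    ```python
    # Simplified inexact matching of two attributed graphs: assign nodes so that
    # the total node-attribute transformation cost is minimal (branches ignored).
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # Node attribute vectors for two small graphs (e.g. area, elongation).
    g1 = np.array([[1.0, 0.2], [0.5, 0.9], [0.3, 0.3]])
    g2 = np.array([[0.9, 0.25], [0.35, 0.3], [0.6, 1.0]])

    cost = np.linalg.norm(g1[:, None, :] - g2[None, :, :], axis=2)  # pairwise costs
    rows, cols = linear_sum_assignment(cost)                        # optimal node mapping
    print(list(zip(rows.tolist(), cols.tolist())),
          "total cost:", round(float(cost[rows, cols].sum()), 3))
    ```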

  7. Minimizing systematic errors from atmospheric multiple scattering and satellite viewing geometry in coastal zone color scanner level IIA imagery

    NASA Technical Reports Server (NTRS)

    Martin, D. L.; Perry, M. J.

    1994-01-01

    Water-leaving radiances and phytoplankton pigment concentrations are calculated from coastal zone color scanner (CZCS) radiance measurements by removing atmospheric Rayleigh and aerosol radiances from the total radiance signal measured at the satellite. The single greatest source of error in CZCS atmospheric correction algorithms is the assumption that these Rayleigh and aerosol radiances are separable. Multiple-scattering interactions between Rayleigh and aerosol components cause systematic errors in calculated aerosol radiances, and the magnitude of these errors is dependent on aerosol type and optical depth and on satellite viewing geometry. A technique was developed which extends the results of previous radiative transfer modeling by Gordon and Castano to predict the magnitude of these systematic errors for simulated CZCS orbital passes in which the ocean is viewed through a modeled, physically realistic atmosphere. The simulated image mathematically duplicates the exact satellite, Sun, and pixel locations of an actual CZCS image. Errors in the aerosol radiance at 443 nm are calculated for a range of aerosol optical depths. When pixels in the simulated image exceed an error threshold, the corresponding pixels in the actual CZCS image are flagged and excluded from further analysis or from use in image compositing or compilation of pigment concentration databases. Studies based on time series analyses or compositing of CZCS imagery which do not address Rayleigh-aerosol multiple scattering should be interpreted cautiously, since the fundamental assumption used in their atmospheric correction algorithm is flawed.

  8. Correcting for Measurement Error in Time-Varying Covariates in Marginal Structural Models.

    PubMed

    Kyle, Ryan P; Moodie, Erica E M; Klein, Marina B; Abrahamowicz, Michał

    2016-08-01

    Unbiased estimation of causal parameters from marginal structural models (MSMs) requires a fundamental assumption of no unmeasured confounding. Unfortunately, the time-varying covariates used to obtain inverse probability weights are often error-prone. Although substantial measurement error in important confounders is known to undermine control of confounders in conventional unweighted regression models, this issue has received comparatively limited attention in the MSM literature. Here we propose a novel application of the simulation-extrapolation (SIMEX) procedure to address measurement error in time-varying covariates, and we compare 2 approaches. The direct approach to SIMEX-based correction targets outcome model parameters, while the indirect approach corrects the weights estimated using the exposure model. We assess the performance of the proposed methods in simulations under different clinically plausible assumptions. The simulations demonstrate that measurement errors in time-dependent covariates may induce substantial bias in MSM estimators of causal effects of time-varying exposures, and that both proposed SIMEX approaches yield practically unbiased estimates in scenarios featuring low-to-moderate degrees of error. We illustrate the proposed approach in a simple analysis of the relationship between sustained virological response and liver fibrosis progression among persons infected with hepatitis C virus, while accounting for measurement error in γ-glutamyltransferase, using data collected in the Canadian Co-infection Cohort Study from 2003 to 2014. © The Author 2016. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
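
    The SIMEX idea itself is easy to sketch for a single error-prone covariate: inflate the measurement error at several multipliers, refit the naive estimator at each level, and extrapolate the coefficient back to the no-error point (lambda = -1). The quadratic extrapolant and the simple linear outcome model below are illustrative choices, not the authors' MSM implementation.

    ```python
    # SIMEX for a single covariate measured with classical error (illustration).
    import numpy as np

    rng = np.random.default_rng(6)
    n, beta_true, sigma_u = 5000, 1.0, 0.8
    x_true = rng.normal(size=n)
    y = beta_true * x_true + rng.normal(scale=0.5, size=n)
    w = x_true + rng.normal(scale=sigma_u, size=n)       # error-prone measurement

    lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
    betas = []
    for lam in lambdas:
        fits = []
        for _ in range(50):                              # simulation step
            w_star = w + np.sqrt(lam) * sigma_u * rng.normal(size=n)
            fits.append(np.polyfit(w_star, y, 1)[0])     # naive OLS slope
        betas.append(np.mean(fits))

    coef = np.polyfit(lambdas, betas, 2)                 # extrapolation step
    beta_simex = np.polyval(coef, -1.0)
    print(f"naive: {betas[0]:.2f}, SIMEX: {beta_simex:.2f}, true: {beta_true}")
    ```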

  9. Undergraduate paramedic students cannot do drug calculations.

    PubMed

    Eastwood, Kathryn; Boyle, Malcolm J; Williams, Brett

    2012-01-01

    Previous investigation of drug calculation skills of qualified paramedics has highlighted poor mathematical ability with no published studies having been undertaken on undergraduate paramedics. There are three major error classifications. Conceptual errors involve an inability to formulate an equation from information given, arithmetical errors involve an inability to operate a given equation, and finally computation errors are simple errors of addition, subtraction, division and multiplication. The objective of this study was to determine if undergraduate paramedics at a large Australia university could accurately perform common drug calculations and basic mathematical equations normally required in the workplace. A cross-sectional study methodology using a paper-based questionnaire was administered to undergraduate paramedic students to collect demographical data, student attitudes regarding their drug calculation performance, and answers to a series of basic mathematical and drug calculation questions. Ethics approval was granted. The mean score of correct answers was 39.5% with one student scoring 100%, 3.3% of students (n=3) scoring greater than 90%, and 63% (n=58) scoring 50% or less, despite 62% (n=57) of the students stating they 'did not have any drug calculations issues'. On average those who completed a minimum of year 12 Specialist Maths achieved scores over 50%. Conceptual errors made up 48.5%, arithmetical 31.1% and computational 17.4%. This study suggests undergraduate paramedics have deficiencies in performing accurate calculations, with conceptual errors indicating a fundamental lack of mathematical understanding. The results suggest an unacceptable level of mathematical competence to practice safely in the unpredictable prehospital environment.

  10. Network Dynamics Underlying Speed-Accuracy Trade-Offs in Response to Errors

    PubMed Central

    Agam, Yigal; Carey, Caitlin; Barton, Jason J. S.; Dyckman, Kara A.; Lee, Adrian K. C.; Vangel, Mark; Manoach, Dara S.

    2013-01-01

    The ability to dynamically and rapidly adjust task performance based on its outcome is fundamental to adaptive, flexible behavior. Over trials of a task, responses speed up until an error is committed and after the error responses slow down. These dynamic adjustments serve to optimize performance and are well-described by the speed-accuracy trade-off (SATO) function. We hypothesized that SATOs based on outcomes reflect reciprocal changes in the allocation of attention between the internal milieu and the task-at-hand, as indexed by reciprocal changes in activity between the default and dorsal attention brain networks. We tested this hypothesis using functional MRI to examine the pattern of network activation over a series of trials surrounding and including an error. We further hypothesized that these reciprocal changes in network activity are coordinated by the posterior cingulate cortex (PCC) and would rely on the structural integrity of its white matter connections. Using diffusion tensor imaging, we examined whether fractional anisotropy of the posterior cingulum bundle correlated with the magnitude of reciprocal changes in network activation around errors. As expected, reaction time (RT) in trials surrounding errors was consistent with predictions from the SATO function. Activation in the default network was: (i) inversely correlated with RT, (ii) greater on trials before than after an error and (iii) maximal at the error. In contrast, activation in the right intraparietal sulcus of the dorsal attention network was (i) positively correlated with RT and showed the opposite pattern: (ii) less activation before than after an error and (iii) the least activation on the error. Greater integrity of the posterior cingulum bundle was associated with greater reciprocity in network activation around errors. These findings suggest that dynamic changes in attention to the internal versus external milieu in response to errors underlie SATOs in RT and are mediated by the PCC. PMID:24069223

  11. Scaling-up Sustainable Land Management Practices through the Concept of the Rural Resource Centre: Reconciling Farmers' Interests with Research Agendas

    ERIC Educational Resources Information Center

    Takoutsing, Bertin; Tchoundjeu, Zacharie; Degrande, Ann; Asaah, Ebenezar; Tsobeng, Alain

    2014-01-01

    Purpose: Formal agricultural research has generated vast amount of knowledge and fundamental insights on land management, but their low adoption has been attributed to the use of public extension approach. This research aims to address whether and how full participation of farmers through the concept of Rural Resource Centre (RRC) provides new…

  12. Comparisons between field- and LiDAR-based measures of stand structural complexity

    Treesearch

    Van R. Kane; Robert J. McGaughey; Jonathan D. Bakker; Rolf F. Gersonde; James A. Lutz; Jerry F. Franklin

    2010-01-01

    Forest structure, as measured by the physical arrangement of trees and their crowns, is a fundamental attribute of forest ecosystems that changes as forests progress through successional stages. We examined whether LiDAR data could be used to directly assess the successional stage of forests by determining the degree to which the LiDAR data would show the same relative...

  13. Development and Perceptual Evaluation of Amplitude-Based F0 Control in Electrolarynx Speech

    ERIC Educational Resources Information Center

    Saikachi, Yoko; Stevens, Kenneth N.; Hillman, Robert E.

    2009-01-01

    Purpose: Current electrolarynx (EL) devices produce a mechanical speech quality that has been largely attributed to the lack of natural fundamental frequency (F0) variation. In order to improve the quality of EL speech, in the present study the authors aimed to develop and evaluate an automatic F0 control scheme, in which F0 was modulated based on…

  14. Whither Space Weapons: A Capability in Need of an Advocate

    DTIC Science & Technology

    2005-05-17

    organizations may be particularly resistant. “Disruptive technologies introduce a very different package of attributes from the one mainstream customers...and Clayton M. Christensen, “Disruptive Technologies: Catching the Wave,” Harvard Business Review on Managing Uncertainty, (Boston, MA: Harvard...way to deter and win wars in their theater. However, if fundamentally different disruptive technologies such as space weapons are to be introduced

  15. An investigation of reports of Controlled Flight Toward Terrain (CFTT)

    NASA Technical Reports Server (NTRS)

    Porter, R. F.; Loomis, J. P.

    1981-01-01

    Some 258 reports from more than 23,000 documents in the files of the Aviation Safety Reporting System (ASRS) were found to relate to the hazard of flight into terrain with no prior awareness by the crew of impending disaster. Examination of the reports indicates that human error was a causal factor in 64% of the incidents in which some threat of terrain conflict was experienced. Approximately two-thirds of the human errors were attributed to controllers, the most common discrepancy being a radar vector below the Minimum Vector Altitude (MVA). Errors by pilots were of a more diverse nature and included a few instances of gross deviations from assigned altitudes. The ground proximity warning system and the minimum safe altitude warning equipment were the initial recovery factor in some 18 serious incidents and were apparently the sole warning in six reported instances which otherwise would most probably have ended in disaster.

  16. Self-regulating proportionally controlled heating apparatus and technique

    NASA Technical Reports Server (NTRS)

    Strange, M. G. (Inventor)

    1975-01-01

    A self-regulating proportionally controlled heating apparatus and technique is provided wherein a single electrical resistance heating element having a temperature coefficient of resistance serves simultaneously as a heater and temperature sensor. The heating element is current-driven and the voltage drop across the heating element is monitored and a component extracted which is attributable to a change in actual temperature of the heating element from a desired reference temperature, so as to produce a resulting error signal. The error signal is utilized to control the level of the heater drive current and the actual heater temperature in a direction to reduce the noted temperature difference. The continuous nature of the process for deriving the error signal feedback information results in true proportional control of the heating element without the necessity for current-switching which may interfere with nearby sensitive circuits, and with no cyclical variation in the controlled temperature.
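
    A toy simulation of the control idea: the element's temperature-dependent resistance is the sensing path, the deviation of the inferred temperature from the set point is the continuously derived error signal, and the drive is adjusted proportionally rather than switched. All constants and the thermal model below are invented for illustration.

    ```python
    # Proportional control using the heater's own resistance as the temperature sensor.
    R0, tcr, T_set = 100.0, 0.004, 70.0            # ohms at 0 degC, 1/degC, set point
    gain, T = 5.0, 20.0                            # watts per degC of error, ambient start

    for step in range(500):
        resistance = R0 * (1 + tcr * T)            # what the voltage drop actually reflects
        T_inferred = (resistance / R0 - 1) / tcr   # temperature recovered from that drop
        error = T_set - T_inferred                 # continuously derived error signal
        power = max(0.0, gain * error)             # proportional drive, no current switching
        T += 0.02 * power - 0.01 * (T - 20.0)      # crude heating/cooling dynamics
    print(round(T, 1))                             # settles slightly below T_set (P-control droop)
    ```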

  17. An automated microphysiological assay for toxicity evaluation.

    PubMed

    Eggert, S; Alexander, F A; Wiest, J

    2015-08-01

    Screening a newly developed drug, food additive or cosmetic ingredient for toxicity is a critical preliminary step before it can move forward in the development pipeline. Due to the sometimes dire consequences when a harmful agent is overlooked, toxicologists work under strict guidelines to effectively catalogue and classify new chemical agents. Conventional assays involve long experimental hours and many manual steps that increase the probability of user error; errors that can potentially manifest as inaccurate toxicology results. Automated assays can overcome many potential mistakes that arise due to human error. In the presented work, we created and validated a novel, automated platform for a microphysiological assay that can examine cellular attributes with sensors measuring changes in cellular metabolic rate, oxygen consumption, and vitality mediated by exposure to a potentially toxic agent. The system was validated with low buffer culture medium with varied conductivities that caused changes in the measured impedance on integrated impedance electrodes.

  18. MIMO equalization with adaptive step size for few-mode fiber transmission systems.

    PubMed

    van Uden, Roy G H; Okonkwo, Chigo M; Sleiffer, Vincent A J M; de Waardt, Hugo; Koonen, Antonius M J

    2014-01-13

    Optical multiple-input multiple-output (MIMO) transmission systems generally employ minimum mean squared error time or frequency domain equalizers. Using an experimental 3-mode dual polarization coherent transmission setup, we show that the convergence time of the MMSE time domain equalizer (TDE) and frequency domain equalizer (FDE) can be reduced by approximately 50% and 30%, respectively. The criterion used to estimate the system convergence time is the time it takes for the MIMO equalizer to reach an average output error which is within a margin of 5% of the average output error after 50,000 symbols. The convergence reduction difference between the TDE and FDE is attributed to the limited maximum step size for stable convergence of the frequency domain equalizer. The adaptive step size requires a small overhead in the form of a lookup table. It is highlighted that the convergence time reduction is achieved without sacrificing optical signal-to-noise ratio performance.
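
    A hedged, single-channel sketch of why an adaptive step size shortens convergence in an LMS-style equalizer: start with a comparatively large step and shrink it as the output error falls. The real system is a coherent few-mode 6x6 MIMO equalizer; the toy ISI channel and the update schedule below are stand-ins.

    ```python
    # LMS equalizer with a step size that adapts to the recent output error.
    import numpy as np

    rng = np.random.default_rng(7)
    symbols = rng.choice([-1.0, 1.0], size=20000)          # known training symbols
    channel = np.array([0.9, 0.4, -0.2])                   # toy ISI channel
    received = np.convolve(symbols, channel)[:len(symbols)]
    received = received + rng.normal(scale=0.05, size=len(symbols))

    taps, mu = np.zeros(7), 0.05
    sq_err = []
    for n in range(3, len(symbols) - 3):
        x = received[n - 3:n + 4][::-1]                    # 7-sample window around n
        e = symbols[n] - taps @ x                          # error against the known symbol
        taps = taps + mu * e * x                           # LMS tap update
        sq_err.append(e * e)
        if n % 500 == 0:                                   # shrink the step as the error falls,
            mu = float(np.clip(0.5 * np.mean(sq_err[-500:]), 0.002, 0.05))  # capped for stability
    print(f"final mean squared error: {np.mean(sq_err[-1000:]):.4f}")
    ```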

  19. Model parameter-related optimal perturbations and their contributions to El Niño prediction errors

    NASA Astrophysics Data System (ADS)

    Tao, Ling-Jiang; Gao, Chuan; Zhang, Rong-Hua

    2018-04-01

    Errors in initial conditions and model parameters (MPs) are the main sources that limit the accuracy of ENSO predictions. In addition to exploring the initial error-induced prediction errors, model errors are equally important in determining prediction performance. In this paper, the MP-related optimal errors that can cause prominent error growth in ENSO predictions are investigated using an intermediate coupled model (ICM) and a conditional nonlinear optimal perturbation (CNOP) approach. Two MPs related to the Bjerknes feedback are considered in the CNOP analysis: one involves the SST-surface wind coupling (α_τ), and the other involves the thermocline effect on the SST (α_Te). The MP-related optimal perturbations (denoted as CNOP-P) are found to be uniformly positive and confined to a small region: the α_τ component is mainly concentrated in the central equatorial Pacific, and the α_Te component is mainly located in the eastern cold tongue region. This kind of CNOP-P enhances the strength of the Bjerknes feedback and induces an El Niño- or La Niña-like error evolution, resulting in an El Niño-like systematic bias in this model. The CNOP-P is also found to play a role in the spring predictability barrier (SPB) for ENSO predictions. Evidently, such error growth is primarily attributed to MP errors in small areas based on the localized distribution of CNOP-P. Further sensitivity experiments firmly indicate that ENSO simulations are sensitive to the representation of SST-surface wind coupling in the central Pacific and to the thermocline effect in the eastern Pacific in the ICM. These results provide guidance and theoretical support for the future improvement in numerical models to reduce the systematic bias and SPB phenomenon in ENSO predictions.

  20. Using sediment 'fingerprints' to assess sediment-budget errors, north Halawa Valley, Oahu, Hawaii, 1991-92

    USGS Publications Warehouse

    Hill, B.R.; DeCarlo, E.H.; Fuller, C.C.; Wong, M.F.

    1998-01-01

    Reliable estimates of sediment-budget errors are important for interpreting sediment-budget results. Sediment-budget errors are commonly considered equal to sediment-budget imbalances, which may underestimate actual sediment-budget errors if they include compensating positive and negative errors. We modified the sediment 'fingerprinting' approach to qualitatively evaluate compensating errors in an annual (1991) fine (<63 µm) sediment budget for the North Halawa Valley, a mountainous, forested drainage basin on the island of Oahu, Hawaii, during construction of a major highway. We measured concentrations of aeolian quartz and 137Cs in sediment sources and fluvial sediments, and combined concentrations of these aerosols with the sediment budget to construct aerosol budgets. Aerosol concentrations were independent of the sediment budget, hence aerosol budgets were less likely than sediment budgets to include compensating errors. Differences between sediment-budget and aerosol-budget imbalances therefore provide a measure of compensating errors in the sediment budget. The sediment-budget imbalance equalled 25% of the fluvial fine-sediment load. Aerosol-budget imbalances were equal to 19% of the fluvial 137Cs load and 34% of the fluvial quartz load. The reasonably close agreement between sediment- and aerosol-budget imbalances indicates that compensating errors in the sediment budget were not large and that the sediment-budget imbalance is a reliable measure of sediment-budget error. We attribute at least one-third of the 1991 fluvial fine-sediment load to highway construction. Continued monitoring indicated that highway construction produced 90% of the fluvial fine-sediment load during 1992. Erosion of channel margins and attrition of coarse particles provided most of the fine sediment produced by natural processes. Hillslope processes contributed relatively minor amounts of sediment.

  1. Investigation of advanced phase-shifting projected fringe profilometry techniques

    NASA Astrophysics Data System (ADS)

    Liu, Hongyu

    1999-11-01

    The phase-shifting projected fringe profilometry (PSPFP) technique is a powerful tool in the profile measurements of rough engineering surfaces. Compared with other competing techniques, this technique is notable for its full-field measurement capacity, system simplicity, high measurement speed, and low environmental vulnerability. The main purpose of this dissertation is to tackle three important problems, which severely limit the capability and the accuracy of the PSPFP technique, with some new approaches. Chapter 1 briefly introduces background on the PSPFP technique, including the measurement principles, basic features, and related techniques. The objectives and organization of the thesis are also outlined. Chapter 2 gives a theoretical treatment of the absolute PSPFP measurement. The mathematical formulations and basic requirements of the absolute PSPFP measurement and its supporting techniques are discussed in detail. Chapter 3 introduces the experimental verification of the proposed absolute PSPFP technique. Some design details of a prototype system are discussed as supplements to the previous theoretical analysis. Various fundamental experiments performed for concept verification and accuracy evaluation are introduced together with some brief comments. Chapter 4 presents the theoretical study of speckle-induced phase measurement errors. In this analysis, the expression for speckle-induced phase errors is first derived based on the multiplicative noise model of image-plane speckles. The statistics and the system dependence of speckle-induced phase errors are then thoroughly studied through numerical simulations and analytical derivations. Based on the analysis, some suggestions on the system design are given to improve measurement accuracy. Chapter 5 discusses a new technique combating surface reflectivity variations. The formula used for error compensation is first derived based on a simplified model of the detection process. The techniques coping with two major effects of surface reflectivity variations are then introduced. Some fundamental problems in the proposed technique are studied through simulations. Chapter 6 briefly summarizes the major contributions of the current work and provides some suggestions for future research.

  2. Diagnostic decision-making and strategies to improve diagnosis.

    PubMed

    Thammasitboon, Satid; Cutrer, William B

    2013-10-01

    A significant portion of diagnostic errors arises through cognitive errors resulting from inadequate knowledge, faulty data gathering, and/or faulty verification. Experts estimate that 75% of diagnostic failures can be attributed to clinician diagnostic thinking failure. The cognitive processes that underlie diagnostic thinking of clinicians are complex and intriguing, and it is imperative that clinicians acquire explicit appreciation and application of different cognitive approaches to make decisions better. A dual-process model that unifies many theories of decision-making has emerged as a promising template for understanding how clinicians think and judge efficiently in a diagnostic reasoning process. The identification and implementation of strategies for decreasing or preventing such diagnostic errors has become a growing area of interest and research. Suggested strategies to decrease diagnostic error incidence include increasing clinician's clinical expertise and avoiding inherent cognitive errors to make decisions better. Implementing Interventions focused solely on avoiding errors may work effectively for patient safety issues such as medication errors. Addressing cognitive errors, however, requires equal effort on expanding the individual clinician's expertise. Providing cognitive support to clinicians for robust diagnostic decision-making serves as the final strategic target for decreasing diagnostic errors. Clinical guidelines and algorithms offer another method for streamlining decision-making and decreasing likelihood of cognitive diagnostic errors. Addressing cognitive processing errors is undeniably the most challenging task in reducing diagnostic errors. While many suggested approaches exist, they are mostly based on theories and sciences in cognitive psychology, decision-making, and education. The proposed interventions are primarily suggestions and very few of them have been tested in the actual practice settings. Collaborative research effort is required to effectively address cognitive processing errors. Researchers in various areas, including patient safety/quality improvement, decision-making, and problem solving, must work together to make medical diagnosis more reliable. © 2013 Mosby, Inc. All rights reserved.

  3. Note: Derivation of two-photon circular dichroism—Addendum to “Two-photon circular dichroism” [J. Chem. Phys. 62, 1006 (1975)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friese, Daniel H., E-mail: daniel.h.friese@uit.no

    2015-09-07

    This addendum shows the detailed derivation of the fundamental equations for two-photon circular dichroism, which are given in a very condensed form in the original publication [I. Tinoco, J. Chem. Phys. 62, 1006 (1975)]. In addition, some minor errors are corrected and some of the derivations in the original publication are commented on.

  4. Modeling and Error Analysis of a Superconducting Gravity Gradiometer.

    DTIC Science & Technology

    1979-08-01

    fundamental limit to instrument sensitivity is the thermal noise of the sensor. For the gradiometer design outlined above, the best sensitivity...Mapoles at Stanford. Chapter IV determines the relation between dynamic range, the sensor Q, and the thermal noise of the cryogenic accelerometer. An...C.1 Accelerometer Optimization (1) Development and optimization of the loaded diaphragm sensor. (2) Determination of the optimal values of the

  5. Introduction to total- and partial-pressure measurements in vacuum systems

    NASA Technical Reports Server (NTRS)

    Outlaw, R. A.; Kern, F. A.

    1989-01-01

    An introduction to the fundamentals of total- and partial-pressure measurement in the vacuum regime (760 to 10 to the -16th power Torr) is presented. The instruments most often used in scientific fields requiring vacuum measurement are discussed, with special emphasis on ionization-type gauges and quadrupole mass spectrometers. Some attention is also given to potential errors in measurement as well as calibration techniques.

  6. State-of-the-Art Assessment of Testing and Testability of Custom LSI/VLSI Circuits. Volume VI. Redundancy, Testing Circuits, and Codes.

    DTIC Science & Technology

    1982-10-01

    e.g., providing voters in TMR systems and detection-switching requirements in standby-sparing systems. The application of mathematical theory of...and time redundancy required for error detection and correction, are interrelated. Mathematical modeling, when applied to fault tolerant systems, can...1.1 Some Fundamental Principles...1.2 Mathematical Theory of

  7. Does the cost function matter in Bayes decision rule?

    PubMed

    Schlüter, Ralf; Nussbaum-Thom, Markus; Ney, Hermann

    2012-02-01

    In many tasks in pattern recognition, such as automatic speech recognition (ASR), optical character recognition (OCR), part-of-speech (POS) tagging, and other string recognition tasks, we are faced with a well-known inconsistency: The Bayes decision rule is usually used to minimize string (symbol sequence) error, whereas, in practice, we want to minimize symbol (word, character, tag, etc.) error. When comparing different recognition systems, we do indeed use symbol error rate as an evaluation measure. The topic of this work is to analyze the relation between string (i.e., 0-1) and symbol error (i.e., metric, integer valued) cost functions in the Bayes decision rule, for which fundamental analytic results are derived. Simple conditions are derived for which the Bayes decision rule with integer-valued metric cost function and with 0-1 cost gives the same decisions or leads to classes with limited cost. The corresponding conditions can be tested with complexity linear in the number of classes. The results obtained do not make any assumption w.r.t. the structure of the underlying distributions or the classification problem. Nevertheless, the general analytic results are analyzed via simulations of string recognition problems with Levenshtein (edit) distance cost function. The results support earlier findings that considerable improvements are to be expected when initial error rates are high.
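
    As a minimal illustration of the inconsistency described above, the sketch below (with a made-up posterior over three candidate strings) contrasts the 0-1-cost Bayes rule, which picks the string with the highest posterior, with a Bayes rule that minimizes expected symbol-level (Levenshtein) cost; even on this toy example the two rules can disagree.

```python
# Hypothetical illustration of the two Bayes decision rules discussed above:
# 0-1 (string error) cost vs. an integer-valued metric (Levenshtein) cost.
# The candidate strings and their posterior probabilities are made up.

def levenshtein(a: str, b: str) -> int:
    """Edit distance between two strings (symbol-level cost)."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[m][n]

# Toy posterior over candidate word sequences (assumed, not from the paper).
posterior = {
    "the cat sat": 0.40,
    "the cat sang": 0.35,
    "the bat sang": 0.25,
}

# Bayes rule with 0-1 cost: maximize the posterior of the whole string.
map_decision = max(posterior, key=posterior.get)

# Bayes rule with metric cost: minimize expected Levenshtein distance.
def expected_cost(candidate: str) -> float:
    return sum(p * levenshtein(candidate, w) for w, p in posterior.items())

min_risk_decision = min(posterior, key=expected_cost)

print("0-1 cost decision:   ", map_decision)       # "the cat sat"
print("metric cost decision:", min_risk_decision)  # "the cat sang"
```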

  8. A theoretical basis for the analysis of redundant software subject to coincident errors

    NASA Technical Reports Server (NTRS)

    Eckhardt, D. E., Jr.; Lee, L. D.

    1985-01-01

    Fundamental to the development of redundant software techniques (fault-tolerant software) is an understanding of the impact of multiple, joint occurrences of coincident errors. A theoretical basis for the study of redundant software is developed which provides a probabilistic framework for empirically evaluating the effectiveness of the general (N-Version) strategy when component versions are subject to coincident errors, and permits an analytical study of the effects of these errors. The basic assumptions of the model are: (1) independently designed software components are chosen in a random sample; and (2) in the user environment, the system is required to execute on a stationary input series. The intensity of coincident errors has a central role in the model. This function describes the propensity to introduce design faults in such a way that software components fail together when executing in the user environment. The model is used to give conditions under which an N-Version system is a better strategy for reducing system failure probability than relying on a single version of software. A condition which limits the effectiveness of a fault-tolerant strategy is studied, and the question is posed whether system failure probability varies monotonically with increasing N or whether an optimal choice of N exists.
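
    The following Monte Carlo sketch is not the paper's model, but it illustrates the qualitative point: when some fraction of inputs causes all versions to fail together (coincident errors), majority voting over N versions loses much of the benefit it would have under independent failures. The failure probabilities and the shared "hard input" mechanism are illustrative assumptions.

```python
# A minimal Monte Carlo sketch (not the paper's model) of how coincident errors
# erode the benefit of N-version majority voting. The failure probabilities and
# the shared-input failure mechanism below are illustrative assumptions.
import random

def simulate(p_fail=0.01, p_hard_input=0.002, n_versions=3, trials=200_000, seed=1):
    """Return (single-version, majority-vote) failure rates.

    p_fail       -- per-version failure probability on an ordinary input
    p_hard_input -- probability an input is 'hard', making all versions fail
                    together (a crude stand-in for coincident errors)
    """
    random.seed(seed)
    single_fail = vote_fail = 0
    for _ in range(trials):
        hard = random.random() < p_hard_input
        fails = [hard or (random.random() < p_fail) for _ in range(n_versions)]
        single_fail += fails[0]
        vote_fail += sum(fails) > n_versions // 2
    return single_fail / trials, vote_fail / trials

if __name__ == "__main__":
    print("with coincident errors:   ", simulate(p_hard_input=0.002))
    print("independent failures only:", simulate(p_hard_input=0.0))
```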

  9. Gaussian Hypothesis Testing and Quantum Illumination.

    PubMed

    Wilde, Mark M; Tomamichel, Marco; Lloyd, Seth; Berta, Mario

    2017-09-22

    Quantum hypothesis testing is one of the most basic tasks in quantum information theory and has fundamental links with quantum communication and estimation theory. In this paper, we establish a formula that characterizes the decay rate of the minimal type-II error probability in a quantum hypothesis test of two Gaussian states given a fixed constraint on the type-I error probability. This formula is a direct function of the mean vectors and covariance matrices of the quantum Gaussian states in question. We give an application to quantum illumination, which is the task of determining whether there is a low-reflectivity object embedded in a target region with a bright thermal-noise bath. For the asymmetric-error setting, we find that a quantum illumination transmitter can achieve an error probability exponent stronger than a coherent-state transmitter of the same mean photon number, and furthermore, that it requires far fewer trials to do so. This occurs when the background thermal noise is either low or bright, which means that a quantum advantage is even easier to witness than in the symmetric-error setting because it occurs for a larger range of parameters. Going forward from here, we expect our formula to have applications in settings well beyond those considered in this paper, especially to quantum communication tasks involving quantum Gaussian channels.

  10. Decomposition and correction overlapping peaks of LIBS using an error compensation method combined with curve fitting.

    PubMed

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-09-01

    The laser-induced breakdown spectroscopy (LIBS) technique is an effective method to detect material composition by obtaining the plasma emission spectrum. The overlapping peaks in the spectrum are a fundamental problem in the qualitative and quantitative analysis of LIBS. Based on a curve fitting method, this paper studies an error compensation method to achieve the decomposition and correction of overlapping peaks. The vital step is that the fitting residual is fed back to the overlapping peaks and multiple curve fitting passes are performed to obtain a lower residual result. For the quantitative experiments on Cu, the Cu-Fe overlapping peaks in the range of 321-327 nm, obtained from the LIBS spectrum of five different concentrations of CuSO4·5H2O solution, were decomposed and corrected using curve fitting and error compensation methods. Compared with the curve fitting method alone, the error compensation reduced the fitting residual by about 18.12-32.64% and improved the correlation by about 0.86-1.82%. Then, the calibration curve between the intensity and concentration of Cu was established. It can be seen that the error compensation method exhibits a higher linear correlation between the intensity and concentration of Cu, and it can be applied to the decomposition and correction of overlapping peaks in the LIBS spectrum.
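
    The residual-feedback step can be read in more than one way; the sketch below shows one plausible interpretation on synthetic data: fit two overlapping Gaussian peaks, add the fitting residual back to the signal, refit, and keep iterating while the residual norm decreases. Peak shapes, wavelengths, and all parameter values are assumptions for illustration, not the authors' exact procedure.

```python
# Illustrative sketch of residual-feedback ("error compensation") curve fitting
# of two overlapping peaks. Gaussian shapes and all values are assumptions; the
# paper's exact procedure may differ.
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(x, a1, c1, w1, a2, c2, w2):
    return (a1 * np.exp(-((x - c1) / w1) ** 2) +
            a2 * np.exp(-((x - c2) / w2) ** 2))

# Synthetic overlapping doublet near 324 nm with noise.
x = np.linspace(321, 327, 400)
true = two_gaussians(x, 1.0, 323.8, 0.4, 0.6, 324.9, 0.5)
y = true + np.random.default_rng(0).normal(0, 0.02, x.size)

p0 = [0.8, 323.5, 0.5, 0.5, 325.0, 0.5]
popt, _ = curve_fit(two_gaussians, x, y, p0=p0)

# Error compensation: feed the fitting residual back into the data and refit,
# iterating while the residual norm keeps decreasing.
for _ in range(5):
    residual = y - two_gaussians(x, *popt)
    popt_new, _ = curve_fit(two_gaussians, x, y + residual, p0=popt)
    if np.linalg.norm(y - two_gaussians(x, *popt_new)) >= np.linalg.norm(residual):
        break
    popt = popt_new

print("fitted peak centres:", popt[1], popt[4])
```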

  11. The Method of Fundamental Solutions using the Vector Magnetic Dipoles for Calculation of the Magnetic Fields in the Diagnostic Problems Based on Full-Scale Modelling Experiment

    NASA Astrophysics Data System (ADS)

    Bakhvalov, Yu A.; Grechikhin, V. V.; Yufanova, A. L.

    2016-04-01

    The article describes the calculation of magnetic fields in diagnostic problems of technical systems based on a full-scale modelling experiment. Use of the gridless method of fundamental solutions and its variants, in combination with grid methods (finite differences and finite elements), considerably reduces the dimensionality of the field-calculation task and hence the calculation time. When implementing the method, fictitious magnetic charges are used. In addition, much attention is given to calculation accuracy: errors occur when the distance between the charges is chosen poorly. The authors propose using vector magnetic dipoles to improve the accuracy of the magnetic field calculation, and examples of this approach are given. The article shows the results of the research, which allow the authors to recommend this approach within the method of fundamental solutions for full-scale modelling tests of technical systems.
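
    A minimal sketch of the idea, under assumed geometry and data, is given below: point magnetic dipoles are placed on a fictitious boundary outside the domain, their moment components are fitted by least squares to boundary field samples, and the field is then evaluated inside the domain by superposition. None of the numerical choices here come from the article.

```python
# Minimal sketch of the method of fundamental solutions with vector magnetic
# dipoles: sources sit on a fictitious boundary outside the domain and their
# moments are fitted by least squares to boundary field data. Geometry, source
# counts and the "measured" data are illustrative assumptions.
import numpy as np

MU0 = 4e-7 * np.pi

def dipole_field(r_obs, r_src, m):
    """Field of a point dipole with moment m at r_src, evaluated at r_obs."""
    d = r_obs - r_src
    dist = np.linalg.norm(d)
    n = d / dist
    return MU0 / (4 * np.pi) * (3 * n * np.dot(n, m) - m) / dist**3

# Boundary sample points on a unit circle (domain of interest, z = 0 plane).
theta = np.linspace(0, 2 * np.pi, 24, endpoint=False)
obs = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)

# Fictitious dipole sources on a larger circle outside the domain.
phi = np.linspace(0, 2 * np.pi, 12, endpoint=False)
src = 2.0 * np.stack([np.cos(phi), np.sin(phi), np.zeros_like(phi)], axis=1)

# "Measured" boundary field: generated here by a reference dipole farther away.
ref_pos, ref_m = np.array([0.0, 3.0, 0.0]), np.array([0.0, 0.0, 1.0])
b_meas = np.array([dipole_field(p, ref_pos, ref_m) for p in obs])

# Linear system: each source contributes 3 unknown moment components.
A = np.zeros((3 * len(obs), 3 * len(src)))
for i, p in enumerate(obs):
    for j, s in enumerate(src):
        for k in range(3):
            e = np.zeros(3); e[k] = 1.0
            A[3 * i:3 * i + 3, 3 * j + k] = dipole_field(p, s, e)

moments, *_ = np.linalg.lstsq(A, b_meas.ravel(), rcond=None)

# Evaluate the reconstructed field at an interior point and compare.
pt = np.array([0.2, 0.1, 0.0])
b_rec = sum(dipole_field(pt, s, moments[3 * j:3 * j + 3]) for j, s in enumerate(src))
print("reference:", dipole_field(pt, ref_pos, ref_m))
print("MFS fit:  ", b_rec)
```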

  12. Fifth Fundamental Catalogue (FK5). Part 1: Basic fundamental stars (Fricke, Schwan, and Lederle 1988): Documentation for the machine-readable version

    NASA Technical Reports Server (NTRS)

    Warren, Wayne H., Jr.

    1990-01-01

    The machine-readable version of the catalog, as it is currently being distributed from the Astronomical Data Center, is described. The Basic FK5 provides improved mean positions and proper motions for the 1535 classical fundamental stars that had been included in the FK3 and FK4 catalogs. The machine version of the catalog contains the positions and proper motions of the Basic FK5 stars for the epochs and equinoxes J2000.0 and B1950.0, the mean epochs of individual observed right ascensions and declinations used to determine the final positions, and the mean errors of the final positions and proper motions for the reported epochs. The cross identifications to other designations used for the FK5 stars that are given in the published catalog were not included in the original machine versions, but the Durchmusterung numbers have been added at the Astronomical Data Center.

  13. Characterization of Mid-Infrared Single Mode Fibers as Modal Filters

    NASA Technical Reports Server (NTRS)

    Ksendzov, A.; Lay, O.; Martin, S.; Sanghera, J. S.; Busse, L. E.; Kim, W. H.; Pureza, P. C.; Nguyen, V. Q.; Aggarwal, I. D.

    2007-01-01

    We present a technique for measuring the modal filtering ability of single mode fibers. The ideal modal filter rejects all input field components that have no overlap with the fundamental mode of the filter and does not attenuate the fundamental mode. We define the quality of a nonideal modal filter Q(sub f) as the ratio of transmittance for the fundamental mode to the transmittance for an input field that has no overlap with the fundamental mode. We demonstrate the technique on a 20 cm long mid-infrared fiber that was produced by the U.S. Naval Research Laboratory. The filter quality Q(sub f) for this fiber at 10.5 micron wavelength is 1000 +/- 300. The absorption and scattering losses in the fundamental mode are approximately 8 dB/m. The total transmittance for the fundamental mode, including Fresnel reflections, is 0.428 +/- 0.002. The application of interest is the search for extrasolar Earthlike planets using nulling interferometry. It requires high rejection ratios to suppress the light of a bright star, so that the faint planet becomes visible. The use of modal filters increases the rejection ratio (or, equivalently, relaxes requirements on the wavefront quality) by reducing the sensitivity to small wavefront errors. We show theoretically that, exclusive of coupling losses, the use of a modal filter leads to the improvement of the rejection ratio in a two-beam interferometer by a factor of Q(sub f).
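
    Restating the relations given in the abstract in compact form (the symbols below are our own shorthand, not the authors' notation):

```latex
% Q_f is the modal-filter quality and R the two-beam rejection ratio.
\[
  Q_f \;=\; \frac{T_{\text{fund}}}{T_{\perp}},
  \qquad
  R_{\text{filtered}} \;\approx\; Q_f \, R_{\text{unfiltered}}
  \quad \text{(exclusive of coupling losses)},
\]
% where T_fund is the transmittance for the fundamental mode and T_perp the
% transmittance for an input field with no overlap with the fundamental mode;
% for the fiber tested, Q_f ~ 1000 +/- 300 at 10.5 microns.
```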

  14. The effect of clock, media, and station location errors on Doppler measurement accuracy

    NASA Technical Reports Server (NTRS)

    Miller, J. K.

    1993-01-01

    Doppler tracking by the Deep Space Network (DSN) is the primary radio metric data type used by navigation to determine the orbit of a spacecraft. The accuracy normally attributed to orbits determined exclusively with Doppler data is about 0.5 microradians in geocentric angle. Recently, the Doppler measurement system has evolved to a high degree of precision primarily because of tracking at X-band frequencies (7.2 to 8.5 GHz). However, the orbit determination system has not been able to fully utilize this improved measurement accuracy because of calibration errors associated with transmission media, the location of tracking stations on the Earth's surface, the orientation of the Earth as an observing platform, and timekeeping. With the introduction of Global Positioning System (GPS) data, it may be possible to remove a significant error associated with the troposphere. In this article, the effect of various calibration errors associated with transmission media, Earth platform parameters, and clocks are examined. With the introduction of GPS calibrations, it is predicted that a Doppler tracking accuracy of 0.05 microradians is achievable.

  15. Reward prediction error signal enhanced by striatum-amygdala interaction explains the acceleration of probabilistic reward learning by emotion.

    PubMed

    Watanabe, Noriya; Sakagami, Masamichi; Haruno, Masahiko

    2013-03-06

    Learning does not only depend on rationality, because real-life learning cannot be isolated from emotion or social factors. Therefore, it is intriguing to determine how emotion changes learning, and to identify which neural substrates underlie this interaction. Here, we show that the task-independent presentation of an emotional face before a reward-predicting cue increases the speed of cue-reward association learning in human subjects compared with trials in which a neutral face is presented. This phenomenon was attributable to an increase in the learning rate, which regulates reward prediction errors. Parallel to these behavioral findings, functional magnetic resonance imaging demonstrated that presentation of an emotional face enhanced reward prediction error (RPE) signal in the ventral striatum. In addition, we also found a functional link between this enhanced RPE signal and increased activity in the amygdala following presentation of an emotional face. Thus, this study revealed an acceleration of cue-reward association learning by emotion, and underscored a role of striatum-amygdala interactions in the modulation of the reward prediction errors by emotion.
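
    A minimal Rescorla-Wagner-style sketch of the reported effect is given below: the reward prediction error delta = r - V drives the value update, and an emotional-face trial is modelled simply as a larger learning rate. All parameter values are illustrative assumptions, not fits to the fMRI study.

```python
# Minimal sketch: the RPE delta = r - V updates the cue value V, and an
# emotional-face trial is modelled as a larger learning rate. Values assumed.
import random

def learn(alpha, p_reward=0.8, n_trials=60, seed=0):
    random.seed(seed)
    v, values = 0.0, []
    for _ in range(n_trials):
        r = 1.0 if random.random() < p_reward else 0.0
        delta = r - v            # reward prediction error
        v += alpha * delta       # value update scaled by the learning rate
        values.append(v)
    return values

neutral = learn(alpha=0.10)      # neutral-face trials
emotional = learn(alpha=0.25)    # emotional-face trials: faster learning

print("value after 20 trials -> neutral: %.2f, emotional: %.2f"
      % (neutral[19], emotional[19]))
```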

  16. Patient safety education to change medical students' attitudes and sense of responsibility.

    PubMed

    Roh, Hyerin; Park, Seok Ju; Kim, Taekjoong

    2015-01-01

    This study examined changes in the perceptions and attitudes as well as the sense of individual and collective responsibility in medical students after they received patient safety education. A three-day patient safety curriculum was implemented for third-year medical students shortly before entering their clerkship. Before and after training, we administered a questionnaire, which was analysed quantitatively. Additionally, we asked students to answer questions about their expected behaviours in response to two case vignettes. Their answers were analysed qualitatively. There was improvement in students' concepts of patient safety after training. Before training, they showed good comprehension of the inevitability of error, but most students blamed individuals for errors and expressed a strong sense of individual responsibility. After training, students increasingly attributed errors to system dysfunction and reported more self-confidence in speaking up about colleagues' errors. However, due to the hierarchical culture, students still described difficulties communicating with senior doctors. Patient safety education effectively shifted students' attitudes towards systems-based thinking and increased their sense of collective responsibility. Strategies for improving superior-subordinate communication within a hierarchical culture should be added to the patient safety curriculum.

  17. Motivational state controls the prediction error in Pavlovian appetitive-aversive interactions.

    PubMed

    Laurent, Vincent; Balleine, Bernard W; Westbrook, R Frederick

    2018-01-01

    Contemporary theories of learning emphasize the role of a prediction error signal in driving learning, but the nature of this signal remains hotly debated. Here, we used Pavlovian conditioning in rats to investigate whether primary motivational and emotional states interact to control prediction error. We initially generated cues that positively or negatively predicted an appetitive food outcome. We then assessed how these cues modulated aversive conditioning when a novel cue was paired with a foot shock. We found that a positive predictor of food enhances, whereas a negative predictor of that same food impairs, aversive conditioning. Critically, we also showed that the enhancement produced by the positive predictor is removed by reducing the value of its associated food. In contrast, the impairment triggered by the negative predictor remains insensitive to devaluation of its associated food. These findings provide compelling evidence that the motivational value attributed to a predicted food outcome can directly control appetitive-aversive interactions and, therefore, that motivational processes can modulate emotional processes to generate the final error term on which subsequent learning is based. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Misconduct accounts for the majority of retracted scientific publications

    PubMed Central

    Fang, Ferric C.; Steen, R. Grant; Casadevall, Arturo

    2012-01-01

    A detailed review of all 2,047 biomedical and life-science research articles indexed by PubMed as retracted on May 3, 2012 revealed that only 21.3% of retractions were attributable to error. In contrast, 67.4% of retractions were attributable to misconduct, including fraud or suspected fraud (43.4%), duplicate publication (14.2%), and plagiarism (9.8%). Incomplete, uninformative or misleading retraction announcements have led to a previous underestimation of the role of fraud in the ongoing retraction epidemic. The percentage of scientific articles retracted because of fraud has increased ∼10-fold since 1975. Retractions exhibit distinctive temporal and geographic patterns that may reveal underlying causes. PMID:23027971

  19. The advancement of the built environment research through employment of structural equation modeling (SEM)

    NASA Astrophysics Data System (ADS)

    Wasilah, S.; Fahmyddin, T.

    2018-03-01

    The employment of structural equation modeling (SEM) in research has attracted increasing attention among researchers in the built environment. There is a gap in understanding the attributes, application, and importance of this approach to data analysis in built-environment studies. This paper intends to provide a fundamental comprehension of the SEM method in data analysis, unveiling its attributes, employment, and significance, and presents cases for assessing associations among variables and constructs. The study draws on key literature to grasp the essence of SEM with regard to built-environment research. Better acknowledgment of this analytical tool may assist researchers in the built environment in analyzing data under complex research questions and in testing multivariate models in a single study.

  20. The effects of rectification and Global Positioning System errors on satellite image-based estimates of forest area

    Treesearch

    Ronald E. McRoberts

    2010-01-01

    Satellite image-based maps of forest attributes are of considerable interest and are used for multiple purposes such as international reporting by countries that have no national forest inventory and small area estimation for all countries. Construction of the maps typically entails, in part, rectifying the satellite images to a geographic coordinate system, observing...

  1. A comparison of FIA plot data derived from image pixels and image objects

    Treesearch

    Charles E. Werstak

    2012-01-01

    The use of Forest Inventory and Analysis (FIA) plot data for producing continuous and thematic maps of forest attributes (e.g., forest type, canopy cover, volume, and biomass) at the regional level from satellite imagery can be challenging due to differences in scale. Specifically, classification errors that may result from assumptions made between what the field data...

  2. Adjustments of individual-tree survival and diameter-growth equations to match whole-stand attributes

    Treesearch

    Quang V. Cao

    2010-01-01

    Individual-tree models are flexible and can perform well in predicting tree survival and diameter growth for a certain growing period. However, the resulting stand-level outputs often suffer from accumulation of errors and subsequently cannot compete with predictions from whole-stand models, especially when the projection period lengthens. Evaluated in this study were...

  3. Depth of Processing and Interference Effects in the Learning and Remembering of Sentences. Technical Report No. 21.

    ERIC Educational Resources Information Center

    Kane, Janet Hidde; Anderson, Richard C.

    In two experiments, college students who supplied the last words of sentences they read learned more than subjects who simply read whole sentences. This facilitation was observed even with a list of sentences which were almost always completed with the wrong words. However, proactive interference attributable to acquisition errors appeared on…

  4. A Hedonic Approach to Estimating Software Cost Using Ordinary Least Squares Regression and Nominal Attribute Variables

    DTIC Science & Technology

    2006-03-01

    included zero, there is insufficient evidence to indicate that the error mean is not zero. The Breusch-Pagan test was used to test the constant...Multicollinearity...Testing OLS Assumptions...programming styles used by developers (Stamelos and others, 2003:733). Kemerer tested to see how models utilizing SLOC as an independent variable

  5. Partitioning the Uncertainty in Estimates of Mean Basal Area Obtained from 10-year Diameter Growth Model Predictions

    Treesearch

    Ronald E. McRoberts

    2005-01-01

    Uncertainty in model-based predictions of individual tree diameter growth is attributed to three sources: measurement error for predictor variables, residual variability around model predictions, and uncertainty in model parameter estimates. Monte Carlo simulations are used to propagate the uncertainty from the three sources through a set of diameter growth models to...
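
    A toy version of this Monte Carlo propagation, under an assumed linear growth model and made-up uncertainty magnitudes, might look as follows; it draws from the three sources named above (predictor measurement error, parameter uncertainty, residual variability) and summarizes the spread of the resulting predictions.

```python
# Monte Carlo propagation of the three uncertainty sources named above through
# a toy diameter-growth model: growth = b0 + b1 * dbh + error. The model form,
# parameter covariance and error magnitudes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

beta_hat = np.array([0.8, 0.05])             # fitted intercept and slope
beta_cov = np.array([[0.01, -0.0002],        # parameter covariance estimate
                     [-0.0002, 0.00001]])
sigma_resid = 0.3                            # residual std. dev. (cm / 10 yr)
sigma_meas = 0.5                             # dbh measurement error std. dev. (cm)

dbh_obs = 25.0                               # observed diameter of one tree (cm)
n_sim = 10_000
preds = np.empty(n_sim)
for i in range(n_sim):
    dbh = dbh_obs + rng.normal(0, sigma_meas)               # 1. measurement error
    b0, b1 = rng.multivariate_normal(beta_hat, beta_cov)    # 2. parameter uncertainty
    preds[i] = b0 + b1 * dbh + rng.normal(0, sigma_resid)   # 3. residual variability

print("mean predicted growth: %.2f cm, std: %.2f cm" % (preds.mean(), preds.std()))
```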

  6. LIDAR forest inventory with single-tree, double- and single-phase procedures

    Treesearch

    Robert C. Parker; David L. Evans

    2009-01-01

    Light Detection and Ranging (LIDAR) data at 0.5- to 2-m postings were used with double-sample, stratified inventory procedures involving single-tree attribute relationships in mixed, natural, and planted species stands to yield sampling errors (one-half the confidence interval expressed as a percentage of the mean) ranging from ±2.1 percent to ±11.5...

  7. Creating an Anti-Bullying Culture in Secondary Schools: Characteristics to Consider When Constructing Appropriate Anti-Bullying Programs

    ERIC Educational Resources Information Center

    Jones, Joseph R.; Augustine, Sharon Murphy

    2015-01-01

    Bullying in schools is a tremendous challenge that many secondary educators are attempting to address within their school environments. However, educators are often unsure of the attributes of an effective anti-bullying program; thus, they tend to create programs on a "trial and error" basis. This article provides an overview of the…

  8. Corrigendum to "Nearest neighbor imputation of species-level, plot-scale forest structure attributes from LiDAR data"

    Treesearch

    Andrew T. Hudak; Nicholas L. Crookston; Jeffrey S. Evans; David E. hall; Michael J. Falkowski

    2009-01-01

    The authors regret that an error was discovered in the code within the R software package, yaImpute (Crookston & Finley, 2008), which led to incorrect results reported in the above article. The Most Similar Neighbor (MSN) method computes the distance between reference observations and target observations in a projected space defined using canonical correlation...

  9. The Mote In Thy Brother's Eye, and The Beam in Thine Own: Predicting One's Own and Others' Personality Test Scores.

    ERIC Educational Resources Information Center

    Furnham, Adrian; Henderson, Monika

    1983-01-01

    Examined the similarity between subjects' (N=63) ratings of themselves and others, on various tests of personality. Results revealed that subjects correctly estimated several of their own scores, but only two scores of another person. They believed themselves to be similar to their friend, thereby showing attributional errors. (JAC)

  10. Individual differences in the calibration of trust in automation.

    PubMed

    Pop, Vlad L; Shrewsbury, Alex; Durso, Francis T

    2015-06-01

    The objective was to determine whether operators with an expectancy that automation is trustworthy are better at calibrating their trust to changes in the capabilities of automation, and if so, why. Studies suggest that individual differences in automation expectancy may be able to account for why changes in the capabilities of automation lead to a substantial change in trust for some, yet only a small change for others. In a baggage screening task, 225 participants searched for weapons in 200 X-ray images of luggage. Participants were assisted by an automated decision aid exhibiting different levels of reliability. Measures of expectancy that automation is trustworthy were used in conjunction with subjective measures of trust and perceived reliability to identify individual differences in trust calibration. Operators with high expectancy that automation is trustworthy were more sensitive to changes (both increases and decreases) in automation reliability. This difference was eliminated by manipulating the causal attribution of automation errors. Attributing the cause of automation errors to factors external to the automation fosters an understanding of tasks and situations in which automation differs in reliability and may lead to more appropriate trust. The development of interventions can lead to calibrated trust in automation. © 2014, Human Factors and Ergonomics Society.

  11. Vehicle detection from very-high-resolution (VHR) aerial imagery using attribute belief propagation (ABP)

    NASA Astrophysics Data System (ADS)

    Wang, Yanli; Li, Ying; Zhang, Li; Huang, Yuchun

    2016-10-01

    With the popularity of very-high-resolution (VHR) aerial imagery, the shape, color, and context attributes of vehicles are better characterized. Due to varying road surroundings and imaging conditions, vehicle attributes can be adversely affected, so that vehicles are mistakenly detected or missed. This paper aims to robustly extract rich attribute features for detecting vehicles in VHR imagery under different scenarios. Based on the hierarchical component tree of vehicle context, attribute belief propagation (ABP) is proposed to detect salient vehicles from the statistical perspective. With the Max-tree data structure, the multi-level component tree around the road network is efficiently created. The spatial relationship between a vehicle and its surrounding context is established with the belief definition of the vehicle attribute. To effectively correct single-level belief errors, inter-level belief linkages enforce consistency of belief assignment between corresponding components at different levels. ABP starts from an initial set of vehicle beliefs calculated from vehicle attributes, and then iterates through each component, applying inter-level belief passing until convergence. The optimal vehicle belief of each component is obtained by iteratively minimizing its belief function. The proposed algorithm is tested on a diverse set of VHR imagery acquired in city and inter-city areas of western and southern China. Experimental results show that the proposed algorithm detects vehicles efficiently and suppresses erroneous detections effectively. The proposed ABP framework is promising for robustly classifying vehicles from VHR aerial imagery.

  12. The Diagnostic Accuracy of Incisional Biopsy in the Oral Cavity.

    PubMed

    Chen, Sara; Forman, Michael; Sadow, Peter M; August, Meredith

    2016-05-01

    To determine the accuracy of incisional biopsy examination to diagnose oral lesions. This retrospective cohort study was performed to determine the concordance rate between incisional biopsy examination and definitive resection diagnosis for different oral lesions. The study sample was derived from the population of patients who presented to the Department of Oral and Maxillofacial Surgery, Massachusetts General Hospital (Boston, MA) from January 2005 through December 2012. Inclusion criteria were the diagnosis of an oral lesion from an incisional biopsy examination, subsequent diagnosis from the definitive resection of the same lesion, and complete clinical and pathologic patient records. The predictor variables were the origin and size of the lesion. The primary outcome variable was concordance between the provisional incisional biopsy diagnosis and definitive pathologic resection diagnosis. The secondary outcome variable was type of biopsy error for the discordant cases. Incisional biopsy errors were assessed and grouped into 5 categories: 1) sampling error; 2) insufficient tissue for diagnosis; 3) presence of inflammation making diagnosis difficult; 4) artifact; and 5) pathologist discordance. A total of 272 patients met the inclusion criteria. The study sample had a mean age of 47.4 years and 55.7% were women. Of these cases, 242 (88.9%) were concordant when comparing the biopsy and final resection pathology reports. At histologic evaluation, 60.0% of discordant findings were attributed to sampling error, 23.3% to pathologist discrepancy, 13.3% to insufficient tissue provided in the biopsy specimen, and 3.4% to inflammation obscuring diagnosis. Overall, concordant cases had a larger average biopsy volume (1.53 cm(3)) than discordant cases (0.42 cm(3)). The data collected indicate an 88.9% diagnostic concordance with final pathologic results for incisional oral biopsy diagnoses. Sixty percent of discordance was attributed to sampling error when sampled tissue was not representative of the lesion in toto. Multiple-site biopsy specimens and larger-volume samples allowed for a more accurate diagnosis. Copyright © 2016 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  13. The cerebellum does more than sensory prediction error-based learning in sensorimotor adaptation tasks.

    PubMed

    Butcher, Peter A; Ivry, Richard B; Kuo, Sheng-Han; Rydz, David; Krakauer, John W; Taylor, Jordan A

    2017-09-01

    Individuals with damage to the cerebellum perform poorly in sensorimotor adaptation paradigms. This deficit has been attributed to impairment in sensory prediction error-based updating of an internal forward model, a form of implicit learning. These individuals can, however, successfully counter a perturbation when instructed with an explicit aiming strategy. This successful use of an instructed aiming strategy presents a paradox: In adaptation tasks, why do individuals with cerebellar damage not come up with an aiming solution on their own to compensate for their implicit learning deficit? To explore this question, we employed a variant of a visuomotor rotation task in which, before executing a movement on each trial, the participants verbally reported their intended aiming location. Compared with healthy control participants, participants with spinocerebellar ataxia displayed impairments in both implicit learning and aiming. This was observed when the visuomotor rotation was introduced abruptly (experiment 1) or gradually (experiment 2). This dual deficit does not appear to be related to the increased movement variance associated with ataxia: Healthy undergraduates showed little change in implicit learning or aiming when their movement feedback was artificially manipulated to produce similar levels of variability (experiment 3). Taken together, the results indicate that a consequence of cerebellar dysfunction is not only impaired sensory prediction error-based learning but also a difficulty in developing and/or maintaining an aiming solution in response to a visuomotor perturbation. We suggest that this dual deficit can be explained by the cerebellum forming part of a network that learns and maintains action-outcome associations across trials. NEW & NOTEWORTHY Individuals with cerebellar pathology are impaired in sensorimotor adaptation. This deficit has been attributed to an impairment in error-based learning, specifically, from a deficit in using sensory prediction errors to update an internal model. Here we show that these individuals also have difficulty in discovering an aiming solution to overcome their adaptation deficit, suggesting a new role for the cerebellum in sensorimotor adaptation tasks. Copyright © 2017 the American Physiological Society.

  14. Learning a locomotor task: with or without errors?

    PubMed

    Marchal-Crespo, Laura; Schneider, Jasmin; Jaeger, Lukas; Riener, Robert

    2014-03-04

    Robotic haptic guidance is the most commonly used robotic training strategy to reduce performance errors while training. However, research on motor learning has emphasized that errors are a fundamental neural signal that drives motor adaptation. Thus, researchers have proposed robotic therapy algorithms that amplify movement errors rather than decrease them. However, to date, no study has analyzed with precision which training strategy is the most appropriate to learn an especially simple task. In this study, the impact of robotic training strategies that amplify or reduce errors on muscle activation and motor learning of a simple locomotor task was investigated in twenty-two healthy subjects. The experiment was conducted with the MAgnetic Resonance COmpatible Stepper (MARCOS), a special robotic device developed for investigations in the MR scanner. The robot moved the dominant leg passively and the subject was requested to actively synchronize the non-dominant leg to achieve an alternating stepping-like movement. Learning with four different training strategies that reduce or amplify errors was evaluated: (i) Haptic guidance: errors were eliminated by passively moving the limbs, (ii) No guidance: no robot disturbances were presented, (iii) Error amplification: existing errors were amplified with repulsive forces, (iv) Noise disturbance: errors were evoked intentionally with a randomly-varying force disturbance on top of the no guidance strategy. Additionally, the activation of four lower limb muscles was measured by means of surface electromyography (EMG). Strategies that reduce or do not amplify errors limit muscle activation during training and result in poor learning gains. Adding random disturbing forces during training seems to increase attention, and therefore improve motor learning. Error amplification seems to be the most suitable strategy for initially less skilled subjects, perhaps because subjects could better detect their errors and correct them. Error strategies have a great potential to evoke higher muscle activation and provoke better motor learning of simple tasks. Neuroimaging evaluation of brain regions involved in learning can provide valuable information on observed behavioral outcomes related to learning processes. The impacts of these strategies on neurological patients need further investigation.

  15. Spherical harmonic representation of the main geomagnetic field for world charting and investigations of some fundamental problems of physics and geophysics

    NASA Technical Reports Server (NTRS)

    Barraclough, D. R.; Hide, R.; Leaton, B. R.; Lowes, F. J.; Malin, S. R. C.; Wilson, R. L. (Principal Investigator)

    1981-01-01

    Quiet-day data from MAGSAT were examined for effects which might test the validity of Maxwell's equations. Both external and toroidal fields which might represent a violation of the equations appear to exist, well within the associated errors. The external field might be associated with the ring current, and varies on a time-scale of one day or less. Its orientation is parallel to the geomagnetic dipole. The toroidal field can be confused with an orientation error (in yaw). If the toroidal field really exists, it can be related either to ionospheric currents, or to toroidal fields in the Earth's core in accordance with Einstein's unified field theory, or to both.

  16. A formal theory of feature binding in object perception.

    PubMed

    Ashby, F G; Prinzmetal, W; Ivry, R; Maddox, W T

    1996-01-01

    Visual objects are perceived correctly only if their features are identified and then bound together. Illusory conjunctions result when feature identification is correct but an error occurs during feature binding. A new model is proposed that assumes feature binding errors occur because of uncertainty about the location of visual features. This model accounted for data from 2 new experiments better than a model derived from A. M. Treisman and H. Schmidt's (1982) feature integration theory. The traditional method for detecting the occurrence of true illusory conjunctions is shown to be fundamentally flawed. A reexamination of 2 previous studies provided new insights into the role of attention and location information in object perception and a reinterpretation of the deficits in patients who exhibit attentional disorders.

  17. Kitchen Physics: Lessons in Fluid Pressure and Error Analysis

    NASA Astrophysics Data System (ADS)

    Vieyra, Rebecca Elizabeth; Vieyra, Chrystian; Macchia, Stefano

    2017-02-01

    Although the advent and popularization of the "flipped classroom" tends to center around at-home video lectures, teachers are increasingly turning to at-home labs for enhanced student engagement. This paper describes two simple at-home experiments that can be accomplished in the kitchen. The first experiment analyzes the density of four liquids using a waterproof case and a smartphone barometer in a container, sink, or tub. The second experiment determines the relationship between pressure and temperature of an ideal gas in a constant volume container placed momentarily in a refrigerator freezer. These experiences provide a ripe opportunity both for learning fundamental physics concepts as well as to investigate a variety of error analysis techniques that are frequently overlooked in introductory physics courses.
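
    For the first experiment, the analysis reduces to a linear fit of hydrostatic pressure against depth, p = p0 + rho*g*h, with the liquid density recovered from the slope. The sketch below uses made-up barometer readings to show the calculation.

```python
# Sketch of the density analysis for the first experiment: hydrostatic pressure
# grows linearly with depth, p = p0 + rho * g * h, so the liquid density follows
# from the slope of a linear fit. The depth/pressure readings are made up.
import numpy as np

g = 9.81                                   # m/s^2
depth_m = np.array([0.00, 0.05, 0.10, 0.15, 0.20])
pressure_pa = np.array([101325, 101816, 102305, 102799, 103290])  # barometer app

slope, intercept = np.polyfit(depth_m, pressure_pa, 1)
density = slope / g
print("estimated density: %.0f kg/m^3" % density)   # ~1000 kg/m^3 for water
```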

  18. Applying Metrological Techniques to Satellite Fundamental Climate Data Records

    NASA Astrophysics Data System (ADS)

    Woolliams, Emma R.; Mittaz, Jonathan PD; Merchant, Christopher J.; Hunt, Samuel E.; Harris, Peter M.

    2018-02-01

    Quantifying long-term environmental variability, including climatic trends, requires decadal-scale time series of observations. The reliability of such trend analysis depends on the long-term stability of the data record and on understanding the sources of uncertainty in historic, current and future sensors. We give a brief overview of how metrological techniques can be applied to historical satellite data sets. In particular, we discuss the implications of error correlation at different spatial and temporal scales and the forms of such correlation, and consider how uncertainty is propagated under partial correlation. We give a form of the Law of Propagation of Uncertainties that considers the propagation of uncertainties associated with common errors to give the covariance associated with Earth observations in different spectral channels.
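
    The covariance statement above rests on the general law of propagation of uncertainty including correlation terms; a standard form (our notation, not necessarily the authors') is:

```latex
% General law of propagation of uncertainty with the correlation term.
\[
  u_c^2(y) \;=\; \sum_{i=1}^{N} \left(\frac{\partial f}{\partial x_i}\right)^2 u^2(x_i)
  \;+\; 2 \sum_{i=1}^{N-1}\sum_{j=i+1}^{N}
        \frac{\partial f}{\partial x_i}\,\frac{\partial f}{\partial x_j}\,
        u(x_i)\, u(x_j)\, r(x_i, x_j),
\]
% where y = f(x_1, ..., x_N) and r(x_i, x_j) is the error-correlation
% coefficient; with r != 0 for common (systematic) errors, the analogous
% expression evaluated for pairs of channels yields the covariance between them.
```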

  19. Quantum stopwatch: how to store time in a quantum memory.

    PubMed

    Yang, Yuxiang; Chiribella, Giulio; Hayashi, Masahito

    2018-05-01

    Quantum mechanics imposes a fundamental trade-off between the accuracy of time measurements and the size of the systems used as clocks. When the measurements of different time intervals are combined, the errors due to the finite clock size accumulate, resulting in an overall inaccuracy that grows with the complexity of the set-up. Here, we introduce a method that, in principle, eludes the accumulation of errors by coherently transferring information from a quantum clock to a quantum memory of the smallest possible size. Our method could be used to measure the total duration of a sequence of events with enhanced accuracy, and to reduce the amount of quantum communication needed to stabilize clocks in a quantum network.

  20. Digital data detection and synchronization

    NASA Technical Reports Server (NTRS)

    Noack, T. L.; Morris, J. F.

    1973-01-01

    The primary accomplishments have been in the analysis and simulation of receivers and bit synchronizers. It has been discovered that tracking rate effects play a rather fundamental role in both receiver and synchronizer performance, but that data relating to recorder time-base error, for the proper characterization of this phenomenon, is in rather short supply. It is possible to obtain operationally useful tape recorder time-base-error data from high signal-to-noise ratio tapes using synchronizers with relatively wideband tracking loops. Low signal-to-noise ratio tapes examined in the same way would not be synchronizable. Additional areas of interest covered are receiver false lock, cycle slipping, and other unusual phenomena, which have been described to some extent in this and earlier reports and simulated during the study.

  1. LiDAR-based TWI and terrain attributes in improving parametric predictor for tree growth in southeast Finland

    NASA Astrophysics Data System (ADS)

    Mohamedou, Cheikh; Tokola, Timo; Eerikäinen, Kalle

    2017-10-01

    The effect of soil moisture content on vegetation and therefore on growth is well known. Information about the growth of forest stands is key in forest planning and management, and is the concern of various stakeholders. One way to assess moisture content and its impacts on forest growth is to apply the Topographic Wetness Index (TWI) and the derived terrain attributes from the Digital Terrain Model (DTM). The TWI is an important terrain attribute, used in various ecological studies. In the current study, a total of 9987 tally trees within 197 sample plots in southeastern Finland and LiDAR (Light Detection and Ranging) -based TWI were selected to examine: 1) the effect of cell resolutions and focal statistics of neighborhood cells of DTM, on tree diameter increment, and 2) possibilities to improve the prediction accuracy of an existing single-tree growth model using the terrain attributes and TWI with the combined effects of three characteristics (i.e., cell resolutions, neighborhood cells and terrain attributes). The results suggest that the TWI with terrain attributes improved the growth estimation significantly, and within different site types the Root Mean Square Errors (RMSE) were lowered substantially. The best results were obtained for birch trees. The higher resolution of the DTM and the lower focal neighborhood cells were found to be the best alternative in computing the TWI.
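
    The TWI referred to above is conventionally computed as TWI = ln(a / tan(beta)), with a the specific upslope contributing area and beta the local slope; the helper below (the names and the flat-cell guard are ours) shows the computation once those terms have been derived from the LiDAR DTM.

```python
# Conventional Topographic Wetness Index: TWI = ln(a / tan(beta)), with a the
# specific upslope contributing area and beta the local slope. Names and the
# epsilon guard are our choices for illustration.
import numpy as np

def topographic_wetness_index(specific_area, slope_rad, eps=1e-6):
    """TWI = ln(a / tan(beta)); eps avoids division by zero on flat cells."""
    return np.log(specific_area / (np.tan(slope_rad) + eps))

# Example: a few cells with contributing areas (m^2/m) and slopes (radians).
a = np.array([12.0, 150.0, 2400.0])
beta = np.radians([12.0, 5.0, 1.0])
print(topographic_wetness_index(a, beta))
```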

  2. Errors in Postural Preparation Lead to Increased Choice Reaction Times for Step Initiation in Older Adults

    PubMed Central

    Nutt, John G.; Horak, Fay B.

    2011-01-01

    Background. This study asked whether older adults were more likely than younger adults to err in the initial direction of their anticipatory postural adjustment (APA) prior to a step (indicating a motor program error), whether initial motor program errors accounted for reaction time differences for step initiation, and whether initial motor program errors were linked to inhibitory failure. Methods. In a stepping task with choice reaction time and simple reaction time conditions, we measured forces under the feet to quantify APA onset and step latency and we used body kinematics to quantify forward movement of center of mass and length of first step. Results. Trials with APA errors were almost three times as common for older adults as for younger adults, and they were nine times more likely in choice reaction time trials than in simple reaction time trials. In trials with APA errors, step latency was delayed, correlation between APA onset and step latency was diminished, and forward motion of the center of mass prior to the step was increased. Participants with more APA errors tended to have worse Stroop interference scores, regardless of age. Conclusions. The results support the hypothesis that findings of slow choice reaction time step initiation in older adults are attributable to inclusion of trials with incorrect initial motor preparation and that these errors are caused by deficits in response inhibition. By extension, the results also suggest that mixing of trials with correct and incorrect initial motor preparation might explain apparent choice reaction time slowing with age in upper limb tasks. PMID:21498431

  3. Clinical biochemistry laboratory rejection rates due to various types of preanalytical errors.

    PubMed

    Atay, Aysenur; Demir, Leyla; Cuhadar, Serap; Saglam, Gulcan; Unal, Hulya; Aksun, Saliha; Arslan, Banu; Ozkan, Asuman; Sutcu, Recep

    2014-01-01

    Preanalytical errors, arising along the process from the initial test request to the admission of specimens to the laboratory, cause the rejection of samples. The aim of this study was to better explain the reasons for rejected samples with regard to their rates in certain test groups in our laboratory. This preliminary study examined the samples rejected over a one-year period, based on the rates and types of inappropriateness. Test requests and blood samples of the clinical chemistry, immunoassay, hematology, glycated hemoglobin, coagulation and erythrocyte sedimentation rate test units were evaluated. Types of inappropriateness were evaluated as follows: improperly labelled samples, hemolysed specimens, clotted specimens, insufficient specimen volume and total request errors. A total of 5,183,582 test requests from 1,035,743 blood collection tubes were considered. The total rejection rate was 0.65%. The rejection rate of the coagulation group was significantly higher (2.28%) than that of the other test groups (P < 0.001), including an insufficient-specimen-volume error rate of 1.38%. Rejection rates due to hemolysis, clotted specimens and insufficient specimen volume were found to be 8%, 24% and 34%, respectively. Total request errors, particularly unintelligible requests, accounted for 32% of the total for inpatients. The errors were especially attributable to unintelligible or inappropriate test requests, improperly labelled samples for inpatients, and blood-drawing errors, especially insufficient specimen volume in the coagulation test group.

  4. Pathogen Populations Evolve to Greater Race Complexity in Agricultural Systems – Evidence from Analysis of Rhynchosporium secalis Virulence Data

    PubMed Central

    Zhan, Jiasui; Yang, Lina; Zhu, Wen; Shang, Liping; Newton, Adrian C.

    2012-01-01

    Fitness cost associated with pathogens carrying unnecessary virulence alleles is the fundamental assumption for preventing the emergence of complex races in plant pathogen populations but this hypothesis has rarely been tested empirically on a temporal and spatial scale which is sufficient to distinguish evolutionary signals from experimental error. We analyzed virulence characteristics of ∼1000 isolates of the barley pathogen Rhynchosporium secalis collected from different parts of the United Kingdom between 1984 and 2005. We found a gradual increase in race complexity over time with a significant correlation between sampling date and race complexity of the pathogen (r20 = 0.71, p = 0.0002) and an average loss of 0.1 avirulence alleles (corresponding to an average gain of 0.1 virulence alleles) each year. We also found a positive and significant correlation between barley cultivar diversity and R. secalis virulence variation. The conditions assumed to favour complex races were not present in the United Kingdom and we hypothesize that the increase in race complexity is attributable to the combination of natural selection and genetic drift. Host resistance selects for corresponding virulence alleles to fixation or dominant frequency. Because of the weak fitness penalty of carrying the unnecessary virulence alleles, genetic drift associated with other evolutionary forces such as hitch-hiking maintains the frequency of the dominant virulence alleles even after the corresponding resistance factors cease to be used. PMID:22723870

  5. Training considerations for the intracoelomic implantation of electronic tags in fish with a summary of common surgical errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cooke, Steven J.; Wagner, Glenn N.; Brown, Richard S.

    2011-01-01

    Training is a fundamental part of all scientific and technical disciplines. This is particularly true for all types of surgeons. For surgical procedures, a number of skills are necessary to reduce mistakes. Trainees must learn an extensive yet standardized set of problem-solving and technical skills to handle challenges as they arise. There are currently no guidelines or consistent training methods for those intending to implant electronic tags in fish; this is surprising, considering documented cases of negative consequences of fish surgeries and information from studies having empirically tested fish surgical techniques. Learning how to do fish surgery once is insufficient for ensuring the maintenance or improvement of surgical skill. Assessment of surgical skills is rarely incorporated into training, and is needed. Evaluation provides useful feedback that guides future learning, fosters habits of self-reflection and self-remediation, and promotes access to advanced training. Veterinary professionals should be involved in aspects of training to monitor basic surgical principles. We identified attributes related to knowledge, understanding, and skill that surgeons must demonstrate prior to performing fish surgery including a “hands-on” assessment using live fish. Included is a summary of common problems encountered by fish surgeons. We conclude by presenting core competencies that should be required as well as outlining a 3-day curriculum for training surgeons to conduct intracoelomic implantation of electronic tags. This curriculum could be offered through professional fisheries societies as professional development courses.

  6. How psychotherapists handle treatment errors – an ethical analysis

    PubMed Central

    2013-01-01

    Background Dealing with errors in psychotherapy is challenging, both ethically and practically. There is almost no empirical research on this topic. We aimed (1) to explore psychotherapists’ self-reported ways of dealing with an error made by themselves or by colleagues, and (2) to reconstruct their reasoning according to the two principle-based ethical approaches that are dominant in the ethics discourse of psychotherapy, Beauchamp & Childress (B&C) and Lindsay et al. (L). Methods We conducted 30 semi-structured interviews with 30 psychotherapists (physicians and non-physicians) and analysed the transcripts using qualitative content analysis. Answers were deductively categorized according to the two principle-based ethical approaches. Results Most psychotherapists reported that they preferred to disclose an error to the patient. They justified this by spontaneous intuitions and common values in psychotherapy, rarely using explicit ethical reasoning. The answers were attributed to the following categories with descending frequency: 1. Respect for patient autonomy (B&C; L), 2. Non-maleficence (B&C) and Responsibility (L), 3. Integrity (L), 4. Competence (L) and Beneficence (B&C). Conclusions Psychotherapists need specific ethical and communication training to complement and articulate their moral intuitions as a support when disclosing their errors to the patients. Principle-based ethical approaches seem to be useful for clarifying the reasons for disclosure. Further research should help to identify the most effective and acceptable ways of error disclosure in psychotherapy. PMID:24321503

  7. A variable acceleration calibration system

    NASA Astrophysics Data System (ADS)

    Johnson, Thomas H.

    2011-12-01

    A variable acceleration calibration system that applies loads using gravitational and centripetal acceleration serves as an alternative, efficient and cost-effective method for calibrating internal wind tunnel force balances. Two proof-of-concept variable acceleration calibration systems are designed, fabricated and tested. The NASA UT-36 force balance served as the test balance for the calibration experiments. The variable acceleration calibration systems are shown to be capable of performing three component calibration experiments with an approximate applied load error on the order of 1% of the full scale calibration loads. Sources of error are identified using experimental design methods and a propagation of uncertainty analysis. Three types of uncertainty are identified for the systems and are attributed to prediction error, calibration error and pure error. Angular velocity uncertainty is shown to be the largest identified source of prediction error. The calibration uncertainties using a production variable acceleration based system are shown to be potentially equivalent to current methods. The production quality system can be realized using lighter materials and more precise instrumentation. Further research is needed to account for balance deflection, forcing effects due to vibration, and large tare loads. A gyroscope measurement technique is shown to be capable of resolving the balance deflection angle calculation. Long term research objectives include a demonstration of a six degree of freedom calibration, and a large capacity balance calibration.

  8. Indicators of wildness: using attributes of the land to assess the context of wilderness

    Treesearch

    Gregory Aplet; Janice Thomson; Mark Wilbert

    2000-01-01

    Land can be described in a space defined by two fundamental qualities: naturalness and freedom. The axis of naturalness describes the wholeness of the ecosystem relative to a historical norm, while the axis of freedom describes the degree to which land remains outside of human control. Some land can be natural but not free, and vice versa, but the most natural and free...

  9. Whose drag is it anyway? Drag kings and monarchy in the UK.

    PubMed

    Willox, Annabelle

    2002-01-01

    This chapter will show that the term "drag" in drag queen has a different meaning, history and value to the term "drag" in drag king. By exposing this basic, yet fundamental, difference, this paper will expose the problems inherent in the assumption of parity between the two forms of drag. An exposition of how camp has been used to comprehend and theorise drag queens will facilitate an understanding of the parasitic interrelationship between camp and drag queen performances, while a critique of "Towards a Butch-Femme Aesthetic," by Sue Ellen Case, will point out the problematic assumptions made about camp when attributed to a cultural location different to the drag queen. By interrogating the historical, cultural and theoretical similarities and differences between drag kings, butches, drag queens and femmes, this paper will expose the flawed assumption that camp can be attributed to all of the above without proviso, and hence expose why drag has a fundamentally different contextual meaning for kings and queens. This chapter will conclude by examining the work of both Judith Halberstam and Biddy Martin and the practical examples of drag king and queen performances provided at the UK drag contest held at The Fridge in Brixton, London on 23 June 1999.

  10. Accurate Magnetometer/Gyroscope Attitudes Using a Filter with Correlated Sensor Noise

    NASA Technical Reports Server (NTRS)

    Sedlak, J.; Hashmall, J.

    1997-01-01

    Magnetometers and gyroscopes have been shown to provide very accurate attitudes for a variety of spacecraft. These results have been obtained, however, using a batch-least-squares algorithm and long periods of data. For use in onboard applications, attitudes are best determined using sequential estimators such as the Kalman filter. When a filter is used to determine attitudes using magnetometer and gyroscope data as input, the resulting accuracy is limited by both the sensor accuracies and errors inherent in the Earth's magnetic field model. The Kalman filter accounts for the random component by modeling the magnetometer and gyroscope errors as white noise processes. However, even when these tuning parameters are physically realistic, the rate biases (included in the state vector) have been found to show systematic oscillations. These are attributed to the field model errors. If the gyroscope noise is sufficiently small, the tuned filter 'memory' will be long compared to the orbital period. In this case, the variations in the rate bias induced by field model errors are substantially reduced. Mistuning the filter to have a short memory time leads to strongly oscillating rate biases and increased attitude errors. To reduce the effect of the magnetic field model errors, these errors are estimated within the filter and used to correct the reference model. An exponentially-correlated noise model is used to represent the filter estimate of the systematic error. Results from several test cases using in-flight data from the Compton Gamma Ray Observatory are presented. These tests emphasize magnetometer errors, but the method is generally applicable to any sensor subject to a combination of random and systematic noise.
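
    The exponentially-correlated noise model referred to here is typically realized as a first-order Gauss-Markov process appended to the filter state vector. A minimal sketch under assumed parameters (the correlation time, steady-state sigma and time step below are illustrative, not values from the paper):

    ```python
    import numpy as np

    def simulate_gauss_markov(sigma, tau, dt, n_steps, seed=0):
        """Simulate a first-order Gauss-Markov (exponentially correlated) error process.

        sigma : steady-state standard deviation of the systematic error (assumption)
        tau   : correlation time of the error (assumption)
        """
        rng = np.random.default_rng(seed)
        phi = np.exp(-dt / tau)               # discrete-time state-transition factor
        q = sigma**2 * (1.0 - phi**2)         # process-noise variance preserving sigma
        x = np.zeros(n_steps)
        for k in range(1, n_steps):
            x[k] = phi * x[k - 1] + rng.normal(0.0, np.sqrt(q))
        return x

    # Example: a slowly varying field-model error with a correlation time of ~1500 s
    err = simulate_gauss_markov(sigma=100.0, tau=1500.0, dt=10.0, n_steps=5000)
    print(f"sample standard deviation = {err.std():.1f} (target steady-state sigma = 100.0)")
    ```

    In a Kalman filter the same phi and q would appear as the state-transition and process-noise entries for the augmented error state, allowing the filter to track and remove the slowly varying part of the field-model error.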

  11. Stakeholder analysis: the Andalusian Agency for Healthcare Quality case.

    PubMed

    Reyes-Alcázar, Víctor; Casas-Delgado, Marta; Herrera-Usagre, Manuel; Torres-Olivera, Antonio

    2012-01-01

    The aim of this study was to identify the different groups that can affect or be affected by an agency charged with promoting and guaranteeing health care quality in the Andalusian region (Spain) and to provide a framework that places the stakeholders in different categories. The study adopted a cross-sectional research design. A case study with structured interviews among Andalusian Agency for Healthcare Quality Steering Committee members was carried out in 2010 to define stakeholders' categories and map the interest groups using 5 attributes: influence, importance, legitimacy, power, and urgency. After identification and categorization, stakeholders were weighted qualitatively according to the attributes of importance and influence using 4 possible levels. A matrix was made with the collected data relating both attributes. Furthermore, 8 different types of stakeholders were identified according to the attributes of power, legitimacy, and urgency. The study concludes that identifying and classifying stakeholders are fundamental to ensuring the success of an organization that must respond to needs and expectations, especially those of its clients. Moreover, knowing stakeholder linkages can contribute to increasing organizational worth. This is essential for organizations primarily devoted to the provision of services in the scope of health care.

  12. A post-processing algorithm for time domain pitch trackers

    NASA Astrophysics Data System (ADS)

    Specker, P.

    1983-01-01

    This paper describes a powerful post-processing algorithm for time-domain pitch trackers. In two successive passes, the post-processing algorithm eliminates errors produced during the first pass by a time-domain pitch tracker. During the second pass, incorrect pitch values are detected as outliers by computing the distribution of values over a sliding 80 msec window. During the third pass (based on artificial intelligence techniques), remaining pitch pulses are used as anchor points to reconstruct the pitch train from the original waveform. The algorithm produced a decrease in the error rate from 21% obtained with the original time-domain pitch tracker to 2% for isolated words and sentences produced in an office environment by 3 male and 3 female talkers. In a noisy computer room, errors decreased from 52% to 2.9% for the same stimuli produced by 2 male talkers. The algorithm is efficient, accurate, and resistant to noise. The fundamental frequency micro-structure is tracked sufficiently well to be used in extracting phonetic features in a feature-based recognition system.
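
    The second-pass idea, flagging pitch values that fall outside the local distribution in a short sliding window, can be sketched roughly as follows; the window length in frames, the median/MAD statistic and the threshold are illustrative assumptions, not the paper's exact procedure:

    ```python
    import numpy as np

    def flag_pitch_outliers(pitch_hz, window=9, n_mad=3.0):
        """Flag pitch values that fall far outside the local distribution.

        Sketch only: examine the distribution of pitch values in a short sliding
        window (the paper uses 80 msec) and mark values that deviate strongly
        as likely tracking errors.
        """
        pitch_hz = np.asarray(pitch_hz, dtype=float)
        half = window // 2
        outliers = np.zeros(pitch_hz.size, dtype=bool)
        for i in range(pitch_hz.size):
            lo, hi = max(0, i - half), min(pitch_hz.size, i + half + 1)
            local = pitch_hz[lo:hi]
            med = np.median(local)
            mad = np.median(np.abs(local - med)) + 1e-9   # robust spread estimate
            if abs(pitch_hz[i] - med) > n_mad * 1.4826 * mad:
                outliers[i] = True
        return outliers

    # Octave errors (240 Hz, 60 Hz) embedded in an ~120 Hz track are flagged:
    track = [120, 121, 119, 240, 122, 120, 118, 60, 121, 119]
    print(flag_pitch_outliers(track, window=5))
    ```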

  13. Integrated and spectral energetics of the GLAS general circulation model

    NASA Technical Reports Server (NTRS)

    Tenenbaum, J.

    1982-01-01

    Integrated and spectral error energetics of the GLAS general circulation model are compared with observations for periods in January 1975, 1976, and 1977. For two cases, the model shows significant skill in predicting integrated energetics quantities out to two weeks, and for all three cases, the integrated monthly mean energetics show qualitative improvements over previous versions of the model in eddy kinetic energy and barotropic conversions. Fundamental difficulties remain with leakage of energy to the stratospheric level, particularly above strong initial jet streams associated in part with regions of steep terrain. The spectral error growth study represents the first comparison of general circulation model spectral energetics predictions with the corresponding observational spectra on a day-by-day basis. The major conclusion is that eddy kinetic energy can be correct while significant errors occur in the kinetic energy of wavenumber 3. Both the model and observations show evidence of single-wavenumber dominance in eddy kinetic energy and the correlation of spectral kinetic and potential energy.
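
    For context, the kinetic energy attributed to an individual zonal wavenumber can be obtained from a Fourier decomposition of the winds around a latitude circle. A minimal sketch under assumed conventions (1-D wind components sampled evenly in longitude; not the GLAS model's actual diagnostics):

    ```python
    import numpy as np

    def zonal_ke_spectrum(u, v):
        """Kinetic energy contribution of each zonal wavenumber on one latitude circle.

        Assumes u and v are 1-D arrays evenly spaced in longitude; returns ke[k]
        such that sum(ke) equals the zonal mean of 0.5 * (u**2 + v**2).
        """
        n = u.size
        U = np.fft.rfft(u) / n
        V = np.fft.rfft(v) / n
        ke = 0.5 * (np.abs(U)**2 + np.abs(V)**2)
        ke[1:] *= 2.0              # fold in the negative wavenumbers
        if n % 2 == 0:
            ke[-1] /= 2.0          # the Nyquist bin is not duplicated for even n
        return ke

    # Example: a flow dominated by zonal wavenumber 3
    lon = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
    u = 10.0 + 5.0 * np.cos(3 * lon)
    v = 5.0 * np.sin(3 * lon)
    spectrum = zonal_ke_spectrum(u, v)
    print(f"KE at wavenumber 3: {spectrum[3]:.2f}, zonal-mean KE: {spectrum.sum():.2f}")
    ```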

  14. Characteristics of advanced hydrogen maser frequency standards

    NASA Technical Reports Server (NTRS)

    Peters, H. E.

    1973-01-01

    Measurements with several operational atomic hydrogen maser standards have been made which illustrate the fundamental characteristics of the maser as well as the analysability of the corrections which are made to relate the oscillation frequency to the free, unperturbed, hydrogen standard transition frequency. Sources of the most important perturbations, and the magnitude of the associated errors, are discussed. A variable volume storage bulb hydrogen maser is also illustrated which can provide accuracy on the order of 2 parts in 10 to the 14th power or better in evaluating the wall shift. Since the other basic error sources combined contribute no more than approximately 1 part in 10 to the 14th power uncertainty, the variable volume storage bulb hydrogen maser will have a net intrinsic accuracy capability of the order of 2 parts in 10 to the 14th power or better. This is an order of magnitude less error than anticipated with cesium standards and is comparable to the basic limit expected for a free atom hydrogen beam resonance standard.

  15. Identifying and Correcting Timing Errors at Seismic Stations in and around Iran

    DOE PAGES

    Syracuse, Ellen Marie; Phillips, William Scott; Maceira, Monica; ...

    2017-09-06

    A fundamental component of seismic research is the use of phase arrival times, which are central to event location, Earth model development, and phase identification, as well as derived products. Hence, the accuracy of arrival times is crucial. However, errors in the timing of seismic waveforms and the arrival times based on them may go unidentified by the end user, particularly when seismic data are shared between different organizations. Here, we present a method used to analyze travel-time residuals for stations in and around Iran to identify time periods that are likely to contain station timing problems. For the 14 stations with the strongest evidence of timing errors lasting one month or longer, timing corrections are proposed to address the problematic time periods. Finally, two additional stations are identified with incorrect locations in the International Registry of Seismograph Stations, and one is found to have erroneously reported arrival times in 2011.
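
    One simple way to realize the kind of screening described here is to compare each station's monthly mean travel-time residual against its long-term baseline and flag months that shift by more than a chosen threshold. The sketch below uses hypothetical column names and thresholds; it is not the authors' actual procedure:

    ```python
    import numpy as np
    import pandas as pd

    def flag_timing_problems(picks, threshold_s=1.0, min_picks=20):
        """Flag months in which a station's mean travel-time residual shifts markedly.

        Schematic only: a sustained offset of the monthly mean residual from the
        station's long-term baseline is treated as a candidate clock error. The
        column names ('station', 'time', 'residual_s'), the 1 s threshold and the
        minimum pick count are assumptions.
        """
        picks = picks.copy()
        picks["month"] = picks["time"].dt.to_period("M")
        flagged = []
        for station, grp in picks.groupby("station"):
            baseline = grp["residual_s"].median()
            monthly = grp.groupby("month")["residual_s"].agg(["mean", "count"])
            bad = monthly[(np.abs(monthly["mean"] - baseline) > threshold_s)
                          & (monthly["count"] >= min_picks)]
            for month, row in bad.iterrows():
                flagged.append((station, str(month), row["mean"] - baseline))
        return flagged

    # Synthetic example: a 3 s clock error during May 2011 is recovered
    times = pd.date_range("2011-01-01", periods=300, freq="D")
    rng = np.random.default_rng(0)
    residuals = np.where(times.month == 5, 3.0, 0.0) + rng.normal(0.0, 0.2, times.size)
    picks = pd.DataFrame({"station": "ABC", "time": times, "residual_s": residuals})
    print(flag_timing_problems(picks))
    ```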

  16. An Approach for the Assessment of System Upset Resilience

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2013-01-01

    This report describes an approach for the assessment of upset resilience that is applicable to systems in general, including safety-critical, real-time systems. For this work, resilience is defined as the ability to preserve and restore service availability and integrity under stated conditions of configuration, functional inputs and environmental conditions. To enable a quantitative approach, we define novel system service degradation metrics and propose a new mathematical definition of resilience. These behavioral-level metrics are based on the fundamental service classification criteria of correctness, detectability, symmetry and persistence. This approach consists of a Monte-Carlo-based stimulus injection experiment, on a physical implementation or an error-propagation model of a system, to generate a system response set that can be characterized in terms of dimensional error metrics and integrated to form an overall measure of resilience. We expect this approach to be helpful in gaining insight into the error containment and repair capabilities of systems for a wide range of conditions.
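
    The Monte-Carlo-based stimulus injection experiment can be illustrated schematically: repeatedly apply stimuli, classify each response, and aggregate the classifications into a scalar score. The report's actual degradation metrics (correctness, detectability, symmetry, persistence) are richer than this; the sketch below collapses them into a simple correct/detected/undetected classification purely for illustration, and the system model is hypothetical:

    ```python
    import random

    def assess_resilience(system_fn, reference_fn, stimuli, n_trials=1000, seed=0):
        """Monte Carlo sketch of an upset-resilience assessment.

        Each trial is classified as 'correct', 'detected' or 'undetected', and the
        fraction of correct-or-detected responses is used as a stand-in resilience
        score (an assumption, not the report's metric).
        """
        rng = random.Random(seed)
        counts = {"correct": 0, "detected": 0, "undetected": 0}
        for _ in range(n_trials):
            stimulus = rng.choice(stimuli)
            expected = reference_fn(stimulus)
            observed, error_flagged = system_fn(stimulus, rng)
            if observed == expected:
                counts["correct"] += 1
            elif error_flagged:
                counts["detected"] += 1
            else:
                counts["undetected"] += 1
        resilience = (counts["correct"] + counts["detected"]) / n_trials
        return resilience, counts

    def reference(x):
        return 2 * x

    def faulty_system(x, rng):
        """Hypothetical system model: 5% upset rate, 80% error-detection coverage."""
        if rng.random() < 0.05:
            return 2 * x + 1, rng.random() < 0.8
        return 2 * x, False

    print(assess_resilience(faulty_system, reference, stimuli=list(range(100))))
    ```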

  17. Energy-efficient quantum computing

    NASA Astrophysics Data System (ADS)

    Ikonen, Joni; Salmilehto, Juha; Möttönen, Mikko

    2017-04-01

    In the near future, one of the major challenges in the realization of large-scale quantum computers operating at low temperatures is the management of harmful heat loads owing to thermal conduction of cabling and dissipation at cryogenic components. This naturally raises the question of what the fundamental limitations of energy consumption are in scalable quantum computing. In this work, we derive the greatest lower bound for the gate error induced by a single application of a bosonic drive mode of given energy. Previously, such an error type has been considered to be inversely proportional to the total driving power, but we show that this limitation can be circumvented by introducing a qubit driving scheme which reuses and corrects drive pulses. Specifically, our method serves to reduce the average energy consumption per gate operation without increasing the average gate error. Thus, our work shows that precise, scalable control of quantum systems can, in principle, be implemented without the introduction of excessive heat or decoherence.

  18. A highly accurate ab initio potential energy surface for methane.

    PubMed

    Owens, Alec; Yurchenko, Sergei N; Yachmenev, Andrey; Tennyson, Jonathan; Thiel, Walter

    2016-09-14

    A new nine-dimensional potential energy surface (PES) for methane has been generated using state-of-the-art ab initio theory. The PES is based on explicitly correlated coupled cluster calculations with extrapolation to the complete basis set limit and incorporates a range of higher-level additive energy corrections. These include core-valence electron correlation, higher-order coupled cluster terms beyond perturbative triples, scalar relativistic effects, and the diagonal Born-Oppenheimer correction. Sub-wavenumber accuracy is achieved for the majority of experimentally known vibrational energy levels with the four fundamentals of (12)CH4 reproduced with a root-mean-square error of 0.70 cm(-1). The computed ab initio equilibrium C-H bond length is in excellent agreement with previous values despite pure rotational energies displaying minor systematic errors as J (rotational excitation) increases. It is shown that these errors can be significantly reduced by adjusting the equilibrium geometry. The PES represents the most accurate ab initio surface to date and will serve as a good starting point for empirical refinement.
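
    The quoted figure is the root-mean-square (RMS) deviation between computed and observed band centers. A trivial illustration of the arithmetic, using placeholder residuals rather than the paper's actual values:

    ```python
    import numpy as np

    # Placeholder residuals (computed minus observed, in cm^-1) for the four
    # fundamentals of 12CH4; these are illustrative numbers, not the paper's data.
    residuals_cm1 = np.array([0.6, -0.8, 0.7, -0.6])

    rms = np.sqrt(np.mean(residuals_cm1**2))
    print(f"RMS error over the four fundamentals: {rms:.2f} cm^-1")
    ```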

  19. Delusions and prediction error: clarifying the roles of behavioural and brain responses

    PubMed Central

    Corlett, Philip Robert; Fletcher, Paul Charles

    2015-01-01

    Griffiths and colleagues provided a clear and thoughtful review of the prediction error model of delusion formation [Cognitive Neuropsychiatry, 2014 April 4 (Epub ahead of print)]. As well as reviewing the central ideas and concluding that the existing evidence base is broadly supportive of the model, they provide a detailed critique of some of the experiments that we have performed to study it. Though they conclude that the shortcomings that they identify in these experiments do not fundamentally challenge the prediction error model, we nevertheless respond to these criticisms. We begin by providing a more detailed outline of the model itself as there are certain important aspects of it that were not covered in their review. We then respond to their specific criticisms of the empirical evidence. We defend the neuroimaging contrasts that we used to explore this model of psychosis arguing that, while any single contrast entails some ambiguity, our assumptions have been justified by our extensive background work before and since. PMID:25559871

  20. Proof of Heisenberg's error-disturbance relation.

    PubMed

    Busch, Paul; Lahti, Pekka; Werner, Reinhard F

    2013-10-18

    While the slogan "no measurement without disturbance" has established itself under the name of the Heisenberg effect in the consciousness of the scientifically interested public, a precise statement of this fundamental feature of the quantum world has remained elusive, and serious attempts at rigorous formulations of it as a consequence of quantum theory have led to seemingly conflicting preliminary results. Here we show that despite recent claims to the contrary [L. Rozema et al., Phys. Rev. Lett. 109, 100404 (2012)], Heisenberg-type inequalities can be proven that describe a tradeoff between the precision of a position measurement and the necessary resulting disturbance of momentum (and vice versa). More generally, these inequalities are instances of an uncertainty relation for the imprecisions of any joint measurement of position and momentum. Measures of error and disturbance are here defined as figures of merit characteristic of measuring devices. As such, they are state independent, each giving worst-case estimates across all states, in contrast to previous work that is concerned with the relationship between error and disturbance in an individual state.
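
    For reference, the trade-off proven here takes the familiar Heisenberg form, with the error and disturbance understood as the state-independent, worst-case figures of merit defined for the measuring devices (a schematic rendering; the precise definitions are in the paper):

    ```latex
    % Schematic form of the error-disturbance trade-off:
    %   \varepsilon(Q) : worst-case error of the approximate position measurement
    %   \eta(P)        : worst-case disturbance imparted to momentum
    \varepsilon(Q)\,\eta(P) \;\ge\; \frac{\hbar}{2}
    ```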
