Sample records for classification error pace

  1. Assessing the statistical significance of the achieved classification error of classifiers constructed using serum peptide profiles, and a prescription for random sampling repeated studies for massive high-throughput genomic and proteomic studies.

    PubMed

    Lyons-Weiler, James; Pelikan, Richard; Zeh, Herbert J; Whitcomb, David C; Malehorn, David E; Bigbee, William L; Hauskrecht, Milos

    2005-01-01

    Peptide profiles generated using SELDI/MALDI time of flight mass spectrometry provide a promising source of patient-specific information with high potential impact on the early detection and classification of cancer and other diseases. The new profiling technology comes, however, with numerous challenges and concerns. Particularly important are concerns of reproducibility of classification results and their significance. In this work we describe a computational validation framework, called PACE (Permutation-Achieved Classification Error), that lets us assess, for a given classification model, the significance of the Achieved Classification Error (ACE) on the profile data. The framework compares the performance statistic of the classifier on true data samples and checks if these are consistent with the behavior of the classifier on the same data with randomly reassigned class labels. A statistically significant ACE increases our belief that a discriminative signal was found in the data. The advantage of PACE analysis is that it can be easily combined with any classification model and is relatively easy to interpret. PACE analysis does not protect researchers against confounding in the experimental design, or other sources of systematic or random error. We use PACE analysis to assess significance of classification results we have achieved on a number of published data sets. The results show that many of these datasets indeed possess a signal that leads to a statistically significant ACE.
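
    A minimal sketch of the permutation-testing idea behind PACE, assuming a scikit-learn style classifier and cross-validated error estimation (the specific classifier, error estimator, and permutation count are illustrative choices, not details taken from the paper):

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    def pace_p_value(X, y, n_permutations=1000, seed=0):
        """Estimate how often randomly relabelled data yields an error at least
        as low as the achieved classification error (ACE) on the true labels."""
        rng = np.random.default_rng(seed)
        clf = SVC(kernel="linear")  # placeholder; PACE can wrap any classification model
        ace = 1.0 - cross_val_score(clf, X, y, cv=5).mean()  # achieved classification error
        at_least_as_good = 0
        for _ in range(n_permutations):
            y_perm = rng.permutation(y)  # randomly reassign class labels
            err = 1.0 - cross_val_score(clf, X, y_perm, cv=5).mean()
            if err <= ace:
                at_least_as_good += 1
        p_value = (at_least_as_good + 1) / (n_permutations + 1)  # add-one smoothing
        return ace, p_value
    ```

    A small p-value supports the claim that a discriminative signal, rather than chance structure, produced the observed ACE.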

  2. Electrocardiograms with pacemakers: accuracy of computer reading.

    PubMed

    Guglin, Maya E; Datwani, Neeta

    2007-04-01

    We analyzed the accuracy with which a computer algorithm reads electrocardiograms (ECGs) with electronic pacemakers (PMs). Electrocardiograms were screened for the presence of electronic pacing spikes. Computer-derived interpretations were compared with cardiologists' readings. Computer-derived interpretations required revision by cardiologists in 61.3% of cases. In 18.4% of cases, the ECG reading algorithm failed to recognize the presence of a PM. The misinterpretation of paced beats as intrinsic beats led to multiple secondary errors, including erroneously reported myocardial infarctions of varying localization. The most common error in computer reading was the failure to identify an underlying rhythm. This error caused frequent misidentification of the PM type, especially when the presence of normal sinus rhythm was not recognized in a tracing with a DDD PM tracking the atrial activity. The increasing number of pacing devices, and the resulting number of ECGs with pacing spikes, mandates the refining of ECG reading algorithms. Improvement is especially needed in the recognition of the underlying rhythm, pacing spikes, and mode of pacing.

  3. The impact of weight classification on safety: timing steps to adapt to external constraints

    PubMed Central

    Gill, S.V.

    2015-01-01

    Objectives: The purpose of the current study was to evaluate how weight classification influences safety by examining adults’ ability to meet a timing constraint: walking to the pace of an audio metronome. Methods: With a cross-sectional design, walking parameters were collected as 55 adults with normal (n=30) and overweight (n=25) body mass index scores walked to slow, normal, and fast audio metronome paces. Results: Between group comparisons showed that at the fast pace, those with overweight body mass index (BMI) had longer double limb support and stance times and slower cadences than the normal weight group (all ps<0.05). Examinations of participants’ ability to meet the metronome paces revealed that participants who were overweight had higher cadences at the slow and fast paces (all ps<0.05). Conclusions: Findings suggest that those with overweight BMI alter their gait to maintain biomechanical stability. Understanding how excess weight influences gait adaptation can inform interventions to improve safety for individuals with obesity. PMID:25730658

  4. 78 FR 49272 - Circulatory System Devices Panel of the Medical Devices Advisory Committee; Notice of Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-13

    ... committee regarding classification of triple chamber pacing system analyzers (PSAs) with external pacing... chamber PSA is intended to be used during the implant procedure of pacemakers and defibrillators...

  5. A novel onset detection technique for brain-computer interfaces using sound-production related cognitive tasks in simulated-online system

    NASA Astrophysics Data System (ADS)

    Song, YoungJae; Sepulveda, Francisco

    2017-02-01

    Objective. Self-paced EEG-based BCIs (SP-BCIs) have traditionally been avoided due to two sources of uncertainty: (1) precisely when an intentional command is sent by the brain, i.e., the command onset detection problem, and (2) how different the intentional command is when compared to non-specific (or idle) states. Performance evaluation is also a problem and there are no suitable standard metrics available. In this paper we attempted to tackle these issues. Approach. Self-paced covert sound-production cognitive tasks (i.e., high pitch and siren-like sounds) were used to distinguish between intentional commands (IC) and idle states. The IC states were chosen for their ease of execution and negligible overlap with common cognitive states. Band power and a digital wavelet transform were used for feature extraction, and the Davies-Bouldin index was used for feature selection. Classification was performed using linear discriminant analysis. Main results. Performance was evaluated under offline and simulated-online conditions. For the latter, a performance score called true-false-positive (TFP) rate, ranging from 0 (poor) to 100 (perfect), was created to take into account both classification performance and onset timing errors. Averaging the results from the best performing IC task for all seven participants, a 77.7% true-positive (TP) rate was achieved in offline testing. For simulated-online analysis the best IC average TFP score was 76.67% (87.61% TP rate, 4.05% false-positive rate). Significance. Results were promising when compared to previous IC onset detection studies using motor imagery, in which best TP rates were reported as 72.0% and 79.7%, and which, crucially, did not take timing errors into account. Moreover, based on our literature review, there is no previous covert sound-production onset detection system for SP-BCIs. Results showed that the proposed onset detection technique and TFP performance metric have good potential for use in SP-BCIs.

  6. Hemispheric Asymmetries in the Activation and Monitoring of Memory Errors

    ERIC Educational Resources Information Center

    Giammattei, Jeannette; Arndt, Jason

    2012-01-01

    Previous research on the lateralization of memory errors suggests that the right hemisphere's tendency to produce more memory errors than the left hemisphere reflects hemispheric differences in semantic activation. However, all prior research that has examined the lateralization of memory errors has used self-paced recognition judgments. Because…

  7. Classification and Attention Training Curricula for Head Start Children.

    ERIC Educational Resources Information Center

    Earhart, Eileen M.

    The needs and capabilities of 4-year-old Head Start children were considered in development of classification and attention training curricula, including: (1) sensory exploration through object manipulation, (2) variety of high-interest materials, (3) change of pace during the lesson, (4) presentation of learning activities as games, (5) relating…

  8. Pace of shifts in climate regions increases with global temperature

    NASA Astrophysics Data System (ADS)

    Mahlstein, Irina; Daniel, John S.; Solomon, Susan

    2013-08-01

    Human-induced climate change causes significant changes in local climates, which in turn lead to changes in regional climate zones. Large shifts in the world distribution of Köppen-Geiger climate classifications by the end of this century have been projected. However, only a few studies have analysed the pace of these shifts in climate zones, and none has analysed whether the pace itself changes with increasing global mean temperature. In this study, pace refers to the rate at which climate zones change as a function of amount of global warming. Here we show that present climate projections suggest that the pace of shifting climate zones increases approximately linearly with increasing global temperature. Using the RCP8.5 emissions pathway, the pace nearly doubles by the end of this century and about 20% of all land area undergoes a change in its original climate. This implies that species will have increasingly less time to adapt to Köppen zone changes in the future, which is expected to increase the risk of extinction.

  9. Towards a system-paced near-infrared spectroscopy brain-computer interface: differentiating prefrontal activity due to mental arithmetic and mental singing from the no-control state.

    PubMed

    Power, Sarah D; Kushki, Azadeh; Chau, Tom

    2011-12-01

    Near-infrared spectroscopy (NIRS) has recently been investigated as a non-invasive brain-computer interface (BCI) for individuals with severe motor impairments. For the most part, previous research has investigated the development of NIRS-BCIs operating under synchronous control paradigms, which require the user to exert conscious control over their mental activity whenever the system is vigilant. Though functional, this is mentally demanding and an unnatural way to communicate. An attractive alternative to the synchronous control paradigm is system-paced control, in which users are required to consciously modify their brain activity only when they wish to affect the BCI output, and can remain in a more natural, 'no-control' state at all other times. In this study, we investigated the feasibility of a system-paced NIRS-BCI with one intentional control (IC) state corresponding to the performance of either mental arithmetic or mental singing. In particular, this involved determining if these tasks could be distinguished, individually, from the unconstrained 'no-control' state. Deploying a dual-wavelength frequency domain near-infrared spectrometer, we interrogated nine sites around the frontopolar locations (International 10-20 System) while eight able-bodied adults performed mental arithmetic and mental singing to answer multiple-choice questions within a system-paced paradigm. With a linear classifier trained on a six-dimensional feature set, an overall classification accuracy of 71.2% across participants was achieved for the mental arithmetic versus no-control classification problem. While the mental singing versus no-control classification was less successful across participants (62.7% on average), four participants did attain accuracies well in excess of chance, three of which were above 70%. Analyses were performed offline. Collectively, these results are encouraging, and demonstrate the potential of a system-paced NIRS-BCI with one IC state corresponding to either mental arithmetic or mental singing.

  10. Towards a system-paced near-infrared spectroscopy brain-computer interface: differentiating prefrontal activity due to mental arithmetic and mental singing from the no-control state

    NASA Astrophysics Data System (ADS)

    Power, Sarah D.; Kushki, Azadeh; Chau, Tom

    2011-10-01

    Near-infrared spectroscopy (NIRS) has recently been investigated as a non-invasive brain-computer interface (BCI) for individuals with severe motor impairments. For the most part, previous research has investigated the development of NIRS-BCIs operating under synchronous control paradigms, which require the user to exert conscious control over their mental activity whenever the system is vigilant. Though functional, this is mentally demanding and an unnatural way to communicate. An attractive alternative to the synchronous control paradigm is system-paced control, in which users are required to consciously modify their brain activity only when they wish to affect the BCI output, and can remain in a more natural, 'no-control' state at all other times. In this study, we investigated the feasibility of a system-paced NIRS-BCI with one intentional control (IC) state corresponding to the performance of either mental arithmetic or mental singing. In particular, this involved determining if these tasks could be distinguished, individually, from the unconstrained 'no-control' state. Deploying a dual-wavelength frequency domain near-infrared spectrometer, we interrogated nine sites around the frontopolar locations (International 10-20 System) while eight able-bodied adults performed mental arithmetic and mental singing to answer multiple-choice questions within a system-paced paradigm. With a linear classifier trained on a six-dimensional feature set, an overall classification accuracy of 71.2% across participants was achieved for the mental arithmetic versus no-control classification problem. While the mental singing versus no-control classification was less successful across participants (62.7% on average), four participants did attain accuracies well in excess of chance, three of which were above 70%. Analyses were performed offline. Collectively, these results are encouraging, and demonstrate the potential of a system-paced NIRS-BCI with one IC state corresponding to either mental arithmetic or mental singing.

  11. Development of a methodology for classifying software errors

    NASA Technical Reports Server (NTRS)

    Gerhart, S. L.

    1976-01-01

    A mathematical formalization of the intuition behind classification of software errors is devised and then extended to a classification discipline: Every classification scheme should have an easily discernible mathematical structure and certain properties of the scheme should be decidable (although whether or not these properties hold is relative to the intended use of the scheme). Classification of errors then becomes an iterative process of generalization from actual errors to terms defining the errors together with adjustment of definitions according to the classification discipline. Alternatively, whenever possible, small scale models may be built to give more substance to the definitions. The classification discipline and the difficulties of definition are illustrated by examples of classification schemes from the literature and a new study of observed errors in published papers of programming methodologies.

  12. Classification and reduction of pilot error

    NASA Technical Reports Server (NTRS)

    Rogers, W. H.; Logan, A. L.; Boley, G. D.

    1989-01-01

    Human error is a primary or contributing factor in about two-thirds of commercial aviation accidents worldwide. With the ultimate goal of reducing pilot error accidents, this contract effort is aimed at understanding the factors underlying error events and reducing the probability of certain types of errors by modifying underlying factors such as flight deck design and procedures. A review of the literature relevant to error classification was conducted. Classification includes categorizing types of errors, the information processing mechanisms and factors underlying them, and identifying factor-mechanism-error relationships. The classification scheme developed by Jens Rasmussen was adopted because it provided a comprehensive yet basic error classification shell or structure that could easily accommodate addition of details on domain-specific factors. For these purposes, factors specific to the aviation environment were incorporated. Hypotheses concerning the relationship of a small number of underlying factors, information processing mechanisms, and error types identified in the classification scheme were formulated. ASRS data were reviewed and a simulation experiment was performed to evaluate and quantify the hypotheses.

  13. The corticospinal responses of metronome-paced, but not self-paced strength training are similar to motor skill training.

    PubMed

    Leung, Michael; Rantalainen, Timo; Teo, Wei-Peng; Kidgell, Dawson

    2017-12-01

    The corticospinal responses to skill training may be different to strength training, depending on how the strength training is performed. It was hypothesised that the corticospinal responses would not be different following skill training and metronome-paced strength training (MPST), but would differ when compared with self-paced strength training (SPST). Corticospinal excitability, short-interval intra-cortical inhibition (SICI) and strength and tracking error were measured at baseline and 2 and 4 weeks. Participants (n = 44) were randomly allocated to visuomotor tracking, MPST, SPST or a control group. MPST increased strength by 7 and 18%, whilst SPST increased strength by 12 and 26% following 2 and 4 weeks of strength training. There were no changes in strength following skill training. Skill training reduced tracking error by 47 and 58% at 2 and 4 weeks. There were no changes in tracking error following SPST; however, tracking error reduced by 24% following 4 weeks of MPST. Corticospinal excitability increased by 40% following MPST and by 29% following skill training. There was no change in corticospinal excitability following 4 weeks of SPST. Importantly, the magnitude of change between skill training and MPST was not different. SICI decreased by 41 and 61% following 2 and 4 weeks of MPST, whilst SICI decreased by 41 and 33% following 2 and 4 weeks of skill training. Again, SPST had no effect on SICI at 2 and 4 weeks. There was no difference in the magnitude of SICI reduction between skill training and MPST. This study adds new knowledge regarding the corticospinal responses to skill and MPST, showing they are similar but different when compared with SPST.

  14. Second Chance: If at First You Do Not Succeed, Set up a Plan and Try, Try Again

    ERIC Educational Resources Information Center

    Poulsen, John

    2012-01-01

    Student teachers make errors in their practicum. Then, they learn and fix those errors. This is the standard arc within a successful practicum. Some students make errors that they do not fix and then make more errors that again remain unfixed. This downward spiral increases in pace until the classroom becomes chaos. These students at the…

  15. Voltage and pace-capture mapping of linear ablation lesions overestimates chronic ablation gap size.

    PubMed

    O'Neill, Louisa; Harrison, James; Chubb, Henry; Whitaker, John; Mukherjee, Rahul K; Bloch, Lars Ølgaard; Andersen, Niels Peter; Dam, Høgni; Jensen, Henrik K; Niederer, Steven; Wright, Matthew; O'Neill, Mark; Williams, Steven E

    2018-04-26

    Conducting gaps in lesion sets are a major reason for failure of ablation procedures. Voltage mapping and pace-capture have been proposed for intra-procedural identification of gaps. We aimed to compare gap size measured acutely and chronically post-ablation to macroscopic gap size in a porcine model. Intercaval linear ablation was performed in eight Göttingen minipigs with a deliberate gap of ∼5 mm left in the ablation line. Gap size was measured by interpolating ablation contact force values between ablation tags and thresholding at a low force cut-off of 5 g. Bipolar voltage mapping and pace-capture mapping along the length of the line were performed immediately, and at 2 months, post-ablation. Animals were euthanized and gap sizes were measured macroscopically. Voltage thresholds to define scar were determined by receiver operating characteristic analysis as <0.56 mV (acutely) and <0.62 mV (chronically). Taking the macroscopic gap size as gold standard, errors in gap measurements were determined for voltage, pace-capture, and ablation contact force maps. All modalities overestimated chronic gap size, by 1.4 ± 2.0 mm (ablation contact force map), 5.1 ± 3.4 mm (pace-capture), and 9.5 ± 3.8 mm (voltage mapping). The error in ablation contact force map gap measurements was significantly smaller than for voltage mapping (P = 0.003, Tukey's multiple comparisons test). Chronically, voltage mapping and pace-capture mapping overestimated macroscopic gap size by 11.9 ± 3.7 and 9.8 ± 3.5 mm, respectively. Bipolar voltage and pace-capture mapping overestimate the size of chronic gap formation in linear ablation lesions. The most accurate estimation of chronic gap size was achieved by analysis of catheter-myocardium contact force during ablation.

  16. Noninvasive reconstruction of the three-dimensional ventricular activation sequence during pacing and ventricular tachycardia in the canine heart.

    PubMed

    Han, Chengzong; Pogwizd, Steven M; Killingsworth, Cheryl R; He, Bin

    2012-01-01

    Single-beat imaging of myocardial activation promises to aid in both cardiovascular research and clinical medicine. In the present study we validate a three-dimensional (3D) cardiac electrical imaging (3DCEI) technique with the aid of simultaneous 3D intracardiac mapping to assess its capability to localize endocardial and epicardial initiation sites and image global activation sequences during pacing and ventricular tachycardia (VT) in the canine heart. Body surface potentials were measured simultaneously with bipolar electrical recordings in a closed-chest condition in healthy canines. Computed tomography images were obtained after the mapping study to construct realistic geometry models. Data analysis was performed on paced rhythms and VTs induced by norepinephrine (NE). The noninvasively reconstructed activation sequence was in good agreement with the simultaneous measurements from 3D cardiac mapping with a correlation coefficient of 0.74 ± 0.06, a relative error of 0.29 ± 0.05, and a root mean square error of 9 ± 3 ms averaged over 460 paced beats and 96 ectopic beats including premature ventricular complexes, couplets, and nonsustained monomorphic VTs and polymorphic VTs. Endocardial and epicardial origins of paced beats were successfully predicted in 72% and 86% of cases, respectively, during left ventricular pacing. The NE-induced ectopic beats initiated in the subendocardium by a focal mechanism. Sites of initial activation were estimated to be ∼7 mm from the measured initiation sites for both the paced beats and ectopic beats. For the polymorphic VTs, beat-to-beat dynamic shifts of initiation site and activation pattern were characterized by the reconstruction. The present results suggest that 3DCEI can noninvasively image the 3D activation sequence and localize the origin of activation of paced beats and NE-induced VTs in the canine heart with good accuracy. This 3DCEI technique offers the potential to aid interventional therapeutic procedures for treating ventricular arrhythmias arising from epicardial or endocardial sites and to noninvasively assess the mechanisms of these arrhythmias.
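
    A minimal sketch of the three agreement statistics reported above, computed from paired activation times at the mapped sites; the exact normalization used for the relative error is an assumption (RMS of the difference divided by the RMS of the measured map), not a detail given in the abstract:

    ```python
    import numpy as np

    def activation_map_agreement(reconstructed_ms, measured_ms):
        """Agreement between a noninvasively reconstructed activation sequence and
        simultaneous intracardiac measurements (one activation time per site, ms)."""
        rec = np.asarray(reconstructed_ms, dtype=float)
        meas = np.asarray(measured_ms, dtype=float)
        cc = np.corrcoef(rec, meas)[0, 1]                              # correlation coefficient
        rmse = np.sqrt(np.mean((rec - meas) ** 2))                     # root mean square error (ms)
        rel_err = np.linalg.norm(rec - meas) / np.linalg.norm(meas)    # assumed normalization
        return cc, rel_err, rmse
    ```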

  17. Noninvasive reconstruction of the three-dimensional ventricular activation sequence during pacing and ventricular tachycardia in the canine heart

    PubMed Central

    Han, Chengzong; Pogwizd, Steven M.; Killingsworth, Cheryl R.

    2012-01-01

    Single-beat imaging of myocardial activation promises to aid in both cardiovascular research and clinical medicine. In the present study we validate a three-dimensional (3D) cardiac electrical imaging (3DCEI) technique with the aid of simultaneous 3D intracardiac mapping to assess its capability to localize endocardial and epicardial initiation sites and image global activation sequences during pacing and ventricular tachycardia (VT) in the canine heart. Body surface potentials were measured simultaneously with bipolar electrical recordings in a closed-chest condition in healthy canines. Computed tomography images were obtained after the mapping study to construct realistic geometry models. Data analysis was performed on paced rhythms and VTs induced by norepinephrine (NE). The noninvasively reconstructed activation sequence was in good agreement with the simultaneous measurements from 3D cardiac mapping with a correlation coefficient of 0.74 ± 0.06, a relative error of 0.29 ± 0.05, and a root mean square error of 9 ± 3 ms averaged over 460 paced beats and 96 ectopic beats including premature ventricular complexes, couplets, and nonsustained monomorphic VTs and polymorphic VTs. Endocardial and epicardial origins of paced beats were successfully predicted in 72% and 86% of cases, respectively, during left ventricular pacing. The NE-induced ectopic beats initiated in the subendocardium by a focal mechanism. Sites of initial activation were estimated to be ∼7 mm from the measured initiation sites for both the paced beats and ectopic beats. For the polymorphic VTs, beat-to-beat dynamic shifts of initiation site and activation pattern were characterized by the reconstruction. The present results suggest that 3DCEI can noninvasively image the 3D activation sequence and localize the origin of activation of paced beats and NE-induced VTs in the canine heart with good accuracy. This 3DCEI technique offers the potential to aid interventional therapeutic procedures for treating ventricular arrhythmias arising from epicardial or endocardial sites and to noninvasively assess the mechanisms of these arrhythmias. PMID:21984548

  18. The impact of breathing rate on the cardiac autonomic dynamics among children with cerebral palsy compared to typically developed controls.

    PubMed

    Amichai, Taly; Eylon, Sharon; Berger, Itai; Katz-Leurer, Michal

    2018-02-06

    To describe the immediate effect of breathing rate on heart rate (HR) and heart rate variability (HRV) in children with cerebral palsy (CP) and a control group of typically developed (TD) age and gender-matched children. Twenty children with CP at gross motor function classification system levels I-III and 20 TD children aged 6-11 participated in the study. HR was monitored at rest and during paced breathing with biofeedback. Respiratory measures were assessed by KoKo spirometry. Children with CP have lower spirometry and HRV values at rest compared to TD children. The mean reduction of breathing rate during paced breathing among children with CP was significantly smaller. Nonetheless, while practicing paced breathing, both groups reduced their breathing rate and increased their HRV. The results of the current work present the immediate effect of paced breathing on HRV parameters in CP and TD children. Further studies are needed to investigate the effect of long-term treatment focusing on paced breathing for children with CP.

  19. Fully Convolutional Networks for Ground Classification from LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Rizaldy, A.; Persello, C.; Gevaert, C. M.; Oude Elberink, S. J.

    2018-05-01

    Deep Learning has been massively used for image classification in recent years. The use of deep learning for ground classification from LIDAR point clouds has also been recently studied. However, point clouds need to be converted into an image in order to use Convolutional Neural Networks (CNNs). In state-of-the-art techniques, this conversion is slow because each point is converted into a separate image. This approach leads to highly redundant computation during conversion and classification. The goal of this study is to design a more efficient data conversion and ground classification. This goal is achieved by first converting the whole point cloud into a single image. The classification is then performed by a Fully Convolutional Network (FCN), a modified version of CNN designed for pixel-wise image classification. The proposed method is significantly faster than state-of-the-art techniques. On the ISPRS Filter Test dataset, it is 78 times faster for conversion and 16 times faster for classification. Our experimental analysis on the same dataset shows that the proposed method results in 5.22 % of total error, 4.10 % of type I error, and 15.07 % of type II error. Compared to the previous CNN-based technique and LAStools software, the proposed method reduces the total error and type I error (while type II error is slightly higher). The method was also tested on a very high point density LIDAR point cloud, resulting in 4.02 % of total error, 2.15 % of type I error and 6.14 % of type II error.
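
    As a rough illustration of how such error figures can be derived, the sketch below computes total, type I, and type II error from per-point labels, using the common ground-filtering convention that type I error is ground points rejected as non-ground and type II error is non-ground points accepted as ground (an assumption, not a detail stated in the abstract):

    ```python
    import numpy as np

    def filtering_errors(y_true, y_pred):
        """y_true, y_pred: arrays of 0 (ground) and 1 (non-ground), one entry per LIDAR point."""
        y_true = np.asarray(y_true)
        y_pred = np.asarray(y_pred)
        ground = y_true == 0
        nonground = y_true == 1
        type1 = np.mean(y_pred[ground] == 1)     # ground points rejected as non-ground
        type2 = np.mean(y_pred[nonground] == 0)  # non-ground points accepted as ground
        total = np.mean(y_pred != y_true)        # overall misclassification rate
        return total, type1, type2
    ```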

  20. An Assessment of ECMWF Analyses and Model Forecasts over the North Slope of Alaska Using Observations from the ARM Mixed-Phase Arctic Cloud Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Shaocheng; Klein, Stephen A.; Yio, J. John

    2006-03-11

    European Centre for Medium-Range Weather Forecasts (ECMWF) analysis and model forecast data are evaluated using observations collected during the Atmospheric Radiation Measurement (ARM) October 2004 Mixed-Phase Arctic Cloud Experiment (M-PACE) at its North Slope of Alaska (NSA) site. It is shown that the ECMWF analysis reasonably represents the dynamic and thermodynamic structures of the large-scale systems that affected the NSA during M-PACE. The model-analyzed near-surface horizontal winds, temperature, and relative humidity also agree well with the M-PACE surface measurements. Given the well-represented large-scale fields, the model shows overall good skill in predicting various cloud types observed during M-PACE; however, the physical properties of single-layer boundary layer clouds are in substantial error. At these times, the model substantially underestimates the liquid water path in these clouds, with the concomitant result that the model largely underpredicts the downwelling longwave radiation at the surface and overpredicts the outgoing longwave radiation at the top of the atmosphere. The model also overestimates the net surface shortwave radiation, mainly because of the underestimation of the surface albedo. The problem in the surface albedo is primarily associated with errors in the surface snow prediction. Principally because of the underestimation of the surface downwelling longwave radiation at the times of single-layer boundary layer clouds, the model shows a much larger energy loss (-20.9 W m-2) than the observation (-9.6 W m-2) at the surface during the M-PACE period.

  1. Pacing threshold changes after transvenous catheter countershock.

    PubMed

    Yee, R; Jones, D L; Klein, G J

    1984-02-01

    The serial changes in pacing threshold and R-wave amplitude were examined after insertion of a countershock catheter in 12 patients referred for management of recurrent ventricular tachyarrhythmias. In 6 patients, values before and immediately after catheter countershock were monitored. Pacing threshold increased (from 1.4 +/- 0.2 to 2.4 +/- 0.5 V, mean +/- standard error of the mean, p less than 0.05) while the R-wave amplitude decreased (bipolar R wave from 5.9 +/- 1.1 to 3.4 +/- 0.7 mV, p less than 0.01; unipolar R wave recorded from the distal ventricular electrode from 8.9 +/- 1.8 to 4.6 +/- 1.2 mV, p less than 0.01; and proximal ventricular electrode from 7.7 +/- 1.5 to 5.0 +/- 1.0 mV, p less than 0.01). A return to control values occurred within 10 minutes. In all patients, pacing threshold increased by 154 +/- 30% (p less than 0.001) during the first 7 days that the catheter was in place. It is concluded that catheter countershock causes an acute increase in pacing threshold and decrease in R-wave amplitude. A catheter used for countershock may not be acceptable as a backup pacing catheter.

  2. Classification-Based Spatial Error Concealment for Visual Communications

    NASA Astrophysics Data System (ADS)

    Chen, Meng; Zheng, Yefeng; Wu, Min

    2006-12-01

    In an error-prone transmission environment, error concealment is an effective technique to reconstruct the damaged visual content. Due to large variations of image characteristics, different concealment approaches are necessary to accommodate the different nature of the lost image content. In this paper, we address this issue and propose using classification to integrate the state-of-the-art error concealment techniques. The proposed approach takes advantage of multiple concealment algorithms and adaptively selects the suitable algorithm for each damaged image area. With growing awareness that the design of sender and receiver systems should be jointly considered for efficient and reliable multimedia communications, we propose a set of classification-based block concealment schemes, including receiver-side classification, sender-side attachment, and sender-side embedding. Our experimental results provide extensive performance comparisons and demonstrate that the proposed classification-based error concealment approaches outperform the conventional approaches.

  3. Evaluating performance of stormwater sampling approaches using a dynamic watershed model.

    PubMed

    Ackerman, Drew; Stein, Eric D; Ritter, Kerry J

    2011-09-01

    Accurate quantification of stormwater pollutant levels is essential for estimating overall contaminant discharge to receiving waters. Numerous sampling approaches exist that attempt to balance accuracy against the costs associated with the sampling method. This study employs a novel and practical approach of evaluating the accuracy of different stormwater monitoring methodologies using stormflows and constituent concentrations produced by a fully validated continuous simulation watershed model. A major advantage of using a watershed model to simulate pollutant concentrations is that a large number of storms representing a broad range of conditions can be applied in testing the various sampling approaches. Seventy-eight distinct methodologies were evaluated by "virtual samplings" of 166 simulated storms of varying size, intensity and duration, representing 14 years of storms in Ballona Creek near Los Angeles, California. The 78 methods can be grouped into four general strategies: volume-paced compositing, time-paced compositing, pollutograph sampling, and microsampling. The performance of each sampling strategy was evaluated by comparing (1) the median relative error between the virtually sampled and the true modeled event mean concentration (EMC) of each storm (accuracy), (2) the median absolute deviation about the median ("MAD") of the relative error (precision), and (3) the percentage of storms where sampling methods were within 10% of the true EMC (combined measures of accuracy and precision). Finally, costs associated with site setup, sampling, and laboratory analysis were estimated for each method. Pollutograph sampling consistently outperformed the other three methods both in terms of accuracy and precision, but was the most costly method evaluated. Time-paced sampling consistently underestimated while volume-paced sampling overestimated the storm EMCs. Microsampling performance approached that of pollutograph sampling at a substantial cost savings. The most efficient method for routine stormwater monitoring in terms of a balance between performance and cost was volume-paced microsampling, with variable sample pacing to ensure that the entirety of the storm was captured. Pollutograph sampling is recommended if the data are to be used for detailed analysis of runoff dynamics.
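
    A minimal sketch of the three headline statistics, assuming sampled and true event mean concentrations (EMCs) are available per storm; the array names and 10% threshold handling are illustrative, not taken from the paper's implementation:

    ```python
    import numpy as np

    def emc_performance(sampled_emc, true_emc):
        """Accuracy and precision of a sampling method across a set of storms."""
        sampled_emc = np.asarray(sampled_emc, dtype=float)
        true_emc = np.asarray(true_emc, dtype=float)
        rel_err = (sampled_emc - true_emc) / true_emc           # relative error per storm
        median_err = np.median(rel_err)                          # accuracy (bias)
        mad = np.median(np.abs(rel_err - np.median(rel_err)))    # precision (spread)
        within_10pct = np.mean(np.abs(rel_err) <= 0.10)          # combined measure
        return median_err, mad, within_10pct
    ```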

  4. A Brief Review of Biodata History, Research, and Applications

    DTIC Science & Technology

    2007-01-01

    scrutiny, via interpretation of the Uniform Guidelines on Employee Selection Procedures (EEOC, 1978). Pace and Schoenfeldt (1977) point out that... classification test batteries. American Psychologist, 5, 279. Ledvinka, J., & Scarpello, V. G. (1992). Federal regulation of personnel and human resource

  5. C-fuzzy variable-branch decision tree with storage and classification error rate constraints

    NASA Astrophysics Data System (ADS)

    Yang, Shiueng-Bien

    2009-10-01

    The C-fuzzy decision tree (CFDT), which is based on the fuzzy C-means algorithm, has recently been proposed. The CFDT is grown by selecting the nodes to be split according to its classification error rate. However, the CFDT design does not consider the classification time taken to classify the input vector. Thus, the CFDT can be improved. We propose a new C-fuzzy variable-branch decision tree (CFVBDT) with storage and classification error rate constraints. The design of the CFVBDT consists of two phases: growing and pruning. The CFVBDT is grown by selecting the nodes to be split according to the classification error rate and the classification time in the decision tree. Additionally, the pruning method selects the nodes to prune based on the storage requirement and the classification time of the CFVBDT. Furthermore, the number of branches of each internal node is variable in the CFVBDT. Experimental results indicate that the proposed CFVBDT outperforms the CFDT and other methods.

  6. Study of Laser Created Metal Vapour Plasmas.

    DTIC Science & Technology

    1981-09-01

    ...resonance saturation could lead to extensive ground level burnout of certain kinds of atoms or ions and that this could lead to the creation of a ground level... Preliminary calculations have suggested that laser resonance saturation could lead to extensive ground level burnout of certain kinds of...

  7. Multiple-rule bias in the comparison of classification rules

    PubMed Central

    Yousefi, Mohammadmahdi R.; Hua, Jianping; Dougherty, Edward R.

    2011-01-01

    Motivation: There is growing discussion in the bioinformatics community concerning overoptimism of reported results. Two approaches contributing to overoptimism in classification are (i) the reporting of results on datasets for which a proposed classification rule performs well and (ii) the comparison of multiple classification rules on a single dataset that purports to show the advantage of a certain rule. Results: This article provides a careful probabilistic analysis of the second issue and the ‘multiple-rule bias’, resulting from choosing a classification rule having minimum estimated error on the dataset. It quantifies this bias corresponding to estimating the expected true error of the classification rule possessing minimum estimated error and it characterizes the bias from estimating the true comparative advantage of the chosen classification rule relative to the others by the estimated comparative advantage on the dataset. The analysis is applied to both synthetic and real data using a number of classification rules and error estimators. Availability: We have implemented in C code the synthetic data distribution model, classification rules, feature selection routines and error estimation methods. The code for multiple-rule analysis is implemented in MATLAB. The source code is available at http://gsp.tamu.edu/Publications/supplementary/yousefi11a/. Supplementary simulation results are also included. Contact: edward@ece.tamu.edu Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:21546390
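
    The multiple-rule bias described here can be reproduced with a small simulation: train several classification rules on the same small dataset, report the minimum estimated error, and compare it with the true error of the chosen rule on a large independent sample. The sketch below uses scikit-learn classifiers, cross-validation, and a synthetic dataset purely as illustrative stand-ins for the rules, error estimators, and distribution model analysed in the paper:

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    rules = [KNeighborsClassifier(3), SVC(kernel="linear"), DecisionTreeClassifier(max_depth=3)]

    X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
    train = rng.choice(len(y), size=60, replace=False)           # small training set
    test = np.setdiff1d(np.arange(len(y)), train)                 # large hold-out approximates true error

    est_errors = [1 - cross_val_score(r, X[train], y[train], cv=5).mean() for r in rules]
    best = int(np.argmin(est_errors))                             # rule with minimum estimated error
    true_error = 1 - rules[best].fit(X[train], y[train]).score(X[test], y[test])

    print(f"reported (minimum estimated) error: {est_errors[best]:.3f}")
    print(f"true error of the chosen rule:      {true_error:.3f}")  # typically larger: multiple-rule bias
    ```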

  8. An algorithm for verifying biventricular capture based on evoked-response morphology.

    PubMed

    Diotallevi, Paolo; Ravazzi, Pier Antonio; Gostoli, Enrico; De Marchi, Giuseppe; Militello, Carmelo; Kraetschmer, Hannes

    2005-01-01

    Cardiac resynchronization therapy relies on consistent beat-by-beat myocardial capture in both ventricles. A pacemaker ensuring right (RV) and left ventricular (LV) capture through reliable capture verification and automatic output adjustment would contribute to patients' safety and quality of life. We studied the feasibility of an algorithm based on evoked-response (ER) morphology for capture verification in both the ventricles. RV and LV ER signals were recorded in 20 patients (mean age 72.5 years, range 64.3-80.4 years, 4 females and 16 males) during implantation of biventricular (BiV) pacing systems. Leads of several manufacturers were tested. Pacing and intracardiac electrogram (IEGM) recording were performed using an external pulse generator. IEGM and surface-lead electrocardiogram (ECG) signals were recorded under different pacing conditions for 10 seconds each: RV pacing only, LV pacing only, and BiV pacing with several interventricular delays. Based on morphology characteristics, ERs were classified manually for capture and failure to capture, and the validity of the classification was assessed by reference to the ECG. A total of 3,401 LV- and 3,345 RV-paced events were examined. The sensitivities of the algorithm were 95.6% and 96.1% in the RV and LV, respectively, and the corresponding specificities were 91.4% and 95.2%, respectively. The lower sensitivity in the RV was attributed to signal blanking in both channels during BiV pacing with a nonzero interventricular delay. The analysis revealed that the algorithm for identifying capture and failure to capture based on the ER-signal morphology was safe and effective in each ventricle with all leads tested in the study.

  9. Clarification of terminology in medication errors: definitions and classification.

    PubMed

    Ferner, Robin E; Aronson, Jeffrey K

    2006-01-01

    We have previously described and analysed some terms that are used in drug safety and have proposed definitions. Here we discuss and define terms that are used in the field of medication errors, particularly terms that are sometimes misunderstood or misused. We also discuss the classification of medication errors. A medication error is a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient. Errors can be classified according to whether they are mistakes, slips, or lapses. Mistakes are errors in the planning of an action. They can be knowledge based or rule based. Slips and lapses are errors in carrying out an action - a slip through an erroneous performance and a lapse through an erroneous memory. Classification of medication errors is important because the probabilities of errors of different classes are different, as are the potential remedies.

  10. A Design-Aid and Cost Estimate Model for Suppressive Shielding Structures

    DTIC Science & Technology

    1975-12-01

    Final Report. Approved for public release: distribution unlimited. Prepared for Safety Engineering Graduate Program, Texas A&M University... Computer Science Department, Texas A&M University...

  11. Price and cost estimation

    NASA Technical Reports Server (NTRS)

    Stewart, R. D.

    1979-01-01

    The Price and Cost Estimating Program (PACE II) was developed to prepare man-hour and material cost estimates. This versatile and flexible tool significantly reduces computation time and errors, and reduces the typing and reproduction time involved in the preparation of cost estimates.

  12. Spatial resolution of pace mapping of idiopathic ventricular tachycardia/ectopy originating in the right ventricular outflow tract.

    PubMed

    Bogun, Frank; Taj, Majid; Ting, Michael; Kim, Hyungjin Myra; Reich, Stephen; Good, Eric; Jongnarangsin, Krit; Chugh, Aman; Pelosi, Frank; Oral, Hakan; Morady, Fred

    2008-03-01

    Pace mapping has been used to identify the site of origin of focal ventricular arrhythmias. The spatial resolution of pace mapping has not been adequately quantified using currently available three-dimensional mapping systems. The purpose of this study was to determine the spatial resolution of pace mapping in patients with idiopathic ventricular tachycardia or premature ventricular contractions originating in the right ventricular outflow tract. In 16 patients with idiopathic ventricular tachycardia/ectopy from the right ventricular outflow tract, comparisons and classifications of pace maps were performed by two observers (good pace map: match >10/12 leads; inadequate pace map: match < or =10/12 leads) and a customized MATLAB 6.0 program (assessing correlation coefficient and normalized root mean square of the difference (nRMSd) between test and template signals). With an electroanatomic mapping system, the correlation coefficient of each pace map was correlated with the distance between the pacing site and the effective ablation site. The endocardial area within the 10-ms activation isochrone was measured. The ablation procedure was effective in all patients. Sites with good pace maps had a higher correlation coefficient and lower nRMSd than sites with inadequate pace maps (correlation coefficient: 0.96 +/- 0.03 vs 0.76 +/- 0.18, P <.0001; nRMSd: 0.41 +/- 0.16 vs 0.89 +/- 0.39, P <.0001). Using receiver operating characteristic curves, appropriate cutoff values were >0.94 for correlation coefficient (sensitivity 81%, specificity 89%) and < or =0.54 for nRMSd (sensitivity 76%, specificity 80%). Good pace maps were located a mean of 7.3 +/- 5.0 mm from the effective ablation site and had a mean activation time of -24 +/- 7 ms. However, in 3 (18%) of 16 patients, the best pace map was inadequate at the effective ablation site, with an endocardial activation time at these sites of -25 +/- 12 ms. Pace maps with correlation coefficient > or =0.94 were confined to an area of 1.8 +/- 0.6 cm2. The 10-ms isochrone measured 1.2 +/- 0.7 cm2. The spatial resolution of a good pace map for targeting ventricular tachycardia/ectopy is 1.8 cm2 in the right ventricular outflow tract and therefore is inferior to the spatial resolution of activation mapping as assessed by isochronal activation. In approximately 20% of patients, pace mapping is unreliable in identifying the site of origin, possibly due to a deeper site of origin and preferential conduction via fibers connecting the focus to the endocardial surface.
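
    A minimal sketch of the two waveform-comparison measures used to grade pace maps, under the assumption that nRMSd is the RMS of the sample-wise difference divided by the RMS of the template (the exact normalization used by the study's MATLAB program is not given in the abstract):

    ```python
    import numpy as np

    def compare_pace_map(test_ecg, template_ecg):
        """Compare a paced 12-lead ECG beat with the clinical template beat.

        Both inputs: arrays of shape (n_leads, n_samples), time-aligned."""
        test = np.ravel(test_ecg).astype(float)
        template = np.ravel(template_ecg).astype(float)
        corr = np.corrcoef(test, template)[0, 1]  # correlation coefficient
        nrmsd = np.sqrt(np.mean((test - template) ** 2)) / np.sqrt(np.mean(template ** 2))
        # Cutoffs reported in the abstract: good pace map if corr > 0.94 and nrmsd <= 0.54
        return corr, nrmsd
    ```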

  13. 21 CFR 870.3720 - Pacemaker electrode function tester.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 Pacemaker electrode function tester. 870.3720... electrode function tester. (a) Identification. A pacemaker electrode function tester is a device which is... measuring the patient's pacing threshold and intracardiac R-wave potential. (b) Classification. Class II...

  14. Positive Pacing Strategies Are Utilized by Elite Male and Female Para-cyclists in Short Time Trials in the Velodrome.

    PubMed

    Wright, Rachel L

    2015-01-01

    In para-cycling, competitors are classed based on functional impairment resulting in cyclists with neurological and locomotor impairments competing against each other. In Paralympic competition, classes are combined by using a factoring adjustment to race times to produce the overall medallists. Pacing in short-duration track cycling events is proposed to utilize an "all-out" strategy in able-bodied competition. However, pacing in para-cycling may vary depending on the level of impairment. Analysis of the pacing strategies employed by different classification groups may offer scope for optimal performance; therefore, this study investigated the pacing strategy adopted during the 1-km time trial (TT) and 500-m TT in elite C1 to C3 para-cyclists and able-bodied cyclists. Total times and intermediate split times (125-m intervals; measured to 0.001 s) were obtained from the C1-C3 men's 1-km TT (n = 28) and women's 500-m TT (n = 9) from the 2012 Paralympic Games and the men's 1-km TT (n = 19) and women's 500-m TT (n = 12) from the 2013 UCI World Track Championships from publically available video. Split times were expressed as actual time, factored time (for the para-cyclists) and as a percentage of total time. A two-way analysis of variance was used to investigate differences in split times between the different classifications and the able-bodied cyclists in the men's 1-km TT and between the para-cyclists and able-bodied cyclists in the women's 500-m TT. The importance of position at the first split was investigated with Kendall's Tau-b correlation. The first 125-m split time was the slowest for all cyclists, representing the acceleration phase from a standing start. C2 cyclists were slowest at this 125-m split, probably due to a combination of remaining seated in this acceleration phase and a high proportion of cyclists in this group being trans-femoral amputees. Not all cyclists used aero-bars, preferring to use drop, flat or bullhorn handlebars. Split times increased in the later stages of the race, demonstrating a positive pacing strategy. In the shorter women's 500-m TT, rank at the first split was more strongly correlated with final position than in the longer men's 1-km TT. In conclusion, a positive pacing strategy was adopted by the different para-cycling classes.

  15. Positive Pacing Strategies Are Utilized by Elite Male and Female Para-cyclists in Short Time Trials in the Velodrome

    PubMed Central

    Wright, Rachel L.

    2016-01-01

    In para-cycling, competitors are classed based on functional impairment resulting in cyclists with neurological and locomotor impairments competing against each other. In Paralympic competition, classes are combined by using a factoring adjustment to race times to produce the overall medallists. Pacing in short-duration track cycling events is proposed to utilize an “all-out” strategy in able-bodied competition. However, pacing in para-cycling may vary depending on the level of impairment. Analysis of the pacing strategies employed by different classification groups may offer scope for optimal performance; therefore, this study investigated the pacing strategy adopted during the 1-km time trial (TT) and 500-m TT in elite C1 to C3 para-cyclists and able-bodied cyclists. Total times and intermediate split times (125-m intervals; measured to 0.001 s) were obtained from the C1-C3 men's 1-km TT (n = 28) and women's 500-m TT (n = 9) from the 2012 Paralympic Games and the men's 1-km TT (n = 19) and women's 500-m TT (n = 12) from the 2013 UCI World Track Championships from publically available video. Split times were expressed as actual time, factored time (for the para-cyclists) and as a percentage of total time. A two-way analysis of variance was used to investigate differences in split times between the different classifications and the able-bodied cyclists in the men's 1-km TT and between the para-cyclists and able-bodied cyclists in the women's 500-m TT. The importance of position at the first split was investigated with Kendall's Tau-b correlation. The first 125-m split time was the slowest for all cyclists, representing the acceleration phase from a standing start. C2 cyclists were slowest at this 125-m split, probably due to a combination of remaining seated in this acceleration phase and a high proportion of cyclists in this group being trans-femoral amputees. Not all cyclists used aero-bars, preferring to use drop, flat or bullhorn handlebars. Split times increased in the later stages of the race, demonstrating a positive pacing strategy. In the shorter women's 500-m TT, rank at the first split was more strongly correlated with final position than in the longer men's 1-km TT. In conclusion, a positive pacing strategy was adopted by the different para-cycling classes. PMID:26834643

  16. Exploring the Cause of English Pronoun Gender Errors by Chinese Learners of English: Evidence from the Self-paced Reading Paradigm.

    PubMed

    Dong, Yanping; Wen, Yun; Zeng, Xiaomeng; Ji, Yifei

    2015-12-01

    To locate the underlying cause of biological gender errors of oral English pronouns by proficient Chinese-English learners, two self-paced reading experiments were conducted to explore whether the reading time for each 'he' or 'she' that matched its antecedent was shorter than that in the corresponding mismatch situation, as with native speakers of English. The critical manipulation was to see whether highlighting the gender information of an antecedent with a human picture would make a difference. The results indicate that such manipulation did make a difference. Since oral Chinese does not distinguish 'he' and 'she', the findings suggest that Chinese speakers probably do not usually process biological gender for linguistic purposes and the mixed use of 'he' and 'she' is probably a result of deficient processing of gender information in the conceptualizer. Theoretical and pedagogical implications are discussed.

  17. Determining the optimal window length for pattern recognition-based myoelectric control: balancing the competing effects of classification error and controller delay.

    PubMed

    Smith, Lauren H; Hargrove, Levi J; Lock, Blair A; Kuiken, Todd A

    2011-04-01

    Pattern recognition-based control of myoelectric prostheses has shown great promise in research environments, but has not been optimized for use in a clinical setting. To explore the relationship between classification error, controller delay, and real-time controllability, 13 able-bodied subjects were trained to operate a virtual upper-limb prosthesis using pattern recognition of electromyogram (EMG) signals. Classification error and controller delay were varied by training different classifiers with a variety of analysis window lengths ranging from 50 to 550 ms and either two or four EMG input channels. Offline analysis showed that classification error decreased with longer window lengths (p < 0.01 ). Real-time controllability was evaluated with the target achievement control (TAC) test, which prompted users to maneuver the virtual prosthesis into various target postures. The results indicated that user performance improved with lower classification error (p < 0.01 ) and was reduced with longer controller delay (p < 0.01 ), as determined by the window length. Therefore, both of these effects should be considered when choosing a window length; it may be beneficial to increase the window length if this results in a reduced classification error, despite the corresponding increase in controller delay. For the system employed in this study, the optimal window length was found to be between 150 and 250 ms, which is within acceptable controller delays for conventional multistate amplitude controllers.
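
    The trade-off can be explored with a simple sweep: for each candidate analysis window length, segment the EMG into windows, extract features, and estimate classification error; the controller delay then grows with the window length. The sketch below uses mean-absolute-value features and an LDA classifier as illustrative stand-ins (the study's full feature set and delay model are not reproduced here):

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    def window_features(emg, labels, win_len_ms, fs=1000):
        """Slice continuous EMG (n_channels, n_samples) into non-overlapping windows
        of win_len_ms and compute a mean-absolute-value feature per channel."""
        step = int(fs * win_len_ms / 1000)
        feats, ys = [], []
        for start in range(0, emg.shape[1] - step, step):
            seg = emg[:, start:start + step]
            feats.append(np.mean(np.abs(seg), axis=1))  # MAV per channel
            ys.append(labels[start + step - 1])          # label at the end of the window
        return np.array(feats), np.array(ys)

    def sweep_window_lengths(emg, labels, lengths_ms=(50, 150, 250, 550)):
        """Estimate classification error for each window length; delay grows with win_len_ms."""
        results = {}
        for win_len_ms in lengths_ms:
            X, y = window_features(emg, labels, win_len_ms)
            results[win_len_ms] = 1 - cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
        return results
    ```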

  18. The use of a contextual, modal and psychological classification of medication errors in the emergency department: a retrospective descriptive study.

    PubMed

    Cabilan, C J; Hughes, James A; Shannon, Carl

    2017-12-01

    To describe the contextual, modal and psychological classification of medication errors in the emergency department to know the factors associated with the reported medication errors. The causes of medication errors are unique in every clinical setting; hence, error minimisation strategies are not always effective. For this reason, it is fundamental to understand the causes specific to the emergency department so that targeted strategies can be implemented. Retrospective analysis of reported medication errors in the emergency department. All voluntarily staff-reported medication-related incidents from 2010-2015 from the hospital's electronic incident management system were retrieved for analysis. Contextual classification involved the time, place and the type of medications involved. Modal classification pertained to the stage and issue (e.g. wrong medication, wrong patient). Psychological classification categorised the errors in planning (knowledge-based and rule-based errors) and skill (slips and lapses). There were 405 errors reported. Most errors occurred in the acute care area, short-stay unit and resuscitation area, during the busiest shifts (0800-1559, 1600-2259). Half of the errors involved high-alert medications. Many of the errors occurred during administration (62·7%), prescribing (28·6%) and commonly during both stages (18·5%). Wrong dose, wrong medication and omission were the issues that dominated. Knowledge-based errors characterised the errors that occurred in prescribing and administration. The highest proportion of slips (79·5%) and lapses (76·1%) occurred during medication administration. It is likely that some of the errors occurred due to the lack of adherence to safety protocols. Technology such as computerised prescribing, barcode medication administration and reminder systems could potentially decrease the medication errors in the emergency department. There was a possibility that some of the errors could be prevented if safety protocols were adhered to, which highlights the need to also address clinicians' attitudes towards safety. Technology can be implemented to help minimise errors in the ED, but this must be coupled with efforts to enhance the culture of safety. © 2017 John Wiley & Sons Ltd.

  19. Effects of uncertainty and variability on population declines and IUCN Red List classifications.

    PubMed

    Rueda-Cediel, Pamela; Anderson, Kurt E; Regan, Tracey J; Regan, Helen M

    2018-01-22

    The International Union for Conservation of Nature (IUCN) Red List Categories and Criteria is a quantitative framework for classifying species according to extinction risk. Population models may be used to estimate extinction risk or population declines. Uncertainty and variability arise in threat classifications through measurement and process error in empirical data and uncertainty in the models used to estimate extinction risk and population declines. Furthermore, species traits are known to affect extinction risk. We investigated the effects of measurement and process error, model type, population growth rate, and age at first reproduction on the reliability of IUCN Red List classifications based on projected population declines. We used an age-structured population model to simulate true population trajectories with different growth rates, reproductive ages and levels of variation, and subjected them to measurement error. We evaluated the ability of scalar and matrix models parameterized with these simulated time series to accurately capture the IUCN Red List classification generated with true population declines. Under all levels of measurement error tested and low process error, classifications were reasonably accurate; scalar and matrix models yielded roughly the same rate of misclassifications, but the distribution of errors differed; matrix models overestimated extinction risk more often than they underestimated it; process error tended to contribute to misclassifications to a greater extent than measurement error; and more misclassifications occurred for fast, rather than slow, life histories. These results indicate that classifications of highly threatened taxa (i.e., taxa with low growth rates) under criterion A are more likely to be reliable than for less threatened taxa when assessed with population models. Greater scrutiny needs to be placed on data used to parameterize population models for species with high growth rates, particularly when available evidence indicates a potential transition to higher risk categories. © 2018 Society for Conservation Biology.
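
    A minimal sketch of how measurement error can flip a criterion A classification: a true exponential decline is observed with lognormal error, the decline is re-estimated from the noisy counts, and both are mapped to the IUCN criterion A decline thresholds (30%, 50%, 80% for VU, EN, CR). The scalar log-linear model, noise level, and ten-year window are illustrative assumptions, not the simulation design of the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def red_list_category(decline):
    """Map a proportional decline over the assessment window to IUCN criterion A thresholds."""
    if decline >= 0.80:
        return "CR"
    if decline >= 0.50:
        return "EN"
    if decline >= 0.30:
        return "VU"
    return "LC/NT"

def simulate_assessment(r, years=10, obs_sigma=0.2, n0=1000.0):
    """Exponential trend (scalar model) observed with lognormal measurement error."""
    t = np.arange(years + 1)
    true_n = n0 * np.exp(r * t)
    obs_n = true_n * rng.lognormal(mean=0.0, sigma=obs_sigma, size=t.size)
    slope = np.polyfit(t, np.log(obs_n), 1)[0]     # log-linear regression on the noisy counts
    return (red_list_category(1.0 - np.exp(r * years)),
            red_list_category(1.0 - np.exp(slope * years)))

results = [simulate_assessment(r=-0.12) for _ in range(1000)]
misclassified = sum(true_cat != est_cat for true_cat, est_cat in results)
print(f"misclassification rate under measurement error: {misclassified / len(results):.1%}")
```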

  20. Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS)

    NASA Technical Reports Server (NTRS)

    Alexander, Tiffaney Miller

    2017-01-01

    Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As part of Safety within space exploration ground processing operations, the underlying contributors and causes of human error must be identified and classified in order to manage human error. This research provides a framework and methodology, using the Human Error Assessment and Reduction Technique (HEART) and the Human Factor Analysis and Classification System (HFACS), as an analysis tool to identify contributing factors, assess their impact on human error events, and predict the Human Error Probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of Launch Vehicle related mishap data. This modifiable framework can be used and followed by other space and similar complex operations.
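
    A minimal sketch of the HEART calculation that underlies the predicted human error probabilities: a nominal HEP for the generic task type is multiplied, for each error-producing condition (EPC), by ((maximum multiplier - 1) x assessed proportion of affect + 1). The nominal HEP and EPC values below are hypothetical illustrations, not figures from the NASA data set.

```python
def heart_hep(nominal_hep, epcs):
    """HEART assessed HEP; epcs is a list of (max_multiplier, proportion_of_affect) pairs."""
    hep = nominal_hep
    for multiplier, proportion in epcs:
        hep *= (multiplier - 1.0) * proportion + 1.0
    return min(hep, 1.0)  # a probability cannot exceed 1

# Hypothetical example: a routine task with two error-producing conditions partially in effect.
assessed = heart_hep(
    nominal_hep=0.02,                 # hypothetical nominal HEP for the generic task type
    epcs=[(17.0, 0.4), (11.0, 0.1)],  # hypothetical EPC multipliers and proportions of affect
)
print(f"assessed HEP: {assessed:.3f}")
```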

  2. Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS)

    NASA Technical Reports Server (NTRS)

    Alexander, Tiffaney Miller

    2017-01-01

    Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As part of Quality within space exploration ground processing operations, the underlying contributors and causes of human error must be identified and classified in order to manage human error. This presentation will provide a framework and methodology, using the Human Error Assessment and Reduction Technique (HEART) and the Human Factor Analysis and Classification System (HFACS), as an analysis tool to identify contributing factors, assess their impact on human error events, and predict the Human Error Probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of Launch Vehicle related mishap data. This modifiable framework can be used and followed by other space and similar complex operations.

  3. Factors That Affect Large Subunit Ribosomal DNA Amplicon Sequencing Studies of Fungal Communities: Classification Method, Primer Choice, and Error

    PubMed Central

    Porter, Teresita M.; Golding, G. Brian

    2012-01-01

    Nuclear large subunit ribosomal DNA is widely used in fungal phylogenetics and, to an increasing extent, also in amplicon-based environmental sequencing. The relatively short reads produced by next-generation sequencing, however, make primer choice and sequence error important variables for obtaining accurate taxonomic classifications. In this simulation study we tested the performance of three classification methods: 1) a similarity-based method (BLAST + Metagenomic Analyzer, MEGAN); 2) a composition-based method (Ribosomal Database Project naïve Bayesian classifier, NBC); and 3) a phylogeny-based method (Statistical Assignment Package, SAP). We also tested the effects of sequence length, primer choice, and sequence error on classification accuracy and perceived community composition. Using a leave-one-out cross validation approach, results for classifications to the genus rank were as follows: BLAST + MEGAN had the lowest error rate and was particularly robust to sequence error; SAP accuracy was highest when long LSU query sequences were classified; and NBC ran significantly faster than the other tested methods. All methods performed poorly with the shortest 50–100 bp sequences. Increasing simulated sequence error reduced classification accuracy. Community shifts were detected due to sequence error and primer selection even though there was no change in the underlying community composition. Short-read datasets from individual primers, as well as pooled datasets, appear to only approximate the true community composition. We hope this work informs investigators of some of the factors that affect the quality and interpretation of their environmental gene surveys. PMID:22558215

  4. Masked and unmasked error-related potentials during continuous control and feedback

    NASA Astrophysics Data System (ADS)

    Lopes Dias, Catarina; Sburlea, Andreea I.; Müller-Putz, Gernot R.

    2018-06-01

    The detection of error-related potentials (ErrPs) in tasks with discrete feedback is well established in the brain–computer interface (BCI) field. However, the decoding of ErrPs in tasks with continuous feedback is still in its early stages. Objective. We developed a task in which subjects have continuous control of a cursor's position by means of a joystick. The cursor's position was shown to the participants in two different modalities of continuous feedback: normal and jittered. The jittered feedback was created to mimic the instability that could exist if participants controlled the trajectory directly with brain signals. Approach. This paper studies the electroencephalographic (EEG)-measurable signatures caused by a loss of control over the cursor's trajectory, causing a target miss. Main results. In both feedback modalities, time-locked potentials revealed the typical frontal-central components of error-related potentials. Errors occurring during the jittered feedback (masked errors) were delayed in comparison to errors occurring during normal feedback (unmasked errors). Masked errors displayed lower peak amplitudes than unmasked errors. Time-locked classification analysis allowed a good distinction between correct and error classes (average Cohen's kappa, average TPR = 81.8% and average TNR = 96.4%). Time-locked classification analysis between masked error and unmasked error classes revealed results at chance level (average Cohen's kappa, average TPR = 60.9% and average TNR = 58.3%). Afterwards, we performed asynchronous detection of ErrPs, combining both masked and unmasked trials. The asynchronous detection of ErrPs in a simulated online scenario resulted in an average TNR of 84.0% and in an average TPR of 64.9%. Significance. The time-locked classification results suggest that the masked and unmasked errors were indistinguishable in terms of classification. The asynchronous classification results suggest that the feedback modality did not hinder the asynchronous detection of ErrPs.
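
    A minimal sketch of the metrics reported above (Cohen's kappa, TPR and TNR) computed from binary decoder output; the labels below are synthetic stand-ins for correct/error trials, not data from the study.

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Cohen's kappa, true positive rate and true negative rate for binary labels (1 = error trial)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    n = tp + tn + fp + fn
    po = (tp + tn) / n                                              # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2   # agreement expected by chance
    kappa = (po - pe) / (1 - pe)
    return kappa, tp / (tp + fn), tn / (tn + fp)

rng = np.random.default_rng(2)
y_true = rng.integers(0, 2, size=200)
y_pred = np.where(rng.random(200) < 0.85, y_true, 1 - y_true)  # 85% of trials decoded correctly
kappa, tpr, tnr = binary_metrics(y_true, y_pred)
print(f"kappa = {kappa:.2f}, TPR = {tpr:.1%}, TNR = {tnr:.1%}")
```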

  6. Modeling parameters that characterize pacing of elite female 800-m freestyle swimmers.

    PubMed

    Lipińska, Patrycja; Allen, Sian V; Hopkins, Will G

    2016-01-01

    Pacing offers a potential avenue for enhancement of endurance performance. We report here a novel method for characterizing pacing in 800-m freestyle swimming. Websites provided 50-m lap and race times for 192 swims of 20 elite female swimmers between 2000 and 2013. Pacing for each swim was characterized with five parameters derived from a linear model: linear and quadratic coefficients for effect of lap number, reductions from predicted time for first and last laps, and lap-time variability (standard error of the estimate). Race-to-race consistency of the parameters was expressed as intraclass correlation coefficients (ICCs). The average swim was a shallow negative quadratic with slowest time in the eleventh lap. First and last laps were faster by 6.4% and 3.6%, and lap-time variability was ±0.64%. Consistency between swimmers ranged from low-moderate for the linear and quadratic parameters (ICC = 0.29 and 0.36) to high for the last-lap parameter (ICC = 0.62), while consistency for race time was very high (ICC = 0.80). Only ~15% of swimmers had enough swims (~15 or more) to provide reasonable evidence of optimum parameter values in plots of race time vs. each parameter. The modest consistency of most of the pacing parameters and lack of relationships between parameters and performance suggest that swimmers usually compensated for changes in one parameter with changes in another. In conclusion, pacing in 800-m elite female swimmers can be characterized with five parameters, but identifying an optimal pacing profile is generally impractical.
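
    A minimal sketch of one way to extract the five pacing parameters described (linear and quadratic lap-number coefficients, first- and last-lap deviations, and residual lap-time variability) from a single 16-lap swim. The exact coding of the model in the study may differ, and the lap times below are synthetic.

```python
import numpy as np

def pacing_parameters(lap_times):
    """Fit lap time ~ lap + lap^2 + first-lap indicator + last-lap indicator for one 800-m swim."""
    n = len(lap_times)
    lap = np.arange(1, n + 1, dtype=float)
    lap_c = lap - lap.mean()                       # centre the lap number to reduce collinearity
    X = np.column_stack([
        np.ones(n),
        lap_c,                                     # linear coefficient
        lap_c ** 2,                                # quadratic coefficient
        (lap == 1).astype(float),                  # first-lap deviation from the fitted trend
        (lap == n).astype(float),                  # last-lap deviation from the fitted trend
    ])
    beta, _, _, _ = np.linalg.lstsq(X, lap_times, rcond=None)
    resid = lap_times - X @ beta
    see = np.sqrt(resid @ resid / (n - X.shape[1]))   # lap-time variability (standard error of estimate)
    return {"linear": beta[1], "quadratic": beta[2],
            "first_lap": beta[3], "last_lap": beta[4], "variability": see}

# Synthetic 50-m lap times (s): fast first lap, slight mid-race slow-down, fast last lap.
rng = np.random.default_rng(3)
laps = 31.0 + 0.5 * np.sin(np.linspace(0, np.pi, 16)) + rng.normal(0, 0.15, 16)
laps[0] -= 2.0
laps[-1] -= 1.0
print(pacing_parameters(laps))
```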

  7. On the use of interaction error potentials for adaptive brain computer interfaces.

    PubMed

    Llera, A; van Gerven, M A J; Gómez, V; Jensen, O; Kappen, H J

    2011-12-01

    We propose an adaptive classification method for brain-computer interfaces (BCIs) which uses Interaction Error Potentials (IErrPs) as a reinforcement signal and adapts the classifier parameters when an error is detected. We analyze the quality of the proposed approach in relation to the misclassification of the IErrPs. In addition, we compare static versus adaptive classification performance using artificial and MEG data. We show that the proposed adaptive framework significantly improves on the static classification methods. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. Optimizing α for better statistical decisions: a case study involving the pace-of-life syndrome hypothesis: optimal α levels set to minimize Type I and II errors frequently result in different conclusions from those using α = 0.05.

    PubMed

    Mudge, Joseph F; Penny, Faith M; Houlahan, Jeff E

    2012-12-01

    Setting optimal significance levels that minimize Type I and Type II errors allows for more transparent and well-considered statistical decision making compared to the traditional α = 0.05 significance level. We use the optimal α approach to re-assess conclusions reached by three recently published tests of the pace-of-life syndrome hypothesis, which attempts to unify occurrences of different physiological, behavioral, and life history characteristics under one theory across different scales of biological organization. While some of the conclusions reached using optimal α were consistent with those previously reported using the traditional α = 0.05 threshold, opposing conclusions were also frequently reached. The optimal α approach reduced probabilities of Type I and Type II errors, and ensured statistical significance was associated with biological relevance. Biologists should seriously consider their choice of α when conducting null hypothesis significance tests, as there are serious disadvantages with consistent reliance on the traditional but arbitrary α = 0.05 significance level. Copyright © 2012 WILEY Periodicals, Inc.
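
    A minimal sketch of the optimal-α idea: choose the significance level that minimizes the (possibly weighted) average of the Type I and Type II error probabilities for a biologically relevant effect size. The normal-approximation power formula and the example effect size and sample size are illustrative assumptions, not the authors' procedure.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def average_error(alpha, effect_size, n_per_group, w_type1=1.0, w_type2=1.0):
    """Weighted average of alpha and beta for a two-sample, two-sided test (normal approximation)."""
    if not 0 < alpha < 1:
        return np.inf
    z_crit = norm.ppf(1 - alpha / 2)
    power = norm.cdf(effect_size * np.sqrt(n_per_group / 2) - z_crit)
    return (w_type1 * alpha + w_type2 * (1 - power)) / (w_type1 + w_type2)

# Critical effect size d = 0.5 with n = 30 per group (hypothetical values).
res = minimize_scalar(average_error, bounds=(1e-4, 0.5), method="bounded", args=(0.5, 30))
print(f"optimal alpha = {res.x:.3f} (vs. the conventional 0.05)")
```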

  9. A neural network for noise correlation classification

    NASA Astrophysics Data System (ADS)

    Paitz, Patrick; Gokhberg, Alexey; Fichtner, Andreas

    2018-02-01

    We present an artificial neural network (ANN) for the classification of ambient seismic noise correlations into two categories, suitable and unsuitable for noise tomography. By using only a small manually classified data subset for network training, the ANN allows us to classify large data volumes with low human effort and to encode the valuable subjective experience of data analysts that cannot be captured by a deterministic algorithm. Based on a new feature extraction procedure that exploits the wavelet-like nature of seismic time-series, we efficiently reduce the dimensionality of noise correlation data, still keeping relevant features needed for automated classification. Using global- and regional-scale data sets, we show that classification errors of 20 per cent or less can be achieved when the network training is performed with as little as 3.5 per cent and 16 per cent of the data sets, respectively. Furthermore, the ANN trained on the regional data can be applied to the global data, and vice versa, without a significant increase of the classification error. An experiment in which four students manually classified the data revealed that the classification error they would assign to each other is substantially larger than the classification error of the ANN (>35 per cent). This indicates that reproducibility would be hampered more by human subjectivity than by imperfections of the ANN.

  10. Effects of aging on control of timing and force of finger tapping.

    PubMed

    Sasaki, Hirokazu; Masumoto, Junya; Inui, Nobuyuki

    2011-04-01

    The present study examined whether the elderly produced a hastened or delayed tap with a negative or positive constant intertap interval error more frequently in self-paced tapping than in the stimulus-synchronized tapping for the 2 N target force at 2 or 4 Hz frequency. The analysis showed that, at both frequencies, the percentage of the delayed tap was larger in the self-paced tapping than in the stimulus-synchronized tapping, whereas the hastened tap showed the opposite result. At the 4 Hz frequency, all age groups had more variable intertap intervals during the self-paced tapping than during the stimulus-synchronized tapping, and the variability of the intertap intervals increased with age. Thus, although the increase in the frequency of delayed taps and variable intertap intervals in the self-paced tapping perhaps resulted from a dysfunction of movement timing in the basal ganglia with age, the decline in timing accuracy was somewhat improved by an auditory cue. The force variability of tapping at 4 Hz further increased with age, indicating an effect of aging on the control of force.

  11. Self-Paced Prioritized Curriculum Learning With Coverage Penalty in Deep Reinforcement Learning.

    PubMed

    Ren, Zhipeng; Dong, Daoyi; Li, Huaxiong; Chen, Chunlin

    2018-06-01

    In this paper, a new training paradigm is proposed for deep reinforcement learning using self-paced prioritized curriculum learning with coverage penalty. The proposed deep curriculum reinforcement learning (DCRL) takes full advantage of experience replay by adaptively selecting appropriate transitions from replay memory based on the complexity of each transition. The criteria of complexity in DCRL consist of self-paced priority as well as coverage penalty. The self-paced priority reflects the relationship between the temporal-difference error and the difficulty of the current curriculum for sample efficiency. The coverage penalty is taken into account for sample diversity. In comparison with the deep Q network (DQN) and prioritized experience replay (PER) methods, the DCRL algorithm is evaluated on Atari 2600 games, and the experimental results show that DCRL outperforms DQN and PER on most of these games. Further results show that the proposed curriculum training paradigm of DCRL is also applicable and effective for other memory-based deep reinforcement learning approaches, such as double DQN and dueling network. All the experimental results demonstrate that DCRL can achieve improved training efficiency and robustness for deep reinforcement learning.
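
    The abstract does not give formulas, so the sketch below is only one plausible reading of the sampling rule: a transition's priority is highest when its temporal-difference error matches the current curriculum difficulty (self-paced priority) and is damped the more often it has already been replayed (coverage penalty). All functional forms and constants are assumptions for illustration, not the published DCRL algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)

class CurriculumReplay:
    """Toy replay buffer with a self-paced priority and a coverage penalty (illustrative only)."""

    def __init__(self, td_errors, target_difficulty=0.5, bandwidth=0.3, coverage_weight=0.1):
        self.td = np.asarray(td_errors, dtype=float)
        self.counts = np.zeros_like(self.td)        # how often each transition has been replayed
        self.target = target_difficulty             # current curriculum difficulty
        self.bandwidth = bandwidth
        self.coverage_weight = coverage_weight

    def priorities(self):
        # Self-paced priority: prefer TD errors close to the current curriculum difficulty.
        self_paced = np.exp(-((self.td - self.target) / self.bandwidth) ** 2)
        # Coverage penalty: damp transitions that have already been replayed many times.
        coverage = 1.0 / (1.0 + self.coverage_weight * self.counts)
        p = self_paced * coverage
        return p / p.sum()

    def sample(self, batch_size):
        idx = rng.choice(len(self.td), size=batch_size, replace=False, p=self.priorities())
        self.counts[idx] += 1
        return idx

buffer = CurriculumReplay(td_errors=rng.exponential(0.5, size=1000))
for step in range(100):
    buffer.sample(32)
    buffer.target = min(1.5, buffer.target + 0.01)   # gradually raise the curriculum difficulty
print("most-replayed TD errors:", np.round(buffer.td[np.argsort(-buffer.counts)[:5]], 2))
```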

  12. Model-based imaging of cardiac electrical function in human atria

    NASA Astrophysics Data System (ADS)

    Modre, Robert; Tilg, Bernhard; Fischer, Gerald; Hanser, Friedrich; Messnarz, Bernd; Schocke, Michael F. H.; Kremser, Christian; Hintringer, Florian; Roithinger, Franz

    2003-05-01

    Noninvasive imaging of electrical function in the human atria is attained by the combination of data from electrocardiographic (ECG) mapping and magnetic resonance imaging (MRI). An anatomical computer model of the individual patient is the basis for our computer-aided diagnosis of cardiac arrhythmias. Three patients suffering from Wolff-Parkinson-White syndrome, from paroxysmal atrial fibrillation, and from atrial flutter underwent an electrophysiological study. After successful treatment of the cardiac arrhythmia with an invasive catheter technique, pacing protocols with stimuli at several anatomical sites (coronary sinus, left and right pulmonary vein, posterior site of the right atrium, right atrial appendage) were performed. Reconstructed activation time (AT) maps were validated with catheter-based electroanatomical data, with invasively determined pacing sites, and with pacing at anatomical markers. The individual complex anatomical model of the atria of each patient in combination with a high-quality mesh optimization enables accurate AT imaging, resulting in a localization error for the estimated pacing sites within 1 cm. Our findings may have implications for imaging of atrial activity in patients with focal arrhythmias.

  13. PACE: Probabilistic Assessment for Contributor Estimation- A machine learning-based assessment of the number of contributors in DNA mixtures.

    PubMed

    Marciano, Michael A; Adelman, Jonathan D

    2017-03-01

    The deconvolution of DNA mixtures remains one of the most critical challenges in the field of forensic DNA analysis. In addition, of all the data features required to perform such deconvolution, the number of contributors in the sample is widely considered the most important, and, if incorrectly chosen, the most likely to negatively influence the mixture interpretation of a DNA profile. Unfortunately, most current approaches to mixture deconvolution require the assumption that the number of contributors is known by the analyst, an assumption that can prove to be especially faulty when faced with increasingly complex mixtures of 3 or more contributors. In this study, we propose a probabilistic approach for estimating the number of contributors in a DNA mixture that leverages the strengths of machine learning. To assess this approach, we compare classification performances of six machine learning algorithms and evaluate the model from the top-performing algorithm against the current state of the art in the field of contributor number classification. Overall results show over 98% accuracy in identifying the number of contributors in a DNA mixture of up to 4 contributors. Comparative results showed 3-person mixtures had a classification accuracy improvement of over 6% compared to the current best-in-field methodology, and that 4-person mixtures had a classification accuracy improvement of over 20%. The Probabilistic Assessment for Contributor Estimation (PACE) also accomplishes classification of mixtures of up to 4 contributors in less than 1s using a standard laptop or desktop computer. Considering the high classification accuracy rates, as well as the significant time commitment required by the current state of the art model versus seconds required by a machine learning-derived model, the approach described herein provides a promising means of estimating the number of contributors and, subsequently, will lead to improved DNA mixture interpretation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  14. Exploration of computational methods for classification of movement intention during human voluntary movement from single trial EEG.

    PubMed

    Bai, Ou; Lin, Peter; Vorbach, Sherry; Li, Jiang; Furlani, Steve; Hallett, Mark

    2007-12-01

    To explore effective combinations of computational methods for the prediction of movement intention preceding the production of self-paced right and left hand movements from single trial scalp electroencephalogram (EEG). Twelve naïve subjects performed self-paced movements consisting of three key strokes with either hand. EEG was recorded from 128 channels. The exploration was performed offline on single trial EEG data. We proposed that a successful computational procedure for classification would consist of spatial filtering, temporal filtering, feature selection, and pattern classification. A systematic investigation was performed with combinations of spatial filtering using principal component analysis (PCA), independent component analysis (ICA), common spatial patterns analysis (CSP), and surface Laplacian derivation (SLD); temporal filtering using power spectral density estimation (PSD) and discrete wavelet transform (DWT); pattern classification using linear Mahalanobis distance classifier (LMD), quadratic Mahalanobis distance classifier (QMD), Bayesian classifier (BSC), multi-layer perceptron neural network (MLP), probabilistic neural network (PNN), and support vector machine (SVM). A robust multivariate feature selection strategy using a genetic algorithm was employed. The combinations of spatial filtering using ICA and SLD, temporal filtering using PSD and DWT, and classification methods using LMD, QMD, BSC and SVM provided higher performance than those of other combinations. Utilizing one of the better combinations of ICA, PSD and SVM, the discrimination accuracy was as high as 75%. Further feature analysis showed that beta band EEG activity of the channels over right sensorimotor cortex was most appropriate for discrimination of right and left hand movement intention. Effective combinations of computational methods provide possible classification of human movement intention from single trial EEG. Such a method could be the basis for a potential brain-computer interface based on human natural movement, which might reduce the requirement of long-term training. Effective combinations of computational methods can classify human movement intention from single trial EEG with reasonable accuracy.
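
    A compact sketch of one of the better-performing combinations named above (power spectral density features followed by an SVM), leaving out the spatial filtering and genetic-algorithm feature selection steps. The synthetic two-class EEG, the beta-band definition, and the classifier settings are illustrative assumptions, not the study's data or parameters.

```python
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
fs, n_trials, n_channels, n_samples = 256, 120, 8, 512

def band_power(trial, fmin=13, fmax=30):
    """Beta-band power per channel from the Welch PSD (one feature per channel)."""
    f, psd = welch(trial, fs=fs, nperseg=128, axis=-1)
    band = (f >= fmin) & (f <= fmax)
    return psd[:, band].mean(axis=-1)

X, y = [], []
t = np.arange(n_samples) / fs
for i in range(n_trials):
    label = i % 2
    trial = rng.normal(size=(n_channels, n_samples))
    if label:  # one class gets extra beta-band power on half the channels (synthetic effect)
        trial[: n_channels // 2] += 0.8 * np.sin(2 * np.pi * 20 * t)
    X.append(band_power(trial))
    y.append(label)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(f"cross-validated accuracy: {cross_val_score(clf, np.array(X), np.array(y), cv=5).mean():.2f}")
```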

  15. Noninvasive reconstruction of the three-dimensional ventricular activation sequence during pacing and ventricular tachycardia in the rabbit heart.

    PubMed

    Han, Chengzong; Pogwizd, Steven M; Killingsworth, Cheryl R; He, Bin

    2011-01-01

    Ventricular arrhythmias represent one of the leading causes of sudden cardiac death, a significant problem in public health. Noninvasive imaging of cardiac electric activities associated with ventricular arrhythmias plays an important role in improving our understanding of the mechanisms and optimizing the treatment options. The present study aims to rigorously validate a novel three-dimensional (3-D) cardiac electrical imaging (3-DCEI) technique with the aid of 3-D intra-cardiac mapping during paced rhythm and ventricular tachycardia (VT) in the rabbit heart. Body surface potentials and intramural bipolar electrical recordings were simultaneously measured in a closed-chest condition in thirteen healthy rabbits. Single-site pacing and dual-site pacing were performed from ventricular walls and septum. VTs and premature ventricular complexes (PVCs) were induced by intravenous norepinephrine (NE). The non-invasively imaged activation sequence correlated well with invasively measured counterparts, with a correlation coefficient of 0.72 and a relative error of 0.30 averaged over all paced beats and NE-induced PVCs and VT beats. The averaged distance from the imaged site of initial activation to the measured site determined from intra-cardiac mapping was ∼5 mm. These promising results suggest that it is feasible for 3-DCEI to non-invasively localize the origins and image the activation sequence of focal ventricular arrhythmias.

  16. Analyzing thematic maps and mapping for accuracy

    USGS Publications Warehouse

    Rosenfield, G.H.

    1982-01-01

    Two problems which exist while attempting to test the accuracy of thematic maps and mapping are: (1) evaluating the accuracy of thematic content, and (2) evaluating the effects of the variables on thematic mapping. Statistical analysis techniques are applicable to both these problems and include techniques for sampling the data and determining their accuracy. In addition, techniques for hypothesis testing, or inferential statistics, are used when comparing the effects of variables. A comprehensive and valid accuracy test of a classification project, such as thematic mapping from remotely sensed data, includes the following components of statistical analysis: (1) sample design, including the sample distribution, sample size, size of the sample unit, and sampling procedure; and (2) accuracy estimation, including estimation of the variance and confidence limits. Careful consideration must be given to the minimum sample size necessary to validate the accuracy of a given classification category. The results of an accuracy test are presented in a contingency table sometimes called a classification error matrix. Usually the rows represent the interpretation, and the columns represent the verification. The diagonal elements represent the correct classifications. The remaining elements of the rows represent errors of commission, and the remaining elements of the columns represent the errors of omission. For tests of hypothesis that compare variables, the general practice has been to use only the diagonal elements from several related classification error matrices. These data are arranged in the form of another contingency table. The columns of the table represent the different variables being compared, such as different scales of mapping. The rows represent the blocking characteristics, such as the various categories of classification. The values in the cells of the tables might be the counts of correct classification or the binomial proportions of these counts divided by either the row totals or the column totals from the original classification error matrices. In hypothesis testing, when the results of tests of multiple sample cases prove to be significant, some form of statistical test must be used to separate any results that differ significantly from the others. In the past, many analyses of the data in this error matrix were made by comparing the relative magnitudes of the percentage of correct classifications, for either individual categories, the entire map, or both. More rigorous analyses have used data transformations and (or) two-way classification analysis of variance. A more sophisticated data analysis approach would be to use the entire classification error matrices with the methods of discrete multivariate analysis or of multivariate analysis of variance.
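
    A minimal sketch of the error-matrix bookkeeping described above: rows as interpretation, columns as verification, overall percent correct from the diagonal, and per-category commission and omission errors from the off-diagonal row and column elements. The category names and counts are hypothetical.

```python
import numpy as np

# Rows = interpretation (map), columns = verification (reference); hypothetical counts.
categories = ["forest", "water", "urban"]
matrix = np.array([[82,  3,  5],
                   [ 4, 61,  2],
                   [ 9,  6, 78]])

print(f"overall accuracy: {np.trace(matrix) / matrix.sum():.1%}")

for i, name in enumerate(categories):
    row_sum = matrix[i].sum()      # everything mapped as this category
    col_sum = matrix[:, i].sum()   # everything verified as this category
    commission = (row_sum - matrix[i, i]) / row_sum   # mapped as the category but wrong
    omission = (col_sum - matrix[i, i]) / col_sum     # belongs to the category but missed
    print(f"{name:>6}: commission {commission:.1%}, omission {omission:.1%}")
```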

  17. Analysis of DSN software anomalies

    NASA Technical Reports Server (NTRS)

    Galorath, D. D.; Hecht, H.; Hecht, M.; Reifer, D. J.

    1981-01-01

    A categorized database of software errors discovered during the various stages of development and operational use of the Deep Space Network DSN/Mark 3 System was developed. A study team identified several existing error classification schemes (taxonomies), prepared a detailed annotated bibliography of the error taxonomy literature, and produced a new classification scheme which was tuned to the DSN anomaly reporting system and encapsulated the work of others. Based upon the DSN/RCI error taxonomy, error data on approximately 1000 reported DSN/Mark 3 anomalies were analyzed, interpreted and classified. Finally, the error data were summarized and histograms were produced highlighting key tendencies.

  18. Neyman-Pearson classification algorithms and NP receiver operating characteristics

    PubMed Central

    Tong, Xin; Feng, Yang; Li, Jingyi Jessica

    2018-01-01

    In many binary classification applications, such as disease diagnosis and spam detection, practitioners commonly face the need to limit type I error (that is, the conditional probability of misclassifying a class 0 observation as class 1) so that it remains below a desired threshold. To address this need, the Neyman-Pearson (NP) classification paradigm is a natural choice; it minimizes type II error (that is, the conditional probability of misclassifying a class 1 observation as class 0) while enforcing an upper bound, α, on the type I error. Despite its century-long history in hypothesis testing, the NP paradigm has not been well recognized and implemented in classification schemes. Common practices that directly limit the empirical type I error to no more than α do not satisfy the type I error control objective because the resulting classifiers are likely to have type I errors much larger than α, and the NP paradigm has not been properly implemented in practice. We develop the first umbrella algorithm that implements the NP paradigm for all scoring-type classification methods, such as logistic regression, support vector machines, and random forests. Powered by this algorithm, we propose a novel graphical tool for NP classification methods: NP receiver operating characteristic (NP-ROC) bands motivated by the popular ROC curves. NP-ROC bands will help choose α in a data-adaptive way and compare different NP classifiers. We demonstrate the use and properties of the NP umbrella algorithm and NP-ROC bands, available in the R package nproc, through simulation and real data studies. PMID:29423442
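
    A minimal sketch of the type I error control problem the paper addresses: thresholding at the empirical (1 - α) quantile of held-out class-0 scores gives a type I error near α on average but exceeds it roughly half the time, while a more conservative order-statistic rule (chosen so the probability of exceeding α is at most δ, the idea behind the NP umbrella algorithm) stays below the bound with high probability. The score distributions are synthetic, and this is not the nproc implementation.

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(6)
alpha, delta = 0.05, 0.05   # type I error bound and tolerated probability of violating it

def np_threshold(class0_scores, alpha, delta):
    """Smallest order statistic of held-out class-0 scores whose violation probability is <= delta."""
    s = np.sort(class0_scores)
    n = s.size
    for k in range(1, n + 1):
        # P(type I error > alpha) when classifying "score > s[k-1]" as class 1
        if binom.sf(k - 1, n, 1 - alpha) <= delta:
            return s[k - 1]
    raise ValueError("too few class-0 observations to control type I error at this alpha/delta")

class0_holdout = rng.normal(0.0, 1.0, size=500)            # synthetic classifier scores, class 0
conservative = np_threshold(class0_holdout, alpha, delta)
naive = np.quantile(class0_holdout, 1 - alpha)             # common practice

new_class0 = rng.normal(0.0, 1.0, size=100_000)
print(f"order-statistic threshold: type I error {np.mean(new_class0 > conservative):.3f}")
print(f"naive quantile threshold:  type I error {np.mean(new_class0 > naive):.3f}")
```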

  20. BCI Competition IV – Data Set I: Learning Discriminative Patterns for Self-Paced EEG-Based Motor Imagery Detection

    PubMed Central

    Zhang, Haihong; Guan, Cuntai; Ang, Kai Keng; Wang, Chuanchu

    2012-01-01

    Detecting motor imagery activities versus non-control in brain signals is the basis of self-paced brain-computer interfaces (BCIs), but also poses a considerable challenge to signal processing due to the complex and non-stationary characteristics of motor imagery as well as non-control. This paper presents a self-paced BCI based on a robust learning mechanism that extracts and selects spatio-spectral features for differentiating multiple EEG classes. It also employs a non-linear regression and post-processing technique for predicting the time-series of class labels from the spatio-spectral features. The method was validated in the BCI Competition IV on Dataset I where it produced the lowest prediction error of class labels continuously. This report also presents and discusses analysis of the method using the competition data set. PMID:22347153

  1. Design and Evaluation of Virtual Reality-Based Therapy Games with Dual Focus on Therapeutic Relevance and User Experience for Children with Cerebral Palsy.

    PubMed

    Ni, Lian Ting; Fehlings, Darcy; Biddiss, Elaine

    2014-06-01

    Virtual reality (VR)-based therapy for motor rehabilitation of children with cerebral palsy (CP) is growing in prevalence. Although mainstream active videogames typically offer children an appealing user experience, they are not designed for therapeutic relevance. Conversely, rehabilitation-specific games often struggle to provide an immersive experience that sustains interest. This study aims to design two VR-based therapy games for upper and lower limb rehabilitation and to evaluate their efficacy with a dual focus on therapeutic relevance and user experience. Three occupational therapists, three physiotherapists, and eight children (8-12 years old), with CP at Levels I-III on the Gross Motor Function Classification System, evaluated two games for the Microsoft(®) (Redmond, WA) Kinect™ for Windows and completed the System Usability Scale (SUS), Physical Activity Enjoyment Scale (PACES), and custom feedback questionnaires. Children and therapists unanimously agreed on the enjoyment and therapeutic value of the games. Median scores on the PACES were high (6.24±0.95 on the 7-point scale). Therapists considered the system to be of average usability (50th percentile on the SUS). The most prevalent usability issue was detection errors distinguishing the child's movements from the supporting therapist's. The ability to adjust difficulty settings and to focus on targeted goals (e.g., elbow/shoulder extension, weight shifting) was highly valued by therapists. Engaging both therapists and children in a user-centered design approach enabled the development of two VR-based therapy games for upper and lower limb rehabilitation that are dually (a) engaging to the child and (b) therapeutically relevant.

  2. Self-paced preparation for a task switch eliminates attentional inertia but not the performance switch cost.

    PubMed

    Longman, Cai S; Lavric, Aureliu; Monsell, Stephen

    2017-06-01

    The performance overhead associated with changing tasks (the "switch cost") usually diminishes when the task is specified in advance but is rarely eliminated by preparation. A popular account of the "residual" (asymptotic) switch cost is that it reflects "task-set inertia": carry-over of task-set parameters from the preceding trial(s). New evidence for a component of "task-set inertia" comes from eye-tracking, where the location associated with the previously (but no longer) relevant task is fixated preferentially over other irrelevant locations, even when preparation intervals are generous. Might such limits in overcoming task-set inertia in general, and "attentional inertia" in particular, result from suboptimal scheduling of preparation when the time available is outside one's control? In the present study, the stimulus comprised 3 digits located at the points of an invisible triangle, preceded by a central verbal cue specifying which of 3 classification tasks to perform, each consistently applied to just 1 digit location. The digits were presented only when fixation moved away from the cue, thus giving the participant control over preparation time. In contrast to our previous research with experimenter-determined preparation intervals, we found no sign of attentional inertia for the long preparation intervals. Self-paced preparation reduced but did not eliminate the performance switch cost-leaving a clear residual component in both reaction time and error rates. That the scheduling of preparation accounts for some, but not all, components of the residual switch cost, challenges existing accounts of the switch cost, even those which distinguish between preparatory and poststimulus reconfiguration processes. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  3. Evaluation of normalization methods for cDNA microarray data by k-NN classification

    PubMed Central

    Wu, Wei; Xing, Eric P; Myers, Connie; Mian, I Saira; Bissell, Mina J

    2005-01-01

    Background Non-biological factors give rise to unwanted variations in cDNA microarray data. There are many normalization methods designed to remove such variations. However, to date there have been few published systematic evaluations of these techniques for removing variations arising from dye biases in the context of downstream, higher-order analytical tasks such as classification. Results Ten location normalization methods that adjust spatial- and/or intensity-dependent dye biases, and three scale methods that adjust scale differences were applied, individually and in combination, to five distinct, published, cancer biology-related cDNA microarray data sets. Leave-one-out cross-validation (LOOCV) classification error was employed as the quantitative end-point for assessing the effectiveness of a normalization method. In particular, a known classifier, k-nearest neighbor (k-NN), was estimated from data normalized using a given technique, and the LOOCV error rate of the ensuing model was computed. We found that k-NN classifiers are sensitive to dye biases in the data. Using NONRM and GMEDIAN as baseline methods, our results show that single-bias-removal techniques which remove either spatial-dependent dye bias (referred later as spatial effect) or intensity-dependent dye bias (referred later as intensity effect) moderately reduce LOOCV classification errors; whereas double-bias-removal techniques which remove both spatial- and intensity effect reduce LOOCV classification errors even further. Of the 41 different strategies examined, three two-step processes, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, all of which removed intensity effect globally and spatial effect locally, appear to reduce LOOCV classification errors most consistently and effectively across all data sets. We also found that the investigated scale normalization methods do not reduce LOOCV classification error. Conclusion Using LOOCV error of k-NNs as the evaluation criterion, three double-bias-removal normalization strategies, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, outperform other strategies for removing spatial effect, intensity effect and scale differences from cDNA microarray data. The apparent sensitivity of k-NN LOOCV classification error to dye biases suggests that this criterion provides an informative measure for evaluating normalization methods. All the computational tools used in this study were implemented using the R language for statistical computing and graphics. PMID:16045803
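
    A minimal sketch of the evaluation end-point used in the study: the leave-one-out cross-validation error of a k-NN classifier, which would be computed once per normalization strategy and compared. The expression matrix below is synthetic and no normalization method is implemented.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(7)
n_samples, n_genes = 60, 200

# Synthetic log-ratio expression matrix with a modest class difference in a subset of genes.
y = np.repeat([0, 1], n_samples // 2)
X = rng.normal(size=(n_samples, n_genes))
X[y == 1, :20] += 0.8

def loocv_error(X, y, k=3):
    """LOOCV classification error of a k-NN classifier."""
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=LeaveOneOut())
    return 1.0 - acc.mean()

print(f"LOOCV k-NN error: {loocv_error(X, y):.3f}")
```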

  5. Bayesian learning for spatial filtering in an EEG-based brain-computer interface.

    PubMed

    Zhang, Haihong; Yang, Huijuan; Guan, Cuntai

    2013-07-01

    Spatial filtering for EEG feature extraction and classification is an important tool in brain-computer interface. However, there is generally no established theory that links spatial filtering directly to Bayes classification error. To address this issue, this paper proposes and studies a Bayesian analysis theory for spatial filtering in relation to Bayes error. Following the maximum entropy principle, we introduce a gamma probability model for describing single-trial EEG power features. We then formulate and analyze the theoretical relationship between Bayes classification error and the so-called Rayleigh quotient, which is a function of spatial filters and basically measures the ratio in power features between two classes. This paper also reports our extensive study that examines the theory and its use in classification, using three publicly available EEG data sets and state-of-the-art spatial filtering techniques and various classifiers. Specifically, we validate the positive relationship between Bayes error and Rayleigh quotient in real EEG power features. Finally, we demonstrate that the Bayes error can be practically reduced by applying a new spatial filter with lower Rayleigh quotient.
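
    A minimal sketch of the quantity the paper analyzes: the Rayleigh quotient of a spatial filter w, i.e. the ratio of filtered power between the two classes, with the extremal filters obtained from the generalized eigenvalue problem used in CSP-style filtering. The covariance matrices here are synthetic, and the paper's gamma model for power features is not reproduced.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(8)
n_channels = 6

def random_spd(scale):
    a = rng.normal(size=(n_channels, n_channels))
    return a @ a.T + scale * np.eye(n_channels)

sigma1 = random_spd(1.0)   # class-1 EEG covariance (synthetic)
sigma2 = random_spd(3.0)   # class-2 EEG covariance (synthetic)

def rayleigh_quotient(w, s1, s2):
    """Ratio of class-1 to class-2 power after spatial filtering with w."""
    return (w @ s1 @ w) / (w @ s2 @ w)

# Generalized eigenvectors of (sigma1, sigma2) extremize the Rayleigh quotient, as in CSP.
eigvals, eigvecs = eigh(sigma1, sigma2)
w_min, w_max = eigvecs[:, 0], eigvecs[:, -1]
print(f"smallest quotient: {rayleigh_quotient(w_min, sigma1, sigma2):.3f}")
print(f"largest quotient:  {rayleigh_quotient(w_max, sigma1, sigma2):.3f}")
# The abstract reports that filters with a lower Rayleigh quotient gave a lower Bayes error.
```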

  6. Sources of error in estimating truck traffic from automatic vehicle classification data

    DOT National Transportation Integrated Search

    1998-10-01

    Truck annual average daily traffic estimation errors resulting from sample classification counts are computed in this paper under two scenarios. One scenario investigates an improper factoring procedure that may be used by highway agencies. The study...

  8. Cardiomyoplasty: first clinical case with new cardiomyostimulator.

    PubMed

    Chekanov, Valeri S; Sands, Duane E; Brown, Conville S; Brum, Fernando; Arzuaga, Pedro; Gava, Sebastian; Eugenio, Ferdinand P; Melamed, Vladimir; Spencer, Howard W

    2002-09-01

    Dynamic cardiomyoplasty was performed in a patient using a new cardiomyostimulator (LD-PACE II) designed to enable a novel stimulation regimen that utilizes a new range of stimulation options, including cessation during sleep. After treatment, left ventricular ejection fraction improved from 15% to 25% over 24 months, and New York Heart Association classification improved from class IV to class II.

  9. Automated Classification of Phonological Errors in Aphasic Language

    PubMed Central

    Ahuja, Sanjeev B.; Reggia, James A.; Berndt, Rita S.

    1984-01-01

    Using heuristically-guided state space search, a prototype program has been developed to simulate and classify phonemic errors occurring in the speech of neurologically-impaired patients. Simulations are based on an interchangeable rule/operator set of elementary errors which represent a theory of phonemic processing faults. This work introduces and evaluates a novel approach to error simulation and classification, it provides a prototype simulation tool for neurolinguistic research, and it forms the initial phase of a larger research effort involving computer modelling of neurolinguistic processes.

  10. ANALYSIS OF A CLASSIFICATION ERROR MATRIX USING CATEGORICAL DATA TECHNIQUES.

    USGS Publications Warehouse

    Rosenfield, George H.; Fitzpatrick-Lins, Katherine

    1984-01-01

    Summary form only given. A classification error matrix typically contains tabulation results of an accuracy evaluation of a thematic classification, such as that of a land use and land cover map. The diagonal elements of the matrix represent the correctly classified counts, and the usual designation of classification accuracy has been the total percent correct. The nondiagonal elements of the matrix have usually been neglected. The classification error matrix is known in statistical terms as a contingency table of categorical data. As an example, an application of these methodologies to a problem of remotely sensed data concerning two photointerpreters and four categories of classification indicated that there is no significant difference in the interpretation between the two photointerpreters, and that there are significant differences among the interpreted category classifications. However, two categories, oak and cottonwood, are not separable in classification in this experiment at the 0.51 percent probability. A coefficient of agreement is determined for the interpreted map as a whole, and individually for each of the interpreted categories. A conditional coefficient of agreement for the individual categories is compared to other methods for expressing category accuracy which have already been presented in the remote sensing literature.
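
    A minimal sketch of the coefficients described above: an overall coefficient of agreement (kappa) for the whole error matrix and a conditional (per-category) coefficient for each interpreted category. The four-category counts are hypothetical, and the row-conditional form of kappa used here is one common choice.

```python
import numpy as np

# Hypothetical error matrix: rows = interpreted category, columns = verified category.
categories = ["oak", "cottonwood", "pine", "grass"]
m = np.array([[30,  8,  2,  1],
              [ 7, 25,  3,  2],
              [ 2,  3, 40,  4],
              [ 1,  2,  5, 35]], dtype=float)

p = m / m.sum()
row, col = p.sum(axis=1), p.sum(axis=0)

po = np.trace(p)            # observed proportion of agreement
pe = np.sum(row * col)      # agreement expected by chance
print(f"overall kappa: {(po - pe) / (1 - pe):.3f}")

for i, name in enumerate(categories):
    # Conditional coefficient of agreement for interpreted category i.
    k_i = (p[i, i] / row[i] - col[i]) / (1 - col[i])
    print(f"{name:>10}: conditional kappa {k_i:.3f}")
```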

  11. New wideband radar target classification method based on neural learning and modified Euclidean metric

    NASA Astrophysics Data System (ADS)

    Jiang, Yicheng; Cheng, Ping; Ou, Yangkui

    2001-09-01

    A new method for target classification of high-range resolution radar is proposed. It uses neural learning to obtain invariant subclass features from training range profiles. A modified Euclidean metric based on the Box-Cox transformation technique is investigated for improving nearest-neighbor target classification. Classification experiments using real radar data from three different aircraft demonstrated that the classification error can be reduced by 8% when the method proposed in this paper is chosen instead of the conventional method. The results of this paper show that by choosing an optimized metric, it is indeed possible to reduce the classification error without increasing the number of samples.
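
    A minimal sketch of the metric-modification idea: applying a Box-Cox power transformation to positive-valued range-profile features before nearest-neighbour matching, which changes the effective distance metric. The synthetic gamma-distributed features, the maximum-likelihood choice of the Box-Cox parameter, and fitting the transform on all data at once are illustrative simplifications, not the authors' exact formulation.

```python
import numpy as np
from scipy.stats import boxcox
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
n_per_class, n_features = 60, 32

# Synthetic positive-valued "range profile" features for three aircraft classes.
X = np.vstack([rng.gamma(shape=2.0 + c, scale=1.0, size=(n_per_class, n_features)) for c in range(3)])
y = np.repeat([0, 1, 2], n_per_class)

def boxcox_features(X):
    """Box-Cox transform each feature column (requires strictly positive values)."""
    return np.column_stack([boxcox(col)[0] for col in X.T])

for name, features in (("Euclidean", X), ("Box-Cox + Euclidean", boxcox_features(X))):
    error = 1 - cross_val_score(KNeighborsClassifier(n_neighbors=1), features, y, cv=5).mean()
    print(f"{name:>20}: nearest-neighbour error {error:.3f}")
```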

  12. What Do Spelling Errors Tell Us? Classification and Analysis of Errors Made by Greek Schoolchildren with and without Dyslexia

    ERIC Educational Resources Information Center

    Protopapas, Athanassios; Fakou, Aikaterini; Drakopoulou, Styliani; Skaloumbakas, Christos; Mouzaki, Angeliki

    2013-01-01

    In this study we propose a classification system for spelling errors and determine the most common spelling difficulties of Greek children with and without dyslexia. Spelling skills of 542 children from the general population and 44 children with dyslexia, Grades 3-4 and 7, were assessed with a dictated common word list and age-appropriate…

  13. Noninvasive imaging of three-dimensional cardiac activation sequence during pacing and ventricular tachycardia.

    PubMed

    Han, Chengzong; Pogwizd, Steven M; Killingsworth, Cheryl R; He, Bin

    2011-08-01

    Imaging cardiac excitation within ventricular myocardium is important in the treatment of cardiac arrhythmias and might help improve our understanding of arrhythmia mechanisms. This study sought to rigorously assess the imaging performance of a 3-dimensional (3D) cardiac electrical imaging (3DCEI) technique with the aid of 3D intracardiac mapping from up to 216 intramural sites during paced rhythm and norepinephrine (NE)-induced ventricular tachycardia (VT) in the rabbit heart. Body surface potentials and intramural bipolar electrical recordings were simultaneously measured in a closed-chest condition in 13 healthy rabbits. Single-site pacing and dual-site pacing were performed from ventricular walls and septum. VTs and premature ventricular complexes (PVCs) were induced by intravenous NE. Computed tomography images were obtained to construct geometry models. The noninvasively imaged activation sequence correlated well with invasively measured counterpart, with a correlation coefficient of 0.72 ± 0.04, and a relative error of 0.30 ± 0.02 averaged over 520 paced beats as well as 73 NE-induced PVCs and VT beats. All PVCs and VT beats initiated in the subendocardium by a nonreentrant mechanism. The averaged distance from the imaged site of initial activation to the pacing site or site of arrhythmias determined from intracardiac mapping was ∼5 mm. For dual-site pacing, the double origins were identified when they were located at contralateral sides of ventricles or at the lateral wall and the apex. 3DCEI can noninvasively delineate important features of focal or multifocal ventricular excitation. It offers the potential to aid in localizing the origins and imaging activation sequences of ventricular arrhythmias, and to provide noninvasive assessment of the underlying arrhythmia mechanisms. Copyright © 2011 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.

  15. In-Bed Accountability Development for a Passively Cooled, Electrically Heated Hydride (PACE) Bed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klein, J.E.

    A nominal 1500 STP-L PAssively Cooled, Electrically heated hydride (PACE) Bed has been developed for implementation into a new Savannah River Site tritium project. The 1.2 meter (four-foot) long process vessel contains an internal 'U-tube' for tritium In-Bed Accountability (IBA) measurements. IBA will be performed on six, 12.6 kg production metal hydride storage beds. IBA tests were done on a prototype bed using electric heaters to simulate the radiolytic decay of tritium. Tests had gas flows from 10 to 100 SLPM through the U-tube or 100 SLPM through the bed's vacuum jacket. IBA inventory measurement errors at the 95% confidence level were calculated using the correlation of IBA gas temperature rise, or (hydride) bed temperature rise above ambient temperature, versus simulated tritium inventory. Prototype bed IBA inventory errors at 100 SLPM were the largest for gas flows through the vacuum jacket: 15.2 grams for the bed temperature rise and 11.5 grams for the gas temperature rise. For a 100 SLPM U-tube flow, the inventory error was 2.5 grams using bed temperature rise and 1.6 grams using gas temperature rise. For 50 to 100 SLPM U-tube flows, the IBA gas temperature rise inventory errors were nominally one to two grams, increasing above four grams for flows less than 50 SLPM. For 50 to 100 SLPM U-tube flows, the IBA bed temperature rise inventory errors were greater than the gas temperature rise errors, but similar errors were found for both methods at gas flows of 20, 30, and 40 SLPM. Electric heater IBA tests were done for six production hydride beds using a 45 SLPM U-tube gas flow. Of the duplicate runs performed on these beds, five of the six beds produced IBA inventory errors of approximately three grams, consistent with results obtained in the laboratory prototype tests.

  16. In-Bed Accountability Development for a Passively Cooled, Electrically Heated Hydride (PACE) Bed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    KLEIN, JAMES

    A nominal 1500 STP-L PAssively Cooled, Electrically heated hydride (PACE) Bed has been developed for implementation into a new Savannah River Site tritium project. The 1.2 meter (four-foot) long process vessel contains an internal 'U-tube' for tritium In-Bed Accountability (IBA) measurements. IBA will be performed on six, 12.6 kg production metal hydride storage beds. IBA tests were done on a prototype bed using electric heaters to simulate the radiolytic decay of tritium. Tests had gas flows from 10 to 100 SLPM through the U-tube or 100 SLPM through the bed's vacuum jacket. IBA inventory measurement errors at the 95 percent confidence level were calculated using the correlation of IBA gas temperature rise, or (hydride) bed temperature rise above ambient temperature, versus simulated tritium inventory. Prototype bed IBA inventory errors at 100 SLPM were the largest for gas flows through the vacuum jacket: 15.2 grams for the bed temperature rise and 11.5 grams for the gas temperature rise. For a 100 SLPM U-tube flow, the inventory error was 2.5 grams using bed temperature rise and 1.6 grams using gas temperature rise. For 50 to 100 SLPM U-tube flows, the IBA gas temperature rise inventory errors were nominally one to two grams, increasing above four grams for flows less than 50 SLPM. For 50 to 100 SLPM U-tube flows, the IBA bed temperature rise inventory errors were greater than the gas temperature rise errors, but similar errors were found for both methods at gas flows of 20, 30, and 40 SLPM. Electric heater IBA tests were done for six production hydride beds using a 45 SLPM U-tube gas flow. Of the duplicate runs performed on these beds, five of the six beds produced IBA inventory errors of approximately three grams, consistent with results obtained in the laboratory prototype tests.
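
    The IBA procedure described in these two records amounts to a calibration curve: temperature rise above ambient versus known (simulated) tritium inventory, with the inventory error quoted at 95% confidence. The sketch below is a minimal illustration of that calculation under the assumption of a linear correlation, using the residual scatter of the fit as a rough 2-sigma uncertainty; the data values are made up and the exact statistical treatment in the report may differ.

        import numpy as np

        # Hypothetical calibration data: simulated tritium inventory (g) vs.
        # IBA gas temperature rise above ambient (deg C) at a fixed U-tube flow.
        inventory = np.array([20.0, 40.0, 60.0, 80.0, 100.0])
        temp_rise = np.array([3.1, 6.0, 9.2, 11.9, 15.1])

        # Fit the correlation (inventory as a function of temperature rise).
        slope, intercept = np.polyfit(temp_rise, inventory, 1)
        predicted = slope * temp_rise + intercept

        # Approximate 95% confidence inventory error from the residual scatter
        # (2-sigma; a full treatment would use the t-distribution and leverage terms).
        residual_std = np.std(inventory - predicted, ddof=2)
        print(f"inventory error (95% approx.): {2 * residual_std:.2f} g")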

  17. Effectiveness of Global Features for Automatic Medical Image Classification and Retrieval – the experiences of OHSU at ImageCLEFmed

    PubMed Central

    Kalpathy-Cramer, Jayashree; Hersh, William

    2008-01-01

    In 2006 and 2007, Oregon Health & Science University (OHSU) participated in the automatic image annotation task for medical images at ImageCLEF, an annual international benchmarking event that is part of the Cross Language Evaluation Forum (CLEF). The goal of the automatic annotation task was to classify 1000 test images based on the Image Retrieval in Medical Applications (IRMA) code, given a set of 10,000 training images. There were 116 distinct classes in 2006 and 2007. We evaluated the efficacy of a variety of primarily global features for this classification task. These included features based on histograms, gray-level correlation matrices and the gist technique. A multitude of classifiers including k-nearest neighbors, two-level neural networks, support vector machines, and maximum likelihood classifiers were evaluated. Our official error rate for the 1000 test images was 26% in 2006 using the flat classification structure. The error count in 2007 was 67.8 using the hierarchical classification error computation based on the IRMA code. Confusion matrices as well as clustering experiments were used to identify visually similar classes. The use of the IRMA code did not help us in the classification task, as the semantic hierarchy of the IRMA classes did not correspond well with the hierarchy based on clustering of image features that we used. Our most frequent misclassification errors were along the view axis. Subsequent experiments based on a two-stage classification system decreased our error rate to 19.8% for the 2006 dataset and our error count to 55.4 for the 2007 data. PMID:19884953
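
    A minimal sketch of the kind of global-feature pipeline described above: a normalized gray-level histogram as the global feature, fed to a k-nearest-neighbors classifier. scikit-learn is assumed, and the images and class labels below are synthetic placeholders, not the IRMA data or the exact features used at ImageCLEF.

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import cross_val_score

        def histogram_features(image, bins=32):
            """Global gray-level histogram, normalized to sum to 1."""
            hist, _ = np.histogram(image, bins=bins, range=(0, 255))
            return hist / hist.sum()

        # Synthetic stand-in for a set of training images with class labels.
        rng = np.random.default_rng(0)
        images = rng.integers(0, 256, size=(200, 64, 64))
        labels = rng.integers(0, 5, size=200)          # 5 placeholder classes

        X = np.array([histogram_features(img) for img in images])
        clf = KNeighborsClassifier(n_neighbors=5)
        print("CV error rate:", 1 - cross_val_score(clf, X, labels, cv=4).mean())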

  18. Error-related brain activity and error awareness in an error classification paradigm.

    PubMed

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

    Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear what role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing between partially aware errors (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model, which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing - a prerequisite of error classification in our paradigm - leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN but not the degree of error awareness determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Toward attenuating the impact of arm positions on electromyography pattern-recognition based motion classification in transradial amputees

    PubMed Central

    2012-01-01

    Background Electromyography (EMG) pattern-recognition based control strategies for multifunctional myoelectric prosthesis systems have commonly been studied in a controlled laboratory setting. Before these myoelectric prosthesis systems are clinically viable, it will be necessary to assess the effect of some disparities between the ideal laboratory setting and practical use on the control performance. One important obstacle is the impact of arm position variation, which changes the EMG patterns when identical motions are performed in different arm positions. This study aimed to investigate the impacts of arm position variation on EMG pattern-recognition based motion classification in upper-limb amputees and the solutions for reducing these impacts. Methods With five unilateral transradial (TR) amputees, the EMG signals and tri-axial accelerometer mechanomyography (ACC-MMG) signals were simultaneously collected from both amputated and intact arms when performing six classes of arm and hand movements in each of the five arm positions considered in the study. The effect of the arm position changes was estimated in terms of motion classification error and compared between amputated and intact arms. Then the performance of three proposed methods in attenuating the impact of arm positions was evaluated. Results With EMG signals, the average intra-position and inter-position classification errors across all five arm positions and five subjects were around 7.3% and 29.9% from amputated arms, respectively, about 1.0% and 10% lower than those from intact arms. While ACC-MMG signals could yield an intra-position classification error (9.9%) similar to EMG, they had a much higher inter-position classification error, with an average value of 81.1% over the arm positions and the subjects. When the EMG data from all five arm positions were involved in the training set, the average classification error reached a value of around 10.8% for amputated arms. Using a two-stage cascade classifier, the average classification error was around 9.0% over all five arm positions. Reducing ACC-MMG channels from 8 to 2 only increased the average position classification error across all five arm positions from 0.7% to 1.0% in amputated arms. Conclusions The performance of the EMG pattern-recognition based method in classifying movements strongly depends on arm position. This dependency is a little stronger in the intact arm than in the amputated arm, which suggests that investigations associated with practical use of a myoelectric prosthesis should use limb amputees as subjects instead of able-bodied subjects. The two-stage cascade classifier, with ACC-MMG for limb position identification and EMG for limb motion classification, may be a promising way to reduce the effect of limb position variation on classification performance. PMID:23036049
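
    The two-stage cascade classifier mentioned in the conclusions can be sketched as follows: a first classifier identifies the arm position from the ACC-MMG channels, and a second, position-specific classifier decodes the motion from the EMG features. The sketch below uses LDA classifiers from scikit-learn on synthetic feature arrays; all names, feature dimensions, and class counts are illustrative assumptions, not the study's actual configuration.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        class CascadeClassifier:
            """Stage 1: arm position from ACC-MMG; stage 2: motion from EMG,
            using one motion classifier trained per arm position."""

            def fit(self, acc, emg, positions, motions):
                self.pos_clf = LinearDiscriminantAnalysis().fit(acc, positions)
                self.motion_clfs = {
                    p: LinearDiscriminantAnalysis().fit(emg[positions == p],
                                                        motions[positions == p])
                    for p in np.unique(positions)
                }
                return self

            def predict(self, acc, emg):
                pos = self.pos_clf.predict(acc)
                return np.array([self.motion_clfs[p].predict(e[None, :])[0]
                                 for p, e in zip(pos, emg)])

        # Synthetic data: 500 trials, 2 ACC-MMG features, 8 EMG features,
        # 5 arm positions, 6 motion classes.
        rng = np.random.default_rng(1)
        acc = rng.normal(size=(500, 2))
        emg = rng.normal(size=(500, 8))
        positions = rng.integers(0, 5, 500)
        motions = rng.integers(0, 6, 500)

        model = CascadeClassifier().fit(acc, emg, positions, motions)
        print(model.predict(acc[:5], emg[:5]))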

  20. Longitudinal assessment of reflexive and volitional saccades in Niemann-Pick Type C disease during treatment with miglustat.

    PubMed

    Abel, Larry A; Walterfang, Mark; Stainer, Matthew J; Bowman, Elizabeth A; Velakoulis, Dennis

    2015-12-21

    Niemann-Pick Type C disease (NPC) is an autosomal recessive neurovisceral disorder of lipid metabolism. One characteristic feature of NPC is a vertical supranuclear gaze palsy particularly affecting saccades. However, horizontal saccades are also impaired, and as a consequence a parameter related to horizontal peak saccadic velocity was used as an outcome measure in the clinical trial of miglustat, the first drug approved in several jurisdictions for the treatment of NPC. As NPC-related neuropathology is widespread in the brain, we examined a wider range of horizontal saccade parameters to determine whether these showed treatment-related improvement and, if so, whether this was maintained over time. Nine adult NPC patients participated in the study; 8 were treated with miglustat for periods between 33 and 61 months. Data were available for 2 patients before their treatment commenced and 1 patient was untreated. Tasks included reflexive saccades, antisaccades and self-paced saccades, with eye movements recorded by an infrared reflectance eye tracker. Parameters analysed were reflexive saccade gain and latency, asymptotic peak saccadic velocity, HSEM-α (the slope of the peak duration-amplitude regression line), antisaccade error percentage, self-paced saccade count and time between refixations on the self-paced task. Data were analysed by plotting the change from baseline as a proportion of the baseline value at each test time and, where multiple data values were available at each session, by linear mixed effects (LME) analysis. Examination of change plots suggested some modest sustained improvement in gain, no consistent changes in asymptotic peak velocity or HSEM-α, deterioration in the already poor antisaccade error rate and sustained improvement in self-paced saccade rate. LME analysis showed statistically significant improvement in gain and the interval between self-paced saccades, with differences over time between treated and untreated patients. Both qualitative examination of change scores and statistical evaluation with LME analysis support the idea that some saccadic parameters are robust indicators of efficacy, and that the variability observed across measures may indicate locally different effects of neurodegeneration and of drug actions.

  1. Federal Government Information Systems Security Management and Governance Are Pacing Factors for Innovation

    ERIC Educational Resources Information Center

    Edwards, Gregory

    2011-01-01

    Security incidents resulting from human error or subversive actions have caused major financial losses, reduced business productivity or efficiency, and threatened national security. Some research suggests that information system security frameworks lack emphasis on human involvement as a significant cause for security problems in a rapidly…

  2. Algorithmic Classification of Five Characteristic Types of Paraphasias.

    PubMed

    Fergadiotis, Gerasimos; Gorman, Kyle; Bedrick, Steven

    2016-12-01

    This study was intended to evaluate a series of algorithms developed to perform automatic classification of paraphasic errors (formal, semantic, mixed, neologistic, and unrelated errors). We analyzed 7,111 paraphasias from the Moss Aphasia Psycholinguistics Project Database (Mirman et al., 2010) and evaluated the classification accuracy of 3 automated tools. First, we used frequency norms from the SUBTLEXus database (Brysbaert & New, 2009) to differentiate nonword errors and real-word productions. Then we implemented a phonological-similarity algorithm to identify phonologically related real-word errors. Last, we assessed the performance of a semantic-similarity criterion that was based on word2vec (Mikolov, Yih, & Zweig, 2013). Overall, the algorithmic classification replicated human scoring for the major categories of paraphasias studied with high accuracy. The tool that was based on the SUBTLEXus frequency norms was more than 97% accurate in making lexicality judgments. The phonological-similarity criterion was approximately 91% accurate, and the overall classification accuracy of the semantic classifier ranged from 86% to 90%. Overall, the results highlight the potential of tools from the field of natural language processing for the development of highly reliable, cost-effective diagnostic tools suitable for collecting high-quality measurement data for research and clinical purposes.
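
    A minimal sketch of the three-step decision logic described above: a frequency-list lexicality check, then a phonological-similarity test, then a semantic-similarity test on word vectors. The word list, the similarity thresholds, the use of orthographic rather than phonemic strings, and the toy vectors are illustrative assumptions, not the SUBTLEXus or word2vec resources used in the study; mixed errors are omitted for brevity.

        import difflib
        import numpy as np

        KNOWN_WORDS = {"cat", "dog", "hat", "table", "chair"}   # stand-in for a frequency lexicon
        VECTORS = {w: np.random.default_rng(abs(hash(w)) % 2**32).normal(size=50)
                   for w in KNOWN_WORDS}                        # stand-in for word2vec vectors

        def cosine(u, v):
            return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

        def classify_paraphasia(target, response,
                                phon_threshold=0.5, sem_threshold=0.4):
            """Rough classification of a single error: neologistic, formal,
            semantic, or unrelated."""
            if response not in KNOWN_WORDS:                     # lexicality check
                return "neologistic"
            phon_sim = difflib.SequenceMatcher(None, target, response).ratio()
            if phon_sim >= phon_threshold:                      # phonological similarity
                return "formal"
            if cosine(VECTORS[target], VECTORS[response]) >= sem_threshold:
                return "semantic"                               # semantic similarity
            return "unrelated"

        print(classify_paraphasia("cat", "hat"))     # formal (phonologically related)
        print(classify_paraphasia("cat", "blick"))   # neologistic (nonword)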

  3. Human error analysis of commercial aviation accidents using the human factors analysis and classification system (HFACS)

    DOT National Transportation Integrated Search

    2001-02-01

    The Human Factors Analysis and Classification System (HFACS) is a general human error framework originally developed and tested within the U.S. military as a tool for investigating and analyzing the human causes of aviation accidents. Based upon ...

  4. Current Assessment and Classification of Suicidal Phenomena using the FDA 2012 Draft Guidance Document on Suicide Assessment: A Critical Review.

    PubMed

    Sheehan, David V; Giddens, Jennifer M; Sheehan, Kathy Harnett

    2014-09-01

    Standard international classification criteria require that classification categories be comprehensive to avoid type II error. Categories should be mutually exclusive and definitions should be clear and unambiguous (to avoid type I and type II errors). In addition, the classification system should be robust enough to last over time and provide comparability between data collections. This article was designed to evaluate the extent to which the classification system contained in the United States Food and Drug Administration 2012 Draft Guidance for the prospective assessment and classification of suicidal ideation and behavior in clinical trials meets these criteria. A critical review is used to assess the extent to which the proposed categories contained in the Food and Drug Administration 2012 Draft Guidance are comprehensive, unambiguous, and robust. Assumptions that underlie the classification system are also explored. The Food and Drug Administration classification system contained in the 2012 Draft Guidance does not capture the full range of suicidal ideation and behavior (type II error). Definitions, moreover, are frequently ambiguous (susceptible to multiple interpretations), and the potential for misclassification (type I and type II errors) is compounded by frequent mismatches in category titles and definitions. These issues have the potential to compromise data comparability within clinical trial sites, across sites, and over time. These problems need to be remedied because of the potential for flawed data output and consequent threats to public health, to research on the safety of medications, and to the search for effective medication treatments for suicidality.

  5. Estimating the Illuminant Color from the Shading of a Smooth Surface

    DTIC Science & Technology

    1988-08-01

    Block 20 continued: Light reflection from a surface is... perceive qualitatively the scene illuminant quite well. Even when we have difficulty judging the "true" color of a piece of fabric under certain indoor

  6. An experimental study of interstitial lung tissue classification in HRCT images using ANN and role of cost functions

    NASA Astrophysics Data System (ADS)

    Dash, Jatindra K.; Kale, Mandar; Mukhopadhyay, Sudipta; Khandelwal, Niranjan; Prabhakar, Nidhi; Garg, Mandeep; Kalra, Naveen

    2017-03-01

    In this paper, we investigate the effect of the error criterion used during the training phase of an artificial neural network (ANN) on the accuracy of the classifier for classification of lung tissues affected by Interstitial Lung Diseases (ILD). The mean square error (MSE) and cross-entropy (CE) criteria are chosen as the most popular choices in state-of-the-art implementations. The classification experiment was performed on six interstitial lung disease (ILD) patterns, viz. consolidation, emphysema, ground glass opacity, micronodules, fibrosis and healthy tissue, from the MedGIFT database. Texture features from an arbitrary region of interest (AROI) are extracted using Gabor filters. Two different neural networks are trained with the scaled conjugate gradient backpropagation algorithm, using the MSE and CE error criteria respectively for weight updates. Performance is evaluated in terms of the average accuracy of these classifiers using 4-fold cross-validation. Each network is trained five times for each fold with randomly initialized weight vectors, and accuracies are computed. A significant improvement in classification accuracy is observed when the ANN is trained using CE (67.27%) as the error function compared to MSE (63.60%). Moreover, the standard deviation of the classification accuracy for the network trained with the CE criterion (6.69) is lower than that of the network trained with the MSE criterion (10.32).
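
    The practical difference between the two training criteria can be seen directly in the gradients they send back through a softmax output layer. The sketch below (plain NumPy, toy values) computes both losses and their gradients with respect to the logits: the cross-entropy gradient is simply p - y, whereas the mean-square-error gradient is damped by the softmax Jacobian, which is one common explanation for the slower, less accurate training often observed with MSE. It is an illustration of the criteria themselves, not the networks trained in the paper.

        import numpy as np

        def softmax(z):
            e = np.exp(z - z.max())
            return e / e.sum()

        z = np.array([2.0, -1.0, 0.5])          # logits for one sample
        y = np.array([0.0, 1.0, 0.0])           # one-hot target (class 1)
        p = softmax(z)

        # Cross-entropy loss and its gradient w.r.t. the logits.
        ce_loss = -np.sum(y * np.log(p))
        ce_grad = p - y

        # Mean-square-error loss and its gradient w.r.t. the logits
        # (chain rule through the softmax Jacobian J = diag(p) - p p^T).
        mse_loss = np.mean((p - y) ** 2)
        J = np.diag(p) - np.outer(p, p)
        mse_grad = J @ (2.0 * (p - y) / p.size)

        print("CE  loss %.3f grad %s" % (ce_loss, np.round(ce_grad, 3)))
        print("MSE loss %.3f grad %s" % (mse_loss, np.round(mse_grad, 3)))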

  7. Dynamic aspects of prescription drug use in an elderly population.

    PubMed Central

    Stuart, B; Coulson, N E

    1993-01-01

    OBJECTIVE. This study explores longitudinal patterns in outpatient prescription drug use in an elderly population. DATA SOURCES/STUDY SETTING. Enrollment records and prescription drug claims were obtained for a sample of elderly Pennsylvanians (N = 27,301) who had enrolled in the Pharmaceutical Assistance Contract for the Elderly (PACE) program at any time between July 1984 and June 1987. STUDY DESIGN. The study tracks monthly prescription fill rates for sampled PACE beneficiaries from their initial enrollment month through disenrollment, death, or the end of the study (whichever occurred first). We specify two-part multivariate models to assess the effect of calendar time, length of time in the PACE program, and progression to disenrollment or death both on the probability of any prescription use and on the level of use among those who filled at least one prescription claim per month. Control variables include age, gender, race, income, residence, and marital status. DATA COLLECTION/EXTRACTION METHODS. Data were extracted from administrative files maintained by the PACE program, checked for errors, and then formatted as person-month records. PRINCIPAL FINDINGS/CONCLUSIONS. We find a strong positive relationship between drug use and the length of time persons are PACE-enrolled. Persons whose death occurs within a year have much higher prescription utilization rates than do persons whose death is at least a year away, and the differential increases as death nears. Persons who fail to renew PACE coverage use significantly fewer prescription drugs in the year prior to disenrollment. Holding age and other factors constant, we find that average levels of prescription use actually declined over the study period. PMID:8514502
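
    A compact sketch of the two-part modeling strategy described above: a logistic model for the probability of any prescription use in a month, and a separate linear model for the level of use among users. scikit-learn is assumed, and the person-month covariates and outcomes below are synthetic placeholders, not the PACE claims data or the exact specification of the paper.

        import numpy as np
        from sklearn.linear_model import LogisticRegression, LinearRegression

        rng = np.random.default_rng(2)
        n = 5000
        # Hypothetical covariates: months enrolled, age, indicator of death within 12 months.
        X = np.column_stack([rng.integers(1, 36, n), rng.integers(65, 95, n), rng.integers(0, 2, n)])
        any_use = rng.integers(0, 2, n)                           # part 1 outcome: any prescription filled
        n_scripts = np.where(any_use, rng.poisson(3, n) + 1, 0)   # part 2 outcome: number filled, if any

        part1 = LogisticRegression(max_iter=1000).fit(X, any_use)
        users = any_use == 1
        part2 = LinearRegression().fit(X[users], n_scripts[users])

        # Expected monthly use = P(any use) * E[level | use].
        expected = part1.predict_proba(X)[:, 1] * part2.predict(X)
        print(expected[:5].round(2))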

  8. Particle Swarm Optimization approach to defect detection in armour ceramics.

    PubMed

    Kesharaju, Manasa; Nagarajah, Romesh

    2017-03-01

    In this research, various extracted features were used in the development of an automated ultrasonic-sensor-based inspection system that enables defect classification in each ceramic component prior to despatch to the field. Classification is an important task, and the large number of irrelevant, redundant features commonly introduced into a dataset reduces classifier performance. Feature selection aims to reduce the dimensionality of the dataset while improving the performance of a classification system. In the context of a multi-criteria optimization problem (i.e. to minimize the classification error rate and reduce the number of features) such as the one discussed in this research, the literature suggests that evolutionary algorithms offer good results. Moreover, Particle Swarm Optimization (PSO) has not been explored in the field of classification of high-frequency ultrasonic signals. Hence, a binary-coded Particle Swarm Optimization (BPSO) technique is investigated for feature subset selection and to optimize the classification error rate. In the proposed method, the population data are used as input to an Artificial Neural Network (ANN) based classification system to obtain the error rate, with the ANN serving as the evaluator of the PSO fitness function. Copyright © 2016. Published by Elsevier B.V.
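
    A compact sketch of binary PSO for feature subset selection, with cross-validated classification error as the fitness function. A k-nearest-neighbors classifier from scikit-learn stands in for the ANN evaluator used in the paper, the swarm parameters are illustrative, and the feature matrix is synthetic.

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import cross_val_score

        def fitness(mask, X, y):
            """Cross-validated error rate of the classifier on the selected features."""
            if mask.sum() == 0:
                return 1.0                                   # penalize empty subsets
            clf = KNeighborsClassifier(n_neighbors=3)
            acc = cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()
            return 1.0 - acc

        def bpso_select(X, y, n_particles=10, n_iter=20, w=0.7, c1=1.5, c2=1.5, seed=0):
            rng = np.random.default_rng(seed)
            d = X.shape[1]
            pos = rng.integers(0, 2, (n_particles, d))       # binary feature masks
            vel = rng.normal(0, 1, (n_particles, d))
            pbest, pbest_fit = pos.copy(), np.array([fitness(p, X, y) for p in pos])
            g = pbest_fit.argmin()
            gbest, gbest_fit = pbest[g].copy(), pbest_fit[g]
            for _ in range(n_iter):
                r1, r2 = rng.random((2, n_particles, d))
                vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
                # Sigmoid transfer function turns velocities into bit probabilities.
                pos = (rng.random((n_particles, d)) < 1 / (1 + np.exp(-vel))).astype(int)
                fit = np.array([fitness(p, X, y) for p in pos])
                improved = fit < pbest_fit
                pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
                if pbest_fit.min() < gbest_fit:
                    g = pbest_fit.argmin()
                    gbest, gbest_fit = pbest[g].copy(), pbest_fit[g]
            return gbest, gbest_fit

        # Synthetic stand-in for extracted ultrasonic signal features.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(120, 15))
        y = (X[:, 0] + X[:, 3] > 0).astype(int)              # only 2 features are informative
        mask, err = bpso_select(X, y)
        print("selected features:", np.flatnonzero(mask), "CV error:", round(err, 3))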

  9. The impact of OCR accuracy on automated cancer classification of pathology reports.

    PubMed

    Zuccon, Guido; Nguyen, Anthony N; Bergheim, Anton; Wickman, Sandra; Grayson, Narelle

    2012-01-01

    To evaluate the effects of Optical Character Recognition (OCR) on the automatic cancer classification of pathology reports. Scanned images of pathology reports were converted to electronic free-text using a commercial OCR system. A state-of-the-art cancer classification system, the Medical Text Extraction (MEDTEX) system, was used to automatically classify the OCR reports. Classifications produced by MEDTEX on the OCR versions of the reports were compared with the classifications from a human-amended version of the OCR reports. The employed OCR system was found to recognise scanned pathology reports with up to 99.12% character accuracy and up to 98.95% word accuracy. Errors in the OCR processing were found to have minimal impact on the automatic classification of scanned pathology reports into notifiable groups. However, the impact of OCR errors is not negligible when considering the extraction of cancer notification items, such as primary site, histological type, etc. The automatic cancer classification system used in this work, MEDTEX, has proven to be robust to errors produced by the acquisition of free-text pathology reports from scanned images through OCR software. However, issues emerge when considering the extraction of cancer notification items.

  10. Error Detection in Mechanized Classification Systems

    ERIC Educational Resources Information Center

    Hoyle, W. G.

    1976-01-01

    When documentary material is indexed by a mechanized classification system, and the results judged by trained professionals, the number of documents in disagreement, after suitable adjustment, defines the error rate of the system. In a test case disagreement was 22 percent and, of this 22 percent, the computer correctly identified two-thirds of…

  11. Privacy-Preserving Evaluation of Generalization Error and Its Application to Model and Attribute Selection

    NASA Astrophysics Data System (ADS)

    Sakuma, Jun; Wright, Rebecca N.

    Privacy-preserving classification is the task of learning or training a classifier on the union of privately distributed datasets without sharing the datasets. The emphasis of existing studies in privacy-preserving classification has primarily been put on the design of privacy-preserving versions of particular data mining algorithms. However, in classification problems, preprocessing and postprocessing, such as model selection or attribute selection, play a prominent role in achieving higher classification accuracy. In this paper, we show that the generalization error of classifiers in privacy-preserving classification can be securely evaluated without sharing prediction results. Our main technical contribution is a new generalized Hamming distance protocol that is universally applicable to the preprocessing and postprocessing of various privacy-preserving classification problems, such as model selection in support vector machines and attribute selection in naive Bayes classification.

  12. Information analysis of a spatial database for ecological land classification

    NASA Technical Reports Server (NTRS)

    Davis, Frank W.; Dozier, Jeff

    1990-01-01

    An ecological land classification was developed for a complex region in southern California using geographic information system techniques of map overlay and contingency table analysis. Land classes were identified by mutual information analysis of vegetation pattern in relation to other mapped environmental variables. The analysis was weakened by map errors, especially errors in the digital elevation data. Nevertheless, the resulting land classification was ecologically reasonable and performed well when tested with higher quality data from the region.
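
    The mutual-information analysis underlying the land classification can be illustrated in a few lines: cross-tabulate the vegetation classes against a mapped environmental variable and compute the mutual information of the resulting contingency table. The sketch below is a generic illustration of that idea; the class arrays are synthetic placeholders, not the southern California GIS layers used in the study.

        import numpy as np

        def mutual_information(a, b):
            """Mutual information (in bits) between two categorical maps."""
            classes_a, ai = np.unique(a, return_inverse=True)
            classes_b, bi = np.unique(b, return_inverse=True)
            joint = np.zeros((classes_a.size, classes_b.size))
            np.add.at(joint, (ai, bi), 1)                 # contingency table counts
            pxy = joint / joint.sum()
            px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
            nz = pxy > 0
            return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

        rng = np.random.default_rng(0)
        elevation_zone = rng.integers(0, 4, 10000)                      # mapped environmental variable
        vegetation = (elevation_zone + rng.integers(0, 2, 10000)) % 4   # partially dependent classes
        print("MI (bits):", round(mutual_information(vegetation, elevation_zone), 3))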

  13. Efficiency of timing delays and electrode positions in optimization of biventricular pacing: a simulation study.

    PubMed

    Miri, Raz; Graf, Iulia M; Dössel, Olaf

    2009-11-01

    Electrode positions and timing delays influence the efficacy of biventricular pacing (BVP). Accordingly, this study focuses on BVP optimization, using a detailed 3-D electrophysiological model of the human heart, which is adapted to patient-specific anatomy and pathophysiology. The research is carried out on ten heart models with left bundle branch block and myocardial infarction derived from magnetic resonance and computed tomography data. Cardiac electrical activity is simulated with the ten Tusscher cell model and an adaptive cellular automaton at physiological and pathological conduction levels. The optimization methods are based on a comparison between the electrical response of the healthy and diseased heart models, measured in terms of the root mean square error (E(RMS)) of the excitation front and the QRS duration error (E(QRS)). Intra- and intermethod associations of the pacing electrode and timing delay variables were analyzed with statistical methods, i.e., the t-test for dependent data, one-way analysis of variance for electrode pairs, and the Pearson model for equivalent parameters from the two optimization methods. The results indicate that the lateral left ventricle and the upper or middle septal area are frequently (60% of cases) the optimal positions of the left and right electrodes, respectively. Statistical analysis shows that the two optimization methods are in good agreement. In conclusion, a noninvasive preoperative BVP optimization strategy based on computer simulations can be used to identify the most beneficial patient-specific electrode configuration and timing delays.

  14. Structural MRI-based detection of Alzheimer's disease using feature ranking and classification error.

    PubMed

    Beheshti, Iman; Demirel, Hasan; Farokhian, Farnaz; Yang, Chunlan; Matsuda, Hiroshi

    2016-12-01

    This paper presents an automatic computer-aided diagnosis (CAD) system based on feature ranking for detection of Alzheimer's disease (AD) using structural magnetic resonance imaging (sMRI) data. The proposed CAD system is composed of four systematic stages. First, global and local differences in the gray matter (GM) of AD patients compared to the GM of healthy controls (HCs) are analyzed using a voxel-based morphometry technique. The aim is to identify significant local differences in the volume of GM as volumes of interest (VOIs). Second, the voxel intensity values of the VOIs are extracted as raw features. Third, the raw features are ranked using seven feature-ranking methods, namely, statistical dependency (SD), mutual information (MI), information gain (IG), Pearson's correlation coefficient (PCC), t-test score (TS), Fisher's criterion (FC), and the Gini index (GI). The features with higher scores are more discriminative. To determine the number of top features, the estimated classification error based on a training set made up of the AD and HC groups is calculated, with the vector size that minimized this error selected as the number of top discriminative features. Fourth, the classification is performed using a support vector machine (SVM). In addition, a data fusion approach among feature ranking methods is introduced to improve the classification performance. The proposed method is evaluated using a dataset from ADNI (130 AD and 130 HC) with 10-fold cross-validation. The classification accuracy of the proposed automatic system for the diagnosis of AD is up to 92.48% using the sMRI data. An automatic CAD system for the classification of AD based on feature-ranking methods and classification error is proposed. In this regard, seven feature-ranking methods (i.e., SD, MI, IG, PCC, TS, FC, and GI) are evaluated. The optimal size of top discriminative features is determined by the classification error estimation in the training phase. The experimental results indicate that the performance of the proposed system is comparable to that of state-of-the-art classification models. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
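
    A minimal sketch of one of the ranking criteria listed above (the t-test score), combined with selecting the number of top-ranked features by estimated cross-validated classification error with an SVM. scikit-learn and SciPy are assumed, and the feature matrix below is a synthetic placeholder rather than ADNI voxel intensities.

        import numpy as np
        from scipy.stats import ttest_ind
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(260, 100))                # stand-in for voxel intensities in the VOIs
        y = np.r_[np.zeros(130), np.ones(130)].astype(int)
        X[y == 1, :10] += 0.8                          # make the first 10 features discriminative

        # Rank features by absolute t-test score between the two groups.
        t, _ = ttest_ind(X[y == 0], X[y == 1], axis=0)
        order = np.argsort(-np.abs(t))

        # Choose the number of top-ranked features that minimizes the estimated CV error.
        errors = {k: 1 - cross_val_score(SVC(kernel="linear"),
                                         X[:, order[:k]], y, cv=10).mean()
                  for k in (5, 10, 20, 50, 100)}
        best_k = min(errors, key=errors.get)
        print("best number of features:", best_k, "error:", round(errors[best_k], 3))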

  15. Influence of Pedometer Position on Pedometer Accuracy at Various Walking Speeds: A Comparative Study

    PubMed Central

    Lovis, Christian

    2016-01-01

    Background Demographic growth in conjunction with the rise of chronic diseases is increasing the pressure on health care systems in most OECD countries. Physical activity is known to be an essential factor in improving or maintaining good health. Walking is especially recommended, as it is an activity that can easily be performed by most people without constraints. Pedometers have been extensively used as an incentive to motivate people to become more active. However, a recognized problem with these devices is their diminishing accuracy associated with decreased walking speed. The arrival on the consumer market of new devices, worn indifferently either at the waist, wrist, or as a necklace, gives rise to new questions regarding their accuracy at these different positions. Objective Our objective was to assess the performance of 4 pedometers (iHealth activity monitor, Withings Pulse O2, Misfit Shine, and Garmin vívofit) and compare their accuracy according to the position worn and at various walking speeds. Methods We conducted this study in a controlled environment with 21 healthy adults required to walk 100 m at 3 different paces (0.4 m/s, 0.6 m/s, and 0.8 m/s) regulated by means of a string attached between their legs at the level of their ankles and a metronome ticking the cadence. To obtain baseline values, we asked the participants to walk 200 m at their own pace. Results A decrease in accuracy was positively correlated with reduced speed for all pedometers (12% mean error at self-selected pace, 27% mean error at 0.8 m/s, 52% mean error at 0.6 m/s, and 76% mean error at 0.4 m/s). Although the position of the pedometer on the person did not significantly influence its accuracy, some interesting tendencies can be highlighted in 2 settings: (1) positioning the pedometer at the waist at a speed greater than 0.8 m/s or as a necklace at preferred speed tended to produce lower mean errors than at the wrist position; and (2) at a slow speed (0.4 m/s), pedometers worn at the wrist tended to produce a lower mean error than in the other positions. Conclusions At all positions, all tested pedometers generated significant errors at slow speeds and therefore cannot be used reliably to evaluate the amount of physical activity for people walking slower than 0.6 m/s (2.16 km/h, or 1.34 mph). At slow speeds, the better accuracy observed with pedometers worn at the wrist could constitute a valuable line of inquiry for the future development of devices adapted to elderly people. PMID:27713114

  16. Research into Practice: What Do You Really Know about Learning and Development?

    ERIC Educational Resources Information Center

    Harlin, Rebecca P.

    2008-01-01

    The assumptions about children's development are challenged by recent research findings that show learning begins at an earlier age and proceeds at a different pace than expected. Sometimes researchers find that they have misunderstood children's cognitive, social, and physical development due to errors in measurement (faulty tests or tools),…

  17. Agreement Attraction in Comprehension: Representations and Processes

    ERIC Educational Resources Information Center

    Wagers, Matthew W.; Lau, Ellen F.; Phillips, Colin

    2009-01-01

    Much work has demonstrated so-called attraction errors in the production of subject-verb agreement (e.g., "The key to the cabinets are on the table", [Bock, J. K., & Miller, C. A. (1991). "Broken agreement." "Cognitive Psychology, 23", 45-93]), in which a verb erroneously agrees with an intervening noun. Six self-paced reading experiments examined…

  18. Documentation of procedures for textural/spatial pattern recognition techniques

    NASA Technical Reports Server (NTRS)

    Haralick, R. M.; Bryant, W. F.

    1976-01-01

    A C-130 aircraft was flown over the Sam Houston National Forest on March 21, 1973 at 10,000 feet altitude to collect multispectral scanner (MSS) data. Existing textural and spatial automatic processing techniques were used to classify the MSS imagery into specified timber categories. Several classification experiments were performed on these data using features selected from the spectral bands and a textural transform band. The results indicate that (1) spatial post-processing a classified image can cut the classification error to 1/2 or 1/3 of its initial value, (2) spatial post-processing the classified image using combined spectral and textural features produces a resulting image with less error than post-processing a classified image using only spectral features, and (3) classification without spatial post-processing using the combined spectral and textural features tends to produce about the same error rate as classification without spatial post-processing using only spectral features.
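
    One common form of spatial post-processing of a classified image is a local majority (mode) filter, which replaces each pixel's class label by the most frequent label in its neighborhood. The sketch below illustrates that idea with SciPy; the window size, the toy label image, and the noise model are assumptions, and this is not necessarily the specific procedure used in the report.

        import numpy as np
        from scipy.ndimage import generic_filter

        def majority(window):
            """Most frequent class label in the neighborhood."""
            return np.bincount(window.astype(int)).argmax()

        rng = np.random.default_rng(0)
        classified = np.where(np.arange(100)[:, None] + np.arange(100) < 100, 0, 1)
        noisy = np.where(rng.random((100, 100)) < 0.1,
                         rng.integers(0, 2, (100, 100)), classified)   # salt-and-pepper label noise

        smoothed = generic_filter(noisy, majority, size=3)
        print("error before:", (noisy != classified).mean(),
              "after:", (smoothed != classified).mean())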

  19. Benefits of Permanent His Bundle Pacing Combined With Atrioventricular Node Ablation in Atrial Fibrillation Patients With Heart Failure With Both Preserved and Reduced Left Ventricular Ejection Fraction.

    PubMed

    Huang, Weijian; Su, Lan; Wu, Shengjie; Xu, Lei; Xiao, Fangyi; Zhou, Xiaohong; Ellenbogen, Kenneth A

    2017-04-01

    Clinical benefits from His bundle pacing (HBP) in heart failure patients with preserved and reduced left ventricular ejection fraction are still inconclusive. This study evaluated clinical outcomes of permanent HBP in atrial fibrillation patients with narrow QRS who underwent atrioventricular node ablation for heart failure symptoms despite rate control by medication. The study enrolled 52 consecutive heart failure patients who underwent attempted atrioventricular node ablation and HBP for symptomatic atrial fibrillation. Echocardiographic left ventricular ejection fraction and left ventricular end-diastolic dimension, New York Heart Association classification and use of diuretics for heart failure were assessed during follow-up visits after permanent HBP. Of 52 patients, 42 patients (80.8%) received permanent HBP and atrioventricular node ablation with a median 20-month follow-up. There was no significant change between native and paced QRS duration (107.1±25.8 versus 105.3±23.9 milliseconds, P=0.07). Left ventricular end-diastolic dimension decreased from the baseline (P<0.001), and left ventricular ejection fraction increased from baseline (P<0.001), with a greater improvement in heart failure with reduced ejection fraction patients (N=20) than in heart failure with preserved ejection fraction patients (N=22). New York Heart Association classification improved from a baseline 2.9±0.6 to 1.4±0.4 after HBP in heart failure with reduced ejection fraction patients and from a baseline 2.7±0.6 to 1.4±0.5 after HBP in heart failure with preserved ejection fraction patients. After 1 year of HBP, the number of patients who used diuretics for heart failure decreased significantly (P<0.001) compared with baseline diuretic use. Permanent HBP post-atrioventricular node ablation significantly improved echocardiographic measurements and New York Heart Association classification and reduced diuretic use for heart failure management in atrial fibrillation patients with narrow QRS who suffered from heart failure with preserved or reduced ejection fraction. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.

  20. Individuals underestimate moderate and vigorous intensity physical activity.

    PubMed

    Canning, Karissa L; Brown, Ruth E; Jamnik, Veronica K; Salmon, Art; Ardern, Chris I; Kuk, Jennifer L

    2014-01-01

    It is unclear whether the common physical activity (PA) intensity descriptors used in PA guidelines worldwide align with the associated percent heart rate maximum method used for prescribing relative PA intensities consistently between sexes, ethnicities, age categories and across body mass index (BMI) classifications. The objectives of this study were to determine whether individuals properly select light, moderate and vigorous intensity PA using the intensity descriptions in PA guidelines and determine if there are differences in estimation across sex, ethnicity, age and BMI classifications. 129 adults were instructed to walk/jog at a "light," "moderate" and "vigorous effort" in a randomized order. The PA intensities were categorized as being below, at or above the following %HRmax ranges of: 50-63% for light, 64-76% for moderate and 77-93% for vigorous effort. On average, people correctly estimated light effort as 51.5±8.3%HRmax but underestimated moderate effort as 58.7±10.7%HRmax and vigorous effort as 69.9±11.9%HRmax. Participants walked at a light intensity (57.4±10.5%HRmax) when asked to walk at a pace that provided health benefits, wherein 52% of participants walked at a light effort pace, 19% walked at a moderate effort and 5% walked at a vigorous effort pace. These results did not differ by sex, ethnicity or BMI class. However, younger adults underestimated moderate and vigorous intensity more so than middle-aged adults (P<0.05). When the common PA guideline descriptors were aligned with the associated %HRmax ranges, the majority of participants underestimated the intensity of PA that is needed to obtain health benefits. Thus, new subjective descriptions for moderate and vigorous intensity may be warranted to aid individuals in correctly interpreting PA intensities.
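
    The %HRmax band check used in the study reduces to a small function: compute the percentage of maximum heart rate and see whether it falls below, within, or above the target range for the prescribed intensity. The sketch below uses the ranges quoted above; the 220 - age estimate of maximum heart rate is a common assumption for illustration, not necessarily the method used in the study.

        RANGES = {"light": (50, 63), "moderate": (64, 76), "vigorous": (77, 93)}  # %HRmax

        def intensity_check(hr, age, prescribed):
            """Classify a measured heart rate as below/at/above the prescribed intensity."""
            pct_hrmax = 100.0 * hr / (220 - age)          # assumed HRmax estimate
            lo, hi = RANGES[prescribed]
            if pct_hrmax < lo:
                return "below", round(pct_hrmax, 1)
            if pct_hrmax > hi:
                return "above", round(pct_hrmax, 1)
            return "at", round(pct_hrmax, 1)

        # A 30-year-old asked to walk at "moderate effort" but reaching only 110 bpm.
        print(intensity_check(110, 30, "moderate"))       # ('below', 57.9)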

  1. Human error analysis of commercial aviation accidents: application of the Human Factors Analysis and Classification system (HFACS).

    PubMed

    Wiegmann, D A; Shappell, S A

    2001-11-01

    The Human Factors Analysis and Classification System (HFACS) is a general human error framework originally developed and tested within the U.S. military as a tool for investigating and analyzing the human causes of aviation accidents. Based on Reason's (1990) model of latent and active failures, HFACS addresses human error at all levels of the system, including the condition of aircrew and organizational factors. The purpose of the present study was to assess the utility of the HFACS framework as an error analysis and classification tool outside the military. The HFACS framework was used to analyze human error data associated with aircrew-related commercial aviation accidents that occurred between January 1990 and December 1996 using database records maintained by the NTSB and the FAA. Investigators were able to reliably accommodate all the human causal factors associated with the commercial aviation accidents examined in this study using the HFACS system. In addition, the classification of data using HFACS highlighted several critical safety issues in need of intervention research. These results demonstrate that the HFACS framework can be a viable tool for use within the civil aviation arena. However, additional research is needed to examine its applicability to areas outside the flight deck, such as aircraft maintenance and air traffic control domains.

  2. Clustering of reads with alignment-free measures and quality values.

    PubMed

    Comin, Matteo; Leoni, Andrea; Schimd, Michele

    2015-01-01

    The data volume generated by Next-Generation Sequencing (NGS) technologies is growing at a pace that is now challenging the storage and data processing capacities of modern computer systems. In this context an important aspect is the reduction of data complexity by collapsing redundant reads into a single cluster to improve the run time, memory requirements, and quality of post-processing steps like assembly and error correction. Several alignment-free measures, based on k-mer counts, have been used to cluster reads. Quality scores produced by NGS platforms are fundamental for various analyses of NGS data, such as read mapping and error detection. Moreover, future-generation sequencing platforms will produce long reads but with a large number of erroneous bases (up to 15%). In this scenario, it will be fundamental to exploit quality value information within the alignment-free framework. To the best of our knowledge this is the first study that incorporates quality value information and k-mer counts, in the context of alignment-free measures, for the comparison of read data. Based on these principles, in this paper we present a family of alignment-free measures called D^q-type. A set of experiments on simulated and real read data confirms that the new measures are superior to other classical alignment-free statistics, especially when erroneous reads are considered. Also, results on de novo assembly and metagenomic read classification show that the introduction of quality values improves over standard alignment-free measures. These statistics are implemented in a software package called QCluster (http://www.dei.unipd.it/~ciompin/main/qcluster.html).
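
    A minimal sketch of the core idea of incorporating quality values into alignment-free comparison: weight each k-mer occurrence by the probability that it was read correctly (derived from the Phred scores) and compare reads by the similarity of their weighted k-mer count vectors. This is a simplified illustration of the principle, not the exact D^q-type statistics defined in the paper; K=3, the reads, and the cosine comparison are arbitrary choices.

        import numpy as np
        from itertools import product

        K = 3
        KMERS = {"".join(p): i for i, p in enumerate(product("ACGT", repeat=K))}

        def weighted_kmer_vector(seq, quals):
            """k-mer counts weighted by the probability that the k-mer is error-free,
            using P(correct base) = 1 - 10**(-Q/10) from the Phred quality scores."""
            p_correct = 1.0 - 10.0 ** (-np.asarray(quals, float) / 10.0)
            v = np.zeros(len(KMERS))
            for i in range(len(seq) - K + 1):
                kmer = seq[i:i + K]
                if kmer in KMERS:                        # k-mers with N or other symbols are skipped
                    v[KMERS[kmer]] += np.prod(p_correct[i:i + K])
            return v

        def cosine_similarity(u, v):
            return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

        r1 = ("ACGTACGTAC", [30] * 10)                                  # high-quality read
        r2 = ("ACGTACGAAC", [30, 30, 30, 30, 30, 30, 30, 5, 30, 30])    # one low-quality base
        print(cosine_similarity(weighted_kmer_vector(*r1), weighted_kmer_vector(*r2)))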

  3. Practical Procedures for Constructing Mastery Tests to Minimize Errors of Classification and to Maximize or Optimize Decision Reliability.

    ERIC Educational Resources Information Center

    Byars, Alvin Gregg

    The objectives of this investigation are to develop, describe, assess, and demonstrate procedures for constructing mastery tests to minimize errors of classification and to maximize decision reliability. The guidelines are based on conditions where item exchangeability is a reasonable assumption and the test constructor can control the number of…

  4. Comparing K-mer based methods for improved classification of 16S sequences.

    PubMed

    Vinje, Hilde; Liland, Kristian Hovde; Almøy, Trygve; Snipen, Lars

    2015-07-01

    The need for precise and stable taxonomic classification is highly relevant in modern microbiology. Parallel to the explosion in the amount of sequence data accessible, there has also been a shift in focus for classification methods. Previously, alignment-based methods were the most applicable tools. Now, methods based on counting K-mers by sliding windows are the most interesting classification approach with respect to both speed and accuracy. Here, we present a systematic comparison of five different K-mer based classification methods for the 16S rRNA gene. The methods differ from each other both in data usage and modelling strategies. We have based our study on the commonly known and well-used naïve Bayes classifier from the RDP project, and four other methods were implemented and tested on two different data sets, on full-length sequences as well as fragments of typical read-length. The differences in classification error obtained by the methods seemed to be small, but they were stable across both data sets tested. The Preprocessed nearest-neighbour (PLSNN) method performed best for full-length 16S rRNA sequences, significantly better than the naïve Bayes RDP method. On fragmented sequences the naïve Bayes Multinomial method performed best, significantly better than all other methods. For both data sets explored, and on both full-length and fragmented sequences, all five methods reached an error plateau. We conclude that no K-mer based method is universally best for classifying both full-length sequences and fragments (reads). All methods approach an error plateau, indicating that improved training data are needed to improve classification from here. Classification errors occur most frequently for genera with few sequences present. For improving the taxonomy and testing new classification methods, the need for a better and more universal and robust training data set is crucial.
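
    A compact sketch of the approach that performed best on fragments, a multinomial naive Bayes classifier over k-mer counts, using scikit-learn's character n-gram vectorizer as a stand-in for a dedicated k-mer counter. The sequences and genus labels are synthetic placeholders, and K=8 is an arbitrary choice rather than the value used in the paper.

        import numpy as np
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        K = 8
        rng = np.random.default_rng(0)

        def random_sequence(n, bias):
            """Placeholder 16S fragment with a genus-specific base composition."""
            return "".join(rng.choice(list("ACGT"), size=n, p=bias))

        genus_bias = {"GenusA": [0.4, 0.2, 0.2, 0.2], "GenusB": [0.2, 0.4, 0.2, 0.2]}
        seqs = [random_sequence(250, genus_bias[g]) for g in ("GenusA", "GenusB") for _ in range(50)]
        labels = ["GenusA"] * 50 + ["GenusB"] * 50

        clf = make_pipeline(
            CountVectorizer(analyzer="char", ngram_range=(K, K), lowercase=False),  # k-mer counts
            MultinomialNB(),
        )
        clf.fit(seqs, labels)
        print("training error:", (clf.predict(seqs) != np.array(labels)).mean())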

  5. An experimental evaluation of the incidence of fitness-function/search-algorithm combinations on the classification performance of myoelectric control systems with iPCA tuning

    PubMed Central

    2013-01-01

    Background The information in electromyographic signals can be used by Myoelectric Control Systems (MCSs) to actuate prostheses. These devices allow the performance of movements that cannot otherwise be carried out by persons with amputated limbs. The state of the art in the development of MCSs is based on the use of individual principal component analysis (iPCA) as a pre-processing stage for the classifiers. The iPCA pre-processing implies an optimization stage which has not yet been deeply explored. Methods The present study considers two factors in the iPCA stage: namely A (the fitness function) and B (the search algorithm). The A factor comprises two levels, namely A1 (the classification error) and A2 (the correlation factor). In turn, the B factor has four levels, specifically B1 (the Sequential Forward Selection, SFS), B2 (the Sequential Floating Forward Selection, SFFS), B3 (Artificial Bee Colony, ABC), and B4 (Particle Swarm Optimization, PSO). This work evaluates the incidence of each of the eight possible combinations of the A and B factors on the classification error of the MCS. Results A two-factor ANOVA was performed on the computed classification errors and determined that: (1) the interactive effects on the classification error are not significant (F(0.01; 3, 72) = 4.0659 > f_AB = 0.09), (2) the levels of factor A have significant effects on the classification error (F(0.02; 1, 72) = 5.0162 < f_A = 6.56), and (3) the levels of factor B do not have significant effects on the classification error (F(0.01; 3, 72) = 4.0659 > f_B = 0.08). Conclusions Considering classification performance, we found that using factor A2 in combination with any of the levels of factor B was superior. With respect to time performance, the analysis suggests that the PSO algorithm is at least 14 percent better than its best competitor. The latter behavior has been observed for a particular configuration set of parameters in the search algorithms. Future work will investigate the effect of these parameters on classification performance, such as the length of the reduced-size vector, the number of particles and bees used during the optimal search, the cognitive parameters in the PSO algorithm, as well as the limit of cycles to improve a solution in the ABC algorithm. PMID:24369728

  6. Exploring human error in military aviation flight safety events using post-incident classification systems.

    PubMed

    Hooper, Brionny J; O'Hare, David P A

    2013-08-01

    Human error classification systems theoretically allow researchers to analyze postaccident data in an objective and consistent manner. The Human Factors Analysis and Classification System (HFACS) framework is one such practical analysis tool that has been widely used to classify human error in aviation. The Cognitive Error Taxonomy (CET) is another. It has been postulated that the focus on interrelationships within HFACS can facilitate the identification of the underlying causes of pilot error. The CET provides increased granularity at the level of unsafe acts. The aim was to analyze the influence of factors at higher organizational levels on the unsafe acts of front-line operators and to compare the errors of fixed-wing and rotary-wing operations. This study analyzed 288 aircraft incidents involving human error from an Australasian military organization occurring between 2001 and 2008. Action errors accounted for almost twice (44%) the proportion of rotary wing compared to fixed wing (23%) incidents. Both classificatory systems showed significant relationships between precursor factors such as the physical environment, mental and physiological states, crew resource management, training and personal readiness, and skill-based, but not decision-based, acts. The CET analysis showed different predisposing factors for different aspects of skill-based behaviors. Skill-based errors in military operations are more prevalent in rotary wing incidents and are related to higher level supervisory processes in the organization. The Cognitive Error Taxonomy provides increased granularity to HFACS analyses of unsafe acts.

  7. Representing number in the real-time processing of agreement: self-paced reading evidence from Arabic

    PubMed Central

    Tucker, Matthew A.; Idrissi, Ali; Almeida, Diogo

    2015-01-01

    In the processing of subject-verb agreement, non-subject plural nouns following a singular subject sometimes “attract” the agreement with the verb, despite not being grammatically licensed to do so. This phenomenon generates agreement errors in production and an increased tendency to fail to notice such errors in comprehension, thereby providing a window into the representation of grammatical number in working memory during sentence processing. Research in this topic, however, is primarily done in related languages with similar agreement systems. In order to increase the cross-linguistic coverage of the processing of agreement, we conducted a self-paced reading study in Modern Standard Arabic. We report robust agreement attraction errors in relative clauses, a configuration not particularly conducive to the generation of such errors for all possible lexicalizations. In particular, we examined the speed with which readers retrieve a subject controller for both grammatical and ungrammatical agreeing verbs in sentences where verbs are preceded by two NPs, one of which is a local non-subject NP that can act as a distractor for the successful resolution of subject-verb agreement. Our results suggest that the frequency of errors is modulated by the kind of plural formation strategy used on the attractor noun: nouns which form plurals by suffixation condition high rates of attraction, whereas nouns which form their plurals by internal vowel change (ablaut) generate lower rates of errors and reading-time attraction effects of smaller magnitudes. Furthermore, we show some evidence that these agreement attraction effects are mostly contained in the right tail of reaction time distributions. We also present modeling data in the ACT-R framework which supports a view of these ablauting patterns wherein they are differentially specified for number and evaluate the consequences of possible representations for theories of grammar and parsing. PMID:25914651

  8. How should children with speech sound disorders be classified? A review and critical evaluation of current classification systems.

    PubMed

    Waring, R; Knight, R

    2013-01-01

    Children with speech sound disorders (SSD) form a heterogeneous group who differ in terms of the severity of their condition, underlying cause, speech errors, involvement of other aspects of the linguistic system and treatment response. To date there is no universal and agreed-upon classification system. Instead, a number of theoretically differing classification systems have been proposed based on either an aetiological (medical) approach, a descriptive-linguistic approach or a processing approach. To describe and review the supporting evidence, and to provide a critical evaluation of the current childhood SSD classification systems. Descriptions of the major specific approaches to classification are reviewed and research papers supporting the reliability and validity of the systems are evaluated. Three specific paediatric SSD classification systems, the aetiologic-based Speech Disorders Classification System, the descriptive-linguistic Differential Diagnosis system, and the processing-based Psycholinguistic Framework, are identified as potentially useful in classifying children with SSD into homogeneous subgroups. The Differential Diagnosis system has a growing body of empirical support from clinical population studies, across language error pattern studies and treatment efficacy studies. The Speech Disorders Classification System is currently a research tool with eight proposed subgroups. The Psycholinguistic Framework is a potential bridge to linking cause and surface level speech errors. There is a need for a universally agreed-upon classification system that is useful to clinicians and researchers. The resulting classification system needs to be robust, reliable and valid. A universal classification system would allow for improved tailoring of treatments to subgroups of SSD which may, in turn, lead to improved treatment efficacy. © 2012 Royal College of Speech and Language Therapists.

  9. The application of Aronson's taxonomy to medication errors in nursing.

    PubMed

    Johnson, Maree; Young, Helen

    2011-01-01

    Medication administration is a frequent nursing activity that is prone to error. In this study of 318 self-reported medication incidents (including near misses), very few resulted in patient harm: 7% required intervention or prolonged hospitalization or caused temporary harm. Aronson's classification system provided an excellent framework for analysis of the incidents, with a close connection between the type of error and the change strategy to minimize medication incidents. Taking a behavioral approach to medication error classification has provided helpful strategies for nurses, such as nurse-call cards on patient lockers when patients are absent and checking of medication sign-off by outgoing and incoming staff at handover.

  10. Center for Seismic Studies Final Technical Report, October 1992 through October 1993

    DTIC Science & Technology

    1994-02-07

    Figure 42: Upper limit of depth error as a function of mb for estimates based on P and S waves for three networks: GSETT-2, ALPHA, and ALPHA + a 50 station...

  11. Initiative in Soviet Air Force Tactics and Decision Making.

    DTIC Science & Technology

    1986-06-01

    [Ref. 7: p. 121] [Ref. 8: p. 197] The issue is whether modern Soviet Air Force command style and tactics allow for the freedom of action or initiative... Approved for public release; distribution is unlimited.

  12. Assimilation of a knowledge base and physical models to reduce errors in passive-microwave classifications of sea ice

    NASA Technical Reports Server (NTRS)

    Maslanik, J. A.; Key, J.

    1992-01-01

    An expert system framework has been developed to classify sea ice types using satellite passive microwave data, an operational classification algorithm, spatial and temporal information, ice types estimated from a dynamic-thermodynamic model, output from a neural network that detects the onset of melt, and knowledge about season and region. The rule base imposes boundary conditions upon the ice classification, modifies parameters in the ice algorithm, determines a 'confidence' measure for the classified data, and under certain conditions, replaces the algorithm output with model output. Results demonstrate the potential power of such a system for minimizing overall error in the classification and for providing non-expert data users with a means of assessing the usefulness of the classification results for their applications.

  13. Data-driven advice for applying machine learning to bioinformatics problems

    PubMed Central

    Olson, Randal S.; La Cava, William; Mustahsan, Zairah; Varik, Akshay; Moore, Jason H.

    2017-01-01

    As the bioinformatics field grows, it must keep pace not only with new data but with new algorithms. Here we contribute a thorough analysis of 13 state-of-the-art, commonly used machine learning algorithms on a set of 165 publicly available classification problems in order to provide data-driven algorithm recommendations to current researchers. We present a number of statistical and visual comparisons of algorithm performance and quantify the effect of model selection and algorithm tuning for each algorithm and dataset. The analysis culminates in the recommendation of five algorithms with hyperparameters that maximize classifier performance across the tested problems, as well as general guidelines for applying machine learning to supervised classification problems. PMID:29218881
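
    A minimal Python sketch of the kind of benchmarking workflow described above is given below. It assumes scikit-learn, a built-in example dataset and small placeholder hyperparameter grids rather than the 165 problems and 13 algorithms of the study; tuning happens in an inner cross-validation loop and accuracy is estimated in an outer loop.

      from sklearn.datasets import load_breast_cancer
      from sklearn.model_selection import GridSearchCV, cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.linear_model import LogisticRegression
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.svm import SVC

      X, y = load_breast_cancer(return_X_y=True)

      # Placeholder candidates and grids (assumptions, not the study's settings).
      candidates = {
          "logreg": (LogisticRegression(max_iter=5000), {"logisticregression__C": [0.1, 1, 10]}),
          "rf": (RandomForestClassifier(random_state=0), {"randomforestclassifier__n_estimators": [100, 300]}),
          "svm": (SVC(), {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01]}),
      }

      for name, (clf, grid) in candidates.items():
          pipe = make_pipeline(StandardScaler(), clf)
          search = GridSearchCV(pipe, grid, cv=5)        # inner loop: tune hyperparameters
          scores = cross_val_score(search, X, y, cv=5)   # outer loop: estimate accuracy
          print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")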

  14. Spelling in adolescents with dyslexia: errors and modes of assessment.

    PubMed

    Tops, Wim; Callens, Maaike; Bijn, Evi; Brysbaert, Marc

    2014-01-01

    In this study we focused on the spelling of high-functioning students with dyslexia. We made a detailed classification of the errors in a word and sentence dictation task made by 100 students with dyslexia and 100 matched control students. All participants were in the first year of their bachelor's studies and had Dutch as mother tongue. Three main error categories were distinguished: phonological, orthographic, and grammatical errors (on the basis of morphology and language-specific spelling rules). The results indicated that higher-education students with dyslexia made on average twice as many spelling errors as the controls, with effect sizes of d ≥ 2. When the errors were classified as phonological, orthographic, or grammatical, we found a slight dominance of phonological errors in students with dyslexia. Sentence dictation did not provide more information than word dictation in the correct classification of students with and without dyslexia. © Hammill Institute on Disabilities 2012.

  15. A Guide for Setting the Cut-Scores to Minimize Weighted Classification Errors in Test Batteries

    ERIC Educational Resources Information Center

    Grabovsky, Irina; Wainer, Howard

    2017-01-01

    In this article, we extend the methodology of the Cut-Score Operating Function that we introduced previously and apply it to a testing scenario with multiple independent components and different testing policies. We derive analytically the overall classification error rate for a test battery under the policy when several retakes are allowed for…

  16. [Classifications in forensic medicine and their logical basis].

    PubMed

    Kovalev, A V; Shmarov, L A; Ten'kov, A A

    2014-01-01

    The objective of the present study was to characterize the main requirements for the correct construction of classifications used in forensic medicine, with special reference to the errors that occur in the relevant text-books, guidelines, and manuals and the ways to avoid them. This publication continues the series of thematic articles of the authors devoted to the logical errors in the expert conclusions. The preparation of further publications is underway to report the results of the in-depth analysis of the logical errors encountered in expert conclusions, text-books, guidelines, and manuals.

  17. Sensitivity analysis of the GEMS soil organic carbon model to land cover land use classification uncertainties under different climate scenarios in Senegal

    USGS Publications Warehouse

    Dieye, A.M.; Roy, David P.; Hanan, N.P.; Liu, S.; Hansen, M.; Toure, A.

    2012-01-01

    Spatially explicit land cover land use (LCLU) change information is needed to drive biogeochemical models that simulate soil organic carbon (SOC) dynamics. Such information is increasingly being mapped using remotely sensed satellite data, with classification schemes and uncertainties constrained by the sensing system, classification algorithms and land cover schemes. In this study, automated LCLU classification of multi-temporal Landsat satellite data was used to assess the sensitivity of SOC modeled by the Global Ensemble Biogeochemical Modeling System (GEMS). GEMS was run for an area of 1560 km² in Senegal under three climate change scenarios with LCLU maps generated using different Landsat classification approaches. This research provides a method to estimate the variability of SOC, specifically the SOC uncertainty due to satellite classification errors, which we show is dependent not only on the LCLU classification errors but also on where the LCLU classes occur relative to the other GEMS model inputs.

  18. Evaluating data mining algorithms using molecular dynamics trajectories.

    PubMed

    Tatsis, Vasileios A; Tjortjis, Christos; Tzirakis, Panagiotis

    2013-01-01

    Molecular dynamics simulations provide a sample of a molecule's conformational space. Experiments on the μs time scale, resulting in large amounts of data, are nowadays routine. Data mining techniques such as classification provide a way to analyse such data. In this work, we evaluate and compare several classification algorithms using three data sets which resulted from computer simulations of a potential enzyme mimetic biomolecule. We evaluated 65 classifiers available in the well-known data mining toolkit Weka, using classification errors to assess algorithmic performance. Results suggest that: (i) 'meta' classifiers perform better than the other groups when applied to molecular dynamics data sets; (ii) Random Forest and Rotation Forest are the best classifiers for all three data sets; and (iii) classification via clustering yields the highest classification error. Our findings are consistent with bibliographic evidence, suggesting a 'roadmap' for dealing with such data.

  19. Ultrastructure Processing and Environmental Stability of Advanced Structural and Electronic Materials

    DTIC Science & Technology

    1992-08-31

    NC r") Form 1473, JUN 86 Previous editions are obsolete SECURITY CLASSIFICATION OF THIS PACE I 18. Subject Terms (Continued) I analysis, aging , band...detail the several steps involved in the processing of sol-gel derived optical silicas: I 1) mixing, 2) casting, 3) gelation, 4) aging , 5) drying, 6...ultrastructurcs, such as for doping applications and laser-enhanced densification. The possible disadvantages discussed in this Chapter are inherent

  20. Evaluation criteria for software classification inventories, accuracies, and maps

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R., Jr.

    1976-01-01

    Statistical criteria are presented for modifying the contingency table used to evaluate tabular classification results obtained from remote sensing and ground truth maps. This classification technique contains information on the spatial complexity of the test site, on the relative location of classification errors, on agreement of the classification maps with ground truth maps, and reduces back to the original information normally found in a contingency table.

  1. The search for structure - Object classification in large data sets. [for astronomers

    NASA Technical Reports Server (NTRS)

    Kurtz, Michael J.

    1988-01-01

    Research concerning object classification schemes is reviewed, focusing on large data sets. Classification techniques are discussed, including syntactic and decision-theoretic methods, fuzzy techniques, and stochastic and fuzzy grammars. Consideration is given to the automation of MK classification (Morgan and Keenan, 1973) and other problems associated with the classification of spectra. In addition, the classification of galaxies is examined, including the problems of systematic errors, blended objects, galaxy types, and galaxy clusters.

  2. Procedural Error and Task Interruption

    DTIC Science & Technology

    2016-09-30

    ... for research on errors and individual differences. Results indicate predictive validity for fluid intelligence and specific forms of work ... It generates rich data on several kinds of errors, including procedural errors in which steps are skipped or repeated. Subject terms: procedural error, task interruption, individual differences, fluid intelligence, sleep deprivation.

  3. The Sources of Error in Spanish Writing.

    ERIC Educational Resources Information Center

    Justicia, Fernando; Defior, Sylvia; Pelegrina, Santiago; Martos, Francisco J.

    1999-01-01

    Determines the pattern of errors in Spanish spelling. Analyzes and proposes a classification system for the errors made by children in the initial stages of the acquisition of spelling skills. Finds that the diverse forms of only 20 Spanish words produce 36% of the spelling errors in Spanish, and that substitution is the most frequent type of error. (RS)

  4. A research of selected textural features for detection of asbestos-cement roofing sheets using orthoimages

    NASA Astrophysics Data System (ADS)

    Książek, Judyta

    2015-10-01

    At present, there is great interest in the development of texture-based image classification methods in many different areas. This study presents the results of research carried out to assess the usefulness of selected textural features for detection of asbestos-cement roofs in orthophotomap classification. Two different orthophotomaps of southern Poland (with ground resolutions of 5 cm and 25 cm) were used. On both orthoimages, representative samples were selected for two classes: asbestos-cement roofing sheets and other roofing materials. The usefulness of texture analysis was assessed using machine learning methods based on decision trees (C5.0 algorithm). Various sets of texture parameters were calculated in MaZda software, and different numbers of texture parameter groups were considered when building the decision trees. Cross-validation was performed to obtain the best settings for the decision tree models, and the models with the lowest mean classification error were selected. Classification accuracy was assessed on validation data sets that were not used for training. For the 5 cm ground resolution samples, the lowest mean classification error was 15.6%; for the 25 cm ground resolution, it was 20.0%. The obtained results confirm the potential usefulness of texture parameters for detection of asbestos-cement roofing sheets. To improve accuracy, an extended study analyzing additional textural features as well as spectral characteristics should be considered.

  5. Application of principal component analysis to distinguish patients with schizophrenia from healthy controls based on fractional anisotropy measurements.

    PubMed

    Caprihan, A; Pearlson, G D; Calhoun, V D

    2008-08-15

    Principal component analysis (PCA) is often used to reduce the dimension of data before applying more sophisticated data analysis methods such as non-linear classification algorithms or independent component analysis. This practice is based on selecting components corresponding to the largest eigenvalues. If the ultimate goal is separation of data in two groups, then these set of components need not have the most discriminatory power. We measured the distance between two such populations using Mahalanobis distance and chose the eigenvectors to maximize it, a modified PCA method, which we call the discriminant PCA (DPCA). DPCA was applied to diffusion tensor-based fractional anisotropy images to distinguish age-matched schizophrenia subjects from healthy controls. The performance of the proposed method was evaluated by the one-leave-out method. We show that for this fractional anisotropy data set, the classification error with 60 components was close to the minimum error and that the Mahalanobis distance was twice as large with DPCA, than with PCA. Finally, by masking the discriminant function with the white matter tracts of the Johns Hopkins University atlas, we identified left superior longitudinal fasciculus as the tract which gave the least classification error. In addition, with six optimally chosen tracts the classification error was zero.
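
    The following sketch illustrates the general idea of ranking principal components by group separation rather than by eigenvalue and then measuring the Mahalanobis distance between groups in the selected subspace. It is a simplified stand-in for the authors' DPCA, using synthetic data in place of fractional anisotropy images.

      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(0)
      n, p = 80, 200
      X = rng.normal(size=(n, p))
      y = np.repeat([0, 1], n // 2)
      X[y == 1, :5] += 0.8                      # a weak group difference in a few features

      pca = PCA(n_components=20).fit(X)
      Z = pca.transform(X)

      # Separation of each component: squared mean difference / pooled variance.
      d = (Z[y == 0].mean(0) - Z[y == 1].mean(0)) ** 2
      s = 0.5 * (Z[y == 0].var(0) + Z[y == 1].var(0))
      keep = np.argsort(-(d / s))[:5]           # keep the most discriminative components

      Zk = Z[:, keep]
      diff = Zk[y == 0].mean(0) - Zk[y == 1].mean(0)
      cov = np.cov(np.vstack([Zk[y == 0], Zk[y == 1]]).T)
      mahal = np.sqrt(diff @ np.linalg.inv(cov) @ diff)
      print("Mahalanobis distance in selected subspace:", round(float(mahal), 3))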

  6. Pareto-optimal multi-objective dimensionality reduction deep auto-encoder for mammography classification.

    PubMed

    Taghanaki, Saeid Asgari; Kawahara, Jeremy; Miles, Brandon; Hamarneh, Ghassan

    2017-07-01

    Feature reduction is an essential stage in computer aided breast cancer diagnosis systems. Multilayer neural networks can be trained to extract relevant features by encoding high-dimensional data into low-dimensional codes. Optimizing traditional auto-encoders works well only if the initial weights are close to a proper solution. They are also trained to only reduce the mean squared reconstruction error (MRE) between the encoder inputs and the decoder outputs, but do not address the classification error. The goal of the current work is to test the hypothesis that extending traditional auto-encoders (which only minimize reconstruction error) to multi-objective optimization for finding Pareto-optimal solutions provides more discriminative features that will improve classification performance when compared to single-objective and other multi-objective approaches (i.e. scalarized and sequential). In this paper, we introduce a novel multi-objective optimization of deep auto-encoder networks, in which the auto-encoder optimizes two objectives: MRE and mean classification error (MCE) for Pareto-optimal solutions, rather than just MRE. These two objectives are optimized simultaneously by a non-dominated sorting genetic algorithm. We tested our method on 949 X-ray mammograms categorized into 12 classes. The results show that the features identified by the proposed algorithm allow a classification accuracy of up to 98.45%, demonstrating favourable accuracy over the results of state-of-the-art methods reported in the literature. We conclude that adding the classification objective to the traditional auto-encoder objective and optimizing for finding Pareto-optimal solutions, using evolutionary multi-objective optimization, results in producing more discriminative features. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Wildlife management by habitat units: A preliminary plan of action

    NASA Technical Reports Server (NTRS)

    Frentress, C. D.; Frye, R. G.

    1975-01-01

    Procedures for yielding vegetation type maps were developed using LANDSAT data and a computer assisted classification analysis (LARSYS) to assist in managing populations of wildlife species by defined area units. Ground cover in Travis County, Texas was classified on two occasions using a modified version of the unsupervised approach to classification. The first classification produced a total of 17 classes. Examination revealed that further grouping was justified. A second analysis produced 10 classes which were displayed on printouts which were later color-coded. The final classification was 82 percent accurate. While the classification map appeared to satisfactorily depict the existing vegetation, two classes were determined to contain significant error. The major sources of error could have been eliminated by stratifying cluster sites more closely among previously mapped soil associations that are identified with particular plant associations and by precisely defining class nomenclature using established criteria early in the analysis.

  8. The Relationship between Occurrence Timing of Dispensing Errors and Subsequent Danger to Patients under the Situation According to the Classification of Drugs by Efficacy.

    PubMed

    Tsuji, Toshikazu; Nagata, Kenichiro; Kawashiri, Takehiro; Yamada, Takaaki; Irisa, Toshihiro; Murakami, Yuko; Kanaya, Akiko; Egashira, Nobuaki; Masuda, Satohiro

    2016-01-01

    There are many reports regarding various medical institutions' attempts at the prevention of dispensing errors. However, the relationship between occurrence timing of dispensing errors and subsequent danger to patients has not been studied under the situation according to the classification of drugs by efficacy. Therefore, we analyzed the relationship between position and time regarding the occurrence of dispensing errors. Furthermore, we investigated the relationship between occurrence timing of them and danger to patients. In this study, dispensing errors and incidents in three categories (drug name errors, drug strength errors, drug count errors) were classified into two groups in terms of its drug efficacy (efficacy similarity (-) group, efficacy similarity (+) group), into three classes in terms of the occurrence timing of dispensing errors (initial phase errors, middle phase errors, final phase errors). Then, the rates of damage shifting from "dispensing errors" to "damage to patients" were compared as an index of danger between two groups and among three classes. Consequently, the rate of damage in "efficacy similarity (-) group" was significantly higher than that in "efficacy similarity (+) group". Furthermore, the rate of damage is the highest in "initial phase errors", the lowest in "final phase errors" among three classes. From the results of this study, it became clear that the earlier the timing of dispensing errors occurs, the more severe the damage to patients becomes.

  9. An Analysis of the Economic Assumptions Underlying Fiscal Plans FY1981 - FY1984.

    DTIC Science & Technology

    1986-06-01

    An Analysis of the Economic Assumptions Underlying Fiscal Plans FY1981 - FY1984, by Robert Welch Beck, June 1986. Thesis Advisor: P. M. Carrick. Approved for public release; distribution is unlimited.

  10. Rhythm production at school entry as a predictor of poor reading and spelling at the end of first grade.

    PubMed

    Lundetræ, Kjersti; Thomson, Jenny M

    2018-01-01

    Rhythm plays an organisational role in the prosody and phonology of language, and children with literacy difficulties have been found to demonstrate poor rhythmic perception. This study explored whether students' performance on a simple rhythm task at school entry could serve as a predictor of whether they would face difficulties in word reading and spelling at the end of grade 1. The participants were 479 Norwegian 6-year-old first graders randomized as controls in the longitudinal RCT on track (n = 1171). Rhythmic timing and pre-reading skills were tested individually at school entry on a digital tablet. On the rhythm task, the students were told to tap a drum appearing on the screen to two different rhythms (2 Hz paced and 1.5 Hz paced). Children's responses were recorded as they tapped on the screen with their index finger. Significant group differences were found in rhythm tapping ability measured at school entry, when groups were defined upon whether children went on to score above or below the 20th percentile reading and spelling thresholds in national assessment tests at the end of grade one. Inclusion of the school-entry rhythmic tapping measure into a model of classification accuracy for above or below threshold reading and spelling improved accuracy of classification by 6.2 and 9.2% respectively.

  11. PACE 2: Pricing and Cost Estimating Handbook

    NASA Technical Reports Server (NTRS)

    Stewart, R. D.; Shepherd, T.

    1977-01-01

    An automatic data processing system to be used for the preparation of industrial engineering type manhour and material cost estimates has been established. This computer system has evolved into a highly versatile and highly flexible tool which significantly reduces computation time, eliminates computational errors, and reduces typing and reproduction time for estimators and pricers since all mathematical and clerical functions are automatic once basic inputs are derived.

  12. Classification Model for Forest Fire Hotspot Occurrences Prediction Using ANFIS Algorithm

    NASA Astrophysics Data System (ADS)

    Wijayanto, A. K.; Sani, O.; Kartika, N. D.; Herdiyeni, Y.

    2017-01-01

    This study proposed the application of a data mining technique, the Adaptive Neuro-Fuzzy Inference System (ANFIS), to forest fire hotspot data to develop classification models for hotspot occurrence in Central Kalimantan. A hotspot is a point indicated as the location of a fire. In this study, hotspot distribution is categorized as true alarm or false alarm. ANFIS is a soft computing method in which a given input-output data set is expressed in a fuzzy inference system (FIS). The FIS implements a nonlinear mapping from its input space to the output space. The method classified hotspots as target objects by correlating spatial attribute data, using three folds in the ANFIS algorithm to obtain the best model. The best result, obtained from the 3rd fold, provided a low training error (error = 0.0093676) and an equally low testing error (error = 0.0093676). Distance to road is the most determining attribute influencing the probability of true and false alarms, as the level of human activity is higher near roads. This classification model can be used to develop an early warning system for forest fires.

  13. Classification based upon gene expression data: bias and precision of error rates.

    PubMed

    Wood, Ian A; Visscher, Peter M; Mengersen, Kerrie L

    2007-06-01

    Gene expression data offer a large number of potentially useful predictors for the classification of tissue samples into classes, such as diseased and non-diseased. The predictive error rate of classifiers can be estimated using methods such as cross-validation. We have investigated issues of interpretation and potential bias in the reporting of error rate estimates. The issues considered here are optimization and selection biases, sampling effects, measures of misclassification rate, baseline error rates, two-level external cross-validation and a novel proposal for detection of bias using the permutation mean. Reporting an optimal estimated error rate incurs an optimization bias. Downward bias of 3-5% was found in an existing study of classification based on gene expression data and may be endemic in similar studies. Using a simulated non-informative dataset and two example datasets from existing studies, we show how bias can be detected through the use of label permutations and avoided using two-level external cross-validation. Some studies avoid optimization bias by using single-level cross-validation and a test set, but error rates can be more accurately estimated via two-level cross-validation. In addition to estimating the simple overall error rate, we recommend reporting class error rates plus where possible the conditional risk incorporating prior class probabilities and a misclassification cost matrix. We also describe baseline error rates derived from three trivial classifiers which ignore the predictors. R code which implements two-level external cross-validation with the PAMR package, experiment code, dataset details and additional figures are freely available for non-commercial use from http://www.maths.qut.edu.au/profiles/wood/permr.jsp
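
    Two of the paper's recommendations, two-level (nested) external cross-validation and label permutation as a bias check, can be sketched as follows. The pipeline, grids and synthetic high-dimensional data are placeholders, not the PAMR-based R code the authors provide.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.feature_selection import SelectKBest, f_classif
      from sklearn.model_selection import GridSearchCV, cross_val_score
      from sklearn.pipeline import Pipeline
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=100, n_features=500, n_informative=10, random_state=0)

      pipe = Pipeline([("select", SelectKBest(f_classif)), ("clf", SVC(kernel="linear"))])
      grid = {"select__k": [10, 50, 100], "clf__C": [0.01, 0.1, 1]}
      inner = GridSearchCV(pipe, grid, cv=5)            # level 1: feature selection and tuning
      acc = cross_val_score(inner, X, y, cv=5)          # level 2: honest error estimate
      print("nested CV accuracy:", acc.mean().round(3))

      # Permutation check: with shuffled labels the estimate should be near chance.
      rng = np.random.default_rng(0)
      perm_acc = cross_val_score(inner, X, rng.permutation(y), cv=5)
      print("permuted-label accuracy (should be ~0.5):", perm_acc.mean().round(3))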

  14. Merging Theoretical Models and Therapy Approaches in the Context of Internet Gaming Disorder: A Personal Perspective.

    PubMed

    Young, Kimberly S; Brand, Matthias

    2017-01-01

    Although, it is not yet officially recognized as a clinical entity which is diagnosable, Internet Gaming Disorder (IGD) has been included in section III for further study in the DSM-5 by the American Psychiatric Association (APA, 2013). This is important because there is increasing evidence that people of all ages, in particular teens and young adults, are facing very real and sometimes very severe consequences in daily life resulting from an addictive use of online games. This article summarizes general aspects of IGD including diagnostic criteria and arguments for the classification as an addictive disorder including evidence from neurobiological studies. Based on previous theoretical considerations and empirical findings, this paper examines the use of one recently proposed model, the Interaction of Person-Affect-Cognition-Execution (I-PACE) model, for inspiring future research and for developing new treatment protocols for IGD. The I-PACE model is a theoretical framework that explains symptoms of Internet addiction by looking at interactions between predisposing factors, moderators, and mediators in combination with reduced executive functioning and diminished decision making. Finally, the paper discusses how current treatment protocols focusing on Cognitive-Behavioral Therapy for Internet addiction (CBT-IA) fit with the processes hypothesized in the I-PACE model.

  15. Merging Theoretical Models and Therapy Approaches in the Context of Internet Gaming Disorder: A Personal Perspective

    PubMed Central

    Young, Kimberly S.; Brand, Matthias

    2017-01-01

    Although, it is not yet officially recognized as a clinical entity which is diagnosable, Internet Gaming Disorder (IGD) has been included in section III for further study in the DSM-5 by the American Psychiatric Association (APA, 2013). This is important because there is increasing evidence that people of all ages, in particular teens and young adults, are facing very real and sometimes very severe consequences in daily life resulting from an addictive use of online games. This article summarizes general aspects of IGD including diagnostic criteria and arguments for the classification as an addictive disorder including evidence from neurobiological studies. Based on previous theoretical considerations and empirical findings, this paper examines the use of one recently proposed model, the Interaction of Person-Affect-Cognition-Execution (I-PACE) model, for inspiring future research and for developing new treatment protocols for IGD. The I-PACE model is a theoretical framework that explains symptoms of Internet addiction by looking at interactions between predisposing factors, moderators, and mediators in combination with reduced executive functioning and diminished decision making. Finally, the paper discusses how current treatment protocols focusing on Cognitive-Behavioral Therapy for Internet addiction (CBT-IA) fit with the processes hypothesized in the I-PACE model. PMID:29104555

  16. LACIE performance predictor FOC users manual

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The LACIE Performance Predictor (LPP) is a computer simulation of the LACIE process for predicting worldwide wheat production. The simulation provides for the introduction of various errors into the system and provides estimates based on these errors, thus allowing the user to determine the impact of selected error sources. The FOC LPP simulates the acquisition of the sample segment data by the LANDSAT Satellite (DAPTS), the classification of the agricultural area within the sample segment (CAMS), the estimation of the wheat yield (YES), and the production estimation and aggregation (CAS). These elements include data acquisition characteristics, environmental conditions, classification algorithms, the LACIE aggregation and data adjustment procedures. The operational structure for simulating these elements consists of the following key programs: (1) LACIE Utility Maintenance Process, (2) System Error Executive, (3) Ephemeris Generator, (4) Access Generator, (5) Acquisition Selector, (6) LACIE Error Model (LEM), and (7) Post Processor.

  17. Sensitivity of geographic information system outputs to errors in remotely sensed data

    NASA Technical Reports Server (NTRS)

    Ramapriyan, H. K.; Boyd, R. K.; Gunther, F. J.; Lu, Y. C.

    1981-01-01

    The sensitivity of the outputs of a geographic information system (GIS) to errors in inputs derived from remotely sensed data (RSD) is investigated using a suitability model with per-cell decisions and a gridded geographic data base whose cells are larger than the RSD pixels. The process of preparing RSD as input to a GIS is analyzed, and the errors associated with classification and registration are examined. In the case of the model considered, it is found that the errors caused during classification and registration are partially compensated by the aggregation of pixels. The compensation is quantified by means of an analytical model, a Monte Carlo simulation, and experiments with Landsat data. The results show that error reductions of the order of 50% occur because of aggregation when 25 pixels of RSD are used per cell in the geographic data base.
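
    The aggregation effect can be illustrated with a small Monte Carlo calculation. The per-pixel error rate and the majority-vote rule below are assumptions chosen for illustration, not the paper's analytical model, but they show how independent per-pixel errors partially cancel when 25 pixels are aggregated into one cell.

      import numpy as np

      rng = np.random.default_rng(1)
      per_pixel_error = 0.15          # assumed per-pixel misclassification rate
      pixels_per_cell = 25
      n_cells = 100_000

      # For each cell, count how many of its 25 pixels are misclassified;
      # the cell label is wrong only if a majority of pixels are wrong.
      errors = rng.random((n_cells, pixels_per_cell)) < per_pixel_error
      cell_error = (errors.sum(axis=1) > pixels_per_cell // 2).mean()

      print(f"per-pixel error: {per_pixel_error:.2%}, per-cell error: {cell_error:.2%}")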

  18. Error Analysis in Composition of Iranian Lower Intermediate Students

    ERIC Educational Resources Information Center

    Taghavi, Mehdi

    2012-01-01

    Learners make errors during the process of learning languages. This study examines errors in the writing task of twenty Iranian lower intermediate male students aged between 13 and 15. The subject given to the participants was a composition about the seasons of the year. All of the errors were identified and classified. Corder's classification (1967)…

  19. Self-paced model learning for robust visual tracking

    NASA Astrophysics Data System (ADS)

    Huang, Wenhui; Gu, Jason; Ma, Xin; Li, Yibin

    2017-01-01

    In visual tracking, learning a robust and efficient appearance model is a challenging task. Model learning determines both the strategy and the frequency of model updating, which contains many details that could affect the tracking results. Self-paced learning (SPL) has recently been attracting considerable interest in the fields of machine learning and computer vision. SPL is inspired by the learning principle underlying the cognitive process of humans, whose learning process is generally from easier samples to more complex aspects of a task. We propose a tracking method that integrates the learning paradigm of SPL into visual tracking, so reliable samples can be automatically selected for model learning. In contrast to many existing model learning strategies in visual tracking, we discover the missing link between sample selection and model learning, which are combined into a single objective function in our approach. Sample weights and model parameters can be learned by minimizing this single objective function. Additionally, to solve the real-valued learning weight of samples, an error-tolerant self-paced function that considers the characteristics of visual tracking is proposed. We demonstrate the robustness and efficiency of our tracker on a recent tracking benchmark data set with 50 video sequences.
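
    The core alternating scheme of self-paced learning, selecting currently easy samples and refitting while the pace parameter grows, can be sketched in a few lines. The sketch below uses a plain regression model instead of a visual tracker and a hard (binary) self-paced weight rather than the error-tolerant real-valued weighting proposed in the paper.

      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 3))
      y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)
      y[:20] += 5.0                                   # a few corrupted (hard/outlier) samples

      model = LinearRegression().fit(X, y)            # initial fit on everything
      for lam in [1.0, 2.0, 4.0]:                     # pace parameter grows each round
          loss = (model.predict(X) - y) ** 2
          v = loss < lam                              # binary self-paced weights
          model = LinearRegression().fit(X[v], y[v])  # refit on currently "easy" samples

      print("selected samples:", int(v.sum()), "of", len(y))
      print("coefficients:", model.coef_.round(2))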

  20. It Pays to Go Off-Track: Practicing with Error-Augmenting Haptic Feedback Facilitates Learning of a Curve-Tracing Task

    PubMed Central

    Williams, Camille K.; Tremblay, Luc; Carnahan, Heather

    2016-01-01

    Researchers in the domain of haptic training are now entering the long-standing debate regarding whether or not it is best to learn a skill by experiencing errors. Haptic training paradigms provide fertile ground for exploring how various theories about feedback, errors and physical guidance intersect during motor learning. Our objective was to determine how error minimizing, error augmenting and no haptic feedback while learning a self-paced curve-tracing task impact performance on delayed (1 day) retention and transfer tests, which indicate learning. We assessed performance using movement time and tracing error to calculate a measure of overall performance – the speed accuracy cost function. Our results showed that despite exhibiting the worst performance during skill acquisition, the error augmentation group had significantly better accuracy (but not overall performance) than the error minimization group on delayed retention and transfer tests. The control group’s performance fell between that of the two experimental groups but was not significantly different from either on the delayed retention test. We propose that the nature of the task (requiring online feedback to guide performance) coupled with the error augmentation group’s frequent off-target experience and rich experience of error-correction promoted information processing related to error-detection and error-correction that are essential for motor learning. PMID:28082937

  1. Combining multiple decisions: applications to bioinformatics

    NASA Astrophysics Data System (ADS)

    Yukinawa, N.; Takenouchi, T.; Oba, S.; Ishii, S.

    2008-01-01

    Multi-class classification is one of the fundamental tasks in bioinformatics and typically arises in cancer diagnosis studies by gene expression profiling. This article reviews two recent approaches to multi-class classification by combining multiple binary classifiers, which are formulated based on a unified framework of error-correcting output coding (ECOC). The first approach is to construct a multi-class classifier in which each binary classifier to be aggregated has a weight value to be optimally tuned based on the observed data. In the second approach, misclassification of each binary classifier is formulated as a bit inversion error with a probabilistic model by making an analogy to the context of information transmission theory. Experimental studies using various real-world datasets including cancer classification problems reveal that both of the new methods are superior or comparable to other multi-class classification methods.
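
    A generic ECOC classifier of the kind this framework unifies can be assembled with scikit-learn, as sketched below; the weighted and probabilistic decoding schemes reviewed in the article are not reproduced here.

      from sklearn.datasets import load_iris
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.multiclass import OutputCodeClassifier

      X, y = load_iris(return_X_y=True)

      # Each class is assigned a binary code word; one binary classifier is trained
      # per code bit, and a sample is assigned to the class whose code word is
      # closest to the vector of binary predictions.
      ecoc = OutputCodeClassifier(LogisticRegression(max_iter=1000),
                                  code_size=2.0, random_state=0)
      print("ECOC accuracy:", cross_val_score(ecoc, X, y, cv=5).mean().round(3))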

  2. Credit Risk Evaluation Using a C-Variable Least Squares Support Vector Classification Model

    NASA Astrophysics Data System (ADS)

    Yu, Lean; Wang, Shouyang; Lai, K. K.

    Credit risk evaluation is one of the most important issues in financial risk management. In this paper, a C-variable least squares support vector classification (C-VLSSVC) model is proposed for credit risk analysis. The main idea of this model is based on the prior knowledge that different classes may have different importance for modeling and that more weight should be given to classes with more importance. The C-VLSSVC model can be constructed by a simple modification of the regularization parameter in LSSVC, whereby more weight is given to the least squares classification errors of important classes than to those of unimportant classes, while keeping the regularized terms in their original form. For illustration purposes, a real-world credit dataset is used to test the effectiveness of the C-VLSSVC model.

  3. Exception handling for sensor fusion

    NASA Astrophysics Data System (ADS)

    Chavez, G. T.; Murphy, Robin R.

    1993-08-01

    This paper presents a control scheme for handling sensing failures (sensor malfunctions, significant degradations in performance due to changes in the environment, and errant expectations) in sensor fusion for autonomous mobile robots. The advantages of the exception handling mechanism are that it emphasizes a fast response to sensing failures, is able to use only a partial causal model of sensing failure, and leads to a graceful degradation of sensing if the sensing failure cannot be compensated for. The exception handling mechanism consists of two modules: error classification and error recovery. The error classification module in the exception handler attempts to classify the type and source(s) of the error using a modified generate-and-test procedure. If the source of the error is isolated, the error recovery module examines its cache of recovery schemes, which either repair or replace the current sensing configuration. If the failure is due to an error in expectation or cannot be identified, the planner is alerted. Experiments using actual sensor data collected by the CSM Mobile Robotics/Machine Perception Laboratory's Denning mobile robot demonstrate the operation of the exception handling mechanism.

  4. Using beta binomials to estimate classification uncertainty for ensemble models.

    PubMed

    Clark, Robert D; Liang, Wenkel; Lee, Adam C; Lawless, Michael S; Fraczkiewicz, Robert; Waldman, Marvin

    2014-01-01

    Quantitative structure-activity (QSAR) models have enormous potential for reducing drug discovery and development costs as well as the need for animal testing. Great strides have been made in estimating their overall reliability, but to fully realize that potential, researchers and regulators need to know how confident they can be in individual predictions. Submodels in an ensemble model which have been trained on different subsets of a shared training pool represent multiple samples of the model space, and the degree of agreement among them contains information on the reliability of ensemble predictions. For artificial neural network ensembles (ANNEs) using two different methods for determining ensemble classification - one using vote tallies and the other averaging individual network outputs - we have found that the distribution of predictions across positive vote tallies can be reasonably well-modeled as a beta binomial distribution, as can the distribution of errors. Together, these two distributions can be used to estimate the probability that a given predictive classification will be in error. Large data sets comprised of logP, Ames mutagenicity, and CYP2D6 inhibition data are used to illustrate and validate the method. The distributions of predictions and errors for the training pool accurately predicted the distribution of predictions and errors for large external validation sets, even when the number of positive and negative examples in the training pool were not balanced. Moreover, the likelihood of a given compound being prospectively misclassified as a function of the degree of consensus between networks in the ensemble could in most cases be estimated accurately from the fitted beta binomial distributions for the training pool. Confidence in an individual predictive classification by an ensemble model can be accurately assessed by examining the distributions of predictions and errors as a function of the degree of agreement among the constituent submodels. Further, ensemble uncertainty estimation can often be improved by adjusting the voting or classification threshold based on the parameters of the error distribution. Finally, the profiles for models whose predictive uncertainty estimates are not reliable provide clues to that effect without the need for comparison to an external test set.
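
    The following sketch shows, on synthetic vote tallies, how fitted beta-binomial distributions for all predictions and for erroneous predictions can be combined into an estimate of P(error | vote tally). The ensemble size, data and maximum-likelihood fitting routine are illustrative assumptions, not the authors' exact procedure.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import betabinom

      n_nets = 20                                    # assumed ensemble size
      rng = np.random.default_rng(0)
      all_tallies = rng.binomial(n_nets, rng.beta(2, 2, size=5000))      # all predictions
      err_tallies = rng.binomial(n_nets, rng.beta(1.2, 1.2, size=400))   # misclassified ones

      def fit_betabinom(k, n):
          # Maximum-likelihood fit of beta-binomial parameters (a, b).
          nll = lambda ab: -betabinom.logpmf(k, n, ab[0], ab[1]).sum()
          return minimize(nll, x0=[1.0, 1.0], bounds=[(1e-3, None)] * 2).x

      a_all, b_all = fit_betabinom(all_tallies, n_nets)
      a_err, b_err = fit_betabinom(err_tallies, n_nets)

      base_rate = len(err_tallies) / len(all_tallies)
      k = np.arange(n_nets + 1)
      p_error_given_k = np.clip(base_rate * betabinom.pmf(k, n_nets, a_err, b_err)
                                / betabinom.pmf(k, n_nets, a_all, b_all), 0.0, 1.0)
      for votes in (2, 10, 18):
          print(f"estimated P(error | {votes} positive votes) = {p_error_given_k[votes]:.3f}")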

  5. Stress Indices and Flexibility Factors for 90-Degree Piping Elbows with Straight Pipe Extensions.

    DTIC Science & Technology

    1982-02-01

  6. Original and Mirror Face Images and Minimum Squared Error Classification for Visible Light Face Recognition.

    PubMed

    Wang, Rong

    2015-01-01

    In real-world applications, the image of faces varies with illumination, facial expression, and poses. It seems that more training samples are able to reveal possible images of the faces. Though minimum squared error classification (MSEC) is a widely used method, its applications on face recognition usually suffer from the problem of a limited number of training samples. In this paper, we improve MSEC by using the mirror faces as virtual training samples. We obtained the mirror faces generated from original training samples and put these two kinds of samples into a new set. The face recognition experiments show that our method does obtain high accuracy performance in classification.
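
    The two ingredients, mirrored training images as virtual samples and a minimum-squared-error style classifier, can be sketched as follows on synthetic "images"; ridge regression onto class labels stands in for the MSEC formulation used in the paper.

      import numpy as np
      from sklearn.linear_model import RidgeClassifier

      rng = np.random.default_rng(0)
      h, w, n_classes, per_class = 16, 16, 5, 8
      X, y = [], []
      for c in range(n_classes):
          template = rng.normal(size=(h, w))                  # synthetic "face" per class
          for _ in range(per_class):
              X.append(template + 0.3 * rng.normal(size=(h, w)))
              y.append(c)
      X, y = np.array(X), np.array(y)

      X_mirror = X[:, :, ::-1]                                # horizontal flips as virtual samples
      X_train = np.concatenate([X, X_mirror]).reshape(2 * len(X), -1)
      y_train = np.concatenate([y, y])

      clf = RidgeClassifier(alpha=1.0).fit(X_train, y_train)  # least-squares-style classifier
      X_test = (X + 0.3 * rng.normal(size=X.shape)).reshape(len(X), -1)
      print("accuracy with mirrored training samples:", clf.score(X_test, y))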

  7. From Classification to Epilepsy Ontology and Informatics

    PubMed Central

    Zhang, Guo-Qiang; Sahoo, Satya S; Lhatoo, Samden D

    2012-01-01

    Summary The 2010 International League Against Epilepsy (ILAE) classification and terminology commission report proposed a much needed departure from previous classifications to incorporate advances in molecular biology, neuroimaging, and genetics. It proposed an interim classification and defined two key requirements that need to be satisfied. The first is the ability to classify epilepsy in dimensions according to a variety of purposes including clinical research, patient care, and drug discovery. The second is the ability of the classification system to evolve with new discoveries. Multi-dimensionality and flexibility are crucial to the success of any future classification. In addition, a successful classification system must play a central role in the rapidly growing field of epilepsy informatics. An epilepsy ontology, based on classification, will allow information systems to facilitate data-intensive studies and provide a proven route to meeting the two foregoing key requirements. Epilepsy ontology will be a structured terminology system that accommodates proposed and evolving ILAE classifications, the NIH/NINDS Common Data Elements, the ICD systems and explicitly specifies all known relationships between epilepsy concepts in a proper framework. This will aid evidence based epilepsy diagnosis, investigation, treatment and research for a diverse community of clinicians and researchers. Benefits range from systematization of electronic patient records to multi-modal data repositories for research and training manuals for those involved in epilepsy care. Given the complexity, heterogeneity and pace of research advances in the epilepsy domain, such an ontology must be collaboratively developed by key stakeholders in the epilepsy community and experts in knowledge engineering and computer science. PMID:22765502

  8. Rank preserving sparse learning for Kinect based scene classification.

    PubMed

    Tao, Dapeng; Jin, Lianwen; Yang, Zhao; Li, Xuelong

    2013-10-01

    With the rapid development of RGB-D sensors and the rapidly growing popularity of the low-cost Microsoft Kinect sensor, scene classification, which is a hard yet important problem in computer vision, has gained a resurgence of interest recently. That is because the depth information provided by the Kinect sensor opens an effective and innovative way for scene classification. In this paper, we propose a new scheme for scene classification, which applies locality-constrained linear coding (LLC) to local SIFT features for representing the RGB-D samples and classifies scenes through the cooperation between a new rank preserving sparse learning (RPSL) based dimension reduction and a simple classification method. RPSL considers four aspects: 1) it preserves the rank order information of the within-class samples in a local patch; 2) it maximizes the margin between the between-class samples on the local patch; 3) the L1-norm penalty is introduced to obtain the parsimony property; and 4) it models the classification error minimization by utilizing the least-squares error minimization. Experiments are conducted on the NYU Depth V1 dataset and demonstrate the robustness and effectiveness of RPSL for scene classification.

  9. Comparison of two Classification methods (MLC and SVM) to extract land use and land cover in Johor Malaysia

    NASA Astrophysics Data System (ADS)

    Rokni Deilmai, B.; Ahmad, B. Bin; Zabihi, H.

    2014-06-01

    Mapping is essential for the analysis of land use and land cover, which influence many environmental processes and properties. For the creation of land cover maps, it is important to minimize error, because these errors will propagate into later analyses based on the maps. The reliability of land cover maps derived from remotely sensed data depends on an accurate classification. In this study, we analyzed multispectral data using two different classifiers: the Maximum Likelihood Classifier (MLC) and the Support Vector Machine (SVM). To pursue this aim, Landsat Thematic Mapper data and identical field-based training sample datasets in Johor, Malaysia, were used for each classification method, resulting in five land cover classes: forest, oil palm, urban area, water, and rubber. Classification results indicate that SVM was more accurate than MLC. With a demonstrated capability to produce reliable results, SVM methods should be especially useful for land cover classification.
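
    Conceptually, the comparison can be reproduced with a Gaussian maximum likelihood classifier (quadratic discriminant analysis) against an SVM, as in the sketch below; the synthetic six-band data are a stand-in for the Landsat Thematic Mapper pixels used in the study.

      from sklearn.datasets import make_classification
      from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      # 6 "bands", 5 land cover classes (e.g. forest, oil palm, urban, water, rubber)
      X, y = make_classification(n_samples=2000, n_features=6, n_informative=5,
                                 n_redundant=0, n_classes=5, n_clusters_per_class=1,
                                 random_state=0)

      mlc = QuadraticDiscriminantAnalysis()                      # Gaussian ML classifier
      svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))

      for name, model in [("MLC (QDA)", mlc), ("SVM", svm)]:
          acc = cross_val_score(model, X, y, cv=5).mean()
          print(f"{name}: cross-validated accuracy {acc:.3f}")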

  10. On the Discriminant Analysis in the 2-Populations Case

    NASA Astrophysics Data System (ADS)

    Rublík, František

    2008-01-01

    The empirical Bayes Gaussian rule, which in the normal case yields good values of the probability of total error, may yield high values of the maximum probability error. From this point of view the presented modified version of the classification rule of Broffitt, Randles and Hogg appears to be superior. The modification included in this paper is termed as a WR method, and the choice of its weights is discussed. The mentioned methods are also compared with the K nearest neighbours classification rule.

  11. A classification of errors in lay comprehension of medical documents.

    PubMed

    Keselman, Alla; Smith, Catherine Arnott

    2012-12-01

    Emphasis on participatory medicine requires that patients and consumers participate in tasks traditionally reserved for healthcare providers. This includes reading and comprehending medical documents, often but not necessarily in the context of interacting with Personal Health Records (PHRs). Research suggests that while giving patients access to medical documents has many benefits (e.g., improved patient-provider communication), lay people often have difficulty understanding medical information. Informatics can address the problem by developing tools that support comprehension; this requires in-depth understanding of the nature and causes of errors that lay people make when comprehending clinical documents. The objective of this study was to develop a classification scheme of comprehension errors, based on lay individuals' retellings of two documents containing clinical text: a description of a clinical trial and a typical office visit note. While not comprehensive, the scheme can serve as a foundation of further development of a taxonomy of patients' comprehension errors. Eighty participants, all healthy volunteers, read and retold two medical documents. A data-driven content analysis procedure was used to extract and classify retelling errors. The resulting hierarchical classification scheme contains nine categories and 23 subcategories. The most common error made by the participants involved incorrectly recalling brand names of medications. Other common errors included misunderstanding clinical concepts, misreporting the objective of a clinical research study and physician's findings during a patient's visit, and confusing and misspelling clinical terms. A combination of informatics support and health education is likely to improve the accuracy of lay comprehension of medical documents. Published by Elsevier Inc.

  12. An extension of the receiver operating characteristic curve and AUC-optimal classification.

    PubMed

    Takenouchi, Takashi; Komori, Osamu; Eguchi, Shinto

    2012-10-01

    While most proposed methods for solving classification problems focus on minimization of the classification error rate, we are interested in the receiver operating characteristic (ROC) curve, which provides more information about classification performance than the error rate does. The area under the ROC curve (AUC) is a natural measure for overall assessment of a classifier based on the ROC curve. We discuss a class of concave functions for AUC maximization in which a boosting-type algorithm including RankBoost is considered, and the Bayesian risk consistency and the lower bound of the optimum function are discussed. A procedure derived by maximizing a specific optimum function has high robustness, based on gross error sensitivity. Additionally, we focus on the partial AUC, which is the partial area under the ROC curve. For example, in medical screening, a high true-positive rate to the fixed lower false-positive rate is preferable and thus the partial AUC corresponding to lower false-positive rates is much more important than the remaining AUC. We extend the class of concave optimum functions for partial AUC optimality with the boosting algorithm. We investigated the validity of the proposed method through several experiments with data sets in the UCI repository.
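
    The distinction between full and partial AUC can be illustrated with scikit-learn's standardized partial AUC, which restricts the area to false-positive rates below a chosen bound; the classifier and data below are placeholders.

      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      X, y = make_classification(n_samples=3000, n_features=20, weights=[0.9, 0.1],
                                 random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

      scores = LogisticRegression(max_iter=2000).fit(X_tr, y_tr).decision_function(X_te)
      print("full AUC:", round(roc_auc_score(y_te, scores), 3))
      # Screening-style criterion: only low false-positive rates matter.
      print("partial AUC (FPR <= 0.1):", round(roc_auc_score(y_te, scores, max_fpr=0.1), 3))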

  13. Review of medication errors that are new or likely to occur more frequently with electronic medication management systems.

    PubMed

    Van de Vreede, Melita; McGrath, Anne; de Clifford, Jan

    2018-05-14

    Objective. The aim of the present study was to identify and quantify medication errors reportedly related to electronic medication management systems (eMMS) and those considered likely to occur more frequently with eMMS. This included developing a new classification system relevant to eMMS errors. Methods. Eight Victorian hospitals with eMMS participated in a retrospective audit of reported medication incidents from their incident reporting databases between May and July 2014. Site-appointed project officers submitted deidentified incidents they deemed new or likely to occur more frequently due to eMMS, together with the Incident Severity Rating (ISR). The authors reviewed and classified incidents. Results. There were 5826 medication-related incidents reported. In total, 93 (47 prescribing errors, 46 administration errors) were identified as new or potentially related to eMMS. Only one ISR2 (moderate) and no ISR1 (severe or death) errors were reported, so harm to patients in this 3-month period was minimal. The most commonly reported error types were 'human factors' and 'unfamiliarity or training' (70%) and 'cross-encounter or hybrid system errors' (22%). Conclusions. Although the results suggest that the errors reported were of low severity, organisations must remain vigilant to the risk of new errors and avoid the assumption that eMMS is the panacea to all medication error issues. What is known about the topic? eMMS have been shown to reduce some types of medication errors, but it has been reported that some new medication errors have been identified and some are likely to occur more frequently with eMMS. There are few published Australian studies that have reported on medication error types that are likely to occur more frequently with eMMS in more than one organisation and that include administration and prescribing errors. What does this paper add? This paper includes a new simple classification system for eMMS that is useful and outlines the most commonly reported incident types and can inform organisations and vendors on possible eMMS improvements. The paper suggests a new classification system for eMMS medication errors. What are the implications for practitioners? The results of the present study will highlight to organisations the need for ongoing review of system design, refinement of workflow issues, staff education and training and reporting and monitoring of errors.

  14. EEG error potentials detection and classification using time-frequency features for robot reinforcement learning.

    PubMed

    Boubchir, Larbi; Touati, Youcef; Daachi, Boubaker; Chérif, Arab Ali

    2015-08-01

    In thought-based steering of robots, error potentials (ErrP) can appear when the action resulting from the brain-machine interface (BMI) classifier/controller does not correspond to the user's thought. Using the Steady State Visual Evoked Potentials (SSVEP) techniques, ErrP, which appear when a classification error occurs, are not easily recognizable by only examining the temporal or frequency characteristics of EEG signals. A supplementary classification process is therefore needed to identify them in order to stop the course of the action and back up to a recovery state. This paper presents a set of time-frequency (t-f) features for the detection and classification of EEG ErrP in extra-brain activities due to misclassification observed by a user exploiting non-invasive BMI and robot control in the task space. The proposed features are able to characterize and detect ErrP activities in the t-f domain. These features are derived from the information embedded in the t-f representation of EEG signals, and include the Instantaneous Frequency (IF), t-f information complexity, SVD information, energy concentration and sub-bands' energies. The experiment results on real EEG data show that the use of the proposed t-f features for detecting and classifying EEG ErrP achieved an overall classification accuracy up to 97% for 50 EEG segments using 2-class SVM classifier.

  15. Classification of crops across heterogeneous agricultural landscape in Kenya using AisaEAGLE imaging spectroscopy data

    NASA Astrophysics Data System (ADS)

    Piiroinen, Rami; Heiskanen, Janne; Mõttus, Matti; Pellikka, Petri

    2015-07-01

    Land use practices are changing at a fast pace in the tropics. In sub-Saharan Africa forests, woodlands and bushlands are being transformed for agricultural use to produce food for the rapidly growing population. The objective of this study was to assess the prospects of mapping the common agricultural crops in highly heterogeneous study area in south-eastern Kenya using high spatial and spectral resolution AisaEAGLE imaging spectroscopy data. Minimum noise fraction transformation was used to pack the coherent information in smaller set of bands and the data was classified with support vector machine (SVM) algorithm. A total of 35 plant species were mapped in the field and seven most dominant ones were used as classification targets. Five of the targets were agricultural crops. The overall accuracy (OA) for the classification was 90.8%. To assess the possibility of excluding the remaining 28 plant species from the classification results, 10 different probability thresholds (PT) were tried with SVM. The impact of PT was assessed with validation polygons of all 35 mapped plant species. The results showed that while PT was increased more pixels were excluded from non-target polygons than from the polygons of the seven classification targets. This increased the OA and reduced salt-and-pepper effects in the classification results. Very high spatial resolution imagery and pixel-based classification approach worked well with small targets such as maize while there was mixing of classes on the sides of the tree crowns.

  16. Evaluation of spatial filtering on the accuracy of wheat area estimate

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Moreira, M. A.; Chen, S. C.; Delima, A. M.

    1982-01-01

    A 3 x 3 pixel spatial filter for postclassification was used for wheat classification to evaluate the effects of this procedure on the accuracy of area estimation using LANDSAT digital data obtained from a single pass. Quantitative analyses were carried out in five test sites (approx. 40 sq km each), and t tests showed that filtering with threshold values significantly decreased errors of commission and omission. In area estimation, filtering reduced the overestimate from 4.5% to 2.7%, and the root-mean-square error decreased from 126.18 ha to 107.02 ha. Extrapolating the same procedure of automatic classification with postclassification spatial filtering to the whole study area, the overestimate in the area estimate was reduced from 10.9% to 9.7%. It is concluded that when single-pass LANDSAT data are used for crop identification and area estimation, the postclassification procedure using a spatial filter provides a more accurate area estimate by reducing classification errors.
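
    A 3 x 3 post-classification majority filter of the kind evaluated here can be written in a few lines with scipy, as sketched below; the toy classified map, the 10% error rate, and the plain (unthresholded) majority rule are illustrative assumptions.

      import numpy as np
      from scipy.ndimage import generic_filter

      def majority(window):
          # Replace the centre pixel with the most frequent label in the window.
          values, counts = np.unique(window.astype(int), return_counts=True)
          return values[np.argmax(counts)]

      rng = np.random.default_rng(0)
      classified = np.zeros((60, 60), dtype=int)
      classified[20:40, 20:40] = 1                          # a "wheat" block
      noisy = classified.copy()
      flip = rng.random(classified.shape) < 0.1             # 10% salt-and-pepper errors
      noisy[flip] = 1 - noisy[flip]

      smoothed = generic_filter(noisy, majority, size=3, mode="nearest")
      print("errors before filtering:", int((noisy != classified).sum()))
      print("errors after filtering: ", int((smoothed != classified).sum()))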

  17. Maximum likelihood estimation of label imperfections and its use in the identification of mislabeled patterns

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    The problem of estimating label imperfections and the use of the estimation in identifying mislabeled patterns is presented. Expressions for the maximum likelihood estimates of classification errors and a priori probabilities are derived from the classification of a set of labeled patterns. Expressions also are given for the asymptotic variances of probability of correct classification and proportions. Simple models are developed for imperfections in the labels and for classification errors and are used in the formulation of a maximum likelihood estimation scheme. Schemes are presented for the identification of mislabeled patterns in terms of threshold on the discriminant functions for both two-class and multiclass cases. Expressions are derived for the probability that the imperfect label identification scheme will result in a wrong decision and are used in computing thresholds. The results of practical applications of these techniques in the processing of remotely sensed multispectral data are presented.

  18. Misclassification Errors in Unsupervised Classification Methods. Comparison Based on the Simulation of Targeted Proteomics Data

    PubMed Central

    Andreev, Victor P; Gillespie, Brenda W; Helfand, Brian T; Merion, Robert M

    2016-01-01

    Unsupervised classification methods are gaining acceptance in omics studies of complex common diseases, which are often vaguely defined and are likely collections of disease subtypes. Unsupervised classification based on the molecular signatures identified in omics studies has the potential to reflect the molecular mechanisms of the disease subtypes and to lead to more targeted and successful interventions for the identified subtypes. Multiple classification algorithms exist, but none is ideal for all types of data. Importantly, there are no established methods to estimate sample size in unsupervised classification (unlike power analysis in hypothesis testing). Therefore, we developed a simulation approach allowing comparison of misclassification errors and estimation of the required sample size for a given effect size, number, and correlation matrix of the differentially abundant proteins in targeted proteomics studies. All the experiments were performed in silico. The simulated data imitated the data expected from a study of the plasma of patients with lower urinary tract dysfunction with the aptamer proteomics assay Somascan (SomaLogic Inc, Boulder, CO), which targeted 1129 proteins, including 330 involved in inflammation, 180 in stress response, 80 in aging, etc. Three popular clustering methods (hierarchical, k-means, and k-medoids) were compared. K-means clustering performed much better on the simulated data than the other two methods and enabled classification with a misclassification error below 5% in the simulated cohort of 100 patients based on the molecular signatures of 40 differentially abundant proteins (effect size 1.5) from among the 1129-protein panel. PMID:27524871
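
    The simulation idea can be sketched as follows (Python with scikit-learn; the cohort size, number of proteins, 40 informative proteins, and effect size follow the abstract, while the independent standard-normal noise is a simplifying assumption in place of the study's correlation matrix):

        # Sketch: two simulated subtypes differing in 40 of 1129 "proteins"
        # (effect size 1.5), clustered by k-means and hierarchical clustering and
        # scored by misclassification error.
        import numpy as np
        from sklearn.cluster import KMeans, AgglomerativeClustering

        rng = np.random.default_rng(1)
        n_patients, n_proteins, n_informative, effect = 100, 1129, 40, 1.5
        truth = np.repeat([0, 1], n_patients // 2)
        X = rng.normal(size=(n_patients, n_proteins))
        X[truth == 1, :n_informative] += effect       # shift the informative proteins

        def misclassification(labels, truth):
            agree = np.mean(labels == truth)
            return min(agree, 1 - agree)              # allow for label switching

        for name, model in [("k-means", KMeans(n_clusters=2, n_init=10, random_state=0)),
                            ("hierarchical", AgglomerativeClustering(n_clusters=2))]:
            print(name, "error:", misclassification(model.fit_predict(X), truth))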

  19. A basic introduction to statistics for the orthopaedic surgeon.

    PubMed

    Bertrand, Catherine; Van Riet, Roger; Verstreken, Frederik; Michielsen, Jef

    2012-02-01

    Orthopaedic surgeons should review the orthopaedic literature in order to keep pace with the latest insights and practices. A good understanding of basic statistical principles is of crucial importance to the ability to read articles critically, to interpret results and to arrive at correct conclusions. This paper explains some of the key concepts in statistics, including hypothesis testing, Type I and Type II errors, testing of normality, sample size and p values.

  20. Computer discrimination procedures applicable to aerial and ERTS multispectral data

    NASA Technical Reports Server (NTRS)

    Richardson, A. J.; Torline, R. J.; Allen, W. A.

    1970-01-01

    Two statistical models are compared in the classification of crops recorded on color aerial photographs. A theory of error ellipses is applied to the pattern recognition problem. An elliptical boundary condition classification model (EBC), useful for recognition of candidate patterns, evolves out of error ellipse theory. The EBC model is compared with the minimum distance to the mean (MDM) classification model in terms of pattern recognition ability. The pattern recognition results of both models are interpreted graphically using scatter diagrams to represent measurement space. Measurement space, for this report, is determined by optical density measurements collected from Kodak Ektachrome Infrared Aero Film 8443 (EIR). The EBC model is shown to be a significant improvement over the MDM model.
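
    One way to contrast the two decision rules is sketched below (Python/NumPy; the two-dimensional Gaussian classes stand in for optical-density measurements, and the Mahalanobis-distance rule is used here as a simple realization of an elliptical boundary, not necessarily the exact EBC formulation of the report):

        # Sketch: minimum-distance-to-the-mean (MDM) rule versus an elliptical
        # boundary rule based on each class covariance (Mahalanobis distance).
        import numpy as np

        rng = np.random.default_rng(2)
        means = [np.array([0.0, 0.0]), np.array([2.0, 1.0])]
        covs = [np.array([[1.0, 0.6], [0.6, 1.0]]),
                np.array([[0.5, -0.2], [-0.2, 1.5]])]
        X = np.vstack([rng.multivariate_normal(m, c, 200) for m, c in zip(means, covs)])
        y = np.repeat([0, 1], 200)

        def mdm(x):                                   # nearest class mean
            return int(np.argmin([np.linalg.norm(x - m) for m in means]))

        def ebc(x):                                   # smallest Mahalanobis distance
            d = [(x - m) @ np.linalg.inv(c) @ (x - m) for m, c in zip(means, covs)]
            return int(np.argmin(d))

        for name, rule in [("MDM", mdm), ("EBC", ebc)]:
            pred = np.array([rule(x) for x in X])
            print(name, "error rate:", np.mean(pred != y))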

  1. Statistical learning from nonrecurrent experience with discrete input variables and recursive-error-minimization equations

    NASA Astrophysics Data System (ADS)

    Carter, Jeffrey R.; Simon, Wayne E.

    1990-08-01

    Neural networks are trained using Recursive Error Minimization (REM) equations to perform statistical classification. Using REM equations with continuous input variables reduces the required number of training experiences by one to two orders of magnitude over standard backpropagation. Replacing the continuous input variables with discrete binary representations reduces the number of connections by a factor proportional to the number of variables, reducing the required number of experiences by another order of magnitude. Undesirable effects of using recurrent experience to train neural networks for statistical classification problems are demonstrated, and nonrecurrent experience is used to avoid these undesirable effects. 1. THE I-4I PROBLEM. The statistical classification problem which we address is that of assigning points in d-dimensional space to one of two classes. The first class has a covariance matrix of I (the identity matrix); the covariance matrix of the second class is 4I. For this reason the problem is known as the I-4I problem. Both classes have equal probability of occurrence, and samples from both classes may appear anywhere throughout the d-dimensional space. Most samples near the origin of the coordinate system will be from the first class, while most samples away from the origin will be from the second class. Since the two classes completely overlap, it is impossible to have a classifier with zero error. The minimum possible error is known as the Bayes error and
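
    The I-4I problem is easy to reproduce numerically; the sketch below (Python with NumPy/SciPy; the dimension d = 4 is an illustrative choice) draws samples from the two classes and estimates the Bayes error of the likelihood-ratio rule by Monte Carlo:

        # Sketch: equal-prior Gaussian classes with covariances I and 4I; the Bayes
        # rule compares the class log-likelihoods and its error is estimated by
        # Monte Carlo.
        import numpy as np
        from scipy.stats import multivariate_normal

        d, n = 4, 100_000
        rng = np.random.default_rng(3)
        X0 = rng.multivariate_normal(np.zeros(d), np.eye(d), n)      # class 0: I
        X1 = rng.multivariate_normal(np.zeros(d), 4 * np.eye(d), n)  # class 1: 4I

        p0 = multivariate_normal(np.zeros(d), np.eye(d))
        p1 = multivariate_normal(np.zeros(d), 4 * np.eye(d))

        def bayes_decision(X):
            return (p1.logpdf(X) > p0.logpdf(X)).astype(int)

        err = 0.5 * np.mean(bayes_decision(X0) == 1) + 0.5 * np.mean(bayes_decision(X1) == 0)
        print("Monte Carlo estimate of the Bayes error:", err)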

  2. Occurrence of phrenic nerve stimulation in cardiac resynchronization therapy patients: the role of left ventricular lead type and placement site.

    PubMed

    Biffi, Mauro; Exner, Derek V; Crossley, George H; Ramza, Brian; Coutu, Benoit; Tomassoni, Gery; Kranig, Wolfgang; Li, Shelby; Kristiansen, Nina; Voss, Frederik

    2013-01-01

    Unwanted phrenic nerve stimulation (PNS) has been reported in ∼1 in 4 patients undergoing left ventricular (LV) pacing. The occurrence of PNS over mid-term follow-up and the significance of PNS are less certain. Data from 1307 patients enrolled in pre-market studies of LV leads manufactured by Medtronic (models 4193 and 4195 unipolar, 4194, 4196, 4296, and 4396 bipolar) were pooled. Left ventricular lead location was recorded at implant using a common classification scheme. Phrenic nerve stimulation symptoms were either spontaneously reported or identified at scheduled follow-up visits. A PNS-related complication was defined as PNS resulting in invasive intervention or the termination of LV pacing. Average follow-up was 14.9 months (range 0.0-46.6). Phrenic nerve stimulation symptoms occurred in 169 patients (12.9%). Phrenic nerve stimulation-related complications occurred in 21 of 1307 patients (1.6%); 16 of 738 (2.2%) in the unipolar lead studies, and 5 of 569 (0.9%) in the bipolar lead studies (P = 0.08). Phrenic nerve stimulation was more frequent at middle-lateral/posterior, and apical LV sites (139/1010) vs. basal-posterior/lateral/anterior, and middle-anterior sites (20/297; P= 0.01). As compared with an anterior LV lead position, a lateral LV pacing site was associated with over a four-fold higher risk of PNS (P= 0.005) and an apical LV pacing site was associated with over six-fold higher risk of PNS (P= 0.001). Phrenic nerve stimulation occurred in 13% of patients undergoing LV lead placement and was more common at mid-lateral/posterior, and LV apical sites. Most cases (123/139; 88%) of PNS were mitigated via electrical reprogramming, without the need for invasive intervention.

  3. Systematic review and meta-analysis of left ventricular endocardial pacing in advanced heart failure: Clinically efficacious but at what cost?

    PubMed

    Graham, Adam J; Providenica, Rui; Honarbakhsh, Shohreh; Srinivasan, Neil; Sawhney, Vinit; Hunter, Ross; Lambiase, Pier

    2018-04-01

    Cardiac resynchronization using a left ventricular (LV) epicardial lead placed in the coronary sinus is now routinely used in the management of heart failure patients. LV endocardial pacing is an alternative when this is not feasible, but outcomes data are sparse. The aim was to review the available evidence on the efficacy and safety of endocardial LV pacing via meta-analysis. EMBASE, MEDLINE, and COCHRANE databases were searched with the terms "endocardial biventricular pacing", "endocardial cardiac resynchronization", "left ventricular endocardial", or "endocardial left ventricular". Comparisons of pre- and post-implantation QRS width, LV ejection fraction (LVEF), and New York Heart Association (NYHA) functional classification were performed, and mean differences (with respective 95% confidence intervals [CI]) were applied as the measure of treatment effect. Fifteen studies, including 362 patients, were selected. During a mean follow-up of 40 ± 24.5 months, death occurred in 72 patients (11 per 100 patient-years). Significant improvements occurred in LVEF (mean difference 7.9%, 95% CI 5-10%; I² = 73%), QRS width (mean difference -41%, 95% CI -75 to -7%; I² = 94%), and NYHA class (mean difference -1.06, 95% CI -1.2 to -0.9; I² = 60%) (all P < 0.0001). The stroke rate was 3.3-4.2 per 100 patient-years, which is higher than in equivalent heart failure trial populations and in a recent meta-analysis that included small case series. LV endocardial lead implantation is a potentially efficacious alternative to CS lead placement, but preliminary data suggest a potentially higher risk of stroke during follow-up when compared with the expected incidence of stroke in similar cohorts of patients. © 2018 Wiley Periodicals, Inc.

  4. Differentiation of ventricular and supraventricular tachycardias based on the analysis of the first postpacing interval after sequential anti-tachycardia pacing in implantable cardioverter-defibrillator patients.

    PubMed

    Arenal, Angel; Ortiz, Mercedes; Peinado, Rafael; Merino, Jose L; Quesada, Aurelio; Atienza, Felipe; Alberola, Arcadio García; Ormaetxe, Jose; Castellanos, Eduardo; Rodriguez, Juan C; Pérez, Nicasio; García, Javier; Boluda, Luis; del Prado, Mario; Artés, Antonio

    2007-03-01

    Current discrimination algorithms do not completely avoid inappropriate tachycardia detection. This study analyzes the discrimination capability of the changes of the first postpacing interval (FPPI) after successive bursts of anti-tachycardia pacing (ATP) trains in implantable cardioverter-defibrillator (ICD)-recorded tachycardias. We included 50 ICD patients in this prospective study. We hypothesized that the FPPI variability (FPPIV) when comparing bursts with different numbers of beats would be shorter in ventricular tachycardias (VTs) compared with supraventricular tachycardias (SVTs). The ATP (5-10 pulses, 91% of tachycardia cycle length) was programmed for tachycardias >240 ms. Anti-tachycardia pacing was delivered during 37 sinus tachycardias (STs) in an exercise test, 96 induced VTs in an electrophysiological study, and 198 spontaneous episodes (144 VTs and 54 SVTs). The FPPI remained stable after all ATP bursts in VT but changed continuously in SVT; when comparing bursts of 5 and 10 pulses, the FPPIV was shorter in VT (34 ± 65 vs. 138 ± 69, P < .0001, in all tachycardias, and 12 ± 20 vs. 138 ± 69, P < .0001, in tachycardias ≥320 ms) than in SVT. In tachycardias ≥320 ms an FPPIV

  5. Calibration of remotely sensed proportion or area estimates for misclassification error

    Treesearch

    Raymond L. Czaplewski; Glenn P. Catts

    1992-01-01

    Classifications of remotely sensed data contain misclassification errors that bias areal estimates. Monte Carlo techniques were used to compare two statistical methods that correct or calibrate remotely sensed areal estimates for misclassification bias using reference data from an error matrix. The inverse calibration estimator was consistently superior to the...
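
    A generic sketch of error-matrix calibration of areal proportions is shown below (Python/NumPy; the two-class error matrix and map proportions are made up for illustration, and this inverse-type correction is only a simplified stand-in for the estimators compared in the paper):

        # Sketch: correct map-based class proportions with the conditional
        # misclassification probabilities estimated from reference data.
        import numpy as np

        # rows = true class, columns = mapped class; entries are reference counts
        error_matrix = np.array([[90.0, 10.0],     # true class 0
                                 [20.0, 80.0]])    # true class 1
        P = error_matrix / error_matrix.sum(axis=1, keepdims=True)  # P(mapped | true)

        mapped_proportions = np.array([0.55, 0.45])  # proportions read off the map
        # the map proportions satisfy q = P.T @ p, so solve for the true proportions
        calibrated = np.linalg.solve(P.T, mapped_proportions)
        print("calibrated class proportions:", calibrated)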

  6. Mapping gully-affected areas in the region of Taroudannt, Morocco based on Object-Based Image Analysis (OBIA)

    NASA Astrophysics Data System (ADS)

    d'Oleire-Oltmanns, Sebastian; Marzolff, Irene; Tiede, Dirk; Blaschke, Thomas

    2015-04-01

    The need for area-wide landform mapping approaches, especially in the context of land degradation, arises because such approaches consider the (spatial) context of erosional landforms by providing additional information on the physiography neighboring the distinct landform. This study presents an approach for the detection of gully-affected areas by applying object-based image analysis in the region of Taroudannt, Morocco, which is highly affected by gully erosion while simultaneously representing a major region of agro-industry with a high demand for arable land. Various sensors provide readily available high-resolution optical satellite data with a much better temporal resolution than 3D terrain data, which led to the development of an area-wide mapping approach that extracts gully-affected areas using only optical satellite imagery. The classification rule-set was developed with a clear focus on virtual spatial independence within the software environment of eCognition Developer. This allows the incorporation of knowledge about the target objects under investigation. Only optical QuickBird-2 satellite data and freely available OpenStreetMap (OSM) vector data were used as input data. The OSM vector data were incorporated in order to mask out plantations and residential areas. Optical input data are more readily available for a broad range of users than terrain data, which is considered a major advantage. The methodology additionally incorporates expert knowledge and freely available vector data in a cyclic object-based image analysis approach. This connects the two fields of geomorphology and remote sensing. The classification results allow conclusions on the current distribution of gullies. The results of the classification were checked against manually delineated reference data incorporating expert knowledge from several field campaigns in the area, resulting in an overall classification accuracy of 62%. The error of omission accounts for 38% and the error of commission for 16%. Additionally, a manual assessment was carried out to assess the quality of the applied classification algorithm. The limited error of omission contributes 23% to the overall error of omission and the limited error of commission contributes 98% to the overall error of commission. This assessment improves the results and confirms the high quality of the developed approach for area-wide mapping of gully-affected areas in larger regions. In the field of landform mapping, the overall quality of the classification results is often assessed with more than one method to incorporate all aspects adequately.

  7. Agreement processing and attraction errors in aging: evidence from subject-verb agreement in German.

    PubMed

    Reifegerste, Jana; Hauer, Franziska; Felser, Claudia

    2017-11-01

    Effects of aging on lexical processing are well attested, but the picture is less clear for grammatical processing. Where age differences emerge, these are usually ascribed to working-memory (WM) decline. Previous studies on the influence of WM on agreement computation have yielded inconclusive results, and work on aging and subject-verb agreement processing is lacking. In two experiments (Experiment 1: timed grammaticality judgment, Experiment 2: self-paced reading + WM test), we investigated older (OA) and younger (YA) adults' susceptibility to agreement attraction errors. We found longer reading latencies and judgment reaction times (RTs) for OAs. Further, OAs, particularly those with low WM scores, were more accepting of sentences with attraction errors than YAs. OAs showed longer reading latencies for ungrammatical sentences, again modulated by WM, than YAs. Our results indicate that OAs have greater difficulty blocking intervening nouns from interfering with the computation of agreement dependencies. WM can modulate this effect.

  8. Maximum-likelihood techniques for joint segmentation-classification of multispectral chromosome images.

    PubMed

    Schwartzkopf, Wade C; Bovik, Alan C; Evans, Brian L

    2005-12-01

    Traditional chromosome imaging has been limited to grayscale images, but recently a 5-fluorophore combinatorial labeling technique (M-FISH) was developed wherein each class of chromosomes binds with a different combination of fluorophores. This results in a multispectral image, where each class of chromosomes has distinct spectral components. In this paper, we develop new methods for automatic chromosome identification by exploiting the multispectral information in M-FISH chromosome images and by jointly performing chromosome segmentation and classification. We (1) develop a maximum-likelihood hypothesis test that uses multispectral information, together with conventional criteria, to select the best segmentation possibility; (2) use this likelihood function to combine chromosome segmentation and classification into a robust chromosome identification system; and (3) show that the proposed likelihood function can also be used as a reliable indicator of errors in segmentation, errors in classification, and chromosome anomalies, which can be indicators of radiation damage, cancer, and a wide variety of inherited diseases. We show that the proposed multispectral joint segmentation-classification method outperforms past grayscale segmentation methods when decomposing touching chromosomes. We also show that it outperforms past M-FISH classification techniques that do not use segmentation information.

  9. Classification accuracy on the family planning participation status using kernel discriminant analysis

    NASA Astrophysics Data System (ADS)

    Kurniawan, Dian; Suparti; Sugito

    2018-05-01

    Population growth in Indonesia has increased every year. According to the population census conducted by the Central Bureau of Statistics (BPS) in 2010, the population of Indonesia has reached 237.6 million people. Therefore, to control the population growth rate, the government runs the Family Planning, or Keluarga Berencana (KB), program for couples of childbearing age. The purpose of this program is to improve the health of mothers and children and to build a prosperous society by controlling births and thereby the population growth. The data used in this study are the updated family data of Semarang city in 2016 collected by the National Family Planning Coordinating Board (BKKBN). From these data, classifiers are built with kernel discriminant analysis and their classification accuracy is estimated. The analysis showed that normal kernel discriminant analysis gives 71.05% classification accuracy with 28.95% classification error, whereas triweight kernel discriminant analysis gives 73.68% classification accuracy with 26.32% classification error. For classifying the family planning participation of childbearing-age couples in Semarang City in 2016, the triweight kernel discriminant can therefore be considered better than the normal kernel discriminant.

  10. A parametric multiclass Bayes error estimator for the multispectral scanner spatial model performance evaluation

    NASA Technical Reports Server (NTRS)

    Mobasseri, B. G.; Mcgillem, C. D.; Anuta, P. E. (Principal Investigator)

    1978-01-01

    The author has identified the following significant results. The probability of correct classification of the various populations in the data was defined as the primary performance index. Because the multispectral data are multiclass in nature, a Bayes error estimation procedure that depends on a set of class statistics alone was required. The classification error was expressed in terms of an N-dimensional integral, where N was the dimensionality of the feature space. The multispectral scanner spatial model was represented by a linear shift-invariant multiple-port system in which the N spectral bands comprised the input processes. The scanner characteristic function, the relationship governing the transformation of the input spatial (and hence spectral) correlation matrices through the system, was developed.

  11. Stochastic Modulations of the Pace and Patterns of Ageing: Impacts on Quasi-Stochastic Distributions of Multiple Geriatric Pathologies

    PubMed Central

    Martin, George M.

    2011-01-01

    All phenotypes result from interactions between Nature, Nurture and Chance. The constitutional genome is clearly the dominant factor in explaining the striking differences in the pace and patterns of ageing among species. We are now in a position to reveal salient features underlying these differential modulations, which are likely to be dominated by regulatory domains. By contrast, I shall argue that stochastic events are the major players underlying the surprisingly large intra-specific variations in lifespan and healthspan. I shall review well-established as well as more speculative categories of chance events – somatic mutations, protein synthesis error catastrophe and variegations of gene expression (epigenetic drift), with special emphasis upon the latter. I shall argue that stochastic drifts in variegated gene expression are the major contributors to intra-specific differences in the pace and patterns of ageing within members of the same species. They may be responsible for the quasi-stochastic distributions of major types of geriatric pathologies, including the “big three” of Alzheimer's disease, atherosclerosis and, via the induction of hyperplasia, cancer. They may be responsible for altered stoichiometries of heteromultimeric mitochondrial complexes, potentially leading to such disorders as sarcopenia, nonischemic cardiomyopathy and Parkinson's disease. PMID:21963385

  12. Using Gaussian mixture models to detect and classify dolphin whistles and pulses.

    PubMed

    Peso Parada, Pablo; Cardenal-López, Antonio

    2014-06-01

    In recent years, a number of automatic detection systems for free-ranging cetaceans have been proposed that aim to detect not just surfaced, but also submerged, individuals. These systems are typically based on pattern-recognition techniques applied to underwater acoustic recordings. Using a Gaussian mixture model, a classification system was developed that detects sounds in recordings and classifies them as one of four types: background noise, whistles, pulses, and combined whistles and pulses. The classifier was tested using a database of underwater recordings made off the Spanish coast during 2011. Using cepstral-coefficient-based parameterization, a sound detection rate of 87.5% was achieved for a 23.6% classification error rate. To improve these results, two parameters computed using the multiple signal classification algorithm and an unpredictability measure were included in the classifier. These parameters, which helped to classify the segments containing whistles, increased the detection rate to 90.3% and reduced the classification error rate to 18.1%. Finally, the potential of the multiple signal classification algorithm and unpredictability measure for estimating whistle contours and classifying cetacean species was also explored, with promising results.
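
    A minimal sketch of the Gaussian-mixture classification step (Python with scikit-learn; the synthetic feature vectors stand in for cepstral coefficients, and the class set and mixture sizes are illustrative, not the paper's configuration):

        # Sketch: one Gaussian mixture per sound class; a segment is assigned to
        # the class whose mixture gives the highest average log-likelihood.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(4)
        classes = ["noise", "whistle", "pulse", "whistle+pulse"]
        train = {c: rng.normal(loc=i, size=(300, 12)) for i, c in enumerate(classes)}

        models = {c: GaussianMixture(n_components=4, covariance_type="diag",
                                     random_state=0).fit(X) for c, X in train.items()}

        def classify(segment_frames):
            scores = {c: m.score(segment_frames) for c, m in models.items()}
            return max(scores, key=scores.get)

        test_segment = rng.normal(loc=2, size=(50, 12))   # frames of one segment
        print("predicted class:", classify(test_segment))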

  13. Unbiased Taxonomic Annotation of Metagenomic Samples

    PubMed Central

    Fosso, Bruno; Pesole, Graziano; Rosselló, Francesc

    2018-01-01

    Abstract The classification of reads from a metagenomic sample using a reference taxonomy is usually based on first mapping the reads to the reference sequences and then classifying each read at a node under the lowest common ancestor of the candidate sequences in the reference taxonomy with the least classification error. However, this taxonomic annotation can be biased by an imbalanced taxonomy and also by the presence of multiple nodes in the taxonomy with the least classification error for a given read. In this article, we show that the Rand index is a better indicator of classification error than the often used area under the receiver operating characteristic (ROC) curve and F-measure for both balanced and imbalanced reference taxonomies, and we also address the second source of bias by reducing the taxonomic annotation problem for a whole metagenomic sample to a set cover problem, for which a logarithmic approximation can be obtained in linear time and an exact solution can be obtained by integer linear programming. Experimental results with a proof-of-concept implementation of the set cover approach to taxonomic annotation in a next release of the TANGO software show that the set cover approach further reduces ambiguity in the taxonomic annotation obtained with TANGO without distorting the relative abundance profile of the metagenomic sample. PMID:29028181
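
    The logarithmic approximation mentioned above is the guarantee of the standard greedy algorithm for set cover; a minimal sketch (Python; the tiny reads/nodes instance is invented for illustration) is:

        # Sketch: greedy set cover.  "Elements" stand in for reads and "sets" for
        # candidate taxonomy nodes; each step picks the set covering the most
        # still-uncovered elements.
        def greedy_set_cover(universe, sets):
            uncovered = set(universe)
            chosen = []
            while uncovered:
                best = max(sets, key=lambda name: len(sets[name] & uncovered))
                if not sets[best] & uncovered:
                    raise ValueError("the universe cannot be covered by these sets")
                chosen.append(best)
                uncovered -= sets[best]
            return chosen

        reads = {"r1", "r2", "r3", "r4", "r5"}
        nodes = {"A": {"r1", "r2"}, "B": {"r2", "r3", "r4"},
                 "C": {"r4", "r5"}, "D": {"r5"}}
        print(greedy_set_cover(reads, nodes))      # -> ['B', 'A', 'C']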

  14. Global land cover mapping: a review and uncertainty analysis

    USGS Publications Warehouse

    Congalton, Russell G.; Gu, Jianyu; Yadav, Kamini; Thenkabail, Prasad S.; Ozdogan, Mutlu

    2014-01-01

    Given the advances in remotely sensed imagery and associated technologies, several global land cover maps have been produced in recent times including IGBP DISCover, UMD Land Cover, Global Land Cover 2000 and GlobCover 2009. However, the utility of these maps for specific applications has often been hampered due to considerable amounts of uncertainties and inconsistencies. A thorough review of these global land cover projects including evaluating the sources of error and uncertainty is prudent and enlightening. Therefore, this paper describes our work in which we compared, summarized and conducted an uncertainty analysis of the four global land cover mapping projects using an error budget approach. The results showed that the classification scheme and the validation methodology had the highest error contribution and implementation priority. A comparison of the classification schemes showed that there are many inconsistencies between the definitions of the map classes. This is especially true for the mixed type classes for which thresholds vary for the attributes/discriminators used in the classification process. Examination of these four global mapping projects provided quite a few important lessons for the future global mapping projects including the need for clear and uniform definitions of the classification scheme and an efficient, practical, and valid design of the accuracy assessment.

  15. Defining and classifying medical error: lessons for patient safety reporting systems.

    PubMed

    Tamuz, M; Thomas, E J; Franchois, K E

    2004-02-01

    It is important for healthcare providers to report safety related events, but little attention has been paid to how the definition and classification of events affects a hospital's ability to learn from its experience. To examine how the definition and classification of safety related events influences key organizational routines for gathering information, allocating incentives, and analyzing event reporting data. In semi-structured interviews, professional staff and administrators in a tertiary care teaching hospital and its pharmacy were asked to describe the existing programs designed to monitor medication safety, including the reporting systems. With a focus primarily on the pharmacy staff, interviews were audio recorded, transcribed, and analyzed using qualitative research methods. Eighty six interviews were conducted, including 36 in the hospital pharmacy. Examples are presented which show that: (1) the definition of an event could lead to under-reporting; (2) the classification of a medication error into alternative categories can influence the perceived incentives and disincentives for incident reporting; (3) event classification can enhance or impede organizational routines for data analysis and learning; and (4) routines that promote organizational learning within the pharmacy can reduce the flow of medication error data to the hospital. These findings from one hospital raise important practical and research questions about how reporting systems are influenced by the definition and classification of safety related events. By understanding more clearly how hospitals define and classify their experience, we may improve our capacity to learn and ultimately improve patient safety.

  16. Fisher classifier and its probability of error estimation

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    Computationally efficient expressions are derived for estimating the probability of error using the leave-one-out method. The optimal threshold for the classification of patterns projected onto Fisher's direction is derived. A simple generalization of the Fisher classifier to multiple classes is presented. Computational expressions are developed for estimating the probability of error of the multiclass Fisher classifier.
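
    For illustration, the quantities involved can be computed directly: the sketch below (Python/NumPy; synthetic two-class data, a simple midpoint threshold, and a brute-force leave-one-out loop rather than the paper's efficient closed-form expressions) fits the Fisher direction and estimates its probability of error by leave-one-out:

        # Sketch: two-class Fisher discriminant with a naive leave-one-out
        # estimate of the probability of error.
        import numpy as np

        rng = np.random.default_rng(5)
        X = np.vstack([rng.normal(0, 1, (60, 3)), rng.normal(1.5, 1, (60, 3))])
        y = np.repeat([0, 1], 60)

        def fisher_rule(Xtr, ytr):
            m0, m1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
            Sw = np.cov(Xtr[ytr == 0], rowvar=False) + np.cov(Xtr[ytr == 1], rowvar=False)
            w = np.linalg.solve(Sw, m1 - m0)          # Fisher direction
            threshold = w @ (m0 + m1) / 2             # simple midpoint threshold
            return lambda x: int(w @ x > threshold)

        errors = 0
        for i in range(len(y)):                       # leave-one-out
            keep = np.arange(len(y)) != i
            errors += fisher_rule(X[keep], y[keep])(X[i]) != y[i]
        print("leave-one-out error estimate:", errors / len(y))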

  17. Human factors analysis and classification system-HFACS.

    DOT National Transportation Integrated Search

    2000-02-01

    Human error has been implicated in 70 to 80% of all civil and military aviation accidents. Yet, most accident : reporting systems are not designed around any theoretical framework of human error. As a result, most : accident databases are not conduci...

  18. Effect of T-2 Toxin, Fasting, and 2-Methyl-thiazolidine-4-carboxylate, a Glutathione Prodrug, on Hepatic Glutathione Levels1,2

    DTIC Science & Technology

    1986-11-14

    ...Glende, 1973). An important cellular defense against peroxidative damage is the presence of glutathione and its use as an enzyme substrate or cofactor. Even though intracellular glutathione concentration is in the millimolar range (Kosower and Kosower, 1978), there are conditions which lead to

  19. IMPACTS OF PATCH SIZE AND LANDSCAPE HETEROGENEITY ON THEMATIC IMAGE CLASSIFICATION ACCURACY

    EPA Science Inventory

    Impacts of Patch Size and Landscape Heterogeneity on Thematic Image Classification Accuracy.
    Currently, most thematic accuracy assessments of classified remotely sensed images only account for errors between the various classes employed, at particular pixels of interest, thu...

  20. Sea ice classification using fast learning neural networks

    NASA Technical Reports Server (NTRS)

    Dawson, M. S.; Fung, A. K.; Manry, M. T.

    1992-01-01

    A fast learning neural network approach to the classification of sea ice is presented. The fast learning (FL) neural network and a multilayer perceptron (MLP) trained with backpropagation learning (BP network) were tested on simulated data sets based on the known dominant scattering characteristics of the target class. Four classes were used in the data simulation: open water, thick lossy saline ice, thin saline ice, and multiyear ice. The BP network was unable to consistently converge to less than 25 percent error while the FL method yielded an average error of approximately 1 percent on the first iteration of training. The fast learning method presented can significantly reduce the CPU time necessary to train a neural network as well as consistently yield higher classification accuracy than BP networks.

  1. High-density force myography: A possible alternative for upper-limb prosthetic control.

    PubMed

    Radmand, Ashkan; Scheme, Erik; Englehart, Kevin

    2016-01-01

    Several multiple degree-of-freedom upper-limb prostheses that have the promise of highly dexterous control have recently been developed. Inadequate controllability, however, has limited adoption of these devices. Introducing more robust control methods will likely result in higher acceptance rates. This work investigates the suitability of using high-density force myography (HD-FMG) for prosthetic control. HD-FMG uses a high-density array of pressure sensors to detect changes in the pressure patterns between the residual limb and socket caused by the contraction of the forearm muscles. In this work, HD-FMG outperforms the standard electromyography (EMG)-based system in detecting different wrist and hand gestures. With the arm in a fixed, static position, eight hand and wrist motions were classified with 0.33% error using the HD-FMG technique. Comparatively, classification errors in the range of 2.2%-11.3% have been reported in the literature for multichannel EMG-based approaches. As with EMG, position variation in HD-FMG can introduce classification error, but incorporating position variation into the training protocol reduces this effect. Channel reduction was also applied to the HD-FMG technique to decrease the dimensionality of the problem as well as the size of the sensorized area. We found that with informed, symmetric channel reduction, classification error could be decreased to 0.02%.

  2. Improved wetland remote sensing in Yellowstone National Park using classification trees to combine TM imagery and ancillary environmental data

    USGS Publications Warehouse

    Wright, C.; Gallant, Alisa L.

    2007-01-01

    The U.S. Fish and Wildlife Service uses the term palustrine wetland to describe vegetated wetlands traditionally identified as marsh, bog, fen, swamp, or wet meadow. Landsat TM imagery was combined with image texture and ancillary environmental data to model probabilities of palustrine wetland occurrence in Yellowstone National Park using classification trees. Model training and test locations were identified from National Wetlands Inventory maps, and classification trees were built for seven years spanning a range of annual precipitation. At a coarse level, palustrine wetland was separated from upland. At a finer level, five palustrine wetland types were discriminated: aquatic bed (PAB), emergent (PEM), forested (PFO), scrub–shrub (PSS), and unconsolidated shore (PUS). TM-derived variables alone were relatively accurate at separating wetland from upland, but model error rates dropped incrementally as image texture, DEM-derived terrain variables, and other ancillary GIS layers were added. For classification trees making use of all available predictors, average overall test error rates were 7.8% for palustrine wetland/upland models and 17.0% for palustrine wetland type models, with consistent accuracies across years. However, models were prone to wetland over-prediction. While the predominant PEM class was classified with omission and commission error rates less than 14%, we had difficulty identifying the PAB and PSS classes. Ancillary vegetation information greatly improved PSS classification and moderately improved PFO discrimination. Association with geothermal areas distinguished PUS wetlands. Wetland over-prediction was exacerbated by class imbalance in likely combination with spatial and spectral limitations of the TM sensor. Wetland probability surfaces may be more informative than hard classification, and appear to respond to climate-driven wetland variability. The developed method is portable, relatively easy to implement, and should be applicable in other settings and over larger extents.

  3. Common component classification: what can we learn from machine learning?

    PubMed

    Anderson, Ariana; Labus, Jennifer S; Vianna, Eduardo P; Mayer, Emeran A; Cohen, Mark S

    2011-05-15

    Machine learning methods have been applied to classifying fMRI scans by studying locations in the brain that exhibit temporal intensity variation between groups, frequently reporting classification accuracy of 90% or better. Although empirical results are quite favorable, one might doubt the ability of classification methods to withstand changes in task ordering and the reproducibility of activation patterns over runs, and question how much of the classification machines' power is due to artifactual noise versus genuine neurological signal. To examine the true strength and power of machine learning classifiers we create and then deconstruct a classifier to examine its sensitivity to physiological noise, task reordering, and across-scan classification ability. The models are trained and tested both within and across runs to assess stability and reproducibility across conditions. We demonstrate the use of independent components analysis for both feature extraction and artifact removal and show that removal of such artifacts can reduce predictive accuracy even when data has been cleaned in the preprocessing stages. We demonstrate how mistakes in the feature selection process can cause the cross-validation error seen in publication to be a biased estimate of the testing error seen in practice and measure this bias by purposefully making flawed models. We discuss other ways to introduce bias and the statistical assumptions lying behind the data and model themselves. Finally we discuss the complications in drawing inference from the smaller sample sizes typically seen in fMRI studies, the effects of small or unbalanced samples on the Type 1 and Type 2 error rates, and how publication bias can give a false confidence of the power of such methods. Collectively this work identifies challenges specific to fMRI classification and methods affecting the stability of models. Copyright © 2010 Elsevier Inc. All rights reserved.
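
    The feature-selection bias described above can be reproduced in a few lines (Python with scikit-learn; the pure-noise "voxel" data, feature counts, and logistic classifier are illustrative assumptions, chosen so that the honest error should sit near chance):

        # Sketch: selecting features on the full data set before cross-validation
        # yields an optimistic error estimate; selecting them inside each training
        # fold (via a pipeline) does not.
        import numpy as np
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import StratifiedKFold, cross_val_score
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(6)
        X = rng.normal(size=(40, 2000))               # noise features ("voxels")
        y = np.repeat([0, 1], 20)
        cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

        X_sel = SelectKBest(f_classif, k=20).fit_transform(X, y)   # flawed: uses all labels
        biased = cross_val_score(LogisticRegression(max_iter=1000), X_sel, y, cv=cv)

        pipe = make_pipeline(SelectKBest(f_classif, k=20),
                             LogisticRegression(max_iter=1000))    # selection inside folds
        honest = cross_val_score(pipe, X, y, cv=cv)

        print("biased CV accuracy:", biased.mean())   # typically well above 0.5
        print("honest CV accuracy:", honest.mean())   # typically near 0.5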

  4. J-Plus: Morphological Classification Of Compact And Extended Sources By Pdf Analysis

    NASA Astrophysics Data System (ADS)

    López-Sanjuan, C.; Vázquez-Ramió, H.; Varela, J.; Spinoso, D.; Cristóbal-Hornillos, D.; Viironen, K.; Muniesa, D.; J-PLUS Collaboration

    2017-10-01

    We present a morphological classification of J-PLUS EDR sources into compact (i.e. stars) and extended (i.e. galaxies). The classification is based on Bayesian modelling of the concentration distribution, including observational errors and magnitude + sky position priors. We provide the star/galaxy probability of each source computed from the gri images. The comparison with the SDSS number counts supports our classification up to r ~ 21. The 31.7 deg² analysed comprise 150k stars and 101k galaxies.

  5. Distinct timing mechanisms produce discrete and continuous movements.

    PubMed

    Huys, Raoul; Studenka, Breanna E; Rheaume, Nicole L; Zelaznik, Howard N; Jirsa, Viktor K

    2008-04-25

    The differentiation of discrete and continuous movement is one of the pillars of motor behavior classification. Discrete movements have a definite beginning and end, whereas continuous movements do not have such discriminable end points. In the past decade there has been vigorous debate whether this classification implies different control processes. This debate up until the present has been empirically based. Here, we present an unambiguous non-empirical classification based on theorems in dynamical system theory that sets discrete and continuous movements apart. Through computational simulations of representative modes of each class and topological analysis of the flow in state space, we show that distinct control mechanisms underwrite discrete and fast rhythmic movements. In particular, we demonstrate that discrete movements require a time keeper while fast rhythmic movements do not. We validate our computational findings experimentally using a behavioral paradigm in which human participants performed finger flexion-extension movements at various movement paces and under different instructions. Our results demonstrate that the human motor system employs different timing control mechanisms (presumably via differential recruitment of neural subsystems) to accomplish varying behavioral functions such as speed constraints.

  6. Identification of factors which affect the tendency towards and attitudes of emergency unit nurses to make medical errors.

    PubMed

    Kiymaz, Dilek; Koç, Zeliha

    2018-03-01

    To determine individual and professional factors affecting the tendency of emergency unit nurses to make medical errors and their attitudes towards these errors in Turkey. Compared with other units, the emergency unit is an environment where there is an increased tendency for making medical errors due to its intensive and rapid pace, noise and complex and dynamic structure. A descriptive cross-sectional study. The study was carried out from 25 July 2014-16 September 2015 with the participation of 284 nurses who volunteered to take part in the study. Data were gathered using the data collection survey for nurses, the Medical Error Tendency Scale and the Medical Error Attitude Scale. It was determined that 40.1% of the nurses previously witnessed medical errors, 19.4% made a medical error in the last year, 17.6% of medical errors were caused by medication errors where the wrong medication was administered in the wrong dose, and none of the nurses filled out a case report form about the medical errors they made. Regarding the factors that caused medical errors in the emergency unit, 91.2% of the nurses stated excessive workload as a cause; 85.1% stated an insufficient number of nurses; and 75.4% stated fatigue, exhaustion and burnout. The study showed that nurses who loved their job were satisfied with their unit and who always worked during day shifts had a lower medical error tendency. It is suggested to consider the following actions: increase awareness about medical errors, organise training to reduce errors in medication administration, develop procedures and protocols specific to the emergency unit health care and create an environment which is not punitive wherein nurses can safely report medical errors. © 2017 John Wiley & Sons Ltd.

  7. Halftoning Algorithms and Systems.

    DTIC Science & Technology

    1996-08-01

    Keywords: halftoning algorithms; error diffusion; color printing; topographic maps. ...graylevels for each screen level. In the case of error diffusion algorithms, the calibration procedure using the new centering concept manifests itself as a...Novel Centering Concept for Overlapping Correction Paper / Transparency (Patent Applied 5/94) * Applications to Error Diffusion * To Dithering (IS&T

  8. Simulation techniques for estimating error in the classification of normal patterns

    NASA Technical Reports Server (NTRS)

    Whitsitt, S. J.; Landgrebe, D. A.

    1974-01-01

    Methods of efficiently generating and classifying samples with specified multivariate normal distributions were discussed. Conservative confidence tables for sample sizes are given for selective sampling. Simulation results are compared with classified training data. Techniques for comparing error and separability measure for two normal patterns are investigated and used to display the relationship between the error and the Chernoff bound.

  9. Optimal number of features as a function of sample size for various classification rules.

    PubMed

    Hua, Jianping; Xiong, Zixiang; Lowey, James; Suh, Edward; Dougherty, Edward R

    2005-04-15

    Given the joint feature-label distribution, increasing the number of features always results in decreased classification error; however, this is not the case when a classifier is designed via a classification rule from sample data. Typically (but not always), for fixed sample size, the error of a designed classifier decreases and then increases as the number of features grows. The potential downside of using too many features is most critical for small samples, which are commonplace for gene-expression-based classifiers for phenotype discrimination. For fixed sample size and feature-label distribution, the issue is to find an optimal number of features. Since only in rare cases is there a known distribution of the error as a function of the number of features and sample size, this study employs simulation for various feature-label distributions and classification rules, and across a wide range of sample and feature-set sizes. To achieve the desired end, finding the optimal number of features as a function of sample size, it employs massively parallel computation. Seven classifiers are treated: 3-nearest-neighbor, Gaussian kernel, linear support vector machine, polynomial support vector machine, perceptron, regular histogram and linear discriminant analysis. Three Gaussian-based models are considered: linear, nonlinear and bimodal. In addition, real patient data from a large breast-cancer study is considered. To mitigate the combinatorial search for finding optimal feature sets, and to model the situation in which subsets of genes are co-regulated and correlation is internal to these subsets, we assume that the covariance matrix of the features is blocked, with each block corresponding to a group of correlated features. Altogether there are a large number of error surfaces for the many cases. These are provided in full on a companion website, which is meant to serve as resource for those working with small-sample classification. For the companion website, please visit http://public.tgen.org/tamu/ofs/ e-dougherty@ee.tamu.edu.
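
    The peaking behaviour the study maps out can be illustrated with a toy simulation (Python with scikit-learn; the Gaussian class model, feature ordering, and sample sizes are invented for illustration, and LDA is used as one of the classification rules named above):

        # Sketch: for a fixed training-sample size, the independent-test-set error
        # of an LDA classifier typically falls and then rises as features are added.
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(7)
        d_max, n_train, n_test = 60, 30, 5000
        shift = np.concatenate([np.full(10, 0.8), np.full(d_max - 10, 0.05)])

        def draw(n):
            y = rng.integers(0, 2, n)
            X = rng.normal(size=(n, d_max)) + np.outer(y, shift)
            return X, y

        X_tr, y_tr = draw(n_train)
        X_te, y_te = draw(n_test)
        for d in (2, 5, 10, 20, 40, 60):              # grow the feature set
            clf = LinearDiscriminantAnalysis().fit(X_tr[:, :d], y_tr)
            err = np.mean(clf.predict(X_te[:, :d]) != y_te)
            print(f"{d:3d} features: test error {err:.3f}")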

  10. The Human Factors Analysis and Classification System : HFACS : final report.

    DOT National Transportation Integrated Search

    2000-02-01

    Human error has been implicated in 70 to 80% of all civil and military aviation accidents. Yet, most accident reporting systems are not designed around any theoretical framework of human error. As a result, most accident databases are not conducive t...

  11. A Confidence Paradigm for Classification Systems

    DTIC Science & Technology

    2008-09-01

    methodology to determine how much confidence one should have in a classifier output. This research proposes a framework to determine the level of...theoretical framework that attempts to unite the viewpoints of the classification system developer (or engineer) and the classification system user (or...operating point. An algorithm is developed that minimizes a “confidence” measure called Binned Error in the Posterior (BEP). Then, we prove that training a

  12. A False Alarm Reduction Method for a Gas Sensor Based Electronic Nose

    PubMed Central

    Rahman, Mohammad Mizanur; Suksompong, Prapun; Toochinda, Pisanu; Taparugssanagorn, Attaphongse

    2017-01-01

    Electronic noses (E-Noses) are becoming popular for food and fruit quality assessment due to their robustness and repeated usability without fatigue, unlike human experts. An E-Nose equipped with classification algorithms and having open ended classification boundaries such as the k-nearest neighbor (k-NN), support vector machine (SVM), and multilayer perceptron neural network (MLPNN), are found to suffer from false classification errors of irrelevant odor data. To reduce false classification and misclassification errors, and to improve correct rejection performance; algorithms with a hyperspheric boundary, such as a radial basis function neural network (RBFNN) and generalized regression neural network (GRNN) with a Gaussian activation function in the hidden layer should be used. The simulation results presented in this paper show that GRNN has more correct classification efficiency and false alarm reduction capability compared to RBFNN. As the design of a GRNN and RBFNN is complex and expensive due to large numbers of neuron requirements, a simple hyperspheric classification method based on minimum, maximum, and mean (MMM) values of each class of the training dataset was presented. The MMM algorithm was simple and found to be fast and efficient in correctly classifying data of training classes, and correctly rejecting data of extraneous odors, and thereby reduced false alarms. PMID:28895910
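
    One plausible, simplified reading of the MMM idea is sketched below (Python/NumPy; the odour classes, feature dimension, and the rule of accepting a sample only inside a class's min-max envelope are illustrative assumptions rather than the authors' exact algorithm):

        # Sketch: each training class is summarized by per-feature minimum,
        # maximum and mean; a test vector inside some class envelope is assigned
        # to the class with the nearest mean, otherwise it is rejected as an
        # extraneous odour (false-alarm reduction).
        import numpy as np

        rng = np.random.default_rng(8)
        train = {"banana": rng.normal(0.0, 0.3, (100, 6)),
                 "mango":  rng.normal(2.0, 0.3, (100, 6))}
        summaries = {c: (X.min(0), X.max(0), X.mean(0)) for c, X in train.items()}

        def classify(x):
            inside = {c: np.linalg.norm(x - mean)
                      for c, (lo, hi, mean) in summaries.items()
                      if np.all(x >= lo) and np.all(x <= hi)}
            return min(inside, key=inside.get) if inside else "reject"

        print(classify(rng.normal(0.0, 0.3, 6)))      # likely "banana"
        print(classify(rng.normal(10.0, 0.3, 6)))     # outside every envelope -> "reject"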

  13. A False Alarm Reduction Method for a Gas Sensor Based Electronic Nose.

    PubMed

    Rahman, Mohammad Mizanur; Charoenlarpnopparut, Chalie; Suksompong, Prapun; Toochinda, Pisanu; Taparugssanagorn, Attaphongse

    2017-09-12

    Electronic noses (E-Noses) are becoming popular for food and fruit quality assessment due to their robustness and repeated usability without fatigue, unlike human experts. An E-Nose equipped with classification algorithms and having open ended classification boundaries such as the k-nearest neighbor (k-NN), support vector machine (SVM), and multilayer perceptron neural network (MLPNN), are found to suffer from false classification errors of irrelevant odor data. To reduce false classification and misclassification errors, and to improve correct rejection performance; algorithms with a hyperspheric boundary, such as a radial basis function neural network (RBFNN) and generalized regression neural network (GRNN) with a Gaussian activation function in the hidden layer should be used. The simulation results presented in this paper show that GRNN has more correct classification efficiency and false alarm reduction capability compared to RBFNN. As the design of a GRNN and RBFNN is complex and expensive due to large numbers of neuron requirements, a simple hyperspheric classification method based on minimum, maximum, and mean (MMM) values of each class of the training dataset was presented. The MMM algorithm was simple and found to be fast and efficient in correctly classifying data of training classes, and correctly rejecting data of extraneous odors, and thereby reduced false alarms.

  14. A mini review on hydrogels classification and recent developments in miscellaneous applications.

    PubMed

    Varaprasad, Kokkarachedu; Raghavendra, Gownolla Malegowd; Jayaramudu, Tippabattini; Yallapu, Murali Mohan; Sadiku, Rotimi

    2017-10-01

    Hydrogels are composed of three-dimensional smart and/or hungry networks, which do not dissolve in water but swell considerably in an aqueous medium, demonstrating an extraordinary ability to absorb water into the reticulated structure. This inherent feature is the subject of considerable scientific research interest and has driven efforts to extend their potential in hi-tech applications. Over the past decades, significant progress has been made in the field of hydrogels, and explorations continue in all directions at an accelerated pace to broaden their usage. In view of this, the present review discusses miscellaneous hydrogels with regard to their raw materials, methods of fabrication and applications. In addition, this article summarizes the classification of hydrogels based on their cross-linking and physical states. Finally, a brief outlook on the future prospects of hydrogels is also presented. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Predicting Active Users' Personality Based on Micro-Blogging Behaviors

    PubMed Central

    Hao, Bibo; Guan, Zengda; Zhu, Tingshao

    2014-01-01

    Because of its richness and availability, micro-blogging has become an ideal platform for conducting psychological research. In this paper, we proposed to predict active users' personality traits through micro-blogging behaviors. 547 Chinese active users of micro-blogging participated in this study. Their personality traits were measured by the Big Five Inventory, and digital records of micro-blogging behaviors were collected via web crawlers. After extracting 845 micro-blogging behavioral features, we first trained classification models utilizing Support Vector Machine (SVM), differentiating participants with high and low scores on each dimension of the Big Five Inventory. The classification accuracy ranged from 84% to 92%. We also built regression models utilizing PaceRegression methods, predicting participants' scores on each dimension of the Big Five Inventory. The Pearson correlation coefficients between predicted scores and actual scores ranged from 0.48 to 0.54. Results indicated that active users' personality traits could be predicted by micro-blogging behaviors. PMID:24465462

  16. Differences in chewing sounds of dry-crisp snacks by multivariate data analysis

    NASA Astrophysics Data System (ADS)

    De Belie, N.; Sivertsvik, M.; De Baerdemaeker, J.

    2003-09-01

    Chewing sounds of different types of dry-crisp snacks (two types of potato chips, prawn crackers, cornflakes and low calorie snacks from extruded starch) were analysed to assess differences in sound emission patterns. The emitted sounds were recorded by a microphone placed over the ear canal. The first bite and the first subsequent chew were selected from the time signal and a fast Fourier transformation provided the power spectra. Different multivariate analysis techniques were used for classification of the snack groups. This included principal component analysis (PCA) and unfold partial least-squares (PLS) algorithms, as well as multi-way techniques such as three-way PLS, three-way PCA (Tucker3), and parallel factor analysis (PARAFAC) on the first bite and subsequent chew. The models were evaluated by calculating the classification errors and the root mean square error of prediction (RMSEP) for independent validation sets. It appeared that the logarithm of the power spectra obtained from the chewing sounds could be used successfully to distinguish the different snack groups. When different chewers were used, recalibration of the models was necessary. Multi-way models distinguished better between chewing sounds of different snack groups than PCA on bite or chew separately and than unfold PLS. From all three-way models applied, N-PLS with three components showed the best classification capabilities, resulting in classification errors of 14-18%. The major amount of incorrect classifications was due to one type of potato chips that had a very irregular shape, resulting in a wide variation of the emitted sounds.

  17. Kernel Wiener filter and its application to pattern recognition.

    PubMed

    Yoshino, Hirokazu; Dong, Chen; Washizawa, Yoshikazu; Yamashita, Yukihiko

    2010-11-01

    The Wiener filter (WF) is widely used for inverse problems. From an observed signal, it provides, among all linear operators, the estimate of the original signal that minimizes the mean squared error. The kernel WF (KWF), extended directly from the WF, has the problem that additive noise has to be handled through samples. Since the computational complexity of kernel methods depends on the number of samples, this case incurs a huge computational cost. By using a first-order approximation of the kernel functions, we realize a KWF that can handle such noise not through samples but as a random variable. We also propose an error estimation method for kernel filters using these approximations. To show the advantages of the proposed methods, we conducted experiments on image denoising and error estimation. We also apply the KWF to classification, since the KWF can provide an approximation of the maximum a posteriori classifier, which gives the best recognition accuracy. The noise term in the criterion can be used for classification in the presence of noise, or as a new regularization that suppresses changes in the input space, whereas the ordinary regularization for kernel methods suppresses changes in the feature space. To demonstrate these advantages, we conducted experiments on binary and multiclass classification and on classification in the presence of noise.

  18. Consequences of land-cover misclassification in models of impervious surface

    USGS Publications Warehouse

    McMahon, G.

    2007-01-01

    Model estimates of impervious area as a function of landcover area may be biased and imprecise because of errors in the land-cover classification. This investigation of the effects of land-cover misclassification on impervious surface models that use National Land Cover Data (NLCD) evaluates the consequences of adjusting land-cover within a watershed to reflect uncertainty assessment information. Model validation results indicate that using error-matrix information to adjust land-cover values used in impervious surface models does not substantially improve impervious surface predictions. Validation results indicate that the resolution of the landcover data (Level I and Level II) is more important in predicting impervious surface accurately than whether the land-cover data have been adjusted using information in the error matrix. Level I NLCD, adjusted for land-cover misclassification, is preferable to the other land-cover options for use in models of impervious surface. This result is tied to the lower classification error rates for the Level I NLCD. ?? 2007 American Society for Photogrammetry and Remote Sensing.

  19. Improved classification accuracy by feature extraction using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Patriarche, Julia; Manduca, Armando; Erickson, Bradley J.

    2003-05-01

    A feature extraction algorithm has been developed for the purposes of improving classification accuracy. The algorithm uses a genetic algorithm / hill-climber hybrid to generate a set of linearly recombined features, which may be of reduced dimensionality compared with the original set. The genetic algorithm performs the global exploration, and a hill climber explores local neighborhoods. Hybridizing the genetic algorithm with a hill climber improves both the rate of convergence, and the final overall cost function value; it also reduces the sensitivity of the genetic algorithm to parameter selection. The genetic algorithm includes the operators: crossover, mutation, and deletion / reactivation - the last of these effects dimensionality reduction. The feature extractor is supervised, and is capable of deriving a separate feature space for each tissue (which are reintegrated during classification). A non-anatomical digital phantom was developed as a gold standard for testing purposes. In tests with the phantom, and with images of multiple sclerosis patients, classification with feature extractor derived features yielded lower error rates than using standard pulse sequences, and with features derived using principal components analysis. Using the multiple sclerosis patient data, the algorithm resulted in a mean 31% reduction in classification error of pure tissues.

  20. Locally Weighted Score Estimation for Quantile Classification in Binary Regression Models

    PubMed Central

    Rice, John D.; Taylor, Jeremy M. G.

    2016-01-01

    One common use of binary response regression methods is classification based on an arbitrary probability threshold dictated by the particular application. Since this is given to us a priori, it is sensible to incorporate the threshold into our estimation procedure. Specifically, for the linear logistic model, we solve a set of locally weighted score equations, using a kernel-like weight function centered at the threshold. The bandwidth for the weight function is selected by cross-validation of a novel hybrid loss function that combines classification error and a continuous measure of divergence between observed and fitted values; other possible cross-validation functions based on more common binary classification metrics are also examined. This work has much in common with robust estimation, but differs from previous approaches in this area in its focus on prediction, specifically classification into high- and low-risk groups. Simulation results are given showing the reduction in error rates that can be obtained with this method when compared with maximum likelihood estimation, especially under certain forms of model misspecification. Analysis of a melanoma data set is presented to illustrate the use of the method in practice. PMID:28018492
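
    A loose sketch of the central idea, refitting the logistic model with kernel-like weights that are largest for observations whose fitted probability lies near the threshold, is given below. The threshold and bandwidth values are illustrative, and the paper's cross-validated hybrid loss for bandwidth selection is not reproduced:

        import numpy as np
        from scipy.stats import norm
        from sklearn.linear_model import LogisticRegression

        def locally_weighted_logistic(X, y, threshold=0.3, bandwidth=0.1, n_iter=10):
            """Iteratively refit a logistic model with weights centred at the threshold."""
            model = LogisticRegression(max_iter=1000).fit(X, y)      # ordinary MLE start
            for _ in range(n_iter):
                p = model.predict_proba(X)[:, 1]
                w = norm.pdf((p - threshold) / bandwidth)             # kernel-like weight
                w = np.clip(w, 1e-6, None)                            # keep weights positive
                model = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=w)
            return model

        # usage: classify into high- and low-risk groups at a 30% probability threshold
        rng = np.random.default_rng(1)
        X = rng.standard_normal((400, 3))
        y = (rng.random(400) < 1 / (1 + np.exp(-(X[:, 0] - 0.5)))).astype(int)
        clf = locally_weighted_logistic(X, y)
        high_risk = clf.predict_proba(X)[:, 1] >= 0.3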

  1. Error detection and reduction in blood banking.

    PubMed

    Motschman, T L; Moore, S B

    1996-12-01

    Error management plays a major role in facility process improvement efforts. By detecting and reducing errors, quality and, therefore, patient care improve. It begins with a strong organizational foundation of management attitude with clear, consistent employee direction and appropriate physical facilities. Clearly defined critical processes, critical activities, and SOPs act as the framework for operations as well as active quality monitoring. To assure that personnel can detect and report errors, they must be trained in both operational duties and error management practices. Use of simulated/intentional errors and incorporation of error detection into competency assessment keeps employees practiced and confident and diminishes fear of the unknown. Personnel can clearly see that errors are indeed used as opportunities for process improvement and not for punishment. The facility must have a clearly defined and consistently used definition for reportable errors. Reportable errors should include those errors with potentially harmful outcomes as well as those errors that are "upstream," and thus further away from the outcome. A well-written error report consists of who, what, when, where, why/how, and follow-up to the error. Before correction can occur, an investigation to determine the underlying cause of the error should be undertaken. Obviously, the best corrective action is prevention. Correction can occur at five different levels; however, only three of these levels are directed at prevention. Prevention requires a method to collect and analyze data concerning errors. In the authors' facility a functional error classification method and a quality system-based classification have been useful. An active method to search for problems uncovers them further upstream, before they can have disastrous outcomes. In the continual quest for improving processes, an error management program is itself a process that needs improvement, and we must strive to always close the circle of quality assurance. Ultimately, the goal of better patient care will be the reward.

  2. Modified Bat Algorithm for Feature Selection with the Wisconsin Diagnosis Breast Cancer (WDBC) Dataset

    PubMed

    Jeyasingh, Suganthi; Veluchamy, Malathi

    2017-05-01

    Early diagnosis of breast cancer is essential to save patients' lives. Usually, medical datasets include a large variety of data that can lead to confusion during diagnosis. The Knowledge Discovery in Databases (KDD) process helps to improve efficiency. It requires elimination of inappropriate and repeated data from the dataset before final diagnosis. This can be done using any of the feature selection algorithms available in data mining. Feature selection is considered a vital step to increase the classification accuracy. This paper proposes a Modified Bat Algorithm (MBA) for feature selection to eliminate irrelevant features from an original dataset. The Bat algorithm was modified using simple random sampling to select random instances from the dataset. Candidate features were ranked against the global best to identify the predominant features in the dataset. The selected features are used to train a Random Forest (RF) classification algorithm. The MBA feature selection algorithm enhanced the classification accuracy of RF in identifying the occurrence of breast cancer. The Wisconsin Diagnosis Breast Cancer (WDBC) dataset was used to evaluate the performance of the proposed MBA feature selection algorithm. The proposed algorithm achieved better performance in terms of Kappa statistic, Matthews Correlation Coefficient, Precision, F-measure, Recall, Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Relative Absolute Error (RAE) and Root Relative Squared Error (RRSE).
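
    The bat-inspired update rules are not reproduced here, but the wrapper pattern the method rests on (score candidate feature subsets with a Random Forest and keep the global best) can be sketched on the WDBC data shipped with scikit-learn. The number of candidates and the forest size are illustrative:

        import numpy as np
        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        def wrapper_selection(X, y, n_candidates=30, seed=0):
            """Evaluate random feature subsets with a Random Forest; keep the best one."""
            rng = np.random.default_rng(seed)
            best_mask, best_acc = None, -np.inf
            for _ in range(n_candidates):
                mask = rng.random(X.shape[1]) < 0.5        # random candidate subset
                if not mask.any():
                    continue
                acc = cross_val_score(RandomForestClassifier(200, random_state=0),
                                      X[:, mask], y, cv=5).mean()
                if acc > best_acc:
                    best_mask, best_acc = mask, acc
            return best_mask, best_acc

        X, y = load_breast_cancer(return_X_y=True)         # the WDBC data set
        mask, acc = wrapper_selection(X, y)
        print(f"{mask.sum()} features selected, cross-validated accuracy {acc:.3f}")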

  3. Simulated rRNA/DNA Ratios Show Potential To Misclassify Active Populations as Dormant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steven, Blaire; Hesse, Cedar; Soghigian, John

    The use of rRNA/DNA ratios derived from surveys of rRNA sequences in RNA and DNA extracts is an appealing but poorly validated approach to infer the activity status of environmental microbes. To improve the interpretation of rRNA/DNA ratios, we performed simulations to investigate the effects of community structure, rRNA amplification, and sampling depth on the accuracy of rRNA/DNA ratios in classifying bacterial populations as “active” or “dormant.” Community structure was an insignificant factor. In contrast, the extent of rRNA amplification that occurs as cells transition from dormant to growing had a significant effect (P < 0.0001) on classification accuracy, with misclassification errors ranging from 16 to 28%, depending on the rRNA amplification model. The error rate increased to 47% when communities included a mixture of rRNA amplification models, but most of the inflated error was false negatives (i.e., active populations misclassified as dormant). Sampling depth also affected error rates (P < 0.001). Inadequate sampling depth produced various artifacts that are characteristic of rRNA/DNA ratios generated from real communities. These data show important constraints on the use of rRNA/DNA ratios to infer activity status. Whereas classification of populations as active based on rRNA/DNA ratios appears generally valid, classification of populations as dormant is potentially far less accurate.

  4. Simulated rRNA/DNA Ratios Show Potential To Misclassify Active Populations as Dormant

    DOE PAGES

    Steven, Blaire; Hesse, Cedar; Soghigian, John; ...

    2017-03-31

    The use of rRNA/DNA ratios derived from surveys of rRNA sequences in RNA and DNA extracts is an appealing but poorly validated approach to infer the activity status of environmental microbes. To improve the interpretation of rRNA/DNA ratios, we performed simulations to investigate the effects of community structure, rRNA amplification, and sampling depth on the accuracy of rRNA/DNA ratios in classifying bacterial populations as “active” or “dormant.” Community structure was an insignificant factor. In contrast, the extent of rRNA amplification that occurs as cells transition from dormant to growing had a significant effect (P < 0.0001) on classification accuracy, with misclassification errors ranging from 16 to 28%, depending on the rRNA amplification model. The error rate increased to 47% when communities included a mixture of rRNA amplification models, but most of the inflated error was false negatives (i.e., active populations misclassified as dormant). Sampling depth also affected error rates (P < 0.001). Inadequate sampling depth produced various artifacts that are characteristic of rRNA/DNA ratios generated from real communities. These data show important constraints on the use of rRNA/DNA ratios to infer activity status. Whereas classification of populations as active based on rRNA/DNA ratios appears generally valid, classification of populations as dormant is potentially far less accurate.

  5. Regulation of IAP (Inhibitor of Apoptosis) Gene Expression by the p53 Tumor Suppressor Protein

    DTIC Science & Technology

    2005-05-01

    [Abstract not available. The retrieved record text consists of report form fields (keywords: adenovirus, gene therapy, polymorphism; security classification blocks) and figure-caption fragments: averaged results of three independent experiments with standard error; level of p53 in infected cells using the antibody Ab-6 (Calbiochem); oligomerized BAK in highly purified mitochondria.]

  6. An Analysis of U.S. Army Fratricide Incidents during the Global War on Terror (11 September 2001 to 31 March 2008)

    DTIC Science & Technology

    2010-03-15

    [Abstract not available. The retrieved record text consists of figure and list-of-figures fragments: Figure 1, the Swiss cheese model of human error causation, based on Reason's (1990) model, describes how an accident is likely to occur when all of the errors, or "holes," align; a detailed description of HFACS can be found in Wiegmann and Shappell (2003).]

  7. Absent without leave; a neuroenergetic theory of mind wandering

    PubMed Central

    Killeen, Peter R.

    2013-01-01

    Absent-minded people are not under the control of task-relevant stimuli. According to the Neuroenergetics Theory of attention (NeT), this lack of control is often due to fatigue of the relevant processing units in the brain caused by insufficient resupply of the neuron's preferred fuel, lactate, from nearby astrocytes. A simple drift model of information processing accounts for response-time statistics in a paradigm often used to study inattention, the Sustained Attention to Response Task (SART). It is suggested that errors and slowing in this fast-paced, response-engaging task may have little to do with inattention. Slower-paced and less response-demanding tasks give greater license for inattention—aka absent-mindedness, mind-wandering. The basic NeT is therefore extended with an ancillary model of attentional drift and recapture. This Markov model, called NEMA, assumes probability λ of lapses of attention from 1 s to the next, and probability α of drifting back to the attentional state. These parameters measure the strength of attraction back to the task (α), or away to competing mental states or action patterns (λ); their proportion determines the probability of the individual being inattentive at any point in time over the long run. Their values are affected by the fatigue of the brain units they traffic between. The deployment of the model is demonstrated with a data set involving paced responding. PMID:23847559
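
    Under the two-state chain described above, the long-run probability of being off task follows directly from the two transition probabilities: it is λ/(λ+α). A small sketch with illustrative parameter values (not fitted values from the paper) checks the analytic value against a second-by-second simulation:

        import numpy as np

        def stationary_inattention(lam, alpha):
            """Long-run probability of being off task in the two-state chain."""
            return lam / (lam + alpha)

        def simulate(lam, alpha, seconds=100_000, seed=0):
            """Simulate the chain second by second; return the fraction of off-task time."""
            rng = np.random.default_rng(seed)
            off_task, count = False, 0
            for _ in range(seconds):
                if off_task and rng.random() < alpha:      # recaptured by the task
                    off_task = False
                elif not off_task and rng.random() < lam:  # attention drifts away
                    off_task = True
                count += off_task
            return count / seconds

        print(stationary_inattention(0.02, 0.10))   # 0.1667 analytically
        print(simulate(0.02, 0.10))                 # close to the analytic value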

  8. Object-Based Land Use Classification of Agricultural Land by Coupling Multi-Temporal Spectral Characteristics and Phenological Events in Germany

    NASA Astrophysics Data System (ADS)

    Knoefel, Patrick; Loew, Fabian; Conrad, Christopher

    2015-04-01

    Crop maps based on classification of remotely sensed data are receiving increased attention in agricultural management. This calls for more detailed knowledge of the reliability of such spatial information. However, classification of agricultural land use is often limited by high spectral similarities of the studied crop types. Moreover, spatially and temporally varying agro-ecological conditions can introduce confusion in crop mapping. Classification errors in crop maps, in turn, may influence model outputs, such as agricultural production monitoring. One major goal of the PhenoS project ("Phenological structuring to determine optimal acquisition dates for Sentinel-2 data for field crop classification") is the detection of optimal phenological time windows for land cover classification purposes. Since many crop species are spectrally highly similar, accurate classification requires the right selection of satellite images for a certain classification task. In the course of one growing season, there are phenological phases in which crops are separable with higher accuracy. For this purpose, coupling multi-temporal spectral characteristics and phenological events is promising. The focus of this study is on the separation of spectrally similar cereal crops like winter wheat, barley, and rye at two test sites in Germany, called "Harz/Central German Lowland" and "Demmin". The study uses object-based random forest (RF) classification to investigate the impact of image acquisition frequency and timing on crop classification uncertainty by permuting all possible combinations of the available RapidEye time series recorded at the test sites between 2010 and 2014. The permutations were applied to different segmentation parameters. Classification uncertainty was then assessed and analysed based on the probabilistic soft output from the RF algorithm at the per-field basis. From this soft output, entropy was calculated as a spatial measure of classification uncertainty. The results indicate that uncertainty estimates provide a valuable addition to traditional accuracy assessments and help the user to allocate error in crop maps.
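
    The per-field uncertainty measure described above (entropy computed from the Random Forest's class-probability output) is straightforward to reproduce. A minimal sketch, assuming a fitted scikit-learn forest and a per-field feature matrix; the variable names are placeholders, not the study's data:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def prediction_entropy(rf, X):
            """Shannon entropy of the forest's soft output, per sample (here: per field)."""
            p = rf.predict_proba(X)
            p = np.clip(p, 1e-12, 1.0)               # avoid log(0)
            return -(p * np.log2(p)).sum(axis=1)     # high entropy marks uncertain fields

        # usage sketch (X_train, y_train, X_fields stand in for the crop data):
        # rf = RandomForestClassifier(n_estimators=500).fit(X_train, y_train)
        # uncertainty = prediction_entropy(rf, X_fields)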

  9. A new classification of glaucomas

    PubMed Central

    Bordeianu, Constantin-Dan

    2014-01-01

    Purpose To suggest a new glaucoma classification that is pathogenic, etiologic, and clinical. Methods After discussing the logical pathway used in criteria selection, the paper presents the new classification and compares it with the classification currently in use, that is, the one issued by the European Glaucoma Society in 2008. Results The paper proves that the new classification is clear (being based on a coherent and consistently followed set of criteria), is comprehensive (framing all forms of glaucoma), and helps in understanding the sickness (in that it uses a logical framing system). The great advantage is that it facilitates therapeutic decision making in that it offers direct therapeutic suggestions and avoids errors leading to disasters. Moreover, the scheme remains open to any new development. Conclusion The suggested classification is a pathogenic, etiologic, and clinical classification that fulfills the conditions of an ideal classification. The suggested classification is the first classification in which the main criterion is consistently used for the first 5 to 7 crossings until its differentiation capabilities are exhausted. Then, secondary criteria (etiologic and clinical) pick up the relay until each form finds its logical place in the scheme. In order to avoid unclear aspects, the genetic criterion is no longer used, being replaced by age, one of the clinical criteria. The suggested classification brings only benefits to all categories of ophthalmologists: the beginners will have a tool to better understand the sickness and to ease their decision making, whereas the experienced doctors will have their practice simplified. For all doctors, errors leading to therapeutic disasters will be less likely to happen. Finally, researchers will have the object of their work gathered in the group of glaucoma with unknown or uncertain pathogenesis, whereas the results of their work will easily find a logical place in the scheme, as the suggested classification remains open to any new development. PMID:25246759

  10. The Development of Performance-Based Auditory Aviation Classification Standards in the U.S. Navy,

    DTIC Science & Technology

    1987-12-01

    [Abstract not available. The retrieved record text consists of reference fragments (Gerontology, Vol. 24(2), pp. 189-192, 1969; Palva, A. and Jokinen, K., "The Role of the Binaural Test in Filtered Speech Audiometry," Acta Oto...) and monosyllabic rhyme-test word lists such as BEAD/BEAT/BEAN and PAGE/PANE/PACE.]

  11. Psychiatric Disorders: Diagnosis to Therapy

    PubMed Central

    Krystal, John H.; State, Matthew W.

    2014-01-01

    Recent findings in a range of scientific disciplines are challenging the conventional wisdom regarding the etiology, classification and treatment of psychiatric disorders. This review focuses on the current state of the psychiatric diagnostic nosology and recent progress in three areas: genomics, neuroimaging, and therapeutics development. The accelerating pace of novel and unexpected findings is transforming the understanding of mental illness and represents a hopeful sign that the approaches and models that have sustained the field for the past 40 years are yielding to a flood of new data and presaging the emergence of a new and more powerful scientific paradigm. PMID:24679536

  12. Assessing the utility of the Oxford Nanopore MinION for snake venom gland cDNA sequencing.

    PubMed

    Hargreaves, Adam D; Mulley, John F

    2015-01-01

    Portable DNA sequencers such as the Oxford Nanopore MinION device have the potential to be truly disruptive technologies, facilitating new approaches and analyses and, in some cases, taking sequencing out of the lab and into the field. However, the capabilities of these technologies are still being revealed. Here we show that single-molecule cDNA sequencing using the MinION accurately characterises venom toxin-encoding genes in the painted saw-scaled viper, Echis coloratus. We find the raw sequencing error rate to be around 12%, improved to 0-2% with hybrid error correction and 3% with de novo error correction. Our corrected data provides full coding sequences and 5' and 3' UTRs for 29 of 33 candidate venom toxins detected, far superior to Illumina data (13/40 complete) and Sanger-based ESTs (15/29). We suggest that, should the current pace of improvement continue, the MinION will become the default approach for cDNA sequencing in a variety of species.

  13. Assessing the utility of the Oxford Nanopore MinION for snake venom gland cDNA sequencing

    PubMed Central

    Hargreaves, Adam D.

    2015-01-01

    Portable DNA sequencers such as the Oxford Nanopore MinION device have the potential to be truly disruptive technologies, facilitating new approaches and analyses and, in some cases, taking sequencing out of the lab and into the field. However, the capabilities of these technologies are still being revealed. Here we show that single-molecule cDNA sequencing using the MinION accurately characterises venom toxin-encoding genes in the painted saw-scaled viper, Echis coloratus. We find the raw sequencing error rate to be around 12%, improved to 0–2% with hybrid error correction and 3% with de novo error correction. Our corrected data provides full coding sequences and 5′ and 3′ UTRs for 29 of 33 candidate venom toxins detected, far superior to Illumina data (13/40 complete) and Sanger-based ESTs (15/29). We suggest that, should the current pace of improvement continue, the MinION will become the default approach for cDNA sequencing in a variety of species. PMID:26623194

  14. Classification of burn wounds using support vector machines

    NASA Astrophysics Data System (ADS)

    Acha, Begona; Serrano, Carmen; Palencia, Sergio; Murillo, Juan Jose

    2004-05-01

    The purpose of this work is to improve a previous method developed by the authors for classifying burn wounds by depth. The inputs of the system are color and texture information, as these are the characteristics observed by physicians in order to give a diagnosis. Our previous work consisted of segmenting the burn wound from the rest of the image and classifying the burn by depth. In this paper we focus on the classification problem only. We previously proposed using a Fuzzy-ARTMAP neural network (NN); however, we may take advantage of newer, powerful classification tools such as Support Vector Machines (SVM). We apply a five-fold cross-validation scheme to divide the database into training and validation sets. Then, we apply a feature selection method for each classifier, which gives us the set of features that yields the smallest classification error for that classifier. The features used to classify are first-order statistical parameters extracted from the L*, u* and v* color components of the image. The feature selection algorithms used are the Sequential Forward Selection (SFS) and Sequential Backward Selection (SBS) methods. As the data in this problem are not linearly separable, the SVM was trained using several different kernels. The validation process shows that the SVM method, when using a Gaussian kernel of variance 1, outperforms classification results obtained with the rest of the classifiers, yielding a classification error rate of 0.7%, whereas the Fuzzy-ARTMAP NN attained 1.6%.
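
    The pipeline described above (first-order colour-statistic features, a wrapper feature selection, and an SVM with a Gaussian kernel) maps closely onto standard scikit-learn components. A minimal sketch with synthetic stand-in data rather than the burn-image features; under the common parameterization, a Gaussian kernel of variance 1 corresponds to gamma = 0.5:

        from sklearn.datasets import make_classification
        from sklearn.feature_selection import SequentialFeatureSelector
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # stand-in for first-order statistics of the L*, u*, v* colour components
        X, y = make_classification(n_samples=300, n_features=12, n_informative=5,
                                   random_state=0)

        svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma=0.5))
        sfs = SequentialFeatureSelector(svm, n_features_to_select=5,
                                        direction="forward", cv=5)   # SFS wrapper
        sfs.fit(X, y)

        X_sel = sfs.transform(X)
        err = 1 - cross_val_score(svm, X_sel, y, cv=5).mean()
        print(f"5-fold cross-validated classification error: {err:.3f}")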

  15. Comparative assessment of LANDSAT-D MSS and TM data quality for mapping applications in the Southeast

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Rectifications of multispectral scanner and thematic mapper data sets for full and subscene areas, analyses of planimetric errors, assessments of the number and distribution of ground control points required to minimize errors, and factors contributing to error residual are examined. Other investigations include the generation of three dimensional terrain models and the effects of spatial resolution on digital classification accuracies.

  16. Cluster designs to assess the prevalence of acute malnutrition by lot quality assurance sampling: a validation study by computer simulation.

    PubMed

    Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J

    2009-04-01

    Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67x3 (67 clusters of three observations) and a 33x6 (33 clusters of six observations) sampling scheme, for assessing the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67x3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can dramatically impact the classification error associated with LQAS analysis.

  17. Influence of ECG measurement accuracy on ECG diagnostic statements.

    PubMed

    Zywietz, C; Celikag, D; Joseph, G

    1996-01-01

    Computer analysis of electrocardiograms (ECGs) provides a large amount of ECG measurement data, which may be used for diagnostic classification and storage in ECG databases. Until now, neither error limits for ECG measurements have been specified nor has their influence on diagnostic statements been systematically investigated. An analytical method is presented to estimate the influence of measurement errors on the accuracy of diagnostic ECG statements. Systematic (offset) errors will usually result in an increase of false positive or false negative statements since they cause a shift of the working point on the receiver operating characteristics curve. Measurement error dispersion broadens the distribution function of discriminative measurement parameters and, therefore, usually increases the overlap between discriminative parameters. This results in a flattening of the receiver operating characteristics curve and an increase of false positive and false negative classifications. The method developed has been applied to ECG conduction defect diagnoses by using the proposed International Electrotechnical Commission's interval measurement tolerance limits. These limits appear too large because more than 30% of false positive atrial conduction defect statements and 10-18% of false intraventricular conduction defect statements could be expected due to tolerated measurement errors. To assure long-term usability of ECG measurement databases, it is recommended that systems provide their error tolerance limits obtained on a defined test set.
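
    The effect described above (offset errors shifting the working point, dispersion broadening the parameter distributions and flattening the receiver operating characteristics curve) can be illustrated with a small simulation. The distributions, threshold, and error magnitudes below are purely illustrative and are not the Commission's tolerance limits:

        import numpy as np
        from scipy.stats import norm

        # hypothetical distributions of a discriminative interval measurement (ms)
        mu_normal, mu_disease, sigma = 100.0, 130.0, 12.0
        threshold = 115.0                       # diagnostic cut-off

        def error_rates(offset=0.0, dispersion=0.0):
            """False-positive / false-negative rates with added measurement error."""
            s = np.hypot(sigma, dispersion)     # dispersion broadens both distributions
            fp = 1 - norm.cdf(threshold, mu_normal + offset, s)   # normals over the cut-off
            fn = norm.cdf(threshold, mu_disease + offset, s)      # diseased under the cut-off
            return fp, fn

        print(error_rates())                    # no measurement error
        print(error_rates(offset=5.0))          # systematic offset shifts the working point
        print(error_rates(dispersion=10.0))     # dispersion increases the overlap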

  18. Decision support system for determining the contact lens for refractive errors patients with classification ID3

    NASA Astrophysics Data System (ADS)

    Situmorang, B. H.; Setiawan, M. P.; Tosida, E. T.

    2017-01-01

    Refractive errors are abnormalities in the refraction of light such that images do not focus precisely on the retina, resulting in blurred vision [1]. Refractive errors require the patient to wear glasses or contact lenses so that eyesight returns to normal. The choice of glasses or contact lenses differs from one person to another; it is influenced by patient age, the amount of tear production, the vision prescription, and astigmatism. Because the eye is a vital organ for seeing, accuracy in determining the glasses or contact lenses to be used is required. This research aims to develop a decision support system that can produce output on the right contact lenses for refractive error patients with 100% accuracy. The Iterative Dichotomiser 3 (ID3) classification method generates gain and entropy values for attributes that include the sample code, patient age, astigmatism, tear production rate, vision prescription, and class, which determine the resulting decision tree. The eye-specialist evaluation of the training data gave an accuracy rate of 96.7% and an error rate of 3.3%; the test using a confusion matrix gave an accuracy rate of 96.1% and an error rate of 3.1%. For the test data, the accuracy rate was 100% with an error rate of 0.
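
    ID3 chooses, at each node, the attribute with the largest information gain, i.e. the largest reduction in entropy of the class label. A minimal sketch of that criterion follows, with hypothetical attribute names and toy records rather than the study's data:

        import math
        from collections import Counter

        def entropy(labels):
            """Shannon entropy of a list of class labels."""
            n = len(labels)
            return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

        def information_gain(rows, attr, target):
            """Entropy reduction obtained by splitting `rows` (dicts) on `attr`."""
            base = entropy([r[target] for r in rows])
            remainder = 0.0
            for v in {r[attr] for r in rows}:
                subset = [r[target] for r in rows if r[attr] == v]
                remainder += len(subset) / len(rows) * entropy(subset)
            return base - remainder

        rows = [
            {"age": "young",  "tears": "reduced", "prescription": "myope",        "lens": "none"},
            {"age": "young",  "tears": "normal",  "prescription": "myope",        "lens": "soft"},
            {"age": "senior", "tears": "normal",  "prescription": "hypermetrope", "lens": "soft"},
            {"age": "senior", "tears": "reduced", "prescription": "myope",        "lens": "none"},
        ]
        for a in ("age", "tears", "prescription"):
            print(a, round(information_gain(rows, a, "lens"), 3))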

  19. The generalization ability of online SVM classification based on Markov sampling.

    PubMed

    Xu, Jie; Yan Tang, Yuan; Zou, Bin; Xu, Zongben; Li, Luoqing; Lu, Yang

    2015-03-01

    In this paper, we consider online support vector machine (SVM) classification learning algorithms with uniformly ergodic Markov chain (u.e.M.c.) samples. We establish the bound on the misclassification error of an online SVM classification algorithm with u.e.M.c. samples based on reproducing kernel Hilbert spaces and obtain a satisfactory convergence rate. We also introduce a novel online SVM classification algorithm based on Markov sampling, and present the numerical studies on the learning ability of online SVM classification based on Markov sampling for benchmark repository. The numerical studies show that the learning performance of the online SVM classification algorithm based on Markov sampling is better than that of classical online SVM classification based on random sampling as the size of training samples is larger.

  20. Aspen, climate, and sudden decline in western USA

    Treesearch

    Gerald E. Rehfeldt; Dennis E. Ferguson; Nicholas L. Crookston

    2009-01-01

    A bioclimate model predicting the presence or absence of aspen, Populus tremuloides, in western USA from climate variables was developed by using the Random Forests classification tree on Forest Inventory data from about 118,000 permanent sample plots. A reasonably parsimonious model used eight predictors to describe aspen's climate profile. Classification errors...

  1. Multi-template tensor-based morphometry: Application to analysis of Alzheimer's disease

    PubMed Central

    Koikkalainen, Juha; Lötjönen, Jyrki; Thurfjell, Lennart; Rueckert, Daniel; Waldemar, Gunhild; Soininen, Hilkka

    2012-01-01

    In this paper methods for using multiple templates in tensor-based morphometry (TBM) are presented and compared to the conventional single-template approach. TBM analysis requires non-rigid registrations which are often subject to registration errors. When using multiple templates and, therefore, multiple registrations, it can be assumed that the registration errors are averaged and eventually compensated. Four different methods are proposed for multi-template TBM. The methods were evaluated using magnetic resonance (MR) images of healthy controls, patients with stable or progressive mild cognitive impairment (MCI), and patients with Alzheimer's disease (AD) from the ADNI database (N=772). The performance of TBM features in classifying images was evaluated both quantitatively and qualitatively. Classification results show that the multi-template methods are statistically significantly better than the single-template method. The overall classification accuracy was 86.0% for the classification of control and AD subjects, and 72.1% for the classification of stable and progressive MCI subjects. The statistical group-level difference maps produced using multi-template TBM were smoother, formed larger continuous regions, and had larger t-values than the maps obtained with single-template TBM. PMID:21419228

  2. Predicting alpine headwater stream intermittency: a case study in the northern Rocky Mountains

    USGS Publications Warehouse

    Sando, Thomas R.; Blasch, Kyle W.

    2015-01-01

    This investigation used climatic, geological, and environmental data coupled with observational stream intermittency data to predict alpine headwater stream intermittency. Prediction was made using a random forest classification model. Results showed that the most important variables in the prediction model were snowpack persistence, represented by average snow extent from March through July, mean annual mean monthly minimum temperature, and surface geology types. For stream catchments with intermittent headwater streams, snowpack, on average, persisted until early June, whereas for stream catchments with perennial headwater streams, snowpack, on average, persisted until early July. Additionally, on average, stream catchments with intermittent headwater streams were about 0.7 °C warmer than stream catchments with perennial headwater streams. Finally, headwater stream catchments primarily underlain by coarse, permeable sediment are significantly more likely to have intermittent headwater streams than those primarily underlain by impermeable bedrock. Comparison of the predicted streamflow classification with observed stream status indicated a four percent classification error for first-order streams and a 21 percent classification error for all stream orders in the study area.

  3. A framework for software fault tolerance in real-time systems

    NASA Technical Reports Server (NTRS)

    Anderson, T.; Knight, J. C.

    1983-01-01

    A classification scheme for errors and a technique for the provision of software fault tolerance in cyclic real-time systems are presented. The technique requires that the process structure of a system be represented by a synchronization graph, which is used by an executive as a specification of the relative times at which processes will communicate during execution. Communication between concurrent processes is severely limited and may only take place between processes engaged in an exchange. A history of error occurrences is maintained by an error handler. When an error is detected, the error handler classifies it using the error history information and then initiates appropriate recovery action.

  4. Software platform for managing the classification of error- related potentials of observers

    NASA Astrophysics Data System (ADS)

    Asvestas, P.; Ventouras, E.-C.; Kostopoulos, S.; Sidiropoulos, K.; Korfiatis, V.; Korda, A.; Uzunolglu, A.; Karanasiou, I.; Kalatzis, I.; Matsopoulos, G.

    2015-09-01

    Human learning is partly based on observation. Electroencephalographic recordings of subjects who perform acts (actors) or observe actors (observers), contain a negative waveform in the Evoked Potentials (EPs) of the actors that commit errors and of observers who observe the error-committing actors. This waveform is called the Error-Related Negativity (ERN). Its detection has applications in the context of Brain-Computer Interfaces. The present work describes a software system developed for managing EPs of observers, with the aim of classifying them into observations of either correct or incorrect actions. It consists of an integrated platform for the storage, management, processing and classification of EPs recorded during error-observation experiments. The system was developed using C# and the following development tools and frameworks: MySQL, .NET Framework, Entity Framework and Emgu CV, for interfacing with the machine learning library of OpenCV. Up to six features can be computed per EP recording per electrode. The user can select among various feature selection algorithms and then proceed to train one of three types of classifiers: Artificial Neural Networks, Support Vector Machines, k-nearest neighbour. Next the classifier can be used for classifying any EP curve that has been inputted to the database.

  5. Satellite inventory of Minnesota forest resources

    NASA Technical Reports Server (NTRS)

    Bauer, Marvin E.; Burk, Thomas E.; Ek, Alan R.; Coppin, Pol R.; Lime, Stephen D.; Walsh, Terese A.; Walters, David K.; Befort, William; Heinzen, David F.

    1993-01-01

    The methods and results of using Landsat Thematic Mapper (TM) data to classify and estimate the acreage of forest covertypes in northeastern Minnesota are described. Portions of six TM scenes covering five counties with a total area of 14,679 square miles were classified into six forest and five nonforest classes. The approach involved the integration of cluster sampling, image processing, and estimation. Using cluster sampling, 343 plots, each 88 acres in size, were photo interpreted and field mapped as a source of reference data for classifier training and calibration of the TM data classifications. Classification accuracies of up to 75 percent were achieved; most misclassification was between similar or related classes. An inverse method of calibration, based on the error rates obtained from the classifications of the cluster plots, was used to adjust the classification class proportions for classification errors. The resulting area estimates for total forest land in the five-county area were within 3 percent of the estimate made independently by the USDA Forest Service. Area estimates for conifer and hardwood forest types were within 0.8 and 6.0 percent respectively, of the Forest Service estimates. A trial of a second method of estimating the same classes as the Forest Service resulted in standard errors of 0.002 to 0.015. A study of the use of multidate TM data for change detection showed that forest canopy depletion, canopy increment, and no change could be identified with greater than 90 percent accuracy. The project results have been the basis for the Minnesota Department of Natural Resources and the Forest Service to define and begin to implement an annual system of forest inventory which utilizes Landsat TM data to detect changes in forest cover.
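
    The inverse calibration mentioned above can be sketched generically: if the error rates from the reference plots are arranged as a matrix of conditional probabilities P(mapped class j | true class i), the mapped class proportions are approximately that matrix's transpose applied to the true proportions, so a calibrated estimate is obtained by solving the linear system. The matrix and proportions below are hypothetical, and this shows the general idea rather than the exact estimator used in the project:

        import numpy as np

        # hypothetical error matrix: row i gives P(mapped as class j | true class i)
        P = np.array([[0.80, 0.15, 0.05],
                      [0.10, 0.75, 0.15],
                      [0.05, 0.20, 0.75]])

        mapped = np.array([0.45, 0.35, 0.20])   # class proportions from the raw classification

        # mapped is approximately P.T @ true, so invert the confusion structure
        true_est = np.linalg.solve(P.T, mapped)
        true_est = np.clip(true_est, 0.0, None)
        true_est /= true_est.sum()              # renormalise to proportions
        print(true_est)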

  6. Acoustic Biometric System Based on Preprocessing Techniques and Linear Support Vector Machines

    PubMed Central

    del Val, Lara; Izquierdo-Fuente, Alberto; Villacorta, Juan J.; Raboso, Mariano

    2015-01-01

    Drawing on the results of an acoustic biometric system based on a MSE classifier, a new biometric system has been implemented. This new system preprocesses acoustic images, extracts several parameters and finally classifies them, based on Support Vector Machine (SVM). The preprocessing techniques used are spatial filtering, segmentation—based on a Gaussian Mixture Model (GMM) to separate the person from the background, masking—to reduce the dimensions of images—and binarization—to reduce the size of each image. An analysis of classification error and a study of the sensitivity of the error versus the computational burden of each implemented algorithm are presented. This allows the selection of the most relevant algorithms, according to the benefits required by the system. A significant improvement of the biometric system has been achieved by reducing the classification error, the computational burden and the storage requirements. PMID:26091392

  7. Acoustic Biometric System Based on Preprocessing Techniques and Linear Support Vector Machines.

    PubMed

    del Val, Lara; Izquierdo-Fuente, Alberto; Villacorta, Juan J; Raboso, Mariano

    2015-06-17

    Drawing on the results of an acoustic biometric system based on a MSE classifier, a new biometric system has been implemented. This new system preprocesses acoustic images, extracts several parameters and finally classifies them, based on Support Vector Machine (SVM). The preprocessing techniques used are spatial filtering, segmentation-based on a Gaussian Mixture Model (GMM) to separate the person from the background, masking-to reduce the dimensions of images-and binarization-to reduce the size of each image. An analysis of classification error and a study of the sensitivity of the error versus the computational burden of each implemented algorithm are presented. This allows the selection of the most relevant algorithms, according to the benefits required by the system. A significant improvement of the biometric system has been achieved by reducing the classification error, the computational burden and the storage requirements.

  8. Multi-factorial analysis of class prediction error: estimating optimal number of biomarkers for various classification rules.

    PubMed

    Khondoker, Mizanur R; Bachmann, Till T; Mewissen, Muriel; Dickinson, Paul; Dobrzelecki, Bartosz; Campbell, Colin J; Mount, Andrew R; Walton, Anthony J; Crain, Jason; Schulze, Holger; Giraud, Gerard; Ross, Alan J; Ciani, Ilenia; Ember, Stuart W J; Tlili, Chaker; Terry, Jonathan G; Grant, Eilidh; McDonnell, Nicola; Ghazal, Peter

    2010-12-01

    Machine learning and statistical model based classifiers have increasingly been used with more complex and high dimensional biological data obtained from high-throughput technologies. Understanding the impact of various factors associated with large and complex microarray datasets on the predictive performance of classifiers is computationally intensive, under investigated, yet vital in determining the optimal number of biomarkers for various classification purposes aimed towards improved detection, diagnosis, and therapeutic monitoring of diseases. We investigate the impact of microarray based data characteristics on the predictive performance for various classification rules using simulation studies. Our investigation using Random Forest, Support Vector Machines, Linear Discriminant Analysis and k-Nearest Neighbour shows that the predictive performance of classifiers is strongly influenced by training set size, biological and technical variability, replication, fold change and correlation between biomarkers. Optimal number of biomarkers for a classification problem should therefore be estimated taking account of the impact of all these factors. A database of average generalization errors is built for various combinations of these factors. The database of generalization errors can be used for estimating the optimal number of biomarkers for given levels of predictive accuracy as a function of these factors. Examples show that curves from actual biological data resemble that of simulated data with corresponding levels of data characteristics. An R package optBiomarker implementing the method is freely available for academic use from the Comprehensive R Archive Network (http://www.cran.r-project.org/web/packages/optBiomarker/).

  9. Predictive classification of self-paced upper-limb analytical movements with EEG.

    PubMed

    Ibáñez, Jaime; Serrano, J I; del Castillo, M D; Minguez, J; Pons, J L

    2015-11-01

    The extent to which the electroencephalographic activity allows the characterization of movements with the upper limb is an open question. This paper describes the design and validation of a classifier of upper-limb analytical movements based on electroencephalographic activity extracted from intervals preceding self-initiated movement tasks. Features selected for the classification are subject specific and associated with the movement tasks. Further tests are performed to reject the hypothesis that other information different from the task-related cortical activity is being used by the classifiers. Six healthy subjects were measured performing self-initiated upper-limb analytical movements. A Bayesian classifier was used to classify among seven different kinds of movements. Features considered covered the alpha and beta bands. A genetic algorithm was used to optimally select a subset of features for the classification. An average accuracy of 62.9 ± 7.5% was reached, which was above the baseline level observed with the proposed methodology (30.2 ± 4.3%). The study shows how the electroencephalography carries information about the type of analytical movement performed with the upper limb and how it can be decoded before the movement begins. In neurorehabilitation environments, this information could be used for monitoring and assisting purposes.

  10. Detection of Periodic Leg Movements by Machine Learning Methods Using Polysomnographic Parameters Other Than Leg Electromyography

    PubMed Central

    Umut, İlhan; Çentik, Güven

    2016-01-01

    The number of channels used for polysomnographic recording frequently causes difficulties for patients because of the many cables connected. It also increases the risk of trouble during the recording process and increases the storage volume. In this study, the aim is to detect periodic leg movement (PLM) in sleep using channels other than leg electromyography (EMG), by analysing polysomnography (PSG) data with digital signal processing (DSP) and machine learning methods. PSG records of 153 patients of different ages and genders with a PLM disorder diagnosis were examined retrospectively. Novel software was developed for the analysis of the PSG records. The software utilizes machine learning algorithms, statistical methods, and DSP methods. In order to classify PLM, popular machine learning methods (multilayer perceptron, K-nearest neighbour, and random forests) and logistic regression were used. Comparison of the classification results showed that, while the K-nearest neighbour algorithm had a higher average classification rate (91.87%) and a lower average classification error value (RMSE = 0.2850), the multilayer perceptron algorithm had the lowest average classification rate (83.29%) and the highest average classification error value (RMSE = 0.3705). The results showed that PLM can be classified with high accuracy (91.87%) without a leg EMG record being present. PMID:27213008

  11. Detection of Periodic Leg Movements by Machine Learning Methods Using Polysomnographic Parameters Other Than Leg Electromyography.

    PubMed

    Umut, İlhan; Çentik, Güven

    2016-01-01

    The number of channels used for polysomnographic recording frequently causes difficulties for patients because of the many cables connected. It also increases the risk of trouble during the recording process and increases the storage volume. In this study, the aim is to detect periodic leg movement (PLM) in sleep using channels other than leg electromyography (EMG), by analysing polysomnography (PSG) data with digital signal processing (DSP) and machine learning methods. PSG records of 153 patients of different ages and genders with a PLM disorder diagnosis were examined retrospectively. Novel software was developed for the analysis of the PSG records. The software utilizes machine learning algorithms, statistical methods, and DSP methods. In order to classify PLM, popular machine learning methods (multilayer perceptron, K-nearest neighbour, and random forests) and logistic regression were used. Comparison of the classification results showed that, while the K-nearest neighbour algorithm had a higher average classification rate (91.87%) and a lower average classification error value (RMSE = 0.2850), the multilayer perceptron algorithm had the lowest average classification rate (83.29%) and the highest average classification error value (RMSE = 0.3705). The results showed that PLM can be classified with high accuracy (91.87%) without a leg EMG record being present.

  12. Effects of stress typicality during speeded grammatical classification.

    PubMed

    Arciuli, Joanne; Cupples, Linda

    2003-01-01

    The experiments reported here were designed to investigate the influence of stress typicality during speeded grammatical classification of disyllabic English words by native and non-native speakers. Trochaic nouns and iambic verbs were considered to be typically stressed, whereas iambic nouns and trochaic verbs were considered to be atypically stressed. Experiments 1a and 2a showed that while native speakers classified typically stressed words more quickly and more accurately than atypically stressed words during reading, there were no overall effects during classification of spoken stimuli. However, a subgroup of native speakers with high error rates did show a significant effect during classification of spoken stimuli. Experiments 1b and 2b showed that non-native speakers classified typically stressed words more quickly and more accurately than atypically stressed words during reading. Typically stressed words were classified more accurately than atypically stressed words when the stimuli were spoken. Importantly, there was a significant relationship between error rates, vocabulary size and the size of the stress typicality effect in each experiment. We conclude that participants use information about lexical stress to help them distinguish between disyllabic nouns and verbs during speeded grammatical classification. This is especially so for individuals with a limited vocabulary who lack other knowledge (e.g., semantic knowledge) about the differences between these grammatical categories.

  13. Characterizing Articulation in Apraxic Speech Using Real-Time Magnetic Resonance Imaging.

    PubMed

    Hagedorn, Christina; Proctor, Michael; Goldstein, Louis; Wilson, Stephen M; Miller, Bruce; Gorno-Tempini, Maria Luisa; Narayanan, Shrikanth S

    2017-04-14

    Real-time magnetic resonance imaging (MRI) and accompanying analytical methods are shown to capture and quantify salient aspects of apraxic speech, substantiating and expanding upon evidence provided by clinical observation and acoustic and kinematic data. Analysis of apraxic speech errors within a dynamic systems framework is provided and the nature of pathomechanisms of apraxic speech discussed. One adult male speaker with apraxia of speech was imaged using real-time MRI while producing spontaneous speech, repeated naming tasks, and self-paced repetition of word pairs designed to elicit speech errors. Articulatory data were analyzed, and speech errors were detected using time series reflecting articulatory activity in regions of interest. Real-time MRI captured two types of apraxic gestural intrusion errors in a word pair repetition task. Gestural intrusion errors in nonrepetitive speech, multiple silent initiation gestures at the onset of speech, and covert (unphonated) articulation of entire monosyllabic words were also captured. Real-time MRI and accompanying analytical methods capture and quantify many features of apraxic speech that have been previously observed using other modalities while offering high spatial resolution. This patient's apraxia of speech affected the ability to select only the appropriate vocal tract gestures for a target utterance, suppressing others, and to coordinate them in time.

  14. Automatic classification of diseases from free-text death certificates for real-time surveillance.

    PubMed

    Koopman, Bevan; Karimi, Sarvnaz; Nguyen, Anthony; McGuire, Rhydwyn; Muscatello, David; Kemp, Madonna; Truran, Donna; Zhang, Ming; Thackway, Sarah

    2015-07-15

    Death certificates provide an invaluable source for mortality statistics which can be used for surveillance and early warnings of increases in disease activity and to support the development and monitoring of prevention or response strategies. However, their value can be realised only if accurate, quantitative data can be extracted from death certificates, an aim hampered by both the volume and variable nature of certificates written in natural language. This study aims to develop a set of machine learning and rule-based methods to automatically classify death certificates according to four high impact diseases of interest: diabetes, influenza, pneumonia and HIV. Two classification methods are presented: i) a machine learning approach, where detailed features (terms, term n-grams and SNOMED CT concepts) are extracted from death certificates and used to train a set of supervised machine learning models (Support Vector Machines); and ii) a set of keyword-matching rules. These methods were used to identify the presence of diabetes, influenza, pneumonia and HIV in a death certificate. An empirical evaluation was conducted using 340,142 death certificates, divided between training and test sets, covering deaths from 2000-2007 in New South Wales, Australia. Precision and recall (positive predictive value and sensitivity) were used as evaluation measures, with F-measure providing a single, overall measure of effectiveness. A detailed error analysis was performed on classification errors. Classification of diabetes, influenza, pneumonia and HIV was highly accurate (F-measure 0.96). More fine-grained ICD-10 classification effectiveness was more variable but still high (F-measure 0.80). The error analysis revealed that word variations as well as certain word combinations adversely affected classification. In addition, anomalies in the ground truth likely led to an underestimation of the effectiveness. The high accuracy and low cost of the classification methods allow for an effective means for automatic and real-time surveillance of diabetes, influenza, pneumonia and HIV deaths. In addition, the methods are generally applicable to other diseases of interest and to other sources of medical free-text besides death certificates.
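
    The rule-based arm of the study can be illustrated with a small keyword matcher; the patterns below are hypothetical stand-ins, far shorter than the study's actual rule sets, and the machine learning arm with term, n-gram, and SNOMED CT features is not reproduced:

        import re

        KEYWORDS = {
            "diabetes":  [r"\bdiabet\w*"],
            "influenza": [r"\binfluenza\b", r"\bflu\b"],
            "pneumonia": [r"\bpneumonia\b", r"\bbronchopneumonia\b"],
            "HIV":       [r"\bhiv\b", r"\bacquired immunodeficiency\b"],
        }

        def classify_certificate(text):
            """Return the target diseases whose keywords appear in the free-text cause of death."""
            text = text.lower()
            return {disease for disease, patterns in KEYWORDS.items()
                    if any(re.search(p, text) for p in patterns)}

        print(classify_certificate("Cause of death: bronchopneumonia secondary to influenza A."))
        # prints the set {'influenza', 'pneumonia'}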

  15. Applying Lean Sigma solutions to mistake-proof the chemotherapy preparation process.

    PubMed

    Aboumatar, Hanan J; Winner, Laura; Davis, Richard; Peterson, Aisha; Hill, Richard; Frank, Susan; Almuete, Virna; Leung, T Vivian; Trovitch, Peter; Farmer, Denise

    2010-02-01

    Errors related to high-alert medications, such as chemotherapeutic agents, have resulted in serious adverse events. A fast-paced application of Lean Sigma methodology was used to safeguard the chemotherapy preparation process against errors and increase compliance with United States Pharmacopeia 797 (USP 797) regulations. On Days 1 and 2 of a Lean Sigma workshop, frontline staff studied the chemotherapy preparation process. During Days 2 and 3, interventions were developed and implementation was started. The workshop participants were satisfied with the speed at which improvements were put to place using the structured workshop format. The multiple opportunities for error identified related to the chemotherapy preparation process, workspace layout, distractions, increased movement around ventilated hood areas, and variation in medication processing and labeling procedures. Mistake-proofing interventions were then introduced via workspace redesign, process redesign, and development of standard operating procedures for pharmacy staff. Interventions were easy to implement and sustainable. Reported medication errors reaching patients and requiring monitoring decreased, whereas the number of reported near misses increased, suggesting improvement in identifying errors before reaching the patients. Application of Lean Sigma solutions enabled the development of a series of relatively inexpensive and easy to implement mistake-proofing interventions that reduce the likelihood of chemotherapy preparation errors and increase compliance with USP 797 regulations. The findings and interventions are generalizable and can inform mistake-proofing interventions in all types of pharmacies.

  16. Measures of Linguistic Accuracy in Second Language Writing Research.

    ERIC Educational Resources Information Center

    Polio, Charlene G.

    1997-01-01

    Investigates the reliability of measures of linguistic accuracy in second language writing. The study uses a holistic scale, error-free T-units, and an error classification system on the essays of English-as-a-Second-Language students and discusses why disagreements arise within a rater and between raters. (24 references) (Author/CK)

  17. [Study of inversion and classification of particle size distribution under dependent model algorithm].

    PubMed

    Sun, Xiao-Gang; Tang, Hong; Yuan, Gui-Bin

    2008-05-01

    For the total light scattering particle sizing technique, an inversion and classification method based on the dependent model algorithm was proposed. The measured particle system was inverted simultaneously with different particle distribution functions whose mathematical model was known in advance, and then classified according to the inversion errors. The simulation experiments illustrated that it is feasible to use the inversion errors to determine the particle size distribution. The particle size distribution function was obtained accurately at only three wavelengths in the visible range with the genetic algorithm, and the inversion results were steady and reliable, which minimized the number of wavelengths required and increased the freedom in selecting the light source. The single-peak distribution inversion error was less than 5% and the bimodal distribution inversion error was less than 10% when 5% stochastic noise was added to the transmission extinction measurement values at two wavelengths. The running time of this method was less than 2 s. The method has the advantages of simplicity, rapidity, and suitability for on-line particle size measurement.

  18. Classification accuracy for stratification with remotely sensed data

    Treesearch

    Raymond L. Czaplewski; Paul L. Patterson

    2003-01-01

    Tools are developed that help specify the classification accuracy required from remotely sensed data. These tools are applied during the planning stage of a sample survey that will use poststratification, prestratification with proportional allocation, or double sampling for stratification. Accuracy standards are developed in terms of an “error matrix,” which is...

  19. Comparison of wheat classification accuracy using different classifiers of the image-100 system

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Chen, S. C.; Moreira, M. A.; Delima, A. M.

    1981-01-01

    Classification results using single-cell and multi-cell signature acquisition options, a point-by-point Gaussian maximum-likelihood classifier, and K-means clustering of the Image-100 system are presented. Conclusions reached are that: a better indication of correct classification can be provided by using a test area which contains various cover types of the study area; classification accuracy should be evaluated considering both the percentages of correct classification and error of commission; supervised classification approaches are better than K-means clustering; Gaussian distribution maximum likelihood classifier is better than Single-cell and Multi-cell Signature Acquisition Options of the Image-100 system; and in order to obtain a high classification accuracy in a large and heterogeneous crop area, using Gaussian maximum-likelihood classifier, homogeneous spectral subclasses of the study crop should be created to derive training statistics.

  20. Operating safely in surgery and critical care with perioperative automation.

    PubMed

    Grover, Christopher; Barney, Kate

    2004-01-01

    A study by the Institute of Medicine (IOM) found that as many as 98,000 Americans die each year from preventable medical errors. These findings, combined with a growing spate of negative publicity, have brought patient safety to its rightful place at the healthcare forefront. Nowhere are patient safety issues more critical than in the anesthesia, surgery and critical care environments. These high-acuity settings--with their fast pace, complex and rapidly changing care regimens and mountains of diverse clinical data--arguably pose the greatest patient safety risk in the hospital.

  1. Radiation effects in advanced microelectronics technologies

    NASA Astrophysics Data System (ADS)

    Johnston, A. H.

    1998-06-01

    The pace of device scaling has increased rapidly in recent years. Experimental CMOS devices have been produced with feature sizes below 0.1 μm, demonstrating that devices with feature sizes between 0.1 and 0.25 μm will likely be available in mainstream technologies after the year 2000. This paper discusses how the anticipated changes in device dimensions and design are likely to affect their radiation response in space environments. Traditional problems, such as total dose effects, SEU and latchup are discussed, along with new phenomena. The latter include hard errors from heavy ions (microdose and gate-rupture errors), and complex failure modes related to advanced circuit architecture. The main focus of the paper is on commercial devices, which are displacing hardened device technologies in many space applications. However, the impact of device scaling on hardened devices is also discussed.

  2. Ensemble of classifiers for confidence-rated classification of NDE signal

    NASA Astrophysics Data System (ADS)

    Banerjee, Portia; Safdarnejad, Seyed; Udpa, Lalita; Udpa, Satish

    2016-02-01

    An ensemble of classifiers, in general, aims to improve classification accuracy by combining results from multiple weak hypotheses into a single strong classifier through weighted majority voting. Improved versions of classifier ensembles generate self-rated confidence scores which estimate the reliability of each prediction and boost the classifier using these confidence-rated predictions. However, such a confidence metric is based only on the rate of correct classification. Although ensembles of classifiers have been widely used in computational intelligence, the effect of all sources of unreliability on the confidence of classification has been largely overlooked in existing work. With relevance to NDE, classification results are affected by the inherent ambiguity of classification, non-discriminative features, inadequate training samples and measurement noise. In this paper, we extend existing ensemble classification by maximizing the confidence of every classification decision in addition to minimizing the classification error. Initial results of the approach on data from eddy current inspection show improvement in the classification performance of defect and non-defect indications.
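
    A minimal sketch of the general confidence-weighted voting idea (not the authors' NDE-specific confidence metric): several shallow trees are trained on bootstrap samples and combined with votes weighted by their validation accuracy. The data set, weighting formula and split sizes are illustrative assumptions.

      # Minimal sketch: combine weak classifiers by confidence-weighted majority voting,
      # where each classifier's "confidence" is its accuracy on a validation split.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split
      from sklearn.tree import DecisionTreeClassifier

      X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
      X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
      X_val, X_te, y_val, y_te = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

      ensemble, weights = [], []
      for seed in range(7):
          # Each weak learner is a shallow tree trained on a bootstrap sample.
          idx = np.random.RandomState(seed).randint(0, len(X_tr), len(X_tr))
          clf = DecisionTreeClassifier(max_depth=2, random_state=seed).fit(X_tr[idx], y_tr[idx])
          acc = clf.score(X_val, y_val)                      # validation accuracy as "confidence"
          ensemble.append(clf)
          weights.append(np.log(acc / (1.0 - acc + 1e-12)))  # higher confidence -> larger vote

      def vote(x_batch):
          tally = np.zeros(len(x_batch))
          for clf, w in zip(ensemble, weights):
              tally += w * (2 * clf.predict(x_batch) - 1)    # map {0,1} labels to {-1,+1}
          return (tally > 0).astype(int)

      print("single-tree test accuracy:  ", ensemble[0].score(X_te, y_te))
      print("weighted-vote test accuracy:", (vote(X_te) == y_te).mean())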

  3. On the statistical assessment of classifiers using DNA microarray data

    PubMed Central

    Ancona, N; Maglietta, R; Piepoli, A; D'Addabbo, A; Cotugno, R; Savino, M; Liuni, S; Carella, M; Pesole, G; Perri, F

    2006-01-01

    Background In this paper we present a method for the statistical assessment of cancer predictors which make use of gene expression profiles. The methodology is applied to a new data set of microarray gene expression data collected in Casa Sollievo della Sofferenza Hospital, Foggia – Italy. The data set is made up of normal (22) and tumor (25) specimens extracted from 25 patients affected by colon cancer. We propose to give answers to some questions which are relevant for the automatic diagnosis of cancer such as: Is the size of the available data set sufficient to build accurate classifiers? What is the statistical significance of the associated error rates? In what ways can accuracy be considered dependent on the adopted classification scheme? How many genes are correlated with the pathology and how many are sufficient for an accurate colon cancer classification? The method we propose answers these questions whilst avoiding the potential pitfalls hidden in the analysis and interpretation of microarray data. Results We estimate the generalization error, evaluated through the Leave-K-Out Cross Validation error, for three different classification schemes by varying the number of training examples and the number of genes used. The statistical significance of the error rate is measured by using a permutation test. We provide a statistical analysis in terms of the frequencies of the genes involved in the classification. Using the whole set of genes, we found that the Weighted Voting Algorithm (WVA) classifier learns the distinction between normal and tumor specimens with 25 training examples, providing e = 21% (p = 0.045) as an error rate. This remains constant even when the number of examples increases. Moreover, Regularized Least Squares (RLS) and Support Vector Machines (SVM) classifiers can learn with only 15 training examples, with an error rate of e = 19% (p = 0.035) and e = 18% (p = 0.037) respectively. Moreover, the error rate decreases as the training set size increases, reaching its best performances with 35 training examples. In this case, RLS and SVM have error rates of e = 14% (p = 0.027) and e = 11% (p = 0.019). Concerning the number of genes, we found about 6000 genes (p < 0.05) correlated with the pathology, resulting from the signal-to-noise statistic. Moreover, the performances of the RLS and SVM classifiers do not change when 74% of the genes are used. Their performance progressively degrades, reaching e = 16% (p < 0.05) when only 2 genes are employed. The biological relevance of a set of genes determined by our statistical analysis and the major roles they play in colorectal tumorigenesis are discussed. Conclusions The method proposed provides statistically significant answers to precise questions relevant for the diagnosis and prognosis of cancer. We found that, with as few as 15 examples, it is possible to train statistically significant classifiers for colon cancer diagnosis. As for the definition of the number of genes sufficient for a reliable classification of colon cancer, our results suggest that it depends on the accuracy required. PMID:16919171
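
    The permutation-test idea described above can be sketched with a standard cross-validated label-permutation test; the snippet below uses synthetic small-n, large-p data as a stand-in for microarray profiles and is not the authors' exact protocol (which also varies training-set size and gene counts).

      # Minimal sketch: assess the statistical significance of a cross-validated
      # classification score with a label-permutation test on synthetic data.
      from sklearn.datasets import make_classification
      from sklearn.svm import SVC
      from sklearn.model_selection import permutation_test_score, StratifiedKFold

      X, y = make_classification(n_samples=47, n_features=500, n_informative=20,
                                 random_state=0)       # small-n, large-p, microarray-like
      cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
      score, perm_scores, p_value = permutation_test_score(
          SVC(kernel="linear"), X, y, cv=cv, n_permutations=200, random_state=0)
      print(f"accuracy = {score:.2f}, permutation p-value = {p_value:.3f}")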

  4. Evaluation of process errors in bed load sampling using a Dune Model

    USGS Publications Warehouse

    Gomez, Basil; Troutman, Brent M.

    1997-01-01

    Reliable estimates of the streamwide bed load discharge obtained using sampling devices are dependent upon good at-a-point knowledge across the full width of the channel. Using field data and information derived from a model that describes the geometric features of a dune train in terms of a spatial process observed at a fixed point in time, we show that sampling errors decrease as the number of samples collected increases, and the number of traverses of the channel over which the samples are collected increases. It also is preferable that bed load sampling be conducted at a pace which allows a number of bed forms to pass through the sampling cross section. The situations we analyze and simulate pertain to moderate transport conditions in small rivers. In such circumstances, bed load sampling schemes typically should involve four or five traverses of a river, and the collection of 20–40 samples at a rate of five or six samples per hour. By ensuring that spatial and temporal variability in the transport process is accounted for, such a sampling design reduces both random and systematic errors and hence minimizes the total error involved in the sampling process.

  5. Speech variability effects on recognition accuracy associated with concurrent task performance by pilots

    NASA Technical Reports Server (NTRS)

    Simpson, C. A.

    1985-01-01

    In the present study of the responses of pairs of pilots to aircraft warning classification tasks using an isolated word, speaker-dependent speech recognition system, the induced stress was manipulated by means of different scoring procedures for the classification task and by the inclusion of a competitive manual control task. Both speech patterns and recognition accuracy were analyzed, and recognition errors were recorded by type for an isolated word speaker-dependent system and by an offline technique for a connected word speaker-dependent system. While errors increased with task loading for the isolated word system, there was no such effect for task loading in the case of the connected word system.

  6. Robust Transmission of H.264/AVC Streams Using Adaptive Group Slicing and Unequal Error Protection

    NASA Astrophysics Data System (ADS)

    Thomos, Nikolaos; Argyropoulos, Savvas; Boulgouris, Nikolaos V.; Strintzis, Michael G.

    2006-12-01

    We present a novel scheme for the transmission of H.264/AVC video streams over lossy packet networks. The proposed scheme exploits the error-resilient features of H.264/AVC codec and employs Reed-Solomon codes to protect effectively the streams. A novel technique for adaptive classification of macroblocks into three slice groups is also proposed. The optimal classification of macroblocks and the optimal channel rate allocation are achieved by iterating two interdependent steps. Dynamic programming techniques are used for the channel rate allocation process in order to reduce complexity. Simulations clearly demonstrate the superiority of the proposed method over other recent algorithms for transmission of H.264/AVC streams.

  7. The pot calling the kettle black: the extent and type of errors in a computerized immunization registry and by parent report.

    PubMed

    MacDonald, Shannon E; Schopflocher, Donald P; Golonka, Richard P

    2014-01-04

    Accurate classification of children's immunization status is essential for clinical care, administration and evaluation of immunization programs, and vaccine program research. Computerized immunization registries have been proposed as a valuable alternative to provider paper records or parent report, but there is a need to better understand the challenges associated with their use. This study assessed the accuracy of immunization status classification in an immunization registry as compared to parent report and determined the number and type of errors occurring in both sources. This study was a sub-analysis of a larger study which compared the characteristics of children whose immunizations were up to date (UTD) at two years as compared to those not UTD. Children's immunization status was initially determined from a population-based immunization registry, and then compared to parent report of immunization status, as reported in a postal survey. Discrepancies between the two sources were adjudicated by review of immunization providers' hard-copy clinic records. Descriptive analyses included calculating proportions and confidence intervals for errors in classification and reporting of the type and frequency of errors. Among the 461 survey respondents, there were 60 discrepancies in immunization status. The majority of errors were due to parent report (n = 44), but the registry was not without fault (n = 16). Parents tended to erroneously report their child as UTD, whereas the registry was more likely to wrongly classify children as not UTD. Reasons for registry errors included failure to account for varicella disease history, variable number of doses required due to age at series initiation, and doses administered out of the region. These results confirm that parent report is often flawed, but also identify that registries are prone to misclassification of immunization status. Immunization program administrators and researchers need to institute measures to identify and reduce misclassification, in order for registries to play an effective role in the control of vaccine-preventable disease.

  8. The pot calling the kettle black: the extent and type of errors in a computerized immunization registry and by parent report

    PubMed Central

    2014-01-01

    Background Accurate classification of children’s immunization status is essential for clinical care, administration and evaluation of immunization programs, and vaccine program research. Computerized immunization registries have been proposed as a valuable alternative to provider paper records or parent report, but there is a need to better understand the challenges associated with their use. This study assessed the accuracy of immunization status classification in an immunization registry as compared to parent report and determined the number and type of errors occurring in both sources. Methods This study was a sub-analysis of a larger study which compared the characteristics of children whose immunizations were up to date (UTD) at two years as compared to those not UTD. Children’s immunization status was initially determined from a population-based immunization registry, and then compared to parent report of immunization status, as reported in a postal survey. Discrepancies between the two sources were adjudicated by review of immunization providers’ hard-copy clinic records. Descriptive analyses included calculating proportions and confidence intervals for errors in classification and reporting of the type and frequency of errors. Results Among the 461 survey respondents, there were 60 discrepancies in immunization status. The majority of errors were due to parent report (n = 44), but the registry was not without fault (n = 16). Parents tended to erroneously report their child as UTD, whereas the registry was more likely to wrongly classify children as not UTD. Reasons for registry errors included failure to account for varicella disease history, variable number of doses required due to age at series initiation, and doses administered out of the region. Conclusions These results confirm that parent report is often flawed, but also identify that registries are prone to misclassification of immunization status. Immunization program administrators and researchers need to institute measures to identify and reduce misclassification, in order for registries to play an effective role in the control of vaccine-preventable disease. PMID:24387002

  9. Bayesian Network Structure Learning for Urban Land Use Classification from Landsat ETM+ and Ancillary Data

    NASA Astrophysics Data System (ADS)

    Park, M.; Stenstrom, M. K.

    2004-12-01

    Recognizing urban information from satellite imagery is problematic due to the diverse features and dynamic changes of urban land use. The use of Landsat imagery for urban land use classification involves inherent uncertainty due to its spatial resolution and the low separability among land uses. To resolve the uncertainty problem, we investigated the performance of Bayesian networks for classifying urban land use, since Bayesian networks provide a quantitative way of handling uncertainty and have been successfully used in many areas. In this study, we developed optimized networks for urban land use classification from Landsat ETM+ images of the Marina del Rey area based on USGS land cover/use classification level III. The networks started from a tree structure based on mutual information between variables and added links to improve accuracy. This methodology offers several advantages: (1) The network structure shows the dependency relationships between variables. The class node value can be predicted even with particular band information missing due to sensor system error, because the missing information can be inferred from other dependent bands. (2) The network structure provides information about which variables are important for the classification, which is not available from conventional classification methods such as neural networks and maximum likelihood classification. In our case, for example, bands 1, 5 and 6 are the most important inputs in determining the land use of each pixel. (3) The networks can be reduced to the input variables important for classification, which shrinks the problem so that not all possible variables need to be considered. We also examined the effect of incorporating ancillary data: geospatial information such as the X and Y coordinate values of each pixel and DEM data, and vegetation indices such as NDVI and the Tasseled Cap transformation. The results showed that the locational information improved overall accuracy (81%) and kappa coefficient (76%), and lowered the omission and commission errors compared with using only spectral data (accuracy 71%, kappa coefficient 62%). Incorporating DEM data did not significantly improve overall accuracy (74%) and kappa coefficient (66%) but lowered the omission and commission errors. Incorporating NDVI did not substantially improve the overall accuracy (72%) and kappa coefficient (65%). Including the Tasseled Cap transformation reduced the accuracy (accuracy 70%, kappa 61%). Therefore, the additional information from the DEM and vegetation indices was not as useful as the locational ancillary data.
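
    A minimal sketch of the tree-structured starting point described above: pairwise mutual information is computed between discretized band values and a maximum-weight spanning tree is kept as the initial dependency skeleton (a Chow-Liu-style step, which is my reading of the abstract). The band data are synthetic stand-ins for Landsat ETM+ pixels, and the discretization is an assumed preprocessing choice.

      # Minimal sketch: pairwise mutual information between bands, then a
      # maximum-MI spanning tree as the initial Bayesian-network skeleton.
      import numpy as np
      from sklearn.metrics import mutual_info_score
      from scipy.sparse.csgraph import minimum_spanning_tree

      rng = np.random.default_rng(0)
      n_pixels, n_bands = 5000, 6
      base = rng.integers(0, 16, size=n_pixels)                 # shared "scene" component
      bands = np.clip(base[:, None] + rng.integers(0, 6, size=(n_pixels, n_bands)), 0, 20)

      mi = np.zeros((n_bands, n_bands))
      for i in range(n_bands):
          for j in range(i + 1, n_bands):
              mi[i, j] = mutual_info_score(bands[:, i], bands[:, j])

      # A maximum-MI tree is a minimum spanning tree over positive "distances".
      dist = np.where(mi > 0, mi.max() + 1e-9 - mi, 0.0)
      tree = minimum_spanning_tree(dist).toarray()
      edges = [(int(i), int(j), round(mi[i, j], 3)) for i, j in zip(*np.nonzero(tree))]
      print("initial tree skeleton (band_i, band_j, MI):", edges)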

  10. Gait performance is not influenced by working memory when walking at a self-selected pace.

    PubMed

    Grubaugh, Jordan; Rhea, Christopher K

    2014-02-01

    Gait performance exhibits patterns within the stride-to-stride variability that can be indexed using detrended fluctuation analysis (DFA). Previous work employing DFA has shown that gait patterns can be influenced by constraints, such as natural aging or disease, and they are informative regarding a person's functional ability. Many activities of daily living require concurrent performance in the cognitive and gait domains; specifically working memory is commonly engaged while walking, which is considered dual-tasking. It is unknown if taxing working memory while walking influences gait performance as assessed by DFA. This study used a dual-tasking paradigm to determine if performance decrements are observed in gait or working memory when performed concurrently. Healthy young participants (N = 16) performed a working memory task (automated operation span task) and a gait task (walking at a self-selected speed on a treadmill) in single- and dual-task conditions. A second dual-task condition (reading while walking) was included to control for visual attention, but also introduced a task that taxed working memory over the long term. All trials involving gait lasted at least 10 min. Performance in the working memory task was indexed using five dependent variables (absolute score, partial score, speed error, accuracy error, and math error), while gait performance was indexed by quantifying the mean, standard deviation, and DFA α of the stride interval time series. Two multivariate analyses of variance (one for gait and one for working memory) were used to examine performance in the single- and dual-task conditions. No differences were observed in any of the gait or working memory dependent variables as a function of task condition. The results suggest the locomotor system is adaptive enough to complete a working memory task without compromising gait performance when walking at a self-selected pace.
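
    For readers unfamiliar with DFA, the sketch below computes the scaling exponent α of a stride-interval series; the series is synthetic white noise (so α comes out near 0.5) and the scale choices are illustrative, not those of the study.

      # Minimal sketch of detrended fluctuation analysis (DFA) for a stride series.
      import numpy as np

      def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
          """Return the DFA scaling exponent alpha of a 1-D series x."""
          y = np.cumsum(x - np.mean(x))                 # integrated profile
          fluct = []
          for s in scales:
              n_seg = len(y) // s
              rms = []
              for k in range(n_seg):
                  seg = y[k * s:(k + 1) * s]
                  t = np.arange(s)
                  trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrend
                  rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
              fluct.append(np.mean(rms))
          # alpha is the slope of log F(s) versus log s
          return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

      rng = np.random.default_rng(0)
      strides = 1.05 + 0.02 * rng.standard_normal(600)   # ~600 stride intervals, seconds
      print("DFA alpha:", round(dfa_alpha(strides), 2))  # ~0.5 for uncorrelated noise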

  11. Vector quantizer designs for joint compression and terrain categorization of multispectral imagery

    NASA Technical Reports Server (NTRS)

    Gorman, John D.; Lyons, Daniel F.

    1994-01-01

    Two vector quantizer designs for compression of multispectral imagery and their impact on terrain categorization performance are evaluated. The mean-squared error (MSE) and classification performance of the two quantizers are compared, and it is shown that a simple two-stage design minimizing MSE subject to a constraint on classification performance has a significantly better classification performance than a standard MSE-based tree-structured vector quantizer followed by maximum likelihood classification. This improvement in classification performance is obtained with minimal loss in MSE performance. The results show that it is advantageous to tailor compression algorithm designs to the required data exploitation tasks. Applications of joint compression/classification include compression for the archival or transmission of Landsat imagery that is later used for land utility surveys and/or radiometric analysis.

  12. Medication errors: definitions and classification

    PubMed Central

    Aronson, Jeffrey K

    2009-01-01

    To understand medication errors and to identify preventive strategies, we need to classify them and define the terms that describe them. The four main approaches to defining technical terms consider etymology, usage, previous definitions, and the Ramsey–Lewis method (based on an understanding of theory and practice). A medication error is ‘a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient’. Prescribing faults, a subset of medication errors, should be distinguished from prescription errors. A prescribing fault is ‘a failure in the prescribing [decision-making] process that leads to, or has the potential to lead to, harm to the patient’. The converse of this, ‘balanced prescribing’ is ‘the use of a medicine that is appropriate to the patient's condition and, within the limits created by the uncertainty that attends therapeutic decisions, in a dosage regimen that optimizes the balance of benefit to harm’. This excludes all forms of prescribing faults, such as irrational, inappropriate, and ineffective prescribing, underprescribing and overprescribing. A prescription error is ‘a failure in the prescription writing process that results in a wrong instruction about one or more of the normal features of a prescription’. The ‘normal features’ include the identity of the recipient, the identity of the drug, the formulation, dose, route, timing, frequency, and duration of administration. Medication errors can be classified, invoking psychological theory, as knowledge-based mistakes, rule-based mistakes, action-based slips, and memory-based lapses. This classification informs preventive strategies. PMID:19594526

  13. Linear and Order Statistics Combiners for Pattern Classification

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep; Lau, Sonie (Technical Monitor)

    2001-01-01

    Several researchers have experimentally shown that substantial improvements can be obtained in difficult pattern recognition problems by combining or integrating the outputs of multiple classifiers. This chapter provides an analytical framework to quantify the improvements in classification results due to combining. The results apply to both linear combiners and order statistics combiners. We first show that to a first-order approximation, the error rate obtained over and above the Bayes error rate is directly proportional to the variance of the actual decision boundaries around the Bayes optimum boundary. Combining classifiers in output space reduces this variance, and hence reduces the 'added' error. If N unbiased classifiers are combined by simple averaging, the added error rate can be reduced by a factor of N if the individual errors in approximating the decision boundaries are uncorrelated. Expressions are then derived for linear combiners which are biased or correlated, and the effect of output correlations on ensemble performance is quantified. For order-statistics-based non-linear combiners, we derive expressions that indicate how much the median, the maximum and in general the i-th order statistic can improve classifier performance. The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions, and combining in output space. Experimental results on several public domain data sets are provided to illustrate the benefits of combining and to support the analytical results.
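
    The variance-reduction argument can be illustrated with a small simulation under simplifying assumptions (two equiprobable 1-D Gaussian classes, each classifier's boundary equal to the Bayes optimum plus independent zero-mean noise): averaging N boundaries shrinks the boundary variance, and the added error above the Bayes rate falls roughly as 1/N.

      # Minimal simulation of the added-error reduction from averaging N unbiased,
      # uncorrelated classifiers in a two-class 1-D Gaussian problem.
      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(0)
      bayes_boundary, sigma_b = 0.0, 0.4      # Bayes-optimal threshold, per-classifier spread

      def error_rate(boundary):
          # Classes N(-1, 1) and N(+1, 1) with equal priors, classified by a threshold.
          return 0.5 * norm.sf(boundary, loc=-1) + 0.5 * norm.cdf(boundary, loc=1)

      bayes_err = error_rate(bayes_boundary)
      for N in (1, 4, 16):
          # Average the boundaries of N unbiased, uncorrelated classifiers.
          combined = rng.normal(bayes_boundary, sigma_b, size=(20000, N)).mean(axis=1)
          added = error_rate(combined).mean() - bayes_err
          print(f"N = {N:2d}:  mean added error = {added:.4f}")   # shrinks roughly as 1/N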

  14. Accuracy of piezoelectric pedometer and accelerometer step counts.

    PubMed

    Cruz, Joana; Brooks, Dina; Marques, Alda

    2017-04-01

    This study aimed to assess the step-count accuracy of a piezoelectric pedometer (Yamax PW/EX-510) worn at different body locations and of a triaxial accelerometer (GT3X+), to compare the accuracy of the two devices, and to identify the preferred location(s) to wear a pedometer. Sixty-three healthy adults (45.8±20.6 years old) wore 7 pedometers (neck, lateral right and left of the waist, front right and left of the waist, front pockets of the trousers) and 1 accelerometer (over the right hip) while walking 120 m at slow, self-preferred/normal and fast paces. Steps were recorded. Participants identified their preferred location(s) to wear the pedometer. Absolute percent error (APE) and the Bland-Altman (BA) method were used to assess device accuracy (criterion measure: manual counts), and the BA method was used for device comparisons. Pedometer APE was below 3% at the normal and fast paces regardless of wearing location, but higher at the slow pace (4.5-9.1%). Pedometers were more accurate at the front waist and inside the pockets. Accelerometer APE was higher than pedometer APE (P<0.05); nevertheless, limits of agreement between devices were relatively small. Preferred wearing locations were inside the front right (N.=25) and left (N.=20) pockets of the trousers. Yamax PW/EX-510 pedometers may be preferable to GT3X+ accelerometers for counting steps, as they provide more accurate results. These pedometers should be worn at the front right or left positions of the waist or inside the front pockets of the trousers.
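
    The two accuracy measures used above are straightforward to reproduce; the sketch below computes APE against manual step counts and the Bland-Altman bias with limits of agreement, using made-up counts rather than the study's data.

      # Minimal sketch: absolute percent error (APE) and Bland-Altman limits of agreement.
      import numpy as np

      manual = np.array([160, 158, 162, 155, 161, 159], dtype=float)   # criterion counts
      device = np.array([157, 160, 150, 156, 163, 152], dtype=float)   # pedometer counts

      ape = 100 * np.abs(device - manual) / manual
      print("mean APE (%):", round(ape.mean(), 1))

      diff = device - manual
      bias = diff.mean()
      loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
      print("Bland-Altman bias:", round(bias, 1),
            "limits of agreement:", tuple(round(x, 1) for x in loa))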

  15. Optimal pacing modes after cardiac transplantation: is synchronisation of recipient and donor atria beneficial?

    PubMed Central

    Parry, Gareth; Malbut, Katie; Dark, John H; Bexton, Rodney S

    1992-01-01

    Objective—To investigate the response of the transplanted heart to different pacing modes and to synchronisation of the recipient and donor atria in terms of cardiac output at rest. Design—Doppler derived cardiac output measurements at three pacing rates (90/min, 110/min and 130/min) in five pacing modes: right ventricular pacing, donor atrial pacing, recipient-donor synchronous pacing, donor atrial-ventricular sequential pacing, and synchronous recipient-donor atrial-ventricular sequential pacing. Patients—11 healthy cardiac transplant recipients with three pairs of epicardial leads inserted at transplantation. Results—Donor atrial pacing (+11% overall) and donor atrial-ventricular sequential pacing (+8% overall) were significantly better than right ventricular pacing (p < 0·001) at all pacing rates. Synchronised pacing of recipient and donor atrial segments did not confer additional benefit in either atrial or atrial-ventricular sequential modes of pacing in terms of cardiac output at rest at these fixed rates. Conclusions—Atrial pacing or atrial-ventricular sequential pacing appear to be appropriate modes in cardiac transplant recipients. Synchronisation of recipient and donor atrial segments in this study produced no additional benefit. Chronotropic competence in these patients may, however, result in improved exercise capacity and deserves further investigation. PMID:1389737

  16. Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification.

    PubMed

    Rueckauer, Bodo; Lungu, Iulia-Alexandra; Hu, Yuhuang; Pfeiffer, Michael; Liu, Shih-Chii

    2017-01-01

    Spiking neural networks (SNNs) can potentially offer an efficient way of doing inference because the neurons in the networks are sparsely activated and computations are event-driven. Previous work showed that simple continuous-valued deep Convolutional Neural Networks (CNNs) can be converted into accurate spiking equivalents. These networks did not include certain common operations such as max-pooling, softmax, batch-normalization and Inception-modules. This paper presents spiking equivalents of these operations therefore allowing conversion of nearly arbitrary CNN architectures. We show conversion of popular CNN architectures, including VGG-16 and Inception-v3, into SNNs that produce the best results reported to date on MNIST, CIFAR-10 and the challenging ImageNet dataset. SNNs can trade off classification error rate against the number of available operations whereas deep continuous-valued neural networks require a fixed number of operations to achieve their classification error rate. From the examples of LeNet for MNIST and BinaryNet for CIFAR-10, we show that with an increase in error rate of a few percentage points, the SNNs can achieve more than 2x reductions in operations compared to the original CNNs. This highlights the potential of SNNs in particular when deployed on power-efficient neuromorphic spiking neuron chips, for use in embedded applications.

  17. Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification

    PubMed Central

    Rueckauer, Bodo; Lungu, Iulia-Alexandra; Hu, Yuhuang; Pfeiffer, Michael; Liu, Shih-Chii

    2017-01-01

    Spiking neural networks (SNNs) can potentially offer an efficient way of doing inference because the neurons in the networks are sparsely activated and computations are event-driven. Previous work showed that simple continuous-valued deep Convolutional Neural Networks (CNNs) can be converted into accurate spiking equivalents. These networks did not include certain common operations such as max-pooling, softmax, batch-normalization and Inception-modules. This paper presents spiking equivalents of these operations therefore allowing conversion of nearly arbitrary CNN architectures. We show conversion of popular CNN architectures, including VGG-16 and Inception-v3, into SNNs that produce the best results reported to date on MNIST, CIFAR-10 and the challenging ImageNet dataset. SNNs can trade off classification error rate against the number of available operations whereas deep continuous-valued neural networks require a fixed number of operations to achieve their classification error rate. From the examples of LeNet for MNIST and BinaryNet for CIFAR-10, we show that with an increase in error rate of a few percentage points, the SNNs can achieve more than 2x reductions in operations compared to the original CNNs. This highlights the potential of SNNs in particular when deployed on power-efficient neuromorphic spiking neuron chips, for use in embedded applications. PMID:29375284

  18. 3D multi-view convolutional neural networks for lung nodule classification

    PubMed Central

    Kang, Guixia; Hou, Beibei; Zhang, Ningbo

    2017-01-01

    The 3D convolutional neural network (CNN) is able to make full use of the spatial 3D context information of lung nodules, and the multi-view strategy has been shown to be useful for improving the performance of 2D CNN in classifying lung nodules. In this paper, we explore the classification of lung nodules using the 3D multi-view convolutional neural networks (MV-CNN) with both chain architecture and directed acyclic graph architecture, including 3D Inception and 3D Inception-ResNet. All networks employ the multi-view-one-network strategy. We conduct a binary classification (benign and malignant) and a ternary classification (benign, primary malignant and metastatic malignant) on Computed Tomography (CT) images from Lung Image Database Consortium and Image Database Resource Initiative database (LIDC-IDRI). All results are obtained via 10-fold cross validation. As regards the MV-CNN with chain architecture, results show that the performance of 3D MV-CNN surpasses that of 2D MV-CNN by a significant margin. Finally, a 3D Inception network achieved an error rate of 4.59% for the binary classification and 7.70% for the ternary classification, both of which represent superior results for the corresponding task. We compare the multi-view-one-network strategy with the one-view-one-network strategy. The results reveal that the multi-view-one-network strategy can achieve a lower error rate than the one-view-one-network strategy. PMID:29145492

  19. Automatic classification for mammogram backgrounds based on bi-rads complexity definition and on a multi content analysis framework

    NASA Astrophysics Data System (ADS)

    Wu, Jie; Besnehard, Quentin; Marchessoux, Cédric

    2011-03-01

    Clinical studies for the validation of new medical imaging devices require hundreds of images. An important step in creating and tuning the study protocol is the classification of images into "difficult" and "easy" cases. This consists of classifying each image based on features such as the complexity of the background and the visibility of the disease (lesions). Therefore, an automatic background classification tool for mammograms would help in such clinical studies. This classification tool is based on a multi-content analysis (MCA) framework which was first developed to recognize the image content of computer screenshots. With the implementation of new texture features and a defined breast density scale, the MCA framework is able to automatically classify digital mammograms with satisfactory accuracy. The BI-RADS (Breast Imaging Reporting and Data System) density scale, which standardizes mammography reporting terminology and the assessment and recommendation categories, is used for grouping the mammograms. Selected features are input into a decision tree classification scheme in the MCA framework, which serves as the so-called "weak classifier" (any classifier with a global error rate below 50%). With the AdaBoost iteration algorithm, these "weak classifiers" are combined into a "strong classifier" (a classifier with a low global error rate) for classifying one category. The classification results for one "strong classifier" show good accuracy with high true-positive rates. For the four categories the results are: TP=90.38%, TN=67.88%, FP=32.12% and FN=9.62%.
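
    A minimal sketch of the weak-to-strong boosting step described above: shallow decision trees (weak classifiers with error below 50%) are combined by AdaBoost into a stronger classifier. Generic synthetic features stand in for the texture and density features of the MCA framework; all sizes and depths are illustrative.

      # Minimal sketch: boosting decision stumps into a stronger classifier with AdaBoost.
      from sklearn.datasets import make_classification
      from sklearn.ensemble import AdaBoostClassifier
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.model_selection import cross_val_score

      X, y = make_classification(n_samples=1000, n_features=25, n_informative=10,
                                 random_state=0)

      weak = DecisionTreeClassifier(max_depth=1)                        # a "weak classifier"
      strong = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),  # boosted "strong classifier"
                                  n_estimators=100, random_state=0)
      print("weak classifier CV accuracy:   ", cross_val_score(weak, X, y, cv=5).mean())
      print("boosted classifier CV accuracy:", cross_val_score(strong, X, y, cv=5).mean())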

  20. Classification of electroencephalograph signals using time-frequency decomposition and linear discriminant analysis

    NASA Astrophysics Data System (ADS)

    Szuflitowska, B.; Orlowski, P.

    2017-08-01

    The automated detection system consists of two key steps: extraction of features from EEG signals and classification for the detection of pathological activity. The EEG sequences were analyzed using the Short-Time Fourier Transform and the classification was performed using Linear Discriminant Analysis. The accuracy of the technique was tested on three sets of EEG signals: epilepsy, healthy and Alzheimer's Disease. A classification error below 10% was considered a success. Higher accuracy was obtained for new data of unknown classes than for the testing data. The methodology can be helpful in differentiating epileptic seizures from disturbances in the EEG signal in Alzheimer's Disease.
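
    A minimal sketch of the two-step pipeline (STFT feature extraction followed by Linear Discriminant Analysis), using synthetic surrogate "EEG" epochs in which one class carries an extra low-frequency rhythm; the sampling rate, epoch length and feature choice are assumptions, not the study's settings.

      # Minimal sketch: STFT power features per epoch, then LDA classification.
      import numpy as np
      from scipy.signal import stft
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      fs, n_epochs, n_samples = 256, 120, 1024          # 4-second epochs at 256 Hz

      def make_epoch(label):
          t = np.arange(n_samples) / fs
          base = rng.standard_normal(n_samples)
          if label == 1:                                # "pathological": extra 3 Hz rhythm
              base += 2.0 * np.sin(2 * np.pi * 3 * t)
          return base

      y = rng.integers(0, 2, n_epochs)
      epochs = np.array([make_epoch(lbl) for lbl in y])

      def stft_features(x):
          f, t, Z = stft(x, fs=fs, nperseg=256)
          power = np.abs(Z) ** 2
          return power.mean(axis=1)[:40]                # mean power, lowest 40 bins (0-39 Hz)

      X = np.array([stft_features(e) for e in epochs])
      lda = LinearDiscriminantAnalysis()
      print("LDA CV accuracy:", cross_val_score(lda, X, y, cv=5).mean())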

  1. The Influence of Item Calibration Error on Variable-Length Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi

    2013-01-01

    Variable-length computerized adaptive testing (VL-CAT) allows both items and test length to be "tailored" to examinees, thereby achieving the measurement goal (e.g., scoring precision or classification) with as few items as possible. Several popular test termination rules depend on the standard error of the ability estimate, which in turn depends…

  2. Lexical Errors in Second Language Scientific Writing: Some Conceptual Implications

    ERIC Educational Resources Information Center

    Carrió Pastor, María Luisa; Mestre-Mestre, Eva María

    2014-01-01

    Nowadays, scientific writers need not only a thorough knowledge of their subject field, but also a sound command of English as a lingua franca. In this paper, the lexical errors produced in scientific texts written in English by non-native researchers are identified in order to propose a classification of the categories they contain. This study…

  3. Land use surveys by means of automatic interpretation of LANDSAT system data

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Lombardo, M. A.; Novo, E. M. L. D.; Niero, M.; Foresti, C.

    1981-01-01

    Analyses for seven land-use classes are presented. The classes are: urban area, industrial area, bare soil, cultivated area, pastureland, reforestation, and natural vegetation. The automatic classification of LANDSAT MSS data using a maximum likelihood algorithm shows a 39% average error of omission and a 3.45% error of commission for the seven classes.

  4. Anatomical and/or pathological predictors for the “incorrect” classification of red dot markers on wrist radiographs taken following trauma

    PubMed Central

    Kranz, R

    2015-01-01

    Objective: To establish the prevalence of red dot markers in a sample of wrist radiographs and to identify any anatomical and/or pathological characteristics that predict “incorrect” red dot classification. Methods: Accident and emergency (A&E) wrist cases from a digital imaging and communications in medicine/digital teaching library were examined for red dot prevalence and for the presence of several anatomical and pathological features. Binary logistic regression analyses were run to establish if any of these features were predictors of incorrect red dot classification. Results: 398 cases were analysed. Red dot was “incorrectly” classified in 8.5% of cases; 6.3% were “false negatives” (“FNs”) and 2.3% false positives (FPs) (one decimal place). Old fractures [odds ratio (OR), 5.070 (1.256–20.471)] and reported degenerative change [OR, 9.870 (2.300–42.359)] were found to predict FPs. Frykman V [OR, 9.500 (1.954–46.179)], Frykman VI [OR, 6.333 (1.205–33.283)] and non-Frykman positive abnormalities [OR, 4.597 (1.264–16.711)] predict “FNs”. Old fractures and Frykman VI were predictive of error at 90% confidence interval (CI); the rest at 95% CI. Conclusion: The five predictors of incorrect red dot classification may inform the image interpretation training of radiographers and other professionals to reduce diagnostic error. Verification with larger samples would reinforce these findings. Advances in knowledge: All healthcare providers strive to eradicate diagnostic error. By examining specific anatomical and pathological predictors on radiographs for such error, as well as extrinsic factors that may affect reporting accuracy, image interpretation training can focus on these “problem” areas and influence which radiographic abnormality detection schemes are appropriate to implement in A&E departments. PMID:25496373

  5. How Should Children with Speech Sound Disorders be Classified? A Review and Critical Evaluation of Current Classification Systems

    ERIC Educational Resources Information Center

    Waring, R.; Knight, R.

    2013-01-01

    Background: Children with speech sound disorders (SSD) form a heterogeneous group who differ in terms of the severity of their condition, underlying cause, speech errors, involvement of other aspects of the linguistic system and treatment response. To date there is no universal and agreed-upon classification system. Instead, a number of…

  6. Evaluation of the confusion matrix method in the validation of an automated system for measuring feeding behaviour of cattle.

    PubMed

    Ruuska, Salla; Hämäläinen, Wilhelmiina; Kajava, Sari; Mughal, Mikaela; Matilainen, Pekka; Mononen, Jaakko

    2018-03-01

    The aim of the present study was to evaluate empirically confusion matrices in device validation. We compared the confusion matrix method to linear regression and error indices in the validation of a device measuring feeding behaviour of dairy cattle. In addition, we studied how to extract additional information on classification errors with confusion probabilities. The data consisted of 12 h behaviour measurements from five dairy cows; feeding and other behaviour were detected simultaneously with a device and from video recordings. The resulting 216 000 pairs of classifications were used to construct confusion matrices and calculate performance measures. In addition, hourly durations of each behaviour were calculated and the accuracy of measurements was evaluated with linear regression and error indices. All three validation methods agreed when the behaviour was detected very accurately or inaccurately. Otherwise, in the intermediate cases, the confusion matrix method and error indices produced relatively concordant results, but the linear regression method often disagreed with them. Our study supports the use of confusion matrix analysis in validation since it is robust to any data distribution and type of relationship, it makes a stringent evaluation of validity, and it offers extra information on the type and sources of errors. Copyright © 2018 Elsevier B.V. All rights reserved.
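
    The confusion-matrix evaluation described above reduces to tabulating paired classifications (device versus video reference) and reporting sensitivity, specificity and accuracy; the sketch below uses a tiny made-up label sequence rather than the study's recordings.

      # Minimal sketch: confusion matrix and derived metrics for paired classifications.
      import numpy as np
      from sklearn.metrics import confusion_matrix

      video  = np.array([1, 1, 1, 0, 0, 1, 0, 0, 1, 1])   # 1 = feeding, 0 = other (reference)
      device = np.array([1, 1, 0, 0, 0, 1, 0, 1, 1, 1])   # device output for the same seconds

      tn, fp, fn, tp = confusion_matrix(video, device, labels=[0, 1]).ravel()
      print("sensitivity:", tp / (tp + fn))
      print("specificity:", tn / (tn + fp))
      print("accuracy:   ", (tp + tn) / len(video))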

  7. Bayes-LQAS: classifying the prevalence of global acute malnutrition

    PubMed Central

    2010-01-01

    Lot Quality Assurance Sampling (LQAS) applications in health have generally relied on frequentist interpretations for statistical validity. Yet health professionals often seek statements about the probability distribution of unknown parameters to answer questions of interest. The frequentist paradigm does not pretend to yield such information, although a Bayesian formulation might. This is the source of an error made in a recent paper published in this journal. Many applications lend themselves to a Bayesian treatment, and would benefit from such considerations in their design. We discuss Bayes-LQAS (B-LQAS), which allows for incorporation of prior information into the LQAS classification procedure, and thus shows how to correct the aforementioned error. Further, we pay special attention to the formulation of Bayes Operating Characteristic Curves and the use of prior information to improve survey designs. As a motivating example, we discuss the classification of Global Acute Malnutrition prevalence and draw parallels between the Bayes and classical classification schemes. We also illustrate the impact of informative and non-informative priors on the survey design. Results indicate that using a Bayesian approach allows the incorporation of expert information and/or historical data and is thus potentially a valuable tool for making accurate and precise classifications. PMID:20534159

  8. Bayes-LQAS: classifying the prevalence of global acute malnutrition.

    PubMed

    Olives, Casey; Pagano, Marcello

    2010-06-09

    Lot Quality Assurance Sampling (LQAS) applications in health have generally relied on frequentist interpretations for statistical validity. Yet health professionals often seek statements about the probability distribution of unknown parameters to answer questions of interest. The frequentist paradigm does not pretend to yield such information, although a Bayesian formulation might. This is the source of an error made in a recent paper published in this journal. Many applications lend themselves to a Bayesian treatment, and would benefit from such considerations in their design. We discuss Bayes-LQAS (B-LQAS), which allows for incorporation of prior information into the LQAS classification procedure, and thus shows how to correct the aforementioned error. Further, we pay special attention to the formulation of Bayes Operating Characteristic Curves and the use of prior information to improve survey designs. As a motivating example, we discuss the classification of Global Acute Malnutrition prevalence and draw parallels between the Bayes and classical classification schemes. We also illustrate the impact of informative and non-informative priors on the survey design. Results indicate that using a Bayesian approach allows the incorporation of expert information and/or historical data and is thus potentially a valuable tool for making accurate and precise classifications.
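
    A minimal sketch of the Bayesian idea behind B-LQAS: with a Beta prior on GAM prevalence and a binomial sample, an area is classified as high prevalence when the posterior probability of exceeding an action threshold is large. The prior, threshold and decision cut-off below are illustrative choices, not the authors' exact design.

      # Minimal sketch: Beta-Binomial posterior classification of GAM prevalence.
      from scipy.stats import beta

      a0, b0 = 2, 18            # prior: prevalence centred near 10% (illustrative)
      n, x = 33, 7              # sample size and number of GAM cases observed
      threshold = 0.15          # programmatic action threshold (15% prevalence)

      post = beta(a0 + x, b0 + n - x)
      p_exceed = post.sf(threshold)
      print(f"P(prevalence > {threshold:.0%} | data) = {p_exceed:.2f}")
      print("classification:", "high prevalence" if p_exceed > 0.8 else "acceptable")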

  9. Optimization of the ANFIS using a genetic algorithm for physical work rate classification.

    PubMed

    Habibi, Ehsanollah; Salehi, Mina; Yadegarfar, Ghasem; Taheri, Ali

    2018-03-13

    Recently, a new method was proposed for physical work rate classification based on an adaptive neuro-fuzzy inference system (ANFIS). This study aims to present a genetic algorithm (GA)-optimized ANFIS model for a highly accurate classification of physical work rate. Thirty healthy men participated in this study. Directly measured heart rate and oxygen consumption of the participants in the laboratory were used for training the ANFIS classifier model in MATLAB version 8.0.0 using a hybrid algorithm. A similar process was done using the GA as an optimization technique. The accuracy, sensitivity and specificity of the ANFIS classifier model were increased successfully. The mean accuracy of the model was increased from 92.95 to 97.92%. Also, the calculated root mean square error of the model was reduced from 5.4186 to 3.1882. The maximum estimation error of the optimized ANFIS during the network testing process was ± 5%. The GA can be effectively used for ANFIS optimization and leads to an accurate classification of physical work rate. In addition to high accuracy, simple implementation and inter-individual variability consideration are two other advantages of the presented model.

  10. Galaxy Zoo 1: data release of morphological classifications for nearly 900 000 galaxies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linott, C.; Slosar, A.; Lintott, C.

    Morphology is a powerful indicator of a galaxy's dynamical and merger history. It is strongly correlated with many physical parameters, including mass, star formation history and the distribution of mass. The Galaxy Zoo project collected simple morphological classifications of nearly 900,000 galaxies drawn from the Sloan Digital Sky Survey, contributed by hundreds of thousands of volunteers. This large number of classifications allows us to exclude classifier error, and measure the influence of subtle biases inherent in morphological classification. This paper presents the data collected by the project, alongside measures of classification accuracy and bias. The data are now publicly available and full catalogues can be downloaded in electronic format from http://data.galaxyzoo.org.

  11. Assessment of the perception of verticality and horizontality with self-paced saccades.

    PubMed

    Pettorossi, V E; Bambagioni, D; Bronstein, A M; Gresty, M A

    1998-07-01

    We investigated the ability of human subjects (Ss) to make self-paced saccades in the earth-vertical and horizontal directions (space-referenced task) and in the direction of the head-vertical and horizontal axis (self-referenced task) during whole body tilts of 0 degrees, 22.5 degrees, 45 degrees and 90 degrees in the frontal (roll) plane. Saccades were recorded in the dark with computerised video-oculography. During space-referenced tasks, the saccade vectors did not fully counter-rotate to compensate for larger angles of body tilt. This finding is in agreement with the 'A' effect reported for the visual vertical. The error was significantly larger for saccades intended to be space-horizontal than space-vertical. This vertico-horizontal dissociation implies greater difficulty in defining horizontality than verticality with the non-visual motor task employed. In contrast, normal Ss (and an alabyrinthine subject tested) were accurate in orienting saccades to their own (cranio-centric) vertical and horizontal axes regardless of tilt indicating that cranio-centric perception is robust and apparently not affected by gravitational influences.

  12. Spotting East African mammals in open savannah from space.

    PubMed

    Yang, Zheng; Wang, Tiejun; Skidmore, Andrew K; de Leeuw, Jan; Said, Mohammed Y; Freer, Jim

    2014-01-01

    Knowledge of population dynamics is essential for managing and conserving wildlife. Traditional methods of counting wild animals such as aerial survey or ground counts not only disturb animals, but also can be labour intensive and costly. New, commercially available very high-resolution satellite images offer great potential for accurate estimates of animal abundance over large open areas. However, little research has been conducted in the area of satellite-aided wildlife census, although computer processing speeds and image analysis algorithms have vastly improved. This paper explores the possibility of detecting large animals in the open savannah of Maasai Mara National Reserve, Kenya from very high-resolution GeoEye-1 satellite images. A hybrid image classification method was employed for this specific purpose by incorporating the advantages of both pixel-based and object-based image classification approaches. This was performed in two steps: firstly, a pixel-based image classification method, i.e., artificial neural network was applied to classify potential targets with similar spectral reflectance at pixel level; and then an object-based image classification method was used to further differentiate animal targets from the surrounding landscapes through the applications of expert knowledge. As a result, the large animals in two pilot study areas were successfully detected with an average count error of 8.2%, omission error of 6.6% and commission error of 13.7%. The results of the study show for the first time that it is feasible to perform automated detection and counting of large wild animals in open savannahs from space, and therefore provide a complementary and alternative approach to the conventional wildlife survey techniques.

  13. Comparison of three methods for long-term monitoring of boreal lake area using Landsat TM and ETM+ imagery

    USGS Publications Warehouse

    Roach, Jennifer K.; Griffith, Brad; Verbyla, David

    2012-01-01

    Programs to monitor lake area change are becoming increasingly important in high latitude regions, and their development often requires evaluating tradeoffs among different approaches in terms of accuracy of measurement, consistency across multiple users over long time periods, and efficiency. We compared three supervised methods for lake classification from Landsat imagery (density slicing, classification trees, and feature extraction). The accuracy of lake area and number estimates was evaluated relative to high-resolution aerial photography acquired within two days of satellite overpasses. The shortwave infrared band 5 was better at separating surface water from nonwater when used alone than when combined with other spectral bands. The simplest of the three methods, density slicing, performed best overall. The classification tree method resulted in the most omission errors (approx. 2x), feature extraction resulted in the most commission errors (approx. 4x), and density slicing had the least directional bias (approx. half of the lakes with overestimated area and half of the lakes with underestimated area). Feature extraction was the least consistent across training sets (i.e., large standard error among different training sets). Density slicing was the best of the three at classifying small lakes as evidenced by its lower optimal minimum lake size criterion of 5850 m2 compared with the other methods (8550 m2). Contrary to conventional wisdom, the use of additional spectral bands and a more sophisticated method not only required additional processing effort but also had a cost in terms of the accuracy and consistency of lake classifications.
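
    A minimal sketch of the density-slicing approach that performed best above: a single shortwave-infrared band is thresholded (water is dark in SWIR) and connected water bodies above a minimum size are counted. The raster, threshold and minimum-size criterion are synthetic placeholders, not the study's calibrated values.

      # Minimal sketch: density slicing of a SWIR band plus connected-component labeling.
      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(0)
      band5 = rng.normal(120, 15, size=(200, 200))             # bright land background (SWIR)
      band5[60:90, 50:110] = rng.normal(30, 5, size=(30, 60))   # one dark "lake"

      water = band5 < 60                                        # density-slice threshold
      labels, n_objects = ndimage.label(water)
      sizes = np.atleast_1d(ndimage.sum(water, labels, index=range(1, n_objects + 1)))
      min_pixels = 10                                           # minimum lake-size criterion
      lakes = [s for s in sizes if s >= min_pixels]
      print("lakes detected:", len(lakes), "| total water pixels:", int(sum(lakes)))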

  14. Cluster designs to assess the prevalence of acute malnutrition by lot quality assurance sampling: a validation study by computer simulation

    PubMed Central

    Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J

    2009-01-01

    Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67×3 (67 clusters of three observations) and a 33×6 (33 clusters of six observations) sampling scheme to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67×3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can impact dramatically the classification error that is associated with LQAS analysis. PMID:20011037

  15. Electroencephalography epilepsy classifications using hybrid cuckoo search and neural network

    NASA Astrophysics Data System (ADS)

    Pratiwi, A. B.; Damayanti, A.; Miswanto

    2017-07-01

    Epilepsy is a condition that affects the brain and causes repeated seizures. These seizure episodes can vary, from brief and nearly undetectable lapses to long periods of vigorous shaking or convulsions. Epilepsy can often be confirmed with electroencephalography (EEG). Neural networks have been used in biomedical signal analysis and have successfully classified biomedical signals such as the EEG. In this paper, a hybrid of cuckoo search and a neural network is used to recognize EEG signals for epilepsy classification. The weights of the multilayer perceptron are optimized by the cuckoo search algorithm based on the network error. The aim of this method is to make the network reach a local or global optimum faster, so that the classification becomes more accurate. Compared with the traditional multilayer perceptron, the hybrid cuckoo search and multilayer perceptron provides better performance in terms of error convergence and accuracy. The proposed method achieves an MSE of 0.001 and an accuracy of 90.0%.

  16. Clinical evaluation of a thin bipolar pacing lead.

    PubMed

    Breivik, K; Danilovic, D; Ohm, O J; Guerola, M; Stertman, W A; Suntinger, A

    1997-03-01

    The main disadvantages of bipolar pacing leads have traditionally been related to their relative thickness and stiffness compared to unipolar leads. In a new "drawn filled tube" plus "coated wire" technology, each conductor strand is composed of MP35N tubing filled with silver core and coated with a thin ETFE polymer insulation material. This and parallel winding of single anode and cathode conductors into a single bifilar coil resulted in a bipolar lead (ThinLine, Intermedics) with a body diameter and flexibility similar to unipolar leads. The lead is tined, polyurethane, with the cathode and the anode made of iridium-oxide-coated titanium (IROX). The slotted 8-mm2 cathode tip is coated with polyethylene glycol, a blood soluble material. We present the clinical evaluation results from four pacemaker clinics, where 47 leads (23 atrial-J model 432-04 and 24 ventricular model 430-10) were implanted in 25 patients and followed for up to 2 years. The lead handling characteristics were found to be very satisfactory. Electrical parameters of the leads were measured at implant and noninvasively on postoperative days 1, 2, 21, 42, and months 3, 6, 12, and 24. Mean chronic pulse width thresholds at 2.5 V were 0.14 ± 0.05 ms in the atrium and 0.10 ± 0.02 ms in the ventricle, pacing impedances 443 ± 104 Ω and 520 ± 241 Ω, while median electrogram amplitudes were ≥ 3.5 mV and ≥ 7 mV, respectively. Pacing impedances and thresholds were found to be slightly but statistically significantly higher in unipolar than in bipolar configuration--the findings are explainable by the lead construction. One of 47 leads failed 3 weeks after implant; the conductors were short circuited due to an error during the manufacturing process. We conclude that the new lead thus far has demonstrated appropriate mechanical and electrical characteristics.

  17. Presentation Time Concerning System-Paced Multimedia Instructions and the Superiority of Learner Pacing

    ERIC Educational Resources Information Center

    Stiller, Klaus D.; Petzold, Kirstin; Zinnbauer, Peter

    2011-01-01

    The superiority of learner-paced over system-paced instructions was demonstrated in multiple experiments. In these experiments, the system-paced presentations were highly speeded, causing cognitive overload, while the learner-paced instructions allowed adjustments of the presentational flow to the learner's needs by pacing facilities, mostly…

  18. Peculiarities of use of ECOC and AdaBoost based classifiers for thematic processing of hyperspectral data

    NASA Astrophysics Data System (ADS)

    Dementev, A. O.; Dmitriev, E. V.; Kozoderov, V. V.; Egorov, V. D.

    2017-10-01

    Hyperspectral imaging is a promising, up-to-date technology widely applied for accurate thematic mapping. The presence of a large number of narrow survey channels allows us to exploit subtle differences in the spectral characteristics of objects and to make a more detailed classification than is possible with standard multispectral data. The difficulties encountered in the processing of hyperspectral images are usually associated with the redundancy of spectral information, which leads to the curse of dimensionality. Methods currently used for recognizing objects in multispectral and hyperspectral images are usually based on standard supervised classification algorithms of various complexity, and their accuracy can differ significantly depending on the classification task considered. In this paper we study the performance of ensemble classification methods for the problem of classifying forest vegetation. Error-correcting output codes and boosting are tested on artificial data and real hyperspectral images. It is demonstrated that boosting gives a more significant improvement when used with simple base classifiers; the accuracy in this case is comparable to that of the error-correcting output code (ECOC) classifier with a Gaussian-kernel SVM base algorithm. However, the necessity of boosting ECOC with a Gaussian-kernel SVM is questionable. It is also demonstrated that the selected ensemble classifiers allow us to recognize forest species with accuracy high enough to be compared with ground-based forest inventory data.
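
    A minimal sketch of the two ensemble strategies compared above, assuming scikit-learn is available and using synthetic data as a stand-in for hyperspectral pixels; the class counts, band counts, and hyperparameters are illustrative, not the study's settings.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.multiclass import OutputCodeClassifier
    from sklearn.svm import SVC

    # Surrogate "hyperspectral" data: many correlated bands, several vegetation classes.
    X, y = make_classification(n_samples=2000, n_features=120, n_informative=30,
                               n_classes=5, n_clusters_per_class=1, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    # ECOC ensemble with a Gaussian-kernel SVM as the base classifier.
    ecoc_svm = OutputCodeClassifier(SVC(kernel="rbf", C=10.0, gamma="scale"),
                                    code_size=3, random_state=0).fit(X_tr, y_tr)
    # Boosting with simple base classifiers (decision stumps by default).
    boosted_stumps = AdaBoostClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

    print("ECOC + RBF-SVM error  :", 1 - ecoc_svm.score(X_te, y_te))
    print("AdaBoost (stumps) error:", 1 - boosted_stumps.score(X_te, y_te))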

  19. Error, Power, and Blind Sentinels: The Statistics of Seagrass Monitoring

    PubMed Central

    Schultz, Stewart T.; Kruschel, Claudia; Bakran-Petricioli, Tatjana; Petricioli, Donat

    2015-01-01

    We derive statistical properties of standard methods for monitoring of habitat cover worldwide, and criticize them in the context of mandated seagrass monitoring programs, as exemplified by Posidonia oceanica in the Mediterranean Sea. We report the novel result that cartographic methods with non-trivial classification errors are generally incapable of reliably detecting habitat cover losses less than about 30 to 50%, and the field labor required to increase their precision can be orders of magnitude higher than that required to estimate habitat loss directly in a field campaign. We derive a universal utility threshold of classification error in habitat maps that represents the minimum habitat map accuracy above which direct methods are superior. Widespread government reliance on blind-sentinel methods for monitoring seafloor can obscure the gradual and currently ongoing losses of benthic resources until the time has long passed for meaningful management intervention. We find two classes of methods with very high statistical power for detecting small habitat cover losses: 1) fixed-plot direct methods, which are over 100 times as efficient as direct random-plot methods in a variable habitat mosaic; and 2) remote methods with very low classification error such as geospatial underwater videography, which is an emerging, low-cost, non-destructive method for documenting small changes at millimeter visual resolution. General adoption of these methods and their further development will require a fundamental cultural change in conservation and management bodies towards the recognition and promotion of requirements of minimal statistical power and precision in the development of international goals for monitoring these valuable resources and the ecological services they provide. PMID:26367863

  20. Land use in the Paraiba Valley through remotely sensed data. [Brazil

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Lombardo, M. A.; Novo, E. M. L. D.; Niero, M.; Foresti, C.

    1980-01-01

    A methodology for land use survey was developed and land use modification rates were determined using LANDSAT imagery of the Paraiba Valley (state of Sao Paulo). Both visual and automatic interpretation methods were employed to analyze seven land use classes: urban area, industrial area, bare soil, cultivated area, pastureland, reforestation and natural vegetation. By means of visual interpretation, little spectral differences are observed among those classes. The automatic classification of LANDSAT MSS data using maximum likelihood algorithm shows a 39% average error of omission and a 3.4% error of inclusion for the seven classes. The complexity of land uses in the study area, the large spectral variations of analyzed classes, and the low resolution of LANDSAT data influenced the classification results.

  1. Multilayer perceptron, fuzzy sets, and classification

    NASA Technical Reports Server (NTRS)

    Pal, Sankar K.; Mitra, Sushmita

    1992-01-01

    A fuzzy neural network model based on the multilayer perceptron, using the back-propagation algorithm, and capable of fuzzy classification of patterns is described. The input vector consists of membership values to linguistic properties while the output vector is defined in terms of fuzzy class membership values. This allows efficient modeling of fuzzy or uncertain patterns with appropriate weights being assigned to the backpropagated errors depending upon the membership values at the corresponding outputs. During training, the learning rate is gradually decreased in discrete steps until the network converges to a minimum error solution. The effectiveness of the algorithm is demonstrated on a speech recognition problem. The results are compared with those of the conventional MLP, the Bayes classifier, and the other related models.

  2. One-Class Classification-Based Real-Time Activity Error Detection in Smart Homes.

    PubMed

    Das, Barnan; Cook, Diane J; Krishnan, Narayanan C; Schmitter-Edgecombe, Maureen

    2016-08-01

    Caring for individuals with dementia is frequently associated with extreme physical and emotional stress, which often leads to depression. Smart home technology and advances in machine learning techniques can provide innovative solutions to reduce caregiver burden. One key service that caregivers provide is prompting individuals with memory limitations to initiate and complete daily activities. We hypothesize that sensor technologies combined with machine learning techniques can automate the process of providing reminder-based interventions. The first step towards automated interventions is to detect when an individual faces difficulty with activities. We propose machine learning approaches based on one-class classification that learn normal activity patterns. When we apply these classifiers to activity patterns that were not seen before, the classifiers are able to detect activity errors, which represent potential prompt situations. We validate our approaches on smart home sensor data obtained from older adult participants, some of whom faced difficulties performing routine activities and thus committed errors.
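
    The sketch below illustrates the one-class idea under stated assumptions: a one-class SVM is trained only on feature vectors from normally executed activities, and new activity instances falling outside the learned region are flagged as potential activity errors (prompt situations). The feature layout, thresholds, and data are hypothetical, not the authors' pipeline.

    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import OneClassSVM

    # Hypothetical per-activity feature vectors (e.g. duration, sensor-event counts,
    # dwell times per room): rows are activity instances, columns are features.
    rng = np.random.default_rng(0)
    normal_activities = rng.normal(loc=0.0, scale=1.0, size=(500, 8))   # error-free training data
    new_activities = np.vstack([rng.normal(0.0, 1.0, (20, 8)),          # typical executions
                                rng.normal(4.0, 1.0, (5, 8))])          # anomalous executions

    scaler = StandardScaler().fit(normal_activities)
    detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
    detector.fit(scaler.transform(normal_activities))

    # +1 = looks like a normal activity pattern, -1 = potential error / prompt situation.
    labels = detector.predict(scaler.transform(new_activities))
    print("flagged as potential activity errors:", np.where(labels == -1)[0])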

  3. Improving the mapping of crop types in the Midwestern U.S. by fusing Landsat and MODIS satellite data

    NASA Astrophysics Data System (ADS)

    Zhu, Likai; Radeloff, Volker C.; Ives, Anthony R.

    2017-06-01

    Mapping crop types is of great importance for assessing agricultural production, land-use patterns, and the environmental effects of agriculture. Indeed, both radiometric and spatial resolution of Landsat's sensors images are optimized for cropland monitoring. However, accurate mapping of crop types requires frequent cloud-free images during the growing season, which are often not available, and this raises the question of whether Landsat data can be combined with data from other satellites. Here, our goal is to evaluate to what degree fusing Landsat with MODIS Nadir Bidirectional Reflectance Distribution Function (BRDF)-Adjusted Reflectance (NBAR) data can improve crop-type classification. Choosing either one or two images from all cloud-free Landsat observations available for the Arlington Agricultural Research Station area in Wisconsin from 2010 to 2014, we generated 87 combinations of images, and used each combination as input into the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) algorithm to predict Landsat-like images at the nominal dates of each 8-day MODIS NBAR product. Both the original Landsat and STARFM-predicted images were then classified with a support vector machine (SVM), and we compared the classification errors of three scenarios: 1) classifying the one or two original Landsat images of each combination only, 2) classifying the one or two original Landsat images plus all STARFM-predicted images, and 3) classifying the one or two original Landsat images together with STARFM-predicted images for key dates. Our results indicated that using two Landsat images as the input of STARFM did not significantly improve the STARFM predictions compared to using only one, and predictions using Landsat images between July and August as input were most accurate. Including all STARFM-predicted images together with the Landsat images significantly increased average classification error by 4% points (from 21% to 25%) compared to using only Landsat images. However, incorporating only STARFM-predicted images for key dates decreased average classification error by 2% points (from 21% to 19%) compared to using only Landsat images. In particular, if only a single Landsat image was available, adding STARFM predictions for key dates significantly decreased the average classification error by 4 percentage points from 30% to 26% (p < 0.05). We conclude that adding STARFM-predicted images can be effective for improving crop-type classification when only limited Landsat observations are available, but carefully selecting images from a full set of STARFM predictions is crucial. We developed an approach to identify the optimal subsets of all STARFM predictions, which gives an alternative method of feature selection for future research.

  4. Exploring diversity in ensemble classification: Applications in large area land cover mapping

    NASA Astrophysics Data System (ADS)

    Mellor, Andrew; Boukir, Samia

    2017-07-01

    Ensemble classifiers, such as random forests, are now commonly applied in the field of remote sensing, and have been shown to perform better than single classifier systems, resulting in reduced generalisation error. Diversity across the members of ensemble classifiers is known to have a strong influence on classification performance - whereby classifier errors are uncorrelated and more uniformly distributed across ensemble members. The relationship between ensemble diversity and classification performance has not yet been fully explored in the fields of information science and machine learning and has never been examined in the field of remote sensing. This study is a novel exploration of ensemble diversity and its link to classification performance, applied to a multi-class canopy cover classification problem using random forests and multisource remote sensing and ancillary GIS data, across seven million hectares of diverse dry-sclerophyll dominated public forests in Victoria Australia. A particular emphasis is placed on analysing the relationship between ensemble diversity and ensemble margin - two key concepts in ensemble learning. The main novelty of our work is on boosting diversity by emphasizing the contribution of lower margin instances used in the learning process. Exploring the influence of tree pruning on diversity is also a new empirical analysis that contributes to a better understanding of ensemble performance. Results reveal insights into the trade-off between ensemble classification accuracy and diversity, and through the ensemble margin, demonstrate how inducing diversity by targeting lower margin training samples is a means of achieving better classifier performance for more difficult or rarer classes and reducing information redundancy in classification problems. Our findings inform strategies for collecting training data and designing and parameterising ensemble classifiers, such as random forests. This is particularly important in large area remote sensing applications, for which training data is costly and resource intensive to collect.
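
    A small sketch of one common definition of the ensemble margin for a random forest (the fraction of tree votes for the true class minus the largest vote fraction for any other class), which can be used to locate the low-margin training instances the study targets. The synthetic data and the assumption that labels run 0..K-1 are illustrative only.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    # Synthetic multi-class data standing in for multisource canopy-cover features.
    X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                               n_classes=4, random_state=1)
    rf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X, y)

    # Per-tree hard votes for every sample (labels assumed to be 0..K-1 here).
    votes = np.stack([tree.predict(X) for tree in rf.estimators_], axis=1)
    vote_frac = np.stack([(votes == c).mean(axis=1) for c in rf.classes_], axis=1)

    # Ensemble margin: vote fraction for the true class minus the largest
    # vote fraction for any other class; low margins mark "hard" instances.
    true_frac = vote_frac[np.arange(len(y)), y]
    others = vote_frac.copy()
    others[np.arange(len(y)), y] = -np.inf
    margin = true_frac - others.max(axis=1)

    hard_idx = np.argsort(margin)[:50]   # candidates to emphasize when boosting diversity
    print(f"mean margin: {margin.mean():.3f}; lowest margins: {np.round(margin[hard_idx[:5]], 3)}")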

  5. Analysis of swallowing sounds using hidden Markov models.

    PubMed

    Aboofazeli, Mohammad; Moussavi, Zahra

    2008-04-01

    In recent years, acoustical analysis of the swallowing mechanism has received considerable attention due to its diagnostic potentials. This paper presents a hidden Markov model (HMM) based method for the swallowing sound segmentation and classification. Swallowing sound signals of 15 healthy and 11 dysphagic subjects were studied. The signals were divided into sequences of 25 ms segments each of which were represented by seven features. The sequences of features were modeled by HMMs. Trained HMMs were used for segmentation of the swallowing sounds into three distinct phases, i.e., initial quiet period, initial discrete sounds (IDS) and bolus transit sounds (BTS). Among the seven features, accuracy of segmentation by the HMM based on multi-scale product of wavelet coefficients was higher than that of the other HMMs and the linear prediction coefficient (LPC)-based HMM showed the weakest performance. In addition, HMMs were used for classification of the swallowing sounds of healthy subjects and dysphagic patients. Classification accuracy of different HMM configurations was investigated. When we increased the number of states of the HMMs from 4 to 8, the classification error gradually decreased. In most cases, classification error for N=9 was higher than that of N=8. Among the seven features used, root mean square (RMS) and waveform fractal dimension (WFD) showed the best performance in the HMM-based classification of swallowing sounds. When the sequences of the features of IDS segment were modeled separately, the accuracy reached up to 85.5%. As a second stage classification, a screening algorithm was used which correctly classified all the subjects but one healthy subject when RMS was used as characteristic feature of the swallowing sounds and the number of states was set to N=8.
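
    A compact sketch of HMM-based sequence classification in the spirit of the study, assuming the hmmlearn package: one Gaussian HMM is fitted per class on sequences of segment features, and a new sequence is assigned to the class whose model gives the highest log-likelihood. The toy features, sequence lengths, and state count are placeholders, not the paper's configuration.

    import numpy as np
    from hmmlearn.hmm import GaussianHMM   # pip install hmmlearn

    def train_class_hmm(sequences, n_states=8):
        """Fit one HMM on all feature sequences of a class.
        sequences: list of (T_i, n_features) arrays, e.g. features per 25 ms segment."""
        X = np.vstack(sequences)
        lengths = [len(s) for s in sequences]
        model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        model.fit(X, lengths)
        return model

    def classify(seq, models):
        """Assign the sequence to the class whose HMM gives the highest log-likelihood."""
        scores = {label: m.score(seq) for label, m in models.items()}
        return max(scores, key=scores.get)

    # Toy stand-in data: healthy sequences drawn from one regime, dysphagic from another.
    rng = np.random.default_rng(0)
    healthy = [rng.normal(0.0, 1.0, (rng.integers(40, 80), 2)) for _ in range(15)]
    dysphagic = [rng.normal(1.5, 2.0, (rng.integers(40, 80), 2)) for _ in range(11)]

    models = {"healthy": train_class_hmm(healthy), "dysphagic": train_class_hmm(dysphagic)}
    print(classify(rng.normal(1.5, 2.0, (60, 2)), models))   # expected: "dysphagic"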

  6. First Order Reliability Application and Verification Methods for Semistatic Structures

    NASA Technical Reports Server (NTRS)

    Verderaime, Vincent

    1994-01-01

    Escalating risks of aerostructures stimulated by increasing size, complexity, and cost should no longer be ignored by conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments, its stress audits are shown to be arbitrary and incomplete, and it compromises high strength materials performance. A reliability method is proposed which combines first order reliability principles with deterministic design variables and conventional test technique to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety index expression. The application is reduced to solving for a factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and with the pace of semistatic structural designs.

  7. Spelling in Adolescents with Dyslexia: Errors and Modes of Assessment

    ERIC Educational Resources Information Center

    Tops, Wim; Callens, Maaike; Bijn, Evi; Brysbaert, Marc

    2014-01-01

    In this study we focused on the spelling of high-functioning students with dyslexia. We made a detailed classification of the errors in a word and sentence dictation task made by 100 students with dyslexia and 100 matched control students. All participants were in the first year of their bachelor's studies and had Dutch as mother tongue. Three…

  8. Estimation of a cover-type change matrix from error-prone data

    Treesearch

    Steen Magnussen

    2009-01-01

    Coregistration and classification errors seriously compromise per-pixel estimates of land cover change. A more robust estimation of change is proposed in which adjacent pixels are grouped into 3x3 clusters and treated as a unit of observation. A complete change matrix is recovered in a two-step process. The diagonal elements of a change matrix are recovered from...

  9. Curriculum Assessment Using Artificial Neural Network and Support Vector Machine Modeling Approaches: A Case Study. IR Applications. Volume 29

    ERIC Educational Resources Information Center

    Chen, Chau-Kuang

    2010-01-01

    Artificial Neural Network (ANN) and Support Vector Machine (SVM) approaches have been on the cutting edge of science and technology for pattern recognition and data classification. In the ANN model, classification accuracy can be achieved by using the feed-forward of inputs, back-propagation of errors, and the adjustment of connection weights. In…

  10. North American vegetation model for land-use planning in a changing climate: A solution to large classification problems

    Treesearch

    Gerald E. Rehfeldt; Nicholas L. Crookston; Cuauhtemoc Saenz-Romero; Elizabeth M. Campbell

    2012-01-01

    Data points intensively sampling 46 North American biomes were used to predict the geographic distribution of biomes from climate variables using the Random Forests classification tree. Techniques were incorporated to accommodate a large number of classes and to predict the future occurrence of climates beyond the contemporary climatic range of the biomes. Errors of...

  11. Feasibility of Equivalent Dipole Models for Electroencephalogram-Based Brain Computer Interfaces.

    PubMed

    Schimpf, Paul H

    2017-09-15

    This article examines the localization errors of equivalent dipolar sources inverted from the surface electroencephalogram in order to determine the feasibility of using their location as classification parameters for non-invasive brain computer interfaces. Inverse localization errors are examined for two head models: a model represented by four concentric spheres and a realistic model based on medical imagery. It is shown that the spherical model results in localization ambiguity such that a number of dipolar sources, with different azimuths and varying orientations, provide a near match to the electroencephalogram of the best equivalent source. No such ambiguity exists for the elevation of inverted sources, indicating that for spherical head models, only the elevation of inverted sources (and not the azimuth) can be expected to provide meaningful classification parameters for brain-computer interfaces. In a realistic head model, all three parameters of the inverted source location are found to be reliable, providing a more robust set of parameters. In both cases, the residual error hypersurfaces demonstrate local minima, indicating that a search for the best-matching sources should be global. Source localization error vs. signal-to-noise ratio is also demonstrated for both head models.

  12. Bayes Error Rate Estimation Using Classifier Ensembles

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep

    2003-01-01

    The Bayes error rate gives a statistical lower bound on the error achievable for a given classification problem and the associated choice of features. By reliably estimating this rate, one can assess the usefulness of the feature set that is being used for classification. Moreover, by comparing the accuracy achieved by a given classifier with the Bayes rate, one can quantify how effective that classifier is. Classical approaches for estimating or finding bounds for the Bayes error, in general, yield rather weak results for small sample sizes, unless the problem has some simple characteristics, such as Gaussian class-conditional likelihoods. This article shows how the outputs of a classifier ensemble can be used to provide reliable and easily obtainable estimates of the Bayes error with negligible extra computation. Three methods of varying sophistication are described. First, we present a framework that estimates the Bayes error when multiple classifiers, each providing an estimate of the a posteriori class probabilities, are combined through averaging. Second, we bolster this approach by adding an information theoretic measure of output correlation to the estimate. Finally, we discuss a more general method that just looks at the class labels indicated by ensemble members and provides error estimates based on the disagreements among classifiers. The methods are illustrated for artificial data, a difficult four-class problem involving underwater acoustic data, and two problems from the Problem benchmarks. For data sets with known Bayes error, the combiner-based methods introduced in this article outperform existing methods. The estimates obtained by the proposed methods also seem quite reliable for the real-life data sets for which the true Bayes rates are unknown.
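
    A simplified illustration of the first idea above: average the a-posteriori class-probability estimates of several classifiers and form a plug-in estimate of the Bayes error from the averaged posteriors. This is only a sketch with synthetic data and off-the-shelf scikit-learn models, not the article's exact estimators.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Synthetic four-class problem with injected label noise, so the Bayes error is > 0.
    X, y = make_classification(n_samples=4000, n_features=10, n_informative=6,
                               n_classes=4, flip_y=0.05, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

    ensemble = [LogisticRegression(max_iter=1000),
                RandomForestClassifier(n_estimators=200, random_state=0),
                MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)]
    posteriors = [clf.fit(X_tr, y_tr).predict_proba(X_te) for clf in ensemble]

    p_avg = np.mean(posteriors, axis=0)                    # averaged a-posteriori estimates
    bayes_error_plugin = np.mean(1.0 - p_avg.max(axis=1))  # plug-in lower-bound style estimate
    ensemble_error = np.mean(p_avg.argmax(axis=1) != y_te)
    print(f"plug-in Bayes-error estimate: {bayes_error_plugin:.3f}")
    print(f"averaged-ensemble test error: {ensemble_error:.3f}")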

  13. Temporary atrial epicardial pacing as prophylaxis against atrial fibrillation after heart surgery: a meta-analysis.

    PubMed

    Daoud, Emile G; Snow, Rick; Hummel, John D; Kalbfleisch, Steven J; Weiss, Raul; Augostini, Ralph

    2003-02-01

    Recent studies have reported the use of temporary epicardial atrial pacing as prophylaxis for postoperative atrial fibrillation (AF). The aim of this study was to assess the effect of pacing therapies for prevention of postoperative AF using meta-analysis. Using a computerized MEDLINE search, eight pacing prophylaxis trials with 776 patients were included in the meta-analysis. Trials compared control patients to patients randomized to right atrial, left atrial, or biatrial pacing used in conjunction with either fixed high-rate pacing or overdrive pacing. Overdrive biatrial pacing (OR 2.6, CI 1.4-4.8), overdrive right atrial pacing (OR 1.8, CI 1.1-2.7), and fixed high-rate biatrial pacing (OR 2.5, CI 1.3-5.1) demonstrated a significant antiarrhythmic effect for prevention of AF after open heart surgery. Furthermore, studies investigating overdrive left atrial pacing and fixed high-rate right atrial pacing have been underpowered to assess efficacy. Biatrial overdrive and fixed high-rate pacing and right atrial fixed high-rate pacing reduced the risk of new-onset AF after open heart surgery, and the relative risk reduction is approximately 2.5-fold. These results imply that various pacing algorithms are useful as a nonpharmacologic method to prevent postoperative AF.

  14. Identification of terrain cover using the optimum polarimetric classifier

    NASA Technical Reports Server (NTRS)

    Kong, J. A.; Swartz, A. A.; Yueh, H. A.; Novak, L. M.; Shin, R. T.

    1988-01-01

    A systematic approach for the identification of terrain media such as vegetation canopy, forest, and snow-covered fields is developed using the optimum polarimetric classifier. The covariance matrices for various terrain cover are computed from theoretical models of random medium by evaluating the scattering matrix elements. The optimal classification scheme makes use of a quadratic distance measure and is applied to classify a vegetation canopy consisting of both trees and grass. Experimentally measured data are used to validate the classification scheme. Analytical and Monte Carlo simulated classification errors using the fully polarimetric feature vector are compared with classification based on single features which include the phase difference between the VV and HH polarization returns. It is shown that the full polarimetric results are optimal and provide better classification performance than single feature measurements.

  15. Classifying nursing errors in clinical management within an Australian hospital.

    PubMed

    Tran, D T; Johnson, M

    2010-12-01

    Although many classification systems relating to patient safety exist, no taxonomy was identified that classified nursing errors in clinical management. To develop a classification system for nursing errors relating to clinical management (NECM taxonomy) and to describe contributing factors and patient consequences. We analysed 241 (11%) self-reported incidents relating to clinical management in nursing in a metropolitan hospital. Descriptive analysis of numeric data and content analysis of text data were undertaken to derive the NECM taxonomy, contributing factors and consequences for patients. Clinical management incidents represented 1.63 incidents per 1000 occupied bed days. The four themes of the NECM taxonomy were nursing care process (67%), communication (22%), administrative process (5%), and knowledge and skill (6%). Half of the incidents did not cause any patient harm. Contributing factors (n=111) included the following: patient clinical, social conditions and behaviours (27%); resources (22%); environment and workload (18%); other health professionals (15%); communication (13%); and nurse's knowledge and experience (5%). The NECM taxonomy provides direction to clinicians and managers on areas in clinical management that are most vulnerable to error, and therefore, priorities for system change management. Any nurses who wish to classify nursing errors relating to clinical management could use these types of errors. This study informs further research into risk management behaviour, and self-assessment tools for clinicians. Globally, nurses need to continue to monitor and act upon patient safety issues. © 2010 The Authors. International Nursing Review © 2010 International Council of Nurses.

  16. In-vivo determination of chewing patterns using FBG and artificial neural networks

    NASA Astrophysics Data System (ADS)

    Pegorini, Vinicius; Zen Karam, Leandro; Rocha Pitta, Christiano S.; Ribeiro, Richardson; Simioni Assmann, Tangriani; Cardozo da Silva, Jean Carlos; Bertotti, Fábio L.; Kalinowski, Hypolito J.; Cardoso, Rafael

    2015-09-01

    This paper reports the process of pattern classification of the chewing process of ruminants. We propose a simplified signal processing scheme for optical fiber Bragg grating (FBG) sensors based on machine learning techniques. The FBG sensors measure the biomechanical forces during jaw movements and an artificial neural network is responsible for the classification of the associated chewing pattern. In this study, three patterns associated to dietary supplement, hay and ryegrass were considered. Additionally, two other important events for ingestive behavior studies were monitored, rumination and idle period. Experimental results show that the proposed approach for pattern classification has been capable of differentiating the materials involved in the chewing process with a small classification error.

  17. Accuracy assessment, using stratified plurality sampling, of portions of a LANDSAT classification of the Arctic National Wildlife Refuge Coastal Plain

    NASA Technical Reports Server (NTRS)

    Card, Don H.; Strong, Laurence L.

    1989-01-01

    An application of a classification accuracy assessment procedure is described for a vegetation and land cover map prepared by digital image processing of LANDSAT multispectral scanner data. A statistical sampling procedure called Stratified Plurality Sampling was used to assess the accuracy of portions of a map of the Arctic National Wildlife Refuge coastal plain. Results are tabulated as percent correct classification overall as well as per category with associated confidence intervals. Although values of percent correct were disappointingly low for most categories, the study was useful in highlighting sources of classification error and demonstrating shortcomings of the plurality sampling method.
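
    For context, the sketch below shows the basic computation behind such an assessment: overall and per-category percent correct with normal-approximation confidence intervals derived from an error (confusion) matrix. The matrix values are invented, and the stratified plurality estimator itself is not reproduced here.

    import numpy as np

    # Hypothetical error matrix: rows = mapped (classified) category, columns = reference category.
    conf = np.array([[120,  15,   5],
                     [ 22,  80,  10],
                     [  8,  12,  60]], dtype=float)

    n_total = conf.sum()
    overall = np.trace(conf) / n_total

    z = 1.96  # ~95% normal-approximation interval
    half = z * np.sqrt(overall * (1 - overall) / n_total)
    print(f"overall percent correct: {100*overall:.1f}% +/- {100*half:.1f}%")

    # Per-category accuracy: correct pixels divided by all pixels mapped to that category.
    for k in range(conf.shape[0]):
        n_k = conf[k].sum()
        p_k = conf[k, k] / n_k
        half_k = z * np.sqrt(p_k * (1 - p_k) / n_k)
        print(f"category {k}: {100*p_k:.1f}% +/- {100*half_k:.1f}% (n={int(n_k)})")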

  18. Hyperspectral image classification based on local binary patterns and PCANet

    NASA Astrophysics Data System (ADS)

    Yang, Huizhen; Gao, Feng; Dong, Junyu; Yang, Yang

    2018-04-01

    Hyperspectral image classification has been well acknowledged as one of the challenging tasks of hyperspectral data processing. In this paper, we propose a novel hyperspectral image classification framework based on local binary pattern (LBP) features and PCANet. In the proposed method, linear prediction error (LPE) is first employed to select a subset of informative bands, and LBP is utilized to extract texture features. Then, spectral and texture features are stacked into a high-dimensional vector. Next, the extracted features of a specified position are transformed to a 2-D image. The obtained images of all pixels are fed into PCANet for classification. Experimental results on a real hyperspectral dataset demonstrate the effectiveness of the proposed method.

  19. Influence of nuclei segmentation on breast cancer malignancy classification

    NASA Astrophysics Data System (ADS)

    Jelen, Lukasz; Fevens, Thomas; Krzyzak, Adam

    2009-02-01

    Breast Cancer is one of the most deadly cancers affecting middle-aged women. Accurate diagnosis and prognosis are crucial to reduce the high death rate. Nowadays there are numerous diagnostic tools for breast cancer diagnosis. In this paper we discuss a role of nuclear segmentation from fine needle aspiration biopsy (FNA) slides and its influence on malignancy classification. Classification of malignancy plays a very important role during the diagnosis process of breast cancer. Out of all cancer diagnostic tools, FNA slides provide the most valuable information about the cancer malignancy grade which helps to choose an appropriate treatment. This process involves assessing numerous nuclear features and therefore precise segmentation of nuclei is very important. In this work we compare three powerful segmentation approaches and test their impact on the classification of breast cancer malignancy. The studied approaches involve level set segmentation, fuzzy c-means segmentation and textural segmentation based on co-occurrence matrix. Segmented nuclei were used to extract nuclear features for malignancy classification. For classification purposes four different classifiers were trained and tested with previously extracted features. The compared classifiers are Multilayer Perceptron (MLP), Self-Organizing Maps (SOM), Principal Component-based Neural Network (PCA) and Support Vector Machines (SVM). The presented results show that level set segmentation yields the best results over the three compared approaches and leads to a good feature extraction with a lowest average error rate of 6.51% over four different classifiers. The best performance was recorded for multilayer perceptron with an error rate of 3.07% using fuzzy c-means segmentation.

  20. Sub-pixel image classification for forest types in East Texas

    NASA Astrophysics Data System (ADS)

    Westbrook, Joey

    Sub-pixel classification is the extraction of information about the proportion of individual materials of interest within a pixel. Landcover classification at the sub-pixel scale provides more discrimination than traditional per-pixel multispectral classifiers for pixels where the material of interest is mixed with other materials. It allows for the un-mixing of pixels to show the proportion of each material of interest. The materials of interest for this study are pine, hardwood, mixed forest and non-forest. The goal of this project was to perform a sub-pixel classification, which allows a pixel to have multiple labels, and compare the result to a traditional supervised classification, which allows a pixel to have only one label. The satellite image used was a Landsat 5 Thematic Mapper (TM) scene of the Stephen F. Austin Experimental Forest in Nacogdoches County, Texas, and the four cover type classes are pine, hardwood, mixed forest and non-forest. Once classified, a multi-layer raster dataset was created that comprised four raster layers, where each layer showed the percentage of that cover type within the pixel area. Percentage cover type maps were then produced and the accuracy of each was assessed using a fuzzy error matrix for the sub-pixel classifications, and the results were compared to the supervised classification in which a traditional error matrix was used. The sub-pixel classification using the aerial photo for both training and reference data had the highest overall accuracy (65%) of the three sub-pixel classifications. This was understandable because the analyst can visually observe the cover types actually on the ground for training data and reference data, whereas using the FIA (Forest Inventory and Analysis) plot data, the analyst must assume that an entire pixel contains the exact percentage of a cover type found in a plot. An increase in accuracy was found after reclassifying each sub-pixel classification from nine classes with 10 percent intervals each to five classes with 20 percent intervals each. When compared to the supervised classification, which has a satisfactory overall accuracy of 90%, none of the sub-pixel classifications achieved the same level. However, since traditional per-pixel classifiers assign only one label to pixels throughout the landscape while sub-pixel classifications assign multiple labels to each pixel, the traditional 85% accuracy of acceptance for pixel-based classifications should not apply to sub-pixel classifications. More research is needed in order to define the level of accuracy that is deemed acceptable for sub-pixel classifications.

  1. 42 CFR 460.60 - PACE organizational structure.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false PACE organizational structure. 460.60 Section 460.60 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES... ELDERLY (PACE) PACE Administrative Requirements § 460.60 PACE organizational structure. (a) A PACE...

  2. A real-time heat strain risk classifier using heart rate and skin temperature.

    PubMed

    Buller, Mark J; Latzka, William A; Yokota, Miyo; Tharion, William J; Moran, Daniel S

    2008-12-01

    Heat injury is a real concern to workers engaged in physically demanding tasks in high heat strain environments. Several real-time physiological monitoring systems exist that can provide indices of heat strain, e.g. physiological strain index (PSI), and provide alerts to medical personnel. However, these systems depend on core temperature measurement using expensive, ingestible thermometer pills. Seeking a better solution, we suggest the use of a model which can identify the probability that individuals are 'at risk' from heat injury using non-invasive measures. The intent is for the system to identify individuals who need monitoring more closely or who should apply heat strain mitigation strategies. We generated a model that can identify 'at risk' (PSI 7.5) workers from measures of heart rate and chest skin temperature. The model was built using data from six previously published exercise studies in which some subjects wore chemical protective equipment. The model has an overall classification error rate of 10% with one false negative error (2.7%), and outperforms an earlier model and a least squares regression model with classification errors of 21% and 14%, respectively. Additionally, the model allows the classification criteria to be adjusted based on the task and acceptable level of risk. We conclude that the model could be a valuable part of a multi-faceted heat strain management system.
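
    The abstract does not give the model form, so the sketch below only illustrates the general idea with an assumed logistic-regression classifier: heart rate and chest skin temperature are mapped to a probability of being "at risk", and the decision threshold can be moved to trade false negatives against false positives. All data and coefficients are synthetic and purely illustrative.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Hypothetical training data: [heart rate (bpm), chest skin temperature (deg C)],
    # label 1 = PSI in the "at risk" range, 0 = otherwise.
    rng = np.random.default_rng(0)
    hr = np.r_[rng.normal(110, 15, 300), rng.normal(160, 15, 120)]
    tsk = np.r_[rng.normal(34.5, 0.8, 300), rng.normal(37.0, 0.8, 120)]
    X = np.c_[hr, tsk]
    y = np.r_[np.zeros(300), np.ones(120)]

    model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)

    # The decision threshold can be lowered for risk-averse tasks (fewer false negatives).
    threshold = 0.3
    p_risk = model.predict_proba([[150, 36.5]])[0, 1]
    print(f"P(at risk) = {p_risk:.2f} -> {'ALERT' if p_risk >= threshold else 'ok'}")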

  3. Physiologic pacing: new modalities and pacing sites.

    PubMed

    Padeletti, Luigi; Lieberman, Randy; Valsecchi, Sergio; Hettrick, Douglas A

    2006-12-01

    Right ventricular (RV) apical pacing impairs left ventricular function by inducing dys-synchronous contraction and relaxation. Chronic RV apical pacing is associated with an increased risk of atrial fibrillation, morbidity, and even mortality. These observations have raised questions regarding the appropriate pacing mode and site, leading to the introduction of algorithms and new pacing modes to reduce the ventricular pacing burden in dual chamber devices, and a shift of the pacing site away from the RV apex. However, further investigations are required to assess the long-term results of pacing from alternative sites in the right ventricle, because long-term results so far are equivocal. The potential benefit of prophylactic biventricular, mono-chamber left ventricular, and bifocal RV pacing should be explored in selected patients with a narrow QRS complex, especially those with impaired left ventricular function. His bundle pacing is a promising and evolving technique that requires improvements in lead technology.

  4. Validation of the ADAMO Care Watch for step counting in older adults.

    PubMed

    Magistro, Daniele; Brustio, Paolo Riccardo; Ivaldi, Marco; Esliger, Dale Winfield; Zecca, Massimiliano; Rainoldi, Alberto; Boccia, Gennaro

    2018-01-01

    Accurate measurement devices are required to objectively quantify physical activity. Wearable activity monitors, such as pedometers, may serve as affordable and feasible instruments for measuring physical activity levels in older adults during their normal activities of daily living. Currently, few available accelerometer-based step-counting devices have been shown to be accurate at slow walking speeds; appropriate devices tailored for the slow-speed ambulation typical of older adults are therefore still lacking. This study aimed to assess the validity of step counting using the pedometer function of the ADAMO Care Watch, containing an embedded algorithm for measuring physical activity in older adults. Twenty older adults aged ≥ 65 years (mean ± SD, 75±7 years; range, 68-91) and 20 young adults (25±5 years, range 20-40) wore a care watch on each wrist and performed a number of randomly ordered tasks: walking at slow, normal and fast self-paced speeds; a Timed Up and Go test (TUG); a step test and ascending/descending stairs. The criterion measure was the actual number of steps observed, counted with a manual tally counter. Absolute percentage error scores, Intraclass Correlation Coefficients (ICC), and Bland-Altman plots were used to assess validity. The ADAMO Care Watch demonstrated high validity during slow and normal speeds (range 0.5-1.5 m/s), showing an absolute error from 1.3% to 1.9% in the older adult group and from 0.7% to 2.7% in the young adult group. The percentage error for the 30-metre walking tasks increased with faster pace in both young adult (17%) and older adult groups (6%). In the TUG test, there was less error in the steps recorded for older adults (1.3% to 2.2%) than the young adults (6.6% to 7.2%). For the total sample, the ICCs for the ADAMO Care Watch for the 30-metre walking tasks at each speed and for the TUG test ranged from 0.931 to 0.985. These findings provide evidence that the ADAMO Care Watch demonstrated highly accurate measurements of step counts in all activities, particularly walking at normal and slow speeds. Therefore, these data support the inclusion of the ADAMO Care Watch in clinical applications for measuring the number of steps taken by older adults at normal and slow walking speeds.
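
    A short sketch of the validity statistics named above (absolute percentage error, an ICC(2,1)-style intraclass correlation, and Bland-Altman limits of agreement), computed on a made-up set of paired manual and device step counts; the numbers are illustrative only.

    import numpy as np

    # Paired step counts per walking trial: manual tally (criterion) vs device.
    manual = np.array([52, 48, 61, 57, 45, 66, 70, 55, 49, 63], dtype=float)
    device = np.array([51, 49, 60, 58, 44, 64, 71, 54, 48, 62], dtype=float)

    # Absolute percentage error per trial and on average.
    ape = np.abs(device - manual) / manual * 100
    print(f"mean absolute percentage error: {ape.mean():.1f}%")

    # ICC(2,1): two-way random effects, absolute agreement, single measurement.
    data = np.c_[manual, device]                 # n subjects x k raters
    n, k = data.shape
    grand = data.mean()
    ss_total = ((data - grand) ** 2).sum()
    ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    icc = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
    print(f"ICC(2,1): {icc:.3f}")

    # Bland-Altman bias and 95% limits of agreement.
    diff = device - manual
    bias, loa = diff.mean(), 1.96 * diff.std(ddof=1)
    print(f"bias {bias:.1f} steps, limits of agreement {bias - loa:.1f} to {bias + loa:.1f}")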

  5. Automated Segmentation Errors When Using Optical Coherence Tomography to Measure Retinal Nerve Fiber Layer Thickness in Glaucoma.

    PubMed

    Mansberger, Steven L; Menda, Shivali A; Fortune, Brad A; Gardiner, Stuart K; Demirel, Shaban

    2017-02-01

    To characterize the error of optical coherence tomography (OCT) measurements of retinal nerve fiber layer (RNFL) thickness when using automated retinal layer segmentation algorithms without manual refinement. Cross-sectional study. This study was set in a glaucoma clinical practice, and the dataset included 3490 scans from 412 eyes of 213 individuals with a diagnosis of glaucoma or glaucoma suspect. We used spectral domain OCT (Spectralis) to measure RNFL thickness in a 6-degree peripapillary circle, and exported the native "automated segmentation only" results. In addition, we exported the results after "manual refinement" to correct errors in the automated segmentation of the anterior (internal limiting membrane) and the posterior boundary of the RNFL. Our outcome measures included differences in RNFL thickness and glaucoma classification (i.e., normal, borderline, or outside normal limits) between scans with automated segmentation only and scans using manual refinement. Automated segmentation only resulted in a thinner global RNFL thickness (1.6 μm thinner, P < .001) when compared to manual refinement. When adjusted by operator, a multivariate model showed increased differences with decreasing RNFL thickness (P < .001), decreasing scan quality (P < .001), and increasing age (P < .03). Manual refinement changed 298 of 3486 (8.5%) of scans to a different global glaucoma classification, wherein 146 of 617 (23.7%) of borderline classifications became normal. Superior and inferior temporal clock hours had the largest differences. Automated segmentation without manual refinement resulted in reduced global RNFL thickness and overestimated the classification of glaucoma. Differences increased in eyes with a thinner RNFL thickness, older age, and decreased scan quality. Operators should inspect and manually refine OCT retinal layer segmentation when assessing RNFL thickness in the management of patients with glaucoma. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. 42 CFR 460.6 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) Basis... enrolled in a PACE program. PACE stands for programs of all-inclusive care for the elderly. PACE center is... care for the elderly that is operated by an approved PACE organization and that provides comprehensive...

  7. 42 CFR 460.6 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) Basis... enrolled in a PACE program. PACE stands for programs of all-inclusive care for the elderly. PACE center is... care for the elderly that is operated by an approved PACE organization and that provides comprehensive...

  8. 42 CFR 460.180 - Medicare payment to PACE organizations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Medicare payment to PACE organizations. 460.180... FOR THE ELDERLY (PACE) Payment § 460.180 Medicare payment to PACE organizations. (a) Principle of payment. Under a PACE program agreement, CMS makes a prospective monthly payment to the PACE organization...

  9. Acquiring Research-grade ALSM Data in the Commercial Marketplace

    NASA Astrophysics Data System (ADS)

    Haugerud, R. A.; Harding, D. J.; Latypov, D.; Martinez, D.; Routh, S.; Ziegler, J.

    2003-12-01

    The Puget Sound Lidar Consortium, working with TerraPoint, LLC, has procured a large volume of ALSM (topographic lidar) data for scientific research. Research-grade ALSM data can be characterized by their completeness, density, and accuracy. Complete data include-at a minimum-X, Y, Z, time, and classification (ground, vegetation, structure, blunder) for each laser reflection. Off-nadir angle and return number for multiple returns are also useful. We began with a pulse density of 1/sq m, and after limited experiments still find this density satisfactory in the dense second-growth forests of western Washington. Lower pulse densities would have produced unacceptably limited sampling in forested areas and aliased some topographic features. Higher pulse densities do not produce markedly better topographic models, in part because of limitations of reproducibility between the overlapping survey swaths used to achieve higher density. Our experience in a variety of forest types demonstrates that the fraction of pulses that produce ground returns varies with vegetation cover, laser beam divergence, laser power, and detector sensitivity, but have not quantified this relationship. The most significant operational limits on vertical accuracy of ALSM appear to be instrument calibration and the accuracy with which returns are classified as ground or vegetation. TerraPoint has recently implemented in-situ calibration using overlapping swaths (Latypov and Zosse, 2002, see http://www.terrapoint.com/News_damirACSM_ASPRS2002.html). On the consumer side, we routinely perform a similar overlap analysis to produce maps of relative Z error between swaths; we find that in bare, low-slope regions the in-situ calibration has reduced this internal Z error to 6-10 cm RMSE. Comparison with independent ground control points commonly illuminates inconsistencies in how GPS heights have been reduced to orthometric heights. Once these inconsistencies are resolved, it appears that the internal errors are the bulk of the error of the survey. The error maps suggest that with in-situ calibration, minor time-varying errors with a period of circa 1 sec are the largest remaining source of survey error. For forested terrain, limited ground penetration and errors in return classification can severely limit the accuracy of resulting topographic models. Initial work by Haugerud and Harding demonstrated the feasibility of fully-automatic return classification; however, TerraPoint has found that better results can be obtained more effectively with 3rd-party classification software that allows a mix of automated routines and human intervention. Our relationship has been evolving since early 2000. Important aspects of this relationship include close communication between data producer and consumer, a willingness to learn from each other, significant technical expertise and resources on the consumer side, and continued refinement of achievable, quantitative performance and accuracy specifications. Most recently we have instituted a slope-dependent Z accuracy specification that TerraPoint first developed as a heuristic for surveying mountainous terrain in Switzerland. We are now working on quantifying the internal consistency of topographic models in forested areas, using a variant of overlap analysis, and standards for the spatial distribution of internal errors.

  10. Brain fingerprinting classification concealed information test detects US Navy military medical information with P300

    PubMed Central

    Farwell, Lawrence A.; Richardson, Drew C.; Richardson, Graham M.; Furedy, John J.

    2014-01-01

    A classification concealed information test (CIT) used the “brain fingerprinting” method of applying P300 event-related potential (ERP) in detecting information that is (1) acquired in real life and (2) unique to US Navy experts in military medicine. Military medicine experts and non-experts were asked to push buttons in response to three types of text stimuli. Targets contain known information relevant to military medicine, are identified to subjects as relevant, and require pushing one button. Subjects are told to push another button to all other stimuli. Probes contain concealed information relevant to military medicine, and are not identified to subjects. Irrelevants contain equally plausible, but incorrect/irrelevant information. Error rate was 0%. Median and mean statistical confidences for individual determinations were 99.9% with no indeterminates (results lacking sufficiently high statistical confidence to be classified). We compared error rate and statistical confidence for determinations of both information present and information absent produced by classification CIT (Is a probe ERP more similar to a target or to an irrelevant ERP?) vs. comparison CIT (Does a probe produce a larger ERP than an irrelevant?) using P300 plus the late negative component (LNP; together, P300-MERMER). Comparison CIT produced a significantly higher error rate (20%) and lower statistical confidences: mean 67%; information-absent mean was 28.9%, less than chance (50%). We compared analysis using P300 alone with the P300 + LNP. P300 alone produced the same 0% error rate but significantly lower statistical confidences. These findings add to the evidence that the brain fingerprinting methods as described here provide sufficient conditions to produce less than 1% error rate and greater than 95% median statistical confidence in a CIT on information obtained in the course of real life that is characteristic of individuals with specific training, expertise, or organizational affiliation. PMID:25565941

  11. Morbidity Assessment in Surgery: Refinement Proposal Based on a Concept of Perioperative Adverse Events

    PubMed Central

    Kazaryan, Airazat M.; Røsok, Bård I.; Edwin, Bjørn

    2013-01-01

    Background. Morbidity is a cornerstone assessing surgical treatment; nevertheless surgeons have not reached extensive consensus on this problem. Methods and Findings. Clavien, Dindo, and Strasberg with coauthors (1992, 2004, 2009, and 2010) made significant efforts to the standardization of surgical morbidity (Clavien-Dindo-Strasberg classification, last revision, the Accordion classification). However, this classification includes only postoperative complications and has two principal shortcomings: disregard of intraoperative events and confusing terminology. Postoperative events have a major impact on patient well-being. However, intraoperative events should also be recorded and reported even if they do not evidently affect the patient's postoperative well-being. The term surgical complication applied in the Clavien-Dindo-Strasberg classification may be regarded as an incident resulting in a complication caused by technical failure of surgery, in contrast to the so-called medical complications. Therefore, the term surgical complication contributes to misinterpretation of perioperative morbidity. The term perioperative adverse events comprising both intraoperative unfavourable incidents and postoperative complications could be regarded as better alternative. In 2005, Satava suggested a simple grading to evaluate intraoperative surgical errors. Based on that approach, we have elaborated a 3-grade classification of intraoperative incidents so that it can be used to grade intraoperative events of any type of surgery. Refinements have been made to the Accordion classification of postoperative complications. Interpretation. The proposed systematization of perioperative adverse events utilizing the combined application of two appraisal tools, that is, the elaborated classification of intraoperative incidents on the basis of the Satava approach to surgical error evaluation together with the modified Accordion classification of postoperative complication, appears to be an effective tool for comprehensive assessment of surgical outcomes. This concept was validated in regard to various surgical procedures. Broad implementation of this approach will promote the development of surgical science and practice. PMID:23762627

  12. Entropy-based gene ranking without selection bias for the predictive classification of microarray data.

    PubMed

    Furlanello, Cesare; Serafini, Maria; Merler, Stefano; Jurman, Giuseppe

    2003-11-06

    We describe the E-RFE method for gene ranking, which is useful for the identification of markers in the predictive classification of array data. The method supports a practical modeling scheme designed to avoid the construction of classification rules based on the selection of too small gene subsets (an effect known as the selection bias, in which the estimated predictive errors are too optimistic due to testing on samples already considered in the feature selection process). With E-RFE, we speed up the recursive feature elimination (RFE) with SVM classifiers by eliminating chunks of uninteresting genes using an entropy measure of the SVM weights distribution. An optimal subset of genes is selected according to a two-strata model evaluation procedure: modeling is replicated by an external stratified-partition resampling scheme, and, within each run, an internal K-fold cross-validation is used for E-RFE ranking. Also, the optimal number of genes can be estimated according to the saturation of Zipf's law profiles. Without a decrease of classification accuracy, E-RFE allows a speed-up factor of 100 with respect to standard RFE, while improving on alternative parametric RFE reduction strategies. Thus, a process for gene selection and error estimation is made practical, ensuring control of the selection bias, and providing additional diagnostic indicators of gene importance.
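
    A rough sketch of the entropy-accelerated RFE idea, assuming a linear SVM: at each elimination step the normalized entropy of the weight-magnitude distribution decides how large a chunk of low-weight genes to discard (near-uniform weights allow aggressive elimination, peaked weights force caution). The chunk-size rule, constants, and toy data here are illustrative and differ from the published E-RFE.

    import numpy as np
    from sklearn.svm import LinearSVC

    def entropy_rfe(X, y, n_keep=20, C=1.0):
        """Recursive feature elimination with entropy-driven chunk sizes (E-RFE-style sketch)."""
        active = np.arange(X.shape[1])
        while len(active) > n_keep:
            svm = LinearSVC(C=C, max_iter=5000, dual=False).fit(X[:, active], y)
            w = np.abs(svm.coef_).sum(axis=0)
            p = w / w.sum()
            h = -(p * np.log(p + 1e-12)).sum() / np.log(len(p))   # normalized entropy in [0, 1]
            # near-uniform weights (h ~ 1): drop a large chunk of the lowest-weight features;
            # peaked weights (h ~ 0): be conservative and drop only one feature.
            chunk = max(1, int(h * 0.3 * len(active)))
            chunk = min(chunk, len(active) - n_keep)
            drop = np.argsort(w)[:chunk]
            active = np.delete(active, drop)
        return active    # indices of retained genes/features

    # Toy stand-in for expression data: 60 samples x 500 "genes", the first 10 informative.
    rng = np.random.default_rng(0)
    X_expr = rng.normal(size=(60, 500))
    y_lab = (X_expr[:, :10].sum(axis=1) > 0).astype(int)
    X_expr[:, :10] += y_lab[:, None] * 1.5

    selected = entropy_rfe(X_expr, y_lab, n_keep=20)
    print("retained gene indices:", np.sort(selected))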

  13. DMSP SSJ4 Data Restoration, Classification, and On-Line Data Access

    NASA Technical Reports Server (NTRS)

    Wing, Simon; Bredekamp, Joseph H. (Technical Monitor)

    2000-01-01

    Compress and clean raw data files for permanent storage: we have identified various error conditions/types and developed algorithms to remove these errors/noises, including the more complicated noise in the newer data sets (status = 100% complete). Internet access of compacted raw data: it is now possible to access the raw data via our web site, http://www.jhuapl.edu/Aurora/index.html. The software to read and plot the compacted raw data is also available from the same web site. Users can now download the raw data and read, plot, or manipulate the data as they wish on their own computer. Users are also able to access the cleaned data sets. Internet access of the color spectrograms: this task has also been completed; it is now possible to access the spectrograms from the web site mentioned above. Improve the particle precipitation region classification: the algorithm for this task has been developed and implemented, and as a result the accuracies improved. The web site now routinely distributes the results of applying the new algorithm to the cleaned data set. Mark the classification regions on the spectrograms: the software to mark the classification regions in the spectrograms has been completed and is also available from our web site.

  14. Malingering in Toxic Exposure. Classification Accuracy of Reliable Digit Span and WAIS-III Digit Span Scaled Scores

    ERIC Educational Resources Information Center

    Greve, Kevin W.; Springer, Steven; Bianchini, Kevin J.; Black, F. William; Heinly, Matthew T.; Love, Jeffrey M.; Swift, Douglas A.; Ciota, Megan A.

    2007-01-01

    This study examined the sensitivity and false-positive error rate of reliable digit span (RDS) and the WAIS-III Digit Span (DS) scaled score in persons alleging toxic exposure and determined whether error rates differed from published rates in traumatic brain injury (TBI) and chronic pain (CP). Data were obtained from the files of 123 persons…

  15. Software errors and complexity: An empirical investigation

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Perricone, Berry T.

    1983-01-01

    The distributions and relationships derived from the change data collected during the development of a medium scale satellite software project show that meaningful results can be obtained which allow an insight into software traits and the environment in which it is developed. Modified and new modules were shown to behave similarly. An abstract classification scheme for errors which allows a better understanding of the overall traits of a software project is also shown. Finally, various size and complexity metrics are examined with respect to errors detected within the software yielding some interesting results.

  16. Software errors and complexity: An empirical investigation

    NASA Technical Reports Server (NTRS)

    Basili, V. R.; Perricone, B. T.

    1982-01-01

    The distributions and relationships derived from the change data collected during the development of a medium scale satellite software project show that meaningful results can be obtained which allow an insight into software traits and the environment in which it is developed. Modified and new modules were shown to behave similarly. An abstract classification scheme for errors which allows a better understanding of the overall traits of a software project is also shown. Finally, various size and complexity metrics are examined with respect to errors detected within the software yielding some interesting results.

  17. Video compression of coronary angiograms based on discrete wavelet transform with block classification.

    PubMed

    Ho, B T; Tsai, M J; Wei, J; Ma, M; Saipetch, P

    1996-01-01

    A new method of video compression for angiographic images has been developed to achieve a high compression ratio (~20:1) while eliminating the block artifacts that lead to loss of diagnostic accuracy. This method adopts the Motion Picture Experts Group's (MPEG) motion-compensated prediction to take advantage of frame-to-frame correlation. However, in contrast to MPEG, the error images arising from mismatches in the motion estimation are encoded by the discrete wavelet transform (DWT) rather than the block discrete cosine transform (DCT). Furthermore, the authors developed a classification scheme that labels each block in an image as intra, error, or background type and encodes it accordingly. This hybrid coding can significantly improve compression efficiency in certain cases. The method can be generalized to any dynamic image sequence application that is sensitive to block artifacts.
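
    A minimal sketch, not the authors' implementation, of the kind of block labelling described above: each block is assigned to an intra, error, or background class by comparing simple local-activity and residual-energy measures against thresholds. The block size and threshold values here are hypothetical.

        import numpy as np

        def classify_blocks(frame, mc_error, block=8, bg_thresh=4.0, err_thresh=12.0):
            """Label each block as 'background', 'error', or 'intra' (hypothetical thresholds).

            frame    : current frame as a 2D array
            mc_error : motion-compensated prediction error for the same frame
            """
            h, w = frame.shape
            labels = {}
            for y in range(0, h - block + 1, block):
                for x in range(0, w - block + 1, block):
                    act = frame[y:y+block, x:x+block].std()              # local image activity
                    err = np.abs(mc_error[y:y+block, x:x+block]).mean()  # residual energy
                    if act < bg_thresh and err < bg_thresh:
                        labels[(y, x)] = 'background'  # static, featureless region
                    elif err < err_thresh:
                        labels[(y, x)] = 'error'       # residual is small: encode DWT of the residual
                    else:
                        labels[(y, x)] = 'intra'       # motion estimation failed: encode the block directly
            return labels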

  18. Empirically Estimable Classification Bounds Based on a Nonparametric Divergence Measure

    PubMed Central

    Berisha, Visar; Wisler, Alan; Hero, Alfred O.; Spanias, Andreas

    2015-01-01

    Information divergence functions play a critical role in statistics and information theory. In this paper we show that a non-parametric f-divergence measure can be used to provide improved bounds on the minimum binary classification probability of error for the case when the training and test data are drawn from the same distribution and for the case where there exists some mismatch between training and test distributions. We confirm the theoretical results by designing feature selection algorithms using the criteria from these bounds and by evaluating the algorithms on a series of pathological speech classification tasks. PMID:26807014
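
    A rough sketch of how such a bound can be estimated empirically. It uses the Friedman-Rafsky minimal-spanning-tree statistic as a nonparametric divergence estimate between the two labelled samples; the mapping to error bounds shown in bayes_error_bounds assumes equal class priors and is only one commonly quoted form, so treat those exact constants as an assumption rather than the paper's formula.

        import numpy as np
        from scipy.spatial.distance import cdist
        from scipy.sparse.csgraph import minimum_spanning_tree

        def fr_divergence(X0, X1):
            """Friedman-Rafsky MST estimate of the divergence between two samples."""
            Z = np.vstack([X0, X1])
            labels = np.r_[np.zeros(len(X0)), np.ones(len(X1))]
            mst = minimum_spanning_tree(cdist(Z, Z)).tocoo()
            # R = number of MST edges joining points from different samples
            R = np.sum(labels[mst.row] != labels[mst.col])
            m, n = len(X0), len(X1)
            return max(0.0, 1.0 - R * (m + n) / (2.0 * m * n))

        def bayes_error_bounds(X0, X1):
            """Assumed equal-prior form: (1 - sqrt(D))/2 <= Bayes error <= (1 - D)/2."""
            D = fr_divergence(X0, X1)
            return 0.5 * (1.0 - np.sqrt(D)), 0.5 * (1.0 - D)

        # usage: lo, hi = bayes_error_bounds(features_class0, features_class1)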

  19. Comparisons of neural networks to standard techniques for image classification and correlation

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1994-01-01

    Neural network techniques for multispectral image classification and spatial pattern detection are compared to the standard techniques of maximum-likelihood classification and spatial correlation. The neural network produced a more accurate classification than maximum-likelihood of a Landsat scene of Tucson, Arizona. Some of the errors in the maximum-likelihood classification are illustrated using decision region and class probability density plots. As expected, the main drawback to the neural network method is the long time required for the training stage. The network was trained using several different hidden layer sizes to optimize both the classification accuracy and training speed, and it was found that one node per class was optimal. The performance improved when 3x3 local windows of image data were entered into the net. This modification introduces texture into the classification without explicit calculation of a texture measure. Larger windows were successfully used for the detection of spatial features in Landsat and Magellan synthetic aperture radar imagery.
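
    A toy sketch of the windowed-input idea, assuming a labelled multispectral image stored as a NumPy array. The one-hidden-node-per-class sizing mirrors the finding reported above, but the library, band count, and window handling are illustrative choices, not the original setup.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        def window_features(img, y, x, w=3):
            """Flatten a w x w neighbourhood of all bands around pixel (y, x)."""
            r = w // 2
            return img[y-r:y+r+1, x-r:x+r+1, :].ravel()

        def train_window_classifier(img, labels, train_pixels, n_classes, w=3):
            """img: (H, W, bands); labels: (H, W) class ids; train_pixels: (y, x) tuples away from borders."""
            X = np.array([window_features(img, y, x, w) for y, x in train_pixels])
            y = np.array([labels[p] for p in train_pixels])
            clf = MLPClassifier(hidden_layer_sizes=(n_classes,), max_iter=2000)  # one hidden node per class
            clf.fit(X, y)
            return clf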

  20. Redeeming Hollow Promises: The Case for Mandatory Spending on Health Care for American Indians and Alaska Natives

    PubMed Central

    Westmoreland, Timothy M.; Watson, Kathryn R.

    2006-01-01

    The reliance on discretionary spending for American Indian/ Alaska Native health care has produced a system that is insufficient and unreliable and is associated with ongoing health disparities. Moreover, the gap between mandatory spending on a Medicare beneficiary and discretionary spending on an American Indian/Alaska Native beneficiary has grown dramatically, thus compounding the problem. The budget classification for American Indian/Alaska Native health services should be changed, and health care delivery to this population should be designated as mandatory spending. If a correct structure is in place, mandatory spending is more likely to provide adequate funding that keeps pace with changes in costs and need. PMID:16507732

  1. Rewards-driven control of robot arm by decoding EEG signals.

    PubMed

    Tanwani, Ajay Kumar; del R Millan, Jose; Billard, Aude

    2014-01-01

    Decoding the user intention from non-invasive EEG signals is a challenging problem. In this paper, we study the feasibility of predicting the goal for controlling the robot arm in self-paced reaching movements, i.e., spontaneous movements that do not require an external cue. Our proposed system continuously estimates the goal throughout a trial, starting before the movement onset, by online classification, and generates optimal trajectories for driving the robot arm to the estimated goal. Experiments using EEG signals of one healthy subject (right arm) yield smooth reaching movements of the simulated 7-degrees-of-freedom KUKA robot arm in a planar center-out reaching task, with approximately 80% accuracy of reaching the actual goal.

  2. Molecular profiling of sarcomas: new vistas for precision medicine.

    PubMed

    Al-Zaid, Tariq; Wang, Wei-Lien; Somaiah, Neeta; Lazar, Alexander J

    2017-08-01

    Sarcoma is a large and heterogeneous group of malignant mesenchymal neoplasms with significant histological overlap. Accurate diagnosis can be challenging yet important for selecting the appropriate treatment approach and prognosis. The currently torrid pace of new genomic discoveries aids our classification and diagnosis of sarcomas, understanding of pathogenesis, development of new medications, and identification of alterations that predict prognosis and response to therapy. Unfortunately, demonstrating effective targets for precision oncology has been elusive in most sarcoma types. The list of potential targets greatly outnumbers the list of available inhibitors at the present time. This review will discuss the role of molecular profiling in sarcomas in general with emphasis on selected entities with particular clinical relevance.

  3. Analysis and application of classification methods of complex carbonate reservoirs

    NASA Astrophysics Data System (ADS)

    Li, Xiongyan; Qin, Ruibao; Ping, Haitao; Wei, Dan; Liu, Xiaomei

    2018-06-01

    There are abundant carbonate reservoirs from the Cenozoic to Mesozoic era in the Middle East. Due to variation in the sedimentary environment and diagenetic processes, several porosity types coexist in these carbonate reservoirs. As a result, because of the complex lithologies and pore types, as well as the impact of microfractures, the pore structure is very complicated, and it is difficult to calculate reservoir parameters accurately. In order to evaluate carbonate reservoirs accurately, and building on pore structure evaluation, classification methods based on capillary pressure curves and on flow units are analyzed. Although the carbonate reservoirs can be classified on the basis of capillary pressure curves, the porosity-permeability relationship after classification is not ideal. On the basis of flow units, a high-precision functional relationship between porosity and permeability can be established after classification, so the carbonate reservoirs can be quantitatively evaluated according to flow-unit classes. In the dolomite reservoirs, the average absolute error of calculated permeability decreases from 15.13 to 7.44 mD; similarly, the average absolute error for the limestone reservoirs is reduced from 20.33 to 7.37 mD. Only by accurately characterizing pore structures and classifying reservoir types can reservoir parameters be calculated accurately; both are therefore essential to the evaluation of complex carbonate reservoirs in the Middle East.
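
    As an illustration of flow-unit classification in general (the abstract does not spell out the exact procedure used), one widely used approach groups samples by flow zone indicator, FZI = RQI / phi_z, with RQI = 0.0314 * sqrt(k / phi) and phi_z = phi / (1 - phi). The cluster boundaries in the sketch below are hypothetical.

        import numpy as np

        def flow_zone_indicator(phi, k_md):
            """phi: fractional porosity; k_md: permeability in mD."""
            rqi = 0.0314 * np.sqrt(k_md / phi)   # reservoir quality index (micrometres)
            phi_z = phi / (1.0 - phi)            # normalized pore volume
            return rqi / phi_z

        def assign_flow_unit(phi, k_md, boundaries=(1.0, 2.5, 5.0)):
            """Return an integer flow-unit id per sample (hypothetical FZI boundaries)."""
            fzi = flow_zone_indicator(np.asarray(phi, float), np.asarray(k_md, float))
            return np.digitize(fzi, boundaries)

        # Within each flow unit a separate porosity-permeability regression can then be fitted,
        # which is what allows the tighter permeability errors reported above.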

  4. Study on Classification Accuracy Inspection of Land Cover Data Aided by Automatic Image Change Detection Technology

    NASA Astrophysics Data System (ADS)

    Xie, W.-J.; Zhang, L.; Chen, H.-P.; Zhou, J.; Mao, W.-J.

    2018-04-01

    The purpose of national geographic conditions monitoring is to obtain information on surface changes caused by human social and economic activity, so that the geographic information can better serve government, enterprises, and the public. Land cover data contain detailed geographic conditions information and have therefore been listed as one of the important products of the national geographic conditions monitoring project. At present, the main issue in producing land cover data is how to improve classification accuracy, and classification accuracy is likewise an important check point in quality inspection and acceptance of the data. So far, classification accuracy inspection in the project has relied mainly on human-computer interaction or manual inspection, which is time consuming and laborious. Using automatic high-resolution remote sensing image change detection based on the ERDAS IMAGINE platform, this paper carries out a classification accuracy inspection test of land cover data in the project and presents a corresponding technical route comprising data pre-processing, change detection, result output, and information extraction. The result of the quality inspection test shows the effectiveness of the technical route: it can meet the inspection needs for the two typical errors, missing updates and incorrect updates, effectively reduces the intensity of human-computer interaction inspection for quality inspectors, and provides a technical reference for the production and quality control of land cover data.

  5. Direct His bundle pacing post AVN ablation.

    PubMed

    Lakshmanadoss, Umashankar; Aggarwal, Ashim; Huang, David T; Daubert, James P; Shah, Abrar

    2009-08-01

    Atrioventricular nodal (AVN) ablation with concomitant pacemaker implantation is one of the strategies that reduce symptoms in patients with atrial fibrillation (AF). However, the long-term adverse effects of right ventricular (RV) apical pacing have led to the search for alternative pacing sites. Biventricular pacing produces a significant improvement in functional capacity over RV pacing in patients undergoing AVN ablation. Another alternative is direct His bundle pacing, which aims to reduce the adverse outcomes of RV pacing. Here, we present a case of direct His bundle pacing using a steerable lead delivery system in a patient with symptomatic paroxysmal AF undergoing concurrent AVN ablation.

  6. 42 CFR 460.34 - Duration of PACE program agreement.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... (CONTINUED) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) PACE Program Agreement § 460.34 Duration of PACE program agreement. An agreement is...

  7. 42 CFR 460.34 - Duration of PACE program agreement.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... (CONTINUED) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) PACE Program Agreement § 460.34 Duration of PACE program agreement. An agreement is...

  8. Improved EEG Event Classification Using Differential Energy.

    PubMed

    Harati, A; Golmohammadi, M; Lopez, S; Obeid, I; Picone, J

    2015-12-01

    Feature extraction for automatic classification of EEG signals typically relies on time frequency representations of the signal. Techniques such as cepstral-based filter banks or wavelets are popular analysis techniques in many signal processing applications including EEG classification. In this paper, we present a comparison of a variety of approaches to estimating and postprocessing features. To further aid in discrimination of periodic signals from aperiodic signals, we add a differential energy term. We evaluate our approaches on the TUH EEG Corpus, which is the largest publicly available EEG corpus and an exceedingly challenging task due to the clinical nature of the data. We demonstrate that a variant of a standard filter bank-based approach, coupled with first and second derivatives, provides a substantial reduction in the overall error rate. The combination of differential energy and derivatives produces a 24 % absolute reduction in the error rate and improves our ability to discriminate between signal events and background noise. This relatively simple approach proves to be comparable to other popular feature extraction approaches such as wavelets, but is much more computationally efficient.
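
    A simplified sketch of the feature pipeline described above, assuming a matrix of per-frame filter-bank or cepstral features and a per-frame energy series. Computing the differential energy term as the spread of energy over a local window is one plausible reading of the description, and the window length is an assumption.

        import numpy as np

        def add_derivatives_and_diff_energy(feats, energy, win=9):
            """feats: (frames, coeffs) base features; energy: (frames,) per-frame energy."""
            d1 = np.gradient(feats, axis=0)    # first derivative (delta)
            d2 = np.gradient(d1, axis=0)       # second derivative (delta-delta)
            half = win // 2
            diff_energy = np.empty(len(energy))
            for t in range(len(energy)):
                seg = energy[max(0, t - half):t + half + 1]
                diff_energy[t] = seg.max() - seg.min()   # differential energy over the local window
            return np.hstack([feats, d1, d2, diff_energy[:, None]])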

  9. Comparison of remote sensing image processing techniques to identify tornado damage areas from Landsat TM data

    USGS Publications Warehouse

    Myint, S.W.; Yuan, M.; Cerveny, R.S.; Giri, C.P.

    2008-01-01

    Remote sensing techniques have been shown to be effective for large-scale damage surveys after a hazardous event, in both near real-time and post-event analyses. This paper aims to compare the accuracy of common image processing techniques for detecting tornado damage tracks from Landsat TM data. We employed a direct change detection approach using two sets of images acquired before and after the tornado event to produce a principal component composite image and a set of image difference bands. Techniques in the comparison include supervised classification, unsupervised classification, and an object-oriented classification approach with a nearest neighbor classifier. Accuracy assessment is based on the Kappa coefficient calculated from error matrices, which cross-tabulate correctly identified cells on the TM image and commission and omission errors in the result. Overall, the object-oriented approach exhibits the highest accuracy in tornado damage detection. The PCA and image differencing methods show comparable outcomes. While selected PCs can improve detection accuracy by 5 to 10%, the object-oriented approach performs significantly better, with 15-20% higher accuracy than the other two techniques. © 2008 by MDPI.
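
    For reference, the Kappa coefficient used in accuracy assessments like this one can be computed directly from the error (confusion) matrix; a small sketch (the example matrix is made up):

        import numpy as np

        def kappa_coefficient(error_matrix):
            """Cohen's kappa from a square error matrix (rows = reference, columns = classified)."""
            cm = np.asarray(error_matrix, dtype=float)
            n = cm.sum()
            observed = np.trace(cm) / n                                   # overall agreement
            expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2     # chance agreement
            return (observed - expected) / (1.0 - expected)

        # e.g. kappa_coefficient([[50, 5], [10, 35]])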

  10. Benchmark data on the separability among crops in the southern San Joaquin Valley of California

    NASA Technical Reports Server (NTRS)

    Morse, A.; Card, D. H.

    1984-01-01

    Landsat MSS data were input to a discriminant analysis of 21 crops on each of eight dates in 1979 using a total of 4,142 fields in southern Fresno County, California. The 21 crops, which together account for over 70 percent of the agricultural acreage in the southern San Joaquin Valley, were analyzed to quantify the spectral separability, defined as omission error, between all pairs of crops. On each date the fields were segregated into six groups based on the mean value of the MSS7/MSS5 ratio, which is correlated with green biomass. Discriminant analysis was run on each group on each date. The resulting contingency tables offer information that can be profitably used in conjunction with crop calendars to pick the best dates for a classification. The tables show expected percent correct classification and error rates for all the crops. The patterns in the contingency tables show that the percent correct classification for crops generally increases with the amount of greenness in the fields being classified. However, there are exceptions to this general rule, notably grain.

  11. Comparison of Remote Sensing Image Processing Techniques to Identify Tornado Damage Areas from Landsat TM Data

    PubMed Central

    Myint, Soe W.; Yuan, May; Cerveny, Randall S.; Giri, Chandra P.

    2008-01-01

    Remote sensing techniques have been shown to be effective for large-scale damage surveys after a hazardous event, in both near real-time and post-event analyses. This paper aims to compare the accuracy of common image processing techniques for detecting tornado damage tracks from Landsat TM data. We employed a direct change detection approach using two sets of images acquired before and after the tornado event to produce a principal component composite image and a set of image difference bands. Techniques in the comparison include supervised classification, unsupervised classification, and an object-oriented classification approach with a nearest neighbor classifier. Accuracy assessment is based on the Kappa coefficient calculated from error matrices, which cross-tabulate correctly identified cells on the TM image and commission and omission errors in the result. Overall, the object-oriented approach exhibits the highest accuracy in tornado damage detection. The PCA and image differencing methods show comparable outcomes. While selected PCs can improve detection accuracy by 5 to 10%, the object-oriented approach performs significantly better, with 15-20% higher accuracy than the other two techniques. PMID:27879757

  12. Multicategory nets of single-layer perceptrons: complexity and sample-size issues.

    PubMed

    Raudys, Sarunas; Kybartas, Rimantas; Zavadskas, Edmundas Kazimieras

    2010-05-01

    The standard cost function of multicategory single-layer perceptrons (SLPs) does not minimize the classification error rate. In order to reduce classification error, it is necessary to: 1) reject the traditional cost function, 2) obtain near-optimal pairwise linear classifiers through specially organized SLP training with optimal stopping, and 3) fuse their decisions properly. To obtain better classification in unbalanced training set situations, we introduce an unbalance-correcting term. It was found that fusion based on the Kullback-Leibler (K-L) distance and the Wu-Lin-Weng (WLW) method results in approximately the same performance in situations where sample sizes are relatively small. This observation is explained by the theoretically known fact that excessive minimization of an inexact criterion can become harmful. Comprehensive comparative investigations of six real-world pattern recognition (PR) problems demonstrated that SLP-based pairwise classifiers are comparable to, and as often as not outperform, linear support vector (SV) classifiers in moderate-dimensional situations. Colored noise injection, used to design pseudovalidation sets, proves to be a powerful tool for addressing finite-sample problems in moderate-dimensional PR tasks.

  13. Remote sensing of submerged aquatic vegetation in lower Chesapeake Bay - A comparison of Landsat MSS to TM imagery

    NASA Technical Reports Server (NTRS)

    Ackleson, S. G.; Klemas, V.

    1987-01-01

    Landsat MSS and TM imagery, obtained simultaneously over Guinea Marsh, VA, was analyzed and compared for its ability to detect submerged aquatic vegetation (SAV). An unsupervised clustering algorithm was applied to each image, with the input classification parameters defined as functions of apparent sensor noise. Class confidence and accuracy were computed for all water areas by comparing the classified images, pixel by pixel, to rasterized SAV distributions derived from color aerial photography. To illustrate the effect of water depth on classification error, areas with depth greater than 1.9 m were masked, and class confidence and accuracy were recalculated. A single-scattering radiative-transfer model is used to illustrate how percent canopy cover and water depth affect the volume reflectance from a water column containing SAV. For a submerged canopy that is morphologically and optically similar to the Zostera marina inhabiting Lower Chesapeake Bay, dense canopies may be isolated by masking optically deep water. For less dense canopies, the effect of increasing water depth is to increase the apparent percent crown cover, which may result in classification error.

  14. Canine left ventricle electromechanical behavior under different pacing modes.

    PubMed

    Vo Thang, Thanh-Thuy; Thibault, Bernard; Finnerty, Vincent; Pelletier-Galarneau, Matthieu; Khairy, Paul; Grégoire, Jean; Harel, François

    2012-10-01

    Cardiac resynchronization therapy may improve survival and quality of life in patients suffering from heart failure with left ventricular (LV) contraction dyssynchrony. While several studies have investigated electrical or mechanical determinants of synchronous contraction, few have focused on activation-contraction coupling at a macroscopic level. The objective of the study was to characterize LV electromechanical behavior and response to pacing in a heart failure model. We analyzed data from 3D electroanatomic non-contact mapping and blood pool SPECT for 12 dogs with right ventricular (RV) tachycardia pacing-induced dilated cardiomyopathy. Surfaces generated by the two modalities were registered, electrical signals were analyzed, and endocardial wall displacement curves were plotted. Rapid pacing decreased the mean LV ejection fraction (LVEF) to 20.9% and prolonged the QRS duration to 79 ± 10 ms (normal range: 40-50 ms). QRS duration remained unchanged with biventricular pacing (88.5 ms), while single-site pacing further prolonged the QRS duration (113.3 ms for RV pacing and 111.6 ms for LV pacing). No trend was observed in LV systolic function. Activation duration was significantly increased with all pacing modes compared to baseline. Finally, the electromechanical delay, defined as the delay between electrical activation and mechanical response, was increased by single-site pacing (172.9 ms for RV pacing and 174.6 ms for LV pacing) but not by biventricular pacing (162.4 ms). Combined temporal and spatial coregistration of electroanatomic maps and baseline gated blood pool SPECT imaging allowed us to quantify activation duration, electromechanical delay, and LVEF for the different pacing modes. Even though the pacing modes did not significantly modify LVEF or activation duration, they produced alterations in electromechanical delay, with biventricular pacing significantly decreasing the electromechanical delay as measured by surface tracings and endocardial non-contact mapping.

  15. Bachmann's Bundle Pacing not Only Improves Interatrial Conduction but Also Reduces the Need for Ventricular Pacing.

    PubMed

    Sławuta, Agnieszka; Kliś, Magdalena; Skoczyński, Przemysław; Bańkowski, Tomasz; Moszczyńska-Stulin, Joanna; Gajek, Jacek

    2016-01-01

    Patients treated for sick sinus syndrome may have an interatrial conduction disorder leading to atrial fibrillation. This study aimed to assess the influence of the atrial pacing site on interatrial and atrioventricular conduction, as well as the percentage of ventricular pacing, in patients with sick sinus syndrome implanted with an atrioventricular pacemaker. The study population of 96 patients (58 females, 38 males) aged 74.1 ± 11.8 years was divided into two groups: group 1 (n = 44) with right atrial appendage pacing and group 2 (n = 52) with Bachmann's area pacing. We assessed the differences in atrioventricular conduction during sinus rhythm and during atrial pacing at 60 and 90 bpm, the P-wave duration, and the percentage of ventricular pacing. No differences in baseline P-wave duration in sinus rhythm were noted between the groups (102.4 ± 17 ms vs. 104.1 ± 26 ms, p = ns). Atrial pacing at 60 bpm resulted in a longer P-wave in group 1 than in group 2 (138.3 ± 21 vs. 106.1 ± 15 ms, p < 0.01). The differences between atrioventricular conduction time during sinus rhythm and during atrial pacing at 60 and 90 bpm were significantly greater in patients with right atrial appendage vs. Bachmann's pacing (44.1 ± 17 vs. 9.2 ± 7 ms, p < 0.01, and 69.2 ± 31 vs. 21.4 ± 12 ms, p < 0.05, respectively). The percentage of ventricular pacing was higher in group 1 (21 vs. 4%, p < 0.01). Bachmann's bundle pacing decreases interatrial and atrioventricular conduction delay; moreover, the frequency-dependent lengthening of atrioventricular conduction is much less pronounced during Bachmann's bundle pacing. Right atrial appendage pacing in sick sinus syndrome patients promotes a higher percentage of ventricular pacing.

  16. 42 CFR 460.32 - Content and terms of PACE program agreement.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... SERVICES (CONTINUED) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) PACE Program Agreement § 460.32 Content and terms of PACE program agreement. (a...

  17. 42 CFR 460.90 - PACE benefits under Medicare and Medicaid.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... SERVICES (CONTINUED) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) PACE Services § 460.90 PACE benefits under Medicare and Medicaid. If a Medicare...

  18. Development and validation of Aviation Causal Contributors for Error Reporting Systems (ACCERS).

    PubMed

    Baker, David P; Krokos, Kelley J

    2007-04-01

    This investigation sought to develop a reliable and valid classification system for identifying and classifying the underlying causes of pilot errors reported under the Aviation Safety Action Program (ASAP). ASAP is a voluntary safety program that air carriers may establish to study pilot and crew performance on the line. In ASAP programs, similar to the Aviation Safety Reporting System, pilots self-report incidents by filing a short text description of the event. The identification of contributors to errors is critical if organizations are to improve human performance, yet it is difficult for analysts to extract this information from text narratives. A taxonomy was needed that could be used by pilots to classify the causes of errors. After completing a thorough literature review, pilot interviews and a card-sorting task were conducted in Studies 1 and 2 to develop the initial structure of the Aviation Causal Contributors for Event Reporting Systems (ACCERS) taxonomy. The reliability and utility of ACCERS was then tested in studies 3a and 3b by having pilots independently classify the primary and secondary causes of ASAP reports. The results provided initial evidence for the internal and external validity of ACCERS. Pilots were found to demonstrate adequate levels of agreement with respect to their category classifications. ACCERS appears to be a useful system for studying human error captured under pilot ASAP reports. Future work should focus on how ACCERS is organized and whether it can be used or modified to classify human error in ASAP programs for other aviation-related job categories such as dispatchers. Potential applications of this research include systems in which individuals self-report errors and that attempt to extract and classify the causes of those events.

  19. Multi-Leu PACE4 Inhibitor Retention within Cells Is PACE4 Dependent and a Prerequisite for Antiproliferative Activity

    PubMed Central

    Ly, Kévin; Levesque, Christine; Kwiatkowska, Anna; Ait-Mohand, Samia; Desjardins, Roxane; Guérin, Brigitte; Day, Robert

    2015-01-01

    The overexpression of the proprotein convertase PACE4 and its critical involvement in prostate cancer progression have been reported previously and supported the development of peptide inhibitors. The multi-Leu peptide, a PACE4-specific inhibitor, was subsequently generated, and its uptake by tumor xenografts was demonstrated in relation to their PACE4 expression status. To investigate whether the uptake of this inhibitor was directly dependent on PACE4 levels, uptake and efflux from cancer cells were evaluated, and correlations with PACE4 content were established in both wild-type and PACE4-knockdown cell lines. Growth deficiencies associated with PACE4 knockdown were established in the knockdown HepG2, Huh7, and HT1080 cells, as were the antiproliferative effects of the multi-Leu peptide, supporting a growth-promoting role for PACE4 in cancer cells. PMID:26114115

  20. Automated spectral classification and the GAIA project

    NASA Technical Reports Server (NTRS)

    Lasala, Jerry; Kurtz, Michael J.

    1995-01-01

    Two-dimensional spectral types for each of the stars observed in the Global Astrometric Interferometer for Astrophysics (GAIA) mission would provide additional information for galactic structure and stellar evolution studies, as well as helping in the identification of unusual objects and populations. Classifying the large quantity of spectra generated requires that automated techniques be implemented. Approaches to automatic classification are reviewed, and a metric-distance method is discussed. In tests, the metric-distance method produced spectral types with mean errors comparable to those of human classifiers working at similar resolution. Data and equipment requirements for an automated classification survey are discussed. A program of auxiliary observations is proposed to yield spectral types and radial velocities for the GAIA-observed stars.
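
    A minimal sketch of a metric-distance classifier of the sort discussed: each observed spectrum is assigned the spectral type of the nearest template under a chosen metric. The weighting scheme and the template dictionary are assumptions for illustration, not the GAIA pipeline.

        import numpy as np

        def classify_spectrum(spectrum, templates, weights=None):
            """templates: dict mapping spectral type -> template flux on the same wavelength grid."""
            w = np.ones_like(spectrum, dtype=float) if weights is None else weights
            best_type, best_dist = None, np.inf
            for sp_type, tmpl in templates.items():
                dist = np.sum(w * (spectrum - tmpl) ** 2)   # weighted squared metric distance
                if dist < best_dist:
                    best_type, best_dist = sp_type, dist
            return best_type, best_dist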

  1. Speaker normalization and adaptation using second-order connectionist networks.

    PubMed

    Watrous, R L

    1993-01-01

    A method for speaker normalization and adaptation using connectionist networks is developed. A speaker-specific linear transformation of observations of the speech signal is computed using second-order network units. Classification is accomplished by a multilayer feedforward network that operates on the normalized speech data. The network is adapted to a new talker by modifying the transformation parameters while leaving the classifier fixed; this is accomplished by backpropagating the classification error through the classifier to the second-order transformation units. The method was evaluated on the classification of ten vowels for 76 speakers using the first two formant values of the Peterson-Barney data. The results suggest that rapid speaker adaptation resulting in high classification accuracy can be accomplished by this method.

  2. Errors in imaging patients in the emergency setting

    PubMed Central

    Reginelli, Alfonso; Lo Re, Giuseppe; Midiri, Federico; Muzj, Carlo; Romano, Luigia; Brunese, Luca

    2016-01-01

    Emergency and trauma care produces a “perfect storm” for radiological errors: uncooperative patients, inadequate histories, time-critical decisions, concurrent tasks and often junior personnel working after hours in busy emergency departments. The main cause of diagnostic errors in the emergency department is the failure to correctly interpret radiographs, and the majority of diagnoses missed on radiographs are fractures. Missed diagnoses potentially have important consequences for patients, clinicians and radiologists. Radiologists play a pivotal role in the diagnostic assessment of polytrauma patients and of patients with non-traumatic craniothoracoabdominal emergencies, and key elements to reduce errors in the emergency setting are knowledge, experience and the correct application of imaging protocols. This article aims to highlight the definition and classification of errors in radiology, the causes of errors in emergency radiology and the spectrum of diagnostic errors in radiography, ultrasonography and CT in the emergency setting. PMID:26838955

  3. Errors in imaging patients in the emergency setting.

    PubMed

    Pinto, Antonio; Reginelli, Alfonso; Pinto, Fabio; Lo Re, Giuseppe; Midiri, Federico; Muzj, Carlo; Romano, Luigia; Brunese, Luca

    2016-01-01

    Emergency and trauma care produces a "perfect storm" for radiological errors: uncooperative patients, inadequate histories, time-critical decisions, concurrent tasks and often junior personnel working after hours in busy emergency departments. The main cause of diagnostic errors in the emergency department is the failure to correctly interpret radiographs, and the majority of diagnoses missed on radiographs are fractures. Missed diagnoses potentially have important consequences for patients, clinicians and radiologists. Radiologists play a pivotal role in the diagnostic assessment of polytrauma patients and of patients with non-traumatic craniothoracoabdominal emergencies, and key elements to reduce errors in the emergency setting are knowledge, experience and the correct application of imaging protocols. This article aims to highlight the definition and classification of errors in radiology, the causes of errors in emergency radiology and the spectrum of diagnostic errors in radiography, ultrasonography and CT in the emergency setting.

  4. Influence of automatic frequent pace-timing adjustments on effective left ventricular pacing during cardiac resynchronization therapy.

    PubMed

    Varma, Niraj; Stadler, Robert W; Ghosh, Subham; Kloppe, Axel

    2017-05-01

    Cardiac resynchronization therapy (CRT) requires effective left ventricular (LV) pacing (i.e. sufficient energy and appropriate timing to capture). The AdaptivCRT™ (aCRT) algorithm serves to maintain ventricular fusion during LV or biventricular pacing. This function was tested by comparing the morphological consistency of ventricular depolarizations and the percentage of effective LV pacing in CRT patients randomized to aCRT vs. echo-optimization. Continuous recordings (≥20 h) of unipolar LV electrograms from aCRT (n = 38) and echo-optimized patients (n = 22) were analysed. Morphological consistency was determined by the correlation coefficient between each beat and a template beat. Effective LV pacing of paced beats was assessed by algorithmic detection of a negative initial EGM deflection in each evoked response. The %CRT pacing delivered, %effective LV pacing (i.e. the percentage of paced beats with effective LV pacing), and overall %effective CRT (i.e. the product of %CRT pacing and %effective LV pacing) were compared between aCRT and echo-optimized patients. Demographics were similar between groups. The mean correlation coefficient between individual beats and the template was greater for aCRT (0.96 ± 0.03 vs. 0.91 ± 0.13, P = 0.07). Although %CRT pacing was similar for aCRT and echo-optimized patients (median 97.4 vs. 98.6%, P = 0.14), %effective LV pacing was larger for aCRT [99.6% (99.1%, 99.9%) vs. 94.3% (24.3%, 99.8%), P = 0.03]. For the aCRT vs. echo-optimized groups, the proportions of patients with ≥90% effective LV pacing were 92 vs. 55% (P = 0.002), and with ≥90% effective CRT were 79 vs. 45%, respectively (P = 0.018). AdaptivCRT™ significantly increased effective LV pacing over echo-optimized CRT.

  5. Reliability, Validity, and Classification Accuracy of the DSM-5 Diagnostic Criteria for Gambling Disorder and Comparison to DSM-IV.

    PubMed

    Stinchfield, Randy; McCready, John; Turner, Nigel E; Jimenez-Murcia, Susana; Petry, Nancy M; Grant, Jon; Welte, John; Chapman, Heather; Winters, Ken C

    2016-09-01

    The DSM-5 was published in 2013 and included two substantive revisions for gambling disorder (GD): a reduction in the threshold from five to four criteria and elimination of the illegal activities criterion. The purpose of this study was twofold: first, to assess the reliability, validity, and classification accuracy of the DSM-5 diagnostic criteria for GD; second, to compare the DSM-5 and DSM-IV on reliability, validity, and classification accuracy, including an examination of the effect of eliminating the illegal acts criterion on diagnostic accuracy. To compare DSM-5 and DSM-IV, eight datasets from three different countries (Canada, USA, and Spain; total N = 3247) were used. All datasets were based on similar research methods. Participants were recruited from outpatient gambling treatment services to represent the group with a GD and from the community to represent the group without a GD. All participants were administered a standardized measure of diagnostic criteria. The DSM-5 yielded satisfactory reliability, validity, and classification accuracy. In comparing the DSM-5 to the DSM-IV, most comparisons of reliability, validity, and classification accuracy showed more similarities than differences. There was evidence of modest improvements in classification accuracy for DSM-5 over DSM-IV, particularly a reduction in false-negative errors. This reduction in false-negative errors was largely a function of lowering the cut score from five to four, and this revision is an improvement over DSM-IV. From a statistical standpoint, eliminating the illegal acts criterion did not have a significant impact on diagnostic accuracy. From a clinical standpoint, illegal acts can still be addressed in the context of the DSM-5 criterion of lying to others.

  6. Multiple category-lot quality assurance sampling: a new classification system with application to schistosomiasis control.

    PubMed

    Olives, Casey; Valadez, Joseph J; Brooker, Simon J; Pagano, Marcello

    2012-01-01

    Originally a binary classifier, Lot Quality Assurance Sampling (LQAS) has proven to be a useful tool for classification of the prevalence of Schistosoma mansoni into multiple categories (≤10%, >10 and <50%, ≥50%), and semi-curtailed sampling has been shown to effectively reduce the number of observations needed to reach a decision. To date the statistical underpinnings for Multiple Category-LQAS (MC-LQAS) have not received full treatment. We explore the analytical properties of MC-LQAS, and validate its use for the classification of S. mansoni prevalence in multiple settings in East Africa. We outline MC-LQAS design principles and formulae for operating characteristic curves. In addition, we derive the average sample number for MC-LQAS when utilizing semi-curtailed sampling and introduce curtailed sampling in this setting. We also assess the performance of MC-LQAS designs with maximum sample sizes of n=15 and n=25 via a weighted kappa-statistic using S. mansoni data collected in 388 schools from four studies in East Africa. Overall performance of MC-LQAS classification was high (kappa-statistic of 0.87). In three of the studies, the kappa-statistic for a design with n=15 was greater than 0.75. In the fourth study, where these designs performed poorly (kappa-statistic less than 0.50), the majority of observations fell in regions where potential error is known to be high. Employment of semi-curtailed and curtailed sampling further reduced the sample size by as many as 0.5 and 3.5 observations per school, respectively, without increasing classification error. This work provides the needed analytics to understand the properties of MC-LQAS for assessing the prevalance of S. mansoni and shows that in most settings a sample size of 15 children provides a reliable classification of schools.
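
    To make the classification rule concrete, here is a sketch of a three-category LQAS decision for a school of n sampled children, together with its operating characteristics computed from the binomial distribution. The decision thresholds d_low and d_high are hypothetical, not the ones derived in the paper, and the sketch ignores the semi-curtailed and curtailed stopping rules.

        from scipy.stats import binom

        def classify_school(positives, n=15, d_low=2, d_high=7):
            """Classify prevalence as '<=10%', '10-50%', or '>=50%' (hypothetical thresholds)."""
            if positives <= d_low:
                return '<=10%'
            elif positives < d_high:
                return '10-50%'
            return '>=50%'

        def operating_characteristics(p, n=15, d_low=2, d_high=7):
            """Probability of each classification when the true prevalence is p."""
            p_low = binom.cdf(d_low, n, p)
            p_mid = binom.cdf(d_high - 1, n, p) - p_low
            p_high = 1.0 - p_low - p_mid
            return p_low, p_mid, p_high

        # e.g. operating_characteristics(0.10) gives the chance of each label for a 10%-prevalence school.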

  7. The presence of English and Spanish dyslexia in the Web

    NASA Astrophysics Data System (ADS)

    Rello, Luz; Baeza-Yates, Ricardo

    2012-09-01

    In this study we present a lower bound of the prevalence of dyslexia in the Web for English and Spanish. On the basis of analysis of corpora written by dyslexic people, we propose a classification of the different kinds of dyslexic errors. A representative data set of dyslexic words is used to calculate this lower bound in web pages containing English and Spanish dyslexic errors. We also present an analysis of dyslexic errors in major Internet domains, social media sites, and throughout English- and Spanish-speaking countries. To show the independence of our estimations from the presence of other kinds of errors, we compare them with the overall lexical quality of the Web and with the error rate of noncorrected corpora. The presence of dyslexic errors in the Web motivates work in web accessibility for dyslexic users.

  8. Nineteen hundred seventy three significant accomplishments. [Landsat satellite data applications

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Data collected by the Skylab remote sensing satellites was used to develop applications techniques and to combine automatic data classification with statistical clustering methods. Continuing research was concentrated in the correlation and registration of data products and in the definition of the atmospheric effects on remote sensing. The causes of errors encountered in the automated classification of agricultural data are identified. Other applications in forestry, geography, environmental geology, and land use are discussed.

  9. TEM and Gravity Data for Roosevelt Hot Springs, Utah FORGE Site

    DOE Data Explorer

    Hardwick, Christian; Nash, Greg

    2018-02-05

    This submission includes gravity data, provided as a text file and as a GIS point shapefile, and raw transient electromagnetic (TEM) data. Each text file additionally contains location data (UTM Zone 12, NAD83) and elevation (meters) for each station. The gravity data shapefile was in part downloaded from PACES, University of Texas at El Paso, http://gis.utep.edu/subpages/GMData.html, and in part collected by the Utah Geological Survey (UGS) as part of the DOE GTO supported Utah FORGE geothermal energy project near Milford, Utah. The PACES data were examined and scrubbed to eliminate any questionable data. A 2.67 g/cm^3 reduction density was used for the Bouguer correction. The attribute table column headers for the gravity data shapefile are explained below; metadata are also attached to the GIS shapefile.
      name: the individual gravity station name
      HAE: height above ellipsoid [meter]
      NGVD29: vertical datum for geoid [meter]
      obs: observed gravity
      ERRG: gravity measurement error [mGal]
      IZTC: inner zone terrain correction [mGal]
      OZTC: outer zone terrain correction [mGal]
      Gfa: free air gravity
      gSBGA: Bouguer horizontal slab
      sCBGA: Complete Bouguer anomaly

  10. Conditioning attentional skills: examining the effects of the pace of television editing on children's attention.

    PubMed

    Cooper, N R; Uller, C; Pettifer, J; Stolc, F C

    2009-10-01

    There is increasing concern about the behavioural and cognitive effects of watching television in childhood. Numerous studies have examined the effects of the amount of viewing time; however, to our knowledge, only one study has investigated whether the speed of editing of a programme may have an effect on behaviour. The purpose of the present study was to examine this question using a novel experimental paradigm. School children (aged 4-7 years) were randomly assigned to one of two groups. Each group was presented with either a fast- or slow-edit 3.5-min film of a narrator reading a children's story. Immediately following film presentation, both groups were presented with a continuous test of attention. Performance varied according to experimental group and age. In particular, we found that children's orienting networks and error rates can be affected by a very short exposure to television. Just 3.5 min of watching television can have a differential effect on the viewer depending on the pacing of the film editing. These findings highlight the potential of experimentally manipulating television exposure in children and emphasize the need for more research in this previously under-explored topic.

  11. Five-way smoking status classification using text hot-spot identification and error-correcting output codes.

    PubMed

    Cohen, Aaron M

    2008-01-01

    We participated in the i2b2 smoking status classification challenge task. The purpose of this task was to evaluate the ability of systems to automatically identify patient smoking status from discharge summaries. Our submission included several techniques that we compared and studied, including hot-spot identification, zero-vector filtering, inverse class frequency weighting, error-correcting output codes, and post-processing rules. We evaluated our approaches using the same methods as the i2b2 task organizers, using micro- and macro-averaged F1 as the primary performance metric. Our best performing system achieved a micro-F1 of 0.9000 on the test collection, equivalent to the best performing system submitted to the i2b2 challenge. Hot-spot identification, zero-vector filtering, classifier weighting, and error correcting output coding contributed additively to increased performance, with hot-spot identification having by far the largest positive effect. High performance on automatic identification of patient smoking status from discharge summaries is achievable with the efficient and straightforward machine learning techniques studied here.
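
    As an illustration of the error-correcting output codes component only (the full system also used hot-spot identification, zero-vector filtering, class weighting, and post-processing rules), a sketch with scikit-learn; the vectorizer, base classifier, and code size are assumptions, not the authors' configuration.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.multiclass import OutputCodeClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        def build_smoking_status_classifier(code_size=2.0, random_state=0):
            """Five-way text classifier using error-correcting output codes over binary SVMs."""
            return make_pipeline(
                TfidfVectorizer(),   # hot-spot extraction would normally precede vectorization
                OutputCodeClassifier(LinearSVC(), code_size=code_size, random_state=random_state),
            )

        # usage: clf = build_smoking_status_classifier(); clf.fit(train_texts, train_labels)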

  12. Machine Learning of Human Pluripotent Stem Cell-Derived Engineered Cardiac Tissue Contractility for Automated Drug Classification.

    PubMed

    Lee, Eugene K; Tran, David D; Keung, Wendy; Chan, Patrick; Wong, Gabriel; Chan, Camie W; Costa, Kevin D; Li, Ronald A; Khine, Michelle

    2017-11-14

    Accurately predicting the cardioactive effects of new molecular entities for therapeutics remains a daunting challenge. Immense research effort has been focused on creating new screening platforms that utilize human pluripotent stem cell (hPSC)-derived cardiomyocytes and three-dimensional engineered cardiac tissue constructs to better recapitulate human heart function and drug responses. As these new platforms become increasingly sophisticated and high throughput, the drug screens result in larger multidimensional datasets, and improved automated analysis methods must be developed in parallel to fully comprehend the cellular response across a multidimensional parameter space. Here, we describe the use of machine learning to comprehensively analyze 17 functional parameters derived from force readouts of hPSC-derived ventricular cardiac tissue strips (hvCTS) electrically paced at a range of frequencies and exposed to a library of compounds. A generated metric is effective for determining the cardioactivity of a given drug. Furthermore, we demonstrate a classification model that can automatically predict the mechanistic action of an unknown cardioactive drug.
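
    A schematic sketch of the classification step, assuming a table of the 17 contractility parameters per tissue/drug/dose record and a known mechanistic class per training compound. The random forest is an illustrative choice, not necessarily the model used in the paper.

        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        def train_mechanism_classifier(X, y, n_estimators=200, random_state=0):
            """X: (samples, 17) contractility parameters; y: mechanistic class labels."""
            clf = RandomForestClassifier(n_estimators=n_estimators, random_state=random_state)
            scores = cross_val_score(clf, X, y, cv=5)   # rough estimate of classification accuracy
            clf.fit(X, y)
            return clf, scores.mean()

        # an unknown compound's mechanism is then predicted with clf.predict(new_parameters)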

  13. Radiology's Achilles' heel: error and variation in the interpretation of the Röntgen image.

    PubMed

    Robinson, P J

    1997-11-01

    The performance of the human eye and brain has failed to keep pace with the enormous technical progress in the first full century of radiology. Errors and variations in interpretation now represent the weakest aspect of clinical imaging. Those interpretations which differ from the consensus view of a panel of "experts" may be regarded as errors; where experts fail to achieve consensus, differing reports are regarded as "observer variation". Errors arise from poor technique, failures of perception, lack of knowledge and misjudgments. Observer variation is substantial and should be taken into account when different diagnostic methods are compared; in many cases the difference between observers outweighs the difference between techniques. Strategies for reducing error include attention to viewing conditions, training of the observers, availability of previous films and relevant clinical data, dual or multiple reporting, standardization of terminology and report format, and assistance from computers. Digital acquisition and display will probably not affect observer variation but the performance of radiologists, as measured by receiver operating characteristic (ROC) analysis, may be improved by computer-directed search for specific image features. Other current developments show that where image features can be comprehensively described, computer analysis can replace the perception function of the observer, whilst the function of interpretation can in some cases be performed better by artificial neural networks. However, computer-assisted diagnosis is still in its infancy and complete replacement of the human observer is as yet a remote possibility.

  14. Influence of Pacing by Periodic Auditory Stimuli on Movement Continuation: Comparison with Self-regulated Periodic Movement

    PubMed Central

    Ito, Masanori; Kado, Naoki; Suzuki, Toshiaki; Ando, Hiroshi

    2013-01-01

    [Purpose] The purpose of this study was to investigate the influence of external pacing with periodic auditory stimuli on the control of periodic movement. [Subjects and Methods] Eighteen healthy subjects performed self-paced, synchronization-continuation, and syncopation-continuation tapping. Inter-onset intervals were 1,000, 2,000 and 5,000 ms. The variability of inter-tap intervals was compared between the different pacing conditions and between self-paced tapping and each continuation phase. [Results] There were no significant differences in the mean and standard deviation of the inter-tap interval between pacing conditions. For the 1,000 and 5,000 ms tasks, there were significant differences in the mean inter-tap interval following auditory pacing compared with self-pacing. For the 2,000 ms syncopation condition and 5,000 ms task, there were significant differences from self-pacing in the standard deviation of the inter-tap interval following auditory pacing. [Conclusion] These results suggest that the accuracy of periodic movement with intervals of 1,000 and 5,000 ms can be improved by the use of auditory pacing. However, the consistency of periodic movement is mainly dependent on the inherent skill of the individual; thus, improvement of consistency based on pacing is unlikely. PMID:24259932

  15. Indications for permanent pacing and pacing mode prescription from 1989 to 2006. Experience of a single academic centre in Northern Greece.

    PubMed

    Styliadis, Ioannis H; Mantziari, Aggeliki P; Gouzoumas, Nikolaos I; Vassilikos, Vasilios P; Paraskevaidis, Stelios A; Mochlas, Sotirios T; Boufidou, Amalia I; Parcharidis, Georgios E

    2008-01-01

    Indications for pacing and pacing mode prescription have changed in the past decades following advances in pacemaker technology. The aim of the present study was to evaluate changes in indications for pacing and pacing modes during the years 1989-2006 in a single academic pacemaker centre in Northern Greece. Archives of permanent pacemaker implantation procedures were studied retrospectively and data from all implants, first or replacements, were retrieved. Data from 2078 procedures were found, 78.7% of which were first implantations. Patients were 54% male with mean age 72.5 years. Main indications for pacing were atrioventricular block (AVB, 45.7%), sick sinus syndrome (SSS, 32.8%), and atrial fibrillation (12.1%). Almost half (48.9%) of the AVB cases were complete AVB, while the most common types of SSS were tachy-brady syndrome (44.1%) and asystole (27.1%). Rare indications for pacing were carotid sinus syndrome (5.0%), heart failure (3.3%) and hypertrophic obstructive cardiomyopathy (1.0%). The two most frequently used pacing modes were VVI (38.5%) and DDD (25.8%). However, pacing modes have changed greatly over the years, with a marked increase in dual-chamber pacing after 1997 and a preference for rate responsive units after 2002. Biventricular systems were also used in selected patients with heart failure from 2002 on. Indications for pacing and pacing mode prescription in our centre are similar to other pacemaker registries and reflect the global trend in pacing for mimicking the physiological activity of the heart and for addressing problems other than symptomatic bradycardia.

  16. A real-time diagnostic and performance monitor for UNIX. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Dong, Hongchao

    1992-01-01

    There are now over one million UNIX sites, and the pace at which new installations are added is steadily increasing. Along with this increase comes a need to develop simple, efficient, effective, and adaptable ways of simultaneously collecting real-time diagnostic and performance data. This need exists because distributed systems can give rise to complex failure situations that are often unidentifiable with single-machine diagnostic software. The simultaneous collection of error and performance data is also important for research in failure prediction and error/performance studies. This paper introduces a portable method to concurrently collect real-time diagnostic and performance data on a distributed UNIX system. The combined diagnostic/performance data collection is implemented on a distributed multi-computer system using SUN4s as servers. The approach uses existing UNIX system facilities to gather system dependability information such as error and crash reports. In addition, performance data such as CPU utilization, disk usage, I/O transfer rate, and network contention are also collected. In the future, the collected data will be used to identify dependability bottlenecks and to analyze the impact of failures on system performance.

  17. A classification on human factor accident/incident of China civil aviation in recent twelve years.

    PubMed

    Luo, Xiao-li

    2004-10-01

    To study human factor accidents/incidents that occurred during 1990-2001 using a new classification standard. The human factor accident/incident classification standard was developed on the basis of Reason's model, combined with the CAAC's traditional classification method, and applied to a classified statistical analysis of 361 flight incidents and 35 flight accidents in China civil aviation induced by human factors between 1990 and 2001. 1) The incident percentage during taxi and cruise is higher than that during takeoff, climb, and descent. 2) The dominant types of flight incident are runway excursion, overrunning, near-miss, tail/wingtip/engine strike, and ground obstacle impact. 3) The top three accident types are loss of control by the crew, collision with mountains, and runway overrun. 4) The crew's basic operating skill is lower than expected, most evident as poor ability to correct flight errors when they occur. 5) Crew errors are represented by incorrect control, regulation and procedure violations, disorientation, and deviation from the correct flight level. Poor CRM skill is the dominant factor affecting China civil aviation safety; this result coincides with previous studies, but there are clear differences and distinct characteristics in the top incident phases and in the types of crew error and behavior compared with those of advanced countries. CRM training should be strengthened for all pilots, taking Chinese pilots' behavioral characteristics into account, in order to improve the safety level of China civil aviation.

  18. Functional characteristics of left ventricular synchronization via right ventricular outflow-tract pacing detected by two-dimensional strain echocardiography.

    PubMed

    Hirayama, Yasutaka; Kawamura, Yuichiro; Sato, Nobuyuki; Saito, Tatsuya; Tanaka, Hideichi; Saijo, Yasuaki; Kikuchi, Kenjiro; Ohori, Katsumi; Hasebe, Naoyuki

    2017-02-01

    Recently, due to the detrimental effects on ventricular function associated with right ventricular apical (RVA) pacing, right ventricular septal (RVS) pacing has become the preferred pacing method. However, the term RVS pacing refers to both right ventricular outflow-tract (RVOT) and mid-septal (RVMS) pacing, leading to misinterpretation of the results of clinical studies. The purpose of this study, therefore, was to elucidate the functional differences between RVA, RVOT, and RVMS pacing in patients with atrioventricular block. We compared the QRS duration, global longitudinal strain (GLS), and left ventricular (LV) synchronization parameters at the three pacing sites in 47 patients. The peak systolic strain (PSS) time delay between the earliest and latest segments among the 18 LV segments and the standard deviation (SD) of the time to the PSS were also calculated for the 18 LV segments at each pacing site using two-dimensional (2D) strain echocardiography. RVMS pacing was associated with a significantly shorter QRS duration compared with RVA and RVOT pacing (154.4±21.4 vs 186.5±19.9 and 171.1±21.5 ms, P <0.001). In contrast, RVOT pacing showed a greater GLS (-14.69±4.92 vs -13.12±4.76 and -13.51±4.81%, P <0.001), a shorter PSS time delay between the earliest and latest segments (236.0±87.9 vs 271.3±102.9 and 281.9±126.6 ms, P =0.007), and a shorter SD of the time to the PSS (70.8±23.8 vs 82.7±30.8 and 81.5±33.7 ms, P =0.002) compared with RVA and RVMS pacing. These results suggest that, regardless of the paced QRS duration, the RVOT may be a more favourable pacing site than the RVMS in patients with atrioventricular conduction disorders.

  19. Content-based multiple bitstream image transmission over noisy channels.

    PubMed

    Cao, Lei; Chen, Chang Wen

    2002-01-01

    In this paper, we propose a novel combined source and channel coding scheme for image transmission over noisy channels. The main feature of the proposed scheme is a systematic decomposition of image sources so that unequal error protection can be applied according to not only bit error sensitivity but also visual content importance. The wavelet transform is adopted to hierarchically decompose the image. The association between the wavelet coefficients and what they represent spatially in the original image is fully exploited so that wavelet blocks are classified based on their corresponding image content. The classification produces wavelet blocks in each class with similar content and statistics, therefore enables high performance source compression using the set partitioning in hierarchical trees (SPIHT) algorithm. To combat the channel noise, an unequal error protection strategy with rate-compatible punctured convolutional/cyclic redundancy check (RCPC/CRC) codes is implemented based on the bit contribution to both peak signal-to-noise ratio (PSNR) and visual quality. At the receiving end, a postprocessing method making use of the SPIHT decoding structure and the classification map is developed to restore the degradation due to the residual error after channel decoding. Experimental results show that the proposed scheme is indeed able to provide protection both for the bits that are more sensitive to errors and for the more important visual content under a noisy transmission environment. In particular, the reconstructed images illustrate consistently better visual quality than using the single-bitstream-based schemes.

  20. Robust Image Regression Based on the Extended Matrix Variate Power Exponential Distribution of Dependent Noise.

    PubMed

    Luo, Lei; Yang, Jian; Qian, Jianjun; Tai, Ying; Lu, Gui-Fu

    2017-09-01

    Dealing with partial occlusion or illumination changes is one of the most challenging problems in image representation and classification. In this problem, the characterization of the representation error plays a crucial role. In most current approaches, the error matrix needs to be stretched into a vector and each element is assumed to be independently corrupted. This ignores the dependence between the elements of the error. In this paper, it is assumed that the error image caused by partial occlusion or illumination changes is a random matrix variate and follows the extended matrix variate power exponential distribution. This distribution has heavy-tailed regions and can be used to describe a matrix pattern of l×m-dimensional observations that are not independent. This paper reveals the essence of the proposed distribution: it actually alleviates the correlations between pixels in an error matrix E and makes E approximately Gaussian. On the basis of this distribution, we derive a Schatten p-norm-based matrix regression model with Lq regularization. The alternating direction method of multipliers is applied to solve this model. To get a closed-form solution in each step of the algorithm, two singular value function thresholding operators are introduced. In addition, the extended Schatten p-norm is utilized to characterize the distance between the test samples and classes in the design of the classifier. Extensive experimental results for image reconstruction and classification with structural noise demonstrate that the proposed algorithm works much more robustly than some existing regression-based methods.
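
    The closed-form steps mentioned above rely on shrinking singular values. A minimal sketch of such an operator is given below; it implements the standard soft-thresholding (proximal) operator for the nuclear norm, i.e. the Schatten p=1 case, rather than the paper's generalized operators, and it runs on a random matrix purely for illustration.

        # Hedged sketch: singular value soft-thresholding (nuclear-norm proximal operator).
        import numpy as np

        def svt(matrix, tau):
            """Shrink every singular value of `matrix` by `tau`, clipping at zero."""
            u, s, vt = np.linalg.svd(matrix, full_matrices=False)
            return u @ np.diag(np.maximum(s - tau, 0.0)) @ vt

        rng = np.random.default_rng(5)
        E = rng.normal(size=(8, 6))                      # stand-in for an error matrix
        print("singular values before:", np.round(np.linalg.svd(E, compute_uv=False), 2))
        print("singular values after :", np.round(np.linalg.svd(svt(E, 1.0), compute_uv=False), 2))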

  1. Assessing the accuracy of the International Classification of Diseases codes to identify abusive head trauma: a feasibility study.

    PubMed

    Berger, Rachel P; Parks, Sharyn; Fromkin, Janet; Rubin, Pamela; Pecora, Peter J

    2015-04-01

    To assess the accuracy of an International Classification of Diseases (ICD) code-based operational case definition for abusive head trauma (AHT). Subjects were children <5 years of age evaluated for AHT by a hospital-based Child Protection Team (CPT) at a tertiary care paediatric hospital with a completely electronic medical record (EMR) system. Subjects were designated as non-AHT traumatic brain injury (TBI) or AHT based on whether the CPT determined that the injuries were due to AHT. The sensitivity and specificity of the ICD-based definition were calculated. There were 223 children evaluated for AHT: 117 AHT and 106 non-AHT TBI. The sensitivity and specificity of the ICD-based operational case definition were 92% (95% CI 85.8 to 96.2) and 96% (95% CI 92.3 to 99.7), respectively. All errors in sensitivity and three of the four specificity errors were due to coder error; one specificity error was a physician error. In a paediatric tertiary care hospital with an EMR system, the accuracy of an ICD-based case definition for AHT was high. Additional studies are needed to assess the accuracy of this definition in all types of hospitals in which children with AHT are cared for. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
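
    The reported accuracy figures follow from simple counts. The sketch below shows the arithmetic; the exact split of errors (9 false negatives, 4 false positives) is inferred from the abstract's 117/106 group sizes and the 92%/96% figures, and is only illustrative.

        # Hedged sketch of the sensitivity/specificity arithmetic (counts are inferred, not reported).
        def sens_spec(tp, fn, tn, fp):
            """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
            return tp / (tp + fn), tn / (tn + fp)

        # 117 CPT-confirmed AHT cases and 106 non-AHT TBI controls (assumed error split)
        sens, spec = sens_spec(tp=108, fn=9, tn=102, fp=4)
        print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")   # ~92.3% and ~96.2%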

  2. Modeling habitat dynamics accounting for possible misclassification

    USGS Publications Warehouse

    Veran, Sophie; Kleiner, Kevin J.; Choquet, Remi; Collazo, Jaime; Nichols, James D.

    2012-01-01

    Land cover data are widely used in ecology as land cover change is a major component of changes affecting ecological systems. Landscape change estimates are characterized by classification errors. Researchers have used error matrices to adjust estimates of areal extent, but estimation of land cover change is more challenging because classification error can be confused with true change. We modeled land cover dynamics for a discrete set of habitat states. The approach accounts for state uncertainty to produce unbiased estimates of habitat transition probabilities using ground information to inform error rates. We consider the case when true and observed habitat states are available for the same geographic unit (pixel) and when true and observed states are obtained at one level of resolution, but transition probabilities are estimated at a different level of resolution (aggregations of pixels). Simulation results showed a strong bias when estimating transition probabilities if misclassification was not accounted for. Scaling-up does not necessarily decrease the bias and can even increase it. Analyses of land cover data in the Southeast region of the USA showed that land change patterns appeared distorted if misclassification was not accounted for: the rate of habitat turnover was artificially increased and habitat composition appeared more homogeneous. Not properly accounting for land cover misclassification can produce misleading inferences about habitat state and dynamics and also misleading predictions about species distributions based on habitat. Our models that explicitly account for state uncertainty should be useful in obtaining more accurate inferences about change from data that include errors.
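
    As a rough illustration of the bias described above (not the authors' estimation model), the sketch below simulates two-state habitat dynamics, observes both time steps through an assumed confusion matrix, and shows how the naive turnover estimate is inflated by misclassification.

        # Hedged sketch: misclassification inflates apparent habitat turnover (synthetic example).
        import numpy as np

        rng = np.random.default_rng(0)
        P_true = np.array([[0.95, 0.05],        # true transition probabilities between two states
                           [0.10, 0.90]])
        C = np.array([[0.90, 0.10],             # P(observed | true): 10% misclassification
                      [0.10, 0.90]])

        n = 20_000
        s1 = rng.integers(0, 2, n)                                  # true state at time 1
        s2 = np.array([rng.choice(2, p=P_true[s]) for s in s1])    # true state at time 2
        o1 = np.array([rng.choice(2, p=C[s]) for s in s1])         # observed states (with error)
        o2 = np.array([rng.choice(2, p=C[s]) for s in s2])

        print(f"true turnover rate:     {np.mean(s1 != s2):.3f}")
        print(f"apparent turnover rate: {np.mean(o1 != o2):.3f}")   # inflated by misclassification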

  3. Underwater target classification using wavelet packets and neural networks.

    PubMed

    Azimi-Sadjadi, M R; Yao, D; Huang, Q; Dobeck, G J

    2000-01-01

    In this paper, a new subband-based classification scheme is developed for classifying underwater mines and mine-like targets from the acoustic backscattered signals. The system consists of a feature extractor using wavelet packets in conjunction with linear predictive coding (LPC), a feature selection scheme, and a backpropagation neural-network classifier. The data set used for this study consists of the backscattered signals from six different objects: two mine-like targets and four nontargets for several aspect angles. Simulation results on ten different noisy realizations and for signal-to-noise ratio (SNR) of 12 dB are presented. The receiver operating characteristic (ROC) curve of the classifier generated based on these results demonstrated excellent classification performance of the system. The generalization ability of the trained network was demonstrated by computing the error and classification rate statistics on a large data set. A multiaspect fusion scheme was also adopted in order to further improve the classification performance.
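
    A hedged sketch of a subband-feature pipeline in the same spirit is shown below; wavelet-packet subband energies stand in for the LPC-based features, the backscatter signals and labels are synthetic, and a small multilayer perceptron replaces the authors' backpropagation network.

        # Illustrative sketch, not the paper's exact feature extractor or data.
        import numpy as np
        import pywt
        from sklearn.neural_network import MLPClassifier

        def subband_energies(signal, wavelet="db4", level=3):
            """Energy of each wavelet-packet node at the given decomposition level."""
            wp = pywt.WaveletPacket(signal, wavelet=wavelet, maxlevel=level)
            return np.array([np.sum(node.data ** 2) for node in wp.get_level(level, order="freq")])

        rng = np.random.default_rng(1)
        X = np.array([subband_energies(rng.normal(size=512) + cls * np.sin(np.linspace(0, 40, 512)))
                      for cls in (0, 1) for _ in range(50)])
        y = np.repeat([0, 1], 50)

        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
        print("training accuracy:", clf.score(X, y))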

  4. The effect of the atmosphere on the classification of satellite observations to identify surface features

    NASA Technical Reports Server (NTRS)

    Fraser, R. S.; Bahethi, O. P.; Al-Abbas, A. H.

    1977-01-01

    The effect of differences in atmospheric turbidity on the classification of Landsat 1 observations of a rural scene is presented. The observations are classified by an unsupervised clustering technique. These clusters serve as a training set for use of a maximum-likelihood algorithm. The measured radiances in each of the four spectral bands are then changed by amounts measured by Landsat 1. These changes can be associated with a decrease in atmospheric turbidity by a factor of 1.3. The classification of 22% of the pixels changes as a result of the modification. The modified observations are then reclassified as an independent set. Only 3% of the pixels have a different classification than the unmodified set. Hence, if classification errors of rural areas are not to exceed 15%, a new training set has to be developed whenever the difference in turbidity between the training and test sets reaches unity.

  5. Multinomial mixture model with heterogeneous classification probabilities

    USGS Publications Warehouse

    Holland, M.D.; Gray, B.R.

    2011-01-01

    Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of multinomial and correct classification probabilities when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.

  6. Stretchy binary classification.

    PubMed

    Toh, Kar-Ann; Lin, Zhiping; Sun, Lei; Li, Zhengguo

    2018-01-01

    In this article, we introduce an analytic formulation for compressive binary classification. The formulation seeks the least ℓp-norm of the parameter vector subject to a classification error constraint. An analytic and stretchable estimation is conjectured where the estimation can be viewed as an extension of the pseudoinverse with left and right constructions. Our variance analysis indicates that the estimation based on the left pseudoinverse is unbiased and the estimation based on the right pseudoinverse is biased. Sparseness can be obtained for the biased estimation under certain mild conditions. The proposed estimation is investigated numerically using both synthetic and real-world data. Copyright © 2017 Elsevier Ltd. All rights reserved.
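
    A minimal numeric sketch of the left/right pseudoinverse constructions referred to above follows; the data are synthetic, and the ℓp-norm objective and classification-error constraint are omitted.

        # Hedged sketch: least-squares ("left") vs minimum-norm ("right") pseudoinverse estimates.
        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 10))            # n samples x d features
        w_true = rng.normal(size=10)
        y = np.sign(X @ w_true)                   # binary targets in {-1, +1}

        # Over-determined case (n > d): left-pseudoinverse least-squares estimate
        w_left = np.linalg.pinv(X.T @ X) @ X.T @ y

        # Under-determined case (d > n): right-pseudoinverse minimum-norm estimate
        Xs, ys = X[:5], y[:5]                     # keep only 5 samples so d > n
        w_right = Xs.T @ np.linalg.pinv(Xs @ Xs.T) @ ys

        print("training accuracy (left): ", np.mean(np.sign(X @ w_left) == y))
        print("training accuracy (right):", np.mean(np.sign(Xs @ w_right) == ys))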

  7. [Effects of residents' care needs classification (and misclassification) in nursing homes: the example of SOSIA classification].

    PubMed

    Nebuloni, G; Di Giulio, P; Gregori, D; Sandonà, P; Berchialla, P; Foltran, F; Renga, G

    2011-01-01

    In 2003, the Lombardy region introduced a case-mix reimbursement system for nursing homes based on the SOSIA form, which classifies residents into eight classes of frailty. In the present study the agreement between SOSIA classification and other well documented instruments, including the Barthel Index, Mini Mental State Examination and Clinical Dementia Rating Scale, is evaluated in 100 nursing home residents. Only 50% of residents with severe dementia were recognized as seriously impaired when assessed with the SOSIA form; since misclassification errors underestimate residents' care needs, they result in insufficient reimbursement, limiting the nursing home's ability to offer care appropriate to the case-mix.

  8. Effect of filtration of signals of brain activity on quality of recognition of brain activity patterns using artificial intelligence methods

    NASA Astrophysics Data System (ADS)

    Hramov, Alexander E.; Frolov, Nikita S.; Musatov, Vyachaslav Yu.

    2018-02-01

    In the present work we studied the classification of human brain states corresponding to real movements of the hands and legs. For this purpose we used a supervised learning algorithm based on feed-forward artificial neural networks (ANNs) with error back-propagation, along with the support vector machine (SVM) method. We compared the quality of operator movement classification from experimentally recorded EEG signals both without preliminary processing and after filtering in different ranges up to 25 Hz. It was shown that low-frequency filtering of multichannel EEG data significantly improved the accuracy of operator movement classification.
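
    The preprocessing step described above is easy to sketch: low-pass filter each "EEG" trial below 25 Hz and pass it to a classifier. In the hedged example below the signals are synthetic, the sampling rate and filter order are assumptions, and a support vector classifier stands in for both the ANN and SVM used in the study.

        # Illustrative sketch only; not the authors' data or parameters.
        import numpy as np
        from scipy.signal import butter, filtfilt
        from sklearn.svm import SVC

        fs = 250.0                                        # assumed sampling rate, Hz
        b, a = butter(4, 25.0 / (fs / 2), btype="low")    # 4th-order low-pass at 25 Hz

        rng = np.random.default_rng(2)
        def trial(cls):
            t = np.arange(0, 2, 1 / fs)
            rhythm = (1.0 if cls else 0.3) * np.sin(2 * np.pi * 10 * t)   # class-dependent 10 Hz rhythm
            return filtfilt(b, a, rhythm + rng.normal(scale=1.0, size=t.size))

        X = np.array([trial(cls) for cls in (0, 1) for _ in range(40)])
        y = np.repeat([0, 1], 40)
        print("training accuracy:", SVC().fit(X, y).score(X, y))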

  9. Which strategy for a protein crystallization project?

    NASA Technical Reports Server (NTRS)

    Kundrot, C. E.

    2004-01-01

    The three-dimensional, atomic-resolution protein structures produced by X-ray crystallography over the past 50+ years have led to tremendous chemical understanding of fundamental biochemical processes. The pace of discovery in protein crystallography has increased greatly with advances in molecular biology, crystallization techniques, cryocrystallography, area detectors, synchrotrons and computing. While the methods used to produce single, well-ordered crystals have also evolved over the years in response to increased understanding and advancing technology, crystallization strategies continue to be rooted in trial-and-error approaches. This review summarizes the current approaches in protein crystallization and surveys the first results to emerge from the structural genomics efforts.

  10. Which Strategy for a Protein Crystallization Project?

    NASA Technical Reports Server (NTRS)

    Kundrot, Craig E.

    2003-01-01

    The three-dimensional, atomic-resolution protein structures produced by X-ray crystallography over the past 50+ years have led to tremendous chemical understanding of fundamental biochemical processes. The pace of discovery in protein crystallography has increased greatly with advances in molecular biology, crystallization techniques, cryo-crystallography, area detectors, synchrotrons and computing. While the methods used to produce single, well-ordered crystals have also evolved over the years in response to increased understanding and advancing technology, crystallization strategies continue to be rooted in trial-and-error approaches. This review summarizes the current approaches in protein crystallization and surveys the first results to emerge from the structural genomics efforts.

  11. GDF v2.0, an enhanced version of GDF

    NASA Astrophysics Data System (ADS)

    Tsoulos, Ioannis G.; Gavrilis, Dimitris; Dermatas, Evangelos

    2007-12-01

    An improved version of the function estimation program GDF is presented. The main enhancements of the new version include: multi-output function estimation, capability of defining custom functions in the grammar and selection of the error function. The new version has been evaluated on a series of classification and regression datasets, that are widely used for the evaluation of such methods. It is compared to two known neural networks and outperforms them in 5 (out of 10) datasets. Program summaryTitle of program: GDF v2.0 Catalogue identifier: ADXC_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADXC_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 98 147 No. of bytes in distributed program, including test data, etc.: 2 040 684 Distribution format: tar.gz Programming language: GNU C++ Computer: The program is designed to be portable in all systems running the GNU C++ compiler Operating system: Linux, Solaris, FreeBSD RAM: 200000 bytes Classification: 4.9 Does the new version supersede the previous version?: Yes Nature of problem: The technique of function estimation tries to discover from a series of input data a functional form that best describes them. This can be performed with the use of parametric models, whose parameters can adapt according to the input data. Solution method: Functional forms are being created by genetic programming which are approximations for the symbolic regression problem. Reasons for new version: The GDF package was extended in order to be more flexible and user customizable than the old package. The user can extend the package by defining his own error functions and he can extend the grammar of the package by adding new functions to the function repertoire. Also, the new version can perform function estimation of multi-output functions and it can be used for classification problems. Summary of revisions: The following features have been added to the package GDF: Multi-output function approximation. The package can now approximate any function f:R→R. This feature gives also to the package the capability of performing classification and not only regression. User defined function can be added to the repertoire of the grammar, extending the regression capabilities of the package. This feature is limited to 3 functions, but easily this number can be increased. Capability of selecting the error function. The package offers now to the user apart from the mean square error other error functions such as: mean absolute square error, maximum square error. Also, user defined error functions can be added to the set of error functions. More verbose output. The main program displays more information to the user as well as the default values for the parameters. Also, the package gives to the user the capability to define an output file, where the output of the gdf program for the testing set will be stored after the termination of the process. Additional comments: A technical report describing the revisions, experiments and test runs is packaged with the source code. Running time: Depending on the train data.

  12. Ground truth management system to support multispectral scanner /MSS/ digital analysis

    NASA Technical Reports Server (NTRS)

    Coiner, J. C.; Ungar, S. G.

    1977-01-01

    A computerized geographic information system for management of ground truth has been designed and implemented to relate MSS classification results to in situ observations. The ground truth system transforms, generalizes and rectifies ground observations to conform to the pixel size and shape of high resolution MSS aircraft data. These observations can then be aggregated for comparison to lower resolution sensor data. Construction of a digital ground truth array allows direct pixel by pixel comparison between classification results of MSS data and ground truth. By making comparisons, analysts can identify spatial distribution of error within the MSS data as well as usual figures of merit for the classifications. Use of the ground truth system permits investigators to compare a variety of environmental or anthropogenic data, such as soil color or tillage patterns, with classification results and allows direct inclusion of such data into classification operations. To illustrate the system, examples from classification of simulated Thematic Mapper data for agricultural test sites in North Dakota and Kansas are provided.

  13. [Role of cyclic adenosine monophosphate response element binding protein in ventricular pacing induced cardiac electrical remodeling in a canine model].

    PubMed

    Chen, Xuesi; Chen, Xingxing; Cheng, Junhua; Hong, Jun; Zheng, Cheng; Zhao, Jinglin; Li, Jin; Lin, Jiafeng

    2015-04-01

    This project was designed to explore the potential role of cyclic adenosine monophosphate (cAMP) response element binding protein (CREB) in cardiac electrical remodeling induced by pacing at different ventricular positions in dogs. An animal model was established by implanting pacemakers in beagles. According to the different pacing positions, the animals were divided into 4 groups: conditional control group (n=6), left ventricle pacing group (n=6), right ventricle pacing group (n=6) and bi-ventricle pacing group (n=6). Cardiac and electrical remodeling were assessed by echocardiography, electrocardiogram and plasma BNP. Myocardial pathology and protein expression of extracellular regulated protein kinases 1/2 (ERK1/2), P38 mitogen-activated protein kinase (P38 MAPK) and CREB were examined at 4 weeks post pacing. Cardiac structure and plasma BNP level were similar among the 4 groups (all P>0.05). The electrocardiogram-derived Tp-Te interval was significantly prolonged post pacing (92±11, 91±10, and 79±13 ms vs. 60±12 ms), and the Tp-Te interval in the bi-ventricle pacing group was shorter than in the left or right ventricle pacing groups (P < 0.05). Western blot results showed that the expression of p-ERK1/2 in the left ventricular myocardium of the left ventricle pacing group, the right ventricular myocardium of the right ventricle pacing group and the bi-ventricular myocardium of the bi-ventricle pacing group was 2.7±0.4, 2.4±0.2, 1.7±0.1 and 1.9±0.2, respectively, the expression of p-P38 MAPK was 1.9±0.3, 1.7±0.2, 0.8±0.1 and 1.1±0.1, respectively, and the expression of p-CREB was 2.1±0.2, 2.0±0.2, 2.7±0.4 and 2.6±0.3, respectively. The p-ERK1/2 and p-P38 MAPK expression of the bi-ventricle pacing group was lower, but the p-CREB expression was higher compared to the other pacing groups (P < 0.05). Ventricular pacing could induce electrical remodeling, as evidenced by a prolonged Tp-Te interval, increased phosphorylation of ERK1/2 and p38 MAPK, and reduced phosphorylation of CREB. Compared with single ventricle pacing, bi-ventricle pacing could attenuate electrical remodeling in this model.

  14. Global terrain classification using Multiple-Error-Removed Improved-Terrain (MERIT) to address susceptibility of landslides and other geohazards

    NASA Astrophysics Data System (ADS)

    Iwahashi, J.; Yamazaki, D.; Matsuoka, M.; Thamarux, P.; Herrick, J.; Yong, A.; Mital, U.

    2017-12-01

    A seamless model of landform classifications with regional accuracy will be a powerful platform for geophysical studies that forecast geologic hazards. Spatial variability as a function of landform on a global scale was captured in the automated classifications of Iwahashi and Pike (2007) and additional developments are presented here that incorporate more accurate depictions using higher-resolution elevation data than the original 1-km scale Shuttle Radar Topography Mission digital elevation model (DEM). We create polygon-based terrain classifications globally by using the 280-m DEM interpolated from the Multi-Error-Removed Improved-Terrain DEM (MERIT; Yamazaki et al., 2017). The multi-scale pixel-image analysis method, known as Multi-resolution Segmentation (Baatz and Schäpe, 2000), is first used to classify the terrains based on geometric signatures (slope and local convexity) calculated from the 280-m DEM. Next, we apply the machine learning method of "k-means clustering" to prepare the polygon-based classification at the globe-scale using slope, local convexity and surface texture. We then group the divisions with similar properties by hierarchical clustering and other statistical analyses using geological and geomorphological data of the area where landslides and earthquakes are frequent (e.g. Japan and California). We find the 280-m DEM resolution is only partially sufficient for classifying plains. We nevertheless observe that the categories correspond to reported landslide and liquefaction features at the global scale, suggesting that our model is an appropriate platform to forecast ground failure. To predict seismic amplification, we estimate site conditions using the time-averaged shear-wave velocity in the upper 30-m (VS30) measurements compiled by Yong et al. (2016) and the terrain model developed by Yong (2016; Y16). We plan to test our method on finer resolution DEMs and report our findings to obtain a more globally consistent terrain model as there are known errors in DEM derivatives at higher-resolutions. We expect the improvement in DEM resolution (4 times greater detail) and the combination of regional and global coverage will yield a consistent dataset of polygons that have the potential to improve relations to the Y16 estimates significantly.
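
    The clustering step lends itself to a short sketch: k-means applied to per-unit terrain attributes. The attribute values below are random placeholders for the slope, local convexity and surface texture that would be derived from the 280-m DEM, and the number of clusters is illustrative.

        # Hedged sketch of k-means terrain classification on placeholder attributes.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(3)
        features = np.column_stack([
            rng.gamma(2.0, 5.0, 1000),      # slope (degrees), placeholder values
            rng.normal(0.0, 1.0, 1000),     # local convexity, placeholder values
            rng.gamma(1.5, 2.0, 1000),      # surface texture, placeholder values
        ])

        labels = KMeans(n_clusters=12, n_init=10, random_state=0).fit_predict(
            StandardScaler().fit_transform(features))
        print(np.bincount(labels))          # number of units assigned to each terrain class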

  15. Disregarding population specificity: its influence on the sex assessment methods from the tibia.

    PubMed

    Kotěrová, Anežka; Velemínská, Jana; Dupej, Ján; Brzobohatá, Hana; Pilný, Aleš; Brůžek, Jaroslav

    2017-01-01

    Forensic anthropology has developed classification techniques for sex estimation of unknown skeletal remains, for example population-specific discriminant function analyses. These methods were designed for populations that lived mostly in the late nineteenth and twentieth centuries. Their level of reliability or misclassification is important for practical use in today's forensic practice; it is, however, unknown. We addressed the question of what the likelihood of errors would be if population specificity of discriminant functions of the tibia were disregarded. Moreover, five classification functions in a Czech sample were proposed (accuracies 82.1-87.5 %, sex bias ranged from -1.3 to -5.4 %). We measured ten variables traditionally used for sex assessment of the tibia on a sample of 30 male and 26 female models from a recent Czech population. To estimate the classification accuracy and error (misclassification) rates ignoring population specificity, we selected published classification functions of the tibia for Portuguese, south European, and North American populations. These functions were applied to the dimensions of the Czech population. Comparing the classification success of the reference and the tested Czech sample showed that females from the Czech population were significantly overestimated and mostly misclassified as males. Overall accuracy of sex assessment significantly decreased (53.6-69.7 %) and sex bias ranged from -29.4 to 100 %, which is most probably caused by secular trend and the generally high variability of body size. Results indicate that the discriminant functions, developed for skeletal series representing geographically and chronologically diverse populations, are not applicable in current forensic investigations. Finally, implications and recommendations for future research are discussed.

  16. Decision Making for Borderline Cases in Pass/Fail Clinical Anatomy Courses: The Practical Value of the Standard Error of Measurement and Likelihood Ratio in a Diagnostic Test

    ERIC Educational Resources Information Center

    Severo, Milton; Silva-Pereira, Fernanda; Ferreira, Maria Amelia

    2013-01-01

    Several studies have shown that the standard error of measurement (SEM) can be used as an additional “safety net” to reduce the frequency of false-positive or false-negative student grading classifications. Practical examinations in clinical anatomy are often used as diagnostic tests to admit students to course final examinations. The aim of this…
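
    The SEM "safety net" reduces to simple arithmetic: SEM = SD × sqrt(1 - reliability), and scores within that distance of the cut-off are flagged as borderline. The numbers in the sketch below are illustrative assumptions, not values from the study.

        # Hedged sketch of an SEM-based borderline band around a pass/fail cut score.
        import math

        sd, reliability, cut = 8.0, 0.80, 60.0     # assumed score SD, test reliability, pass mark
        sem = sd * math.sqrt(1 - reliability)      # standard error of measurement (~3.6 here)
        low, high = cut - sem, cut + sem

        for score in (55, 58, 61, 66):
            if low <= score <= high:
                status = "borderline - needs further evidence"
            else:
                status = "pass" if score > high else "fail"
            print(score, status)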

  17. Altering Pace Control and Pace Regulation: Attentional Focus Effects during Running.

    PubMed

    Brick, Noel E; Campbell, Mark J; Metcalfe, Richard S; Mair, Jacqueline L; Macintyre, Tadhg E

    2016-05-01

    To date, there are no published studies directly comparing self-controlled (SC) and externally controlled (EC) pace endurance tasks. However, previous research suggests pace control may impact on cognitive strategy use and effort perceptions. The primary aim of this study was to investigate the effects of manipulating perception of pace control on attentional focus, physiological, and psychological outcomes during running. The secondary aim was to determine the reproducibility of self-paced running performance when regulated by effort perceptions. Twenty experienced endurance runners completed four 3-km time trials on a treadmill. Subjects completed two SC pace trials, one perceived exertion clamped (PE) trial, and one EC pace time trial. PE and EC were completed in a counterbalanced order. Pacing strategy for EC and perceived exertion instructions for PE replicated the subjects' fastest SC time trial. Subjects reported a greater focus on cognitive strategies such as relaxing and optimizing running action during EC than during SC. The mean HR was 2% lower during EC than that during SC despite an identical pacing strategy. Perceived exertion did not differ between the three conditions. However, increased internal sensory monitoring coincided with elevated effort perceptions in some subjects during EC and a 10% slower completion time for PE (13.0 ± 1.6 min) than that for SC (11.8 ± 1.2 min). Altering pace control and pace regulation impacted on attentional focus. External control over pacing may facilitate performance, particularly when runners engage attentional strategies conducive to improved running efficiency. However, regulating pace based on effort perceptions alone may result in excessive monitoring of bodily sensations and a slower running speed. Accordingly, attentional focus interventions may prove beneficial for some athletes to adopt task-appropriate attentional strategies to optimize performance.

  18. Accelerated graft dysfunction in heart transplant patients with persistent atrioventricular conduction block.

    PubMed

    Lee, William; Tay, Andre; Walker, Bruce D; Kuchar, Dennis L; Hayward, Christopher S; Spratt, Phillip; Subbiah, Rajesh N

    2016-12-01

    Bradyarrhythmia following heart transplantation is common; approximately 7.5-24% of patients require permanent pacemaker (PPM) implantation. While overall mortality is similar to their non-paced counterparts, the effects of chronic right ventricular pacing (CRVP) in heart transplant patients have not been studied. We aimed to examine the effects of CRVP on heart failure and mortality in heart transplant patients. Records of heart transplant recipients requiring PPM at St Vincent's Hospital, Sydney, Australia between January 1990 and January 2015 were examined. Patients without a right ventricular (RV) pacing lead or a follow-up time of <1 year were excluded. Patients with pre-existing abnormal left ventricular function (<50%) were analysed separately. Patients were grouped by pacing dependence (100% pacing dependent vs. non-pacing dependent). The primary endpoint was clinical or echocardiographic heart failure (<35%) in the first 5 years post-PPM. Thirty-three of 709 heart transplant recipients were studied. Two patients had complete RV pacing dependence, and the remaining 31 patients had varying degrees of pacing requirement, with an underlying ventricular escape rhythm. The primary endpoint occurred significantly more in the pacing-dependent group; 2 (100%) compared with 2 (6%) of the non-pacing-dependent group (P < 0.0001 by log-rank analysis, HR = 24.58). Non-pacing-dependent patients had reversible causes for heart failure, unrelated to pacing. In comparison, there was no other cause of heart failure in the pacing-dependent group. Permanent atrioventricular block is rare in the heart transplant population. We have demonstrated CRVP as a potential cause of accelerated graft failure in pacing-dependent heart transplant patients. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2016. For permissions please email: journals.permissions@oup.com.

  19. Prospective randomized study to assess the efficacy of site and rate of atrial pacing on long-term progression of atrial fibrillation in sick sinus syndrome: Septal Pacing for Atrial Fibrillation Suppression Evaluation (SAFE) Study.

    PubMed

    Lau, Chu-Pak; Tachapong, Ngarmukos; Wang, Chun-Chieh; Wang, Jing-Feng; Abe, Haruhiko; Kong, Chi-Woon; Liew, Reginald; Shin, Dong-Gu; Padeletti, Luigi; Kim, You-Ho; Omar, Razali; Jirarojanakorn, Kreingkrai; Kim, Yoon-Nyun; Chen, Mien-Cheng; Sriratanasathavorn, Charn; Munawar, Muhammad; Kam, Ruth; Chen, Jan-Yow; Cho, Yong-Keun; Li, Yi-Gang; Wu, Shu-Lin; Bailleul, Christophe; Tse, Hung-Fat

    2013-08-13

    Atrial-based pacing is associated with lower risk of atrial fibrillation (AF) in sick sinus syndrome compared with ventricular pacing; nevertheless, the impact of site and rate of atrial pacing on progression of AF remains unclear. We evaluated whether long-term atrial pacing at the right atrial (RA) appendage versus the low RA septum with (ON) or without (OFF) a continuous atrial overdrive pacing algorithm can prevent the development of persistent AF. We randomized 385 patients with paroxysmal AF and sick sinus syndrome in whom a pacemaker was indicated to pacing at RA appendage ON (n=98), RA appendage OFF (n=99), RA septum ON (n=92), or RA septum OFF (n=96). The primary outcome was the occurrence of persistent AF (AF documented at least 7 days apart or need for cardioversion). Demographic data were homogeneous across both pacing site (RA appendage/RA septum) and atrial overdrive pacing (ON/OFF). After a mean follow-up of 3.1 years, persistent AF occurred in 99 patients (25.8%; annual rate of persistent AF, 8.3%). Alternative site pacing at the RA septum versus conventional RA appendage (hazard ratio=1.18; 95% confidence interval, 0.79-1.75; P=0.65) or continuous atrial overdrive pacing ON versus OFF (hazard ratio=1.17; 95% confidence interval, 0.79-1.74; P=0.69) did not prevent the development of persistent AF. In patients with paroxysmal AF and sick sinus syndrome requiring pacemaker implantation, an alternative atrial pacing site at the RA septum or continuous atrial overdrive pacing did not prevent the development of persistent AF. URL: http://www.clinicaltrials.gov. UNIQUE IDENTIFIER: NCT00419640.

  20. Constant DI pacing suppresses cardiac alternans formation in numerical cable models

    NASA Astrophysics Data System (ADS)

    Zlochiver, S.; Johnson, C.; Tolkacheva, E. G.

    2017-09-01

    Cardiac repolarization alternans describe the sequential alternation of the action potential duration (APD) and can develop during rapid pacing. In the ventricles, such alternans may rapidly turn into life risking arrhythmias under conditions of spatial heterogeneity. Thus, suppression of alternans by artificial pacing protocols, or alternans control, has been the subject of numerous theoretical, numerical, and experimental studies. Yet, previous attempts that were inspired by chaos control theories were successful only for a short spatial extent (<2 cm) from the pacing electrode. Previously, we demonstrated in a single cell model that pacing with a constant diastolic interval (DI) can suppress the formation of alternans at high rates of activation. We attributed this effect to the elimination of feedback between the pacing cycle length and the last APD, effectively preventing restitution-dependent alternans from developing. Here, we extend this idea into cable models to study the extent by which constant DI pacing can control alternans during wave propagation conditions. Constant DI pacing was applied to ventricular cable models of up to 5 cm, using human kinetics. Our results show that constant DI pacing significantly shifts the onset of both cardiac alternans and conduction blocks to higher pacing rates in comparison to pacing with constant cycle length. We also demonstrate that constant DI pacing reduces the propensity of spatially discordant alternans, a precursor of wavebreaks. We finally found that the protective effect of constant DI pacing is stronger for increased electrotonic coupling along the fiber in the sense that the onset of alternans is further shifted to higher activation rates. Overall, these results support the potential clinical applicability of such type of pacing in improving protocols of implanted pacemakers, in order to reduce the risk of life-threatening arrhythmias. Future research should be conducted in order to experimentally validate these promising results.
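
    The single-cell intuition behind constant-DI pacing can be reproduced with a one-line restitution map, as in the sketch below: under constant cycle-length pacing the diastolic interval is fed back from the last APD and can alternate, whereas under constant-DI pacing that feedback is removed. The restitution curve and pacing values are illustrative and are not the human kinetics used in the paper's cable models.

        # Hedged sketch: APD restitution map under constant-CL vs constant-DI pacing.
        import numpy as np

        def apd_restitution(di):
            """APD (ms) as a function of the preceding diastolic interval DI (ms); illustrative curve."""
            return 200.0 - 150.0 * np.exp(-di / 40.0)

        def pace_constant_cl(cl, n=60, apd0=180.0):
            apd, out = apd0, []
            for _ in range(n):
                di = max(cl - apd, 1.0)          # feedback: DI depends on the last APD
                apd = apd_restitution(di)
                out.append(apd)
            return out

        def pace_constant_di(di, n=60):
            return [apd_restitution(di)] * n     # no feedback: DI is held fixed

        print("constant CL (200 ms), last 4 APDs:",
              [round(a, 1) for a in pace_constant_cl(cl=200.0)[-4:]])   # alternates
        print("constant DI (50 ms), last 4 APDs:",
              [round(a, 1) for a in pace_constant_di(di=50.0)[-4:]])    # steady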

  1. The Molecular Classification of Medulloblastoma: Driving the next generation clinical trials

    PubMed Central

    Leary, Sarah E. S.; Olson, James M.

    2012-01-01

    Purpose of Review Most children diagnosed with cancer today are expected to be cured. Medulloblastoma, the most common pediatric malignant brain tumor, is an example of a disease that has benefitted from advances in diagnostic imaging, surgical techniques, radiation therapy and combination chemotherapy over the past decades. An incurable disease 50 years ago, approximately 70% of children with medulloblastoma are now cured of their disease. However, the pace of increasing the cure rate has slowed over the past two decades, and we have likely reached the maximal benefit that can be achieved with cytotoxic therapy and clinical risk stratification. Long-term toxicity of therapy also remains significant. To increase cure rates and decrease long-term toxicity, there is great interest in incorporating biologic “targeted” therapy into treatment of medulloblastoma, but this will require a paradigm shift in how we classify and study disease. Recent Findings Using genome-based high-throughput analytic techniques, several groups have independently reported methods of molecular classification of medulloblastoma within the past year. This has resulted in a working consensus to view medulloblastoma as four molecular subtypes including WNT pathway subtype, SHH pathway subtype, and two less well-defined subtypes, Group C and Group D. Summary Novel classification and risk stratification based on biologic subtypes of disease will form the basis of further study in medulloblastoma, and identify specific subtypes which warrant greater research focus. PMID:22189395

  2. Use of scan overlap redundancy to enhance multispectral aircraft scanner data

    NASA Technical Reports Server (NTRS)

    Lindenlaub, J. C.; Keat, J.

    1973-01-01

    Two criteria were suggested for optimizing the resolution error versus signal-to-noise-ratio tradeoff. The first criterion uses equal weighting coefficients and chooses n, the number of lines averaged, so as to make the average resolution error equal to the noise error. The second criterion adjusts both the number and relative sizes of the weighting coefficients so as to minimize the total error (resolution error plus noise error). The optimum set of coefficients depends upon the geometry of the resolution element, the number of redundant scan lines, the scan line increment, and the original signal-to-noise ratio of the channel. Programs were developed to find the optimum number and relative weights of the averaging coefficients. A working definition of signal-to-noise ratio was given and used to try line averaging on a typical set of data. Line averaging was evaluated only with respect to its effect on classification accuracy.

  3. 42 CFR 460.90 - PACE benefits under Medicare and Medicaid.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false PACE benefits under Medicare and Medicaid. 460.90 Section 460.90 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... FOR THE ELDERLY (PACE) PACE Services § 460.90 PACE benefits under Medicare and Medicaid. If a Medicare...

  4. Encouraging spontaneous synchronisation with D-Jogger, an adaptive music player that aligns movement and music.

    PubMed

    Moens, Bart; Muller, Chris; van Noorden, Leon; Franěk, Marek; Celie, Bert; Boone, Jan; Bourgois, Jan; Leman, Marc

    2014-01-01

    In this study we explore how music can entrain human walkers to synchronise to the musical beat without being instructed to do so. For this, we use an interactive music player, called D-Jogger, that senses the user's walking tempo and phase. D-Jogger aligns the music by manipulating the timing difference between beats and footfalls. Experiments are reported that led to the development and optimisation of four alignment strategies. The first strategy matched the music's tempo continuously to the runner's pace. The second strategy matched the music's tempo at the beginning of a song to the runner's pace, keeping the tempo constant for the remainder of the song. The third alignment starts a song in perfect phase synchrony and continues to adjust the tempo to match the runner's pace. The fourth and last strategy additionally adjusts the phase of the music so each beat matches a footfall. The first two strategies resulted in a minor increase of steps in phase synchrony with the main beat when compared to a random playlist, the last two strategies resulted in a strong increase in synchronised steps. These results may be explained in terms of phase-error correction mechanisms and motor prediction schemes. Finding the phase-lock is difficult due to fluctuations in the interaction, whereas strategies that automatically align the phase between movement and music solve the problem of finding the phase-locking. Moreover, the data show that once the phase-lock is found, alignment can be easily maintained, suggesting that less entrainment effort is needed to keep the phase-lock, than to find the phase-lock. The different alignment strategies of D-Jogger can be applied in different domains such as sports, physical rehabilitation and assistive technologies for movement performance.
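
    A toy sketch of the alignment idea follows; it is not D-Jogger's actual algorithm, and the gains and tempi are invented. Each step, the player nudges its tempo toward the runner's cadence (in the spirit of strategies 1-3) and pulls the next beat toward the nearest footfall using the wrapped phase error (strategy 4).

        # Hedged sketch: proportional tempo tracking plus phase correction (invented parameters).
        step_period = 0.55          # runner's inter-step interval, s (assumed steady)
        music_period = 0.60         # initial inter-beat interval of the song, s
        alpha, beta = 0.2, 0.5      # tempo-tracking and phase-correction gains (illustrative)

        beat_time = foot_time = 0.0
        for _ in range(40):
            foot_time += step_period
            # phase error between the last scheduled beat and the nearest footfall, wrapped to +/- half a step
            phase_error = (beat_time - foot_time + step_period / 2) % step_period - step_period / 2
            music_period += alpha * (step_period - music_period)   # match tempo to cadence
            beat_time += music_period - beta * phase_error         # schedule next beat, pulled onto the footfall

        print(f"final inter-beat interval: {music_period:.3f} s, last phase error: {abs(phase_error) * 1000:.1f} ms")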

  5. Developing large-scale forcing data for single-column and cloud-resolving models from the Mixed-Phase Arctic Cloud Experiment

    DOE PAGES

    Xie, Shaocheng; Klein, Stephen A.; Zhang, Minghua; ...

    2006-10-05

    This study represents an effort to develop Single-Column Model (SCM) and Cloud-Resolving Model large-scale forcing data from a sounding array in the high latitudes. An objective variational analysis approach is used to process data collected from the Atmospheric Radiation Measurement Program (ARM) Mixed-Phase Arctic Cloud Experiment (M-PACE), which was conducted over the North Slope of Alaska in October 2004. In this method the observed surface and top of atmosphere measurements are used as constraints to adjust the sounding data from M-PACE in order to conserve column-integrated mass, heat, moisture, and momentum. Several important technical and scientific issues related to the data analysis are discussed. It is shown that the analyzed data reasonably describe the dynamic and thermodynamic features of the Arctic cloud systems observed during M-PACE. Uncertainties in the analyzed forcing fields are roughly estimated by examining the sensitivity of those fields to uncertainties in the upper-air data and surface constraints that are used in the analysis. Impacts of the uncertainties in the analyzed forcing data on SCM simulations are discussed. Results from the SCM tests indicate that the bulk features of the observed Arctic cloud systems can be captured qualitatively well using the forcing data derived in this study, and major model errors can be detected despite the uncertainties that exist in the forcing data as illustrated by the sensitivity tests. Lastly, the possibility of using the European Center for Medium-Range Weather Forecasts analysis data to derive the large-scale forcing over the Arctic region is explored.

  6. Encouraging Spontaneous Synchronisation with D-Jogger, an Adaptive Music Player That Aligns Movement and Music

    PubMed Central

    Moens, Bart; Muller, Chris; van Noorden, Leon; Franěk, Marek; Celie, Bert; Boone, Jan; Bourgois, Jan; Leman, Marc

    2014-01-01

    In this study we explore how music can entrain human walkers to synchronise to the musical beat without being instructed to do so. For this, we use an interactive music player, called D-Jogger, that senses the user's walking tempo and phase. D-Jogger aligns the music by manipulating the timing difference between beats and footfalls. Experiments are reported that led to the development and optimisation of four alignment strategies. The first strategy matched the music's tempo continuously to the runner's pace. The second strategy matched the music's tempo at the beginning of a song to the runner's pace, keeping the tempo constant for the remainder of the song. The third alignment starts a song in perfect phase synchrony and continues to adjust the tempo to match the runner's pace. The fourth and last strategy additionally adjusts the phase of the music so each beat matches a footfall. The first two strategies resulted in a minor increase of steps in phase synchrony with the main beat when compared to a random playlist, the last two strategies resulted in a strong increase in synchronised steps. These results may be explained in terms of phase-error correction mechanisms and motor prediction schemes. Finding the phase-lock is difficult due to fluctuations in the interaction, whereas strategies that automatically align the phase between movement and music solve the problem of finding the phase-locking. Moreover, the data show that once the phase-lock is found, alignment can be easily maintained, suggesting that less entrainment effort is needed to keep the phase-lock, than to find the phase-lock. The different alignment strategies of D-Jogger can be applied in different domains such as sports, physical rehabilitation and assistive technologies for movement performance. PMID:25489742

  7. New approach for T-wave peak detection and T-wave end location in 12-lead paced ECG signals based on a mathematical model.

    PubMed

    Madeiro, João P V; Nicolson, William B; Cortez, Paulo C; Marques, João A L; Vázquez-Seisdedos, Carlos R; Elangovan, Narmadha; Ng, G Andre; Schlindwein, Fernando S

    2013-08-01

    This paper presents an innovative approach for T-wave peak detection and subsequent T-wave end location in 12-lead paced ECG signals based on a mathematical model of a skewed Gaussian function. Following the stage of QRS segmentation, we establish search windows using a number of the earliest intervals between each QRS offset and subsequent QRS onset. Then, we compute a template based on a Gaussian-function, modified by a mathematical procedure to insert asymmetry, which models the T-wave. Cross-correlation and an approach based on the computation of Trapezium's area are used to locate, respectively, the peak and end point of each T-wave throughout the whole raw ECG signal. For evaluating purposes, we used a database of high resolution 12-lead paced ECG signals, recorded from patients with ischaemic cardiomyopathy (ICM) in the University Hospitals of Leicester NHS Trust, UK, and the well-known QT database. The average T-wave detection rates, sensitivity and positive predictivity, were both equal to 99.12%, for the first database, and, respectively, equal to 99.32% and 99.47%, for QT database. The average time errors computed for T-wave peak and T-wave end locations were, respectively, -0.38±7.12 ms and -3.70±15.46 ms, for the first database, and 1.40±8.99 ms and 2.83±15.27 ms, for QT database. The results demonstrate the accuracy, consistency and robustness of the proposed method for a wide variety of T-wave morphologies studied. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.
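
    A minimal sketch of the template-matching step appears below. The asymmetry term and the synthetic windowed "ECG" are simplified stand-ins for the paper's skewed-Gaussian model, and the trapezium-based end-point location is omitted.

        # Hedged sketch: locate a T-wave peak by cross-correlating with an asymmetric Gaussian template.
        import numpy as np

        def skewed_gaussian(t, center, width, skew):
            """Gaussian bump with a simple multiplicative asymmetry term (illustrative skewing)."""
            g = np.exp(-((t - center) ** 2) / (2 * width ** 2))
            return g * (1 + skew * (t - center) / width)

        fs = 500.0                                   # assumed sampling rate, Hz
        t = np.arange(0, 0.4, 1 / fs)                # 400-ms post-QRS search window
        template = skewed_gaussian(t, center=0.15, width=0.04, skew=0.5)

        # Synthetic windowed signal: one T-wave-like deflection (centred near 220 ms) plus noise
        signal = skewed_gaussian(t, center=0.22, width=0.045, skew=0.5)
        signal += np.random.default_rng(4).normal(scale=0.05, size=t.size)

        xcorr = np.correlate(signal, template, mode="full")
        lag = np.argmax(xcorr) - (len(template) - 1)          # best shift of the template over the signal
        t_peak = (lag + np.argmax(template)) / fs
        print(f"estimated T-wave peak at {t_peak * 1000:.0f} ms into the window")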

  8. Reducing error and improving efficiency during vascular interventional radiology: implementation of a preprocedural team rehearsal.

    PubMed

    Morbi, Abigail H M; Hamady, Mohamad S; Riga, Celia V; Kashef, Elika; Pearch, Ben J; Vincent, Charles; Moorthy, Krishna; Vats, Amit; Cheshire, Nicholas J W; Bicknell, Colin D

    2012-08-01

    To determine the type and frequency of errors during vascular interventional radiology (VIR) and design and implement an intervention to reduce error and improve efficiency in this setting. Ethical guidance was sought from the Research Services Department at Imperial College London. Informed consent was not obtained. Field notes were recorded during 55 VIR procedures by a single observer. Two blinded assessors identified failures from field notes and categorized them into one or more errors by using a 22-part classification system. The potential to cause harm, disruption to procedural flow, and preventability of each failure was determined. A preprocedural team rehearsal (PPTR) was then designed and implemented to target frequent preventable potential failures. Thirty-three procedures were observed subsequently to determine the efficacy of the PPTR. Nonparametric statistical analysis was used to determine the effect of intervention on potential failure rates, potential to cause harm and procedural flow disruption scores (Mann-Whitney U test), and number of preventable failures (Fisher exact test). Before intervention, 1197 potential failures were recorded, of which 54.6% were preventable. A total of 2040 errors were deemed to have occurred to produce these failures. Planning error (19.7%), staff absence (16.2%), equipment unavailability (12.2%), communication error (11.2%), and lack of safety consciousness (6.1%) were the most frequent errors, accounting for 65.4% of the total. After intervention, 352 potential failures were recorded. Classification resulted in 477 errors. Preventable failures decreased from 54.6% to 27.3% (P < .001) with implementation of PPTR. Potential failure rates per hour decreased from 18.8 to 9.2 (P < .001), with no increase in potential to cause harm or procedural flow disruption per failure. Failures during VIR procedures are largely because of ineffective planning, communication error, and equipment difficulties, rather than a result of technical or patient-related issues. Many of these potential failures are preventable. A PPTR is an effective means of targeting frequent preventable failures, reducing procedural delays and improving patient safety.

  9. Bug Distribution and Statistical Pattern Classification.

    ERIC Educational Resources Information Center

    Tatsuoka, Kikumi K.; Tatsuoka, Maurice M.

    1987-01-01

    The rule space model permits measurement of cognitive skill acquisition and error diagnosis. Further discussion introduces Bayesian hypothesis testing and bug distribution. An illustration involves an artificial intelligence approach to testing fractions and arithmetic. (Author/GDC)

  10. Recommendations for reducing ambiguity in written procedures.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matzen, Laura E.

    Previous studies in the nuclear weapons complex have shown that ambiguous work instructions (WIs) and operating procedures (OPs) can lead to human error, which is a major cause for concern. This report outlines some of the sources of ambiguity in written English and describes three recommendations for reducing ambiguity in WIs and OPs. The recommendations are based on commonly used research techniques in the fields of linguistics and cognitive psychology. The first recommendation is to gather empirical data that can be used to improve the recommended word lists that are provided to technical writers. The second recommendation is to have a review in which new WIs and OPs are checked for ambiguities and clarity. The third recommendation is to use self-paced reading time studies to identify any remaining ambiguities before the new WIs and OPs are put into use. If these three steps are followed for new WIs and OPs, the likelihood of human errors related to ambiguity could be greatly reduced.

  11. Can You Multitask? Evidence and Limitations of Task Switching and Multitasking in Emergency Medicine.

    PubMed

    Skaugset, L Melissa; Farrell, Susan; Carney, Michele; Wolff, Margaret; Santen, Sally A; Perry, Marcia; Cico, Stephen John

    2016-08-01

    Emergency physicians work in a fast-paced environment that is characterized by frequent interruptions and the expectation that they will perform multiple tasks efficiently and without error while maintaining oversight of the entire emergency department. However, there is a lack of definition and understanding of the behaviors that constitute effective task switching and multitasking, as well as how to improve these skills. This article reviews the literature on task switching and multitasking in a variety of disciplines-including cognitive science, human factors engineering, business, and medicine-to define and describe the successful performance of task switching and multitasking in emergency medicine. Multitasking, defined as the performance of two tasks simultaneously, is not possible except when behaviors become completely automatic; instead, physicians rapidly switch between small tasks. This task switching causes disruption in the primary task and may contribute to error. A framework is described to enhance the understanding and practice of these behaviors. Copyright © 2015 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.

  12. Dynamic engagement of cognitive control modulates recovery from misinterpretation during real-time language processing

    PubMed Central

    Hsu, Nina S.; Novick, Jared M.

    2016-01-01

    Speech unfolds swiftly, yet listeners keep pace by rapidly assigning meaning to what they hear. Sometimes though, initial interpretations turn out wrong. How do listeners revise misinterpretations of language input moment-by-moment, to avoid comprehension errors? Cognitive control may play a role by detecting when processing has gone awry, and then initiating behavioral adjustments accordingly. However, no research has investigated a cause-and-effect interplay between cognitive control engagement and overriding erroneous interpretations in real-time. Using a novel cross-task paradigm, we show that Stroop-conflict detection, which mobilizes cognitive control procedures, subsequently facilitates listeners’ incremental processing of temporarily ambiguous spoken instructions that induce brief misinterpretation. When instructions followed Stroop-incongruent versus Stroop-congruent items, listeners’ eye-movements to objects in a scene reflected more transient consideration of the false interpretation and earlier recovery of the correct one. Comprehension errors also decreased. Cognitive control engagement therefore accelerates sentence re-interpretation processes, even as linguistic input is still unfolding. PMID:26957521

  13. Safety of the Wearable Cardioverter Defibrillator (WCD) in Patients with Implanted Pacemakers.

    PubMed

    Schmitt, Joern; Abaci, Guezine; Johnson, Victoria; Erkapic, Damir; Gemein, Christopher; Chasan, Ritvan; Weipert, Kay; Hamm, Christian W; Klein, Helmut U

    2017-03-01

    The wearable cardioverter defibrillator (WCD) is an important approach for better risk stratification, applied to patients considered to be at high risk of sudden arrhythmic death. Patients with implanted pacemakers may also become candidates for use of the WCD. However, there is a potential risk that pacemaker signals may mislead the WCD detection algorithm and cause inappropriate WCD shock delivery. The aim of the study was to test the impact of different types of pacing, various right ventricular (RV) lead positions, and pacing modes for potential misleading of the WCD detection algorithm. Sixty patients with implanted pacemakers received the WCD for a short time and each pacing mode (AAI, VVI, and DDD) was tested for at least 30 seconds in unipolar and bipolar pacing configuration. In case of triggering the WCD detection algorithm and starting the sequence of arrhythmia alarms, shock delivery was prevented by pushing of the response buttons. In six of 60 patients (10%), continuous unipolar pacing in DDD mode triggered the WCD detection algorithm. In no patient, triggering occurred with bipolar DDD pacing, unipolar and bipolar AAI, and VVI pacing. Triggering was independent of pacing amplitude, RV pacing lead position, and pulse generator implantation site. Unipolar DDD pacing bears a high risk of false triggering of the WCD detection algorithm. Other types of unipolar pacing and all bipolar pacing modes do not seem to mislead the WCD detection algorithm. Therefore, patients with no reprogrammable unipolar DDD pacing should not become candidates for the WCD. © 2016 Wiley Periodicals, Inc.

  14. New progress in snake mitochondrial gene rearrangement.

    PubMed

    Chen, Nian; Zhao, Shujin

    2009-08-01

    To further understand the evolution of snake mitochondrial genomes, the complete mitochondrial DNA (mtDNA) sequences were determined for representative species from two snake families: the Many-banded krait, the Banded krait, the Chinese cobra, the King cobra, the Hundred-pace viper, the Short-tailed mamushi, and the Chain viper. Thirteen protein-coding genes, 22-23 tRNA genes, 2 rRNA genes, and 2 control regions were identified in these mtDNAs. Duplication of the control region and translocation of the tRNAPro gene were two notable features of the snake mtDNAs. These results from the gene rearrangement comparisons confirm the correctness of traditional classification schemes and validate the utility of comparing complete mtDNA sequences for snake phylogeny reconstruction.

  15. Anza palaeoichnological site. Late Cretaceous. Morocco. Part II. Problems of large dinosaur trackways and the first African Macropodosaurus trackway

    NASA Astrophysics Data System (ADS)

    Masrour, Moussa; Lkebir, Noura; Pérez-Lorente, Félix

    2017-10-01

    The Anza site shows large ichnological surfaces indicating the coexistence in the same area of different vertebrate footprints (dinosaur and pterosaur) and of different types (tridactyl and tetradactyl, semiplantigrade and rounded without digit marks) and the footprint variability of long trackways. This area may become a world reference in ichnology because it contains the second undebatable African site with Cretaceous pterosaur footprints - described in part I - and the first African site with Macropodosaurus footprints. In this work, problems related to long trackways are also analyzed, such as their sinuosity, the order-disorder of the variability (long-short) of the pace length and the difficulty of morphological classification of the theropod footprints due to their morphological variability.

  16. Variables affecting the manifestation of and intensity of pacing behavior: A preliminary case study in zoo-housed polar bears.

    PubMed

    Cless, Isabelle T; Lukas, Kristen E

    2017-09-01

    High-speed video analysis was used to quantify two aspects of gait in 10 zoo-housed polar bears. These two variables were then examined as to how they differed in the conditions of pacing versus locomoting for each bear. Percent difference calculations measured the difference between pacing and locomoting data for each bear. We inferred that the higher the percent difference between pacing and locomoting in a given subject, the more intense the pacing may be. The percent difference values were analyzed alongside caregiver survey data defining the locations, frequency, and anticipatory nature of pacing in each bear, as well as each bear's age and sex, to determine whether any variables were correlated. The frequency and intensity of pacing behavior were not correlated. However, location of pacing was significantly correlated both with the subjects' age and whether or not the subject was classified as an anticipatory pacer. Bears appeared to select specific spots within their exhibits to pace, and the location therefore seemed tied to underlying motivation for the behavior. Additionally, bears that were classified in the survey as pacing anticipatorily displayed significantly more intense pacing behavior as quantified by gait analysis. © 2017 Wiley Periodicals, Inc.

  17. 42 CFR 460.122 - PACE organization's appeals process.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... SERVICES (CONTINUED) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) Participant Rights § 460.122 PACE organization's appeals process. For purposes...

  18. 42 CFR 460.122 - PACE organization's appeals process.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... SERVICES (CONTINUED) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) Participant Rights § 460.122 PACE organization's appeals process. For purposes...

  19. 42 CFR 460.170 - Reinstatement in PACE.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... (CONTINUED) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) Participant Enrollment and Disenrollment § 460.170 Reinstatement in PACE. (a) A previously...

  20. 42 CFR 460.170 - Reinstatement in PACE.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... (CONTINUED) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) Participant Enrollment and Disenrollment § 460.170 Reinstatement in PACE. (a) A previously...

  1. Assessment of Metronidazole Susceptibility in Helicobacter pylori: Statistical Validation and Error Rate Analysis of Breakpoints Determined by the Disk Diffusion Test

    PubMed Central

    Chaves, Sandra; Gadanho, Mário; Tenreiro, Rogério; Cabrita, José

    1999-01-01

    Metronidazole susceptibility of 100 Helicobacter pylori strains was assessed by determining the inhibition zone diameters by disk diffusion test and the MICs by agar dilution and PDM Epsilometer test (E test). Linear regression analysis was performed, allowing the definition of significant linear relations, and revealed correlations of disk diffusion results with both E-test and agar dilution results (r2 = 0.88 and 0.81, respectively). No significant differences (P = 0.84) were found between MICs defined by E test and those defined by agar dilution, taken as a standard. Reproducibility comparison between E-test and disk diffusion tests showed that they are equivalent and with good precision. Two interpretative susceptibility schemes (with or without an intermediate class) were compared by an interpretative error rate analysis method. The susceptibility classification scheme that included the intermediate category was retained, and breakpoints were assessed for diffusion assay with 5-μg metronidazole disks. Strains with inhibition zone diameters less than 16 mm were defined as resistant (MIC > 8 μg/ml), those with zone diameters equal to or greater than 16 mm but less than 21 mm were considered intermediate (4 μg/ml < MIC ≤ 8 μg/ml), and those with zone diameters of 21 mm or greater were regarded as susceptible (MIC ≤ 4 μg/ml). Error rate analysis applied to this classification scheme showed occurrence frequencies of 1% for major errors and 7% for minor errors, when the results were compared to those obtained by agar dilution. No very major errors were detected, suggesting that disk diffusion might be a good alternative for determining the metronidazole sensitivity of H. pylori strains. PMID:10203543
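
    For readers who want to apply the reported breakpoints, the scheme reduces to a simple thresholding rule on the inhibition zone diameter. Below is a minimal sketch in Python; the 16 mm and 21 mm cut-offs and the MIC annotations come from the abstract, while the function name and example value are ours.

```python
def classify_metronidazole(zone_diameter_mm: float) -> str:
    """Map a 5-ug metronidazole disk diffusion zone diameter (mm) to a
    susceptibility category, using the breakpoints reported above."""
    if zone_diameter_mm < 16:
        return "resistant"        # MIC > 8 ug/ml
    elif zone_diameter_mm < 21:
        return "intermediate"     # 4 ug/ml < MIC <= 8 ug/ml
    else:
        return "susceptible"      # MIC <= 4 ug/ml

# Example: an 18 mm inhibition zone falls in the intermediate category.
print(classify_metronidazole(18))  # -> "intermediate"
```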

  2. Effects of pacing magnitudes and forms on bistability width in a modeled ventricular tissue

    NASA Astrophysics Data System (ADS)

    Huang, Xiaodong; Liu, Xuemei; Zheng, Lixian; Mi, Yuanyuan; Qian, Yu

    2013-07-01

    Bistability in periodically paced cardiac tissue is relevant to cardiac arrhythmias and its control. In the present paper, one-dimensional tissue of the phase I Luo-Rudy model is numerically investigated. The effects of various parameters of pacing signals on bistability width are studied. The following conclusions are obtained: (i) Pacing can be classified into two types: pulsatile and sinusoidal types. Pulsatile pacing reduces bistability width as its magnitude is increased. Sinusoidal pacing increases the width as its amplitude is increased. (ii) In a pacing period the hyperpolarizing part plays a more important role than the depolarizing part. Variations of the hyperpolarizing ratio in a period evidently change the width of bistability and its variation tendency. (iii) A dynamical mechanism is proposed to qualitatively explain the phenomena, which reveals the reason for the different effects of pulsatile and sinusoidal pacing on bistability. The methods for changing bistability width by external pacing may help control arrhythmias in cardiology.

  3. Predictors of temporary epicardial pacing wires use after valve surgery

    PubMed Central

    2014-01-01

    Background Although temporary cardiac pacing is infrequently needed, temporary epicardial pacing wires are routinely inserted after valve surgery. Because they are associated with infrequent but life-threatening complications, and because the need for postoperative pacing is decreased in a group of low-risk patients, this study aims to identify the predictors of temporary cardiac pacing after valve surgery. Methods A retrospective analysis of data collected prospectively on 400 consecutive valve surgery patients between May 2002 and December 2012 was performed. Patients were grouped according to avoidance or insertion of temporary pacing wires, and were further subdivided according to temporary cardiac pacing need. Multiple logistic regression was used to determine the predictors of temporary cardiac pacing. Results 170 (42.5%) patients did not have insertion of temporary pacing wires and none of them needed temporary pacing. 230 (57.5%) patients had insertion of temporary pacing wires; among these, only 55 (23.9%) required temporary pacing and were compared with the remaining 175 (76.1%) patients in the main analysis. The determinants of temporary cardiac pacing (adjusted odds ratios; 95% confidence interval) were as follows: increased age (1.1; 1.1, 1.3, p = 0.002), New York Heart Association class III-IV (5.6; 1.6, 20.2, p = 0.008), pulmonary artery pressure ≥ 50 mmHg (22.0; 3.4, 142.7, p = 0.01), digoxin use (8.0; 1.3, 48.8, p = 0.024), multiple valve surgery (13.5; 1.5, 124.0, p = 0.021), aorta cross clamp time ≥ 60 minutes (7.8; 1.6, 37.2, p = 0.010), and valve annulus calcification (7.9; 2.0, 31.7, p = 0.003). Conclusion Although limited by sample size, the present results suggest that routine use of temporary epicardial pacing wires after valve surgery is only necessary for high-risk patients. Preoperative identification and aggressive management of predictors of temporary cardiac pacing, and possible modulation of intraoperative techniques, can decrease the need for temporary cardiac pacing. Prospective randomized controlled studies on a larger number of patients are necessary to draw solid conclusions regarding the selective use of temporary epicardial pacing wires in valve surgery. PMID:24521215

  4. 42 CFR 460.30 - Program agreement requirement.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... (CONTINUED) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) PACE Program Agreement § 460.30 Program agreement requirement. (a) A PACE organization must...

  5. 42 CFR 460.30 - Program agreement requirement.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... (CONTINUED) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) PACE Program Agreement § 460.30 Program agreement requirement. (a) A PACE organization must...

  6. 42 CFR 460.50 - Termination of PACE program agreement.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... SERVICES (CONTINUED) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) Sanctions, Enforcement Actions, and Termination § 460.50 Termination of PACE...

  7. 42 CFR 460.50 - Termination of PACE program agreement.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... SERVICES (CONTINUED) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) Sanctions, Enforcement Actions, and Termination § 460.50 Termination of PACE...

  8. 42 CFR 460.180 - Medicare payment to PACE organizations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... SERVICES (CONTINUED) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) Payment § 460.180 Medicare payment to PACE organizations. (a) Principle of...

  9. A SVM framework for fault detection of the braking system in a high speed train

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Li, Yan-Fu; Zio, Enrico

    2017-03-01

    By April 2015, the number of operating High Speed Trains (HSTs) in the world had reached 3,603. An efficient, effective and very reliable braking system is evidently very critical for trains running at a speed around 300 km/h. Failure of a highly reliable braking system is a rare event and, consequently, informative recorded data on fault conditions are scarce. This renders the fault detection problem a classification problem with highly unbalanced data. In this paper, a Support Vector Machine (SVM) framework, including feature selection, feature vector selection, model construction and decision boundary optimization, is proposed for tackling this problem. Feature vector selection can largely reduce the data size and, thus, the computational burden. The constructed model is a modified version of the least-squares SVM, in which a higher cost is assigned to the error of classification of faulty conditions than to the error of classification of normal conditions. The proposed framework is successfully validated on a number of public unbalanced datasets. Then, it is applied for the fault detection of braking systems in HSTs: in comparison with several SVM approaches for unbalanced datasets, the proposed framework gives better results.
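
    The abstract's key idea is an asymmetric misclassification cost that penalizes errors on the rare faulty class more heavily. The authors use a modified least-squares SVM; as a hedged stand-in, the same idea can be sketched with a class-weighted SVM in scikit-learn. The data, labels, and weights below are purely illustrative, not those of the braking-system study.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Toy unbalanced data: 1,000 "normal" samples (label 0), 30 "faulty" samples (label 1).
X = np.vstack([rng.normal(0.0, 1.0, size=(1000, 5)),
               rng.normal(2.0, 1.0, size=(30, 5))])
y = np.concatenate([np.zeros(1000), np.ones(30)])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Assign a higher cost to errors on the rare faulty class (weight 20 vs. 1).
clf = SVC(kernel="rbf", class_weight={0: 1.0, 1: 20.0}).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```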

  10. Semi-supervised anomaly detection - towards model-independent searches of new physics

    NASA Astrophysics Data System (ADS)

    Kuusela, Mikael; Vatanen, Tommi; Malmi, Eric; Raiko, Tapani; Aaltonen, Timo; Nagai, Yoshikazu

    2012-06-01

    Most classification algorithms used in high energy physics fall under the category of supervised machine learning. Such methods require a training set containing both signal and background events and are prone to classification errors should this training data be systematically inaccurate, for example due to the assumed MC model. To complement such model-dependent searches, we propose an algorithm based on semi-supervised anomaly detection techniques, which does not require a MC training sample for the signal data. We first model the background using a multivariate Gaussian mixture model. We then search for deviations from this model by fitting to the observations a mixture of the background model and a number of additional Gaussians. This allows us to perform pattern recognition of any anomalous excess over the background. We show by a comparison to neural network classifiers that such an approach is much more robust against misspecification of the signal MC than supervised classification. In cases where there is an unexpected signal, a neural network might fail to correctly identify it, while anomaly detection does not suffer from such a limitation. On the other hand, when there are no systematic errors in the training data, both methods perform comparably.
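
    A rough sketch of the background-modelling step follows: fit a Gaussian mixture to background-only data and flag observations that are unlikely under it. The paper goes further and fits additional Gaussians to the observed data to model an anomalous excess; only the simpler likelihood-thresholding step is shown here, on synthetic data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Background-only training sample, plus an observed sample that may contain
# an unexpected excess around (3, 3).
background = rng.normal(0.0, 1.0, size=(5000, 2))
observed = np.vstack([rng.normal(0.0, 1.0, size=(950, 2)),
                      rng.normal(3.0, 0.3, size=(50, 2))])

# Model the background with a multivariate Gaussian mixture.
bkg_model = GaussianMixture(n_components=3, random_state=0).fit(background)

# Observations with a low log-likelihood under the background model are anomalous.
threshold = np.quantile(bkg_model.score_samples(background), 0.01)
anomalies = observed[bkg_model.score_samples(observed) < threshold]
print(f"{len(anomalies)} of {len(observed)} observations flagged as anomalous")
```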

  11. A Q-backpropagated time delay neural network for diagnosing severity of gait disturbances in Parkinson's disease.

    PubMed

    Nancy Jane, Y; Khanna Nehemiah, H; Arputharaj, Kannan

    2016-04-01

    Parkinson's disease (PD) is a movement disorder that affects the patient's nervous system; health-care applications mostly use wearable sensors to collect gait data from these patients. Since these sensors generate time-stamped data, analyzing gait disturbances in PD becomes a challenging task. The objective of this paper is to develop an effective clinical decision-making system (CDMS) that aids the physician in diagnosing the severity of gait disturbances in PD-affected patients. This paper presents a Q-backpropagated time delay neural network (Q-BTDNN) classifier that builds a temporal classification model, which performs the task of classification and prediction in the CDMS. The proposed Q-learning induced backpropagation (Q-BP) training algorithm trains the Q-BTDNN by generating a reinforced error signal. The network's weights are adjusted by backpropagating the generated error signal. For experimentation, the proposed work uses a PD gait database, which contains gait measures collected through wearable sensors from three different PD research studies. The experimental results prove the efficiency of Q-BP in terms of its improved classification accuracy of 91.49%, 92.19% and 90.91% on the three datasets, respectively, compared to other neural network training algorithms. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Bayesian logistic regression approaches to predict incorrect DRG assignment.

    PubMed

    Suleiman, Mani; Demirhan, Haydar; Boyd, Leanne; Girosi, Federico; Aksakalli, Vural

    2018-05-07

    Episodes of care involving similar diagnoses and treatments and requiring similar levels of resource utilisation are grouped to the same Diagnosis-Related Group (DRG). In jurisdictions which implement DRG-based payment systems, DRGs are a major determinant of funding for inpatient care. Hence, service providers often dedicate auditing staff to the task of checking that episodes have been coded to the correct DRG. The use of statistical models to estimate an episode's probability of DRG error can significantly improve the efficiency of clinical coding audits. This study implements Bayesian logistic regression models with weakly informative prior distributions to estimate the likelihood that episodes require a DRG revision, comparing these models with each other and with classical maximum likelihood estimates. All Bayesian approaches had more stable model parameters than maximum likelihood. The best performing Bayesian model improved overall classification performance by 6% compared to maximum likelihood, and by 34% compared to random classification. We found that the original DRG, the coder and the day of coding all have a significant effect on the likelihood of DRG error. Use of Bayesian approaches has improved model parameter stability and classification accuracy. This method has already led to improved audit efficiency in an operational capacity.
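
    As a loose illustration of the modelling idea (not the paper's actual implementation), a zero-mean Gaussian prior on the coefficients corresponds, at the MAP point, to an L2-penalized logistic regression. The sketch below uses placeholder features; in practice the original DRG, coder, and day of coding would be encoded as predictors.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Placeholder episode features and a binary "needs DRG revision" label.
X = rng.normal(size=(500, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=500) > 0).astype(int)

# C is the inverse of the L2 penalty strength; a smaller C corresponds to a
# tighter (more informative) Gaussian prior centred at zero.
map_model = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
print("10-fold CV accuracy:", cross_val_score(map_model, X, y, cv=10).mean())
```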

  13. A data-driven modeling approach to stochastic computation for low-energy biomedical devices.

    PubMed

    Lee, Kyong Ho; Jang, Kuk Jin; Shoeb, Ali; Verma, Naveen

    2011-01-01

    Low-power devices that can detect clinically relevant correlations in physiologically-complex patient signals can enable systems capable of closed-loop response (e.g., controlled actuation of therapeutic stimulators, continuous recording of disease states, etc.). In ultra-low-power platforms, however, hardware error sources are becoming increasingly limiting. In this paper, we present how data-driven methods, which allow us to accurately model physiological signals, also allow us to effectively model and overcome prominent hardware error sources with nearly no additional overhead. Two applications, EEG-based seizure detection and ECG-based arrhythmia-beat classification, are synthesized to a logic-gate implementation, and two prominent error sources are introduced: (1) SRAM bit-cell errors and (2) logic-gate switching errors ('stuck-at' faults). Using patient data from the CHB-MIT and MIT-BIH databases, performance similar to error-free hardware is achieved even for very high fault rates (up to 0.5 for SRAMs and 7 × 10⁻² for logic) that cause computational bit error rates as high as 50%.
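
    One way to get intuition for the SRAM bit-cell error source is to quantize stored features and flip bits at a chosen fault rate, then measure how classification degrades. The following sketch does this with a generic classifier on synthetic data; it is an illustration of the fault model only, not a reproduction of the paper's hardware experiments.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)

def inject_bit_errors(x_uint8, fault_rate):
    """Flip each stored bit independently with probability `fault_rate`."""
    bits = np.unpackbits(x_uint8, axis=1)                      # (n, 8 * d)
    flips = (rng.random(bits.shape) < fault_rate).astype(np.uint8)
    return np.packbits(bits ^ flips, axis=1)                   # back to (n, d)

# Synthetic two-class data, quantized to 8-bit features as if stored in SRAM.
X, y = make_classification(n_samples=2000, n_features=16, random_state=0)
X_q = np.clip((X - X.min()) / (X.max() - X.min()) * 255, 0, 255).astype(np.uint8)
X_tr, X_te, y_tr, y_te = train_test_split(X_q, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr.astype(float), y_tr)
for rate in (0.0, 1e-3, 1e-2, 5e-2):
    acc = accuracy_score(y_te, clf.predict(inject_bit_errors(X_te, rate).astype(float)))
    print(f"fault rate {rate:g}: accuracy {acc:.3f}")
```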

  14. Association of medication errors with drug classifications, clinical units, and consequence of errors: Are they related?

    PubMed

    Muroi, Maki; Shen, Jay J; Angosta, Alona

    2017-02-01

    Registered nurses (RNs) play an important role in safe medication administration and patient safety. This study examined a total of 1276 medication error (ME) incident reports made by RNs in hospital inpatient settings in the southwestern region of the United States. The most common drug class associated with MEs was cardiovascular drugs (24.7%). Among this class, anticoagulants had the most errors (11.3%). Antimicrobials were the second most common drug class associated with errors (19.1%), and vancomycin was the most common antimicrobial that caused errors in this category (6.1%). MEs occurred more frequently in the medical-surgical and intensive care units than in any other hospital units. Ten percent of MEs reached the patients with harm and 11% reached the patients with increased monitoring. Understanding the contributing factors related to MEs, addressing and eliminating risk of errors across hospital units, and providing education and resources for nurses may help reduce MEs. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. DefenseLink Special: Travels with Pace, March 2006

    Science.gov Websites

    Travels with Pace: Chairman of the Joint Chiefs of Staff Marine Gen. Peter Pace. U.S. Marine Gen. Peter Pace, chairman of the Joint Chiefs of Staff, speaks to students attending the Turkish War College (photo: U.S. Air Force Staff Sgt. D. Myles Cullen). Pace wraps up visit to allied nations, Washington.

  16. Classification Management. Journal of the National Classification Management Society, Volume 18, 1982,

    DTIC Science & Technology

    1983-01-01

    ... changes. Concurrently, CIA formed an ad hoc Intelligence Community Working Group to review ... interesting to step back and look at the U.S. security ... administrative error; to prevent embarrassment to a person, organization, or agency; to restrain competition; or to prevent or delay the public release of information ... what the expected damage will be. If you foresee the damage ... the decision will be to classify the information. But note that in this thought process, you ...

  17. Effect of short-term rapid ventricular pacing followed by pacing interruption on arterial blood pressure in healthy pigs and pigs with tachycardiomyopathy.

    PubMed

    Skrzypczak, P; Zyśko, D; Pasławska, U; Noszczyk-Nowak, A; Janiszewski, A; Gajek, J; Nicpoń, J; Kiczak, L; Bania, J; Zacharski, M; Tomaszek, A; Jankowska, E A; Ponikowski, P; Witkiewicz, W

    2014-01-01

    Ventricular tachycardia may lead to haemodynamic deterioration and, in the case of long-term persistence, is associated with the development of tachycardiomyopathy. The effect of ventricular tachycardia on haemodynamics in individuals with tachycardiomyopathy who are in sinus rhythm has not been studied. Rapid ventricular pacing is a model of ventricular tachycardia. The aim of this study was to determine the effect of rapid ventricular pacing on blood pressure in healthy animals and those with tachycardiomyopathy. A total of 66 animals were studied: 32 in the control group and 34 in the study group. The results of two groups of examinations were compared: the first performed in healthy animals (133 examinations) and the second performed in animals paced for at least one month (77 examinations). Blood pressure measurements were taken during chronic pacing (20 min after onset of general anaesthesia), in baseline conditions (20 min after pacing cessation, or 20 min after onset of general anaesthesia in healthy animals) and immediately after short-term rapid pacing. In baseline conditions significantly higher systolic and diastolic blood pressure was found in healthy animals than in those with tachycardiomyopathy. During an event of rapid ventricular pacing, a significant decrease in systolic and diastolic blood pressure was found in both groups of animals. In the group of chronically paced animals the blood pressure was lower just after restarting ventricular pacing than during chronic pacing. Cardiovascular adaptation to ventricular tachycardia develops with the length of its duration. Relapse of ventricular tachycardia leads to a blood pressure decrease more pronounced than that during chronic ventricular pacing.

  18. Validation of tool mark analysis of cut costal cartilage.

    PubMed

    Love, Jennifer C; Derrick, Sharon M; Wiersema, Jason M; Peters, Charles

    2012-03-01

    This study was designed to establish the potential error rate associated with the generally accepted method of tool mark analysis of cut marks in costal cartilage. Three knives with different blade types were used to make experimental cut marks in costal cartilage of pigs. Each cut surface was cast, and each cast was examined by three analysts working independently. The presence of striations, regularity of striations, and presence of a primary and secondary striation pattern were recorded for each cast. The distance between each striation was measured. The results showed that striations were not consistently impressed on the cut surface by the blade's cutting edge. Also, blade type classification by the presence or absence of striations led to a 65% misclassification rate. Use of the classification tree and cross-validation methods and inclusion of the mean interstriation distance decreased the error rate to c. 50%. © 2011 American Academy of Forensic Sciences.

  19. Automatic and semi-automatic approaches for arteriolar-to-venular computation in retinal photographs

    NASA Astrophysics Data System (ADS)

    Mendonça, Ana Maria; Remeseiro, Beatriz; Dashtbozorg, Behdad; Campilho, Aurélio

    2017-03-01

    The Arteriolar-to-Venular Ratio (AVR) is a popular dimensionless measure which allows the assessment of patients' condition for the early diagnosis of different diseases, including hypertension and diabetic retinopathy. This paper presents two new approaches for AVR computation in retinal photographs which include a sequence of automated processing steps: vessel segmentation, caliber measurement, optic disc segmentation, artery/vein classification, region of interest delineation, and AVR calculation. Both approaches have been tested on the INSPIRE-AVR dataset, and compared with a ground-truth provided by two medical specialists. The obtained results demonstrate the reliability of the fully automatic approach, which provides AVR ratios very similar to those of at least one of the observers. Furthermore, the semi-automatic approach, which includes the manual modification of the artery/vein classification if needed, allows the error to be reduced significantly, to a level below the human error.

  20. Bayes classification of terrain cover using normalized polarimetric data

    NASA Technical Reports Server (NTRS)

    Yueh, H. A.; Swartz, A. A.; Kong, J. A.; Shin, R. T.; Novak, L. M.

    1988-01-01

    The normalized polarimetric classifier (NPC) which uses only the relative magnitudes and phases of the polarimetric data is proposed for discrimination of terrain elements. The probability density functions (PDFs) of polarimetric data are assumed to have a complex Gaussian distribution, and the marginal PDF of the normalized polarimetric data is derived by adopting the Euclidean norm as the normalization function. The general form of the distance measure for the NPC is also obtained. It is demonstrated that for polarimetric data with an arbitrary PDF, the distance measure of NPC will be independent of the normalization function selected even when the classifier is mistrained. A complex Gaussian distribution is assumed for the polarimetric data consisting of grass and tree regions. The probability of error for the NPC is compared with those of several other single-feature classifiers. The classification error of NPCs is shown to be independent of the normalization function.

  1. Anatomical approach to permanent His bundle pacing: Optimizing His bundle capture.

    PubMed

    Vijayaraman, Pugazhendhi; Dandamudi, Gopi

    2016-01-01

    Permanent His bundle pacing is a physiological alternative to right ventricular pacing. In this article we describe our approach to His bundle pacing in patients with AV nodal and intra-Hisian conduction disease. It is essential for the implanters to understand the anatomic variations of the His bundle course and its effect on the type of His bundle pacing achieved. We describe several case examples to illustrate our anatomical approach to permanent His bundle pacing in this article. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. The intercrater plains of Mercury and the Moon: Their nature, origin and role in terrestrial planet evolution. Measurement and errors of crater statistics. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Leake, M. A.

    1982-01-01

    Planetary imagery techniques, errors in measurement or degradation assignment, and statistical formulas are presented with respect to cratering data. Base map photograph preparation, measurement of crater diameters and sampled area, and instruments used are discussed. Possible uncertainties, such as Sun angle, scale factors, degradation classification, and biases in crater recognition are discussed. The mathematical formulas used in crater statistics are presented.

  3. Radiomic machine-learning classifiers for prognostic biomarkers of advanced nasopharyngeal carcinoma.

    PubMed

    Zhang, Bin; He, Xin; Ouyang, Fusheng; Gu, Dongsheng; Dong, Yuhao; Zhang, Lu; Mo, Xiaokai; Huang, Wenhui; Tian, Jie; Zhang, Shuixing

    2017-09-10

    We aimed to identify optimal machine-learning methods for radiomics-based prediction of local failure and distant failure in advanced nasopharyngeal carcinoma (NPC). We enrolled 110 patients with advanced NPC. A total of 970 radiomic features were extracted from MRI images for each patient. Six feature selection methods and nine classification methods were evaluated in terms of their performance. We applied 10-fold cross-validation as the criterion for feature selection and classification. We repeated each combination 50 times to obtain the mean area under the curve (AUC) and test error. We observed that the combination method Random Forest (RF) + RF (AUC, 0.8464 ± 0.0069; test error, 0.3135 ± 0.0088) had the highest prognostic performance, followed by RF + Adaptive Boosting (AdaBoost) (AUC, 0.8204 ± 0.0095; test error, 0.3384 ± 0.0097), and Sure Independence Screening (SIS) + Linear Support Vector Machines (LSVM) (AUC, 0.7883 ± 0.0096; test error, 0.3985 ± 0.0100). Our radiomics study identified optimal machine-learning methods for the radiomics-based prediction of local failure and distant failure in advanced NPC, which could enhance the applications of radiomics in precision oncology and clinical practice. Copyright © 2017 Elsevier B.V. All rights reserved.
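
    As a hedged sketch of one of the evaluated combinations (Random Forest for both feature selection and classification) with repeated stratified 10-fold cross-validation, the snippet below uses a synthetic stand-in for the radiomics matrix; feature counts and hyperparameters are placeholders rather than the study's settings.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.pipeline import Pipeline
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Synthetic stand-in for a radiomics matrix: 110 patients x 970 features.
X, y = make_classification(n_samples=110, n_features=970, n_informative=15,
                           random_state=0)

# Random Forest-based feature selection followed by a Random Forest classifier.
pipe = Pipeline([
    ("select", SelectFromModel(
        RandomForestClassifier(n_estimators=200, random_state=0), max_features=30)),
    ("clf", RandomForestClassifier(n_estimators=200, random_state=0)),
])

# 10-fold CV, repeated (5 repeats here for speed; the paper uses 50).
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=0)
aucs = cross_val_score(pipe, X, y, cv=cv, scoring="roc_auc")
print(f"mean AUC {aucs.mean():.3f} +/- {aucs.std():.3f}")
```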

  4. Higher sympathetic nerve activity during ventricular (VVI) than during dual-chamber (DDD) pacing

    NASA Technical Reports Server (NTRS)

    Taylor, J. A.; Morillo, C. A.; Eckberg, D. L.; Ellenbogen, K. A.

    1996-01-01

    OBJECTIVES: We determined the short-term effects of single-chamber ventricular pacing and dual-chamber atrioventricular (AV) pacing on directly measured sympathetic nerve activity. BACKGROUND: Dual-chamber AV cardiac pacing results in greater cardiac output and lower systemic vascular resistance than does single-chamber ventricular pacing. However, it is unclear whether these hemodynamic advantages result in less sympathetic nervous system outflow. METHODS: In 13 patients with a dual-chamber pacemaker, we recorded the electrocardiogram, noninvasive arterial pressure (Finapres), respiration and muscle sympathetic nerve activity (microneurography) during 3 min of underlying basal heart rate and 3 min of ventricular and AV pacing at rates of 60 and 100 beats/min. RESULTS: Arterial pressure was lowest and muscle sympathetic nerve activity was highest at the underlying basal heart rate. Arterial pressure increased with cardiac pacing and was greater with AV than with ventricular pacing (change in mean blood pressure +/- SE: 10 +/- 3 vs. 2 +/- 2 mm Hg at 60 beats/min; 21 +/- 5 vs. 14 +/- 2 mm Hg at 100 beats/min; p < 0.05). Sympathetic nerve activity decreased with cardiac pacing and the decline was greater with AV than with ventricular pacing (60 beats/min -40 +/- 11% vs. -17 +/- 7%; 100 beats/min -60 +/- 9% vs. -48 +/- 10%; p < 0.05). Although most patients showed a strong inverse relation between arterial pressure and muscle sympathetic nerve activity, three patients with severe left ventricular dysfunction (ejection fraction < or = 30%) showed no relation between arterial pressure and sympathetic activity. CONCLUSIONS: Short-term AV pacing results in lower sympathetic nerve activity and higher arterial pressure than does ventricular pacing, indicating that cardiac pacing mode may influence sympathetic outflow simply through arterial baroreflex mechanisms. We speculate that the greater incidence of adverse outcomes in patients treated with single-chamber ventricular rather than dual-chamber pacing may be due in part to increased sympathetic nervous outflow.

  5. Optimizing local capture of atrial fibrillation by rapid pacing: study of the influence of tissue dynamics.

    PubMed

    Uldry, Laurent; Virag, Nathalie; Jacquemet, Vincent; Vesin, Jean-Marc; Kappenberger, Lukas

    2010-12-01

    While successful termination by pacing of organized atrial tachycardias has been observed in patients, rapid pacing of atrial fibrillation (AF) can induce local capture of the atrial tissue but, in general, no termination. The purpose of this study was to perform a systematic evaluation of the ability to capture AF by rapid pacing in a biophysical model of the atria with different dynamics in terms of conduction velocity (CV) and action potential duration (APD). Rapid pacing was applied during 30 s at five locations on the atria, for pacing cycle lengths in the range 60-110% of the mean AF cycle length (AFCL(mean)). Local AF capture could be achieved using rapid pacing at pacing sites located distal to major anatomical obstacles. Optimal pacing cycle lengths were found in the range 74-80% AFCL(mean) (capture window width: 14.6 ± 3% AFCL(mean)). An increase/decrease in CV or APD led to a significant shrinking/stretching of the capture window. Capture did not depend on AFCL, but did depend on the atrial substrate as characterized by an estimate of its wavelength, with better capture achieved at shorter wavelengths. This model-based study suggests that a proper selection of the pacing site and cycle length can influence local capture results and that atrial tissue properties (CV and APD) are determinants of the response to rapid pacing.

  6. Right Ventricular Outflow Tract Septal Pacing Is Superior to Right Ventricular Apical Pacing

    PubMed Central

    Zou, Cao; Song, Jianping; Li, Hui; Huang, Xingmei; Liu, Yuping; Zhao, Caiming; Shi, Xin; Yang, Xiangjun

    2015-01-01

    Background The effects of right ventricular apical pacing (RVAP) and right ventricular outflow tract (RVOT) septal pacing on atrial and ventricular electrophysiology have not been thoroughly compared. Methods and Results To identify a more favorable pacing strategy with fewer adverse effects, 80 patients who had complete atrioventricular block with normal cardiac function and who were treated with either RVAP (n=42) or RVOT septal pacing (n=38) were recruited after an average of 2 years of follow‐up. The data from electrocardiography and echocardiography performed before pacemaker implantation and at the end of follow‐up were collected. The patients in the RVOT septal pacing and RVAP groups showed similar demographic and clinical characteristics before pacing treatments. After a mean follow‐up of 2 years, the final maximum P‐wave duration; P‐wave dispersion; Q‐, R‐, and S‐wave complex duration; left atrial volume index; left ventricular end‐systolic diameter; ratio of transmitral early diastolic filling velocity to mitral annular early diastolic velocity; and interventricular mechanical delay in the RVOT septal pacing group were significantly less than those in the RVAP group (P<0.05). The final left ventricular ejection fraction of the RVOT septal pacing group was significantly higher than that of the RVAP group (P<0.05). Conclusions Compared with RVAP, RVOT septal pacing has fewer adverse effects regarding atrial electrical activity and structure in patients with normal cardiac function. PMID:25896891

  7. 42 CFR 460.24 - Limit on number of PACE program agreements.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... SERVICES (CONTINUED) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) PACE Organization Application and Waiver Process § 460.24 Limit on number of...

  8. 42 CFR 460.24 - Limit on number of PACE program agreements.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... SERVICES (CONTINUED) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) PACE Organization Application and Waiver Process § 460.24 Limit on number of...

  9. High Dimensional Classification Using Features Annealed Independence Rules.

    PubMed

    Fan, Jianqing; Fan, Yingying

    2008-01-01

    Classification using high-dimensional features arises frequently in many contemporary statistical studies such as tumor classification using microarray or other high-throughput data. The impact of dimensionality on classification is poorly understood. In a seminal paper, Bickel and Levina (2004) show that the Fisher discriminant performs poorly due to diverging spectra, and they propose to use the independence rule to overcome the problem. We first demonstrate that even for the independence classification rule, classification using all the features can be as bad as random guessing due to noise accumulation in estimating population centroids in high-dimensional feature space. In fact, we demonstrate further that almost all linear discriminants can perform as badly as random guessing. Thus, it is of paramount importance to select a subset of important features for high-dimensional classification, resulting in Features Annealed Independence Rules (FAIR). The conditions under which all the important features can be selected by the two-sample t-statistic are established. The choice of the optimal number of features, or equivalently, the threshold value of the test statistics, is proposed based on an upper bound of the classification error. Simulation studies and real data analysis support our theoretical results and demonstrate convincingly the advantage of our new classification procedure.
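
    The core recipe can be sketched under stated assumptions: rank features by the absolute two-sample t-statistic on the training data, keep the top m, and classify with an independence (diagonal-covariance) rule, approximated here by Gaussian naive Bayes. Data and feature counts are synthetic; this is not the authors' code.

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# High-dimensional toy data: few samples, many mostly-noise features.
X, y = make_classification(n_samples=200, n_features=2000, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Rank features by |two-sample t-statistic|, computed on the training set only.
t_stats, _ = ttest_ind(X_tr[y_tr == 0], X_tr[y_tr == 1], axis=0)
order = np.argsort(-np.abs(t_stats))

for m in (10, 50, 2000):
    keep = order[:m]
    clf = GaussianNB().fit(X_tr[:, keep], y_tr)
    print(f"top {m:4d} features: test accuracy "
          f"{accuracy_score(y_te, clf.predict(X_te[:, keep])):.3f}")
```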

  10. A label distance maximum-based classifier for multi-label learning.

    PubMed

    Liu, Xiaoli; Bao, Hang; Zhao, Dazhe; Cao, Peng

    2015-01-01

    Multi-label classification is useful in many bioinformatics tasks such as gene function prediction and protein site localization. This paper presents an improved neural network algorithm, Max Label Distance Back Propagation Algorithm for Multi-Label Classification. The method was formulated by modifying the total error function of the standard BP by adding a penalty term, which was realized by maximizing the distance between the positive and negative labels. Extensive experiments were conducted to compare this method against state-of-the-art multi-label methods on three popular bioinformatic benchmark datasets. The results illustrated that this proposed method is more effective for bioinformatic multi-label classification compared to commonly used techniques.

  11. Lacie phase 1 Classification and Mensuration Subsystem (CAMS) rework experiment

    NASA Technical Reports Server (NTRS)

    Chhikara, R. S.; Hsu, E. M.; Liszcz, C. J.

    1976-01-01

    An experiment was designed to test the ability of the Classification and Mensuration Subsystem rework operations to improve wheat proportion estimates for segments that had been processed previously. Sites selected for the experiment included three in Kansas and three in Texas, with the remaining five distributed in Montana and North and South Dakota. The acquisition dates were selected to be representative of imagery available in actual operations. No more than one acquisition per biophase was used, and biophases were determined by actual crop calendars. All sites were worked by each of four Analyst-Interpreter/Data Processing Analyst teams, who reviewed the initial processing of each segment and accepted or reworked it to estimate the proportion of small grains in the segment. Classification results, acquisitions, classification errors, and performance results comparing CAMS regular and ITS rework processing are tabulated.

  12. Effect of Right Ventricular versus Biventricular Pacing on Electrical Remodeling in the Normal Heart

    PubMed Central

    Saba, Samir; Mehdi, Haider; Mathier, Michael A.; Islam, M. Zahadul; Salama, Guy; London, Barry

    2010-01-01

    Background Biventricular (BIV) pacing can improve cardiac function in heart failure by altering the mechanical and electrical substrates. We investigated the effect of BIV versus right ventricular (RV) pacing on the normal heart. Methods and Results Male New Zealand White rabbits (n=33) were divided into 3 groups: sham-operated (control), RV pacing, and BIV pacing groups. Four weeks after surgery, the native QT (p=0.004) interval was significantly shorter in the BIV group compared to the RV or sham-operated groups. Also, compared to rabbits in the RV group, rabbits in the BIV group had shorter RV ventricular effective refractory period (VERP) at all cycle lengths, and shorter LV paced QT interval during the drive train of stimuli and close to refractoriness (p<0.001 for all comparisons). Protein expression of the KVLQT1 was significantly increased in the BIV group compared to the RV and control groups, while protein expression of SCN5A and connexin43 was significantly decreased in the RV compared to the other study groups. Erg protein expression was significantly increased in both pacing groups compared to the controls. Conclusions In this rabbit model, we demonstrate a direct effect of BIV but not RV pacing on shortening the native QT interval as well as the paced QT interval during burst pacing and close to the VERP. These findings underscore the fact that the effect of BIV pacing is partially mediated through direct electrical remodeling and may have implications as to the effect of BIV pacing on arrhythmia incidence and burden. PMID:20042767

  13. Interatrial septum versus right atrial appendage pacing for prevention of atrial fibrillation : A meta-analysis of randomized controlled trials.

    PubMed

    Zhang, L; Jiang, H; Wang, W; Bai, J; Liang, Y; Su, Y; Ge, J

    2017-07-28

    Interatrial septum (IAS) pacing seems to be a promising strategy for the prevention of atrial fibrillation (AF); however, studies have yielded conflicting results. This meta-analysis was conducted to compare IAS with right atrial appendage (RAA) pacing for the prevention of post-pacing AF occurrence. PubMed, MEDLINE, EMBASE and Web of Science databases were searched through October 2016 for randomized controlled trials comparing IAS with RAA pacing on the prevention of AF. Data concerning study design, patient characteristics and outcomes were extracted. Risk ratios (RR), weighted mean differences (WMD) or standardized mean differences (SMD) were calculated using fixed or random effects models. A total of 12 trials involving 1146 patients with dual-chamber pacing were included. Although IAS was superior to RAA pacing in terms of reducing the number of AF episodes (SMD = -0.29, P = 0.05), AF burden (SMD = -0.41, P = 0.008) and P-wave duration (WMD = -34.45 ms, P < 0.0001), neither permanent AF occurrence (RR = 0.94, P = 0.58) nor recurrences of AF (RR = 0.88, P = 0.36) were reduced by IAS pacing. Nevertheless, no differences were observed concerning all-cause death (RR = 1.04, P = 0.88), procedure-related events (RR = 1.17, P = 0.69) and pacing parameters between IAS and RAA pacing in the follow-up period. IAS pacing is safe and as well tolerated as RAA pacing. Although IAS pacing may fail to prevent permanent AF occurrence and recurrences of AF, it is able not only to improve interatrial conduction, but also to reduce AF burden.

  14. Effects of septal pacing on P wave characteristics: the value of three-dimensional echocardiography.

    PubMed

    Szili-Torok, Tamas; Bruining, Nico; Scholten, Marcoen; Kimman, Geert-Jan; Roelandt, Jos; Jordaens, Luc

    2003-01-01

    Interatrial septum (IAS) pacing has been proposed for the prevention of paroxysmal atrial fibrillation. IAS pacing is usually guided by fluoroscopy and P wave analysis. The authors have developed a new approach for IAS pacing using intracardiac echocardiography (ICE), and examined its effects on P wave characteristics. Cross-sectional images are acquired during pullback of the ICE transducer from the superior vena cava into the inferior vena cava by an electrocardiogram- and respiration-gated technique. The right atrium and IAS are then three-dimensionally reconstructed, and the desired pacing site is selected. After lead placement and electrical testing, another three-dimensional reconstruction is performed to verify the final lead position. The study included 14 patients. IAS pacing was achieved at seven suprafossal (SF) and seven infrafossal (IF) lead locations, all confirmed by three-dimensional imaging. IAS pacing resulted in a significant reduction of P wave duration as compared to sinus rhythm (99.7 +/- 18.7 vs 140.4 +/- 8.8 ms; P < 0.01). SF pacing was associated with a greater reduction of P wave duration than IF pacing (56.1 +/- 9.9 vs 30.2 +/- 13.6 ms; P < 0.01). P wave dispersion remained unchanged during septal pacing as compared to sinus rhythm (21.4 +/- 16.1 vs 13.5 +/- 13.9 ms; NS). Three-dimensional intracardiac echocardiography can be used to guide IAS pacing. SF pacing was associated with a greater decrease in P wave duration, suggesting that it is a preferable location to decrease interatrial conduction delay.

  15. Pacing a data transfer operation between compute nodes on a parallel computer

    DOEpatents

    Blocksome, Michael A [Rochester, MN

    2011-09-13

    Methods, systems, and products are disclosed for pacing a data transfer between compute nodes on a parallel computer that include: transferring, by an origin compute node, a chunk of an application message to a target compute node; sending, by the origin compute node, a pacing request to a target direct memory access (`DMA`) engine on the target compute node using a remote get DMA operation; determining, by the origin compute node, whether a pacing response to the pacing request has been received from the target DMA engine; and transferring, by the origin compute node, a next chunk of the application message if the pacing response to the pacing request has been received from the target DMA engine.
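
    The claimed control flow (send a chunk, issue a pacing request via a remote-get DMA operation, wait for the pacing response, then send the next chunk) can be illustrated with a toy sketch. Everything below is hypothetical scaffolding: FakeTarget merely simulates a target DMA engine so the loop is runnable, and is not a real DMA API.

```python
import time
from collections import deque

class FakeTarget:
    """Toy stand-in for a target compute node's DMA engine (illustrative only)."""
    def __init__(self):
        self.received = bytearray()
        self._responses = deque()

    def accept_chunk(self, chunk: bytes) -> None:
        self.received.extend(chunk)

    def accept_pacing_request(self) -> None:
        # A real target DMA engine would reply once it has drained its queues;
        # this toy target replies immediately.
        self._responses.append(True)

    def poll_pacing_response(self) -> bool:
        return bool(self._responses) and self._responses.popleft()

def paced_transfer(message: bytes, chunk_size: int, target: FakeTarget) -> None:
    """Send `message` one chunk at a time, waiting for a pacing response
    (mimicking the remote-get handshake) before sending the next chunk."""
    for offset in range(0, len(message), chunk_size):
        target.accept_chunk(message[offset:offset + chunk_size])
        target.accept_pacing_request()
        while not target.poll_pacing_response():
            time.sleep(0)  # yield until the target DMA engine replies

target = FakeTarget()
paced_transfer(b"x" * 1000, chunk_size=128, target=target)
print(len(target.received))  # -> 1000
```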

  16. Sample Errors Call Into Question Conclusions Regarding Same-Sex Married Parents: A Comment on "Family Structure and Child Health: Does the Sex Composition of Parents Matter?"

    PubMed

    Paul Sullins, D

    2017-12-01

    Because of classification errors reported by the National Center for Health Statistics, an estimated 42 % of the same-sex married partners in the sample for this study are misclassified different-sex married partners, thus calling into question findings regarding same-sex married parents. Including biological parentage as a control variable suppresses same-sex/different-sex differences, thus obscuring the data error. Parentage is not appropriate as a control because it correlates nearly perfectly (+.97, gamma) with the same-sex/different-sex distinction and is invariant for the category of joint biological parents.

  17. Image Augmentation for Object Image Classification Based On Combination of Pre-Trained CNN and SVM

    NASA Astrophysics Data System (ADS)

    Shima, Yoshihiro

    2018-04-01

    Neural networks are a powerful means of classifying object images. The proposed image category classification method for object images combines convolutional neural networks (CNNs) and support vector machines (SVMs). A pre-trained CNN, called Alex-Net, is used as a pattern-feature extractor. Instead of training a network from scratch, Alex-Net pre-trained on the large-scale object-image dataset ImageNet is used, and an SVM is used as the trainable classifier; the feature vectors are passed from Alex-Net to the SVM. The STL-10 dataset is used for the object images; the number of classes is ten, and training and test samples are clearly split. The SVM is trained on STL-10 object images with data augmentation. We use a pattern transformation method based on the cosine function, and also apply other augmentation methods such as rotation, skewing and elastic distortion. By using the cosine function, the original patterns were left-justified, right-justified, top-justified, or bottom-justified; patterns were also center-justified and enlarged. The test error rate is decreased by 0.435 percentage points from 16.055% by augmentation with the cosine transformation. Error rates increase with the other augmentation methods (rotation, skewing and elastic distortion) compared with no augmentation. The number of augmented samples is 30 times that of the original 5,000 STL-10 training samples. The experimental test error rate for the 8,000 STL-10 test object images was 15.620%, which shows that image augmentation is effective for image category classification.
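
    A minimal sketch of the feature-extractor-plus-SVM pattern is shown below, assuming torchvision's pre-trained AlexNet (the weights keyword varies with the torchvision version) and scikit-learn's linear SVM; the random tensors stand in for preprocessed STL-10 images, and the paper's cosine-based augmentation is not reproduced here.

```python
import torch
from torchvision import models
from sklearn.svm import LinearSVC

# Pre-trained AlexNet used only as a fixed feature extractor (no fine-tuning).
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
alexnet.eval()

def extract_features(batch: torch.Tensor) -> torch.Tensor:
    """Return 4096-d activations of AlexNet's penultimate fully connected layer."""
    with torch.no_grad():
        x = alexnet.features(batch)
        x = alexnet.avgpool(x)
        x = torch.flatten(x, 1)
        return alexnet.classifier[:-1](x)   # drop the final 1000-way layer

# Placeholder batch: 8 random "images" standing in for preprocessed STL-10 samples.
images = torch.rand(8, 3, 224, 224)
labels = [0, 1, 0, 1, 0, 1, 0, 1]

features = extract_features(images).numpy()
svm = LinearSVC().fit(features, labels)
print(svm.predict(features[:2]))
```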

  18. Automated detection of cloud and cloud-shadow in single-date Landsat imagery using neural networks and spatial post-processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, Michael J.; Hayes, Daniel J

    2014-01-01

    Use of Landsat data to answer ecological questions is contingent on the effective removal of cloud and cloud shadow from satellite images. We develop a novel algorithm to identify and classify clouds and cloud shadow, SPARCS: Spatial Procedures for Automated Removal of Cloud and Shadow. The method uses neural networks to determine cloud, cloud-shadow, water, snow/ice, and clear-sky membership of each pixel in a Landsat scene, and then applies a set of procedures to enforce spatial rules. In a comparison to FMask, a high-quality cloud and cloud-shadow classification algorithm currently available, SPARCS performs favorably, with similar omission errors for clouds (0.8% and 0.9%, respectively), substantially lower omission error for cloud-shadow (8.3% and 1.1%), and fewer errors of commission (7.8% and 5.0%). Additionally, SPARCS provides a measure of uncertainty in its classification that can be exploited by other processes that use the cloud and cloud-shadow detection. To illustrate this, we present an application that constructs obstruction-free composites of images acquired on different dates in support of algorithms detecting vegetation change.

  19. Discrimination of Aspergillus isolates at the species and strain level by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry fingerprinting.

    PubMed

    Hettick, Justin M; Green, Brett J; Buskirk, Amanda D; Kashon, Michael L; Slaven, James E; Janotka, Erika; Blachere, Francoise M; Schmechel, Detlef; Beezhold, Donald H

    2008-09-15

    Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) was used to generate highly reproducible mass spectral fingerprints for 12 species of fungi of the genus Aspergillus and 5 different strains of Aspergillus flavus. Prior to MALDI-TOF MS analysis, the fungi were subjected to three 1-min bead beating cycles in an acetonitrile/trifluoroacetic acid solvent. The mass spectra contain abundant peaks in the range of 5 to 20kDa and may be used to discriminate between species unambiguously. A discriminant analysis using all peaks from the MALDI-TOF MS data yielded error rates for classification of 0 and 18.75% for resubstitution and cross-validation methods, respectively. If a subset of 28 significant peaks is chosen, resubstitution and cross-validation error rates are 0%. Discriminant analysis of the MALDI-TOF MS data for 5 strains of A. flavus using all peaks yielded error rates for classification of 0 and 5% for resubstitution and cross-validation methods, respectively. These data indicate that MALDI-TOF MS data may be used for unambiguous identification of members of the genus Aspergillus at both the species and strain levels.
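
    The two error estimates quoted above (resubstitution and cross-validation) are easy to mirror on a toy peak-intensity matrix with linear discriminant analysis; the synthetic data below merely stands in for MALDI-TOF peak intensities and is not the study's dataset.

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Synthetic stand-in: 12 classes ("species"), 10 spectra per class, 28 selected peaks.
X, y = make_classification(n_samples=120, n_features=28, n_informative=20,
                           n_classes=12, n_clusters_per_class=1, random_state=0)

lda = LinearDiscriminantAnalysis()

# Resubstitution error: train and test on the same data (optimistically biased).
resub_error = 1.0 - lda.fit(X, y).score(X, y)

# Cross-validation error: leave-one-out, as is common for small sample sizes.
cv_error = 1.0 - cross_val_score(lda, X, y, cv=LeaveOneOut()).mean()
print(f"resubstitution error {resub_error:.3f}, LOO CV error {cv_error:.3f}")
```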

  20. Identifying medication error chains from critical incident reports: a new analytic approach.

    PubMed

    Huckels-Baumgart, Saskia; Manser, Tanja

    2014-10-01

    Research into the distribution of medication errors usually focuses on isolated stages within the medication use process. Our study aimed to provide a novel process-oriented approach to medication incident analysis focusing on medication error chains. The study was conducted at a 900-bed teaching hospital in Switzerland. All 1,591 medication errors reported from 2009 to 2012 were categorized using the NCC MERP Medication Error Index and the WHO Classification for Patient Safety Methodology. In order to identify medication error chains, each reported medication incident was allocated to the relevant stage of the hospital medication use process. Only 25.8% of the reported medication errors were detected before they propagated through the medication use process. The majority of medication errors (74.2%) formed an error chain encompassing two or more stages. The most frequent error chain comprised preparation up to and including medication administration (45.2%). "Non-consideration of documentation/prescribing" during drug preparation was the most frequent contributor to "wrong dose" during the administration of medication. Medication error chains provide important insights for detecting and stopping medication errors before they reach the patient. Existing and new safety barriers need to be extended to interrupt error chains and to improve patient safety. © 2014, The American College of Clinical Pharmacology.

  1. Influence of non-level walking on pedometer accuracy.

    PubMed

    Leicht, Anthony S; Crowther, Robert G

    2009-05-01

    The YAMAX Digiwalker pedometer has been previously confirmed as a valid and reliable monitor during level walking; however, little is known about its accuracy during non-level walking activities or between genders. Subsequently, this study examined the influence of non-level walking and gender on pedometer accuracy. Forty-six healthy adults completed 3-min bouts of treadmill walking at their normal walking pace at 11 inclines (0-10%), while another 123 healthy adults completed walking up and down 47 stairs. During walking, participants wore a YAMAX Digiwalker SW-700 pedometer, with the number of steps taken and the number of steps registered by the pedometer recorded. Pedometer difference (steps registered minus steps taken), net error (% of steps taken), absolute error (absolute % of steps taken) and gender were examined by repeated measures two-way ANOVA and Tukey's post hoc tests. During incline walking, pedometer accuracy indices were similar between inclines and genders except for a significantly greater step difference (-7+/-5 steps vs. 1+/-4 steps) and net error (-2.4+/-1.8% at the 9% incline vs. 0.4+/-1.2% at the 2% incline). Step difference and net error were significantly greater during stair descent compared to stair ascent, while absolute error was significantly greater during stair ascent compared to stair descent. The current study demonstrated that the YAMAX Digiwalker SW-700 pedometer exhibited good accuracy during incline walking up to 10%, while it overestimated steps taken during stair ascent/descent, with greater overestimation during stair descent. Stair walking activity should be documented in field studies as the YAMAX Digiwalker SW-700 pedometer overestimates this activity type.
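
    The step-difference, net-error, and absolute-error measures used above are simple arithmetic on the counted and registered steps; the helper below shows the calculation on made-up numbers (function and values are ours, not the study's data).

```python
def pedometer_errors(steps_taken: int, steps_registered: int):
    """Return (difference, net error %, absolute error %) as defined above."""
    difference = steps_registered - steps_taken
    net_error_pct = 100.0 * difference / steps_taken
    return difference, net_error_pct, abs(net_error_pct)

# Example: 300 steps taken on a 9% incline, 293 registered by the pedometer.
diff, net, abs_err = pedometer_errors(300, 293)
print(diff, round(net, 1), round(abs_err, 1))  # -> -7 -2.3 2.3
```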

  2. Inhibitory saccadic dysfunction is associated with cerebellar injury in multiple sclerosis.

    PubMed

    Kolbe, Scott C; Kilpatrick, Trevor J; Mitchell, Peter J; White, Owen; Egan, Gary F; Fielding, Joanne

    2014-05-01

    Cognitive dysfunction is common in patients with multiple sclerosis (MS). Saccadic eye movement paradigms such as antisaccades (AS) can sensitively interrogate cognitive function, in particular, the executive and attentional processes of response selection and inhibition. Although we have previously demonstrated significant deficits in the generation of AS in MS patients, the neuropathological changes underlying these deficits were not elucidated. In this study, 24 patients with relapsing-remitting MS underwent testing using an AS paradigm. Rank correlation and multiple regression analyses were subsequently used to determine whether AS errors in these patients were associated with: (i) neurological and radiological abnormalities, as measured by standard clinical techniques, (ii) cognitive dysfunction, and (iii) regionally specific cerebral white and gray-matter damage. Although AS error rates in MS patients did not correlate with clinical disability (using the Expanded Disability Status Score), T2 lesion load or brain parenchymal fraction, AS error rate did correlate with performance on the Paced Auditory Serial Addition Task and the Symbol Digit Modalities Test, neuropsychological tests commonly used in MS. Further, voxel-wise regression analyses revealed associations between AS errors and reduced fractional anisotropy throughout most of the cerebellum, and increased mean diffusivity in the cerebellar vermis. Region-wise regression analyses confirmed that AS errors also correlated with gray-matter atrophy in the cerebellum right VI subregion. These results support the use of the AS paradigm as a marker for cognitive dysfunction in MS and implicate structural and microstructural changes to the cerebellum as a contributing mechanism for AS deficits in these patients. Copyright © 2013 Wiley Periodicals, Inc.

  3. Application of the clinical version of the narrow path walking test to identify elderly fallers.

    PubMed

    Gimmon, Yoav; Barash, Avi; Debi, Ronen; Snir, Yoram; Bar David, Yair; Grinshpon, Jacob; Melzer, Itshak

    2016-01-01

    Falling during walking is a common problem among the older population. Hence, the challenge facing clinicians is identifying who is at risk of falling during walking, in order to provide an effective intervention to reduce that risk. We aimed to assess whether the clinical version of the narrow path walking test (NPWT) could identify older adults who reported falls. A total of 160 older adults were recruited and asked to recall fall events during the past year. Subjects were instructed to walk in the laboratory at a comfortable pace within a 6 meter long narrow path, 3 trials under single-task (ST) and 3 trials under dual-task (DT) conditions, without stepping outside the path (steps outside were counted as step errors). The average trial time, number of steps, trial velocity, number of step errors, and number of cognitive task errors were calculated for ST and DT. Fear of falling, the performance oriented mobility assessment (POMA) and the mini-mental state examination (MMSE) were measured as well. Sixty-one subjects reported that they had fallen during the past year and 99 did not. Fallers took more steps and were slower than non-fallers. There were no significant differences, however, in the number of step errors or cognitive task errors in ST and DT, or in POMA and MMSE scores. Our data demonstrate slower gait speed and more steps during the NPWT in both ST and DT in fallers. There was no added value of DT over ST for the identification of older adults who fall. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  4. Methods for data classification

    DOEpatents

    Garrity, George [Okemos, MI]; Lilburn, Timothy G [Front Royal, VA]

    2011-10-11

    The present invention provides methods for classifying data and uncovering and correcting annotation errors. In particular, the present invention provides a self-organizing, self-correcting algorithm for use in classifying data. Additionally, the present invention provides a method for classifying biological taxa.

  5. Pacing in Olympic track races: competitive tactics versus best performance strategy.

    PubMed

    Thiel, Christian; Foster, Carl; Banzer, Winfried; De Koning, Jos

    2012-01-01

    The purpose of this study was to describe pacing strategies in the 800 to 10,000-m Olympic finals. We asked 1) if Olympic finals differed from World Records, 2) how variable the pace was, 3) whether runners faced catastrophic events, and 4) for the winning strategy. Publicly available data from the Beijing 2008 Olympic Games gathered by four transponder antennae under the 400-m track were analysed to extract descriptors of pacing strategies. Individual pacing patterns of 133 finalists were visualised using speed-by-distance plots. Six of eight plots differed from the patterns reported for World Records. The coefficient of running speed variation was 3.6-11.4%. In the long distance finals, runners varied their pace every 100 m by a mean 1.6-2.7%. Runners who were 'dropped' from the field achieved a stable running speed and displayed an endspurt. Top contenders used variable pacing strategies to separate themselves from the field. All races were decided during the final lap. Olympic track finalists employ pacing strategies which are different from World Record patterns. The observed micro- and macro-variations of pace may have implications for training programmes. Dropping off the pace of the leading group is an active step, and the result of interactive psychophysiological decision making.
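
    As a rough illustration of the variability measures reported above, the sketch below computes the coefficient of variation of running speed and the mean 100-m pace change from a set of split times. The split values are hypothetical, not the Beijing 2008 transponder data.

        import numpy as np

        # Hypothetical 100-m split times (seconds) for one runner; not the Beijing 2008 data.
        splits_s = np.array([14.1, 13.8, 13.9, 14.3, 14.0, 13.6, 13.7, 13.2])
        speeds = 100.0 / splits_s  # segment speeds in m/s

        # Coefficient of variation of running speed (the abstract reports 3.6-11.4%).
        cv_percent = 100.0 * speeds.std(ddof=1) / speeds.mean()

        # Mean absolute pace change from one 100-m segment to the next.
        step_change_percent = 100.0 * np.abs(np.diff(speeds)) / speeds[:-1]

        print(f"CV of speed: {cv_percent:.1f}%")
        print(f"mean 100-m pace change: {step_change_percent.mean():.1f}%")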

  6. Meta-Analysis of Single-Case Research Design Studies on Instructional Pacing.

    PubMed

    Tincani, Matt; De Mers, Marilyn

    2016-11-01

    More than four decades of research on instructional pacing has yielded varying and, in some cases, conflicting findings. The purpose of this meta-analysis was to synthesize single-case research design (SCRD) studies on instructional pacing to determine the relative benefits of brisker or slower pacing. Participants were children and youth with and without disabilities in educational settings, excluding higher education. Tau-U, a non-parametric statistic for analyzing data in SCRD studies, was used to determine effect size estimates. The article extraction yielded 13 instructional pacing studies meeting contemporary standards for high quality SCRD research. Eleven of the 13 studies reported small to large magnitude effects when two or more pacing parameters were compared, suggesting that instructional pacing is a robust instructional variable. Brisker instructional pacing with brief inter-trial interval (ITI) produced small increases in correct responding and medium to large reductions in challenging behavior compared with extended ITI. Slower instructional pacing with extended wait-time produced small increases in correct responding, but also produced small increases in challenging behavior compared with brief wait-time. Neither brief ITI nor extended wait-time meets recently established thresholds for evidence-based practice, highlighting the need for further instructional pacing research. © The Author(s) 2016.
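
    For readers unfamiliar with the effect size used above, the sketch below computes the basic A-versus-B nonoverlap component that underlies Tau-U. The full Tau-U statistic additionally corrects for baseline trend, which is omitted here, and the session counts are invented.

        # Simplified sketch of the A-vs-B nonoverlap component underlying Tau-U.
        # Full Tau-U also corrects for baseline trend; that step is omitted here.
        def tau_ab(baseline, treatment):
            """Kendall-style nonoverlap: (improvements - deteriorations) / (nA * nB)."""
            pos = sum(1 for a in baseline for b in treatment if b > a)
            neg = sum(1 for a in baseline for b in treatment if b < a)
            return (pos - neg) / (len(baseline) * len(treatment))

        baseline_phase = [3, 4, 4, 5]      # hypothetical correct responses per session (extended ITI)
        treatment_phase = [6, 5, 7, 8, 7]  # hypothetical correct responses per session (brief ITI)

        print(f"Tau(A vs B) = {tau_ab(baseline_phase, treatment_phase):.2f}")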

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, H; Chen, Z; Nath, R

    Purpose: kV fluoroscopic imaging combined with MV treatment beam imaging has been investigated for intrafractional motion monitoring and correction. It is, however, subject to additional kV imaging dose to normal tissue. To balance tracking accuracy and imaging dose, we previously proposed an adaptive imaging strategy to dynamically decide future imaging type and moments based on motion tracking uncertainty. kV imaging may be used continuously for maximal accuracy or only when the position uncertainty (probability of out of threshold) is high if a preset imaging dose limit is considered. In this work, we propose more accurate methods to estimate tracking uncertainty through analyzing acquired data in real-time. Methods: We simulated the motion tracking process based on a previously developed imaging framework (MV + initial seconds of kV imaging) using real-time breathing data from 42 patients. Motion tracking errors for each time point were collected together with the time point's corresponding features, such as tumor motion speed and 2D tracking error of previous time points. We tested three methods for error uncertainty estimation based on the features: conditional probability distribution, logistic regression modeling, and support vector machine (SVM) classification to detect errors exceeding a threshold. Results: For the conditional probability distribution, polynomial regressions on three features (previous tracking error, prediction quality, and cosine of the angle between the trajectory and the treatment beam) showed strong correlation with the variation (uncertainty) of the mean 3D tracking error and its standard deviation: R-square = 0.94 and 0.90, respectively. The logistic regression and SVM classification successfully identified about 95% of tracking errors exceeding a 2.5 mm threshold. Conclusion: The proposed methods can reliably estimate the motion tracking uncertainty in real-time, which can be used to guide adaptive additional imaging to confirm the tumor is within the margin or initialize motion compensation if it is out of the margin.
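
    A minimal sketch of the threshold-exceedance classification step described above is given below, using scikit-learn's logistic regression and SVM. The three features and the synthetic data-generating rule are stand-ins for the real tracking features and breathing traces, so the reported ~95% detection rate should not be expected from this toy example.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import recall_score

        rng = np.random.default_rng(0)
        n = 2000
        X = np.column_stack([
            rng.gamma(2.0, 0.6, n),      # previous 2D tracking error (mm)
            rng.uniform(0.0, 1.0, n),    # prediction-quality score
            rng.uniform(-1.0, 1.0, n),   # cos(angle between trajectory and treatment beam)
        ])
        # Synthetic rule: larger previous error and poorer geometry raise the true 3D error.
        true_error = 0.8 * X[:, 0] + 1.2 * (1 - X[:, 1]) + 0.6 * np.abs(X[:, 2]) + rng.normal(0, 0.3, n)
        y = (true_error > 2.5).astype(int)   # 1 = error exceeds the 2.5 mm threshold

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        for name, clf in [("logistic", LogisticRegression(max_iter=1000)),
                          ("SVM", SVC(kernel="rbf", C=1.0))]:
            clf.fit(X_tr, y_tr)
            print(name, "sensitivity:", round(recall_score(y_te, clf.predict(X_te)), 2))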

  8. Real-time Neuroimaging and Cognitive Monitoring Using Wearable Dry EEG

    PubMed Central

    Mullen, Tim R.; Kothe, Christian A.E.; Chi, Mike; Ojeda, Alejandro; Kerth, Trevor; Makeig, Scott; Jung, Tzyy-Ping; Cauwenberghs, Gert

    2015-01-01

    Goal: We present and evaluate a wearable high-density dry electrode EEG system and an open-source software framework for online neuroimaging and state classification. Methods: The system integrates a 64-channel dry EEG form-factor with wireless data streaming for online analysis. A real-time software framework is applied, including adaptive artifact rejection, cortical source localization, multivariate effective connectivity inference, data visualization, and cognitive state classification from connectivity features using a constrained logistic regression approach (ProxConn). We evaluate the system identification methods on simulated 64-channel EEG data. Then we evaluate system performance, using ProxConn and a benchmark ERP method, in classifying response errors in 9 subjects using the dry EEG system. Results: Simulations yielded high accuracy (AUC=0.97±0.021) for real-time cortical connectivity estimation. Response error classification using cortical effective connectivity (sdDTF) was significantly above chance with similar performance (AUC) for cLORETA (0.74±0.09) and LCMV (0.72±0.08) source localization. Cortical ERP-based classification was equivalent to ProxConn for cLORETA (0.74±0.16) but significantly better for LCMV (0.82±0.12). Conclusion: We demonstrated the feasibility for real-time cortical connectivity analysis and cognitive state classification from high-density wearable dry EEG. Significance: This paper is the first validated application of these methods to 64-channel dry EEG. The work addresses a need for robust real-time measurement and interpretation of complex brain activity in the dynamic environment of the wearable setting. Such advances can have broad impact in research, medicine, and brain-computer interfaces. The pipelines are made freely available in the open-source SIFT and BCILAB toolboxes. PMID:26415149

  9. Quality assurance of chemical ingredient classification for the National Drug File - Reference Terminology.

    PubMed

    Zheng, Ling; Yumak, Hasan; Chen, Ling; Ochs, Christopher; Geller, James; Kapusnik-Uner, Joan; Perl, Yehoshua

    2017-09-01

    The National Drug File - Reference Terminology (NDF-RT) is a large and complex drug terminology consisting of several classification hierarchies on top of an extensive collection of drug concepts. These hierarchies provide important information about clinical drugs, e.g., their chemical ingredients, mechanisms of action, dosage form and physiological effects. Within NDF-RT such information is represented using tens of thousands of roles connecting drugs to classifications. In previous studies, we have introduced various kinds of Abstraction Networks to summarize the content and structure of terminologies in order to facilitate their visual comprehension, and support quality assurance of terminologies. However, these previous kinds of Abstraction Networks are not appropriate for summarizing the NDF-RT classification hierarchies, due to its unique structure. In this paper, we present the novel Ingredient Abstraction Network (IAbN) to summarize, visualize and support the audit of NDF-RT's Chemical Ingredients hierarchy and its associated drugs. A common theme in our quality assurance framework is to use characterizations of sets of concepts, revealed by the Abstraction Network structure, to capture concepts, the modeling of which is more complex than for other concepts. For the IAbN, we characterize drug ingredient concepts as more complex if they belong to IAbN groups with multiple parent groups. We show that such concepts have a statistically significantly higher rate of errors than a control sample and identify two especially common patterns of errors. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Improvement in defect classification efficiency by grouping disposition for reticle inspection

    NASA Astrophysics Data System (ADS)

    Lai, Rick; Hsu, Luke T. H.; Chang, Peter; Ho, C. H.; Tsai, Frankie; Long, Garrett; Yu, Paul; Miller, John; Hsu, Vincent; Chen, Ellison

    2005-11-01

    As the lithography design rule of IC manufacturing continues to migrate toward more advanced technology nodes, the mask error enhancement factor (MEEF) increases and necessitates the use of aggressive OPC features. These aggressive OPC features pose challenges to reticle inspection due to high false detection, which is time-consuming for defect classification and impacts the throughput of mask manufacturing. Moreover, higher MEEF leads to stricter mask defect capture criteria, so new-generation reticle inspection tools are equipped with better detection capability. Hence, mask-process-induced defects, which were once undetectable, are now detected, increasing the total defect count. Therefore, how to review and characterize reticle defects efficiently is becoming more significant. A new defect review system called ReviewSmart has been developed based on the concept of defect grouping disposition. The review system intelligently bins repeating or similar defects into defect groups and thus allows operators to review large numbers of defects more efficiently. Compared to the conventional defect review method, ReviewSmart not only reduces defect classification time and human judgment error, but also eliminates the desensitization that was formerly inevitable. In this study, we attempt to explore the most efficient use of ReviewSmart by evaluating various defect binning conditions. The optimal binning conditions are obtained and have been verified for fidelity qualification through inspection reports (IRs) of production masks. The experimental results help to achieve the best defect classification efficiency when using ReviewSmart in mask manufacturing and development.

  11. A Quantile Mapping Bias Correction Method Based on Hydroclimatic Classification of the Guiana Shield

    PubMed Central

    Ringard, Justine; Seyler, Frederique; Linguet, Laurent

    2017-01-01

    Satellite precipitation products (SPPs) provide alternative precipitation data for regions with sparse rain gauge measurements. However, SPPs are subject to different types of error that need correction. Most SPP bias correction methods use the statistical properties of the rain gauge data to adjust the corresponding SPP data. The statistical adjustment does not make it possible to correct the pixels of SPP data for which there is no rain gauge data. The solution proposed in this article is to correct the daily SPP data for the Guiana Shield using a novel two-step approach, without taking into account the daily gauge data of the pixel to be corrected, but the daily gauge data from surrounding pixels. In this case, a spatial analysis must be involved. The first step defines hydroclimatic areas using a spatial classification that considers precipitation data with the same temporal distributions. The second step uses the Quantile Mapping bias correction method to correct the daily SPP data contained within each hydroclimatic area. We validate the results by comparing the corrected SPP data and daily rain gauge measurements using relative RMSE and relative bias statistical errors. The results show that analysis scale variation reduces rBIAS and rRMSE significantly. The spatial classification avoids mixing rainfall data with different temporal characteristics in each hydroclimatic area, and the defined bias correction parameters are more realistic and appropriate. This study demonstrates that hydroclimatic classification is relevant for implementing bias correction methods at the local scale. PMID:28621723

  12. A Quantile Mapping Bias Correction Method Based on Hydroclimatic Classification of the Guiana Shield.

    PubMed

    Ringard, Justine; Seyler, Frederique; Linguet, Laurent

    2017-06-16

    Satellite precipitation products (SPPs) provide alternative precipitation data for regions with sparse rain gauge measurements. However, SPPs are subject to different types of error that need correction. Most SPP bias correction methods use the statistical properties of the rain gauge data to adjust the corresponding SPP data. The statistical adjustment does not make it possible to correct the pixels of SPP data for which there is no rain gauge data. The solution proposed in this article is to correct the daily SPP data for the Guiana Shield using a novel two-step approach, without taking into account the daily gauge data of the pixel to be corrected, but the daily gauge data from surrounding pixels. In this case, a spatial analysis must be involved. The first step defines hydroclimatic areas using a spatial classification that considers precipitation data with the same temporal distributions. The second step uses the Quantile Mapping bias correction method to correct the daily SPP data contained within each hydroclimatic area. We validate the results by comparing the corrected SPP data and daily rain gauge measurements using relative RMSE and relative bias statistical errors. The results show that analysis scale variation reduces rBIAS and rRMSE significantly. The spatial classification avoids mixing rainfall data with different temporal characteristics in each hydroclimatic area, and the defined bias correction parameters are more realistic and appropriate. This study demonstrates that hydroclimatic classification is relevant for implementing bias correction methods at the local scale.
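
    A minimal sketch of empirical quantile mapping, the correction step named in both records above, is shown below. The gauge and satellite arrays are synthetic; in the study the mapping is fitted separately within each hydroclimatic area using gauge data from surrounding pixels.

        import numpy as np

        def quantile_map(satellite, gauge_reference, satellite_reference):
            """Replace each satellite value by the gauge value at the same empirical quantile."""
            sat_sorted = np.sort(satellite_reference)
            gauge_sorted = np.sort(gauge_reference)
            # empirical quantile of each value within the satellite reference sample
            q = np.searchsorted(sat_sorted, satellite, side="right") / len(sat_sorted)
            q = np.clip(q, 0.0, 1.0)
            return np.quantile(gauge_sorted, q)

        rng = np.random.default_rng(1)
        gauge_ref = rng.gamma(2.0, 5.0, 3000)                 # daily gauge rainfall in one area (mm)
        sat_ref = gauge_ref * 1.3 + rng.normal(0, 2, 3000)    # biased satellite estimates, same days
        sat_new = rng.gamma(2.0, 6.5, 10)                     # new SPP values to correct

        corrected = quantile_map(sat_new, gauge_ref, sat_ref)
        print(np.round(corrected, 1))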

  13. An investigation of the usability of sound recognition for source separation of packaging wastes in reverse vending machines.

    PubMed

    Korucu, M Kemal; Kaplan, Özgür; Büyük, Osman; Güllü, M Kemal

    2016-10-01

    In this study, we investigate the usability of sound recognition for source separation of packaging wastes in reverse vending machines (RVMs). For this purpose, an experimental setup equipped with a sound recording mechanism was prepared. Packaging waste sounds generated by three physical impacts such as free falling, pneumatic hitting and hydraulic crushing were separately recorded using two different microphones. To classify the waste types and sizes based on sound features of the wastes, a support vector machine (SVM) and a hidden Markov model (HMM) based sound classification systems were developed. In the basic experimental setup in which only free falling impact type was considered, SVM and HMM systems provided 100% classification accuracy for both microphones. In the expanded experimental setup which includes all three impact types, material type classification accuracies were 96.5% for dynamic microphone and 97.7% for condenser microphone. When both the material type and the size of the wastes were classified, the accuracy was 88.6% for the microphones. The modeling studies indicated that hydraulic crushing impact type recordings were very noisy for an effective sound recognition application. In the detailed analysis of the recognition errors, it was observed that most of the errors occurred in the hitting impact type. According to the experimental results, it can be said that the proposed novel approach for the separation of packaging wastes could provide a high classification performance for RVMs. Copyright © 2016 Elsevier Ltd. All rights reserved.
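
    The sketch below illustrates the SVM branch of such a system on synthetic band-energy vectors standing in for extracted impact-sound features; the material classes, feature layout, and accuracy are illustrative assumptions, not the study's recordings or results.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(2)

        def fake_band_energies(material, n):
            """Synthetic 8-band energy profiles: glass rings high, metal mid, plastic low."""
            center = {"glass": 6, "metal": 3, "plastic": 1}[material]
            base = np.exp(-0.5 * (np.arange(8) - center) ** 2)
            return base + rng.normal(0, 0.15, size=(n, 8))

        X = np.vstack([fake_band_energies(m, 60) for m in ("glass", "metal", "plastic")])
        y = np.repeat(["glass", "metal", "plastic"], 60)

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
        print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))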

  14. Her2Net: A Deep Framework for Semantic Segmentation and Classification of Cell Membranes and Nuclei in Breast Cancer Evaluation.

    PubMed

    Saha, Monjoy; Chakraborty, Chandan

    2018-05-01

    We present an efficient deep learning framework for identifying, segmenting, and classifying cell membranes and nuclei from human epidermal growth factor receptor-2 (HER2)-stained breast cancer images with minimal user intervention. This is a long-standing issue for pathologists because the manual quantification of HER2 is error-prone, costly, and time-consuming. Hence, we propose a deep learning-based HER2 deep neural network (Her2Net) to solve this issue. The convolutional and deconvolutional parts of the proposed Her2Net framework consisted mainly of multiple convolution layers, max-pooling layers, spatial pyramid pooling layers, deconvolution layers, up-sampling layers, and trapezoidal long short-term memory (TLSTM). A fully connected layer and a softmax layer were also used for classification and error estimation. Finally, HER2 scores were calculated based on the classification results. The main contribution of our proposed Her2Net framework includes the implementation of TLSTM and a deep learning framework for cell membrane and nucleus detection, segmentation, and classification and HER2 scoring. Our proposed Her2Net achieved 96.64% precision, 96.79% recall, 96.71% F-score, 93.08% negative predictive value, 98.33% accuracy, and a 6.84% false-positive rate. Our results demonstrate the high accuracy and wide applicability of the proposed Her2Net in the context of HER2 scoring for breast cancer evaluation.
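
    The evaluation metrics quoted above follow the standard confusion-matrix definitions; the sketch below computes them from hypothetical counts (the counts are placeholders, not the paper's data).

        # Hypothetical confusion-matrix counts, not the Her2Net results.
        TP, FP, TN, FN = 960, 33, 980, 32

        precision = TP / (TP + FP)
        recall = TP / (TP + FN)                     # also called sensitivity
        f_score = 2 * precision * recall / (precision + recall)
        npv = TN / (TN + FN)                        # negative predictive value
        accuracy = (TP + TN) / (TP + FP + TN + FN)
        false_positive_rate = FP / (FP + TN)

        for name, value in [("precision", precision), ("recall", recall), ("F-score", f_score),
                            ("NPV", npv), ("accuracy", accuracy), ("FPR", false_positive_rate)]:
            print(f"{name}: {100 * value:.2f}%")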

  15. Relationship between left atrium catheter contact force and pacing threshold.

    PubMed

    Barrio-López, Teresa; Ortiz, Mercedes; Castellanos, Eduardo; Lázaro, Carla; Salas, Jefferson; Madero, Sergio; Almendral, Jesús

    2017-08-01

    The purpose of this study is to analyze the relationship between contact force (CF) and pacing threshold in the left atrium (LA). Six to ten LA sites were studied in 28 consecutive patients with atrial fibrillation undergoing pulmonary vein isolation. Median CF, bipolar and unipolar electrogram voltage, impedance, and bipolar and unipolar thresholds for consistent constant capture and for consistent intermittent capture were measured at each site. Pacing threshold measurements were performed at 188 LA sites. Both unipolar and bipolar pacing thresholds correlated significantly with median CF; however, unipolar pacing threshold correlated better (unipolar: Pearson R -0.45; p < 0.001; Spearman Rho -0.62; p < 0.001, bipolar: Pearson R -0.39; p < 0.001; Spearman Rho -0.52; p < 0.001). Consistent constant capture threshold had better correlation with median CF than consistent intermittent capture threshold for both unipolar and bipolar pacing (Pearson R -0.45; p < 0.001 and Spearman Rho -0.62; p < 0.001 vs. Pearson R -0.35; p < 0.001; Spearman Rho -0.52; p < 0.001). The best pacing threshold cutoff point to detect a good CF (>10 g) was 3.25 mA for unipolar pacing with 69% specificity and 73% sensitivity. Both increased to 80% specificity and 74% sensitivity for sites with normal bipolar voltage and a pacing threshold cutoff value of 2.85 mA. Pacing thresholds correlate with CF in the human LA that has not previously been ablated. Since the combination of a normal bipolar voltage and a unipolar pacing threshold <2.85 mA provides reasonable parameters of validity, pacing threshold could be of interest as a surrogate for CF in the LA.
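
    A minimal sketch of the two analyses described above, correlation between pacing threshold and contact force and selection of a threshold cutoff for detecting good contact (CF > 10 g), is given below on simulated paired values, not the study's measurements.

        import numpy as np
        from scipy.stats import pearsonr, spearmanr

        rng = np.random.default_rng(3)
        cf = rng.gamma(3.0, 5.0, 188)   # simulated contact force (g) at 188 sites
        # simulated unipolar thresholds (mA), negatively related to CF plus noise
        threshold_ma = np.clip(6.0 - 0.15 * cf + rng.normal(0, 0.8, 188), 0.5, None)

        print("Pearson R:", round(pearsonr(threshold_ma, cf)[0], 2))
        print("Spearman rho:", round(spearmanr(threshold_ma, cf)[0], 2))

        good_contact = cf > 10.0
        best = None
        for cut in np.arange(1.0, 6.0, 0.05):
            predicted_good = threshold_ma < cut      # a low threshold suggests good contact
            sens = (predicted_good & good_contact).sum() / good_contact.sum()
            spec = (~predicted_good & ~good_contact).sum() / (~good_contact).sum()
            if best is None or sens + spec > best[0]:
                best = (sens + spec, cut, sens, spec)
        print(f"best cutoff: {best[1]:.2f} mA (sensitivity {best[2]:.0%}, specificity {best[3]:.0%})")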

  16. A Locality-Constrained and Label Embedding Dictionary Learning Algorithm for Image Classification.

    PubMed

    Zhengming Li; Zhihui Lai; Yong Xu; Jian Yang; Zhang, David

    2017-02-01

    Locality and label information of training samples play an important role in image classification. However, previous dictionary learning algorithms do not take the locality and label information of atoms into account together in the learning process, and thus their performance is limited. In this paper, a discriminative dictionary learning algorithm, called the locality-constrained and label embedding dictionary learning (LCLE-DL) algorithm, was proposed for image classification. First, the locality information was preserved using the graph Laplacian matrix of the learned dictionary instead of the conventional one derived from the training samples. Then, the label embedding term was constructed using the label information of atoms instead of the classification error term, which contained discriminating information of the learned dictionary. The optimal coding coefficients derived by the locality-based and label-based reconstruction were effective for image classification. Experimental results demonstrated that the LCLE-DL algorithm can achieve better performance than some state-of-the-art algorithms.
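
    The locality ingredient mentioned above relies on a graph Laplacian built over dictionary atoms. The sketch below constructs an unnormalised k-NN graph Laplacian L = D - W for a random set of atoms; the full LCLE-DL objective (label embedding and coding) is not reproduced, and the atom matrix is an assumption for illustration.

        import numpy as np

        def knn_graph_laplacian(atoms, k=3, sigma=1.0):
            """atoms: (feature_dim, n_atoms). Returns the unnormalised Laplacian (n_atoms, n_atoms)."""
            n = atoms.shape[1]
            # pairwise squared Euclidean distances between atoms
            sq = ((atoms[:, :, None] - atoms[:, None, :]) ** 2).sum(axis=0)
            W = np.zeros((n, n))
            for i in range(n):
                neighbours = np.argsort(sq[i])[1:k + 1]          # skip the atom itself
                W[i, neighbours] = np.exp(-sq[i, neighbours] / (2 * sigma ** 2))
            W = np.maximum(W, W.T)                               # symmetrise the adjacency
            return np.diag(W.sum(axis=1)) - W

        atoms = np.random.default_rng(4).normal(size=(20, 12))   # 12 atoms in a 20-D feature space
        L = knn_graph_laplacian(atoms)
        print(L.shape, "Laplacian row sums ~ 0:", np.allclose(L.sum(axis=1), 0))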

  17. Convolutional neural network with transfer learning for rice type classification

    NASA Astrophysics Data System (ADS)

    Patel, Vaibhav Amit; Joshi, Manjunath V.

    2018-04-01

    Presently, rice type is identified manually by humans, which is time-consuming and error-prone. Therefore, there is a need to do this by machine, which makes it faster and more accurate. This paper proposes a deep learning based method for classification of rice types. We propose two methods to classify the rice types. In the first method, we train a deep convolutional neural network (CNN) using the given segmented rice images. In the second method, we train a combination of a pretrained VGG16 network and the proposed method, while using transfer learning in which the weights of a pretrained network are used to achieve better accuracy. Our approach can also be used for classification of rice grain as broken or fine. We train a 5-class model for classifying rice types using 4000 training images and another 2-class model for the classification of broken and normal rice using 1600 training images. We observe that, despite having distinct rice images, our architecture, pretrained on ImageNet data, boosts classification accuracy significantly.
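
    A sketch of the transfer-learning variant described above is given below, using torchvision's ImageNet-pretrained VGG16 with a retrained 5-way head. The data loader, image sizes, and training step are placeholders; the authors' exact architecture and hyperparameters may differ.

        import torch
        import torch.nn as nn
        from torchvision import models

        model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        for param in model.features.parameters():     # freeze the convolutional backbone
            param.requires_grad = False
        model.classifier[6] = nn.Linear(4096, 5)      # 5 rice classes (use 2 for broken vs normal)

        optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
        criterion = nn.CrossEntropyLoss()

        # One illustrative training step on a random batch-shaped tensor,
        # standing in for a real loader of segmented rice images.
        images = torch.randn(8, 3, 224, 224)
        labels = torch.randint(0, 5, (8,))
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        print("example loss:", float(loss))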

  18. Application of partial least squares near-infrared spectral classification in diabetic identification

    NASA Astrophysics Data System (ADS)

    Yan, Wen-juan; Yang, Ming; He, Guo-quan; Qin, Lin; Li, Gang

    2014-11-01

    In order to identify diabetic patients using the tongue near-infrared (NIR) spectrum, a spectral classification model of the NIR reflectivity of the tongue tip is proposed, based on the partial least squares (PLS) method. 39 sample data of tongue tip NIR spectra were collected from healthy people and diabetic patients, respectively. After pretreatment of the reflectivity, the spectral data were set as the independent variable matrix and the classification information as the dependent variable matrix. Samples were divided into two groups, 53 samples as the calibration set and 25 as the prediction set, and PLS was used to build the classification model. The model constructed from the 53 calibration samples has a correlation of 0.9614 and a root mean square error of cross-validation (RMSECV) of 0.1387. The predictions for the 25 samples have a correlation of 0.9146 and an RMSECV of 0.2122. The experimental results show that the PLS method can achieve good classification of healthy people and diabetic patients.
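
    A minimal PLS discriminant-analysis sketch in the spirit of the model above is shown below: class membership is coded 0/1, a PLS regression is fitted on the calibration spectra, and the continuous prediction is thresholded at 0.5. The spectra are synthetic, and the component count and printed statistics are illustrative only.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(5)
        wavelengths = 120

        def spectra(n, shift):
            """Synthetic reflectance spectra; `shift` adds a small class-dependent slope."""
            base = np.sin(np.linspace(0, 6, wavelengths)) + shift * np.linspace(0, 1, wavelengths)
            return base + rng.normal(0, 0.05, size=(n, wavelengths))

        X = np.vstack([spectra(39, 0.0), spectra(39, 0.3)])   # 39 healthy + 39 diabetic samples
        y = np.repeat([0.0, 1.0], 39)                         # 0 = healthy, 1 = diabetic

        X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=25, random_state=0)  # 53 / 25
        pls = PLSRegression(n_components=6).fit(X_cal, y_cal)
        y_hat = pls.predict(X_val).ravel()

        rmse = float(np.sqrt(np.mean((y_hat - y_val) ** 2)))
        print("correlation:", round(np.corrcoef(y_hat, y_val)[0, 1], 3))
        print("RMSE:", round(rmse, 3))
        print("accuracy:", round(((y_hat > 0.5) == (y_val == 1)).mean(), 3))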

  19. Saccadic Eye Movements in Anorexia Nervosa

    PubMed Central

    Phillipou, Andrea; Rossell, Susan Lee; Gurvich, Caroline; Hughes, Matthew Edward; Castle, David Jonathan; Nibbs, Richard Grant; Abel, Larry Allen

    2016-01-01

    Background: Anorexia Nervosa (AN) has a mortality rate among the highest of any mental illness, though the factors involved in the condition remain unclear. Recently, the potential neurobiological underpinnings of the condition have become of increasing interest. Saccadic eye movement tasks have proven useful in our understanding of the neurobiology of some other psychiatric illnesses as they utilise known brain regions, but to date have not been examined in AN. The aim of this study was to investigate whether individuals with AN differ from healthy individuals in performance on a range of saccadic eye movement tasks. Methods: 24 females with AN and 25 healthy individuals matched for age, gender and premorbid intelligence participated in the study. Participants were required to undergo memory-guided and self-paced saccade tasks, and an interleaved prosaccade/antisaccade/no-go saccade task while undergoing functional magnetic resonance imaging (fMRI). Results: AN participants were found to make prosaccades of significantly shorter latency than healthy controls. AN participants also made an increased number of inhibitory errors on the memory-guided saccade task. Groups did not significantly differ in antisaccade, no-go saccade or self-paced saccade performance, or fMRI findings. Discussion: The results suggest a potential role of GABA in the superior colliculus in the psychopathology of AN. PMID:27010196

  20. Multiple Category-Lot Quality Assurance Sampling: A New Classification System with Application to Schistosomiasis Control

    PubMed Central

    Olives, Casey; Valadez, Joseph J.; Brooker, Simon J.; Pagano, Marcello

    2012-01-01

    Background: Originally a binary classifier, Lot Quality Assurance Sampling (LQAS) has proven to be a useful tool for classification of the prevalence of Schistosoma mansoni into multiple categories (≤10%, >10 and <50%, ≥50%), and semi-curtailed sampling has been shown to effectively reduce the number of observations needed to reach a decision. To date, the statistical underpinnings for Multiple Category-LQAS (MC-LQAS) have not received full treatment. We explore the analytical properties of MC-LQAS, and validate its use for the classification of S. mansoni prevalence in multiple settings in East Africa. Methodology: We outline MC-LQAS design principles and formulae for operating characteristic curves. In addition, we derive the average sample number for MC-LQAS when utilizing semi-curtailed sampling and introduce curtailed sampling in this setting. We also assess the performance of MC-LQAS designs with maximum sample sizes of n = 15 and n = 25 via a weighted kappa-statistic using S. mansoni data collected in 388 schools from four studies in East Africa. Principal Findings: Overall performance of MC-LQAS classification was high (kappa-statistic of 0.87). In three of the studies, the kappa-statistic for a design with n = 15 was greater than 0.75. In the fourth study, where these designs performed poorly (kappa-statistic less than 0.50), the majority of observations fell in regions where potential error is known to be high. Employment of semi-curtailed and curtailed sampling further reduced the sample size by as many as 0.5 and 3.5 observations per school, respectively, without increasing classification error. Conclusion/Significance: This work provides the needed analytics to understand the properties of MC-LQAS for assessing the prevalence of S. mansoni and shows that in most settings a sample size of 15 children provides a reliable classification of schools. PMID:22970333
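
    Classification performance above is summarised with a weighted kappa statistic; the sketch below scores agreement between hypothetical MC-LQAS prevalence classes and reference classes with a linearly weighted kappa (the school labels are invented, not the study data).

        from sklearn.metrics import cohen_kappa_score

        # 0: <=10%, 1: >10% and <50%, 2: >=50% prevalence category, one label per school
        true_class = [0, 0, 1, 1, 2, 2, 1, 0, 2, 1, 0, 2, 1, 1, 2]
        lqas_class = [0, 0, 1, 2, 2, 2, 1, 0, 2, 1, 0, 1, 1, 1, 2]

        kappa = cohen_kappa_score(true_class, lqas_class, weights="linear")
        print(f"weighted kappa: {kappa:.2f}")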

  1. AVNM: A Voting based Novel Mathematical Rule for Image Classification.

    PubMed

    Vidyarthi, Ankit; Mittal, Namita

    2016-12-01

    In machine learning, the accuracy of a system depends upon its classification results, and classification accuracy plays an imperative role in various domains. The non-parametric K-Nearest Neighbor (KNN) classifier is the most widely used classifier for pattern analysis. Despite its simplicity and effectiveness, the main problem associated with the KNN classifier is the selection of the number of nearest neighbors, i.e. "k", used for computation. At present, it is hard to find an optimal value of "k" with any statistical algorithm that gives perfect accuracy in terms of a low misclassification error rate. Motivated by this problem, a new sample-space-reduction weighted voting mathematical rule (AVNM) is proposed for classification in machine learning. Like KNN, the proposed AVNM rule is non-parametric. AVNM uses a weighted voting mechanism with sample space reduction to learn and predict the class label of an unlabeled sample. Unlike the KNN algorithm, AVNM requires no initial selection of a predefined variable or neighbor count, and the proposed classifier also reduces the effect of outliers. To verify the performance of the proposed AVNM classifier, experiments were performed on 10 standard datasets taken from the UCI database and one manually created dataset. The experimental results show that the proposed AVNM rule outperforms the KNN classifier and its variants, with higher confusion-matrix-based accuracy. The proposed AVNM rule is based on a sample space reduction mechanism for identifying an optimal number of nearest neighbors. AVNM yields better classification accuracy and a lower error rate compared with the state-of-the-art KNN algorithm and its variants, and it automates nearest neighbor selection, improving the classification rate for the UCI datasets and the manually created dataset. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  2. Triple-site pacing for cardiac resynchronization in permanent atrial fibrillation - Acute phase results from a prospective observational study.

    PubMed

    Marques, Pedro; Nobre Menezes, Miguel; Lima da Silva, Gustavo; Bernardes, Ana; Magalhães, Andreia; Cortez-Dias, Nuno; Carpinteiro, Luís; de Sousa, João; Pinto, Fausto J

    2016-06-01

    Multi-site pacing is emerging as a new method for improving response to cardiac resynchronization therapy (CRT), but has been little studied, especially in patients with atrial fibrillation. We aimed to assess the effects of triple-site (Tri-V) vs. biventricular (Bi-V) pacing on hemodynamics and QRS duration. This was a prospective observational study of patients with permanent atrial fibrillation and ejection fraction <40% undergoing CRT implantation (n=40). One right ventricular (RV) lead was implanted in the apex and another in the right ventricular outflow tract (RVOT) septal wall. A left ventricular (LV) lead was implanted in a conventional venous epicardial position. Cardiac output (using the FloTrac™ Vigileo™ system), mean QRS and ejection fraction were calculated. Mean cardiac output was 4.81±0.97 l/min with Tri-V, 4.68±0.94 l/min with RVOT septal and LV pacing, and 4.68±0.94 l/min with RV apical and LV pacing (p<0.001 for Tri-V vs. both BiV). Mean pre-implantation QRS was 170±25 ms, 123±18 ms with Tri-V, 141±25 ms with RVOT septal pacing and LV pacing and 145±19 with RV apical and LV pacing (p<0.001 for Tri-V vs. both BiV and pre-implantation). Mean ejection fraction was significantly higher with Tri-V (30±11%) vs. Bi-V pacing (28±12% with RVOT septal and LV pacing and 28±11 with RV apical and LV pacing) and pre-implantation (25±8%). Tri-V pacing produced higher cardiac output and shorter QRS duration than Bi-V pacing. This may have a significant impact on the future of CRT. Copyright © 2016 Sociedade Portuguesa de Cardiologia. Published by Elsevier España. All rights reserved.

  3. Risk perception influences athletic pacing strategy.

    PubMed

    Micklewright, Dominic; Parry, David; Robinson, Tracy; Deacon, Greg; Renfree, Andrew; St Clair Gibson, Alan; Matthews, William J

    2015-05-01

    The objective of this study is to examine risk taking and risk perception associations with perceived exertion, pacing, and performance in athletes. Two experiments were conducted in which risk perception was assessed using the domain-specific risk taking (DOSPERT) scale in 20 novice cyclists (experiment 1) and 32 experienced ultramarathon runners (experiment 2). In experiment 1, participants predicted their pace and then performed a 5-km maximum effort cycling time trial on a calibrated Kingcycle mounted bicycle. Split times and perceived exertion were recorded every kilometer. In experiment 2, each participant predicted their split times before running a 100-km ultramarathon. Split times and perceived exertion were recorded at seven checkpoints. In both experiments, higher and lower risk perception groups were created using median split of DOSPERT scores. In experiment 1, pace during the first kilometer was faster among lower risk perceivers compared with higher risk perceivers (t(18) = 2.0, P = 0.03) and faster among higher risk takers compared with lower risk takers (t(18) = 2.2, P = 0.02). Actual pace was slower than predicted pace during the first kilometer in both the higher risk perceivers (t(9) = -4.2, P = 0.001) and lower risk perceivers (t(9) = -1.8, P = 0.049). In experiment 2, pace during the first 36 km was faster among lower risk perceivers compared with higher risk perceivers (t(16) = 2.0, P = 0.03). Irrespective of risk perception group, actual pace was slower than predicted pace during the first 18 km (t(16) = 8.9, P < 0.001) and from 18 to 36 km (t(16) = 4.0, P < 0.001). In both experiments, there was no difference in performance between higher and lower risk perception groups. Initial pace is associated with an individual's perception of risk, with low perceptions of risk being associated with a faster starting pace. Large differences between predicted and actual pace suggest that the performance template lacks accuracy, perhaps indicating greater reliance on momentary pacing decisions rather than preplanned strategy.

  4. The effect of visitor number and spice provisioning in pacing expression by jaguars evaluated through a case study.

    PubMed

    Vidal, L S; Guilherme, F R; Silva, V F; Faccio, M C S R; Martins, M M; Briani, D C

    2016-06-01

    Captive animals exhibit stereotypic pacing in response to multiple causes, including the inability to escape from human contact. Environmental enrichment techniques can minimize pacing expression. By using an individual-based approach, we addressed whether the amount of time two males and a female jaguar (Panthera onca) devote to pacing varied with the number of visitors and tested the effectiveness of cinnamon and black pepper in reducing pacing. The amount of time that all jaguars engaged in pacing increased significantly with the number of visitors. Despite the difference between the males regarding age and housing conditions, both devoted significantly less time to pacing following the addition of both spices, which indicates their suitability as enrichment techniques. Mean time devoted to pacing among the treatments did not differ for the female. Our findings point to the validity of individual-based approaches, as they can reveal how suitable olfactory stimuli are for minimizing stereotypies irrespective of particular traits.

  5. Microscopic saw mark analysis: an empirical approach.

    PubMed

    Love, Jennifer C; Derrick, Sharon M; Wiersema, Jason M; Peters, Charles

    2015-01-01

    Microscopic saw mark analysis is a well published and generally accepted qualitative analytical method. However, little research has focused on identifying and mitigating potential sources of error associated with the method. The presented study proposes the use of classification trees and random forest classifiers as an optimal, statistically sound approach to mitigate the potential for error of variability and outcome error in microscopic saw mark analysis. The statistical model was applied to 58 experimental saw marks created with four types of saws. The saw marks were made in fresh human femurs obtained through anatomical gift and were analyzed using a Keyence digital microscope. The statistical approach weighed the variables based on discriminatory value and produced decision trees with an associated outcome error rate of 8.62-17.82%. © 2014 American Academy of Forensic Sciences.
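
    A sketch of the random-forest classification step described above is given below on simulated mark variables; the five features and four saw classes are placeholders for the study's microscopically scored characteristics, so the error rates will not match the reported 8.62-17.82%.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(6)
        n_per_saw, saw_types = 15, 4
        # Hypothetical continuous/ordinal mark variables (e.g., blade drift, harmonics, floor shape).
        X = np.vstack([rng.normal(loc=saw, scale=0.8, size=(n_per_saw, 5)) for saw in range(saw_types)])
        y = np.repeat(np.arange(saw_types), n_per_saw)

        forest = RandomForestClassifier(n_estimators=300, oob_score=True, random_state=0).fit(X, y)
        print("OOB error rate:", round(1 - forest.oob_score_, 3))
        print("cross-validated error rate:", round(1 - cross_val_score(forest, X, y, cv=5).mean(), 3))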

  6. Prevention of Adverse Electrical and Mechanical Remodeling with Bi-Ventricular Pacing in a Rabbit Model of Myocardial Infarction

    PubMed Central

    Saba, Samir; Mathier, Michael A.; Mehdi, Haider; Gursoy, Erdal; Liu, Tong; Choi, Bum-Rak; Salama, Guy; London, Barry

    2008-01-01

    Background: Biventricular (BIV) pacing can improve cardiac function in heart failure (HF). Objective: To investigate the mechanisms of benefit of BIV pacing using a rabbit model of myocardial infarction (MI). Methods: New Zealand White rabbits were divided into 4 groups: sham-operated (C), MI with no pacing (MI), MI with right ventricular pacing (MI+RV), and MI with BIV pacing (MI+BIV), and underwent serial electrocardiograms and echocardiograms. At 4 weeks, hearts were excised and tissue was extracted from various areas of the left ventricle (LV). Results: Four weeks after coronary ligation, BIV pacing prevented systolic and diastolic dilation of the LV as well as the reduction in its fractional shortening, restored the QRS width and the rate-dependent QT intervals to their baseline values, and prevented the decline of the ether-a-go-go (erg) protein levels. This prevention of remodeling was not documented in the MI+RV groups. Conclusions: In this rabbit model of BIV pacing and MI, we demonstrate prevention of adverse mechanical and electrical remodeling of the heart. These changes may underlie some of the benefits seen with BIV pacing in HF patients with more severe LV dysfunction. PMID:18180026

  7. The efficacy of self-paced study in multitrial learning.

    PubMed

    de Jonge, Mario; Tabbers, Huib K; Pecher, Diane; Jang, Yoonhee; Zeelenberg, René

    2015-05-01

    In 2 experiments we investigated the efficacy of self-paced study in multitrial learning. In Experiment 1, native speakers of English studied lists of Dutch-English word pairs under 1 of 4 imposed fixed presentation rate conditions (24 × 1 s, 12 × 2 s, 6 × 4 s, or 3 × 8 s) and a self-paced study condition. Total study time per list was equated for all conditions. We found that self-paced study resulted in better recall performance than did most of the fixed presentation rates, with the exception of the 12 × 2 s condition, which did not differ from the self-paced condition. Additional correlational analyses suggested that the allocation of more study time to difficult pairs than to easy pairs might be a beneficial strategy for self-paced learning. Experiment 2 was designed to test this hypothesis. In 1 condition, participants studied word pairs in a self-paced fashion without any restrictions. In the other condition, participants studied word pairs in a self-paced fashion but total study time per item was equated. The results showed that allowing self-paced learners to freely allocate study time over items resulted in better recall performance. (c) 2015 APA, all rights reserved.

  8. Identifying chronic errors at freeway loop detectors- splashover, pulse breakup, and sensitivity settings.

    DOT National Transportation Integrated Search

    2011-03-01

    Traffic Management applications such as ramp metering, incident detection, travel time prediction, and vehicle classification greatly depend on the accuracy of data collected from inductive loop detectors, but these data are prone to various errors ...

  9. 42 CFR 460.102 - Interdisciplinary team.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 4 2014-10-01 2014-10-01 false Interdisciplinary team. 460.102 Section 460.102... ELDERLY (PACE) PACE Services § 460.102 Interdisciplinary team. (a) Basic requirement. A PACE organization must meet the following requirements: (1) Establish an interdisciplinary team at each Pace center to...

  10. 42 CFR 460.102 - Interdisciplinary team.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false Interdisciplinary team. 460.102 Section 460.102... ELDERLY (PACE) PACE Services § 460.102 Interdisciplinary team. (a) Basic requirement. A PACE organization must meet the following requirements: (1) Establish an interdisciplinary team at each Pace center to...

  11. 42 CFR 460.102 - Interdisciplinary team.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 4 2012-10-01 2012-10-01 false Interdisciplinary team. 460.102 Section 460.102... ELDERLY (PACE) PACE Services § 460.102 Interdisciplinary team. (a) Basic requirement. A PACE organization must meet the following requirements: (1) Establish an interdisciplinary team at each Pace center to...

  12. 42 CFR 460.102 - Interdisciplinary team.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 4 2013-10-01 2013-10-01 false Interdisciplinary team. 460.102 Section 460.102... ELDERLY (PACE) PACE Services § 460.102 Interdisciplinary team. (a) Basic requirement. A PACE organization must meet the following requirements: (1) Establish an interdisciplinary team at each Pace center to...

  13. 42 CFR 460.102 - Interdisciplinary team.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Interdisciplinary team. 460.102 Section 460.102... ELDERLY (PACE) PACE Services § 460.102 Interdisciplinary team. (a) Basic requirement. A PACE organization must meet the following requirements: (1) Establish an interdisciplinary team at each Pace center to...

  14. Feasibility of dual-chamber (DDD) pacing via a single-pass (VDD) pacing lead employing a floating atrial ring (dipole): case series, future considerations, and refinements.

    PubMed

    Kassotis, John; Voigt, Louis; Mongwa, Mbu; Reddy, C V R

    2005-01-01

    The objective of this study was to assess the feasibility of DDD pacing from a standard single-pass VDD pacemaker system. Over the past 2 decades significant advances have been made in the development of single-pass VDD pacing systems. These have been shown in long-term prospective studies to effectively preserve atrioventricular (AV) synchrony in patients with AV block and normal sinus node function. What remains problematic is the development of a single-pass pacing system capable of DDD pacing. Such a lead configuration would be useful in those patients with peripheral venous anomalies and in younger patients with congenital anomalies, which may require lead revisions in the future. In addition, with the increased use of resynchronization (biventricular pacing) therapy, the availability of a reliable single-pass lead will minimize operative time, enhance patient safety, and minimize the amount of hardware within the heart. The feasibility of DDD pacing via a Medtronic Capsure VDD-2 (Model #5038) pacing lead was evaluated. Twenty patients who presented with AV block and normal sinus node function were recruited for this study. Atrial pacing thresholds and sensitivities were assessed intraoperatively in the supine position with various respiratory maneuvers. Five patients who agreed to participate in long-term follow-up received a dual-chamber generator and were evaluated periodically over a 12-month period. Mean atrial sensitivity was 2.35 +/- 0.83 mV at the time of implantation. Effective atrial stimulation was possible in all patients at the time of implantation (mean stimulation threshold 3.08 +/- 1.04 V at 0.5 ms [bipolar], 3.34 +/- 0.95 V at 0.5 ms [unipolar]). Five of the 20 patients received a Kappa KDR701 generator, and atrial electrical properties were followed up over a 1-year period. There was no significant change in atrial pacing threshold or incidence of phrenic nerve stimulation over the 1-year follow-up. A standard single-pass VDD pacing lead system was capable of DDD pacing intraoperatively and during long-term follow-up. Despite higher than usual thresholds via the atrial dipole, pacemaker telemetry revealed < 10% use of atrial pacing dipole over a 12-month period, which would minimally deplete the pacemaker's battery. In addition, the telemetry confirmed appropriate sensing and pacing of the atrial dipole throughout the study period. At this time such systems can serve as back-up DDD pacing systems with further refinements required to optimize atrial thresholds in all patients.

  15. Measurement error, time lag, unmeasured confounding: Considerations for longitudinal estimation of the effect of a mediator in randomised clinical trials.

    PubMed

    Goldsmith, K A; Chalder, T; White, P D; Sharpe, M; Pickles, A

    2018-06-01

    Clinical trials are expensive and time-consuming and so should also be used to study how treatments work, allowing for the evaluation of theoretical treatment models and refinement and improvement of treatments. These treatment processes can be studied using mediation analysis. Randomised treatment makes some of the assumptions of mediation models plausible, but the mediator-outcome relationship could remain subject to bias. In addition, mediation is assumed to be a temporally ordered longitudinal process, but estimation in most mediation studies to date has been cross-sectional and unable to explore this assumption. This study used longitudinal structural equation modelling of mediator and outcome measurements from the PACE trial of rehabilitative treatments for chronic fatigue syndrome (ISRCTN 54285094) to address these issues. In particular, autoregressive and simplex models were used to study measurement error in the mediator, different time lags in the mediator-outcome relationship, unmeasured confounding of the mediator and outcome, and the assumption of a constant mediator-outcome relationship over time. Results showed that allowing for measurement error and unmeasured confounding were important. Contemporaneous rather than lagged mediator-outcome effects were more consistent with the data, possibly due to the wide spacing of measurements. Assuming a constant mediator-outcome relationship over time increased precision.

  16. Measurement error, time lag, unmeasured confounding: Considerations for longitudinal estimation of the effect of a mediator in randomised clinical trials

    PubMed Central

    Goldsmith, KA; Chalder, T; White, PD; Sharpe, M; Pickles, A

    2016-01-01

    Clinical trials are expensive and time-consuming and so should also be used to study how treatments work, allowing for the evaluation of theoretical treatment models and refinement and improvement of treatments. These treatment processes can be studied using mediation analysis. Randomised treatment makes some of the assumptions of mediation models plausible, but the mediator–outcome relationship could remain subject to bias. In addition, mediation is assumed to be a temporally ordered longitudinal process, but estimation in most mediation studies to date has been cross-sectional and unable to explore this assumption. This study used longitudinal structural equation modelling of mediator and outcome measurements from the PACE trial of rehabilitative treatments for chronic fatigue syndrome (ISRCTN 54285094) to address these issues. In particular, autoregressive and simplex models were used to study measurement error in the mediator, different time lags in the mediator–outcome relationship, unmeasured confounding of the mediator and outcome, and the assumption of a constant mediator–outcome relationship over time. Results showed that allowing for measurement error and unmeasured confounding were important. Contemporaneous rather than lagged mediator–outcome effects were more consistent with the data, possibly due to the wide spacing of measurements. Assuming a constant mediator–outcome relationship over time increased precision. PMID:27647810

  17. Comparison of DDD versus VVIR pacing modes in elderly patients with atrioventricular block.

    PubMed

    Kılıçaslan, Barış; Vatansever Ağca, Fahriye; Kılıçaslan, Esin Evren; Kınay, Ozan; Tigen, Kürşat; Cakır, Cayan; Nazlı, Cem; Ergene, Oktay

    2012-06-01

    Dual-chamber pacing is believed to have an advantage over single-chamber ventricular pacing. The aim of this study was to determine whether elderly patients who have implanted pacemakers for complete atrioventricular block gain significant benefits from dual-chamber (DDD) pacemakers compared with single-chamber ventricular (VVIR) pacemakers. This study was designed as a randomized, two-period crossover study; each pacing mode was maintained for 1 month. Thirty patients (16 men, mean age 68.87 ± 6.89 years) with implanted DDD pacemakers were submitted to a standard protocol, which included an interview, pacemaker syndrome assessment, health-related quality of life (HRQoL) questionnaires assessed by an SF-36 test, 6-minute walk test (6MWT), and transthoracic echocardiographic examinations. All of these parameters were obtained on both DDD and VVIR mode pacing. Paired data were compared. HRQoL scores were similar, and 6MWT results did not differ between the two groups. VVIR pacing elicited significant enlargement of the left atrium and impaired left ventricular diastolic functions as compared with DDD pacing. Two patients reported subclinical pacemaker syndrome, but this was not statistically significant. Our study revealed that in active elderly patients with complete heart block, DDD pacing and VVIR pacing yielded similar improvements in QoL and exercise performance. However, after a short follow-up period, we noted that VVIR pacing caused significant left atrial enlargement and impaired left ventricular diastolic functions.

  18. Automated Classification of Selected Data Elements from Free-text Diagnostic Reports for Clinical Research.

    PubMed

    Löpprich, Martin; Krauss, Felix; Ganzinger, Matthias; Senghas, Karsten; Riezler, Stefan; Knaup, Petra

    2016-08-05

    In the Multiple Myeloma clinical registry at Heidelberg University Hospital, most data are extracted from discharge letters. Our aim was to analyze whether it is possible to make the manual documentation process more efficient by using methods of natural language processing for multiclass classification of free-text diagnostic reports, to automatically document the diagnosis and state of disease of myeloma patients. The first objective was to create a corpus consisting of free-text diagnosis paragraphs of patients with multiple myeloma from German diagnostic reports, with manual annotation of relevant data elements by documentation specialists. The second objective was to construct and evaluate a framework using different NLP methods to enable automatic multiclass classification of relevant data elements from free-text diagnostic reports. The main diagnosis paragraph was extracted from the clinical reports of one third of randomly selected patients of the multiple myeloma research database from Heidelberg University Hospital (in total 737 selected patients). An EDC system was set up, and two data entry specialists independently performed manual documentation of at least nine specific data elements for multiple myeloma characterization. Both data entries were compared and assessed by a third specialist, and an annotated text corpus was created. A framework was constructed, consisting of a self-developed package to split multiple diagnosis sequences into several subsequences, four different preprocessing steps to normalize the input data, and two classifiers: a maximum entropy classifier (MEC) and a support vector machine (SVM). In total, 15 different pipelines were examined and assessed by ten-fold cross-validation, repeated 100 times. As quality indicators, the average error rate and the average F1-score were computed. For significance testing, the approximate randomization test was used. The created annotated corpus consists of 737 different diagnosis paragraphs with a total of 865 coded diagnoses. The dataset is publicly available in the supplementary online files for training and testing of further NLP methods. Both classifiers showed low average error rates (MEC: 1.05; SVM: 0.84) and high F1-scores (MEC: 0.89; SVM: 0.92). However, the results varied widely depending on the classified data element. Preprocessing methods increased this effect and had significant impact on the classification, both positive and negative. The automatic diagnosis splitter increased the average error rate significantly, even if the F1-score decreased only slightly. The low average error rates and high average F1-scores of each pipeline demonstrate the suitability of the investigated NLP methods. However, it was also shown that there is no best practice for an automatic classification of data elements from free-text diagnostic reports.
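
    A minimal sketch of the two classifiers compared above, a maximum-entropy (logistic regression) model and a linear SVM over bag-of-words features, is shown below. The German diagnosis phrases and labels are invented examples, not the Heidelberg registry corpus, and the preprocessing pipeline is simplified to a TF-IDF vectoriser.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.svm import LinearSVC
        from sklearn.pipeline import make_pipeline

        # Invented diagnosis phrases and labels for illustration only.
        texts = [
            "Multiples Myelom IgG kappa, Stadium III",
            "Multiples Myelom IgA lambda, Stadium II",
            "Monoklonale Gammopathie unklarer Signifikanz",
            "Plasmozytom, Erstdiagnose",
            "Multiples Myelom IgG kappa, Rezidiv",
            "Monoklonale Gammopathie, Verlaufskontrolle",
        ]
        labels = ["myeloma", "myeloma", "mgus", "plasmacytoma", "myeloma", "mgus"]

        for name, clf in [("max-ent", LogisticRegression(max_iter=1000)), ("SVM", LinearSVC())]:
            pipeline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), clf)
            pipeline.fit(texts, labels)
            prediction = pipeline.predict(["Multiples Myelom IgA kappa, Stadium I"])[0]
            print(f"{name}: {prediction}")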

  19. Symptoms of chronic fatigue syndrome/myalgic encephalopathy are not determined by activity pacing when measured by the chronic pain coping inventory.

    PubMed

    Thompson, D P; Antcliff, D; Woby, S R

    2018-03-01

    Chronic fatigue syndrome/myalgic encephalopathy (CFS/ME) is a chronic illness which can cause significant fatigue, pain and disability. Activity pacing is frequently advocated as a beneficial coping strategy; however, it is unclear whether pacing is significantly associated with symptoms in people with CFS/ME. The first aim of this study was therefore to explore the cross-sectional associations between pacing and levels of pain, disability and fatigue. The second aim was to explore whether changes in activity pacing following participation in a symptom management programme were related to changes in clinical outcomes. Cross-sectional study exploring the relationships between pacing, pain, disability and fatigue (n=114) and pre-post treatment longitudinal study of a cohort of patients participating in a symptom management programme (n=35). Out-patient physiotherapy CFS/ME service. One-hundred and fourteen adult patients with CFS/ME. Pacing was assessed using the chronic pain coping inventory. Pain was measured using a Numeric Pain Rating Scale, fatigue with the Chalder Fatigue Scale and disability with the Fibromyalgia Impact Questionnaire. No significant associations were observed between activity pacing and levels of pain, disability or fatigue. Likewise, changes in pacing were not significantly associated with changes in pain, disability or fatigue following treatment. Activity pacing does not appear to be a significant determinant of pain, fatigue or disability in people with CFS/ME when measured with the chronic pain coping inventory. Consequently, the utility and measurement of pacing require further investigation. Copyright © 2017 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.

  20. Influence of pacing site characteristics on response to cardiac resynchronization therapy.

    PubMed

    Wong, Jorge A; Yee, Raymond; Stirrat, John; Scholl, David; Krahn, Andrew D; Gula, Lorne J; Skanes, Allan C; Leong-Sit, Peter; Klein, George J; McCarty, David; Fine, Nowell; Goela, Aashish; Islam, Ali; Thompson, Terry; Drangova, Maria; White, James A

    2013-07-01

    Transmural scar occupying left ventricular (LV) pacing regions has been associated with reduced response to cardiac resynchronization therapy (CRT). However, spatial influences of lead tip delivery relative to scar at both pacing sites remain poorly explored. This study evaluated scar distribution relative to LV and right ventricular (RV) lead tip placement through coregistration of late gadolinium enhancement MRI and cardiac computed tomographic (CT) findings. Influences on CRT response were assessed by serial echocardiography. Sixty patients receiving CRT underwent preimplant late gadolinium enhancement MRI, postimplant cardiac CT, and serial echocardiography. Blinded segmental evaluations of mechanical delay, percentage scar burden, and lead tip location were performed. Response to CRT was defined as a reduction in LV end-systolic volume ≥15% at 6 months. The mean age and LV ejection fraction were 64±9 years and 25±7%, respectively. Mean scar volume was higher among CRT nonresponders for both the LV (23±23% versus 8±14% [P=0.01]) and RV pacing regions (40±32% versus 24±30% [P=0.04]). Significant pacing region scar was identified in 13% of LV pacing regions and 37% of RV pacing regions. Absence of scar in both regions was associated with an 81% response rate compared with 55%, 25%, and 0%, respectively, when the RV, LV, or both pacing regions contained scar. LV pacing region dyssynchrony was not predictive of response. Myocardial scar occupying the LV pacing region is associated with nonresponse to CRT. Scar occupying the RV pacing region is encountered at higher frequency and seems to provide a more intermediate influence on CRT response.

  1. DDD versus VVIR pacing in patients, ages 70 and over, with complete heart block.

    PubMed

    Ouali, Sana; Neffeti, Elyes; Ghoul, Karima; Hammas, Sami; Kacem, Slim; Gribaa, Rim; Remedi, Fahmi; Boughzela, Essia

    2010-05-01

    Dual-chamber pacing is believed to have an advantage over single-chamber ventricular pacing. The aim of the study was to determine whether elderly patients with a pacemaker implanted for complete atrioventricular block gain significant benefit from dual-chamber (DDD) pacing compared with single-chamber rate-responsive ventricular demand (VVIR) pacing. The study was designed as a double-blind, randomized, two-period crossover study; each pacing mode was maintained for 3 months. Thirty patients (eight men, mean age 76.5 ± 4.3 years) with an implanted pacemaker underwent a standard protocol, which included an interview, functional class assessment, quality of life (QoL) questionnaires, a 6-minute walk test, and transthoracic echocardiographic examinations. QoL was measured with the SF-36. All of these parameters were obtained in DDD mode and in VVIR mode, and the paired data were compared. QoL differed significantly between the two modes, with the best values in DDD. Overall, no patient preferred VVIR mode, 18 preferred DDD mode, and 12 expressed no preference. No differences in mean walking distance were observed between single-chamber and dual-chamber pacing. VVIR pacing elicited a marked decrease in left ventricular ejection fraction and significant enlargement of the left atrium. DDD pacing resulted in a significant increase in peak systolic velocities at the lateral and septal mitral annulus, while early diastolic velocities on both sides of the mitral annulus did not change. In active elderly patients with complete heart block, DDD pacing is associated with improved quality of life and systolic ventricular function compared with VVIR pacing.

  2. 42 CFR 460.186 - PACE premiums.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    Title 42 (Public Health), Section 460.186, PACE premiums. Centers for Medicare & Medicaid Services, Department of Health and Human Services: Programs of All-Inclusive Care for the Elderly (PACE).

  3. "U-Pace" Instruction: Improving Student Success by Integrating Content Mastery and Amplified Assistance

    ERIC Educational Resources Information Center

    Reddy, Diane M.; Pfeiffer, Heidi M.; Fleming, Raymond; Ports, Katie A.; Pedrick, Laura E.; Barnack-Tavlaris, Jessica L.; Jirovec, Danielle L.; Helion, Alicia M.; Swain, Rodney A.

    2013-01-01

    "U-Pace," an instructional intervention, has potential for widespread implementation because student behavior recorded in any learning management system is used by "U-Pace" instructors to tailor coaching of student learning based on students' strengths and motivations. "U-Pace" utilizes an online learning environment…

  4. Unintended Outcomes of University-Community Partnerships: Building Organizational Capacity with PACE International Partners

    ERIC Educational Resources Information Center

    Lloyd, Kate; Clark, Lindie; Hammersley, Laura; Baker, Michaela; Rawlings-Sanaei, Felicity; D'ath, Emily

    2015-01-01

    Professional and Community Engagement (PACE) at Macquarie University provides experiential opportunities for students and staff to contribute to more just, inclusive and sustainable societies by engaging in activities with partner organizations. PACE International offers a range of opportunities with partners overseas. Underpinning PACE is a…

  5. Locality-preserving sparse representation-based classification in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Gao, Lianru; Yu, Haoyang; Zhang, Bing; Li, Qingting

    2016-10-01

    This paper proposes to combine locality-preserving projections (LPP) and sparse representation (SR) for hyperspectral image classification. The LPP is first used to reduce the dimensionality of all the training and testing data by finding the optimal linear approximations to the eigenfunctions of the Laplace-Beltrami operator on the manifold where the high-dimensional data lie. Then, SR codes the projected testing pixels as sparse linear combinations of all the training samples and classifies each testing pixel by evaluating which class leads to the minimum approximation error. The integration of LPP and SR represents an innovative contribution to the literature. The proposed approach, called locality-preserving SR-based classification, addresses the imbalance between the high dimensionality of hyperspectral data and the limited number of training samples. Experimental results on three real hyperspectral data sets demonstrate that the proposed approach outperforms the original counterpart, i.e., SR-based classification.
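
    As a rough illustration of the sparse-representation classification step described above, the following Python sketch codes each test pixel against a dictionary of training samples and assigns the class with the smallest reconstruction residual. It assumes the LPP projection has already been applied to both sets; the variable names and the use of orthogonal matching pursuit as the sparse coder are illustrative choices, not the authors' implementation.

      import numpy as np
      from sklearn.linear_model import OrthogonalMatchingPursuit

      def src_predict(X_train, y_train, X_test, n_nonzero=10):
          """Sparse-representation classification: assign the class with minimum residual."""
          # Dictionary: one L2-normalised column per (LPP-projected) training sample.
          D = X_train.T / np.linalg.norm(X_train, axis=1)
          classes = np.unique(y_train)
          preds = []
          for x in X_test:
              omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
              omp.fit(D, x)                      # sparse code of x over all training samples
              alpha = omp.coef_
              residuals = [np.linalg.norm(x - D[:, y_train == c] @ alpha[y_train == c])
                           for c in classes]
              preds.append(classes[np.argmin(residuals)])
          return np.array(preds)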

  6. A Discriminative Approach to EEG Seizure Detection

    PubMed Central

    Johnson, Ashley N.; Sow, Daby; Biem, Alain

    2011-01-01

    Seizures are abnormal sudden discharges in the brain with signatures represented in electroencephalograms (EEG). The efficacy of applying speech processing techniques to discriminate between seizure and non-seizure states in EEGs is reported. The approach accounts for the challenges of unbalanced datasets (seizure and non-seizure), while also showing a system capable of real-time seizure detection. The Minimum Classification Error (MCE) algorithm, a discriminative learning algorithm widely used in speech processing, is applied and compared with conventional classification techniques that have already been applied to the discrimination between seizure and non-seizure states in the literature. The system is evaluated on 22 pediatric patients' multi-channel EEG recordings. Experimental results show that the application of speech processing techniques and MCE compares favorably with conventional classification techniques in terms of classification performance, while requiring less computational overhead. The results strongly suggest the possibility of deploying the designed system at the bedside. PMID:22195192
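
    For readers unfamiliar with the MCE criterion, the sketch below illustrates the general idea for linear discriminant functions: a misclassification measure compares the correct-class score with its strongest competitor, a sigmoid smooths the resulting 0-1 loss, and the smoothed empirical error is minimised numerically. The optimiser, the hard-max form of the misclassification measure, and all variable names are assumptions for illustration, not details of the reported system.

      import numpy as np
      from scipy.optimize import minimize

      def mce_loss(w_flat, X, y, n_classes, gamma=2.0):
          """Sigmoid-smoothed empirical classification error (MCE criterion)."""
          W = w_flat.reshape(n_classes, X.shape[1])
          scores = X @ W.T                                  # discriminant g_k(x) for each class k
          correct = scores[np.arange(len(y)), y]
          rivals = scores.copy()
          rivals[np.arange(len(y)), y] = -np.inf
          d = rivals.max(axis=1) - correct                  # d(x) > 0 means x is misclassified
          return np.mean(1.0 / (1.0 + np.exp(-gamma * d)))  # smoothed 0-1 loss

      def train_mce(X, y, n_classes, gamma=2.0):
          w0 = np.zeros(n_classes * X.shape[1])
          res = minimize(mce_loss, w0, args=(X, y, n_classes, gamma), method="L-BFGS-B")
          return res.x.reshape(n_classes, X.shape[1])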

  7. Gait dynamics to optimize fall risk assessment in geriatric patients admitted to an outpatient diagnostic clinic.

    PubMed

    Kikkert, Lisette H J; de Groot, Maartje H; van Campen, Jos P; Beijnen, Jos H; Hortobágyi, Tibor; Vuillerme, Nicolas; Lamoth, Claudine C J

    2017-01-01

    Fall prediction in geriatric patients remains challenging because the increased fall risk involves multiple, interrelated factors caused by natural aging and/or pathology. We therefore used a multi-factorial statistical approach to model categories of modifiable fall risk factors among geriatric patients and to identify fallers with the highest sensitivity and specificity, with a focus on gait performance. Patients (n = 61, age = 79; 41% fallers) underwent extensive screening in three categories: (1) patient characteristics (e.g., handgrip strength, medication use, osteoporosis-related factors), (2) cognitive function (global cognition, memory, executive function), and (3) gait performance (speed-related and dynamic outcomes assessed by tri-axial trunk accelerometry). Falls were registered prospectively (mean follow-up 8.6 months) and one year retrospectively. Principal Component Analysis (PCA) on 11 gait variables was performed to determine underlying gait properties. Three fall-classification models were then built using Partial Least Squares-Discriminant Analysis (PLS-DA), with separate and combined analyses of the fall risk factors. PCA identified 'pace', 'variability', and 'coordination' as key properties of gait. The best PLS-DA model produced a fall classification accuracy of AUC = 0.93. The specificity of the model using patient characteristics was 60% but reached 80% when cognitive and gait outcomes were added. The inclusion of cognition and gait dynamics in fall classification models reduced misclassification. We therefore recommend assessing geriatric patients' fall risk using a multi-factorial approach that incorporates patient characteristics, cognition, and gait dynamics.
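
    A compact sketch of this type of modelling pipeline is given below: PCA to summarise the gait variables into a few underlying properties, then PLS-DA (implemented, as is common, as PLS regression on a binary faller/non-faller target) evaluated with cross-validated AUC. The random placeholder data, the number of components, and the variable names are assumptions for illustration only.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import cross_val_predict
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      # X_gait: (n_patients, 11) gait variables; y: 1 = faller, 0 = non-faller (placeholders).
      rng = np.random.default_rng(0)
      X_gait = rng.normal(size=(61, 11))
      y = rng.integers(0, 2, size=61)

      # Step 1: PCA to summarise the gait variables into a few underlying properties
      # (interpreted in the study as pace, variability, and coordination).
      pca = PCA(n_components=3).fit(StandardScaler().fit_transform(X_gait))
      print("variance explained:", pca.explained_variance_ratio_)

      # Step 2: PLS-DA as PLS regression on the class label, scored with cross-validated AUC.
      plsda = make_pipeline(StandardScaler(), PLSRegression(n_components=2))
      decision = cross_val_predict(plsda, X_gait, y, cv=5).ravel()
      print("cross-validated AUC:", roc_auc_score(y, decision))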

  8. Structure constrained semi-nonnegative matrix factorization for EEG-based motor imagery classification.

    PubMed

    Lu, Na; Li, Tengfei; Pan, Jinjin; Ren, Xiaodong; Feng, Zuren; Miao, Hongyu

    2015-05-01

    Electroencephalogram (EEG) provides a non-invasive approach to measure the electrical activities of brain neurons and has long been employed for the development of brain-computer interfaces (BCI). For this purpose, various patterns/features of EEG data need to be extracted and associated with specific events like cue-paced motor imagery. However, this is a challenging task since EEG data are usually non-stationary time series with a low signal-to-noise ratio. In this study, we propose a novel method, called structure constrained semi-nonnegative matrix factorization (SCS-NMF), to extract the key patterns of EEG data in the time domain by imposing the mean envelopes of event-related potentials (ERPs) as constraints on the semi-NMF procedure. The proposed method is applicable to general EEG time series, and the temporal features extracted by SCS-NMF can also be combined with other features in the frequency domain to improve the performance of motor imagery classification. Real data experiments have been performed using the SCS-NMF approach for motor imagery classification, and the results clearly suggest the superiority of the proposed method. Comparison experiments have also been conducted against ICA, PCA, Semi-NMF, Wavelets, EMD and CSP, which further verified the effectiveness of SCS-NMF. The SCS-NMF method obtained better or competitive performance over state-of-the-art methods, providing a novel solution for brain pattern analysis from the perspective of structural constraints. Copyright © 2015 Elsevier Ltd. All rights reserved.
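
    As background for readers, the sketch below implements plain semi-NMF with the standard multiplicative updates (factorising X into an unconstrained basis F and a nonnegative encoding G); the structure constraints based on ERP envelopes that define SCS-NMF are not reproduced here, and the variable names and iteration count are illustrative assumptions.

      import numpy as np

      def semi_nmf(X, k, n_iter=200, eps=1e-9, seed=0):
          """Plain semi-NMF: X (features x samples) ~ F @ G.T with G >= 0, F unconstrained."""
          rng = np.random.default_rng(seed)
          G = np.abs(rng.normal(size=(X.shape[1], k))) + eps
          pos = lambda A: (np.abs(A) + A) / 2.0              # positive part of a matrix
          neg = lambda A: (np.abs(A) - A) / 2.0              # negative part of a matrix
          for _ in range(n_iter):
              F = X @ G @ np.linalg.pinv(G.T @ G)            # least-squares update of the basis
              XtF, FtF = X.T @ F, F.T @ F
              G *= np.sqrt((pos(XtF) + G @ neg(FtF)) /
                           (neg(XtF) + G @ pos(FtF) + eps))  # multiplicative update keeps G >= 0
          return F, G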

  9. The associations of insomnia with costly workplace accidents and errors: results from the America Insomnia Survey.

    PubMed

    Shahly, Victoria; Berglund, Patricia A; Coulouvrat, Catherine; Fitzgerald, Timothy; Hajak, Goeran; Roth, Thomas; Shillington, Alicia C; Stephenson, Judith J; Walsh, James K; Kessler, Ronald C

    2012-10-01

    Insomnia is a common and seriously impairing condition that often goes unrecognized. To examine associations of broadly defined insomnia (i.e., meeting inclusion criteria for a diagnosis from International Statistical Classification of Diseases, 10th Revision, DSM-IV, or Research Diagnostic Criteria/International Classification of Sleep Disorders, Second Edition) with costly workplace accidents and errors after excluding other chronic conditions among workers in the America Insomnia Survey (AIS). A national cross-sectional telephone survey (65.0% cooperation rate) of commercially insured health plan members selected from the more than 34 million in the HealthCore Integrated Research Database. Four thousand nine hundred ninety-one employed AIS respondents. Costly workplace accidents or errors in the 12 months before the AIS interview were assessed with one question about workplace accidents "that either caused damage or work disruption with a value of $500 or more" and another about other mistakes "that cost your company $500 or more." Current insomnia with duration of at least 12 months was assessed with the Brief Insomnia Questionnaire, a validated (area under the receiver operating characteristic curve, 0.86 compared with diagnoses based on blinded clinical reappraisal interviews), fully structured diagnostic interview. Eighteen other chronic conditions were assessed with medical/pharmacy claims records and validated self-report scales. Insomnia was significantly associated with workplace accidents and/or errors after controlling for other chronic conditions (odds ratio 1.4). The odds ratio did not vary significantly with respondent age, sex, educational level, or comorbidity. The average costs of insomnia-related accidents and errors ($32,062) were significantly higher than those of other accidents and errors ($21,914). Simulations estimated that insomnia was associated with 7.2% of all costly workplace accidents and errors and 23.7% of all the costs of these incidents. These proportions are higher than for any other chronic condition, with annualized US population projections of 274,000 costly insomnia-related workplace accidents and errors having a combined value of US $31.1 billion. Effectiveness trials are needed to determine whether expanded screening, outreach, and treatment of workers with insomnia would yield a positive return on investment for employers.

  10. Consistent latent position estimation and vertex classification for random dot product graphs.

    PubMed

    Sussman, Daniel L; Tang, Minh; Priebe, Carey E

    2014-01-01

    In this work, we show that using the eigen-decomposition of the adjacency matrix, we can consistently estimate latent positions for random dot product graphs provided the latent positions are i.i.d. from some distribution. If class labels are observed for a number of vertices tending to infinity, then we show that the remaining vertices can be classified with error converging to Bayes optimal using the k-nearest-neighbors classification rule. We evaluate the proposed methods on simulated data and a graph derived from Wikipedia.
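
    The sketch below illustrates the overall recipe: embed vertices via the eigen-decomposition of the adjacency matrix (scaled top eigenvectors as estimated latent positions) and classify the unlabelled vertices with k-nearest-neighbors. The random graph, labels, and parameter choices are placeholders, not the paper's experimental setup.

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier

      def adjacency_spectral_embedding(A, d):
          """Estimated latent positions from the top-d eigenpairs of a symmetric adjacency matrix."""
          vals, vecs = np.linalg.eigh(A)
          top = np.argsort(np.abs(vals))[::-1][:d]
          return vecs[:, top] * np.sqrt(np.abs(vals[top]))

      rng = np.random.default_rng(0)
      n, d = 200, 2
      A = np.triu(rng.integers(0, 2, size=(n, n)), 1)        # placeholder random graph
      A = A + A.T
      y = rng.integers(0, 2, size=n)                         # placeholder class labels
      observed = rng.random(n) < 0.5                         # vertices with known labels

      Z = adjacency_spectral_embedding(A, d)
      knn = KNeighborsClassifier(n_neighbors=5).fit(Z[observed], y[observed])
      y_hat = knn.predict(Z[~observed])                      # labels for the remaining vertices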

  11. Classification of JERS-1 Image Mosaic of Central Africa Using A Supervised Multiscale Classifier of Texture Features

    NASA Technical Reports Server (NTRS)

    Saatchi, Sassan; DeGrandi, Franco; Simard, Marc; Podest, Erika

    1999-01-01

    In this paper, a multiscale approach is introduced to classify the Japanese Earth Resources Satellite-1 (JERS-1) mosaic image over the Central African rainforest. A series of texture maps are generated from the 100 m mosaic image at various scales. Using a quadtree model and relating classes at each scale by a Markovian relationship, the multiscale images are classified from coarse to finer scale. The results are verified at various scales and the evolution of the classification is monitored by calculating the error at each stage.
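
    As an illustration of the texture-map step only (the quadtree Markovian coarse-to-fine classification itself is not reproduced), the sketch below computes a per-pixel grey-level co-occurrence texture feature over sliding windows of several sizes using scikit-image; the window sizes, feature choice, and placeholder image are assumptions.

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops

      def texture_map(img, window, prop="contrast", levels=32):
          """Per-pixel GLCM texture feature computed over a sliding window (one scale)."""
          q = (img / img.max() * (levels - 1)).astype(np.uint8)   # quantise grey levels
          half = window // 2
          out = np.zeros(img.shape, dtype=float)
          for i in range(half, img.shape[0] - half):
              for j in range(half, img.shape[1] - half):
                  patch = q[i - half:i + half + 1, j - half:j + half + 1]
                  glcm = graycomatrix(patch, distances=[1], angles=[0],
                                      levels=levels, symmetric=True, normed=True)
                  out[i, j] = graycoprops(glcm, prop)[0, 0]
          return out

      img = np.random.default_rng(0).random((64, 64))            # placeholder backscatter image
      features = np.dstack([texture_map(img, w) for w in (5, 11, 21)])  # one texture map per scale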

  12. Pacing for neurally mediated syncope: is placebo powerless?

    PubMed

    Brignole, M; Sutton, R

    2007-01-01

    After two recent controlled trials failed to prove superiority of cardiac pacing over placebo in patients affected by neurally mediated syncope, a widely accepted opinion is that cardiac pacing therapy is not very effective and that a strong placebo effect exists. The aim was to measure the effect of placebo pacing therapy. We compared the recurrence rate of syncope during placebo vs. no treatment in controlled trials of drug or pacing therapy. Syncope recurred in 38% of 252 patients randomized to placebo pooled from five trials vs. 34% of 881 patients randomized to no treatment pooled from eight trials. The corresponding recurrence rate with active cardiac pacing was 15% in 203 patients from six trials. Placebo is not an effective therapy for neurally mediated syncope. Different selection criteria in patients who are candidates for cardiac pacing (for example, the presence, absence, or severity of the cardioinhibitory reflex) may separate positive from negative trials.

  13. Sensitivity and accuracy of high-throughput metabarcoding methods for early detection of invasive fish species

    EPA Science Inventory

    For early detection biomonitoring of aquatic invasive species, sensitivity to rare individuals and accurate, high-resolution taxonomic classification are critical to minimize detection errors. Given the great expense and effort associated with morphological identification of many...

  14. When do latent class models overstate accuracy for diagnostic and other classifiers in the absence of a gold standard?

    PubMed

    Spencer, Bruce D

    2012-06-01

    Latent class models are increasingly used to assess the accuracy of medical diagnostic tests and other classifications when no gold standard is available and the true state is unknown. When the latent class is treated as the true class, the latent class models provide measures of components of accuracy including specificity and sensitivity and their complements, type I and type II error rates. The error rates according to the latent class model differ from the true error rates, however, and empirical comparisons with a gold standard suggest the true error rates often are larger. We investigate conditions under which the true type I and type II error rates are larger than those provided by the latent class models. Results from Uebersax (1988, Psychological Bulletin 104, 405-416) are extended to accommodate random effects and covariates affecting the responses. The results are important for interpreting the results of latent class analyses. An error decomposition is presented that incorporates an error component from invalidity of the latent class model. © 2011, The International Biometric Society.

  15. Fusing metabolomics data sets with heterogeneous measurement errors

    PubMed Central

    Waaijenborg, Sandra; Korobko, Oksana; Willems van Dijk, Ko; Lips, Mirjam; Hankemeier, Thomas; Wilderjans, Tom F.; Smilde, Age K.

    2018-01-01

    Combining different metabolomics platforms can contribute significantly to the discovery of complementary processes expressed under different conditions. However, analysing the fused data might be hampered by the difference in their quality. In metabolomics data, one often observes that measurement errors increase with increasing measurement level and that different platforms have different measurement error variance. In this paper we compare three different approaches to correcting for measurement error heterogeneity: transformation of the raw data, weighted filtering before modelling, and a modelling approach using a weighted sum of residuals. To illustrate these approaches we analyse data from healthy obese and diabetic obese individuals obtained from two metabolomics platforms. In conclusion, the filtering and modelling approaches, which both estimate a model of the measurement error, did not outperform the data transformation approaches for this application. This is probably because the difference in measurement error was limited and because estimation of measurement error models is unstable given the small number of repeats available. A transformation of the data improves the classification of the two groups. PMID:29698490

  16. Noninvasive forward-scattering system for rapid detection, characterization, and identification of Listeria colonies: image processing and data analysis

    NASA Astrophysics Data System (ADS)

    Rajwa, Bartek; Bayraktar, Bulent; Banada, Padmapriya P.; Huff, Karleigh; Bae, Euiwon; Hirleman, E. Daniel; Bhunia, Arun K.; Robinson, J. Paul

    2006-10-01

    Bacterial contamination by Listeria monocytogenes puts the public at risk and is also costly for the food-processing industry. Traditional methods for pathogen identification require complicated sample preparation for reliable results. Previously, we have reported development of a noninvasive optical forward-scattering system for rapid identification of Listeria colonies grown on solid surfaces. The presented system included application of computer-vision and pattern-recognition techniques to classify scatter patterns formed by bacterial colonies irradiated with laser light. This report shows an extension of the proposed method. A new scatterometer equipped with a high-resolution CCD chip and the application of two additional sets of image features for classification allow for higher accuracy and lower error rates. Features based on Zernike moments are supplemented by Tchebichef moments and Haralick texture descriptors in the new version of the algorithm. Fisher's criterion has been used for feature selection to decrease the training time of the machine learning systems. An algorithm based on support vector machines was used for classification of the patterns. Low error rates determined by cross-validation, reproducibility of the measurements, and robustness of the system prove that the proposed technology can be implemented in automated devices for detection and classification of pathogenic bacteria.
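
    A minimal sketch of the classification stage is shown below, assuming the Zernike, Tchebichef, and Haralick features have already been extracted into a feature matrix: univariate Fisher-style (ANOVA F) feature selection followed by an RBF support vector machine, evaluated by cross-validation. The placeholder data, the number of retained features, and the SVM hyperparameters are assumptions for illustration.

      import numpy as np
      from sklearn.feature_selection import SelectKBest, f_classif
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      # X: one row of moment/texture features per scatter pattern; y: colony class labels.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(300, 120))                        # placeholder feature matrix
      y = rng.integers(0, 5, size=300)                       # placeholder class labels

      clf = make_pipeline(StandardScaler(),
                          SelectKBest(f_classif, k=40),      # Fisher-style univariate ranking
                          SVC(kernel="rbf", C=10, gamma="scale"))
      print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())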

  17. Correcting evaluation bias of relational classifiers with network cross validation

    DOE PAGES

    Neville, Jennifer; Gallagher, Brian; Eliassi-Rad, Tina; ...

    2011-01-04

    Recently, a number of modeling techniques have been developed for data mining and machine learning in relational and network domains where the instances are not independent and identically distributed (i.i.d.). These methods specifically exploit the statistical dependencies among instances in order to improve classification accuracy. However, there has been little focus on how these same dependencies affect our ability to draw accurate conclusions about the performance of the models. More specifically, the complex link structure and attribute dependencies in relational data violate the assumptions of many conventional statistical tests and make it difficult to use these tests to assess the models in an unbiased manner. In this work, we examine the task of within-network classification and the question of whether two algorithms will learn models that will result in significantly different levels of performance. We show that the commonly used form of evaluation (paired t-test on overlapping network samples) can result in an unacceptable level of Type I error. Furthermore, we show that Type I error increases as (1) the correlation among instances increases and (2) the size of the evaluation set increases (i.e., the proportion of labeled nodes in the network decreases). Lastly, we propose a method for network cross-validation that, combined with paired t-tests, produces more acceptable levels of Type I error while still providing reasonable levels of statistical power (i.e., 1 - Type II error).
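
    The toy simulation below illustrates the mechanism described above: when two algorithms with identical true accuracy are compared with a paired t-test across evaluation sets that overlap (here, repeated subsamples of one fixed labelled pool), the folds are not independent and the empirical Type I error rises above the nominal 0.05. The pool size, fold sizes, and accuracy level are arbitrary assumptions, not the paper's experimental design.

      import numpy as np
      from scipy.stats import ttest_rel

      rng = np.random.default_rng(0)
      n_pool, n_folds, fold_size, alpha, n_reps = 500, 10, 200, 0.05, 1000
      rejections = 0
      for _ in range(n_reps):
          # Per-instance correctness is fixed for the repetition, so folds drawn from the
          # same pool share instances and are not independent samples.
          correct_a = rng.random(n_pool) < 0.80              # algorithm A, true accuracy 0.80
          correct_b = rng.random(n_pool) < 0.80              # algorithm B, same true accuracy
          acc_a, acc_b = [], []
          for _ in range(n_folds):
              idx = rng.choice(n_pool, size=fold_size, replace=False)   # overlapping folds
              acc_a.append(correct_a[idx].mean())
              acc_b.append(correct_b[idx].mean())
          if ttest_rel(acc_a, acc_b).pvalue < alpha:
              rejections += 1
      print("empirical Type I error:", rejections / n_reps)  # expected to exceed 0.05 here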

  18. The efficacy of protoporphyrin as a predictive biomarker for lead exposure in canvasback ducks: effect of sample storage time

    USGS Publications Warehouse

    Franson, J.C.; Hohman, W.L.; Moore, J.L.; Smith, M.R.

    1996-01-01

    We used 363 blood samples collected from wild canvasback ducks (Aythya valisineria) at Catahoula Lake, Louisiana, U.S.A., to evaluate the effect of sample storage time on the efficacy of erythrocytic protoporphyrin as an indicator of lead exposure. The protoporphyrin concentration of each sample was determined by hematofluorometry within 5 min of blood collection and after refrigeration at 4 °C for 24 and 48 h. All samples were analyzed for lead by atomic absorption spectrophotometry. Based on a blood lead concentration of ≥0.2 ppm wet weight as positive evidence for lead exposure, the protoporphyrin technique resulted in overall error rates of 29%, 20%, and 19% and false negative error rates of 47%, 29%, and 25% when hematofluorometric determinations were made on blood at 5 min, 24 h, and 48 h, respectively. False positive error rates were less than 10% for all three measurement times. The accuracy of the 24-h erythrocytic protoporphyrin classification of blood samples as positive or negative for lead exposure was significantly greater than the 5-min classification, but no improvement in accuracy was gained when samples were tested at 48 h. The false negative errors were probably due, at least in part, to the lag time between lead exposure and the increase of blood protoporphyrin concentrations. False negatives resulted in an underestimation of the true number of canvasbacks exposed to lead, indicating that hematofluorometry provides a conservative estimate of lead exposure.

  19. PACE (Revised). Resource Guide. Research & Development Series No. 240D.

    ERIC Educational Resources Information Center

    Ashmore, M. Catherine; Pritz, Sandra G.

    This resource guide contains information on the Program for Acquiring Competence in Entrepreneurship (PACE) materials, a glossary, and listings of sources of information. Introductory materials include a description of PACE, information on use of PACE materials, and objectives of the 18 units for all three levels at which they are developed. An…

  20. A Study of Instructional Methods Used in Fast-Paced Classes

    ERIC Educational Resources Information Center

    Lee, Seon-Young; Olszewski-Kubilius, Paula

    2006-01-01

    This study involved 15 secondary-level teachers who taught fast-paced classes at a university-based summer program and similar regularly paced classes in their local schools in order to examine how teachers differentiate or modify instructional methods and content selections for fast-paced classes. Interviews were conducted with the teachers…
