Sample records for average classification rate

  1. 77 FR 59989 - Labor Surplus Area Classification Under Executive Orders

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-01

    ... DEPARTMENT OF LABOR Employment and Training Administration Labor Surplus Area Classification Under... Bureau of Labor Statistics are used in making these classifications. The average unemployment rate for all states includes data for the Commonwealth of Puerto Rico. The basic LSA classification criteria...

  2. 78 FR 63248 - Labor Surplus Area Classification under Executive Orders 12073 and 10582

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-23

    ... DEPARTMENT OF LABOR Employment and Training Administration Labor Surplus Area Classification under... Statistics unemployment estimates to make these classifications. The average unemployment rate for all states includes data for the Commonwealth of Puerto Rico. The basic LSA classification criteria include a ``floor...

  3. Detection of Periodic Leg Movements by Machine Learning Methods Using Polysomnographic Parameters Other Than Leg Electromyography

    PubMed Central

    Umut, İlhan; Çentik, Güven

    2016-01-01

    The number of channels used for polysomnographic recording frequently causes difficulties for patients because of the many cables connected. It also increases the risk of problems during the recording process and increases the storage volume. This study aims to detect periodic leg movement (PLM) in sleep using channels other than leg electromyography (EMG) by analysing polysomnography (PSG) data with digital signal processing (DSP) and machine learning methods. PSG records of 153 patients of different ages and genders with a PLM disorder diagnosis were examined retrospectively. Novel software combining machine learning algorithms, statistical methods, and DSP methods was developed for the analysis of the PSG records. To classify PLM, popular machine learning methods (multilayer perceptron, K-nearest neighbour, and random forests) and logistic regression were used. Comparison of the classification results showed that the K-nearest neighbour algorithm had the highest average classification rate (91.87%) and the lowest average classification error (RMSE = 0.2850), while the multilayer perceptron algorithm had the lowest average classification rate (83.29%) and the highest average classification error (RMSE = 0.3705). The results showed that PLM can be classified with high accuracy (91.87%) without a leg EMG recording being present. PMID:27213008
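
    A minimal scikit-learn sketch of the kind of comparison described above, reporting cross-validated accuracy and RMSE for KNN, MLP, random forest, and logistic regression. The feature matrix and labels are synthetic placeholders, not the authors' PSG features or pipeline.

    ```python
    # Hypothetical sketch: comparing KNN, MLP, random forest and logistic regression
    # on a binary PLM / non-PLM feature matrix, reporting accuracy and RMSE.
    # X and y are placeholders, not the paper's data.
    import numpy as np
    from sklearn.model_selection import cross_val_predict
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score, mean_squared_error

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 12))                       # placeholder PSG-derived features
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)        # placeholder PLM labels

    models = {
        "KNN": KNeighborsClassifier(n_neighbors=5),
        "MLP": MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
        "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
        "LogisticRegression": LogisticRegression(max_iter=1000),
    }

    for name, model in models.items():
        pred = cross_val_predict(model, X, y, cv=10)
        acc = accuracy_score(y, pred)
        rmse = np.sqrt(mean_squared_error(y, pred))      # RMSE on 0/1 predictions
        print(f"{name:20s} accuracy={acc:.4f}  RMSE={rmse:.4f}")
    ```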

  4. Detection of Periodic Leg Movements by Machine Learning Methods Using Polysomnographic Parameters Other Than Leg Electromyography.

    PubMed

    Umut, İlhan; Çentik, Güven

    2016-01-01

    The number of channels used for polysomnographic recording frequently causes difficulties for patients because of the many cables connected. It also increases the risk of problems during the recording process and increases the storage volume. This study aims to detect periodic leg movement (PLM) in sleep using channels other than leg electromyography (EMG) by analysing polysomnography (PSG) data with digital signal processing (DSP) and machine learning methods. PSG records of 153 patients of different ages and genders with a PLM disorder diagnosis were examined retrospectively. Novel software combining machine learning algorithms, statistical methods, and DSP methods was developed for the analysis of the PSG records. To classify PLM, popular machine learning methods (multilayer perceptron, K-nearest neighbour, and random forests) and logistic regression were used. Comparison of the classification results showed that the K-nearest neighbour algorithm had the highest average classification rate (91.87%) and the lowest average classification error (RMSE = 0.2850), while the multilayer perceptron algorithm had the lowest average classification rate (83.29%) and the highest average classification error (RMSE = 0.3705). The results showed that PLM can be classified with high accuracy (91.87%) without a leg EMG recording being present.

  5. 20 CFR 654.5 - Classification of labor surplus areas.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... unemployment rate for all civilian workers in the civil jurisdiction for the reference period is (1) 120 percent of the national average unemployment rate for civilian workers or higher for the reference period... shall be classified as a labor surplus area if the average unemployment rate for all civilian workers...

  6. Assessment of differences between repeated pulse wave velocity measurements in terms of 'bias' in the extrapolated cardiovascular risk and the classification of aortic stiffness: is a single PWV measurement enough?

    PubMed

    Papaioannou, T G; Protogerou, A D; Nasothimiou, E G; Tzamouranis, D; Skliros, N; Achimastos, A; Papadogiannis, D; Stefanadis, C I

    2012-10-01

    Currently, there is no recommendation regarding the minimum number of pulse wave velocity (PWV) measurements needed to optimize an individual's cardiovascular risk (CVR) stratification. The aim of this study was to examine differences between three single consecutive and averaged PWV measurements in terms of the extrapolated CVR and the classification of aortic stiffness as normal. In 60 subjects who were referred for CVR assessment, three repeated measurements of blood pressure (BP), heart rate and PWV were performed. The reproducibility was evaluated by the intraclass correlation coefficient (ICC) and mean±s.d. of differences. The absolute differences between single and averaged PWV measurements were classified as: ≤0.25, 0.26-0.49, 0.50-0.99 and ≥1 m s⁻¹. A difference ≥0.5 m s⁻¹ (corresponding to a 7.5% change in CVR, meta-analysis data from >12 000 subjects) was considered clinically meaningful; PWV values (single or averaged) were classified as normal according to the respective age-corrected normal values (European Network data). The kappa statistic was used to evaluate the agreement between classifications. PWV for the first, second and third measurement was 7.0±1.9, 6.9±1.9 and 6.9±2.0 m s⁻¹, respectively (P=0.319); BP and heart rate did not vary significantly. Good reproducibility between single measurements was observed (ICC>0.94, s.d. ranged between 0.43 and 0.64 m s⁻¹). A high percentage of differences ≥0.5 m s⁻¹ was observed between: any pair of the three single PWV measurements (26.6-38.3%); the first or second single measurement and the average of the first and second (18.3%); and any single measurement and the average of three measurements (10-20%). In only up to 5% of cases was a difference ≥0.5 m s⁻¹ observed between the average of three and the average of any two PWV measurements. There was no significant agreement regarding PWV classification as normal between the first or second measurement and the averaged PWV values. There was significant agreement between classifications made by the average of the first two and the average of three PWV measurements (κ=0.85, P<0.001). Even when high reproducibility in PWV measurement is achieved, single measurements provide quite variable results in terms of the extrapolated CVR and the classification of aortic stiffness as normal. The average of two PWV measurements provides results similar to the average of three.

  7. The joint use of the tangential electric field and surface Laplacian in EEG classification.

    PubMed

    Carvalhaes, C G; de Barros, J Acacio; Perreau-Guimaraes, M; Suppes, P

    2014-01-01

    We investigate the joint use of the tangential electric field (EF) and the surface Laplacian (SL) derivation as a method to improve the classification of EEG signals. We considered five classification tasks to test the validity of such an approach. In all five tasks, the joint use of the components of the EF and the SL outperformed the scalar potential. The smallest effect occurred in the classification of a mental task, wherein the average classification rate was improved by 0.5 standard deviations. The largest effect was obtained in the classification of visual stimuli and corresponded to an improvement of 2.1 standard deviations.

  8. Hybrid brain-computer interface for biomedical cyber-physical system application using wireless embedded EEG systems.

    PubMed

    Chai, Rifai; Naik, Ganesh R; Ling, Sai Ho; Nguyen, Hung T

    2017-01-07

    One of the key challenges of the biomedical cyber-physical system is to combine cognitive neuroscience with the integration of physical systems to assist people with disabilities. Electroencephalography (EEG) has been explored as a non-invasive method of providing assistive technology by using brain electrical signals. This paper presents a unique prototype of a hybrid brain computer interface (BCI) which senses a combination classification of mental task, steady state visual evoked potential (SSVEP) and eyes closed detection using only two EEG channels. In addition, a microcontroller based head-mounted battery-operated wireless EEG sensor combined with a separate embedded system is used to enhance portability, convenience and cost effectiveness. This experiment has been conducted with five healthy participants and five patients with tetraplegia. Generally, the results show comparable classification accuracies between healthy subjects and tetraplegia patients. For the offline artificial neural network classification for the target group of patients with tetraplegia, the hybrid BCI system combines three mental tasks, three SSVEP frequencies and eyes closed, with average classification accuracy at 74% and average information transfer rate (ITR) of the system of 27 bits/min. For the real-time testing of the intentional signal on patients with tetraplegia, the average success rate of detection is 70% and the speed of detection varies from 2 to 4 s.

  9. 75 FR 73861 - Change in Rates and Classes of General Applicability for Competitive Products

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-29

    ... under 39 U.S.C. 3632, the Governors of the Postal Service established prices and classification changes... find that the new prices and classification changes are in accordance with 39 U.S.C. 3632-3633 and 39... Commercial Plus will be 2.0 percent. C. Parcel Select On average, prices for Parcel Select, the Postal...

  10. 18 CFR 401.35 - Classification of projects for review under Section 3.8 of the Compact.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 18 Conservation of Power and Water Resources 2 2011-04-01 2011-04-01 false Classification of... less than a daily average rate of 100,000 gallons except when the imported water is wastewater; (18... water during any 30-day period; and (18) Any other project that the Executive Director may specially...

  11. Automated Classification of Selected Data Elements from Free-text Diagnostic Reports for Clinical Research.

    PubMed

    Löpprich, Martin; Krauss, Felix; Ganzinger, Matthias; Senghas, Karsten; Riezler, Stefan; Knaup, Petra

    2016-08-05

    In the Multiple Myeloma clinical registry at Heidelberg University Hospital, most data are extracted from discharge letters. Our aim was to analyze whether it is possible to make the manual documentation process more efficient by using methods of natural language processing for multiclass classification of free-text diagnostic reports to automatically document the diagnosis and state of disease of myeloma patients. The first objective was to create a corpus consisting of free-text diagnosis paragraphs of patients with multiple myeloma from German diagnostic reports, together with a manual annotation of relevant data elements by documentation specialists. The second objective was to construct and evaluate a framework using different NLP methods to enable automatic multiclass classification of relevant data elements from free-text diagnostic reports. The main diagnosis paragraph was extracted from the clinical reports of one third of the patients in the multiple myeloma research database of Heidelberg University Hospital, selected at random (737 patients in total). An EDC system was set up and two data entry specialists independently performed manual documentation of at least nine specific data elements for multiple myeloma characterization. Both data entries were compared and assessed by a third specialist, and an annotated text corpus was created. A framework was constructed, consisting of a self-developed package to split multiple diagnosis sequences into several subsequences, four different preprocessing steps to normalize the input data, and two classifiers: a maximum entropy classifier (MEC) and a support vector machine (SVM). In total, 15 different pipelines were examined and assessed by ten-fold cross-validation, reiterated 100 times. As quality indicators, the average error rate and the average F1-score were computed. For significance testing, the approximate randomization test was used. The created annotated corpus consists of 737 different diagnosis paragraphs with a total of 865 coded diagnoses. The dataset is publicly available in the supplementary online files for training and testing of further NLP methods. Both classifiers showed low average error rates (MEC: 1.05; SVM: 0.84) and high F1-scores (MEC: 0.89; SVM: 0.92). However, the results varied widely depending on the classified data element. Preprocessing methods increased this effect and had a significant impact on the classification, both positive and negative. The automatic diagnosis splitter increased the average error rate significantly, even though the F1-score decreased only slightly. The low average error rates and high average F1-scores of each pipeline demonstrate the suitability of the investigated NLP methods. However, it was also shown that there is no best practice for automatic classification of data elements from free-text diagnostic reports.
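
    A minimal sketch of the general pipeline described above: character n-gram TF-IDF features, a logistic regression model standing in for the maximum entropy classifier, a linear SVM, and ten-fold cross-validation scored by macro F1. The texts and labels are toy placeholders, not the Heidelberg corpus, and the preprocessing and diagnosis-splitting steps are omitted.

    ```python
    # Hypothetical sketch of multiclass classification of free-text diagnosis
    # paragraphs with a maximum-entropy-style classifier (logistic regression)
    # and a linear SVM, evaluated by 10-fold cross-validation with macro F1.
    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score

    texts = [
        "multiple myeloma IgG kappa stage III",
        "smouldering multiple myeloma",
        "plasmacytoma of the rib",
        "multiple myeloma IgA lambda stage II",
    ] * 25                                                # placeholder paragraphs
    labels = ["MM", "SMM", "plasmacytoma", "MM"] * 25     # placeholder data elements

    classifiers = {
        "MaxEnt (logistic regression)": LogisticRegression(max_iter=1000),
        "Linear SVM": LinearSVC(),
    }
    for name, clf in classifiers.items():
        pipe = make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5)), clf)
        f1 = cross_val_score(pipe, texts, labels, cv=10, scoring="f1_macro")
        print(f"{name}: mean macro F1 = {f1.mean():.3f}")
    ```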

  12. Detection of stress factors in crop and weed species using hyperspectral remote sensing reflectance

    NASA Astrophysics Data System (ADS)

    Henry, William Brien

    The primary objective of this work was to determine if stress factors such as moisture stress or herbicide injury stress limit the ability to distinguish between weeds and crops using remotely sensed data. Additional objectives included using hyperspectral reflectance data to measure moisture content within a species, and to measure crop injury in response to drift rates of non-selective herbicides. Moisture stress did not reduce the ability to discriminate between species. Regardless of analysis technique, the trend was that as moisture stress increased, so too did the ability to distinguish between species. Signature amplitudes (SA) of the top 5 bands, discrete wavelet transforms (DWT), and multiple indices were promising analysis techniques. Discriminant models created from one year's data set and validated on additional data sets provided, on average, approximately 80% accurate classification among weeds and crop. This suggests that these models are relatively robust and could potentially be used across environmental conditions in field scenarios. Distinguishing between leaves grown at high-moisture stress and no-stress was met with limited success, primarily because there was substantial variation among samples within the treatments. Leaf water potential (LWP) was measured, and these were classified into three categories using indices. Classification accuracies were as high as 68%. The 10 bands most highly correlated to LWP were selected; however, there were no obvious trends or patterns in these top 10 bands with respect to time, species or moisture level, suggesting that LWP is an elusive parameter to quantify spectrally. In order to address herbicide injury stress and its impact on species discrimination, discriminant models were created from combinations of multiple indices. The model created from the second experimental run's data set and validated on the first experimental run's data provided an average of 97% correct classification of soybean and an overall average classification accuracy of 65% for all species. This suggests that these models are relatively robust and could potentially be used across a wide range of herbicide applications in field scenarios. From the pooled data set, a single discriminant model was created with multiple indices that discriminated soybean from weeds 88%, on average, regardless of herbicide, rate or species. Several analysis techniques including multiple indices, signature amplitude with spectral bands as features, and wavelet analysis were employed to distinguish between herbicide-treated and nontreated plants. Classification accuracy using signature amplitude (SA) analysis of paraquat injury on soybean was better than 75% for both 1/2 and 1/8X rates at 1, 4, and 7 DAA. Classification accuracy of paraquat injury on corn was better than 72% for the 1/2X rate at 1, 4, and 7 DAA. These data suggest that hyperspectral reflectance may be used to distinguish between healthy plants and injured plants to which herbicides have been applied; however, the classification accuracies remained at 75% or higher only when the higher rates of herbicide were applied. (Abstract shortened by UMI.)

  13. Sequenced subjective accents for brain-computer interfaces

    NASA Astrophysics Data System (ADS)

    Vlek, R. J.; Schaefer, R. S.; Gielen, C. C. A. M.; Farquhar, J. D. R.; Desain, P.

    2011-06-01

    Subjective accenting is a cognitive process in which identical auditory pulses at an isochronous rate turn into the percept of an accenting pattern. This process can be voluntarily controlled, making it a candidate for communication from human user to machine in a brain-computer interface (BCI) system. In this study we investigated whether subjective accenting is a feasible paradigm for BCI and how its time-structured nature can be exploited for optimal decoding from non-invasive EEG data. Ten subjects perceived and imagined different metric patterns (two-, three- and four-beat) superimposed on a steady metronome. With an offline classification paradigm, we classified imagined accented from non-accented beats on a single trial (0.5 s) level with an average accuracy of 60.4% over all subjects. We show that decoding of imagined accents is also possible with a classifier trained on perception data. Cyclic patterns of accents and non-accents were successfully decoded with a sequence classification algorithm. Classification performances were compared by means of bit rate. Performance in the best scenario translates into an average bit rate of 4.4 bits min⁻¹ over subjects, which makes subjective accenting a promising paradigm for an online auditory BCI.

  14. Measurement systems and indices of miners' exposure to radon daughter products in the air of mines.

    PubMed

    Domański, T

    1990-01-01

    This paper presents a classification of the measurement systems that may be used for the assessment of miners' exposure to radiation in mines. The following systems are described and characterized: the Air Sampling System (ASS), the Environmental Control System (ECS), the Individual Dosimetry System (IDS), the Stream Monitoring System (SMS) and the Exhaust Monitoring System (EMS). The indices for evaluation of miners' working environments, or for assessment of individual or collective miners' exposure, were selected and determined. These are: average expected concentration (CAE), average observed concentration (CAO), average expected exposure cumulation rate (EEXP), average observed exposure cumulation rate (EOBS), and average effective exposure cumulation rate (EEFF). Mathematical formulae for determining all these indicators, according to the type of measurement system used in particular mines, are presented. The reliability of assessment of miners' exposure in particular measurement systems, as well as the role of a possible reference system, are discussed.

  15. Patient-initiated switching between private and public inpatient hospitalisation in Western Australia 1980 – 2001: An analysis using linked data

    PubMed Central

    Moorin, Rachael E; Holman, C D'Arcy J

    2005-01-01

    Background The aim of the study was to identify any distinct behavioural patterns in switching between public and privately insured payment classifications between successive episodes of inpatient care within Western Australia between 1980 and 2001 using a novel 'couplet' method of analysing longitudinal data. Methods The WA Data Linkage System was used to extract all hospital morbidity records from 1980 to 2001. For each individual, episodes of hospitalisation were paired into couplets, which were classified according to the sequential combination of public and privately insured episodes. Behavioural patterns were analysed using the mean intra-couplet interval and proportion of discordant couplets in each year. Results Discordant couplets were consistently associated with the longest intra-couplet intervals (ratio to the average annual mean interval being 1.35), while the shortest intra-couplet intervals were associated with public concordant couplets (0.5). Overall, privately insured patients were more likely to switch payment classification at their next admission compared with public patients (the average rate of loss across all age groups being 0.55% and 2.16% respectively). The rate of loss from the privately insured payment classification was inversely associated with time between episodes (2.49% for intervals of 0 to 13 years and 0.83% for intervals of 14 to 21 years). In all age groups, the average rate of loss from the privately insured payment classification was greater between 1981 and 1990 compared with that between 1991 and 2001 (3.45% and 3.10% per year respectively). Conclusion A small but statistically significant reduction in rate of switching away from PHI over the latter period of observation indicated that health care policies encouraging uptake of PHI implemented in the 1990s by the federal government had some of their intended impact on behaviour. PMID:15978139

  16. Data Clustering and Evolving Fuzzy Decision Tree for Data Base Classification Problems

    NASA Astrophysics Data System (ADS)

    Chang, Pei-Chann; Fan, Chin-Yuan; Wang, Yen-Wen

    Database classification suffers from two well-known difficulties, i.e., the high dimensionality and non-stationary variations within large historical data. This paper presents a hybrid classification model that integrates a case-based reasoning technique, a Fuzzy Decision Tree (FDT), and Genetic Algorithms (GA) to construct a decision-making system for data classification in various database applications. The model is mainly based on the idea that the historical database can be transformed into a smaller case base together with a group of fuzzy decision rules. As a result, the model can respond more accurately to the data being classified through induction over these smaller case-based fuzzy decision trees. Hit rate is used as the performance measure, and the effectiveness of the proposed model is demonstrated by experimental comparison with other approaches on different database classification applications. The average hit rate of the proposed model is the highest among the compared approaches.

  17. A novel application of deep learning for single-lead ECG classification.

    PubMed

    Mathews, Sherin M; Kambhamettu, Chandra; Barner, Kenneth E

    2018-06-04

    Detecting and classifying cardiac arrhythmias is critical to the diagnosis of patients with cardiac abnormalities. In this paper, a novel approach based on deep learning methodology is proposed for the classification of single-lead electrocardiogram (ECG) signals. We demonstrate the application of the Restricted Boltzmann Machine (RBM) and deep belief networks (DBN) for ECG classification following detection of ventricular and supraventricular heartbeats using single-lead ECG. The effectiveness of this proposed algorithm is illustrated using real ECG signals from the widely-used MIT-BIH database. Simulation results demonstrate that with a suitable choice of parameters, RBM and DBN can achieve high average recognition accuracies of ventricular ectopic beats (93.63%) and of supraventricular ectopic beats (95.57%) at a low sampling rate of 114 Hz. Experimental results indicate that classifiers built into this deep learning-based framework achieved state-of-the-art performance at lower sampling rates and with simpler features compared with traditional methods. Further, features extracted at a sampling rate of 114 Hz, when combined with deep learning, provided enough discriminatory power for the classification task. This performance is comparable to that of traditional methods while using a much lower sampling rate and simpler features. Thus, our proposed deep neural network algorithm demonstrates that deep learning-based methods offer accurate ECG classification and could potentially be extended to other physiological signal classifications, such as those in arterial blood pressure (ABP), nerve conduction (EMG), and heart rate variability (HRV) studies. Copyright © 2018. Published by Elsevier Ltd.

  18. Using Baidu Search Index to Predict Dengue Outbreak in China

    NASA Astrophysics Data System (ADS)

    Liu, Kangkang; Wang, Tao; Yang, Zhicong; Huang, Xiaodong; Milinovich, Gabriel J.; Lu, Yi; Jing, Qinlong; Xia, Yao; Zhao, Zhengyang; Yang, Yang; Tong, Shilu; Hu, Wenbiao; Lu, Jiahai

    2016-12-01

    This study identified possible thresholds for predicting dengue fever (DF) outbreaks using the Baidu Search Index (BSI). Time-series classification and regression tree models based on the BSI were used to develop a predictive model for DF outbreaks in Guangzhou and Zhongshan, China. In the regression tree models, the mean autochthonous DF incidence rate increased approximately 30-fold in Guangzhou when the weekly BSI for DF at the lagged moving average of 1-3 weeks was more than 382. When the weekly BSI for DF at the lagged moving average of 1-5 weeks was more than 91.8, there was an approximately 9-fold increase in the mean autochthonous DF incidence rate in Zhongshan. In the classification tree models, the results showed that when the weekly BSI for DF at the lagged moving average of 1-3 weeks was more than 99.3, there was an 89.28% chance of a DF outbreak in Guangzhou, while in Zhongshan, when the weekly BSI for DF at the lagged moving average of 1-5 weeks was more than 68.1, the chance of a DF outbreak rose to 100%. The study indicated that low-cost internet-based surveillance systems can be a valuable complement to traditional DF surveillance in China.
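
    A small sketch of the underlying idea: fit a shallow regression tree to the lagged moving average of a weekly search index so the learned split exposes a threshold-style rule like the ones reported. The weekly series below is synthetic, not the Baidu or dengue surveillance data.

    ```python
    # Hypothetical sketch: a shallow regression tree on the lagged moving average
    # of a weekly search index recovers threshold-style rules for incidence.
    import numpy as np
    import pandas as pd
    from sklearn.tree import DecisionTreeRegressor, export_text

    rng = np.random.default_rng(1)
    weeks = 200
    bsi = pd.Series(rng.gamma(shape=2.0, scale=60.0, size=weeks))   # synthetic weekly index
    incidence = pd.Series(np.where(bsi > 380, 3.0, 0.1) * rng.lognormal(0, 0.3, weeks))

    # Lagged moving average of the index over weeks t-3 .. t-1
    bsi_ma_lag = bsi.rolling(window=3).mean().shift(1)
    df = pd.DataFrame({"bsi_ma_lag": bsi_ma_lag, "incidence": incidence}).dropna()

    tree = DecisionTreeRegressor(max_depth=2, min_samples_leaf=10, random_state=0)
    tree.fit(df[["bsi_ma_lag"]], df["incidence"])
    print(export_text(tree, feature_names=["bsi_ma_lag"]))   # prints the learned thresholds
    ```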

  19. A comparison of blood vessel features and local binary patterns for colorectal polyp classification

    NASA Astrophysics Data System (ADS)

    Gross, Sebastian; Stehle, Thomas; Behrens, Alexander; Auer, Roland; Aach, Til; Winograd, Ron; Trautwein, Christian; Tischendorf, Jens

    2009-02-01

    Colorectal cancer is the third leading cause of cancer deaths in the United States of America for both women and men. By means of early detection, the five-year survival rate can be up to 90%. Polyps can be grouped into three different classes: hyperplastic, adenomatous, and carcinomatous polyps. Hyperplastic polyps are benign and are not likely to develop into cancer. Adenomas, on the other hand, are known to grow into cancer (adenoma-carcinoma sequence). Carcinomas are fully developed cancers and can be easily distinguished from adenomas and hyperplastic polyps. A recent narrow band imaging (NBI) study by Tischendorf et al. has shown that hyperplastic polyps and adenomas can be discriminated by their blood vessel structure. We designed a computer-aided system for the differentiation between hyperplastic and adenomatous polyps. Our development aim is to provide the medical practitioner with an additional objective interpretation of the available image data as well as a confidence measure for the classification. We propose classification features calculated on the basis of the extracted blood vessel structure. We use the combined length of the detected blood vessels, the average perimeter of the vessels and their average gray level value. We achieve a successful classification rate of more than 90% on 102 polyps from our polyp database. The classification results based on these features are compared to the results of Local Binary Patterns (LBP). The results indicate that the implemented features are superior to LBP.

  20. Flare rates and the McIntosh active-region classifications

    NASA Technical Reports Server (NTRS)

    Bornmann, P. L.; Shaw, D.

    1994-01-01

    Multiple linear regression analysis was used to derive the effective solar flare contributions of each of the McIntosh classification parameters. The best fits to the combined average number of M- and X-class X-ray flares per day were found when the flare contributions were assumed to be multiplicative rather than additive. This suggests that nonlinear processes may amplify the effects of the following different active-region properties encoded in the McIntosh classifications: the length of the sunspot group, the size and shape of the largest spot, and the distribution of spots within the group. Since many of these active-region properties are correlated with magnetic field strengths and fluxes, we suggest that the derived correlations reflect a more fundamental relationship between flare production and the magnetic properties of the region. The derived flare contributions for the individual McIntosh parameters can be used to derive a flare rate for each of the three-parameter McIntosh classes. These derived flare rates can be interpreted as smoothed values that may provide better estimates of an active region's expected flare rate when rare classes are reported or when the multiple observing sites report slightly different classifications.
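
    One standard way to fit a multiplicative model of the kind the abstract describes (hedged: not necessarily the authors' exact procedure) is to regress the logarithm of the flare rate on indicator variables for the McIntosh Z, p, and c parameters, so each fitted coefficient exponentiates into a multiplicative factor. The toy table below is illustrative only, not the McIntosh/flare data used in the paper.

    ```python
    # Hypothetical sketch: multiplicative flare-rate contributions via OLS on log(rate).
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import OneHotEncoder
    from sklearn.pipeline import make_pipeline
    from sklearn.compose import make_column_transformer

    df = pd.DataFrame({
        "Z": ["D", "E", "F", "D", "E", "F", "C", "D"],     # modified Zurich class
        "p": ["k", "a", "k", "s", "k", "a", "s", "k"],     # penumbra of largest spot
        "c": ["o", "i", "c", "o", "c", "i", "o", "c"],     # spot distribution
        "rate": [0.4, 0.9, 2.5, 0.2, 1.6, 1.1, 0.05, 0.8], # M+X flares per day (toy values)
    })

    pre = make_column_transformer((OneHotEncoder(drop="first"), ["Z", "p", "c"]))
    model = make_pipeline(pre, LinearRegression())
    model.fit(df[["Z", "p", "c"]], np.log(df["rate"]))

    coefs = model.named_steps["linearregression"].coef_
    print("multiplicative factors:", np.round(np.exp(coefs), 2))
    ```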

  1. Identification of Anisomerous Motor Imagery EEG Signals Based on Complex Algorithms

    PubMed Central

    Zhang, Zhiwen; Duan, Feng; Zhou, Xin; Meng, Zixuan

    2017-01-01

    Motor imagery (MI) electroencephalograph (EEG) signals are widely applied in brain-computer interface (BCI). However, classified MI states are limited, and their classification accuracy rates are low because of the characteristics of nonlinearity and nonstationarity. This study proposes a novel MI pattern recognition system that is based on complex algorithms for classifying MI EEG signals. In electrooculogram (EOG) artifact preprocessing, band-pass filtering is performed to obtain the frequency band of MI-related signals, and then, canonical correlation analysis (CCA) combined with wavelet threshold denoising (WTD) is used for EOG artifact preprocessing. We propose a regularized common spatial pattern (R-CSP) algorithm for EEG feature extraction by incorporating the principle of generic learning. A new classifier combining the K-nearest neighbor (KNN) and support vector machine (SVM) approaches is used to classify four anisomerous states, namely, imaginary movements with the left hand, right foot, and right shoulder and the resting state. The highest classification accuracy rate is 92.5%, and the average classification accuracy rate is 87%. The proposed complex algorithm identification method can significantly improve the identification rate of the minority samples and the overall classification performance. PMID:28874909

  2. Combined EEG-fNIRS decoding of motor attempt and imagery for brain switch control: an offline study in patients with tetraplegia.

    PubMed

    Blokland, Yvonne; Spyrou, Loukianos; Thijssen, Dick; Eijsvogels, Thijs; Colier, Willy; Floor-Westerdijk, Marianne; Vlek, Rutger; Bruhn, Jorgen; Farquhar, Jason

    2014-03-01

    Combining electrophysiological and hemodynamic features is a novel approach for improving current performance of brain switches based on sensorimotor rhythms (SMR). This study was conducted with a dual purpose: to test the feasibility of using a combined electroencephalogram/functional near-infrared spectroscopy (EEG-fNIRS) SMR-based brain switch in patients with tetraplegia, and to examine the performance difference between motor imagery and motor attempt for this user group. A general improvement was found when using both EEG and fNIRS features for classification as compared to using the single-modality EEG classifier, with average classification rates of 79% for attempted movement and 70% for imagined movement. For the control group, rates of 87% and 79% were obtained, respectively, where the "attempted movement" condition was replaced with "actual movement." A combined EEG-fNIRS system might be especially beneficial for users who lack sufficient control of current EEG-based brain switches. The average classification performance in the patient group for attempted movement was significantly higher than for imagined movement using the EEG-only as well as the combined classifier, arguing for the case of a paradigm shift in current brain switch research.

  3. Neural network and wavelet average framing percentage energy for atrial fibrillation classification.

    PubMed

    Daqrouq, K; Alkhateeb, A; Ajour, M N; Morfeq, A

    2014-03-01

    ECG signals are an important source of information in the diagnosis of atrial conduction pathology. Nevertheless, diagnosis by visual inspection is a difficult task. This work introduces a novel wavelet feature extraction method for atrial fibrillation derived from the average framing percentage energy (AFE) of terminal wavelet packet transform (WPT) sub-signals. A probabilistic neural network (PNN) is used for classification. The presented method is shown to be a potentially effective discriminator in an automated diagnostic process. ECG signals taken from the MIT-BIH database are used to classify different arrhythmias together with normal ECG. Several published methods were investigated for comparison, and the best recognition rate was obtained with AFE. The classification achieved an accuracy of 97.92%. The presented system was also analyzed in an additive white Gaussian noise (AWGN) environment, achieving 55.14% at 0 dB and 92.53% at 5 dB. It was concluded that the proposed approach to automating classification is worth pursuing with larger samples to validate and extend the present study. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
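
    A minimal PyWavelets sketch of the feature idea behind AFE: the percentage energy of each terminal wavelet packet sub-signal of a segment. Framing, averaging over frames, and the PNN classifier are omitted for brevity, and the signal below is synthetic rather than an MIT-BIH record.

    ```python
    # Hypothetical sketch: percentage energy of terminal wavelet packet sub-signals,
    # the kind of feature vector that AFE is built on.
    import numpy as np
    import pywt

    fs = 360                                   # Hz, MIT-BIH sampling rate
    t = np.arange(0, 10, 1 / fs)
    ecg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)

    level = 4
    wp = pywt.WaveletPacket(data=ecg, wavelet="db4", mode="symmetric", maxlevel=level)
    nodes = wp.get_level(level, order="freq")          # 2**level terminal sub-signals

    energies = np.array([np.sum(node.data ** 2) for node in nodes])
    percentage_energy = 100.0 * energies / energies.sum()
    print(percentage_energy.round(2))                   # one feature per terminal node
    ```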

  4. Selecting Feature Subsets Based on SVM-RFE and the Overlapping Ratio with Applications in Bioinformatics.

    PubMed

    Lin, Xiaohui; Li, Chao; Zhang, Yanhui; Su, Benzhe; Fan, Meng; Wei, Hai

    2017-12-26

    Feature selection is an important topic in bioinformatics. Defining informative features from complex high dimensional biological data is critical in disease study, drug development, etc. Support vector machine-recursive feature elimination (SVM-RFE) is an efficient feature selection technique that has shown its power in many applications. It ranks the features according to the recursive feature deletion sequence based on SVM. In this study, we propose a method, SVM-RFE-OA, which combines the classification accuracy rate and the average overlapping ratio of the samples to determine the number of features to be selected from the feature rank of SVM-RFE. Meanwhile, to measure the feature weights more accurately, we propose a modified SVM-RFE-OA (M-SVM-RFE-OA) algorithm that temporally screens out the samples lying in a heavy overlapping area in each iteration. The experiments on the eight public biological datasets show that the discriminative ability of the feature subset could be measured more accurately by combining the classification accuracy rate with the average overlapping degree of the samples compared with using the classification accuracy rate alone, and shielding the samples in the overlapping area made the calculation of the feature weights more stable and accurate. The methods proposed in this study can also be used with other RFE techniques to define potential biomarkers from big biological data.
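
    A short scikit-learn sketch of the SVM-RFE ranking step that the paper builds on; the overlapping-ratio criterion (OA) proposed for choosing the number of features is not reproduced here, and the dataset is synthetic rather than one of the eight public biological datasets.

    ```python
    # Hypothetical sketch of SVM-RFE: recursive feature elimination driven by the
    # weights of a linear SVM.
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import RFE
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=200, n_features=50, n_informative=8,
                               random_state=0)

    svc = SVC(kernel="linear", C=1.0)
    rfe = RFE(estimator=svc, n_features_to_select=8, step=1)   # drop one feature per round
    rfe.fit(X, y)

    print("selected feature indices:", [i for i, keep in enumerate(rfe.support_) if keep])
    print("elimination ranking (1 = kept):", rfe.ranking_)
    ```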

  5. Automatic classification of thermal patterns in diabetic foot based on morphological pattern spectrum

    NASA Astrophysics Data System (ADS)

    Hernandez-Contreras, D.; Peregrina-Barreto, H.; Rangel-Magdaleno, J.; Ramirez-Cortes, J.; Renero-Carrillo, F.

    2015-11-01

    This paper presents a novel approach to characterize and identify patterns of temperature in thermographic images of the human foot plant in support of early diagnosis and follow-up of diabetic patients. Composed feature vectors based on 3D morphological pattern spectrum (pecstrum) and relative position, allow the system to quantitatively characterize and discriminate non-diabetic (control) and diabetic (DM) groups. Non-linear classification using neural networks is used for that purpose. A classification rate of 94.33% in average was obtained with the composed feature extraction process proposed in this paper. Performance evaluation and obtained results are presented.

  6. Progress toward the determination of correct classification rates in fire debris analysis.

    PubMed

    Waddell, Erin E; Song, Emma T; Rinke, Caitlin N; Williams, Mary R; Sigman, Michael E

    2013-07-01

    Principal components analysis (PCA), linear discriminant analysis (LDA), and quadratic discriminant analysis (QDA) were used to develop a multistep classification procedure for determining the presence of ignitable liquid residue in fire debris and assigning any ignitable liquid residue present into the classes defined under the American Society for Testing and Materials (ASTM) E 1618-10 standard method. A multistep classification procedure was tested by cross-validation based on model data sets comprised of the time-averaged mass spectra (also referred to as total ion spectra) of commercial ignitable liquids and pyrolysis products from common building materials and household furnishings (referred to simply as substrates). Fire debris samples from laboratory-scale and field test burns were also used to test the model. The optimal model's true-positive rate was 81.3% for cross-validation samples and 70.9% for fire debris samples. The false-positive rate was 9.9% for cross-validation samples and 8.9% for fire debris samples. © 2013 American Academy of Forensic Sciences.
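
    A minimal sketch of a PCA-then-LDA/QDA pipeline of the kind described, evaluated by cross-validated true- and false-positive rates. The binary labels stand for "ignitable liquid residue present/absent"; the data are synthetic, not the total ion spectra or ASTM E1618 classes used in the paper.

    ```python
    # Hypothetical sketch: PCA followed by LDA or QDA, scored by TPR and FPR.
    from sklearn.datasets import make_classification
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                               QuadraticDiscriminantAnalysis)
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import confusion_matrix

    X, y = make_classification(n_samples=300, n_features=120, n_informative=10,
                               random_state=0)

    for clf in (LinearDiscriminantAnalysis(), QuadraticDiscriminantAnalysis()):
        pipe = make_pipeline(PCA(n_components=10), clf)
        pred = cross_val_predict(pipe, X, y, cv=10)
        tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
        print(type(clf).__name__,
              f"TPR={tp / (tp + fn):.3f}", f"FPR={fp / (fp + tn):.3f}")
    ```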

  7. Three-column classification and Schatzker classification: a three- and two-dimensional computed tomography characterisation and analysis of tibial plateau fractures.

    PubMed

    Patange Subba Rao, Sheethal Prasad; Lewis, James; Haddad, Ziad; Paringe, Vishal; Mohanty, Khitish

    2014-10-01

    The aim of the study was to evaluate the inter-observer reliability and intra-observer reproducibility of the three-column classification and Schatzker classification systems using 2D and 3D CT models. Fifty-two consecutive patients with tibial plateau fractures were evaluated by five orthopaedic surgeons. All patients were classified with the Schatzker and three-column classification systems using x-rays and 2D and 3D CT images. The inter-observer reliability was evaluated in the first round and the intra-observer reproducibility was determined during a second round 2 weeks later. The average intra-observer reproducibility for the three-column classification ranged from substantial to excellent in all subclassifications, as compared with the Schatzker classification. The inter-observer kappa values increased from substantial to excellent for the three-column classification and to moderate for the Schatzker classification. The average values for the three-column classification across all categories are as follows: (I-III) k2D = 0.718, 95% CI 0.554-0.864, p < 0.0001 and average k3D = 0.874, 95% CI 0.754-0.890, p < 0.0001. For the Schatzker classification system, the average values for all six categories are as follows: (I-VI) k2D = 0.536, 95% CI 0.365-0.685, p < 0.0001 and average k3D = 0.552, 95% CI 0.405-0.700, p < 0.0001. These values are statistically significant. Statistically significant inter-observer values in both rounds were noted for the three-column classification, indicating excellent agreement. The intra-observer reproducibility for the three-column classification was improved compared with the Schatzker classification. The three-column classification seems to be an effective way to characterise and classify fractures of the tibial plateau.
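
    For illustration, a kappa value like the ones reported can be computed from two observers' category assignments with scikit-learn; the label lists below are toy data, not the study's ratings.

    ```python
    # Hypothetical illustration of Cohen's kappa for inter-observer agreement on
    # fracture classification categories.
    from sklearn.metrics import cohen_kappa_score

    observer_a = ["I", "II", "III", "II", "I", "III", "II", "I", "III", "II"]
    observer_b = ["I", "II", "III", "II", "II", "III", "II", "I", "III", "I"]

    kappa = cohen_kappa_score(observer_a, observer_b)
    print(f"kappa = {kappa:.3f}")   # 1.0 = perfect agreement, 0 = chance-level
    ```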

  8. Improving ECG Classification Accuracy Using an Ensemble of Neural Network Modules

    PubMed Central

    Javadi, Mehrdad; Ebrahimpour, Reza; Sajedin, Atena; Faridi, Soheil; Zakernejad, Shokoufeh

    2011-01-01

    This paper illustrates the use of a combined neural network model based on the Stacked Generalization method for classification of electrocardiogram (ECG) beats. In the conventional Stacked Generalization method, the combiner learns to map the base classifiers' outputs to the target data. We claim that adding the input pattern to the base classifiers' outputs helps the combiner to obtain knowledge about the input space and, as a result, to perform better on the same task. Experimental results support our claim that this additional knowledge of the input space improves the performance of the proposed method, which is called Modified Stacked Generalization. In particular, for classification of 14966 ECG beats that were not previously seen during the training phase, the Modified Stacked Generalization method reduced the error rate by 12.41% in comparison with the best of ten popular classifier fusion methods, including Max, Min, Average, Product, Majority Voting, Borda Count, Decision Templates, Weighted Averaging based on Particle Swarm Optimization, and Stacked Generalization. PMID:22046232
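
    The "add the input pattern to the base classifiers' outputs" idea can be sketched with scikit-learn's StackingClassifier, where passthrough=True feeds the raw features to the combiner alongside the base-model predictions. Generic base models and synthetic data are used here, not the paper's neural networks or the 14966 ECG beats.

    ```python
    # Hypothetical sketch: conventional vs "modified" stacking (raw inputs passed
    # through to the combiner).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import StackingClassifier, RandomForestClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=600, n_features=30, n_informative=12,
                               n_classes=3, n_clusters_per_class=1, random_state=0)

    base = [("mlp", MLPClassifier(hidden_layer_sizes=(30,), max_iter=2000, random_state=0)),
            ("rf", RandomForestClassifier(n_estimators=100, random_state=0))]

    for passthrough in (False, True):   # False = conventional stacking, True = "modified"
        stack = StackingClassifier(estimators=base,
                                   final_estimator=LogisticRegression(max_iter=1000),
                                   passthrough=passthrough, cv=5)
        acc = cross_val_score(stack, X, y, cv=5).mean()
        print(f"passthrough={passthrough}: accuracy={acc:.3f}")
    ```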

  9. Boosting CNN performance for lung texture classification using connected filtering

    NASA Astrophysics Data System (ADS)

    Tarando, Sebastián Roberto; Fetita, Catalin; Kim, Young-Wouk; Cho, Hyoun; Brillet, Pierre-Yves

    2018-02-01

    Infiltrative lung diseases describe a large group of irreversible lung disorders requiring regular follow-up with CT imaging. Quantifying the evolution of the patient status imposes the development of automated classification tools for lung texture. This paper presents an original image pre-processing framework based on locally connected filtering applied in multiresolution, which helps improving the learning process and boost the performance of CNN for lung texture classification. By removing the dense vascular network from images used by the CNN for lung classification, locally connected filters provide a better discrimination between different lung patterns and help regularizing the classification output. The approach was tested in a preliminary evaluation on a 10 patient database of various lung pathologies, showing an increase of 10% in true positive rate (on average for all the cases) with respect to the state of the art cascade of CNNs for this task.

  10. Design of a hybrid model for cardiac arrhythmia classification based on Daubechies wavelet transform.

    PubMed

    Rajagopal, Rekha; Ranganathan, Vidhyapriya

    2018-06-05

    Automation in cardiac arrhythmia classification helps medical professionals make accurate decisions about the patient's health. The aim of this work was to design a hybrid classification model to classify cardiac arrhythmias. The design phase of the classification model comprises the following stages: preprocessing of the cardiac signal by eliminating detail coefficients that contain noise, feature extraction through Daubechies wavelet transform, and arrhythmia classification using a collaborative decision from the K nearest neighbor classifier (KNN) and a support vector machine (SVM). The proposed model is able to classify 5 arrhythmia classes as per the ANSI/AAMI EC57: 1998 classification standard. Level 1 of the proposed model involves classification using the KNN and the classifier is trained with examples from all classes. Level 2 involves classification using an SVM and is trained specifically to classify overlapped classes. The final classification of a test heartbeat pertaining to a particular class is done using the proposed KNN/SVM hybrid model. The experimental results demonstrated that the average sensitivity of the proposed model was 92.56%, the average specificity 99.35%, the average positive predictive value 98.13%, the average F-score 94.5%, and the average accuracy 99.78%. The results obtained using the proposed model were compared with the results of discriminant, tree, and KNN classifiers. The proposed model is able to achieve a high classification accuracy.

  11. [Relationship between crown form of upper central incisors and papilla filling in Chinese Han-nationality youth].

    PubMed

    Yang, X; Le, D; Zhang, Y L; Liang, L Z; Yang, G; Hu, W J

    2016-10-18

    To explore a crown form classification method for the upper central incisor, based on a standardized photography technique, that is more objective and scientific than the traditional classification method, and to analyze the relationship between the crown form of upper central incisors and papilla filling in periodontally healthy Chinese Han-nationality youth. In the study, 180 periodontally healthy Chinese youth (75 males and 105 females) aged 20-30 (24.3±4.5) years were included. With the standardized upper central incisor photography technique, pictures of 360 upper central incisors were obtained. Each tooth was independently classified as triangular, ovoid or square by 13 experienced specialists in prosthodontics, and the final classification was decided by majority vote to ensure objectivity. The standardized digital photographs were also used to evaluate gingival papilla filling, recorded as present or absent by naked-eye observation. The papilla filling rates of the different crown forms were analyzed. Statistical analyses were performed with SPSS 19.0. The proportions of triangular, ovoid and square forms of the upper central incisor in Chinese Han-nationality youth were 31.4% (113/360), 37.2% (134/360) and 31.4% (113/360), respectively, with no statistical difference between males and females. The average κ value between each pair of evaluators was 0.381, rising to 0.563 when each evaluator was compared with the final classification result. In the study, 24 upper central incisors without contact were excluded, and the papilla filling rates of triangular, ovoid and square crowns were 56.4% (62/110), 69.6% (87/125) and 76.2% (77/101), respectively. The papilla filling rate of the square form was higher (P=0.007). The distribution of clinical crown forms of the upper central incisor in Chinese Han-nationality youth was obtained. Compared with the triangular form, the square form was found to favor a gingival papilla that fills the interproximal embrasure space. The consistency of the present classification method for the upper central incisor is not satisfactory, which indicates that a new classification method, more scientific and objective than the present one, is needed.

  12. Pattern classification using an olfactory model with PCA feature selection in electronic noses: study and application.

    PubMed

    Fu, Jun; Huang, Canqin; Xing, Jianguo; Zheng, Junbao

    2012-01-01

    Biologically-inspired models and algorithms are considered promising sensor array signal processing methods for electronic noses. Feature selection is one of the most important issues in developing robust pattern recognition models in machine learning. This paper describes an investigation into the classification performance of a bionic olfactory model as the dimensionality of the input feature vector (outer factor) and the number of its parallel channels (inner factor) increase. The principal component analysis technique was applied for feature selection and dimension reduction. Two data sets, of three classes of wine derived from different cultivars and five classes of green tea derived from five different provinces of China, were used for the experiments. In the former case the results showed that the average correct classification rate increased as more principal components were included in the feature vector. In the latter case the results showed that sufficient parallel channels should be reserved in the model to avoid pattern space crowding. We concluded that 6~8 parallel channels of the model, with a principal component feature vector retaining at least 90% cumulative variance, are adequate for a classification task of 3~5 pattern classes, considering the trade-off between time consumption and classification rate.
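
    A small sketch of the 90% cumulative-variance criterion mentioned above: keep the smallest number of principal components whose cumulative explained variance reaches 0.90. The sensor-array matrix is synthetic, not the wine or tea data.

    ```python
    # Hypothetical sketch: number of principal components for >= 90% cumulative variance.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 16)) @ rng.normal(size=(16, 16))   # correlated synthetic responses

    pca = PCA().fit(StandardScaler().fit_transform(X))
    cumvar = np.cumsum(pca.explained_variance_ratio_)
    n_components = int(np.searchsorted(cumvar, 0.90) + 1)
    print("components for >=90% cumulative variance:", n_components)
    ```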

  13. Stand up time in tunnel base on rock mass rating Bieniawski 1989

    NASA Astrophysics Data System (ADS)

    Nata, Refky Adi; M. S., Murad

    2017-11-01

    RMR (Rock Mass Rating), also known as the geomechanics classification, has been revised repeatedly and adopted as an international standard for rock mass rating. The classification was developed by Bieniawski (1973, 1976, 1989). The goals of this research were to determine the rock class according to the Bieniawski 1989 rock mass rating, to estimate the stand-up time of the rock mass, and to estimate the span over which the tunnel can be opened without support in an underground mine. The parameters measured were: strength of the intact rock material, RQD (Rock Quality Designation), spacing of discontinuities, condition of discontinuities, groundwater, and the adjustment for discontinuity orientation. Laboratory testing of coal samples gave a UCS of 30.583 MPa, which has a rating of 4 in the Bieniawski classification; siltstone gave a UCS of 35.749 MPa, also rated 4. The measured RQD averaged 97.38% for coal, rated 20, and 90.10% for siltstone, also rated 20. The average discontinuity spacing measured in the field was 22.6 cm for coal, rated 10, and 148 cm for siltstone, rated 15. Persistence varied in the field: 57.28 cm for coal, rated 6, and 47 cm for siltstone, also rated 6. Based on the Bieniawski 1989 rock mass rating table, the aperture in coal was 0.41 mm, which lies in the range 0.1-1 mm and is rated 4; the aperture in siltstone was 21.43 mm, which lies in the range > 5 mm and is rated 0. Roughness was classified as rough for both coal and siltstone, rated 5. Infilling was classified as none for both, rated 6. Weathering was classified as highly weathered for both, rated 1. The groundwater condition was classified as dripping for coal, rated 4, and completely dry for siltstone, rated 15. The discontinuity orientation in coal is parallel to the tunnel axis, in the range 251°-290°, which is unfavorable and carries an adjustment of -10. In siltstone the discontinuities are also parallel to the tunnel axis, in the range 241°-300°. Based on the Bieniawski 1989 weighting parameters, it is concluded that the rock mass falls in class II with a value of 62, with a stand-up time of up to 6 months and an unsupported tunnel span of up to 8 meters.

  14. Weight-elimination neural networks applied to coronary surgery mortality prediction.

    PubMed

    Ennett, Colleen M; Frize, Monique

    2003-06-01

    The objective was to assess the effectiveness of the weight-elimination cost function in improving classification performance of artificial neural networks (ANNs) and to observe how changing the a priori distribution of the training set affects network performance. Backpropagation feedforward ANNs with and without weight-elimination estimated mortality for coronary artery surgery patients. The ANNs were trained and tested on cases with 32 input variables describing the patient's medical history; the output variable was in-hospital mortality (mortality rates: training 3.7%, test 3.8%). Artificial training sets with mortality rates of 20%, 50%, and 80% were created to observe the impact of training with a higher-than-normal prevalence. When the results were averaged, weight-elimination networks achieved higher sensitivity rates than those without weight-elimination. Networks trained on higher-than-normal prevalence achieved higher sensitivity rates at the cost of lower specificity and correct classification. The weight-elimination cost function can improve the classification performance when the network is trained with a higher-than-normal prevalence. A network trained with a moderately high artificial mortality rate (artificial mortality rate of 20%) can improve the sensitivity of the model without significantly affecting other aspects of the model's performance. The ANN mortality model achieved comparable performance as additive and statistical models for coronary surgery mortality estimation in the literature.
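
    For reference, the weight-elimination penalty commonly attributed to Weigend and colleagues (assumed here to be the form used, since the abstract does not spell it out) adds a term proportional to the sum of (w²/w0²)/(1 + w²/w0²) to the backpropagation error, shrinking small weights toward zero while penalising large weights roughly equally. A minimal sketch of the penalty and its gradient follows; it is a generic illustration, not the authors' network.

    ```python
    # Minimal sketch of a weight-elimination penalty and its gradient (assumed
    # Weigend-style form: lambda * sum (w^2/w0^2) / (1 + w^2/w0^2)).
    import numpy as np

    def weight_elimination_penalty(w, w0=1.0, lam=1e-3):
        """Return the penalty term and its gradient with respect to the weights w."""
        r = (w / w0) ** 2
        penalty = lam * np.sum(r / (1.0 + r))
        grad = lam * (2.0 * w / w0**2) / (1.0 + r) ** 2
        return penalty, grad

    w = np.array([-2.0, -0.1, 0.0, 0.05, 1.5])
    penalty, grad = weight_elimination_penalty(w)
    print(penalty, grad)    # large weights contribute ~lam each; small weights are pushed to 0
    ```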

  15. Yarn-dyed fabric defect classification based on convolutional neural network

    NASA Astrophysics Data System (ADS)

    Jing, Junfeng; Dong, Amei; Li, Pengfei; Zhang, Kaibing

    2017-09-01

    Considering that manual inspection of the yarn-dyed fabric can be time consuming and inefficient, we propose a yarn-dyed fabric defect classification method by using a convolutional neural network (CNN) based on a modified AlexNet. CNN shows powerful ability in performing feature extraction and fusion by simulating the learning mechanism of human brain. The local response normalization layers in AlexNet are replaced by the batch normalization layers, which can enhance both the computational efficiency and classification accuracy. In the training process of the network, the characteristics of the defect are extracted step by step and the essential features of the image can be obtained from the fusion of the edge details with several convolution operations. Then the max-pooling layers, the dropout layers, and the fully connected layers are employed in the classification model to reduce the computation cost and extract more precise features of the defective fabric. Finally, the results of the defect classification are predicted by the softmax function. The experimental results show promising performance with an acceptable average classification rate and strong robustness on yarn-dyed fabric defect classification.

  16. Intra-individual gait patterns across different time-scales as revealed by means of a supervised learning model using kernel-based discriminant regression.

    PubMed

    Horst, Fabian; Eekhoff, Alexander; Newell, Karl M; Schöllhorn, Wolfgang I

    2017-01-01

    Traditionally, gait analysis has been centered on the idea of average behavior and normality. On one hand, clinical diagnoses and therapeutic interventions typically assume that average gait patterns remain constant over time. On the other hand, it is well known that all our movements are accompanied by a certain amount of variability, which does not allow us to make two identical steps. The purpose of this study was to examine changes in the intra-individual gait patterns across different time-scales (i.e., tens-of-mins, tens-of-hours). Nine healthy subjects performed 15 gait trials at a self-selected speed on 6 sessions within one day (duration between two subsequent sessions from 10 to 90 mins). For each trial, time-continuous ground reaction forces and lower body joint angles were measured. A supervised learning model using a kernel-based discriminant regression was applied for classifying sessions within individual gait patterns. Discernable characteristics of intra-individual gait patterns could be distinguished between repeated sessions by classification rates of 67.8 ± 8.8% and 86.3 ± 7.9% for the six-session-classification of ground reaction forces and lower body joint angles, respectively. Furthermore, the one-on-one-classification showed that increasing classification rates go along with increasing time durations between two sessions and indicate that changes of gait patterns appear at different time-scales. Discernable characteristics between repeated sessions indicate continuous intrinsic changes in intra-individual gait patterns and suggest a predominant role of deterministic processes in human motor control and learning. Natural changes of gait patterns without any externally induced injury or intervention may reflect continuous adaptations of the motor system over several time-scales. Accordingly, the modelling of walking by means of average gait patterns that are assumed to be near constant over time needs to be reconsidered in the context of these findings, especially towards more individualized and situational diagnoses and therapy.

  17. On Adaptive Cell-Averaging CFAR (Constant False-Alarm Rate) Radar Signal Detection

    DTIC Science & Technology

    1987-10-01

    Final technical report (October 1987), Syracuse University: on adaptive cell-averaging constant false-alarm rate (CFAR) radar signal detection. One approach to adaptive detection in nonstationary noise and clutter background is to compare the processed target signal to an adaptive...

  18. A Ternary Hybrid EEG-NIRS Brain-Computer Interface for the Classification of Brain Activation Patterns during Mental Arithmetic, Motor Imagery, and Idle State.

    PubMed

    Shin, Jaeyoung; Kwon, Jinuk; Im, Chang-Hwan

    2018-01-01

    The performance of a brain-computer interface (BCI) can be enhanced by simultaneously using two or more modalities to record brain activity, which is generally referred to as a hybrid BCI. To date, many BCI researchers have tried to implement a hybrid BCI system by combining electroencephalography (EEG) and functional near-infrared spectroscopy (NIRS) to improve the overall accuracy of binary classification. However, since hybrid EEG-NIRS BCI, which will be denoted by hBCI in this paper, has not been applied to ternary classification problems, paradigms and classification strategies appropriate for ternary classification using hBCI are not well investigated. Here we propose the use of an hBCI for the classification of three brain activation patterns elicited by mental arithmetic, motor imagery, and idle state, with the aim to elevate the information transfer rate (ITR) of hBCI by increasing the number of classes while minimizing the loss of accuracy. EEG electrodes were placed over the prefrontal cortex and the central cortex, and NIRS optodes were placed only on the forehead. The ternary classification problem was decomposed into three binary classification problems using the "one-versus-one" (OVO) classification strategy to apply the filter-bank common spatial patterns filter to EEG data. A 10 × 10-fold cross validation was performed using shrinkage linear discriminant analysis (sLDA) to evaluate the average classification accuracies for EEG-BCI, NIRS-BCI, and hBCI when the meta-classification method was adopted to enhance classification accuracy. The ternary classification accuracies for EEG-BCI, NIRS-BCI, and hBCI were 76.1 ± 12.8, 64.1 ± 9.7, and 82.2 ± 10.2%, respectively. The classification accuracy of the proposed hBCI was thus significantly higher than those of the other BCIs ( p < 0.005). The average ITR for the proposed hBCI was calculated to be 4.70 ± 1.92 bits/minute, which was 34.3% higher than that reported for a previous binary hBCI study.
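
    A compact sketch of the one-versus-one decomposition with a shrinkage LDA base classifier, assuming scikit-learn and feature vectors already extracted from the EEG/NIRS epochs; the filter-bank CSP step and the meta-classification are omitted, and the label coding and fold count are assumptions.

    ```python
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score
    from sklearn.multiclass import OneVsOneClassifier

    # X: (n_trials, n_features) features extracted from EEG and/or NIRS epochs;
    # y: labels in {0: mental arithmetic, 1: motor imagery, 2: idle state}.
    slda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
    ternary_clf = OneVsOneClassifier(slda)   # three pairwise binary problems

    def ternary_accuracy(X, y, folds=10):
        """Average cross-validated accuracy of the OVO + shrinkage-LDA pipeline."""
        return cross_val_score(ternary_clf, X, y, cv=folds).mean()
    ```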

  19. Influence of nuclei segmentation on breast cancer malignancy classification

    NASA Astrophysics Data System (ADS)

    Jelen, Lukasz; Fevens, Thomas; Krzyzak, Adam

    2009-02-01

    Breast cancer is one of the deadliest cancers affecting middle-aged women. Accurate diagnosis and prognosis are crucial to reduce the high death rate. Numerous diagnostic tools are available for breast cancer diagnosis. In this paper we discuss the role of nuclear segmentation from fine needle aspiration biopsy (FNA) slides and its influence on malignancy classification. Classification of malignancy plays a very important role during the diagnosis process of breast cancer. Out of all cancer diagnostic tools, FNA slides provide the most valuable information about the cancer malignancy grade, which helps to choose an appropriate treatment. This process involves assessing numerous nuclear features, and therefore precise segmentation of nuclei is very important. In this work we compare three powerful segmentation approaches and test their impact on the classification of breast cancer malignancy. The studied approaches involve level set segmentation, fuzzy c-means segmentation and textural segmentation based on the co-occurrence matrix. Segmented nuclei were used to extract nuclear features for malignancy classification. For classification purposes, four different classifiers were trained and tested with the previously extracted features. The compared classifiers are the Multilayer Perceptron (MLP), Self-Organizing Maps (SOM), Principal Component-based Neural Network (PCA) and Support Vector Machines (SVM). The presented results show that level set segmentation yields the best results of the three compared approaches and leads to good feature extraction, with the lowest average error rate of 6.51% across the four classifiers. The best single result was recorded for the multilayer perceptron, with an error rate of 3.07% using fuzzy c-means segmentation.

  20. Chocolate Classification by an Electronic Nose with Pressure Controlled Generated Stimulation

    PubMed Central

    Valdez, Luis F.; Gutiérrez, Juan Manuel

    2016-01-01

    In this work, we analyze the response of a Metal Oxide Gas Sensor (MOGS) array to a flow-controlled stimulus generated in a pressure-controlled canister produced by a homemade olfactometer to build an E-nose. The E-nose is capable of identifying chocolate among the 26 analyzed chocolate bar samples and of recognizing four features (chocolate type, extra ingredient, sweetener and expiration date status). The data analysis tools used were Principal Component Analysis (PCA) and Artificial Neural Networks (ANNs). The average classification rate for chocolate identification was 81.3%, with 0.99 accuracy (Acc), 0.86 precision (Prc), 0.84 sensitivity (Sen) and 0.99 specificity (Spe) on the test set. The chocolate feature recognition gives a classification rate of 85.36% with 0.96 Acc, 0.86 Prc, 0.85 Sen and 0.96 Spe. In addition, a preliminary sample aging analysis was made. The results show that the pressure-controlled generated stimulus is reliable for this type of study. PMID:27775628

  1. Chocolate Classification by an Electronic Nose with Pressure Controlled Generated Stimulation.

    PubMed

    Valdez, Luis F; Gutiérrez, Juan Manuel

    2016-10-20

    In this work, we analyze the response of a Metal Oxide Gas Sensor (MOGS) array to a flow-controlled stimulus generated in a pressure-controlled canister produced by a homemade olfactometer to build an E-nose. The E-nose is capable of identifying chocolate among the 26 analyzed chocolate bar samples and of recognizing four features (chocolate type, extra ingredient, sweetener and expiration date status). The data analysis tools used were Principal Component Analysis (PCA) and Artificial Neural Networks (ANNs). The average classification rate for chocolate identification was 81.3%, with 0.99 accuracy (Acc), 0.86 precision (Prc), 0.84 sensitivity (Sen) and 0.99 specificity (Spe) on the test set. The chocolate feature recognition gives a classification rate of 85.36% with 0.96 Acc, 0.86 Prc, 0.85 Sen and 0.96 Spe. In addition, a preliminary sample aging analysis was made. The results show that the pressure-controlled generated stimulus is reliable for this type of study.

  2. Pattern Classification Using an Olfactory Model with PCA Feature Selection in Electronic Noses: Study and Application

    PubMed Central

    Fu, Jun; Huang, Canqin; Xing, Jianguo; Zheng, Junbao

    2012-01-01

    Biologically-inspired models and algorithms are considered promising sensor array signal processing methods for electronic noses. Feature selection is one of the most important issues in developing robust pattern recognition models in machine learning. This paper investigates the classification performance of a bionic olfactory model as the dimensionality of the input feature vector (outer factor) and the number of its parallel channels (inner factor) increase. The principal component analysis technique was applied for feature selection and dimension reduction. Two data sets, three classes of wine derived from different cultivars and five classes of green tea derived from five different provinces of China, were used for the experiments. In the former case, the results showed that the average correct classification rate increased as more principal components were added to the feature vector. In the latter case, the results showed that sufficient parallel channels should be reserved in the model to avoid pattern space crowding. We conclude that 6∼8 channels of the model, with a principal component feature vector retaining at least 90% cumulative variance, are adequate for a classification task of 3∼5 pattern classes, considering the trade-off between time consumption and classification rate. PMID:22736979
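
    The 90% cumulative-variance criterion mentioned above can be expressed directly with scikit-learn's PCA, which accepts a fractional n_components; the sensor-response matrices and the threshold value here are assumptions for illustration.

    ```python
    from sklearn.decomposition import PCA

    def pca_feature_vector(X_train, X_test, cum_variance=0.90):
        """Project sensor-array responses onto the leading principal components
        that together explain at least `cum_variance` of the training variance."""
        pca = PCA(n_components=cum_variance)   # fractional value = cumulative variance
        Z_train = pca.fit_transform(X_train)
        Z_test = pca.transform(X_test)
        print("components kept:", pca.n_components_,
              "cumulative variance:", round(pca.explained_variance_ratio_.sum(), 3))
        return Z_train, Z_test
    ```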

  3. Electromyogram whitening for improved classification accuracy in upper limb prosthesis control.

    PubMed

    Liu, Lukai; Liu, Pu; Clancy, Edward A; Scheme, Erik; Englehart

    2013-09-01

    Time and frequency domain features of the surface electromyogram (EMG) signal acquired from multiple channels have frequently been investigated for use in controlling upper-limb prostheses. A common control method is EMG-based motion classification. We propose the use of EMG signal whitening as a preprocessing step in EMG-based motion classification. Whitening decorrelates the EMG signal and has been shown to be advantageous in other EMG applications including EMG amplitude estimation and EMG-force processing. In a study of ten intact subjects and five amputees with up to 11 motion classes and ten electrode channels, we found that the coefficient of variation of time domain features (mean absolute value, average signal length and normalized zero crossing rate) was significantly reduced due to whitening. When using these features along with autoregressive power spectrum coefficients, whitening added approximately five percentage points to classification accuracy when small window lengths were considered.
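
    A minimal sketch of one common way to whiten a single EMG channel: inverse-filtering with a least-squares autoregressive model so that the prediction residual is approximately decorrelated. The filter order and fitting method are assumptions and this is not the adaptive whitening filter design used in the study.

    ```python
    import numpy as np
    from scipy.signal import lfilter

    def temporal_whiten(x, order=4):
        """Whiten one EMG channel: fit an AR(order) predictor by least squares,
        then return the prediction residual (the approximately white signal)."""
        x = np.asarray(x, dtype=float)
        x = x - x.mean()
        # Least-squares problem: x[n] ~= sum_k a[k] * x[n-k-1]
        X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
        a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
        # Inverse (whitening) filter: e[n] = x[n] - sum_k a[k] * x[n-k-1]
        return lfilter(np.concatenate(([1.0], -a)), [1.0], x)
    ```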

  4. Spinal Cord Injury Impairs Cardiovascular Capacity in Elite Wheelchair Rugby Athletes.

    PubMed

    Gee, Cameron M; Currie, Katharine D; Phillips, Aaron A; Squair, Jordan W; Krassioukov, Andrei V

    2017-12-19

    To examine differences in heart rate (HR) responses during international wheelchair rugby competition between athletes with and without a cervical spinal cord injury (SCI) and across standardized sport classifications. Observational study. The 2015 Parapan American Games wheelchair rugby competition. Forty-three male athletes (31 ± 8 years) with a cervical SCI (n = 32) or tetraequivalent impairment (non-SCI, n = 11). Average and peak HR (HRavg and HRpeak, respectively). To characterize HR responses in accordance with an athlete's International Wheelchair Rugby Federation (IWRF) classification, we separated athletes into 3 groups: group I (IWRF classification 0.5-1.5, n = 15); group II (IWRF classification 2.0, n = 15); and group III (IWRF classification 2.5-3.5, n = 13). Athletes with SCI had lower HRavg (111 ± 14 bpm vs 155 ± 13 bpm) and HRpeak (133 ± 12 bpm vs 178 ± 13 bpm) compared with non-SCI (both P < 0.001). Average HR was higher in group III than in group I (136 ± 25 bpm vs 115 ± 20 bpm, P = 0.045); however, SCI athletes showed no difference in HRavg or HRpeak between groups. Within group III, SCI athletes had lower HRavg (115 ± 6 bpm vs 160 ± 8 bpm) and HRpeak (135 ± 11 bpm vs 183 ± 11 bpm) than non-SCI athletes (both P < 0.001). This study is the first to demonstrate attenuated HR responses during competition in SCI compared with non-SCI athletes, likely due to injury to spinal autonomic pathways. Among athletes with SCI, IWRF classification was not related to differences in HR. Specific assessment of autonomic function after SCI may be able to predict HR during competition, and consideration of autonomic impairments may improve the classification process.

  5. 34 CFR 222.66 - How does the Secretary determine whether a fiscally independent local educational agency is...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... independent LEA, as defined in § 222.2(c), is making a reasonable tax effort as required by § 222.63 or § 222... (referred to in this part as “tax rates”), as defined in § 222.2(c), with the tax rates of its generally... classification of real property is equal to at least 95 percent of each of the average tax rates of its generally...

  6. 34 CFR 222.66 - How does the Secretary determine whether a fiscally independent local educational agency is...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... independent LEA, as defined in § 222.2(c), is making a reasonable tax effort as required by § 222.63 or § 222... (referred to in this part as “tax rates”), as defined in § 222.2(c), with the tax rates of its generally... classification of real property is equal to at least 95 percent of each of the average tax rates of its generally...

  7. A sibling adoption study of adult attachment: The influence of shared environment on attachment states of mind

    PubMed Central

    Caspers, Kristin; Yucuis, Rebecca; Troutman, Beth; Arndt, Stephan; Langbehn, Douglas

    2009-01-01

    This study extends existing research investigating sibling concordance on attachment by examining concordance for adult attachment in a sample of 126 genetically unrelated sibling pairs. The Adult Attachment Interview (George, Kaplan, & Main, 1985; Main, Goldwyn, & Hesse, 2003) was used to assess states of mind with regard to attachment. The average age of the participants was 39 years old. The distribution of attachment classifications was independent of adoptive status. Attachment concordance rates were unassociated with gender concordance and sibling age difference. Concordance for autonomous/non-autonomous classifications was significant at 61% as was concordance for primary classifications at 53%. The concordance rate for not-unresolved/unresolved was non-significant at 67%. Our findings demonstrate similarity of working models of attachment between siblings independent of genetic relatedness between siblings and generations (i.e., parent and child). These findings extend previous research by further implicating shared environment as a major influence on sibling similarities on organized patterns of attachment in adulthood. The non-significant concordance for the unresolved classification suggests that unresolved loss or trauma may be less influenced by shared environment and more likely to be influenced by post-childhood experiences or genetic factors. PMID:18049934

  8. Wastewater Treatment by a Prototype Slow Rate Land Treatment System,

    DTIC Science & Technology

    1981-08-01

    CRREL Report 81-14 (final report, August 1981): Wastewater Treatment by a Prototype Slow Rate Land Treatment System. With this application rate, crop yields and N uptake improved significantly. Percolate nitrogen was mainly soluble N, chiefly nitrate; nitrate concentrations in the percolate were found to...

  9. A Parallel Adaboost-Backpropagation Neural Network for Massive Image Dataset Classification

    NASA Astrophysics Data System (ADS)

    Cao, Jianfang; Chen, Lichao; Wang, Min; Shi, Hao; Tian, Yun

    2016-12-01

    Image classification uses computers to simulate human understanding and cognition of images by automatically categorizing images. This study proposes a faster image classification approach that parallelizes the traditional Adaboost-Backpropagation (BP) neural network using the MapReduce parallel programming model. First, we construct a strong classifier by assembling the outputs of 15 BP neural networks (which are individually regarded as weak classifiers) based on the Adaboost algorithm. Second, we design Map and Reduce tasks for both the parallel Adaboost-BP neural network and the feature extraction algorithm. Finally, we establish an automated classification model by building a Hadoop cluster. We use the Pascal VOC2007 and Caltech256 datasets to train and test the classification model. The results are superior to those obtained using traditional Adaboost-BP neural network or parallel BP neural network approaches. Our approach increased the average classification accuracy rate by approximately 14.5% and 26.0% compared to the traditional Adaboost-BP neural network and parallel BP neural network, respectively. Furthermore, the proposed approach requires less computation time and scales very well as evaluated by speedup, sizeup and scaleup. The proposed approach may provide a foundation for automated large-scale image classification and demonstrates practical value.
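
    A small sketch of the ensemble idea, assuming two-class labels coded as -1/+1 and scikit-learn MLPs as the "BP network" weak learners; weighted bootstrap resampling stands in for sample weighting, and the MapReduce/Hadoop parallelisation described in the paper is omitted.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def train_adaboost_mlp(X, y, n_learners=15, hidden=32, seed=0):
        """Discrete AdaBoost over small MLP weak learners (labels must be -1/+1)."""
        rng = np.random.default_rng(seed)
        n = len(y)
        w = np.full(n, 1.0 / n)
        learners, alphas = [], []
        for _ in range(n_learners):
            idx = rng.choice(n, size=n, replace=True, p=w)   # weighted bootstrap
            clf = MLPClassifier(hidden_layer_sizes=(hidden,), max_iter=500,
                                random_state=seed).fit(X[idx], y[idx])
            pred = clf.predict(X)
            err = np.clip(np.sum(w * (pred != y)), 1e-10, 1.0 - 1e-10)
            alpha = 0.5 * np.log((1.0 - err) / err)
            w = w * np.exp(alpha * (pred != y))              # up-weight mistakes
            w = w / w.sum()
            learners.append(clf)
            alphas.append(alpha)
        return learners, alphas

    def predict_adaboost(learners, alphas, X):
        votes = sum(a * clf.predict(X) for a, clf in zip(alphas, learners))
        return np.sign(votes)                                # weighted majority vote
    ```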

  10. A Parallel Adaboost-Backpropagation Neural Network for Massive Image Dataset Classification.

    PubMed

    Cao, Jianfang; Chen, Lichao; Wang, Min; Shi, Hao; Tian, Yun

    2016-12-01

    Image classification uses computers to simulate human understanding and cognition of images by automatically categorizing images. This study proposes a faster image classification approach that parallelizes the traditional Adaboost-Backpropagation (BP) neural network using the MapReduce parallel programming model. First, we construct a strong classifier by assembling the outputs of 15 BP neural networks (which are individually regarded as weak classifiers) based on the Adaboost algorithm. Second, we design Map and Reduce tasks for both the parallel Adaboost-BP neural network and the feature extraction algorithm. Finally, we establish an automated classification model by building a Hadoop cluster. We use the Pascal VOC2007 and Caltech256 datasets to train and test the classification model. The results are superior to those obtained using traditional Adaboost-BP neural network or parallel BP neural network approaches. Our approach increased the average classification accuracy rate by approximately 14.5% and 26.0% compared to the traditional Adaboost-BP neural network and parallel BP neural network, respectively. Furthermore, the proposed approach requires less computation time and scales very well as evaluated by speedup, sizeup and scaleup. The proposed approach may provide a foundation for automated large-scale image classification and demonstrates practical value.

  11. A Parallel Adaboost-Backpropagation Neural Network for Massive Image Dataset Classification

    PubMed Central

    Cao, Jianfang; Chen, Lichao; Wang, Min; Shi, Hao; Tian, Yun

    2016-01-01

    Image classification uses computers to simulate human understanding and cognition of images by automatically categorizing images. This study proposes a faster image classification approach that parallelizes the traditional Adaboost-Backpropagation (BP) neural network using the MapReduce parallel programming model. First, we construct a strong classifier by assembling the outputs of 15 BP neural networks (which are individually regarded as weak classifiers) based on the Adaboost algorithm. Second, we design Map and Reduce tasks for both the parallel Adaboost-BP neural network and the feature extraction algorithm. Finally, we establish an automated classification model by building a Hadoop cluster. We use the Pascal VOC2007 and Caltech256 datasets to train and test the classification model. The results are superior to those obtained using traditional Adaboost-BP neural network or parallel BP neural network approaches. Our approach increased the average classification accuracy rate by approximately 14.5% and 26.0% compared to the traditional Adaboost-BP neural network and parallel BP neural network, respectively. Furthermore, the proposed approach requires less computation time and scales very well as evaluated by speedup, sizeup and scaleup. The proposed approach may provide a foundation for automated large-scale image classification and demonstrates practical value. PMID:27905520

  12. Accuracy of pedicle screw insertion by AIRO® intraoperative CT in complex spinal deformity assessed by a new classification based on technical complexity of screw insertion.

    PubMed

    Rajasekaran, S; Bhushan, Manindra; Aiyer, Siddharth; Kanna, Rishi; Shetty, Ajoy Prasad

    2018-01-09

    To develop a classification based on the technical complexity encountered during pedicle screw insertion and to evaluate the performance of the AIRO® CT navigation system based on this classification, in the clinical scenario of complex spinal deformity. 31 complex spinal deformity correction surgeries were prospectively analyzed for performance of the AIRO® mobile CT-based navigation system. Pedicles were classified according to complexity of insertion into five types. Analysis was performed to estimate the accuracy of screw placement and the time for screw insertion. Breaches greater than 2 mm were considered for analysis. 452 pedicle screws were inserted (T1-T6: 116; T7-T12: 171; L1-S1: 165). The average Cobb angle was 68.3° (range 60°-104°). There were 242 grade 2 pedicles, 133 grade 3, and 77 grade 4, and 44 pedicles were unfit for pedicle screw insertion. We noted 27 pedicle screw breaches (medial: 10; lateral: 16; anterior: 1). Among the lateral breaches (n = 16), ten screws were planned for in-out-in pedicle screw insertion. Average screw insertion time was 1.76 ± 0.89 min. After accounting for planned breaches, the effective breach rate was 3.8%, corresponding to 96.2% accuracy for pedicle screw placement. This classification helps compare the accuracy of screw insertion across a range of conditions by considering the complexity of screw insertion. Considering the clinical scenario of complex pedicle anatomy in spinal deformity, AIRO® navigation showed an excellent accuracy rate of 96.2%.

  13. Digital mammography: observer performance study of the effects of pixel size on radiologists' characterization of malignant and benign microcalcifications

    NASA Astrophysics Data System (ADS)

    Chan, Heang-Ping; Helvie, Mark A.; Petrick, Nicholas; Sahiner, Berkman; Adler, Dorit D.; Blane, Caroline E.; Joynt, Lynn K.; Paramagul, Chintana; Roubidoux, Marilyn A.; Wilson, Todd E.; Hadjiiski, Lubomir M.; Goodsitt, Mitchell M.

    1999-05-01

    A receiver operating characteristic (ROC) experiment was conducted to evaluate the effects of pixel size on the characterization of mammographic microcalcifications. Digital mammograms were obtained by digitizing screen-film mammograms with a laser film scanner. One hundred twelve two-view mammograms with biopsy-proven microcalcifications were digitized at a pixel size of 35 micrometer x 35 micrometer. A region of interest (ROI) containing the microcalcifications was extracted from each image. ROI images with pixel sizes of 70 micrometers, 105 micrometers, and 140 micrometers were derived from the ROI of 35 micrometer pixel size by averaging 2 x 2, 3 x 3, and 4 x 4 neighboring pixels, respectively. The ROI images were printed on film with a laser imager. Seven MQSA-approved radiologists participated as observers. The likelihood of malignancy of the microcalcifications was rated on a 10-point confidence rating scale and analyzed with ROC methodology. The classification accuracy was quantified by the area, Az, under the ROC curve. The statistical significance of the differences in the Az values for different pixel sizes was estimated with the Dorfman-Berbaum-Metz (DBM) method for multi-reader, multi-case ROC data. It was found that five of the seven radiologists demonstrated a higher classification accuracy with the 70 micrometer or 105 micrometer images. The average Az also showed a higher classification accuracy in the range of 70 to 105 micrometer pixel size. However, the differences in Az between different pixel sizes did not achieve statistical significance. The low specificity of image features of microcalcifications and the large interobserver and intraobserver variabilities may have contributed to the relatively weak dependence of classification accuracy on pixel size.
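
    The derivation of the coarser-pixel ROIs by neighbourhood averaging can be written as a simple block-mean operation; the function below is a generic sketch and the array names are hypothetical.

    ```python
    import numpy as np

    def block_average(roi, factor):
        """Average factor x factor neighbourhoods to simulate a larger pixel size
        (factor 2, 3 or 4 maps 35 um pixels to 70, 105 or 140 um)."""
        h = (roi.shape[0] // factor) * factor
        w = (roi.shape[1] // factor) * factor
        roi = roi[:h, :w].astype(float)
        return roi.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

    # roi_70  = block_average(roi_35, 2)
    # roi_105 = block_average(roi_35, 3)
    # roi_140 = block_average(roi_35, 4)
    ```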

  14. Improved wetland remote sensing in Yellowstone National Park using classification trees to combine TM imagery and ancillary environmental data

    USGS Publications Warehouse

    Wright, C.; Gallant, Alisa L.

    2007-01-01

    The U.S. Fish and Wildlife Service uses the term palustrine wetland to describe vegetated wetlands traditionally identified as marsh, bog, fen, swamp, or wet meadow. Landsat TM imagery was combined with image texture and ancillary environmental data to model probabilities of palustrine wetland occurrence in Yellowstone National Park using classification trees. Model training and test locations were identified from National Wetlands Inventory maps, and classification trees were built for seven years spanning a range of annual precipitation. At a coarse level, palustrine wetland was separated from upland. At a finer level, five palustrine wetland types were discriminated: aquatic bed (PAB), emergent (PEM), forested (PFO), scrub–shrub (PSS), and unconsolidated shore (PUS). TM-derived variables alone were relatively accurate at separating wetland from upland, but model error rates dropped incrementally as image texture, DEM-derived terrain variables, and other ancillary GIS layers were added. For classification trees making use of all available predictors, average overall test error rates were 7.8% for palustrine wetland/upland models and 17.0% for palustrine wetland type models, with consistent accuracies across years. However, models were prone to wetland over-prediction. While the predominant PEM class was classified with omission and commission error rates less than 14%, we had difficulty identifying the PAB and PSS classes. Ancillary vegetation information greatly improved PSS classification and moderately improved PFO discrimination. Association with geothermal areas distinguished PUS wetlands. Wetland over-prediction was exacerbated by class imbalance in likely combination with spatial and spectral limitations of the TM sensor. Wetland probability surfaces may be more informative than hard classification, and appear to respond to climate-driven wetland variability. The developed method is portable, relatively easy to implement, and should be applicable in other settings and over larger extents.
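
    A minimal sketch of a classification tree on stacked per-pixel predictors, assuming scikit-learn; class_weight="balanced" is shown as one generic way to counter the class-imbalance-driven over-prediction noted above, and the depth and leaf settings are illustrative assumptions rather than the authors' configuration.

    ```python
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    # X: per-pixel predictors (TM bands, image texture, DEM-derived terrain
    # variables, ancillary GIS layers); y: wetland/upland or wetland-type labels
    # sampled from National Wetlands Inventory maps.
    tree = DecisionTreeClassifier(
        max_depth=12,                 # illustrative complexity limit
        min_samples_leaf=50,          # smooths noisy leaves
        class_weight="balanced",      # one way to counter class imbalance
    )

    def overall_test_error(X, y, folds=5):
        """Estimated overall test error of the tree via cross-validation."""
        return 1.0 - cross_val_score(tree, X, y, cv=folds).mean()
    ```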

  15. Regularised extreme learning machine with misclassification cost and rejection cost for gene expression data classification.

    PubMed

    Lu, Huijuan; Wei, Shasha; Zhou, Zili; Miao, Yanzi; Lu, Yi

    2015-01-01

    The main purpose of traditional classification algorithms in bioinformatics applications is to achieve better classification accuracy. However, these algorithms cannot meet the requirement of minimising the average misclassification cost. In this paper, a new algorithm, cost-sensitive regularised extreme learning machine (CS-RELM), is proposed, using probability estimation and misclassification cost to reconstruct the classification results. By improving the classification accuracy of a small group of samples with higher misclassification cost, the new CS-RELM can minimise the classification cost. A 'rejection cost' was integrated into the CS-RELM algorithm to further reduce the average misclassification cost. Using the Colon Tumour dataset and the SRBCT (Small Round Blue Cell Tumour) dataset, CS-RELM was compared with other cost-sensitive algorithms such as the extreme learning machine (ELM), cost-sensitive extreme learning machine, regularised extreme learning machine and cost-sensitive support vector machine (SVM). The results of the experiments show that CS-RELM with embedded rejection cost could reduce the average cost of misclassification and made more credible classification decisions than the other methods.
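
    The decision rule that trades off misclassification cost against a rejection cost can be written generically from class posteriors. The sketch below is not the CS-RELM algorithm itself, only the cost-sensitive decision step it builds on, with hypothetical array names.

    ```python
    import numpy as np

    def decide_with_rejection(posteriors, cost_matrix, rejection_cost):
        """Per sample, choose the class with minimum expected misclassification
        cost, or reject when rejection is cheaper.
        posteriors: (n_samples, n_classes) class probability estimates.
        cost_matrix[i, j]: cost of predicting class j when the true class is i."""
        expected_cost = posteriors @ cost_matrix          # (n_samples, n_classes)
        best_class = expected_cost.argmin(axis=1)
        best_cost = expected_cost.min(axis=1)
        return np.where(best_cost <= rejection_cost, best_class, -1)  # -1 = reject
    ```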

  16. In-Line Sorting of Harumanis Mango Based on External Quality Using Visible Imaging

    PubMed Central

    Ibrahim, Mohd Firdaus; Ahmad Sa’ad, Fathinul Syahir; Zakaria, Ammar; Md Shakaff, Ali Yeon

    2016-01-01

    The conventional method of grading Harumanis mango is time-consuming, costly and affected by human bias. In this research, an in-line system was developed to classify Harumanis mango using computer vision. The system is able to identify irregularity of mango shape and estimate mango mass. A group of images of mangoes of different sizes and shapes was used as the database set. Important features such as length, height, centroid and parameter were extracted from each image. The Fourier descriptor and size-shape parameters were used to describe the mango shape, while the disk method was used to estimate the mass of the mango. Four features were selected by stepwise discriminant analysis, which was effective in sorting regular and misshapen mango. The volume from the water displacement method was compared with the volume estimated by image processing using the paired t-test and the Bland-Altman method. The difference between the two measurements was not significant (P > 0.05). The average correct classification rate for shape was 98% for a training set composed of 180 mangoes. The model was validated with another testing set consisting of 140 mangoes, which gave a success rate of 92%. The same set was used to evaluate the performance of mass estimation. The average success rate of classification for grading based on mass was 94%. The results indicate that the in-line sorting system using machine vision has great potential for automatic fruit sorting according to shape and mass. PMID:27801799

  17. In-Line Sorting of Harumanis Mango Based on External Quality Using Visible Imaging.

    PubMed

    Ibrahim, Mohd Firdaus; Ahmad Sa'ad, Fathinul Syahir; Zakaria, Ammar; Md Shakaff, Ali Yeon

    2016-10-27

    The conventional method of grading Harumanis mango is time-consuming, costly and affected by human bias. In this research, an in-line system was developed to classify Harumanis mango using computer vision. The system is able to identify irregularity of mango shape and estimate mango mass. A group of images of mangoes of different sizes and shapes was used as the database set. Important features such as length, height, centroid and parameter were extracted from each image. The Fourier descriptor and size-shape parameters were used to describe the mango shape, while the disk method was used to estimate the mass of the mango. Four features were selected by stepwise discriminant analysis, which was effective in sorting regular and misshapen mango. The volume from the water displacement method was compared with the volume estimated by image processing using the paired t-test and the Bland-Altman method. The difference between the two measurements was not significant (P > 0.05). The average correct classification rate for shape was 98% for a training set composed of 180 mangoes. The model was validated with another testing set consisting of 140 mangoes, which gave a success rate of 92%. The same set was used to evaluate the performance of mass estimation. The average success rate of classification for grading based on mass was 94%. The results indicate that the in-line sorting system using machine vision has great potential for automatic fruit sorting according to shape and mass.

  18. Using Different Standardized Methods for Species Identification: A Case Study Using Beaks from Three Ommastrephid Species

    NASA Astrophysics Data System (ADS)

    Hu, Guanyu; Fang, Zhou; Liu, Bilin; Chen, Xinjun; Staples, Kevin; Chen, Yong

    2018-04-01

    The cephalopod beak is a vital hard structure with a stable configuration and has been widely used for the identification of cephalopod species. This study was conducted to determine the best standardization method for identifying different species by measuring 12 morphological variables of the beaks of Illex argentinus, Ommastrephes bartramii, and Dosidicus gigas that were collected by Chinese jigging vessels. To remove the effects of size, these morphometric variables were standardized using three methods. The average ratios of the upper beak morphological variables and upper crest length of O. bartramii and D. gigas were found to be greater than those of I. argentinus. However, for lower beaks, only the average of LRL (lower rostrum length)/ LCL (lower crest length), LRW (lower rostrum width)/ LCL, and LLWL (lower lateral wall length)/ LCL of O. bartramii and D. gigas were greater than those of I. argentinus. The ratios of beak morphological variables and crest length were found to be all significantly different among the three species ( P < 0.001). Among the three standardization methods, the correct classification rate of stepwise discriminant analysis (SDA) was the highest using the ratios of beak morphological variables and crest length. Compared with hood length, the correct classification rate was slightly higher when using beak variables standardized by crest length using an allometric model. The correct classification rate of the lower beak was also found to be greater than that of the upper beak. This study indicates that the ratios of beak morphological variables to crest length could be used for interspecies and intraspecies identification. Meanwhile, the lower beak variables were found to be more effective than upper beak variables in classifying beaks found in the stomachs of predators.

  19. [Study on clinical effectiveness of acupuncture and moxibustion on acute Bell's facial paralysis: randomized controlled clinical observation].

    PubMed

    Wu, Bin; Li, Ning; Liu, Yi; Huang, Chang-qiong; Zhang, Yong-ling

    2006-03-01

    To investigate the adverse effects of acupuncture on prognosis, and the effectiveness of acupuncture combined with far infrared ray, in patients with acute Bell's facial paralysis within 48 h of onset. A clinically randomized controlled trial was used, and the patients were divided into 3 groups: group A (early acupuncture), group B (acupuncture combined with far infrared ray) and group C (acupuncture after 7 days). Facial nerve functional classification at onset, 7 days after onset and after treatment, the clinical cure rate at 6-month follow-up, the average time to cure, and the time to cure of complete facial paralysis were observed in the 3 groups. There were no significant differences among the 3 groups in facial nerve functional classification 7 days after onset, the clinical cure rate at 6-month follow-up or the average time to cure (P > 0.05), but the time to cure of complete facial paralysis in groups A and B was shorter than that in group C (P < 0.05). Patients with acute Bell's facial paralysis can be treated with acupuncture and moxibustion, and traditional moxibustion can be replaced by far infrared ray.

  20. [Electroencephalogram Feature Selection Based on Correlation Coefficient Analysis].

    PubMed

    Zhou, Jinzhi; Tang, Xiaofang

    2015-08-01

    In order to improve classification accuracy with a small amount of motor imagery training data in the development of brain-computer interface (BCI) systems, we propose a method to automatically select characteristic parameters based on correlation coefficient analysis. Using the five subjects' data from dataset IVa of the 2005 BCI Competition, we applied short-time Fourier transform (STFT) and correlation coefficient calculation to reduce the dimensionality of the raw electroencephalogram, then performed feature extraction based on common spatial patterns (CSP) and classification by linear discriminant analysis (LDA). Simulation results showed that the average classification accuracy was higher with the correlation coefficient feature selection method than without it. Compared with a support vector machine (SVM) feature optimization algorithm, correlation coefficient analysis selected better parameters and improved classification accuracy.
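
    A small sketch of filter-style feature selection by Pearson correlation with the class label, followed by LDA classification; the CSP step is omitted and the number of retained features is an assumption.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    def select_by_correlation(X, y, n_keep=30):
        """Return indices of the n_keep features with the largest absolute
        Pearson correlation with the class label."""
        yc = y - y.mean()
        Xc = X - X.mean(axis=0)
        r = (Xc * yc[:, None]).sum(axis=0) / (
            np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum()) + 1e-12)
        return np.argsort(np.abs(r))[::-1][:n_keep]

    # keep = select_by_correlation(X_train, y_train)
    # acc = cross_val_score(LinearDiscriminantAnalysis(),
    #                       X_train[:, keep], y_train, cv=10).mean()
    ```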

  1. Shift-invariant discrete wavelet transform analysis for retinal image classification.

    PubMed

    Khademi, April; Krishnan, Sridhar

    2007-12-01

    This work addresses retinal image classification, for which a novel analysis system was developed. From the compressed domain, the proposed scheme extracts textural features from wavelet coefficients, which describe the relative homogeneity of localized areas of the retinal images. Since the discrete wavelet transform (DWT) is shift-variant, a shift-invariant DWT was explored to ensure that a robust feature set was extracted. To combat the small database size, linear discriminant analysis classification was used with the leave-one-out method. 38 normal and 48 abnormal images (exudates, large drusens, fine drusens, choroidal neovascularization, central vein and artery occlusion, histoplasmosis, arteriosclerotic retinopathy, hemi-central retinal vein occlusion and more) were used, and a specificity of 79% and sensitivity of 85.4% were achieved (the average classification rate is 82.2%). The success of the system can be attributed to the highly robust feature set, which included translation, scale and semi-rotational features. Additionally, this technique is database independent since the features were specifically tuned to the pathologies of the human eye.

  2. Age group classification and gender detection based on forced expiratory spirometry.

    PubMed

    Cosgun, Sema; Ozbek, I Yucel

    2015-08-01

    This paper investigates the utility of the forced expiratory spirometry (FES) test with efficient machine learning algorithms for the purpose of gender detection and age group classification. The proposed method has three main stages: feature extraction, training of the models and detection. In the first stage, features are extracted from the volume-time curve and the expiratory flow-volume loop obtained from the FES test. In the second stage, probabilistic models for each gender and age group are constructed by training Gaussian mixture models (GMMs) and a support vector machine (SVM) algorithm. In the final stage, the gender (or age group) of the test subject is estimated by using the trained GMM (or SVM) model. Experiments were evaluated on a large database from 4571 subjects. The experimental results show that the average correct classification rate of both the GMM and SVM methods based on the FES test is more than 99.3% for gender classification and 96.8% for age group classification.
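
    A minimal sketch of the GMM branch of such a system: one mixture is fitted per class on the spirometry-derived features and a test subject is assigned to the class whose mixture gives the highest log-likelihood. The component count and array names are assumptions.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fit_class_gmms(X, y, n_components=4, seed=0):
        """Fit one Gaussian mixture per class (e.g., per gender or age group)."""
        return {c: GaussianMixture(n_components=n_components,
                                   random_state=seed).fit(X[y == c])
                for c in np.unique(y)}

    def classify_max_likelihood(gmms, X):
        """Assign each sample to the class whose GMM gives the highest log-likelihood."""
        classes = sorted(gmms)
        loglik = np.column_stack([gmms[c].score_samples(X) for c in classes])
        return np.asarray(classes)[loglik.argmax(axis=1)]
    ```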

  3. Classification of oral cancers using Raman spectroscopy of serum

    NASA Astrophysics Data System (ADS)

    Sahu, Aditi; Talathi, Sneha; Sawant, Sharada; Krishna, C. Murali

    2014-03-01

    Oral cancers are the sixth most common malignancy worldwide, with low 5-year disease-free survival rates attributable to late detection due to a lack of reliable screening modalities. Our in vivo Raman spectroscopy studies have demonstrated classification of normal and tumor tissue as well as cancer field effects (CFE), the earliest events in oral cancers. In view of limitations of this approach, such as the requirement for on-site instrumentation and stringent experimental conditions, the feasibility of classifying normal and cancer using serum was previously explored with 532 nm excitation; in that study, strong resonance features of β-carotenes, present differentially in normal and pathological conditions, were observed. In the present study, Raman spectra of sera of 36 buccal mucosa cancer, 33 tongue cancer and 17 healthy subjects were recorded using a Raman microprobe coupled with a 40X objective and 785 nm excitation, a well-known excitation source for biomedical applications. To eliminate heterogeneity, the average of 3 spectra recorded from each sample was subjected to PC-LDA followed by leave-one-out cross-validation. Findings indicate an average classification efficiency of ~70% for normal versus cancer. Buccal mucosa and tongue cancer sera could also be classified with an efficiency of ~68%. Of the two cancers, buccal mucosa cancer and normal could be classified with the higher efficiency. The findings are quite comparable to those of our earlier study, which suggests that significant differences other than β-carotenes exist between normal and cancerous samples that can be exploited for classification. Prospectively, extensive validation studies will be undertaken to confirm these findings.

  4. Exploring the utility of narrative analysis in diagnostic decision making: picture-bound reference, elaboration, and fetal alcohol spectrum disorders.

    PubMed

    Thorne, John C; Coggins, Truman E; Carmichael Olson, Heather; Astley, Susan J

    2007-04-01

    To evaluate classification accuracy and clinical feasibility of a narrative analysis tool for identifying children with a fetal alcohol spectrum disorder (FASD). Picture-elicited narratives generated by 16 age-matched pairs of school-aged children (FASD vs. typical development [TD]) were coded for semantic elaboration and reference strategy by judges who were unaware of age, gender, and group membership of the participants. Receiver operating characteristic (ROC) curves were used to examine the classification accuracy of the resulting set of narrative measures for making 2 classifications: (a) for the 16 children diagnosed with FASD, low performance (n = 7) versus average performance (n = 9) on a standardized expressive language task and (b) FASD (n = 16) versus TD (n = 16). Combining the rates of semantic elaboration and pragmatically inappropriate reference perfectly matched a classification based on performance on the standardized language task. More importantly, the rate of ambiguous nominal reference was highly accurate in classifying children with an FASD regardless of their performance on the standardized language task (area under the ROC curve = .863, confidence interval = .736-.991). Results support further study of the diagnostic utility of narrative analysis using discourse level measures of elaboration and children's strategic use of reference.

  5. The Adam Walsh Act: An Examination of Sex Offender Risk Classification Systems.

    PubMed

    Zgoba, Kristen M; Miner, Michael; Levenson, Jill; Knight, Raymond; Letourneau, Elizabeth; Thornton, David

    2016-12-01

    This study was designed to compare the Adam Walsh Act (AWA) classification tiers with actuarial risk assessment instruments and existing state classification schemes in their respective abilities to identify sex offenders at high risk to re-offend. Data from 1,789 adult sex offenders released from prison in four states were collected (Minnesota, New Jersey, Florida, and South Carolina). On average, the sexual recidivism rate was approximately 5% at 5 years and 10% at 10 years. AWA Tier 2 offenders had higher Static-99R scores and higher recidivism rates than Tier 3 offenders, and in Florida, these inverse correlations were statistically significant. Actuarial measures and existing state tier systems, in contrast, did a better job of identifying high-risk offenders and recidivists. As well, we examined the distribution of risk assessment scores within and across tier categories, finding that a majority of sex offenders fall into AWA Tier 3, but more than half score low or moderately low on the Static-99R. The results indicate that the AWA sex offender classification scheme is a poor indicator of relative risk and is likely to result in a system that is less effective in protecting the public than those currently implemented in the states studied. © The Author(s) 2015.

  6. [The application of Delphi method in improving the score table for the hygienic quantifying and classification of hotels].

    PubMed

    Wang, Zi-yun; Liu, Yong-quan; Wang, Hong-bo; Zheng, Yang; Wu, Qi; Yang, Xia; Wu, Yong-wei; Zhao, Yi-ming

    2009-04-01

    To choose suitable indicators by means of the Delphi method and expert panel consultations and to improve the score table for classifying the hygienic condition of hotels so that it can be widely used nationwide. A two-round Delphi consultation was held among 78 experts from 18 provinces, municipalities and autonomous regions to choose suitable indicators. The indicators were selected according to the importance recognized by the experts. The average length of service in public health of the experts was (21.08 +/- 5.78) years and the average coefficient of experts' authority C(r) was 0.89 +/- 0.07. The response rates of the two rounds of consultation were 98.72% (77/78) and 100.00% (77/77). The average feedback times were (8.49 +/- 4.48) d and (5.86 +/- 2.28) d, and the difference between the two rounds was statistically significant (t = 4.60, P < 0.01). Kendall's coefficients were 0.26 (chi(2) = 723.63, P < 0.01) and 0.32 (chi(2) = 635.65, P < 0.01), and opinions among the experts became consistent. The score table for the hygienic quantifying and classification of hotels was composed of three first-class indicators (hygienic management, hygienic facilities and hygienic practices) and 36 second-class indicators. The weight coefficients of the three first-class indicators were 0.35, 0.34 and 0.31. The Delphi method can be used in a large-scale consultation among experts and is conducive to improving the score table for hygienic quantifying and classification.

  7. A new interferential multispectral image compression algorithm based on adaptive classification and curve-fitting

    NASA Astrophysics Data System (ADS)

    Wang, Ke-Yan; Li, Yun-Song; Liu, Kai; Wu, Cheng-Ke

    2008-08-01

    A novel compression algorithm for interferential multispectral images based on adaptive classification and curve-fitting is proposed. The image is first partitioned adaptively into major-interference region and minor-interference region. Different approximating functions are then constructed for two kinds of regions respectively. For the major interference region, some typical interferential curves are selected to predict other curves. These typical curves are then processed by curve-fitting method. For the minor interference region, the data of each interferential curve are independently approximated. Finally the approximating errors of two regions are entropy coded. The experimental results show that, compared with JPEG2000, the proposed algorithm not only decreases the average output bit-rate by about 0.2 bit/pixel for lossless compression, but also improves the reconstructed images and reduces the spectral distortion greatly, especially at high bit-rate for lossy compression.
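
    A toy sketch of the curve-fitting step: approximate one interferential curve with a low-order polynomial and keep the integer residuals for entropy coding. Polynomial fitting stands in for the paper's approximating functions, and the degree is an assumption.

    ```python
    import numpy as np

    def fit_and_residual(curve, degree=8):
        """Fit a polynomial to one interferential curve and return the fit
        coefficients plus the rounded residuals destined for the entropy coder."""
        x = np.arange(len(curve))
        coeffs = np.polyfit(x, curve, degree)
        approx = np.polyval(coeffs, x)
        residual = np.round(np.asarray(curve) - approx).astype(np.int32)
        return coeffs, residual
    ```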

  8. [Endoprosthesis failure in the ankle joint : Histopathological diagnostics and classification].

    PubMed

    Müller, S; Walther, M; Röser, A; Krenn, V

    2017-03-01

    Endoprostheses of the ankle joint show relatively high revision rates of 3.29 revisions per 100 component years. The aims of this study were the application and modification of the consensus classification of the synovia-like interface membrane (SLIM) for periprosthetic failure of the ankle joint, the etiological clarification of periprosthetic pseudocysts and a detailed measurement of proliferative activity (Ki67) in the region of osteolysis. Tissue samples from 159 patients were examined according to the criteria of the standardized consensus classification. Of these, 117 cases were derived from periprosthetic membranes of the ankle. The control group included 42 tissue specimens from the hip and knee joints. Particle identification and characterization were carried out using the particle algorithm. An immunohistochemical examination of Ki67 proliferation was performed in all cases of ankle pseudocysts and 19 control cases. The consensus classification of SLIM is transferrable to endoprosthetic failure of the ankle joint. Periprosthetic pseudocysts with the histopathological characteristics of the appropriate SLIM subtype were detectable in 39 cases of ankle joint endoprostheses (33.3%). The mean Ki67 index was 14%, showing an increased proliferation rate in periprosthetic pseudocysts of the ankle (p-value 0.02037). In periprosthetic pseudocysts, an above-average detection rate of abrasion-induced type 1 SLIM (51.3%) with an increased Ki67 proliferation fraction (p-value 0.02037) was found, which can be interpreted as locally destructive intraosseous synovialitis. This can be the reason for the formation of pseudocystic osteolysis caused by high mechanical stress in ankle endoprostheses. A simplified diagnostic classification scoring system for dysfunctional endoprostheses of the ankle is proposed for the collation of periprosthetic pseudocysts, ossifications and the Ki67 proliferation fraction.

  9. Towards a ternary NIRS-BCI: single-trial classification of verbal fluency task, Stroop task and unconstrained rest

    NASA Astrophysics Data System (ADS)

    Schudlo, Larissa C.; Chau, Tom

    2015-12-01

    Objective. The majority of near-infrared spectroscopy (NIRS) brain-computer interface (BCI) studies have investigated binary classification problems. Limited work has considered differentiation of more than two mental states, or multi-class differentiation of higher-level cognitive tasks using measurements outside of the anterior prefrontal cortex. Improvements in accuracies are needed to deliver effective communication with a multi-class NIRS system. We investigated the feasibility of a ternary NIRS-BCI that supports mental states corresponding to verbal fluency task (VFT) performance, Stroop task performance, and unconstrained rest using prefrontal and parietal measurements. Approach. Prefrontal and parietal NIRS signals were acquired from 11 able-bodied adults during rest and performance of the VFT or Stroop task. Classification was performed offline using bagging with a linear discriminant base classifier trained on a 10 dimensional feature set. Main results. VFT, Stroop task and rest were classified at an average accuracy of 71.7% ± 7.9%. The ternary classification system provided a statistically significant improvement in information transfer rate relative to a binary system controlled by either mental task (0.87 ± 0.35 bits/min versus 0.73 ± 0.24 bits/min). Significance. These results suggest that effective communication can be achieved with a ternary NIRS-BCI that supports VFT, Stroop task and rest via measurements from the frontal and parietal cortices. Further development of such a system is warranted. Accurate ternary classification can enhance communication rates offered by NIRS-BCIs, improving the practicality of this technology.

  10. Hydrologic classification of rivers based on cluster analysis of dimensionless hydrologic signatures: Applications for environmental instream flows

    NASA Astrophysics Data System (ADS)

    Praskievicz, S. J.; Luo, C.

    2017-12-01

    Classification of rivers is useful for a variety of purposes, such as generating and testing hypotheses about watershed controls on hydrology, predicting hydrologic variables for ungaged rivers, and setting goals for river management. In this research, we present a bottom-up (based on machine learning) river classification designed to investigate the underlying physical processes governing rivers' hydrologic regimes. The classification was developed for the entire state of Alabama, based on 248 United States Geological Survey (USGS) stream gages that met criteria for length and completeness of records. Five dimensionless hydrologic signatures were derived for each gage: slope of the flow duration curve (indicator of flow variability), baseflow index (ratio of baseflow to average streamflow), rising limb density (number of rising limbs per unit time), runoff ratio (ratio of long-term average streamflow to long-term average precipitation), and streamflow elasticity (sensitivity of streamflow to precipitation). We used a Bayesian clustering algorithm to classify the gages, based on the five hydrologic signatures, into distinct hydrologic regimes. We then used classification and regression trees (CART) to predict each gaged river's membership in different hydrologic regimes based on climatic and watershed variables. Using existing geospatial data, we applied the CART analysis to classify ungaged streams in Alabama, with the National Hydrography Dataset Plus (NHDPlus) catchment (average area 3 km2) as the unit of classification. The results of the classification can be used for meeting management and conservation objectives in Alabama, such as developing statewide standards for environmental instream flows. Such hydrologic classification approaches are promising for contributing to process-based understanding of river systems.
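
    A compact sketch of the two-stage idea, with a Gaussian mixture standing in for the Bayesian clustering step and a scikit-learn decision tree for the CART stage; the array names, number of regimes and tree depth are assumptions.

    ```python
    from sklearn.mixture import GaussianMixture
    from sklearn.preprocessing import StandardScaler
    from sklearn.tree import DecisionTreeClassifier

    def classify_regimes(signatures, watershed_vars, n_regimes=6, seed=0):
        """signatures: (n_gages, 5) dimensionless hydrologic signatures;
        watershed_vars: climatic/watershed predictors for the same gages."""
        Z = StandardScaler().fit_transform(signatures)
        gmm = GaussianMixture(n_components=n_regimes, random_state=seed).fit(Z)
        regime = gmm.predict(Z)                  # cluster label = hydrologic regime
        cart = DecisionTreeClassifier(max_depth=6, random_state=seed)
        cart.fit(watershed_vars, regime)         # predicts regimes for ungaged sites
        return gmm, cart, regime
    ```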

  11. The Long-Term Effectiveness of Reading Recovery and the Cost-Efficiency of Reading Recovery Relative to the Learning Disabled Classification Rate

    ERIC Educational Resources Information Center

    Galluzzo, Charles A.

    2010-01-01

    There is a great deal of research supporting Reading Recovery as a successful reading intervention program that assists below level first graders readers in closing the gap in reading at the same level of their average peers. There is a lack of research that analyses the cost-effectiveness of the Reading Recovery program compared to the cost in…

  12. Grouping patients for masseter muscle genotype-phenotype studies.

    PubMed

    Moawad, Hadwah Abdelmatloub; Sinanan, Andrea C M; Lewis, Mark P; Hunt, Nigel P

    2012-03-01

    To use various facial classifications, including either/both vertical and horizontal facial criteria, to assess their effects on the interpretation of masseter muscle (MM) gene expression. Fresh MM biopsies were obtained from 29 patients (age, 16-36 years) with various facial phenotypes. Based on clinical and cephalometric analysis, patients were grouped using three different classifications: (1) basic vertical, (2) basic horizontal, and (3) combined vertical and horizontal. Gene expression levels of the myosin heavy chain genes MYH1, MYH2, MYH3, MYH6, MYH7, and MYH8 were recorded using quantitative reverse transcriptase polymerase chain reaction (RT-PCR) and were related to the various classifications. The significance level for statistical analysis was set at P ≤ .05. Using classification 1, none of the MYH genes were found to be significantly different between long face (LF) patients and the average vertical group. Using classification 2, MYH3, MYH6, and MYH7 genes were found to be significantly upregulated in retrognathic patients compared with prognathic and average horizontal groups. Using classification 3, only the MYH7 gene was found to be significantly upregulated in retrognathic LF compared with prognathic LF, prognathic average vertical faces, and average vertical and horizontal groups. The use of basic vertical or basic horizontal facial classifications may not be sufficient for genetics-based studies of facial phenotypes. Prognathic and retrognathic facial phenotypes have different MM gene expressions; therefore, it is not recommended to combine them into one single group, even though they may have a similar vertical facial phenotype.

  13. General tensor discriminant analysis and gabor features for gait recognition.

    PubMed

    Tao, Dacheng; Li, Xuelong; Wu, Xindong; Maybank, Stephen J

    2007-10-01

    Traditional image representations are not suited to conventional classification methods, such as linear discriminant analysis (LDA), because of the undersample problem (USP): the dimensionality of the feature space is much higher than the number of training samples. Motivated by the successes of two-dimensional LDA (2DLDA) for face recognition, we develop a general tensor discriminant analysis (GTDA) as a preprocessing step for LDA. The benefits of GTDA compared with existing preprocessing methods, e.g., principal component analysis (PCA) and 2DLDA, include 1) the USP is reduced in subsequent classification by, for example, LDA; 2) the discriminative information in the training tensors is preserved; and 3) GTDA provides stable recognition rates because the alternating projection optimization algorithm used to obtain a solution of GTDA converges, while that of 2DLDA does not. We use human gait recognition to validate the proposed GTDA. The averaged gait images are utilized for gait representation. Given the popularity of Gabor-function-based image decompositions for image understanding and object recognition, we develop three different Gabor-function-based image representations: 1) the GaborD representation is the sum of Gabor filter responses over directions, 2) GaborS is the sum of Gabor filter responses over scales, and 3) GaborSD is the sum of Gabor filter responses over scales and directions. The GaborD, GaborS and GaborSD representations are applied to the problem of recognizing people from their averaged gait images. A large number of experiments were carried out to evaluate the effectiveness (recognition rate) of gait recognition based on first obtaining a Gabor, GaborD, GaborS or GaborSD image representation, then using GTDA to extract features and finally using LDA for classification. The proposed methods achieved good performance for gait recognition based on image sequences from the USF HumanID Database. Experimental comparisons are made with nine state-of-the-art classification methods in gait recognition.
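
    A short sketch of the GaborD-style representation: magnitude responses of a Gabor filter bank summed over directions at a fixed scale, assuming scikit-image; the frequency and number of directions are illustrative assumptions.

    ```python
    import numpy as np
    from skimage.filters import gabor

    def gabor_d(avg_gait_image, frequency=0.2, n_directions=8):
        """Sum of Gabor magnitude responses over directions (single scale).
        avg_gait_image: 2-D float array (e.g., an averaged gait silhouette)."""
        acc = np.zeros_like(avg_gait_image, dtype=float)
        for k in range(n_directions):
            theta = k * np.pi / n_directions
            real, imag = gabor(avg_gait_image, frequency=frequency, theta=theta)
            acc += np.hypot(real, imag)       # magnitude of the complex response
        return acc
    ```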

  14. A Temporal Mining Framework for Classifying Un-Evenly Spaced Clinical Data: An Approach for Building Effective Clinical Decision-Making System.

    PubMed

    Jane, Nancy Yesudhas; Nehemiah, Khanna Harichandran; Arputharaj, Kannan

    2016-01-01

    Clinical time-series data acquired from electronic health records (EHR) are liable to temporal complexities such as irregular observations, missing values and time-constrained attributes that make the knowledge discovery process challenging. This paper presents a temporal rough set induced neuro-fuzzy (TRiNF) mining framework that handles these complexities and builds an effective clinical decision-making system. TRiNF provides two functionalities, namely temporal data acquisition (TDA) and temporal classification. In TDA, a time-series forecasting model is constructed by adopting an improved double exponential smoothing method. The forecasting model is used in missing value imputation and temporal pattern extraction. The relevant attributes are selected using a temporal-pattern-based rough set approach. In temporal classification, a classification model is built with the selected attributes using a temporal pattern induced neuro-fuzzy classifier. For experimentation, this work uses two clinical time-series datasets of hepatitis and thrombosis patients. The experimental results show that with the proposed TRiNF framework there is a significant reduction in the error rate, yielding an average classification accuracy of 92.59% for the hepatitis dataset and 91.69% for the thrombosis dataset. The obtained classification results demonstrate the efficiency of the proposed framework in terms of its improved classification accuracy.
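
    The TDA stage builds on double exponential smoothing for forecasting and missing-value imputation. Below is a minimal sketch of standard Holt (double exponential) smoothing with forecast-based imputation; the smoothing constants and the clinical series are made up, and the paper's "improved" variant is not reproduced.

        import numpy as np

        def double_exponential_smoothing(y, alpha=0.5, beta=0.3):
            """Holt (double exponential) smoothing with one-step-ahead forecasts.
            NaNs in y are replaced by the forecast, the basic idea behind
            forecast-based imputation of missing clinical observations."""
            y = np.asarray(y, dtype=float)
            level, trend = y[0], y[1] - y[0]
            forecasts = [y[0]]
            for t in range(1, len(y)):
                f = level + trend                      # forecast for time t
                obs = f if np.isnan(y[t]) else y[t]    # impute a missing observation
                new_level = alpha * obs + (1 - alpha) * (level + trend)
                trend = beta * (new_level - level) + (1 - beta) * trend
                level = new_level
                forecasts.append(f)
            return np.array(forecasts)

        series = [36.5, 36.7, np.nan, 37.1, 37.4]      # toy clinical time series with a gap
        print(double_exponential_smoothing(series))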

  15. [What nurses with a bachelor of nursing degree know about the classification of arterial blood pressure and sequelae of arterial hypertension].

    PubMed

    Grabowska, Hanna; Narkiewicz, Krzysztof; Grabowski, Władysław; Grzegorczyk, Michał; Gaworska-Krzemińska, Aleksandra; Swietlik, Dariusz

    2009-01-01

    Arterial hypertension is among the most important risk factors of atherosclerosis and associated cardiovascular pathology, with a prevalence rate estimated at 20-30% of the adult population. Nowadays, it is recommended to perform an individual assessment of cardiovascular risk in a patient and to determine the threshold value for arterial hypertension, even though blood pressure classification values according to the European Society of Hypertension and the European Society of Cardiology (ESH/ESC), as well as the Polish Society of Hypertension (PTNT), have remained unchanged. To determine what nurses with a Bachelor of Nursing degree know about the prevalence and classification of arterial blood pressure, as well as sequelae of arterial hypertension. The study included 116 qualified nurses (112 females, 4 males; age 21-50; seniority 0-29 years). The research period was from June 2007 to January 2008. The research tool was a questionnaire devised by the authors. We found that, on average, half of those questioned had up-to-date knowledge regarding the classification of blood pressure and the prevalence of arterial hypertension, but only about one in three respondents was able to describe its sequelae. Aspects of "white coat hypertension" were relatively less well known among nurses with a Bachelor of Nursing degree. Statistically significant differences in correct answers were noted depending on seniority (p = 0.002), place of work (p < 0.001), and position (p < 0.001). There were no differences depending on age, place of residence, marital status, or form of postgraduate education of nurses with a Bachelor of Nursing degree. It is necessary to improve knowledge among students of nursing (BN degree) about the current classification of blood pressure, as well as the prevalence of arterial hypertension and its sequelae.

  16. Classification of breast cancer cytological specimen using convolutional neural network

    NASA Astrophysics Data System (ADS)

    Żejmo, Michał; Kowal, Marek; Korbicz, Józef; Monczak, Roman

    2017-01-01

    The paper presents a deep learning approach for the automatic classification of breast tumors based on fine needle cytology. The main aim of the system is to distinguish benign from malignant cases based on microscopic images. The experiment was carried out on cytological samples derived from 50 patients (25 benign cases + 25 malignant cases) diagnosed at the Regional Hospital in Zielona Góra. To classify microscopic images, we used convolutional neural networks (CNN) of two types: GoogLeNet and AlexNet. Due to the very large size of the cytological specimen images (on average 200000 × 100000 pixels), they were divided into smaller patches of size 256 × 256 pixels. Breast cancer classification is usually based on morphometric features of nuclei. Therefore, training and validation patches were selected using a Support Vector Machine (SVM) so that a suitable amount of cell material was depicted. Neural classifiers were tuned using a GPU-accelerated implementation of the gradient descent algorithm. Training error was defined as a cross-entropy classification loss. Classification accuracy was defined as the percentage ratio of successfully classified validation patches to the total number of validation patches. The best accuracy rate of 83% was obtained by the GoogLeNet model. We observed that more misclassified patches belonged to malignant cases than to benign ones.
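
    As a small illustration of the patch-based setup described above, the sketch below splits a large specimen image into non-overlapping 256 × 256 patches and computes patch-level accuracy as defined in the abstract; the array sizes and labels are placeholders, and the SVM-based patch selection and CNN training are not shown.

        import numpy as np

        def extract_patches(image, patch=256):
            """Split a large image (H x W or H x W x C) into non-overlapping patches."""
            h, w = image.shape[:2]
            return [image[r:r + patch, c:c + patch]
                    for r in range(0, h - patch + 1, patch)
                    for c in range(0, w - patch + 1, patch)]

        def patch_accuracy(predicted, truth):
            """Percentage of validation patches classified correctly."""
            predicted, truth = np.asarray(predicted), np.asarray(truth)
            return 100.0 * np.mean(predicted == truth)

        patches = extract_patches(np.zeros((1024, 1024, 3), dtype=np.uint8))
        print(len(patches), patch_accuracy([1, 0, 1, 1], [1, 0, 0, 1]))  # 16 patches, 75.0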

  17. Classification of spatially unresolved objects

    NASA Technical Reports Server (NTRS)

    Nalepka, R. F.; Horwitz, H. M.; Hyde, P. D.; Morgenstern, J. P.

    1972-01-01

    A proportion estimation technique for the classification of multispectral scanner images is reported. It averages data points over an area and estimates class proportions for the resulting single average data point, allowing spatially unresolved areas to be classified. Example calculations using extracted spectral signatures for bare soil, weeds, alfalfa, and barley prove quite accurate.
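
    One common way to estimate class proportions for a single averaged data point is linear spectral unmixing against known class signatures. The sketch below solves that problem by least squares on synthetic numbers; it is an illustration of the general idea, not the 1972 report's exact procedure.

        import numpy as np

        # columns: mean spectral signatures (bands x classes) for bare soil, weeds, alfalfa, barley
        # the numbers are synthetic placeholders, not measured signatures
        signatures = np.array([[0.21, 0.35, 0.48, 0.30],
                               [0.33, 0.51, 0.60, 0.42],
                               [0.40, 0.28, 0.22, 0.55],
                               [0.55, 0.30, 0.25, 0.61]])

        true_props = np.array([0.1, 0.4, 0.3, 0.2])
        avg_pixel = signatures @ true_props            # the single averaged data point

        est, *_ = np.linalg.lstsq(signatures, avg_pixel, rcond=None)  # least-squares unmixing
        est = np.clip(est, 0, None)
        est /= est.sum()                               # renormalize the proportions to sum to 1
        print(est.round(3))                            # recovers [0.1, 0.4, 0.3, 0.2]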

  18. Identifying the optimal segmentors for mass classification in mammograms

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Tomuro, Noriko; Furst, Jacob; Raicu, Daniela S.

    2015-03-01

    In this paper, we present the results of our investigation on identifying the optimal segmentor(s) from an ensemble of weak segmentors, used in a Computer-Aided Diagnosis (CADx) system which classifies suspicious masses in mammograms as benign or malignant. This is an extension of our previous work, where we applied various parameter settings of image enhancement techniques to each suspicious mass (region of interest (ROI)) to obtain several enhanced images, then applied segmentation to each image to obtain several contours of a given mass. Each segmentation in this ensemble is essentially a "weak segmentor" because no single segmentation can produce the optimal result for all images. Then, after shape features were computed from the segmented contours, the final classification model was built using logistic regression. The work in this paper focuses on identifying the optimal segmentor(s) from an ensemble mix of weak segmentors. For our purpose, optimal segmentors are those in the ensemble mix which contribute the most to the overall classification rather than the ones that produce high-precision segmentations. To measure the segmentors' contribution, we examined the weights on the features in the derived logistic regression model and computed the average feature weight for each segmentor. The results showed that, while in general the segmentors with higher segmentation success rates had higher feature weights, some segmentors with lower segmentation rates had high classification feature weights as well.
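
    A minimal sketch of the scoring idea, ranking segmentors by the average absolute logistic-regression weight of their features, is shown below. The grouping of feature columns by segmentor and all data are hypothetical.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n_segmentors, n_feats = 4, 5                 # 4 weak segmentors, 5 shape features each
        X = rng.normal(size=(200, n_segmentors * n_feats))
        y = rng.integers(0, 2, size=200)             # benign / malignant labels (synthetic)

        model = LogisticRegression(max_iter=1000).fit(X, y)
        weights = np.abs(model.coef_.ravel()).reshape(n_segmentors, n_feats)
        avg_weight = weights.mean(axis=1)            # average feature weight per segmentor
        best = int(np.argmax(avg_weight))            # "optimal" segmentor by this criterion
        print(avg_weight.round(3), best)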

  19. Estimating population extinction thresholds with categorical classification trees for Louisiana black bears

    USGS Publications Warehouse

    Laufenberg, Jared S.; Clark, Joseph D.; Chandler, Richard B.

    2018-01-01

    Monitoring vulnerable species is critical for their conservation. Thresholds or tipping points are commonly used to indicate when populations become vulnerable to extinction and to trigger changes in conservation actions. However, quantitative methods to determine such thresholds have not been well explored. The Louisiana black bear (Ursus americanus luteolus) was removed from the list of threatened and endangered species under the U.S. Endangered Species Act in 2016, and our objectives were to determine the most appropriate parameters and thresholds for monitoring and management action. Capture-mark-recapture (CMR) data from 2006 to 2012 were used to estimate population parameters and variances. We used stochastic population simulations and conditional classification trees to identify demographic rates for monitoring that would be most indicative of heightened extinction risk. We then identified thresholds that would be reliable predictors of population viability. Conditional classification trees indicated that the annual apparent survival rate for adult females averaged over 5 years was the best predictor of population persistence. Specifically, population persistence was estimated to be ≥95% over 100 years when this statistic exceeded the identified threshold, suggesting that it can be used as a threshold to trigger management intervention. Our evaluation produced monitoring protocols that reliably predicted population persistence and were cost-effective. We conclude that population projections and conditional classification trees can be valuable tools for identifying extinction thresholds used in monitoring programs.

  20. Estimating population extinction thresholds with categorical classification trees for Louisiana black bears.

    PubMed

    Laufenberg, Jared S; Clark, Joseph D; Chandler, Richard B

    2018-01-01

    Monitoring vulnerable species is critical for their conservation. Thresholds or tipping points are commonly used to indicate when populations become vulnerable to extinction and to trigger changes in conservation actions. However, quantitative methods to determine such thresholds have not been well explored. The Louisiana black bear (Ursus americanus luteolus) was removed from the list of threatened and endangered species under the U.S. Endangered Species Act in 2016, and our objectives were to determine the most appropriate parameters and thresholds for monitoring and management action. Capture-mark-recapture (CMR) data from 2006 to 2012 were used to estimate population parameters and variances. We used stochastic population simulations and conditional classification trees to identify demographic rates for monitoring that would be most indicative of heightened extinction risk. We then identified thresholds that would be reliable predictors of population viability. Conditional classification trees indicated that the annual apparent survival rate for adult females averaged over 5 years ([Formula: see text]) was the best predictor of population persistence. Specifically, population persistence was estimated to be ≥95% over 100 years when [Formula: see text], suggesting that this statistic can be used as a threshold to trigger management intervention. Our evaluation produced monitoring protocols that reliably predicted population persistence and were cost-effective. We conclude that population projections and conditional classification trees can be valuable tools for identifying extinction thresholds used in monitoring programs.

  1. Depth Discrimination Using Rg-to-Sg Spectral Amplitude Ratios for Seismic Events in Utah Recorded at Local Distances

    DOE PAGES

    Tibi, Rigobert; Koper, Keith D.; Pankow, Kristine L.; ...

    2018-03-20

    Most of the commonly used seismic discrimination approaches are designed for regional data. Relatively little attention has focused on discriminants for local distances (< 200 km), the range at which the smallest events are recorded. We take advantage of the variety of seismic sources and the existence of a dense regional seismic network in the Utah region to evaluate amplitude-ratio seismic discrimination at local distances. First, we explored phase-amplitude Pg-to-Sg ratios for multiple frequency bands to classify events in a dataset that comprises populations of single-shot surface explosions, shallow and deep ripple-fired mining blasts, mining-induced events, and tectonic earthquakes. We achieved limited success. Then, for the same dataset, we combined the Pg-to-Sg phase-amplitude ratios with Sg-to-Rg spectral amplitude ratios in a multivariate quadratic discriminant function (QDF) approach. For two-category, pairwise classification, seven out of ten population pairs show misclassification rates of about 20% or less, with five pairs showing rates of about 10% or less. The approach performs best for the pair involving the populations of single-shot explosions and mining-induced events. By combining both Pg-to-Sg and Rg-to-Sg ratios in the multivariate QDFs, we are able to achieve an average improvement of about 4–14% in misclassification rates compared to Pg-to-Sg ratios alone. When all five event populations are considered simultaneously, as expected, the potential for misclassification increases, and our QDF approach using both Pg-to-Sg and Rg-to-Sg ratios achieves an average success rate of about 74%, compared to about 86% for two-category, pairwise classification.
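
    For illustration, the sketch below fits a two-category quadratic discriminant function to synthetic log amplitude-ratio features, in the spirit of the multivariate QDF approach; the feature values, class means, and cross-validation setup are invented and do not reproduce the study's discriminants.

        import numpy as np
        from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        # synthetic log10 amplitude ratios: columns stand in for Pg/Sg and Rg/Sg style features
        explosions = rng.normal(loc=[0.3, 0.6, 0.4], scale=0.2, size=(60, 3))
        induced = rng.normal(loc=[-0.1, 0.1, 0.0], scale=0.2, size=(60, 3))
        X = np.vstack([explosions, induced])
        y = np.array([0] * 60 + [1] * 60)            # 0 = explosion, 1 = mining-induced event

        qdf = QuadraticDiscriminantAnalysis()
        acc = cross_val_score(qdf, X, y, cv=5).mean()
        print(f"pairwise success rate ~ {100 * acc:.1f}%, misclassification ~ {100 * (1 - acc):.1f}%")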

  2. Depth Discrimination Using Rg-to-Sg Spectral Amplitude Ratios for Seismic Events in Utah Recorded at Local Distances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tibi, Rigobert; Koper, Keith D.; Pankow, Kristine L.

    Most of the commonly used seismic discrimination approaches are designed for regional data. Relatively little attention has focused on discriminants for local distances (< 200 km), the range at which the smallest events are recorded. We take advantage of the variety of seismic sources and the existence of a dense regional seismic network in the Utah region to evaluate amplitude-ratio seismic discrimination at local distances. First, we explored phase-amplitude Pg-to-Sg ratios for multiple frequency bands to classify events in a dataset that comprises populations of single-shot surface explosions, shallow and deep ripple-fired mining blasts, mining-induced events, and tectonic earthquakes. We achieved limited success. Then, for the same dataset, we combined the Pg-to-Sg phase-amplitude ratios with Sg-to-Rg spectral amplitude ratios in a multivariate quadratic discriminant function (QDF) approach. For two-category, pairwise classification, seven out of ten population pairs show misclassification rates of about 20% or less, with five pairs showing rates of about 10% or less. The approach performs best for the pair involving the populations of single-shot explosions and mining-induced events. By combining both Pg-to-Sg and Rg-to-Sg ratios in the multivariate QDFs, we are able to achieve an average improvement of about 4–14% in misclassification rates compared to Pg-to-Sg ratios alone. When all five event populations are considered simultaneously, as expected, the potential for misclassification increases, and our QDF approach using both Pg-to-Sg and Rg-to-Sg ratios achieves an average success rate of about 74%, compared to about 86% for two-category, pairwise classification.

  3. Power Allocation Based on Data Classification in Wireless Sensor Networks

    PubMed Central

    Wang, Houlian; Zhou, Gongbo

    2017-01-01

    Limited node energy in wireless sensor networks is a crucial factor which affects the monitoring of equipment operation and working conditions in coal mines. In addition, due to heterogeneous nodes and different data acquisition rates, the number of arriving packets in a queue network can differ, which may lead to some queue lengths reaching the maximum value earlier than others. In order to tackle these two problems, an optimal power allocation strategy based on classified data is proposed in this paper. Arriving data are classified into dissimilar classes depending on the number of arriving packets. The problem is formulated as a Lyapunov drift optimization with the objective of minimizing the weighted sum of average power consumption and average data class. As a result, a suboptimal distributed algorithm without any knowledge of system statistics is presented. The simulations, conducted in the perfect channel state information (CSI) case and the imperfect CSI case, reveal that the utility can be pushed arbitrarily close to optimal by increasing the parameter V, but with a corresponding growth in the average delay, and that the other tunable parameter W and the classification method inside the utility function can trade power optimality for an increased average data class. The above results show that data in a high class have priority over data in a low class for processing, and that energy consumption can be minimized with this resource allocation strategy. PMID:28498346

  4. Distinguishing body mass and activity level from the lower limb: can entheses diagnose obesity?

    PubMed

    Godde, Kanya; Taylor, Rebecca Wilson

    2013-03-10

    The ability to estimate body size from the skeleton has broad applications, but is especially important to the forensic community when identifying unknown skeletal remains. This research investigates the utility of using entheses/muscle skeletal markers of the lower limb to estimate body size and to classify individuals into average, obese, and active categories, while using a biomechanical approach to interpret the results. Eighteen muscle attachment sites of the lower limb, known to be involved in the sit-to-stand transition, were scored for robusticity and stress in 105 white males (aged 31-81 years) from the William M. Bass Donated Skeletal Collection. Both logistic regression and log linear models were applied to the data to (1) test the utility of entheses as an indicator of body weight and activity level, and (2) to generate classification percentages that speak to the accuracy of the method. Thirteen robusticity scores differed significantly between the groups, but classification percentages were only slightly greater than chance. However, clear differences could be seen between the average and obese and the average and active groups. Stress scores showed no value in discriminating between groups. These results were interpreted in relation to biomechanical forces at the microscopic and macroscopic levels. Even though robusticity alone is not able to classify individuals well, its significance may show greater value when incorporated into a model that has multiple skeletal indicators. Further research needs to evaluate a larger sample and incorporate several lines of evidence to improve classification rates. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  5. Land use in the Paraiba Valley through remotely sensed data. [Brazil

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Lombardo, M. A.; Novo, E. M. L. D.; Niero, M.; Foresti, C.

    1980-01-01

    A methodology for land use survey was developed and land use modification rates were determined using LANDSAT imagery of the Paraiba Valley (state of Sao Paulo). Both visual and automatic interpretation methods were employed to analyze seven land use classes: urban area, industrial area, bare soil, cultivated area, pastureland, reforestation and natural vegetation. By means of visual interpretation, few spectral differences were observed among those classes. The automatic classification of LANDSAT MSS data using a maximum likelihood algorithm shows a 39% average error of omission and a 3.4% error of inclusion for the seven classes. The complexity of land uses in the study area, the large spectral variations of the analyzed classes, and the low resolution of LANDSAT data influenced the classification results.

  6. Average Likelihood Methods of Classification of Code Division Multiple Access (CDMA)

    DTIC Science & Technology

    2016-05-01

    ... case of cognitive radio applications. Modulation classification is part of a broader problem known as blind or uncooperative demodulation, the goal of ...

  7. Hyperspectral image segmentation using a cooperative nonparametric approach

    NASA Astrophysics Data System (ADS)

    Taher, Akar; Chehdi, Kacem; Cariou, Claude

    2013-10-01

    In this paper a new unsupervised nonparametric cooperative and adaptive hyperspectral image segmentation approach is presented. The hyperspectral images are partitioned band by band in parallel, and intermediate classification results are evaluated and fused to get the final segmentation result. Two unsupervised nonparametric segmentation methods are used in parallel cooperation, namely the Fuzzy C-means (FCM) method and the Linde-Buzo-Gray (LBG) algorithm, to segment each band of the image. The originality of the approach lies firstly in its local adaptation to the type of regions in an image (textured, non-textured), and secondly in the introduction of several levels of evaluation and validation of intermediate segmentation results before obtaining the final partitioning of the image. For the management of similar or conflicting results issued from the two classification methods, we gradually introduced various assessment steps that exploit the information of each spectral band and its adjacent bands, and finally the information of all the spectral bands. In our approach, the detected textured and non-textured regions are treated separately from the feature extraction step up to the final classification results. This approach was first evaluated on a large number of monocomponent images constructed from the Brodatz album. Then it was evaluated on two real applications: a multispectral image for cedar tree detection in the region of Baabdat (Lebanon) and a hyperspectral image for identification of invasive and non-invasive vegetation in the region of Cieza (Spain). The correct classification rate (CCR) for the first application is over 97%, and for the second application the average correct classification rate (ACCR) is over 99%.
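
    A highly simplified sketch of the band-by-band idea follows: each spectral band is clustered independently (k-means standing in here for LBG-style vector quantization; the paper also uses FCM), per-band labels are aligned by cluster mean, and a naive per-pixel majority vote fuses them. The cooperative evaluation and validation steps of the actual approach are not reproduced.

        import numpy as np
        from sklearn.cluster import KMeans

        def per_band_segmentation(cube, n_classes=3):
            """cube: H x W x B image. Cluster each band, align labels by cluster mean, fuse by vote."""
            h, w, b = cube.shape
            labels = np.empty((b, h * w), dtype=int)
            for i in range(b):
                band = cube[:, :, i].reshape(-1, 1)
                km = KMeans(n_clusters=n_classes, n_init=10).fit(band)
                order = np.argsort(km.cluster_centers_.ravel())   # relabel so 0 = darkest cluster
                labels[i] = np.argsort(order)[km.labels_]
            fused = np.array([np.bincount(labels[:, j]).argmax()  # per-pixel majority vote
                              for j in range(h * w)])
            return fused.reshape(h, w)

        cube = np.random.rand(20, 20, 6)                          # toy 6-band cube
        print(per_band_segmentation(cube).shape)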

  8. Predicting alpine headwater stream intermittency: a case study in the northern Rocky Mountains

    USGS Publications Warehouse

    Sando, Thomas R.; Blasch, Kyle W.

    2015-01-01

    This investigation used climatic, geological, and environmental data coupled with observational stream intermittency data to predict alpine headwater stream intermittency. Prediction was made using a random forest classification model. Results showed that the most important variables in the prediction model were snowpack persistence, represented by average snow extent from March through July, mean annual mean monthly minimum temperature, and surface geology types. For stream catchments with intermittent headwater streams, snowpack, on average, persisted until early June, whereas for stream catchments with perennial headwater streams, snowpack, on average, persisted until early July. Additionally, on average, stream catchments with intermittent headwater streams were about 0.7 °C warmer than stream catchments with perennial headwater streams. Finally, headwater stream catchments primarily underlain by coarse, permeable sediment are significantly more likely to have intermittent headwater streams than those primarily underlain by impermeable bedrock. Comparison of the predicted streamflow classification with observed stream status indicated a four percent classification error for first-order streams and a 21 percent classification error for all stream orders in the study area.
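
    A minimal sketch of a random forest classifier built on predictors of the kind the study identifies (snow persistence, minimum temperature, permeable geology) is given below; the variable names, the synthetic labelling rule, and the data are placeholders, not the study's model.

        import numpy as np
        import pandas as pd
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(7)
        n = 300
        catchments = pd.DataFrame({
            "snow_extent_mar_jul": rng.uniform(0, 1, n),   # average March-July snow extent
            "min_temp_c": rng.normal(-2, 2, n),            # mean annual mean monthly minimum temperature
            "permeable_geology": rng.integers(0, 2, n),    # 1 = coarse permeable sediment
        })
        # synthetic labelling rule loosely echoing the findings: low snow persistence,
        # warmer catchments, and permeable geology favour intermittency
        intermittent = ((catchments.snow_extent_mar_jul < 0.5) & (catchments.min_temp_c > -2)
                        | (catchments.permeable_geology == 1)).astype(int)

        rf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
        rf.fit(catchments, intermittent)
        print(dict(zip(catchments.columns, rf.feature_importances_.round(2))), rf.oob_score_)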

  9. Masked and unmasked error-related potentials during continuous control and feedback

    NASA Astrophysics Data System (ADS)

    Lopes Dias, Catarina; Sburlea, Andreea I.; Müller-Putz, Gernot R.

    2018-06-01

    The detection of error-related potentials (ErrPs) in tasks with discrete feedback is well established in the brain–computer interface (BCI) field. However, the decoding of ErrPs in tasks with continuous feedback is still in its early stages. Objective. We developed a task in which subjects have continuous control of a cursor's position by means of a joystick. The cursor's position was shown to the participants in two different modalities of continuous feedback: normal and jittered. The jittered feedback was created to mimic the instability that could exist if participants controlled the trajectory directly with brain signals. Approach. This paper studies the electroencephalographic (EEG)-measurable signatures caused by a loss of control over the cursor's trajectory, causing a target miss. Main results. In both feedback modalities, time-locked potentials revealed the typical frontal-central components of error-related potentials. Errors occurring during the jittered feedback (masked errors) were delayed in comparison to errors occurring during normal feedback (unmasked errors). Masked errors displayed lower peak amplitudes than unmasked errors. Time-locked classification analysis allowed a good distinction between correct and error classes (average Cohen's kappa, average TPR = 81.8% and average TNR = 96.4%). Time-locked classification analysis between masked error and unmasked error classes revealed results at chance level (average Cohen's kappa, average TPR = 60.9% and average TNR = 58.3%). Afterwards, we performed asynchronous detection of ErrPs, combining both masked and unmasked trials. The asynchronous detection of ErrPs in a simulated online scenario resulted in an average TNR of 84.0% and an average TPR of 64.9%. Significance. The time-locked classification results suggest that the masked and unmasked errors were indistinguishable in terms of classification. The asynchronous classification results suggest that the feedback modality did not hinder the asynchronous detection of ErrPs.

  10. [Research of Identify Spatial Object Using Spectrum Analysis Technique].

    PubMed

    Song, Wei; Feng, Shi-qi; Shi, Jing; Xu, Rong; Wang, Gong-chang; Li, Bin-yu; Liu, Yu; Li, Shuang; Cao Rui; Cai, Hong-xing; Zhang, Xi-he; Tan, Yong

    2015-06-01

    High-precision scattering spectra of space debris, with a minimum brightness of 4.2 and a resolution of 0.5 nm, have been observed using ground-based spectrum detection technology. Clear differences between different types of objects are obtained by normalization and discrete-rate analysis of the spectral data. For rocket debris, the normalized multi-frame scattering spectral line shapes are identical, whereas for lapsed satellites they differ. The discrete rate of the normalized single-frame spectrum ranges from 0.978% to 3.067% for rocket debris, with small oscillation about the average value, and from 3.1184% to 19.4727% for lapsed satellites, with relatively large oscillation about the average value. The reason is that the composition of rocket debris is uniform, while that of lapsed satellites is complex. Therefore, ground-based spectrum detection technology can be used for the classification of space debris.

  11. Robust Averaging of Covariances for EEG Recordings Classification in Motor Imagery Brain-Computer Interfaces.

    PubMed

    Uehara, Takashi; Sartori, Matteo; Tanaka, Toshihisa; Fiori, Simone

    2017-06-01

    The estimation of covariance matrices is of prime importance to analyze the distribution of multivariate signals. In motor imagery-based brain-computer interfaces (MI-BCI), covariance matrices play a central role in the extraction of features from recorded electroencephalograms (EEGs); therefore, correctly estimating covariance is crucial for EEG classification. This letter discusses algorithms to average sample covariance matrices (SCMs) for the selection of the reference matrix in tangent space mapping (TSM)-based MI-BCI. Tangent space mapping is a powerful method of feature extraction and strongly depends on the selection of a reference covariance matrix. In general, the observed signals may include outliers; therefore, taking the geometric mean of SCMs as the reference matrix may not be the best choice. In order to deal with the effects of outliers, robust estimators have to be used. In particular, we discuss and test the use of geometric medians and trimmed averages (defined on the basis of several metrics) as robust estimators. The main idea behind trimmed averages is to eliminate data that exhibit the largest distance from the average covariance calculated on the basis of all available data. The results of the experiments show that while the geometric medians show little differences from conventional methods in terms of classification accuracy in the classification of electroencephalographic recordings, the trimmed averages show significant improvement for all subjects.
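
    The sketch below computes a trimmed average of sample covariance matrices under the Euclidean (Frobenius) metric: SCMs farthest from the grand mean are discarded and the rest re-averaged. The letter studies several metrics, including Riemannian ones, so this is only an illustration of the trimming idea.

        import numpy as np

        def trimmed_average_scm(covs, trim_fraction=0.2):
            """Euclidean trimmed mean of a stack of covariance matrices (k x n x n)."""
            covs = np.asarray(covs)
            grand_mean = covs.mean(axis=0)
            dists = np.linalg.norm(covs - grand_mean, ord="fro", axis=(1, 2))
            keep = int(np.ceil(len(covs) * (1 - trim_fraction)))
            kept = covs[np.argsort(dists)[:keep]]      # discard the SCMs farthest from the mean
            return kept.mean(axis=0)

        rng = np.random.default_rng(3)
        trials = rng.normal(size=(30, 8, 250))         # 30 trials, 8 channels, 250 samples
        scms = np.array([t @ t.T / t.shape[1] for t in trials])
        scms[0] *= 25.0                                # inject an outlier trial
        print(trimmed_average_scm(scms).shape)         # (8, 8) reference matrix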

  12. Crop classification modelling using remote sensing and environmental data in the Greater Platte River Basin, USA

    USGS Publications Warehouse

    Howard, Daniel M.; Wylie, Bruce K.; Tieszen, Larry L.

    2012-01-01

    With an ever expanding population, potential climate variability and an increasing demand for agriculture-based alternative fuels, accurate agricultural land-cover classification for specific crops and their spatial distributions are becoming critical to researchers, policymakers, land managers and farmers. It is important to ensure the sustainability of these and other land uses and to quantify the net impacts that certain management practices have on the environment. Although other quality crop classification products are often available, temporal and spatial coverage gaps can create complications for certain regional or time-specific applications. Our goal was to develop a model capable of classifying major crops in the Greater Platte River Basin (GPRB) for the post-2000 era to supplement existing crop classification products. This study identifies annual spatial distributions and area totals of corn, soybeans, wheat and other crops across the GPRB from 2000 to 2009. We developed a regression tree classification model based on 2.5 million training data points derived from the National Agricultural Statistics Service (NASS) Cropland Data Layer (CDL) in relation to a variety of other relevant input environmental variables. The primary input variables included the weekly 250 m US Geological Survey Earth Observing System Moderate Resolution Imaging Spectroradiometer normalized difference vegetation index, average long-term growing season temperature, average long-term growing season precipitation and yearly start of growing season. An overall model accuracy rating of 78% was achieved for a test sample of roughly 215 000 independent points that were withheld from model training. Ten 250 m resolution annual crop classification maps were produced and evaluated for the GPRB region, one for each year from 2000 to 2009. In addition to the model accuracy assessment, our validation focused on spatial distribution and county-level crop area totals in comparison with the NASS CDL and county statistics from the US Department of Agriculture (USDA) Census of Agriculture. The results showed that our model produced crop classification maps that closely resembled the spatial distribution trends observed in the NASS CDL and exhibited a close linear agreement with county-by-county crop area totals from USDA census data (R² = 0.90).

  13. Neural network classification of myoelectric signal for prosthesis control.

    PubMed

    Kelly, M F; Parker, P A; Scott, R N

    1991-12-01

    An alternate approach to deriving control for multidegree of freedom prosthetic arms is considered. By analyzing a single-channel myoelectric signal (MES), we can extract information that can be used to identify different contraction patterns in the upper arm. These contraction patterns are generated by subjects without previous training and are naturally associated with specific functions. Using a set of normalized MES spectral features, we can identify contraction patterns for four arm functions, specifically extension and flexion of the elbow and pronation and supination of the forearm. Performing identification independent of signal power is advantageous because this can then be used as a means for deriving proportional rate control for a prosthesis. An artificial neural network implementation is applied in the classification task. By using three single-layer perceptron networks, the MES is classified, with the spectral representations as input features. Trials performed on five subjects with normal limbs resulted in an average classification performance level of 85% for the four functions. Copyright © 1991. Published by Elsevier Ltd.

  14. Low-resolution expression recognition based on central oblique average CS-LBP with adaptive threshold

    NASA Astrophysics Data System (ADS)

    Han, Sheng; Xi, Shi-qiong; Geng, Wei-dong

    2017-11-01

    In order to address the low recognition rate of traditional feature extraction operators on low-resolution images, a novel expression recognition algorithm is proposed, named central oblique average center-symmetric local binary pattern (CS-LBP) with adaptive threshold (ATCS-LBP). Firstly, the features of face images are extracted by the proposed operator after preprocessing. Secondly, the obtained feature image is divided into blocks. Thirdly, the histogram of each block is computed independently and all histograms are concatenated to create a final feature vector. Finally, expression classification is achieved by using a support vector machine (SVM) classifier. Experimental results on the Japanese female facial expression (JAFFE) database show that the proposed algorithm can achieve a recognition rate of 81.9% when the resolution is as low as 16×16, which is much better than that of traditional feature extraction operators.
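
    For illustration, the sketch below computes plain center-symmetric LBP codes with a fixed threshold, builds block histograms, concatenates them into a feature vector, and trains an SVM; the adaptive threshold and the "central oblique average" modification of the proposed ATCS-LBP operator are not reproduced, and the images and labels are synthetic.

        import numpy as np
        from sklearn.svm import SVC

        def cs_lbp(img, threshold=0.01):
            """Center-symmetric LBP on a 3x3 neighbourhood -> codes in [0, 15]."""
            h, w = img.shape[0] - 2, img.shape[1] - 2
            pairs = [((0, 0), (2, 2)), ((0, 1), (2, 1)), ((0, 2), (2, 0)), ((1, 2), (1, 0))]
            code = np.zeros((h, w), dtype=int)
            for bit, ((r1, c1), (r2, c2)) in enumerate(pairs):   # compare opposite neighbours
                a = img[r1:r1 + h, c1:c1 + w]
                b = img[r2:r2 + h, c2:c2 + w]
                code |= ((a - b) > threshold).astype(int) << bit
            return code

        def block_histograms(codes, blocks=4):
            h, w = codes.shape
            feats = [np.bincount(codes[i * h // blocks:(i + 1) * h // blocks,
                                       j * w // blocks:(j + 1) * w // blocks].ravel(),
                                 minlength=16)
                     for i in range(blocks) for j in range(blocks)]
            return np.concatenate(feats).astype(float)           # serially connected histograms

        rng = np.random.default_rng(0)
        X = np.array([block_histograms(cs_lbp(rng.random((16, 16)))) for _ in range(40)])
        y = rng.integers(0, 2, size=40)                          # two synthetic "expressions"
        clf = SVC(kernel="linear").fit(X, y)
        print(clf.score(X, y))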

  15. [Nursing outcomes for ineffective breathing patterns and impaired spontaneous ventilation in intensive care].

    PubMed

    do Canto, Débora Francisco; Almeida, Miriam de Abreu

    2013-12-01

    This study aimed to validate nursing outcomes selected from the NANDA-I-NOC linkage (NANDA International-Nursing Outcomes Classification) for the diagnoses Ineffective Breathing Pattern and Impaired Spontaneous Ventilation in an adult intensive care unit. This is a content validation study conducted in a university hospital in southern Brazil with 15 expert nurses with clinical experience and knowledge of the ratings. The instruments contained five-point Likert scales to rate the importance of each outcome (step 1) and indicator (step 2) for the diagnoses studied. We calculated weighted averages for each outcome/indicator, considering 1 = 0, 2 = 0.25, 3 = 0.50, 4 = 0.75 and 5 = 1. The outcomes suggested by the NOC with averages above 0.8, as well as the indicators, were considered validated. The outcomes Respiratory state: airway permeability (Ineffective Breathing Pattern), with 11 indicators, and Response to mechanical ventilation: adult (Impaired Spontaneous Ventilation), with 26 indicators, were validated.
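
    A minimal sketch of the weighted-average computation described above, using the stated mapping of Likert points to weights and the 0.8 validation cutoff; the expert ratings are hypothetical.

        # map 5-point Likert ratings to the weights used in the study and
        # validate an outcome or indicator when its weighted average exceeds 0.8
        WEIGHTS = {1: 0.0, 2: 0.25, 3: 0.50, 4: 0.75, 5: 1.0}

        def weighted_average(ratings):
            return sum(WEIGHTS[r] for r in ratings) / len(ratings)

        expert_ratings = [5, 4, 4, 5, 3, 5, 4, 5, 4, 4, 5, 5, 4, 3, 5]   # 15 hypothetical experts
        score = weighted_average(expert_ratings)
        print(round(score, 3), "validated" if score > 0.8 else "not validated")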

  16. Invasive Cancer Incidence, 2004-2013, and Deaths, 2006-2015, in Nonmetropolitan and Metropolitan Counties - United States.

    PubMed

    Henley, S Jane; Anderson, Robert N; Thomas, Cheryll C; Massetti, Greta M; Peaker, Brandy; Richardson, Lisa C

    2017-07-07

    Previous reports have shown that persons living in nonmetropolitan (rural or urban) areas in the United States have higher death rates from all cancers combined than persons living in metropolitan areas. Disparities might vary by cancer type and between occurrence and death from the disease. This report provides a comprehensive assessment of cancer incidence and deaths by cancer type in nonmetropolitan and metropolitan counties. 2004-2015. Cancer incidence data from CDC's National Program of Cancer Registries and the National Cancer Institute's Surveillance, Epidemiology, and End Results program were used to calculate average annual age-adjusted incidence rates for 2009-2013 and trends in annual age-adjusted incidence rates for 2004-2013. Cancer mortality data from the National Vital Statistics System were used to calculate average annual age-adjusted death rates for 2011-2015 and trends in annual age-adjusted death rates for 2006-2015. For 5-year average annual rates, counties were classified into four categories (nonmetropolitan rural, nonmetropolitan urban, metropolitan with population <1 million, and metropolitan with population ≥1 million). For the trend analysis, which used annual rates, these categories were combined into two categories (nonmetropolitan and metropolitan). Rates by county classification were examined by sex, age, race/ethnicity, U.S. census region, and cancer site. Trends in rates were examined by county classification and cancer site. During the most recent 5-year period for which data were available, nonmetropolitan rural areas had lower average annual age-adjusted cancer incidence rates for all anatomic cancer sites combined but higher death rates than metropolitan areas. During 2006-2015, the annual age-adjusted death rates for all cancer sites combined decreased at a slower pace in nonmetropolitan areas (-1.0% per year) than in metropolitan areas (-1.6% per year), increasing the differences in these rates. In contrast, annual age-adjusted incidence rates for all cancer sites combined decreased approximately 1% per year during 2004-2013 both in nonmetropolitan and metropolitan counties. This report provides the first comprehensive description of cancer incidence and mortality in nonmetropolitan and metropolitan counties in the United States. Nonmetropolitan rural counties had higher incidence of and deaths from several cancers related to tobacco use and cancers that can be prevented by screening. Differences between nonmetropolitan and metropolitan counties in cancer incidence might reflect differences in risk factors such as cigarette smoking, obesity, and physical inactivity, whereas differences in cancer death rates might reflect disparities in access to health care and timely diagnosis and treatment. Many cancer cases and deaths could be prevented, and public health programs can use evidence-based strategies from the U.S. Preventive Services Task Force and Advisory Committee for Immunization Practices (ACIP) to support cancer prevention and control. The U.S. Preventive Services Task Force recommends population-based screening for colorectal, female breast, and cervical cancers among adults at average risk for these cancers and for lung cancer among adults at high risk; screening adults for tobacco use and excessive alcohol use, offering counseling and interventions as needed; and using low-dose aspirin to prevent colorectal cancer among adults considered to be at high risk for cardiovascular disease based on specific criteria. 
ACIP recommends vaccination against cancer-related infectious diseases including human papillomavirus and hepatitis B virus. The Guide to Community Preventive Services describes program and policy interventions proven to increase cancer screening and vaccination rates and to prevent tobacco use, excessive alcohol use, obesity, and physical inactivity.

  17. Interlocking intramedullary nailing in distal tibial fractures.

    PubMed

    Tyllianakis, M; Megas, P; Giannikas, D; Lambiris, E

    2000-08-01

    This retrospective study examined the results of non-pilon fractures of the distal part of the tibia treated with interlocking intramedullary nailing. Seventy-three patients with an equal number of fractures treated surgically between 1990 and 1998 were reviewed. Mean patient age was 39.8 years, and follow-up averaged 34.2 months. The AO fracture classification system was used. Concomitant fractures of the lateral malleolus were fixed. All but three fractures achieved union within 4.2 months on average. Satisfactory or excellent results were obtained in 86.3% of patients. These results indicate that interlocking intramedullary nailing is a reliable method of treatment for these fractures and is characterized by high rates of union and a low incidence of complications.

  18. Clinical significance of erythropoietin receptor expression in oral squamous cell carcinoma

    PubMed Central

    2012-01-01

    Background Hypoxic tumors are refractory to radiation and chemotherapy. High expression of biomarkers related to hypoxia in head and neck cancer is associated with a poorer prognosis. The present study aimed to evaluate the clinicopathological significance of erythropoietin receptor (EPOR) expression in oral squamous cell carcinoma (OSCC). Methods The study included 256 patients who underwent primary surgical resection between October 1996 and August 2005 for treatment of OSCC without previous radiotherapy and/or chemotherapy. Clinicopathological information including gender, age, T classification, N classification, and TNM stage was obtained from clinical records and pathology reports. The mRNA and protein expression levels of EPOR in OSCC specimens were evaluated by Q-RT-PCR, Western blotting and immunohistochemistry assays. Results We found that EPOR were overexpressed in OSCC tissues. The study included 17 women and 239 men with an average age of 50.9 years (range, 26–87 years). The mean follow-up period was 67 months (range, 2–171 months). High EPOR expression was significantly correlated with advanced T classification (p < 0.001), advanced TNM stage (p < 0.001), and positive N classification (p = 0.001). Furthermore, the univariate analysis revealed that patients with high tumor EPOR expression had a lower 5-year overall survival rate (p = 0.0011) and 5-year disease-specific survival rate (p = 0.0017) than patients who had low tumor levels of EPOR. However, the multivariate analysis using Cox’s regression model revealed that only the T and N classifications were independent prognostic factors for the 5-year overall survival and 5-year disease-specific survival rates. Conclusions High EPOR expression in OSCC is associated with an aggressive tumor behavior and poorer prognosis in the univariate analysis among patients with OSCC. Thus, EPOR expression may serve as a treatment target for OSCC in the future. PMID:22639817

  19. Approximated mutual information training for speech recognition using myoelectric signals.

    PubMed

    Guo, Hua J; Chan, A D C

    2006-01-01

    A new training algorithm called approximated maximum mutual information (AMMI) is proposed to improve the accuracy of myoelectric speech recognition using hidden Markov models (HMMs). Previous studies have demonstrated that automatic speech recognition can be performed using myoelectric signals from articulatory muscles of the face. Classification of facial myoelectric signals can be performed using HMMs that are trained using the maximum likelihood (ML) algorithm; however, this algorithm maximizes the likelihood of the observations in the training sequence, which is not directly associated with optimal classification accuracy. The AMMI training algorithm attempts to maximize the mutual information, thereby training the HMMs to optimize their parameters for discrimination. Our results show that AMMI training consistently reduces the error rates compared to those obtained with ML training, increasing the accuracy by approximately 3% on average.

  20. Brightness-preserving fuzzy contrast enhancement scheme for the detection and classification of diabetic retinopathy disease.

    PubMed

    Datta, Niladri Sekhar; Dutta, Himadri Sekhar; Majumder, Koushik

    2016-01-01

    The contrast enhancement of retinal images plays a vital role in the detection of microaneurysms (MAs), which are an early sign of diabetic retinopathy disease. A retinal image contrast enhancement method has been presented to improve the MA detection technique. The success rate on low-contrast, noisy retinal image analysis shows the importance of the proposed method. Overall, 587 retinal input images are tested for performance analysis. The average sensitivity and specificity are obtained as 95.94% and 99.21%, respectively. The area under the curve is found to be 0.932 in the receiver operating characteristic analysis. Classification of diabetic retinopathy disease is also performed. The experimental results show that the overall MA detection method performs better than the current state-of-the-art MA detection algorithms.

  1. Evaluating the potential for site-specific modification of LiDAR DEM derivatives to improve environmental planning-scale wetland identification using Random Forest classification

    NASA Astrophysics Data System (ADS)

    O'Neil, Gina L.; Goodall, Jonathan L.; Watson, Layne T.

    2018-04-01

    Wetlands are important ecosystems that provide many ecological benefits, and their quality and presence are protected by federal regulations. These regulations require wetland delineations, which can be costly and time-consuming to perform. Computer models can assist in this process, but lack the accuracy necessary for environmental planning-scale wetland identification. In this study, the potential for improving wetland identification models through modification of digital elevation model (DEM) derivatives, derived from high-resolution and increasingly available light detection and ranging (LiDAR) data, at a scale necessary for small-scale wetland delineations is evaluated. A novel flow convergence modelling approach is presented in which the Topographic Wetness Index (TWI), curvature, and Cartographic Depth-to-Water index (DTW) are modified to better distinguish wetland from upland areas, combined with ancillary soil data, and used in a Random Forest classification. This approach is applied to four study sites in Virginia, implemented as an ArcGIS model. The model resulted in a significant improvement in average wetland accuracy compared to the commonly used National Wetland Inventory (84.9% vs. 32.1%), at the expense of a moderately lower average non-wetland accuracy (85.6% vs. 98.0%) and average overall accuracy (85.6% vs. 92.0%). From this, we concluded that modifying TWI, curvature, and DTW provides more robust wetland and non-wetland signatures to the models, improving accuracy rates compared to classifications using the original indices. The resulting ArcGIS model is a general tool able to modify these local LiDAR DEM derivatives based on site characteristics to identify wetlands at a high resolution.
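
    As a small illustration of one of the DEM derivatives named above, the sketch below computes the standard Topographic Wetness Index, TWI = ln(a / tan β), from a flow-accumulation grid and a slope grid; the grids are toy values, and the study's site-specific modifications (and the curvature and DTW layers) are not reproduced.

        import numpy as np

        def topographic_wetness_index(flow_acc, slope_deg, cell_size=1.0, eps=1e-6):
            """TWI = ln(a / tan(beta)); a = upslope contributing area per unit contour width."""
            a = (flow_acc + 1) * cell_size                  # contributing area of each cell
            tan_beta = np.tan(np.radians(slope_deg)) + eps  # avoid division by zero on flat cells
            return np.log(a / tan_beta)

        flow_acc = np.array([[0, 1, 3], [0, 2, 8], [0, 0, 15]], dtype=float)   # cells draining in
        slope = np.array([[6.0, 4.0, 2.0], [5.0, 3.0, 1.0], [4.0, 2.0, 0.5]])  # slope in degrees
        print(topographic_wetness_index(flow_acc, slope).round(2))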

  2. Contextual convolutional neural networks for lung nodule classification using Gaussian-weighted average image patches

    NASA Astrophysics Data System (ADS)

    Lee, Haeil; Lee, Hansang; Park, Minseok; Kim, Junmo

    2017-03-01

    Lung cancer is the most common cause of cancer-related death. To diagnose lung cancers in early stages, numerous studies and approaches have been developed for cancer screening with computed tomography (CT) imaging. In recent years, convolutional neural networks (CNN) have become one of the most common and reliable techniques in computer aided detection (CADe) and diagnosis (CADx), achieving state-of-the-art performance for various tasks. In this study, we propose a CNN classification system for false positive reduction of initially detected lung nodule candidates. First, image patches of lung nodule candidates are extracted from CT scans to train a CNN classifier. To reflect the volumetric contextual information of lung nodules in a 2D image patch, we propose weighted average image patch (WAIP) generation, which averages multiple slice images of a lung nodule candidate. Moreover, to emphasize central slices of lung nodules, slice images are locally weighted according to a Gaussian distribution and averaged to generate the 2D WAIP. With these extracted patches, a 2D CNN is trained to classify WAIPs of lung nodule candidates into positive and negative labels. We used the LUNA 2016 public challenge database to validate the performance of our approach for false positive reduction in lung CT nodule classification. Experiments show our approach improves the classification accuracy of lung nodules compared to the baseline 2D CNN with patches from a single slice image.
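
    The Gaussian-weighted averaging of slices is straightforward to sketch in numpy, as below; the number of slices, patch size, and σ are illustrative assumptions rather than the paper's settings.

        import numpy as np

        def gaussian_weighted_average_patch(volume, sigma=1.5):
            """volume: S x H x W stack of slices. Returns a 2D Gaussian-weighted average (WAIP-like)."""
            s = volume.shape[0]
            z = np.arange(s) - (s - 1) / 2.0              # slice offsets from the central slice
            w = np.exp(-0.5 * (z / sigma) ** 2)
            w /= w.sum()                                  # normalize the slice weights
            return np.tensordot(w, volume, axes=(0, 0))   # weighted sum over the slice axis

        patches = np.random.rand(7, 64, 64)               # 7 neighbouring slices of a candidate
        print(gaussian_weighted_average_patch(patches).shape)   # (64, 64)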

  3. Hierarchic Agglomerative Clustering Methods for Automatic Document Classification.

    ERIC Educational Resources Information Center

    Griffiths, Alan; And Others

    1984-01-01

    Considers classifications produced by application of single linkage, complete linkage, group average, and word clustering methods to Keen and Cranfield document test collections, and studies structure of hierarchies produced, extent to which methods distort input similarity matrices during classification generation, and retrieval effectiveness…
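
    For illustration, the sketch below builds single-linkage, complete-linkage, and group-average hierarchies on a toy document-term matrix with scipy and cuts each into a fixed number of clusters; the Keen and Cranfield collections and the word-clustering method are not used here.

        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage
        from scipy.spatial.distance import pdist

        rng = np.random.default_rng(0)
        docs = rng.random((12, 20))                      # 12 toy documents, 20 term weights each
        dist = pdist(docs, metric="cosine")              # condensed inter-document distances

        for method in ("single", "complete", "average"): # three of the linkage rules compared
            tree = linkage(dist, method=method)
            labels = fcluster(tree, t=3, criterion="maxclust")
            print(method, labels)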

  4. Cannabis Mobile Apps: A Content Analysis.

    PubMed

    Ramo, Danielle E; Popova, Lucy; Grana, Rachel; Zhao, Shirley; Chavez, Kathryn

    2015-08-12

    Mobile technology is pervasive and widely used to obtain information about drugs such as cannabis, especially in a climate of rapidly changing cannabis policy; yet the content of available cannabis apps is largely unknown. Understanding the resources available to those searching for cannabis apps will clarify how this technology is being used to reflect and influence cannabis use behavior. We investigated the content of 59 cannabis-related mobile apps for Apple and Android devices as of November 26, 2014. The Apple and Google Play app stores were searched using the terms "cannabis" and "marijuana." Three trained coders classified the top 20 apps for each term and each store, using a coding guide. Apps were examined for the presence of 20 content codes derived by the researchers. Total apps available for each search term were 124 for cannabis and 218 for marijuana in the Apple App Store, and 250 each for cannabis and marijuana on Google Play. The top 20 apps in each category in each store were coded for 59 independent apps (30 Apple, 29 Google Play). The three most common content areas were cannabis strain classification (33.9%), facts about cannabis (20.3%), and games (20.3%). In the Apple App Store, most apps were free (77%), all were rated "17+" years, and the average user rating was 3.9/5 stars. The most popular apps provided cannabis strain classifications (50%), dispensary information (27%), or general facts about cannabis (27%). Only one app (3%) provided information or resources related to cannabis abuse, addiction, or treatment. On Google Play, most apps were free (93%), rated "high maturity" (79%), and the average user rating was 4.1/5. The most popular app types offered games (28%), phone utilities (eg, wallpaper, clock; 21%) and cannabis food recipes (21%); no apps addressed abuse, addiction, or treatment. Cannabis apps are generally free and highly rated. Apps were most often informational (facts, strain classification), or recreational (games), likely reflecting and influencing the growing acceptance of cannabis for medical and recreational purposes. Apps addressing addiction or cessation were underrepresented in the most popular cannabis mobile apps. Differences among apps for Apple and Android platforms likely reflect differences in the population of users, developer choice, and platform regulations.

  5. Cannabis Mobile Apps: A Content Analysis

    PubMed Central

    Popova, Lucy; Grana, Rachel; Zhao, Shirley; Chavez, Kathryn

    2015-01-01

    Background Mobile technology is pervasive and widely used to obtain information about drugs such as cannabis, especially in a climate of rapidly changing cannabis policy; yet the content of available cannabis apps is largely unknown. Understanding the resources available to those searching for cannabis apps will clarify how this technology is being used to reflect and influence cannabis use behavior. Objective We investigated the content of 59 cannabis-related mobile apps for Apple and Android devices as of November 26, 2014. Methods The Apple and Google Play app stores were searched using the terms “cannabis” and “marijuana.” Three trained coders classified the top 20 apps for each term and each store, using a coding guide. Apps were examined for the presence of 20 content codes derived by the researchers. Results Total apps available for each search term were 124 for cannabis and 218 for marijuana in the Apple App Store, and 250 each for cannabis and marijuana on Google Play. The top 20 apps in each category in each store were coded for 59 independent apps (30 Apple, 29 Google Play). The three most common content areas were cannabis strain classification (33.9%), facts about cannabis (20.3%), and games (20.3%). In the Apple App Store, most apps were free (77%), all were rated “17+” years, and the average user rating was 3.9/5 stars. The most popular apps provided cannabis strain classifications (50%), dispensary information (27%), or general facts about cannabis (27%). Only one app (3%) provided information or resources related to cannabis abuse, addiction, or treatment. On Google Play, most apps were free (93%), rated “high maturity” (79%), and the average user rating was 4.1/5. The most popular app types offered games (28%), phone utilities (eg, wallpaper, clock; 21%) and cannabis food recipes (21%); no apps addressed abuse, addiction, or treatment. Conclusions Cannabis apps are generally free and highly rated. Apps were most often informational (facts, strain classification), or recreational (games), likely reflecting and influencing the growing acceptance of cannabis for medical and recreational purposes. Apps addressing addiction or cessation were underrepresented in the most popular cannabis mobile apps. Differences among apps for Apple and Android platforms likely reflect differences in the population of users, developer choice, and platform regulations. PMID:26268634

  6. Toward attenuating the impact of arm positions on electromyography pattern-recognition based motion classification in transradial amputees

    PubMed Central

    2012-01-01

    Background Electromyography (EMG) pattern-recognition based control strategies for multifunctional myoelectric prosthesis systems have commonly been studied in a controlled laboratory setting. Before these myoelectric prosthesis systems are clinically viable, it will be necessary to assess the effect of some disparities between the ideal laboratory setting and practical use on the control performance. One important obstacle is the impact of arm position variation, which changes the EMG pattern when identical motions are performed in different arm positions. This study aimed to investigate the impacts of arm position variation on EMG pattern-recognition based motion classification in upper-limb amputees and the solutions for reducing these impacts. Methods With five unilateral transradial (TR) amputees, EMG signals and tri-axial accelerometer mechanomyography (ACC-MMG) signals were simultaneously collected from both amputated and intact arms when performing six classes of arm and hand movements in each of the five arm positions considered in the study. The effect of the arm position changes was estimated in terms of motion classification error and compared between amputated and intact arms. Then the performance of three proposed methods in attenuating the impact of arm positions was evaluated. Results With EMG signals, the average intra-position and inter-position classification errors across all five arm positions and five subjects were around 7.3% and 29.9% from amputated arms, respectively, about 1.0% and 10% lower than those from intact arms. While ACC-MMG signals could yield a similar intra-position classification error (9.9%) as EMG, they had a much higher inter-position classification error, with an average value of 81.1% over the arm positions and the subjects. When the EMG data from all five arm positions were involved in the training set, the average classification error reached a value of around 10.8% for amputated arms. Using a two-stage cascade classifier, the average classification error was around 9.0% over all five arm positions. Reducing the number of ACC-MMG channels from 8 to 2 only increased the average position classification error across all five arm positions from 0.7% to 1.0% in amputated arms. Conclusions The performance of the EMG pattern-recognition based method in classifying movements strongly depends on arm position. This dependency is a little stronger in the intact arm than in the amputated arm, which suggests that investigations associated with practical use of a myoelectric prosthesis should use limb amputees as subjects instead of able-bodied subjects. The two-stage cascade classifier mode, with ACC-MMG for limb position identification and EMG for limb motion classification, may be a promising way to reduce the effect of limb position variation on classification performance. PMID:23036049
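
    A minimal sketch of the two-stage cascade idea follows: a first classifier predicts arm position from accelerometer-derived features, and a position-specific classifier trained on EMG features then predicts the motion. The features, data, and choice of LDA for both stages are assumptions for illustration only.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

        rng = np.random.default_rng(0)
        n, n_pos, n_motion = 600, 5, 6
        acc = rng.normal(size=(n, 6))                    # ACC-MMG features (arm-position cues)
        emg = rng.normal(size=(n, 16))                   # EMG time-domain features
        pos = rng.integers(0, n_pos, n)
        motion = rng.integers(0, n_motion, n)

        stage1 = LDA().fit(acc, pos)                     # stage 1: arm-position classifier
        stage2 = {p: LDA().fit(emg[pos == p], motion[pos == p])   # stage 2: per-position motion classifiers
                  for p in range(n_pos)}

        def cascade_predict(acc_feat, emg_feat):
            p = int(stage1.predict(acc_feat.reshape(1, -1))[0])
            return stage2[p].predict(emg_feat.reshape(1, -1))[0]

        print(cascade_predict(acc[0], emg[0]))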

  7. Comparison of Single and Multi-Scale Method for Leaf and Wood Points Classification from Terrestrial Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Wei, Hongqiang; Zhou, Guiyun; Zhou, Junjie

    2018-04-01

    The classification of leaf and wood points is an essential preprocessing step for extracting inventory measurements and canopy characterization of trees from terrestrial laser scanning (TLS) data. The geometry-based approach is one of the widely used classification methods. In the geometry-based method, it is common practice to extract salient features at a single scale before the features are used for classification. It remains unclear how the scale(s) used affect the classification accuracy and efficiency. To assess the scale effect on classification accuracy and efficiency, we extracted single-scale and multi-scale salient features from the point clouds of two oak trees of different sizes and classified the points into leaf and wood. Our experimental results show that the balanced accuracy of the multi-scale method is higher than the average balanced accuracy of the single-scale method by about 10% for both trees. The average speed-up ratio of the single-scale classifiers over the multi-scale classifier is higher than 30 for each tree.
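
    Geometric salient features of this kind are commonly derived from the eigenvalues of the local covariance at one or more neighbourhood radii. The sketch below illustrates that generic recipe; the feature definitions, radii and random-forest classifier are assumptions, not the exact pipeline of the paper.

      # Illustrative sketch (not the authors' exact pipeline): eigenvalue-based
      # salient features computed at several neighbourhood radii, concatenated into
      # a multi-scale descriptor and fed to a generic classifier.
      import numpy as np
      from scipy.spatial import cKDTree
      from sklearn.ensemble import RandomForestClassifier

      def salient_features(points, radius):
          """Linearity, planarity and scattering from the local covariance eigenvalues."""
          tree = cKDTree(points)
          feats = np.zeros((len(points), 3))
          for i, p in enumerate(points):
              idx = tree.query_ball_point(p, radius)
              if len(idx) < 3:
                  continue
              evals = np.linalg.eigvalsh(np.cov(points[idx].T))[::-1]  # descending order
              l1, l2, l3 = np.maximum(evals, 1e-12)
              feats[i] = [(l1 - l2) / l1, (l2 - l3) / l1, l3 / l1]
          return feats

      def multiscale_features(points, radii=(0.05, 0.1, 0.2)):
          # concatenate single-scale descriptors into one multi-scale descriptor
          return np.hstack([salient_features(points, r) for r in radii])

      # Usage: X = multiscale_features(xyz); RandomForestClassifier().fit(X, leaf_wood_labels)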

  8. An approach to emotion recognition in single-channel EEG signals: a mother child interaction

    NASA Astrophysics Data System (ADS)

    Gómez, A.; Quintero, L.; López, N.; Castro, J.

    2016-04-01

    In this work, we present a first approach to emotion recognition from single-channel EEG signals extracted in a developmental psychology experiment with four (4) mother-child dyads. The single-channel EEG signals are analyzed and processed using several window sizes by performing a statistical analysis over features in the time and frequency domains. Finally, a neural network obtained an average classification accuracy of 99% for two emotional states, happiness and sadness.

  9. Use of the UPOINT Classification in Turkish Chronic Prostatitis or Chronic Pelvic Pain Syndrome Patients.

    PubMed

    Arda, Ersan; Cakiroglu, Basri; Tas, Tuncay; Ekici, Sinan; Uyanik, Bekir Sami

    2016-11-01

    To determine the positive subdomain numbers and distribution of the UPOINT classification in chronic prostatitis and to compare the erectile dysfunction (ED) pattern. From 2008 to 2013, 839 patients with symptomatic chronic prostatitis or chronic pelvic pain syndrome were included in this study. The correlation between UPOINT domains and the National Institutes of Health chronic prostatitis symptom index (NIH-CPSI) total score, subscores, and the 5-item International Index of Erectile Function scores was evaluated retrospectively. The mean patient age was 37.7 ± 7.4 years (range 21-65). The average total NIH-CPSI score was 9.07 (range 1-40) and the average number of positive UPOINT subdomains was 2.87 ± 0.32 (range 1-6). Subdomain patient numbers and rates were 529 urinary (63%), 462 psychosocial (55%), 382 organ specific (45%), 290 infection (34%), 288 neurological or systemic (34%), and 418 tenderness (skeletal muscle) (50%), respectively. ED, which defines the sexual dysfunction subdomain, was present in 326 (39.9%) patients: 220 with mild (26.2%), 76 with mild to moderate (9.1%), 19 with moderate (2.3%), and 5 with severe (0.6%) ED. No statistically significant correlation was found between the 5-item International Index of Erectile Function score and either the UPOINT subdomain number or the NIH-CPSI score. Although there is a strong and significant correlation between the UPOINT classification and the NIH-CPSI score in Turkish patients with chronic prostatitis or chronic pelvic pain syndrome, the inclusion of ED as an independent subdomain in the UPOINT classification is not statistically significant. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Attributed graph distance measure for automatic detection of attention deficit hyperactive disordered subjects.

    PubMed

    Dey, Soumyabrata; Rao, A Ravishankar; Shah, Mubarak

    2014-01-01

    Attention Deficit Hyperactive Disorder (ADHD) has been getting a lot of attention recently for two reasons. First, it is one of the most commonly found childhood disorders, and second, its root cause is still unknown. Functional Magnetic Resonance Imaging (fMRI) data has become a popular tool for the analysis of ADHD, which is the focus of our current research. In this paper we propose a novel framework for the automatic classification of ADHD subjects using their resting state fMRI (rs-fMRI) data of the brain. We construct brain functional connectivity networks for all the subjects. The nodes of the network are constructed from clusters of highly active voxels, and the edges between any pair of nodes represent the correlation between their average fMRI time series. The activity level of the voxels is measured based on the average power of their corresponding fMRI time series. For each node of the networks, a local descriptor comprising a set of attributes of the node is computed. Next, the Multi-Dimensional Scaling (MDS) technique is used to project all the subjects from the unknown graph-space to a low dimensional space based on their inter-graph distance measures. Finally, a Support Vector Machine (SVM) classifier is used on the low dimensional projected space for automatic classification of the ADHD subjects. Exhaustive experimental validation of the proposed method is performed using the data set released for the ADHD-200 competition. Our method shows promise as we achieve impressive classification accuracies on the training (70.49%) and test data sets (73.55%). Our results reveal that the detection rates are higher when classification is performed separately on the male and female groups of subjects.
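
    The final two steps of this pipeline, MDS projection of a precomputed inter-graph distance matrix followed by SVM classification, can be sketched as follows; the distance matrix D, the number of components and the SVM settings are placeholders rather than the values used in the paper.

      # Hypothetical sketch: embed subjects into a low-dimensional space from a
      # precomputed inter-graph distance matrix with MDS, then classify with an SVM.
      # D is an (n_subjects x n_subjects) symmetric distance matrix, y the labels.
      from sklearn.manifold import MDS
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      def classify_from_distances(D, y, n_components=10):
          # metric MDS on the precomputed dissimilarities
          embedding = MDS(n_components=n_components, dissimilarity="precomputed",
                          random_state=0).fit_transform(D)
          clf = SVC(kernel="rbf", C=1.0)
          # cross-validated accuracy on the projected space
          return cross_val_score(clf, embedding, y, cv=5).mean()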

  11. Hearing handicap in patients with chronic kidney disease: a study of the different classifications of the degree of hearing loss.

    PubMed

    Costa, Klinger Vagner Teixeira da; Ferreira, Sonia Maria Soares; Menezes, Pedro de Lemos

    The association between hearing loss and chronic kidney disease and hemodialysis has been well documented. However, the classification used for the degree of loss may underestimate the actual diagnosis due to specific characteristics related to the most affected auditory frequencies. Furthermore, correlations of hearing loss and hemodialysis time with hearing handicap remain unknown in this population. To compare the results of Lloyd's and Kaplan's and The Bureau Internacional d'Audiophonologie classifications in chronic kidney disease patients, and to correlate the averages calculated by their formulas with hemodialysis time and the hearing handicap. This is an analytical, observational and cross-sectional study of 80 patients on hemodialysis. Patients underwent tympanometry, speech audiometry and pure tone audiometry, and those with hearing loss were interviewed using the Hearing Handicap Inventory for Adults. Cases were classified according to the degree of loss. The correlations of tone averages with hemodialysis time and with the total scores of the Hearing Handicap Inventory for Adults and its domains were assessed. Eighty-six ears (53.75%) had hearing loss in at least one of the tonal averages, in 48 patients who responded to the Hearing Handicap Inventory for Adults. The Bureau Internacional d'Audiophonologie classification identified a greater number of cases (n=52) with some degree of disability compared to Lloyd and Kaplan (n=16). In the group with hemodialysis time of at least 2 years, there was a weak but statistically significant correlation of The Bureau Internacional d'Audiophonologie classification average with hemodialysis time (r=0.363). There were moderate correlations of The Bureau Internacional d'Audiophonologie classification average (r=0.510) and tritone 2 (r=0.470) with the total scores of the Hearing Handicap Inventory for Adults and with its social domain. The Bureau Internacional d'Audiophonologie classification seems to be more appropriate than Lloyd's and Kaplan's for use in this population; its average showed correlations with hearing loss in patients with hemodialysis time ≥ 2 years and it exhibited moderate levels of correlation with the total score of the Hearing Handicap Inventory for Adults and its social domain (r=0.557 and r=0.512). Copyright © 2016 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  12. Workplace assaults on minority health and mental health care workers in Los Angeles.

    PubMed Central

    Sullivan, C; Yuan, C

    1995-01-01

    Workplace violence is becoming increasingly recognized as a serious problem in health care settings. All 628 workers' compensation assaults claimed by minority Los Angeles County health care workers from 1986 through 1990 were abstracted. Population-at-risk data from county personnel computer tapes provided denominators by age, sex, race, job classification, and type of facility. Rates varied by type of facility (rate ratio = 38 for psychiatric hospitals vs public health facilities) and varied by job, with inpatient nursing attendants having the highest rate for caregivers. Most assaults were committed by patients (86%), followed by coworkers (8%). The average cost of an assault ($4879) was relatively low but related to the costlier problem of work-related emotional illness. PMID:7604900

  13. A new ICA-based fingerprint method for the automatic removal of physiological artifacts from EEG recordings.

    PubMed

    Tamburro, Gabriella; Fiedler, Patrique; Stone, David; Haueisen, Jens; Comani, Silvia

    2018-01-01

    EEG may be affected by artefacts hindering the analysis of brain signals. Data-driven methods like independent component analysis (ICA) are successful approaches to remove artefacts from the EEG. However, the ICA-based methods developed so far are often affected by limitations, such as: the need for visual inspection of the separated independent components (subjectivity problem) and, in some cases, for the independent and simultaneous recording of the inspected artefacts to identify the artefactual independent components; a potentially heavy manipulation of the EEG signals; the use of linear classification methods; the use of simulated artefacts to validate the methods; no testing in dry electrode or high-density EEG datasets; applications limited to specific conditions and electrode layouts. Our fingerprint method automatically identifies EEG ICs containing eyeblinks, eye movements, myogenic artefacts and cardiac interference by evaluating 14 temporal, spatial, spectral, and statistical features composing the IC fingerprint. Sixty-two real EEG datasets containing cued artefacts are recorded with wet and dry electrodes (128 wet and 97 dry channels). For each artefact, 10 nonlinear SVM classifiers are trained on fingerprints of expert-classified ICs. Training groups include randomly chosen wet and dry datasets decomposed in 80 ICs. The classifiers are tested on the IC-fingerprints of different datasets decomposed into 20, 50, or 80 ICs. The SVM performance is assessed in terms of accuracy, False Omission Rate (FOR), Hit Rate (HR), False Alarm Rate (FAR), and sensitivity ( p ). For each artefact, the quality of the artefact-free EEG reconstructed using the classification of the best SVM is assessed by visual inspection and SNR. The best SVM classifier for each artefact type achieved average accuracy of 1 (eyeblink), 0.98 (cardiac interference), and 0.97 (eye movement and myogenic artefact). Average classification sensitivity (p) was 1 (eyeblink), 0.997 (myogenic artefact), 0.98 (eye movement), and 0.48 (cardiac interference). Average artefact reduction ranged from a maximum of 82% for eyeblinks to a minimum of 33% for cardiac interference, depending on the effectiveness of the proposed method and the amplitude of the removed artefact. The performance of the SVM classifiers did not depend on the electrode type, whereas it was better for lower decomposition levels (50 and 20 ICs). Apart from cardiac interference, SVM performance and average artefact reduction indicate that the fingerprint method has an excellent overall performance in the automatic detection of eyeblinks, eye movements and myogenic artefacts, which is comparable to that of existing methods. Being also independent from simultaneous artefact recording, electrode number, type and layout, and decomposition level, the proposed fingerprint method can have useful applications in clinical and experimental EEG settings.
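
    A minimal sketch of the classification stage is given below, assuming a matrix of 14-feature IC fingerprints and expert binary labels per artefact type; the RBF kernel, feature scaling and per-artefact binary setup are illustrative assumptions rather than the published configuration.

      # Illustrative sketch: one nonlinear (RBF) SVM per artefact type, trained on
      # 14-feature IC fingerprints with expert labels. The feature extraction, kernel
      # settings and any class weighting are assumptions, not the published settings.
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      ARTEFACTS = ["eyeblink", "eye_movement", "myogenic", "cardiac"]

      def train_artifact_classifiers(fingerprints, labels_by_artifact):
          """fingerprints: (n_ICs, 14); labels_by_artifact[name]: binary array per IC."""
          classifiers = {}
          for name in ARTEFACTS:
              clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
              clf.fit(fingerprints, labels_by_artifact[name])
              classifiers[name] = clf
          return classifiers

      # ICs flagged by any classifier would then be removed before reconstructing the EEG.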

  14. A new ICA-based fingerprint method for the automatic removal of physiological artifacts from EEG recordings

    PubMed Central

    Tamburro, Gabriella; Fiedler, Patrique; Stone, David; Haueisen, Jens

    2018-01-01

    Background EEG may be affected by artefacts hindering the analysis of brain signals. Data-driven methods like independent component analysis (ICA) are successful approaches to remove artefacts from the EEG. However, the ICA-based methods developed so far are often affected by limitations, such as: the need for visual inspection of the separated independent components (subjectivity problem) and, in some cases, for the independent and simultaneous recording of the inspected artefacts to identify the artefactual independent components; a potentially heavy manipulation of the EEG signals; the use of linear classification methods; the use of simulated artefacts to validate the methods; no testing in dry electrode or high-density EEG datasets; applications limited to specific conditions and electrode layouts. Methods Our fingerprint method automatically identifies EEG ICs containing eyeblinks, eye movements, myogenic artefacts and cardiac interference by evaluating 14 temporal, spatial, spectral, and statistical features composing the IC fingerprint. Sixty-two real EEG datasets containing cued artefacts are recorded with wet and dry electrodes (128 wet and 97 dry channels). For each artefact, 10 nonlinear SVM classifiers are trained on fingerprints of expert-classified ICs. Training groups include randomly chosen wet and dry datasets decomposed in 80 ICs. The classifiers are tested on the IC-fingerprints of different datasets decomposed into 20, 50, or 80 ICs. The SVM performance is assessed in terms of accuracy, False Omission Rate (FOR), Hit Rate (HR), False Alarm Rate (FAR), and sensitivity (p). For each artefact, the quality of the artefact-free EEG reconstructed using the classification of the best SVM is assessed by visual inspection and SNR. Results The best SVM classifier for each artefact type achieved average accuracy of 1 (eyeblink), 0.98 (cardiac interference), and 0.97 (eye movement and myogenic artefact). Average classification sensitivity (p) was 1 (eyeblink), 0.997 (myogenic artefact), 0.98 (eye movement), and 0.48 (cardiac interference). Average artefact reduction ranged from a maximum of 82% for eyeblinks to a minimum of 33% for cardiac interference, depending on the effectiveness of the proposed method and the amplitude of the removed artefact. The performance of the SVM classifiers did not depend on the electrode type, whereas it was better for lower decomposition levels (50 and 20 ICs). Discussion Apart from cardiac interference, SVM performance and average artefact reduction indicate that the fingerprint method has an excellent overall performance in the automatic detection of eyeblinks, eye movements and myogenic artefacts, which is comparable to that of existing methods. Being also independent from simultaneous artefact recording, electrode number, type and layout, and decomposition level, the proposed fingerprint method can have useful applications in clinical and experimental EEG settings. PMID:29492336

  15. Trends and Burden of Bronchiectasis-Associated Hospitalizations in the United States, 1993-2006

    PubMed Central

    Seitz, Amy E.; Olivier, Kenneth N.; Steiner, Claudia A.; Montes de Oca, Ruben; Holland, Steven M.

    2010-01-01

    Background: Current data on bronchiectasis prevalence, trends, and risk factors are lacking; such data are needed to estimate the burden of disease and for improved medical care and public health resource allocation. The objective of the present study was to estimate the trends and burden of bronchiectasis-associated hospitalizations in the United States. Methods: We extracted hospital discharge records containing International Classification of Diseases, 9th Revision, Clinical Modification codes for bronchiectasis (494, 494.0, and 494.1) as any discharge diagnosis from the State Inpatient Databases from the Agency for Healthcare Research and Quality. Discharge records were extracted for 12 states with complete and continuous reporting from 1993 to 2006. Results: The average annual age-adjusted hospitalization rate from 1993 to 2006 was 16.5 hospitalizations per 100,000 population. From 1993 to 2006, the age-adjusted rate increased significantly, with an average annual percentage increase of 2.4% among men and 3.0% among women. Women and persons aged > 60 years had the highest rate of bronchiectasis-associated hospitalizations. The median cost for inpatient care was 7,827 US dollars (USD) (range, 13-543,914 USD). Conclusions: The average annual age-adjusted rate of bronchiectasis-associated hospitalizations increased from 1993 to 2006. This study furthers the understanding of the impact of bronchiectasis and demonstrates the need for further research to identify risk factors and reasons for the increasing burden. PMID:20435655

  16. A novel onset detection technique for brain-computer interfaces using sound-production related cognitive tasks in simulated-online system

    NASA Astrophysics Data System (ADS)

    Song, YoungJae; Sepulveda, Francisco

    2017-02-01

    Objective. Self-paced EEG-based BCIs (SP-BCIs) have traditionally been avoided due to two sources of uncertainty: (1) precisely when an intentional command is sent by the brain, i.e., the command onset detection problem, and (2) how different the intentional command is when compared to non-specific (or idle) states. Performance evaluation is also a problem and there are no suitable standard metrics available. In this paper we attempted to tackle these issues. Approach. Self-paced covert sound-production cognitive tasks (i.e., high pitch and siren-like sounds) were used to distinguish between intentional commands (IC) and idle states. The IC states were chosen for their ease of execution and negligible overlap with common cognitive states. Band power and a digital wavelet transform were used for feature extraction, and the Davies-Bouldin index was used for feature selection. Classification was performed using linear discriminant analysis. Main results. Performance was evaluated under offline and simulated-online conditions. For the latter, a performance score called true-false-positive (TFP) rate, ranging from 0 (poor) to 100 (perfect), was created to take into account both classification performance and onset timing errors. Averaging the results from the best performing IC task for all seven participants, a 77.7% true-positive (TP) rate was achieved in offline testing. For simulated-online analysis the best IC average TFP score was 76.67% (87.61% TP rate, 4.05% false-positive rate). Significance. Results were promising when compared to previous IC onset detection studies using motor imagery, in which best TP rates were reported as 72.0% and 79.7%, and which, crucially, did not take timing errors into account. Moreover, based on our literature review, there is no previous covert sound-production onset detection system for SP-BCIs. Results showed that the proposed onset detection technique and TFP performance metric have good potential for use in SP-BCIs.

  17. A Single-Channel EOG-Based Speller.

    PubMed

    He, Shenghong; Li, Yuanqing

    2017-11-01

    Electrooculography (EOG) signals, which can be used to infer the intentions of a user based on eye movements, are widely used in human-computer interface (HCI) systems. Most existing EOG-based HCI systems incorporate a limited number of commands because they generally associate different commands with a few different types of eye movements, such as looking up, down, left, or right. This paper presents a novel single-channel EOG-based HCI that allows users to spell asynchronously by only blinking. Forty buttons corresponding to 40 characters displayed to the user via a graphical user interface are intensified in a random order. To select a button, the user must blink his/her eyes in synchrony as the target button is flashed. Two data processing procedures, specifically support vector machine (SVM) classification and waveform detection, are combined to detect eye blinks. During detection, we simultaneously feed the feature vectors extracted from the ongoing EOG signal into the SVM classification and waveform detection modules. Decisions are made based on the results of the SVM classification and waveform detection. Three online experiments were conducted with eight healthy subjects. We achieved an average accuracy of 94.4% and a response time of 4.14 s for selecting a character in synchronous mode, as well as an average accuracy of 93.43% and a false positive rate of 0.03/min in the idle state in asynchronous mode. The experimental results, therefore, demonstrated the effectiveness of this single-channel EOG-based speller.

  18. Deep Learning with Convolutional Neural Networks Applied to Electromyography Data: A Resource for the Classification of Movements for Prosthetic Hands

    PubMed Central

    Atzori, Manfredo; Cognolato, Matteo; Müller, Henning

    2016-01-01

    Natural control methods based on surface electromyography (sEMG) and pattern recognition are promising for hand prosthetics. However, the control robustness offered by scientific research is still not sufficient for many real life applications, and commercial prostheses are capable of offering natural control for only a few movements. In recent years deep learning revolutionized several fields of machine learning, including computer vision and speech recognition. Our objective is to test its methods for natural control of robotic hands via sEMG using a large number of intact subjects and amputees. We tested convolutional networks for the classification of an average of 50 hand movements in 67 intact subjects and 11 transradial amputees. The simple architecture of the neural network allowed us to run several tests evaluating the effect of pre-processing, layer architecture, data augmentation and optimization. The classification results are compared with a set of classical classification methods applied to the same datasets. The classification accuracy obtained with convolutional neural networks using the proposed architecture is higher than the average results obtained with the classical classification methods, but lower than the results obtained with the best reference methods in our tests. The results show that convolutional neural networks with a very simple architecture can produce accurate results comparable to the average of the classical classification methods. They show that several factors (including pre-processing, the architecture of the net and the optimization parameters) can be fundamental for the analysis of sEMG data. Larger networks can achieve higher accuracy on computer vision and object recognition tasks. This fact suggests that it may be interesting to evaluate whether larger networks can increase sEMG classification accuracy too. PMID:27656140
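
    A minimal sketch of a small 1D convolutional network for windowed sEMG classification is given below (PyTorch); the layer sizes, number of channels, window length and number of classes are placeholders, not the architecture evaluated in the paper.

      # Minimal sketch of a small 1D CNN for windowed sEMG classification (PyTorch).
      import torch
      import torch.nn as nn

      class SmallEMGNet(nn.Module):
          def __init__(self, n_channels=10, n_classes=50):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
                  nn.MaxPool1d(4),
                  nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
                  nn.AdaptiveAvgPool1d(1),          # global average pooling over time
              )
              self.classifier = nn.Linear(64, n_classes)

          def forward(self, x):                     # x: (batch, n_channels, n_samples)
              return self.classifier(self.features(x).squeeze(-1))

      # Usage: logits = SmallEMGNet()(torch.randn(8, 10, 400))
      #        loss = nn.CrossEntropyLoss()(logits, movement_labels)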

  19. Deep Learning with Convolutional Neural Networks Applied to Electromyography Data: A Resource for the Classification of Movements for Prosthetic Hands.

    PubMed

    Atzori, Manfredo; Cognolato, Matteo; Müller, Henning

    2016-01-01

    Natural control methods based on surface electromyography (sEMG) and pattern recognition are promising for hand prosthetics. However, the control robustness offered by scientific research is still not sufficient for many real life applications, and commercial prostheses are capable of offering natural control for only a few movements. In recent years deep learning revolutionized several fields of machine learning, including computer vision and speech recognition. Our objective is to test its methods for natural control of robotic hands via sEMG using a large number of intact subjects and amputees. We tested convolutional networks for the classification of an average of 50 hand movements in 67 intact subjects and 11 transradial amputees. The simple architecture of the neural network allowed us to run several tests evaluating the effect of pre-processing, layer architecture, data augmentation and optimization. The classification results are compared with a set of classical classification methods applied to the same datasets. The classification accuracy obtained with convolutional neural networks using the proposed architecture is higher than the average results obtained with the classical classification methods, but lower than the results obtained with the best reference methods in our tests. The results show that convolutional neural networks with a very simple architecture can produce accurate results comparable to the average of the classical classification methods. They show that several factors (including pre-processing, the architecture of the net and the optimization parameters) can be fundamental for the analysis of sEMG data. Larger networks can achieve higher accuracy on computer vision and object recognition tasks. This fact suggests that it may be interesting to evaluate whether larger networks can increase sEMG classification accuracy too.

  20. Ensemble of classifiers for confidence-rated classification of NDE signal

    NASA Astrophysics Data System (ADS)

    Banerjee, Portia; Safdarnejad, Seyed; Udpa, Lalita; Udpa, Satish

    2016-02-01

    An ensemble of classifiers, in general, aims to improve classification accuracy by combining results from multiple weak hypotheses into a single strong classifier through weighted majority voting. Improved versions of classifier ensembles generate self-rated confidence scores that estimate the reliability of each prediction and boost the classifier using these confidence-rated predictions. However, such a confidence metric is based only on the rate of correct classification. Although ensembles of classifiers have been widely used in computational intelligence, the effect of all sources of unreliability on the confidence of classification is largely overlooked in existing work. With relevance to NDE, classification results are affected by the inherent ambiguity of classification, non-discriminative features, inadequate training samples and measurement noise. In this paper, we extend existing ensemble classification by maximizing the confidence of every classification decision in addition to minimizing the classification error. Initial results of the approach on data from eddy current inspection show improvement in classification performance for defect and non-defect indications.
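
    As a baseline illustration of confidence-rated ensemble prediction (the starting point that the paper extends, not the proposed confidence model itself), a boosted ensemble can report the margin of its weighted vote as a confidence score:

      # Baseline sketch: boosted weak learners vote, and the weighted-vote class
      # probability is reported as a per-decision confidence score.
      import numpy as np
      from sklearn.ensemble import AdaBoostClassifier

      def confidence_rated_predictions(X_train, y_train, X_test):
          ens = AdaBoostClassifier(n_estimators=100).fit(X_train, y_train)
          proba = ens.predict_proba(X_test)         # weighted-vote class probabilities
          labels = ens.classes_[np.argmax(proba, axis=1)]
          confidence = np.max(proba, axis=1)        # larger margin -> higher confidence
          return labels, confidence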

  1. Classification of complex information: inference of co-occurring affective states from their expressions in speech.

    PubMed

    Sobol-Shikler, Tal; Robinson, Peter

    2010-07-01

    We present a classification algorithm for inferring affective states (emotions, mental states, attitudes, and the like) from their nonverbal expressions in speech. It is based on the observations that affective states can occur simultaneously and different sets of vocal features, such as intonation and speech rate, distinguish between nonverbal expressions of different affective states. The input to the inference system was a large set of vocal features and metrics that were extracted from each utterance. The classification algorithm conducted independent pairwise comparisons between nine affective-state groups. The classifier used various subsets of metrics of the vocal features and various classification algorithms for different pairs of affective-state groups. Average classification accuracy of the 36 pairwise machines was 75 percent, using 10-fold cross validation. The comparison results were consolidated into a single ranked list of the nine affective-state groups. This list was the output of the system and represented the inferred combination of co-occurring affective states for the analyzed utterance. The inference accuracy of the combined machine was 83 percent. The system automatically characterized over 500 affective state concepts from the Mind Reading database. The inference of co-occurring affective states was validated by comparing the inferred combinations to the lexical definitions of the labels of the analyzed sentences. The distinguishing capabilities of the system were comparable to human performance.

  2. Is Nigeria losing its natural vegetation and landscape? Assessing the landuse-landcover change trajectories and effects in Onitsha using remote sensing and GIS

    NASA Astrophysics Data System (ADS)

    Nwaogu, Chukwudi; Okeke, Onyedikachi J.; Fadipe, Olusola O.; Bashiru, Kehinde A.; Pechanec, Vilém

    2017-12-01

    Onitsha is one of the largest commercial cities in Africa, with its population growth rate increasing arithmetically over the past two decades. This situation has direct and indirect effects on the natural resources, including vegetation and water. The study aimed at assessing land use-land cover (LULC) change and its effects on the vegetation and landscape from 1987 to 2015 using geoinformatics. Supervised and unsupervised classifications, including the maximum likelihood algorithm, were performed using ENVI 4.7 and ArcGIS 10.1. The LULC was classified into 7 classes: built-up areas (settlement), waterbody, thick vegetation, light vegetation, riparian vegetation, sand deposit (bare soil) and floodplain. The results revealed that all three vegetation types decreased in area throughout the study period, while settlement, sand deposit and floodplain areas increased remarkably, by about 100% in 2015 compared with their 1987 totals. The number of dominant plant species decreased continuously during the study. The overall classification accuracies in 1987, 2002 and 2015 were 90.7%, 92.9% and 95.5%, respectively. The overall kappa coefficients of the image classification for 1987, 2002 and 2015 were 0.98, 0.93 and 0.96, respectively. In general, the average classification accuracy was above 90%, indicating that the classification was reliable and acceptable.

  3. HMM based automated wheelchair navigation using EOG traces in EEG

    NASA Astrophysics Data System (ADS)

    Aziz, Fayeem; Arof, Hamzah; Mokhtar, Norrima; Mubin, Marizan

    2014-10-01

    This paper presents a wheelchair navigation system based on a hidden Markov model (HMM), which we developed to assist those with restricted mobility. The semi-autonomous system is equipped with obstacle/collision avoidance sensors and it takes the electrooculography (EOG) signal traces from the user as commands to maneuver the wheelchair. The EOG traces originate from eyeball and eyelid movements and they are embedded in EEG signals collected from the scalp of the user at three different locations. Features extracted from the EOG traces are used to determine whether the eyes are open or closed, and whether the eyes are gazing to the right, center, or left. These features are utilized as inputs to a few support vector machine (SVM) classifiers, whose outputs are regarded as observations to an HMM. The HMM determines the state of the system and generates commands for navigating the wheelchair accordingly. The use of simple features and the implementation of a sliding window that captures important signatures in the EOG traces result in a fast execution time and high classification rates. The wheelchair is equipped with a proximity sensor and it can move forward and backward in three directions. The asynchronous system achieved an average classification rate of 98% when tested with online data while its average execution time was less than 1 s. It was also tested in a navigation experiment where all of the participants managed to complete the tasks successfully without collisions.
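
    The coupling of SVM outputs to an HMM can be illustrated with a hand-rolled Viterbi decoder over discretized observations; the states, observation symbols and probabilities below are made-up placeholders, not the parameters of the described system.

      # Illustrative sketch: treat discretized SVM outputs (e.g. eyes open/closed,
      # gaze left/centre/right) as observation symbols and decode the most likely
      # command-state sequence with Viterbi.
      import numpy as np

      def viterbi(obs, start_p, trans_p, emit_p):
          """obs: sequence of observation indices; returns the most likely state path."""
          n_states = len(start_p)
          T = len(obs)
          logv = np.full((T, n_states), -np.inf)
          back = np.zeros((T, n_states), dtype=int)
          logv[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
          for t in range(1, T):
              for s in range(n_states):
                  scores = logv[t - 1] + np.log(trans_p[:, s])
                  back[t, s] = np.argmax(scores)
                  logv[t, s] = scores[back[t, s]] + np.log(emit_p[s, obs[t]])
          path = [int(np.argmax(logv[-1]))]
          for t in range(T - 1, 0, -1):
              path.append(back[t, path[-1]])
          return path[::-1]

      # Example with 3 command states and 5 observation symbols (values are made up):
      start = np.array([0.6, 0.2, 0.2])
      trans = np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]])
      emit = np.full((3, 5), 0.05); emit[0, 0] = emit[1, 1] = emit[2, 2] = 0.8
      emit /= emit.sum(axis=1, keepdims=True)
      print(viterbi([0, 0, 1, 1, 2], start, trans, emit))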

  4. HMM based automated wheelchair navigation using EOG traces in EEG.

    PubMed

    Aziz, Fayeem; Arof, Hamzah; Mokhtar, Norrima; Mubin, Marizan

    2014-10-01

    This paper presents a wheelchair navigation system based on a hidden Markov model (HMM), which we developed to assist those with restricted mobility. The semi-autonomous system is equipped with obstacle/collision avoidance sensors and it takes the electrooculography (EOG) signal traces from the user as commands to maneuver the wheelchair. The EOG traces originate from eyeball and eyelid movements and they are embedded in EEG signals collected from the scalp of the user at three different locations. Features extracted from the EOG traces are used to determine whether the eyes are open or closed, and whether the eyes are gazing to the right, center, or left. These features are utilized as inputs to a few support vector machine (SVM) classifiers, whose outputs are regarded as observations to an HMM. The HMM determines the state of the system and generates commands for navigating the wheelchair accordingly. The use of simple features and the implementation of a sliding window that captures important signatures in the EOG traces result in a fast execution time and high classification rates. The wheelchair is equipped with a proximity sensor and it can move forward and backward in three directions. The asynchronous system achieved an average classification rate of 98% when tested with online data while its average execution time was less than 1 s. It was also tested in a navigation experiment where all of the participants managed to complete the tasks successfully without collisions.

  5. Application of a single-flicker online SSVEP BCI for spatial navigation.

    PubMed

    Chen, Jingjing; Zhang, Dan; Engel, Andreas K; Gong, Qin; Maye, Alexander

    2017-01-01

    A promising approach for brain-computer interfaces (BCIs) employs the steady-state visual evoked potential (SSVEP) for extracting control information. Main advantages of these SSVEP BCIs are a simple and low-cost setup, little effort to adjust the system parameters to the user and comparatively high information transfer rates (ITR). However, traditional frequency-coded SSVEP BCIs require the user to gaze directly at the selected flicker stimulus, which is liable to cause fatigue or even photic epileptic seizures. The spatially coded SSVEP BCI we present in this article addresses this issue. It uses a single flicker stimulus that appears always in the extrafoveal field of view, yet it allows the user to control four control channels. We demonstrate the embedding of this novel SSVEP stimulation paradigm in the user interface of an online BCI for navigating a 2-dimensional computer game. Offline analysis of the training data reveals an average classification accuracy of 96.9±1.64%, corresponding to an information transfer rate of 30.1±1.8 bits/min. In online mode, the average classification accuracy reached 87.9±11.4%, which resulted in an ITR of 23.8±6.75 bits/min. We did not observe a strong relation between a subject's offline and online performance. Analysis of the online performance over time shows that users can reliably control the new BCI paradigm with stable performance over at least 30 minutes of continuous operation.

  6. Superpixel-based spectral classification for the detection of head and neck cancer with hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Chung, Hyunkoo; Lu, Guolan; Tian, Zhiqiang; Wang, Dongsheng; Chen, Zhuo Georgia; Fei, Baowei

    2016-03-01

    Hyperspectral imaging (HSI) is an emerging imaging modality for medical applications. HSI acquires two-dimensional images at various wavelengths. The combination of both spectral and spatial information provides quantitative information for cancer detection and diagnosis. This paper proposes using superpixels, principal component analysis (PCA), and a support vector machine (SVM) to distinguish regions of tumor from healthy tissue. The classification method uses 2 principal components decomposed from the hyperspectral images and obtains an average sensitivity of 93% and an average specificity of 85% for 11 mice. The hyperspectral imaging technology and classification method can have various applications in cancer research and management.
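
    A minimal sketch of the pixel-level pipeline (two principal components followed by an SVM), with sensitivity and specificity computed from the confusion matrix, is shown below; data loading, superpixel grouping and cross-validation are omitted, and the kernel choice is an assumption.

      # Minimal sketch: reduce each pixel's spectrum to 2 principal components and
      # classify tumor vs. healthy tissue with an SVM.
      from sklearn.pipeline import make_pipeline
      from sklearn.decomposition import PCA
      from sklearn.svm import SVC
      from sklearn.metrics import confusion_matrix

      def make_hsi_classifier():
          # spectra: (n_pixels, n_wavelengths), labels: 0 = healthy, 1 = tumor
          return make_pipeline(PCA(n_components=2), SVC(kernel="rbf"))

      def sensitivity_specificity(y_true, y_pred):
          tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
          return tp / (tp + fn), tn / (tn + fp)

      # Usage: clf = make_hsi_classifier(); clf.fit(train_spectra, train_labels)
      #        sens, spec = sensitivity_specificity(test_labels, clf.predict(test_spectra))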

  7. Classification of collective behavior: a comparison of tracking and machine learning methods to study the effect of ambient light on fish shoaling.

    PubMed

    Butail, Sachit; Salerno, Philip; Bollt, Erik M; Porfiri, Maurizio

    2015-12-01

    Traditional approaches for the analysis of collective behavior entail digitizing the position of each individual, followed by evaluation of pertinent group observables, such as cohesion and polarization. Machine learning may enable considerable advancements in this area by affording the classification of these observables directly from images. While such methods have been successfully implemented in the classification of individual behavior, their potential in the study of collective behavior is largely untested. In this paper, we compare three methods for the analysis of collective behavior: simple tracking (ST) without resolving occlusions, machine learning with real data (MLR), and machine learning with synthetic data (MLS). These methods are evaluated on videos recorded from an experiment studying the effect of ambient light on the shoaling tendency of Giant danios. In particular, we compute average nearest-neighbor distance (ANND) and polarization using the three methods and compare the values with manually-verified ground-truth data. To further assess possible dependence on sampling rate for computing ANND, the comparison is also performed at a low frame rate. Results show that while ST is the most accurate at the higher frame rate for both ANND and polarization, there is no significant difference in ANND accuracy among the three methods at the low frame rate. In terms of computational speed, MLR and MLS take significantly less time to process an image, with MLS better addressing constraints related to generation of training data. Finally, all methods are able to successfully detect a significant difference in ANND as the ambient light intensity is varied, irrespective of the direction of intensity change.

  8. An embedded implementation based on adaptive filter bank for brain-computer interface systems.

    PubMed

    Belwafi, Kais; Romain, Olivier; Gannouni, Sofien; Ghaffari, Fakhreddine; Djemal, Ridha; Ouni, Bouraoui

    2018-07-15

    Brain-computer interface (BCI) is a new communication pathway for users with neurological deficiencies. The implementation of a BCI system requires complex electroencephalography (EEG) signal processing including filtering, feature extraction and classification algorithms. Most current BCI systems are implemented on personal computers. Therefore, there is a great interest in implementing BCI on embedded platforms to meet system specifications in terms of time response, cost effectiveness, power consumption, and accuracy. This article presents an embedded-BCI (EBCI) system based on a Stratix-IV field programmable gate array. The proposed system relies on the weighted overlap-add (WOLA) algorithm to perform dynamic filtering of EEG signals by analyzing the event-related desynchronization/synchronization (ERD/ERS). The EEG signals are classified, using the linear discriminant analysis algorithm, based on their spatial features. The proposed system performs fast classification within a time delay of 0.430 s/trial, achieving an average accuracy of 76.80% according to an offline approach and 80.25% using our own recording. The estimated power consumption of the prototype is approximately 0.7 W. Results show that the proposed EBCI system reduces the overall classification error rate for the three datasets of the BCI competition by 5% compared to other similar implementations. Moreover, experiments show that the proposed system maintains a high accuracy rate with a short processing time, a low power consumption, and a low cost. Performing dynamic filtering of EEG signals using WOLA increases the recognition rate of ERD/ERS patterns of motor imagery brain activity. This approach allows the development of a complete EBCI prototype that achieves excellent accuracy rates. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. Adaptive WTA with an analog VLSI neuromorphic learning chip.

    PubMed

    Häfliger, Philipp

    2007-03-01

    In this paper, we demonstrate how a particular spike-based learning rule (where exact temporal relations between input and output spikes of a spiking model neuron determine the changes of the synaptic weights) can be tuned to express rate-based classical Hebbian learning behavior (where the average input and output spike rates are sufficient to describe the synaptic changes). This shift in behavior is controlled by the input statistic and by a single time constant. The learning rule has been implemented in a neuromorphic very large scale integration (VLSI) chip as part of a neurally inspired spike signal image processing system. The latter is the result of the European Union research project Convolution AER Vision Architecture for Real-Time (CAVIAR). Since it is implemented as a spike-based learning rule (which is most convenient in the overall spike-based system), even if it is tuned to show rate behavior, no explicit long-term average signals are computed on the chip. We show the rule's rate-based Hebbian learning ability in a classification task in both simulation and chip experiment, first with artificial stimuli and then with sensor input from the CAVIAR system.

  10. Selecting a Classification Ensemble and Detecting Process Drift in an Evolving Data Stream

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heredia-Langner, Alejandro; Rodriguez, Luke R.; Lin, Andy

    2015-09-30

    We characterize the commercial behavior of a group of companies in a common line of business using a small ensemble of classifiers on a stream of records containing commercial activity information. This approach is able to effectively find a subset of classifiers that can be used to predict company labels with reasonable accuracy. Performance of the ensemble, its error rate under stable conditions, can be characterized using an exponentially weighted moving average (EWMA) statistic. The behavior of the EWMA statistic can be used to monitor a record stream from the commercial network and determine when significant changes have occurred. Results indicate that larger classification ensembles may not necessarily be optimal, pointing to the need to search the combinatorial classifier space in a systematic way. Results also show that current and past performance of an ensemble can be used to detect when statistically significant changes in the activity of the network have occurred. The dataset used in this work contains tens of thousands of high level commercial activity records with continuous and categorical variables and hundreds of labels, making classification challenging.
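
    The EWMA monitoring idea can be sketched as a simple control chart over a stream of 0/1 misclassification indicators; the smoothing constant, in-control error rate and limit width below are illustrative choices, not values from the report.

      # Sketch of monitoring an ensemble's error rate with an EWMA control chart.
      import numpy as np

      def ewma_drift_monitor(errors, p0, lambda_=0.2, L=3.0):
          """errors: stream of 0/1 misclassification indicators.
          Returns the indices at which the EWMA statistic leaves the control limits."""
          # asymptotic standard deviation of the EWMA of a Bernoulli(p0) stream
          se = np.sqrt(p0 * (1 - p0)) * np.sqrt(lambda_ / (2 - lambda_))
          z = p0
          alarms = []
          for t, e in enumerate(errors):
              z = lambda_ * e + (1 - lambda_) * z
              if abs(z - p0) > L * se:
                  alarms.append(t)
          return alarms

      # Usage: alarms = ewma_drift_monitor(error_stream, p0=0.12)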

  11. Recognition of hand movements in a trans-radial amputated subject by sEMG.

    PubMed

    Atzori, Manfredo; Muller, Henning; Baechler, Micheal

    2013-06-01

    Trans-radially amputated persons who own a myoelectric prosthesis currently have some control via surface electromyography (sEMG). However, the control systems are still limited (as they include very few movements) and not always natural (as the subject has to learn to associate movements of the muscles with the movements of the prosthesis). The Ninapro project aims to help the scientific community overcome these limits through the creation of electromyography data sources for testing machine learning algorithms. In this paper, the results of the first tests performed on an amputated subject with the Ninapro acquisition protocol are detailed. In agreement with neurological studies on cortical plasticity and on the anatomy of the forearm, the amputee produced stable signals for each movement in the test. Using a k-NN classification algorithm, we obtain an average classification rate of 61.5% on all 53 movements. Subsequently, we simplify the task by reducing the number of movements to 13, resulting in no misclassified movements. This shows that for fewer movements a very high classification accuracy is possible without the subject having to learn the movements specifically.
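
    A minimal sketch of the k-NN step follows, assuming windowed sEMG segments and a simple mean-absolute-value feature per channel; the window handling, feature choice and k are assumptions rather than the Ninapro protocol settings.

      # Hypothetical sketch: windowed sEMG features classified with k-nearest neighbours.
      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier

      def mav_features(emg_windows):
          """emg_windows: (n_windows, n_samples, n_channels) -> (n_windows, n_channels)."""
          return np.mean(np.abs(emg_windows), axis=1)

      def train_knn(train_windows, train_labels, k=5):
          return KNeighborsClassifier(n_neighbors=k).fit(mav_features(train_windows), train_labels)

      # Usage: clf = train_knn(Xtr, ytr); acc = clf.score(mav_features(Xte), yte)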

  12. Automated Analysis of Planktic Foraminifers Part III: Neural Network Classification

    NASA Astrophysics Data System (ADS)

    Schiebel, R.; Bollmann, J.; Quinn, P.; Vela, M.; Schmidt, D. N.; Thierstein, H. R.

    2003-04-01

    The abundance and assemblage composition of microplankton, together with the chemical and stable isotopic composition of their shells, are among the most successful methods in paleoceanography and paleoclimatology. However, the manual collection of statistically significant numbers of unbiased, reproducible data is time consuming. Consequently, automated microfossil analysis and species recognition has been a long-standing goal in micropaleontology. We have developed a Windows-based software package, COGNIS, for the segmentation, preprocessing, and classification of automatically acquired microfossil images (see Part II, Bollmann et al., this volume), using operator-designed neural network structures. With a five-layered convolutional neural network we obtain an average recognition rate of 75% (max. 88%) for 6 taxa (N. dutertrei, N. pachyderma dextral, N. pachyderma sinistral, G. inflata, G. menardii/tumida, O. universa), represented by 50 images each for 20 classes (separation of spiral and umbilical views, and of sinistral and dextral forms). Our investigation indicates that neural networks hold great potential for the automated classification of planktic foraminifers and offer new perspectives in micropaleontology, paleoceanography, and paleoclimatology (see Part I, Schmidt et al., this volume).

  13. Validation of a systems-actuarial computer process for multidimensional classification of child psychopathology.

    PubMed

    McDermott, P A; Hale, R L

    1982-07-01

    Tested diagnostic classifications of child psychopathology produced by a computerized technique known as multidimensional actuarial classification (MAC) against the criterion of expert psychological opinion. The MAC program applies series of statistical decision rules to assess the importance of and relationships among several dimensions of classification, i.e., intellectual functioning, academic achievement, adaptive behavior, and social and behavioral adjustment, to perform differential diagnosis of children's mental retardation, specific learning disabilities, behavioral and emotional disturbance, possible communication or perceptual-motor impairment, and academic under- and overachievement in reading and mathematics. Classifications rendered by MAC are compared to those offered by two expert child psychologists for cases of 73 children referred for psychological services. Experts' agreement with MAC was significant for all classification areas, as was MAC's agreement with the experts held as a conjoint reference standard. Whereas the experts' agreement with MAC averaged 86.0% above chance, their agreement with one another averaged 76.5% above chance. Implications of the findings are explored and potential advantages of the systems-actuarial approach are discussed.

  14. Classification of right-hand grasp movement based on EMOTIV Epoc+

    NASA Astrophysics Data System (ADS)

    Tobing, T. A. M. L.; Prawito, Wijaya, S. K.

    2017-07-01

    Combinations of BCT elements for right-hand grasp movement have been obtained, providing the average values of their classification accuracy. The aim of this study is to find a suitable combination for the best classification accuracy of right-hand grasp movement based on the EEG headset EMOTIV Epoc+. There are three movement classifications: grasping hand, relax, and opening hand. These classifications take advantage of the Event-Related Desynchronization (ERD) phenomenon, which makes it possible to distinguish the relaxation, imagery, and movement states from each other. The combined elements are the use of Independent Component Analysis (ICA), spectrum analysis by Fast Fourier Transform (FFT), maximum mu and beta power with their frequencies as features, and the Probabilistic Neural Network (PNN) and Radial Basis Function (RBF) classifiers. The average classification accuracies are approximately 83% for training and 57% for testing. To better understand the signal quality recorded by the EMOTIV Epoc+, the classification accuracy for left- or right-hand grasping movement EEG signals (provided by Physionet) is also given, i.e., approximately 85% for training and 70% for testing. A comparison of accuracy values from each combination, experimental condition, and the external EEG data is provided for the analysis of classification accuracy.
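
    The spectral feature step (maximum mu- and beta-band power and the frequencies at which they occur) can be sketched as follows; the sampling rate, band edges and epoch handling are illustrative assumptions.

      # Sketch of the band-power feature step: peak power and peak frequency in the
      # mu and beta bands, computed from an FFT of one EEG epoch. Assumes the epoch
      # is long enough that each band contains at least one FFT bin.
      import numpy as np

      def band_peak_features(epoch, fs=128, bands=((8, 13), (13, 30))):
          """epoch: 1-D EEG segment -> [peak power, peak frequency] per band."""
          freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
          power = np.abs(np.fft.rfft(epoch)) ** 2
          feats = []
          for lo, hi in bands:                      # mu band, beta band
              sel = (freqs >= lo) & (freqs < hi)
              i = np.argmax(power[sel])
              feats += [power[sel][i], freqs[sel][i]]
          return np.array(feats)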

  15. Scoliosis curve type classification using kernel machine from 3D trunk image

    NASA Astrophysics Data System (ADS)

    Adankon, Mathias M.; Dansereau, Jean; Parent, Stefan; Labelle, Hubert; Cheriet, Farida

    2012-03-01

    Adolescent idiopathic scoliosis (AIS) is a deformity of the spine manifested by asymmetry and deformities of the external surface of the trunk. Classification of scoliosis deformities according to curve type is used to plan the management of scoliosis patients. Currently, scoliosis curve type is determined based on X-ray exams. However, cumulative exposure to X-ray radiation significantly increases the risk for certain cancers. In this paper, we propose a robust system that can classify the scoliosis curve type from non-invasive acquisition of the 3D trunk surface of the patients. The 3D image of the trunk is divided into patches, and local geometric descriptors characterizing the surface of the back are computed from each patch to form the features. We reduce the dimensionality using Principal Component Analysis, retaining 53 components. In this work, a multi-class classifier is built with the least-squares support vector machine (LS-SVM), which is a kernel classifier. For this study, a new kernel was designed in order to achieve a robust classifier in comparison with the polynomial and Gaussian kernels. The proposed system was validated using data from 103 patients with different scoliosis curve types diagnosed and classified by an orthopedic surgeon from the X-ray images. The average rate of successful classification was 93.3%, with a better rate of prediction for the major thoracic and lumbar/thoracolumbar types.

  16. Estimating the exceedance probability of rain rate by logistic regression

    NASA Technical Reports Server (NTRS)

    Chiu, Long S.; Kedem, Benjamin

    1990-01-01

    Recent studies have shown that the fraction of an area with rain intensity above a fixed threshold is highly correlated with the area-averaged rain rate. To estimate the fractional rainy area, a logistic regression model, which estimates the conditional probability that rain rate over an area exceeds a fixed threshold given the values of related covariates, is developed. The problem of dependency in the data in the estimation procedure is bypassed by the method of partial likelihood. Analyses of simulated scanning multichannel microwave radiometer and observed electrically scanning microwave radiometer data during the Global Atlantic Tropical Experiment period show that the use of logistic regression in pixel classification is superior to multiple regression in predicting whether rain rate at each pixel exceeds a given threshold, even in the presence of noisy data. The potential of the logistic regression technique in satellite rain rate estimation is discussed.
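
    A minimal sketch of the exceedance-probability idea is given below, fitting a logistic regression to the indicator that rain rate exceeds a threshold; it ignores the dependency handling via partial likelihood discussed above, and the covariates and threshold are placeholders.

      # Minimal sketch: estimate P(rain rate > threshold | covariates) with logistic
      # regression; covariates could be radiometer channel values per pixel.
      from sklearn.linear_model import LogisticRegression

      def fit_exceedance_model(covariates, rain_rate, threshold=1.0):
          y = (rain_rate > threshold).astype(int)   # 1 if the pixel exceeds the threshold
          return LogisticRegression().fit(covariates, y)

      # P(rain rate > threshold) for new pixels:
      #   probs = fit_exceedance_model(X, rr).predict_proba(X_new)[:, 1]
      # The fractional rainy area over a region is then the average of these probabilities.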

  17. Authorship Discovery in Blogs Using Bayesian Classification with Corrective Scaling

    DTIC Science & Technology

    2008-06-01

    [Excerpt; list-of-figures entries: 2.3 W. Fucks' Diagram of n-Syllable Word Frequencies; 3.1 Confusion Matrix for All Test Documents of 500...] Wilhelm Fucks discriminated between authors using the average number of syllables per word and the average distance between equal-syllabled words [8]. Fucks, too, concluded that a study such as his reveals a "possibility of a quantitative classification."

  18. Position Between Trunk and Pelvis During Gait Depending on the Gross Motor Function Classification System.

    PubMed

    Sanz-Mengibar, Jose Manuel; Altschuck, Natalie; Sanchez-de-Muniain, Paloma; Bauer, Christian; Santonja-Medina, Fernando

    2017-04-01

    To understand whether there is a trunk postural control threshold in the sagittal plane for the transition between the Gross Motor Function Classification System (GMFCS) levels measured with 3-dimensional gait analysis. Spine-angle kinematics from 97 children with spastic bilateral cerebral palsy, computed according to the Plug-In Gait model (Vicon), were plotted relative to their GMFCS level. Only the average and minimum values of the lumbar spine segment correlated with GMFCS levels. Maximal values at loading response correlated independently with age at all functional levels. Average and minimum values were significant when analyzing age in combination with GMFCS level. There are specific postural control patterns in the average and minimum values for the position between trunk and pelvis in the sagittal plane during gait, for the transition among GMFCS I-III levels. Higher classifications of gross motor skills correlate with more extended spine angles.

  19. Classifications for Cesarean Section: A Systematic Review

    PubMed Central

    Torloni, Maria Regina; Betran, Ana Pilar; Souza, Joao Paulo; Widmer, Mariana; Allen, Tomas; Gulmezoglu, Metin; Merialdi, Mario

    2011-01-01

    Background Rising cesarean section (CS) rates are a major public health concern and cause worldwide debates. To propose and implement effective measures to reduce or increase CS rates where necessary requires an appropriate classification. Despite several existing CS classifications, there has not yet been a systematic review of these. This study aimed to 1) identify the main CS classifications used worldwide, 2) analyze advantages and deficiencies of each system. Methods and Findings Three electronic databases were searched for classifications published 1968–2008. Two reviewers independently assessed classifications using a form created based on items rated as important by international experts. Seven domains (ease, clarity, mutually exclusive categories, totally inclusive classification, prospective identification of categories, reproducibility, implementability) were assessed and graded. Classifications were tested in 12 hypothetical clinical case-scenarios. From a total of 2948 citations, 60 were selected for full-text evaluation and 27 classifications identified. Indications classifications present important limitations and their overall score ranged from 2–9 (maximum grade = 14). Degree of urgency classifications also had several drawbacks (overall scores 6–9). Woman-based classifications performed best (scores 5–14). Other types of classifications require data not routinely collected and may not be relevant in all settings (scores 3–8). Conclusions This review and critical appraisal of CS classifications is a methodologically sound contribution to establish the basis for the appropriate monitoring and rational use of CS. Results suggest that women-based classifications in general, and Robson's classification, in particular, would be in the best position to fulfill current international and local needs and that efforts to develop an internationally applicable CS classification would be most appropriately placed in building upon this classification. The use of a single CS classification will facilitate auditing, analyzing and comparing CS rates across different settings and help to create and implement effective strategies specifically targeted to optimize CS rates where necessary. PMID:21283801

  20. Anxiety-related visits to New Jersey emergency departments after September 11, 2001.

    PubMed

    Adinaro, David J; Allegra, John R; Cochrane, Dennis G; Cable, Gregory

    2008-04-01

    The purpose of this study was to examine the effect of September 11, 2001 on anxiety-related visits to selected Emergency Departments (EDs). We performed a retrospective analysis of consecutive patients seen by emergency physicians in 15 New Jersey EDs located within a 50-mile radius of the World Trade Center from July 11 through December 11 in each of 6 years, 1996-2001. We chose by consensus all ICD-9 (International Classification of Diseases, 9th revision) codes related to anxiety. We used graphical methods, Box-Jenkins modeling, and time series regression to determine the effect of September 11 to 14 on daily rates of anxiety-related visits. We found that the daily rate of anxiety-related visits just after September 11th was 93% higher (p < 0.0001) than the average for the remaining 150 days for 2001. This represents, on average, one additional daily visit for anxiety at each ED. We concluded that there was an increase in anxiety-related ED visits after September 11, 2001.
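
    A minimal sketch of the intervention-analysis idea, assuming hypothetical simulated daily counts: a Poisson regression with a step indicator for September 11-14 estimates the rate ratio relative to the other days. The study itself used Box-Jenkins models and time series regression; this simplified version ignores autocorrelation.

    ```python
    # Hedged sketch: estimating a post-event step change in daily anxiety-related
    # visit counts with a Poisson GLM. The simulated data and variable names are
    # hypothetical; autocorrelation handling is omitted for brevity.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    days = pd.date_range("2001-07-11", "2001-12-11", freq="D")
    post = ((days >= "2001-09-11") & (days <= "2001-09-14")).astype(int)
    counts = rng.poisson(lam=np.where(post == 1, 2.0, 1.0))  # ~1 extra visit/day post-event

    X = sm.add_constant(post)
    model = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
    # exp(coefficient) is the visit-rate ratio for the Sept 11-14 window vs. other days
    print(model.summary())
    print("rate ratio:", np.exp(model.params[1]))
    ```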

  1. Analysis of land cover change and its driving forces in a desert oasis landscape of southern Xinjiang, China

    NASA Astrophysics Data System (ADS)

    Amuti, T.; Luo, G.

    2014-07-01

    The combined effects of drought, warming and the changes in land cover have caused severe land degradation for several decades in the extremely arid desert oases of Southern Xinjiang, Northwest China. This study examined land cover changes during 1990-2008 to characterize and quantify the transformations in the typical oasis of Hotan. Land cover classifications of these images were performed based on the supervised classification scheme integrated with conventional vegetation and soil indexes. Change-detection techniques in remote sensing (RS) and a geographic information system (GIS) were applied to quantify temporal and spatial dynamics of land cover changes. The overall accuracies, Kappa coefficients, and average annual increase rate or decrease rate of land cover classes were calculated to assess classification results and changing rate of land cover. The analysis revealed that major trends of the land cover changes were the notable growth of the oasis and the reduction of the desert-oasis ecotone, which led to accelerated soil salinization and plant deterioration within the oasis. These changes were mainly attributed to the intensified human activities. The results indicated that the newly created agricultural land along the margins of the Hotan oasis could result in more potential areas of land degradation. If no effective measures are taken against the deterioration of the oasis environment, soil erosion caused by land cover change may proceed. The trend of desert moving further inward and the shrinking of the ecotone may lead to potential risks to the eco-environment of the Hotan oasis over the next decades.
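
    A small sketch of the accuracy assessment and change-rate calculations mentioned above, using hypothetical labels and area figures: overall accuracy and the Kappa coefficient from reference versus classified labels, and a compound average annual rate of change for one land cover class.

    ```python
    # Hedged sketch: accuracy assessment (overall accuracy, Cohen's kappa) and the
    # average annual rate of change of a land-cover class. Reference/classified
    # labels and the area figures below are hypothetical placeholders.
    import numpy as np
    from sklearn.metrics import accuracy_score, cohen_kappa_score

    reference = np.array(["oasis", "desert", "ecotone", "oasis", "desert", "oasis"])
    classified = np.array(["oasis", "desert", "oasis", "oasis", "desert", "ecotone"])

    print("overall accuracy:", accuracy_score(reference, classified))
    print("kappa:", cohen_kappa_score(reference, classified))

    # Average annual rate of change (%/yr) of a class between two dates,
    # using the compound (geometric) formulation.
    area_1990, area_2008, years = 1200.0, 1850.0, 2008 - 1990  # km^2, hypothetical
    annual_rate = ((area_2008 / area_1990) ** (1.0 / years) - 1.0) * 100.0
    print(f"average annual increase: {annual_rate:.2f}% per year")
    ```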

  2. Application of complex discrete wavelet transform in classification of Doppler signals using complex-valued artificial neural network.

    PubMed

    Ceylan, Murat; Ceylan, Rahime; Ozbay, Yüksel; Kara, Sadik

    2008-09-01

    In biomedical signal classification, compressing the waveform data is vital because of the huge amount of data involved. This paper presents two structures that use feature extraction algorithms to reduce the size of the feature set in the training and test data. The proposed structures, named wavelet transform-complex-valued artificial neural network (WT-CVANN) and complex wavelet transform-complex-valued artificial neural network (CWT-CVANN), use the real and complex discrete wavelet transforms for feature extraction. The aim of using the wavelet transform is to compress the data and reduce the training time of the network without decreasing the accuracy rate. The presented structures were applied to the classification of carotid arterial Doppler ultrasound signals acquired from the left carotid arteries of 38 patients and 40 healthy volunteers. The patient group included 22 males and 16 females with an established diagnosis of early-phase atherosclerosis through coronary or aortofemoropopliteal (lower extremity) angiographies (mean age, 59 years; range, 48-72 years). The healthy volunteers were young non-smokers with no apparent risk of atherosclerosis, including 28 males and 12 females (mean age, 23 years; range, 19-27 years). Sensitivity, specificity and average detection rate were calculated for comparison after the training and test phases of all structures were completed. These results demonstrated that, in accordance with the aim, the training times of the CVANN and the real-valued artificial neural network (RVANN) were reduced by the feature extraction algorithms without decreasing the accuracy rate.
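
    The following is a hedged sketch of the general idea, not the authors' CVANN implementation: a discrete wavelet decomposition (PyWavelets) compresses each signal to its approximation coefficients, and a standard real-valued MLP classifier stands in for the complex-valued network. Signals and labels are synthetic.

    ```python
    # Hedged sketch: discrete wavelet decomposition as a feature-compression step
    # before a neural network classifier. A real-valued MLP stands in for the
    # complex-valued network described in the paper; signals are synthetic.
    import numpy as np
    import pywt
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n, length = 78, 256
    signals = rng.normal(size=(n, length))
    labels = rng.integers(0, 2, size=n)            # 0 = healthy, 1 = patient (placeholders)
    signals[labels == 1] += np.sin(np.linspace(0, 8 * np.pi, length))  # make classes separable

    def wavelet_features(sig, wavelet="db4", level=4):
        # Keep only the coarse approximation coefficients to compress the signal.
        coeffs = pywt.wavedec(sig, wavelet, level=level)
        return coeffs[0]

    X = np.array([wavelet_features(s) for s in signals])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
    print("test accuracy:", clf.score(X_te, y_te))
    ```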

  3. 34 CFR 222.68 - What tax rates does the Secretary use if two or more different classifications of real property...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... different classifications of real property are taxed at different rates? 222.68 Section 222.68 Education... different classifications of real property are taxed at different rates? If the real property of an LEA and its generally comparable LEAs consists of two or more classifications of real property taxed at...

  4. Measuring the relative extent of pulmonary infiltrates by hierarchical classification of patient-specific image features

    NASA Astrophysics Data System (ADS)

    Tsevas, S.; Iakovidis, D. K.

    2011-11-01

    Pulmonary infiltrates are common radiological findings indicating the filling of airspaces with fluid, inflammatory exudates, or cells. They are most common in cases of pneumonia, acute respiratory syndrome, atelectasis, pulmonary oedema and haemorrhage, whereas their extent is usually correlated with the extent or the severity of the underlying disease. In this paper we propose a novel pattern recognition framework for the measurement of the extent of pulmonary infiltrates in routine chest radiographs. The proposed framework follows a hierarchical approach to the assessment of image content. It includes the following: (a) sampling of the lung fields; (b) extraction of patient-specific grey-level histogram signatures from each sample; (c) classification of the extracted signatures into classes representing normal lung parenchyma and pulmonary infiltrates; (d) the samples for which the probability of belonging to one of the two classes does not reach an acceptable level are rejected and classified according to their textural content; (e) merging of the classification results of the two classification stages. The proposed framework has been evaluated on real radiographic images with pulmonary infiltrates caused by bacterial infections. The results show that accurate measurements of the infiltration areas can be obtained with respect to each lung field area. The average measurement error rate on the considered dataset reached 9.7% ± 1.0%.
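
    A minimal sketch of the two-stage scheme with rejection, under assumed synthetic features: samples whose first-stage class probability falls below a hypothetical 0.8 threshold are passed to a second classifier trained on different (textural) features.

    ```python
    # Hedged sketch of the two-stage idea: samples whose first-stage class
    # probability falls below a confidence threshold are rejected and re-classified
    # by a second classifier trained on different (e.g. textural) features.
    # Features, labels, and the 0.8 threshold are hypothetical placeholders.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(2)
    n = 400
    hist_features = rng.normal(size=(n, 16))      # stage 1: grey-level histogram signatures
    texture_features = rng.normal(size=(n, 8))    # stage 2: textural descriptors
    y = rng.integers(0, 2, size=n)                # 0 = normal parenchyma, 1 = infiltrate
    hist_features[y == 1] += 0.8                  # crude separation for the sketch
    texture_features[y == 1] += 0.8

    stage1 = LogisticRegression(max_iter=1000).fit(hist_features, y)
    stage2 = RandomForestClassifier(random_state=0).fit(texture_features, y)

    proba = stage1.predict_proba(hist_features)
    confident = proba.max(axis=1) >= 0.8          # accept only confident stage-1 decisions

    final = np.empty(n, dtype=int)
    final[confident] = stage1.predict(hist_features[confident])
    final[~confident] = stage2.predict(texture_features[~confident])  # rejected samples
    print("samples rejected to stage 2:", int((~confident).sum()))
    ```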

  5. Audit and feedback using the Robson classification to reduce caesarean section rates: a systematic review.

    PubMed

    Boatin, A A; Cullinane, F; Torloni, M R; Betrán, A P

    2018-01-01

    In most regions worldwide, caesarean section (CS) rates are increasing. In these settings, new strategies are needed to reduce CS rates. To identify, critically appraise and synthesise studies using the Robson classification as a system to categorise and analyse data in clinical audit cycles to reduce CS rates. Medline, Embase, CINAHL and LILACS were searched from 2001 to 2016. Studies reporting use of the Robson classification to categorise and analyse data in clinical audit cycles to reduce CS rates. Data on study design, interventions used, CS rates, and perinatal outcomes were extracted. Of 385 citations, 30 were assessed for full text review and six studies, conducted in Brazil, Chile, Italy and Sweden, were included. All studies measured initial CS rates, provided feedback and monitored performance using the Robson classification. In two studies, the audit cycle consisted exclusively of feedback using the Robson classification; the other four used audit and feedback as part of a multifaceted intervention. Baseline CS rates ranged from 20 to 36.8%; after the intervention, CS rates ranged from 3.1 to 21.2%. No studies were randomised or controlled and all had a high risk of bias. We identified six studies using the Robson classification within clinical audit cycles to reduce CS rates. All six report reductions in CS rates; however, results should be interpreted with caution because of limited methodological quality. Future trials are needed to evaluate the role of the Robson classification within audit cycles aimed at reducing CS rates. Use of the Robson classification in clinical audit cycles to reduce caesarean rates. © 2017 The Authors. BJOG An International Journal of Obstetrics and Gynaecology published by John Wiley & Sons Ltd on behalf of Royal College of Obstetricians and Gynaecologists.

  6. Verification of the Accountability Method as a Means to Classify Radioactive Wastes Processed Using THOR Fluidized Bed Steam Reforming at the Studsvik Processing Facility in Erwin, Tennessee, USA - 13087

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olander, Jonathan; Myers, Corey

    2013-07-01

    Studsvik's Processing Facility Erwin (SPFE) has been treating Low-Level Radioactive Waste using its patented THOR process for over 13 years. Studsvik has been mixing and processing wastes of the same waste classification but different chemical and isotopic characteristics for the full extent of this period as a general matter of operations. Studsvik utilizes the accountability method to track the movement of radionuclides from acceptance of waste, through processing, and finally in the classification of waste for disposal. Recently the NRC has proposed to revise the 1995 Branch Technical Position on Concentration Averaging and Encapsulation (1995 BTP on CA) with additional clarification (draft BTP on CA). The draft BTP on CA has paved the way for large scale blending of higher activity and lower activity waste to produce a single waste for the purpose of classification. With the onset of blending in the waste treatment industry, there is concern from the public and state regulators as to the robustness of the accountability method and the ability of processors to prevent the inclusion of hot spots in waste. To address these concerns and verify the accountability method as applied by the SPFE, as well as the SPFE's ability to control waste package classification, testing of actual waste packages was performed. Testing consisted of a comprehensive dose rate survey of a container of processed waste. Separately, the waste package was modeled chemically and radiologically. Comparing the observed and theoretical data demonstrated that actual dose rates were lower than, but consistent with, modeled dose rates. Moreover, the distribution of radioactivity confirms that the SPFE can produce a radiologically homogeneous waste form. The results of the study demonstrate: 1) the accountability method as applied by the SPFE is valid and produces expected results; 2) the SPFE can produce a radiologically homogeneous waste; and 3) the SPFE can effectively control the waste package classification. (authors)

  7. Towards automation of palynology 1: analysis of pollen shape and ornamentation using simple geometric measures, derived from scanning electron microscope images

    NASA Astrophysics Data System (ADS)

    Treloar, W. J.; Taylor, G. E.; Flenley, J. R.

    2004-12-01

    This is the first of a series of papers on the theme of automated pollen analysis. The automation of pollen analysis could result in numerous advantages for the reconstruction of past environments, with larger data sets made practical, objectivity and fine resolution sampling. There are also applications in apiculture and medicine. Previous work on the classification of pollen using texture measures has been successful with small numbers of pollen taxa. However, as the number of pollen taxa to be identified increases, more features may be required to achieve a successful classification. This paper describes the use of simple geometric measures to augment the texture measures. The feasibility of this new approach is tested using scanning electron microscope (SEM) images of 12 taxa of fresh pollen taken from reference material collected on Henderson Island, Polynesia. Pollen images were captured directly from a SEM connected to a PC. A threshold grey-level was set and binary images were then generated. Pollen edges were then located and the boundaries were traced using a chain coding system. A number of simple geometric variables were calculated directly from the chain code of the pollen and a variable selection procedure was used to choose the optimal subset to be used for classification. The efficiency of these variables was tested using a leave-one-out classification procedure. The system successfully split the original 12 taxa sample into five sub-samples containing no more than six pollen taxa each. The further subdivision of echinate pollen types was then attempted with a subset of four pollen taxa. A set of difference codes was constructed for a range of displacements along the chain code. From these difference codes probability variables were calculated. A variable selection procedure was again used to choose the optimal subset of probabilities that may be used for classification. The efficiency of these variables was again tested using a leave-one-out classification procedure. The proportion of correctly classified pollen ranged from 81% to 100% depending on the subset of variables used. The best set of variables had an overall classification rate averaging at about 95%. This is comparable with the classification rates from the earlier texture analysis work for other types of pollen. Copyright
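
    A small illustrative sketch, not the authors' system: a Freeman 8-direction chain code is derived from an ordered boundary, a direction histogram serves as a simple geometric feature, and a leave-one-out evaluation is run with a nearest-neighbour classifier on toy outlines.

    ```python
    # Hedged sketch: derive a Freeman 8-direction chain code from an ordered boundary
    # and evaluate a simple chain-code-derived feature with leave-one-out
    # cross-validation. Boundaries, features, and labels are synthetic placeholders.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    # Map (dx, dy) steps between successive boundary pixels to Freeman directions 0-7.
    FREEMAN = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
               (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

    def chain_code(boundary):
        steps = np.diff(boundary, axis=0)
        return [FREEMAN[(int(dx), int(dy))] for dx, dy in steps]

    def code_features(code):
        # Direction histogram as a simple geometric descriptor of the outline.
        hist = np.bincount(code, minlength=8).astype(float)
        return hist / hist.sum()

    # Toy outlines (a square and a rectangle) standing in for traced pollen boundaries.
    square = np.array([(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1), (0, 0)])
    rect = np.array([(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (2, 1), (1, 1), (0, 1), (0, 0)])
    boundaries = [square, square + 3, rect, rect + 5]
    y = np.array([0, 0, 1, 1])  # placeholder taxon labels

    X = np.array([code_features(chain_code(b)) for b in boundaries])
    scores = cross_val_score(KNeighborsClassifier(n_neighbors=1), X, y, cv=LeaveOneOut())
    print("leave-one-out accuracy:", scores.mean())
    ```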

  8. Return to play after thigh muscle injury in elite football players: implementation and validation of the Munich muscle injury classification

    PubMed Central

    Ekstrand, Jan; Askling, Carl; Magnusson, Henrik; Mithoefer, Kai

    2013-01-01

    Background Owing to the complexity and heterogeneity of muscle injuries, a generally accepted classification system is still lacking. Aims To prospectively implement and validate a novel muscle injury classification and to evaluate its predictive value for return to professional football. Methods The recently described Munich muscle injury classification was prospectively evaluated in 31 European professional male football teams during the 2011/2012 season. Thigh muscle injury types were recorded by team medical staff and correlated to individual player exposure and resultant time-loss. Results In total, 393 thigh muscle injuries occurred. The muscle classification system was well received with a 100% response rate. Two-thirds of thigh muscle injuries were classified as structural and were associated with longer lay-off times compared to functional muscle disorders (p<0.001). Significant differences were observed between structural injury subgroups (minor partial, moderate partial and complete injuries) with increasing lay-off time associated with more severe structural injury. Median lay-off time of functional disorders was 5–8 days without significant differences between subgroups. There was no significant difference in the absence time between anterior and posterior thigh injuries. Conclusions The Munich muscle classification demonstrates a positive prognostic validity for return to play after thigh muscle injury in professional male football players. Structural injuries are associated with longer average lay-off times than functional muscle disorders. Subclassification of structural injuries correlates with return to play, while subgrouping of functional disorders shows less prognostic relevance. Functional disorders are often underestimated clinically and require further systematic study. PMID:23645834

  9. Objective automated quantification of fluorescence signal in histological sections of rat lens.

    PubMed

    Talebizadeh, Nooshin; Hagström, Nanna Zhou; Yu, Zhaohua; Kronschläger, Martin; Söderberg, Per; Wählby, Carolina

    2017-08-01

    Visual quantification and classification of fluorescent signals is the gold standard in microscopy. The purpose of this study was to develop an automated method to delineate cells and to quantify expression of fluorescent signal of biomarkers in each nucleus and cytoplasm of lens epithelial cells in a histological section. A region of interest representing the lens epithelium was manually demarcated in each input image. Thereafter, individual cell nuclei within the region of interest were automatically delineated based on watershed segmentation and thresholding with an algorithm developed in Matlab™. Fluorescence signal was quantified within nuclei, cytoplasms and juxtaposed backgrounds. The classification of cells as labelled or not labelled was based on comparison of the fluorescence signal within cells with local background. The classification rule was thereafter optimized as compared with visual classification of a limited dataset. The performance of the automated classification was evaluated by asking 11 independent blinded observers to classify all cells (n = 395) in one lens image. Time consumed by the automatic algorithm and visual classification of cells was recorded. On average, 77% of the cells were correctly classified as compared with the majority vote of the visual observers. The average agreement among visual observers was 83%. However, variation among visual observers was high, and agreement between two visual observers was as low as 71% in the worst case. Automated classification was on average 10 times faster than visual scoring. The presented method enables objective and fast detection of lens epithelial cells and quantification of expression of fluorescent signal with an accuracy comparable with the variability among visual observers. © 2017 International Society for Advancement of Cytometry.
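
    A hedged sketch of the general pipeline in Python (the study used a Matlab implementation): Otsu thresholding, a distance-transform watershed to delineate nuclei, and a hypothetical labelled/not-labelled rule comparing each nucleus's mean signal with the local background.

    ```python
    # Hedged sketch of the general approach, not the authors' code: threshold the
    # nuclear channel, split touching nuclei with a distance-transform watershed,
    # then call a cell "labelled" if its mean signal exceeds the local background
    # by a hypothetical factor. The synthetic image and the 1.5x factor are placeholders.
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.filters import threshold_otsu
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed
    from skimage.measure import regionprops

    rng = np.random.default_rng(3)
    image = rng.normal(10, 2, size=(128, 128))
    image[30:50, 30:50] += 40     # two bright synthetic "nuclei"
    image[70:90, 60:80] += 25

    binary = image > threshold_otsu(image)
    distance = ndi.distance_transform_edt(binary)
    coords = peak_local_max(distance, labels=binary, min_distance=5)
    markers = np.zeros_like(image, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    labels = watershed(-distance, markers, mask=binary)

    background = image[~binary].mean()
    for region in regionprops(labels, intensity_image=image):
        labelled = region.mean_intensity > 1.5 * background   # hypothetical decision rule
        print(f"nucleus {region.label}: mean={region.mean_intensity:.1f}, labelled={labelled}")
    ```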

  10. Methods for Tier 1 Modeling within the Training Range Environmental Evaluation and Characterization System

    DTIC Science & Technology

    2009-08-01

    The report tabulates soil properties, including USLE K-factors by soil-texture classification and organic matter content, dry bulk density (g/cm3), and field capacity (%). The Universal Soil Loss Equation (USLE) can be used to estimate annual average sheet and rill erosion, A (tons/acre-yr), from the equation A = R × K × L × S, where R is the rainfall-runoff erosivity factor, K the soil erodibility factor, and L and S the slope length and steepness factors. Soil erodibility factors, K, are tabulated for various soil classifications and percent organic matter content (USLE Fact Sheet 2008).
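
    A short worked example of the equation, with hypothetical factor values; the snippet above names only the R, K, L and S terms, while the standard USLE also includes cover-management (C) and support-practice (P) factors.

    ```python
    # Hedged worked example of the Universal Soil Loss Equation, A = R * K * (L*S) * C * P.
    # All values below are hypothetical placeholders, in US customary units
    # (A in tons/acre-yr).
    R = 150.0   # rainfall-runoff erosivity factor
    K = 0.32    # soil erodibility factor (depends on texture class and organic matter)
    LS = 1.2    # slope length-steepness factor
    C = 0.20    # cover-management factor
    P = 1.0     # support-practice factor

    A = R * K * LS * C * P
    print(f"estimated annual average sheet and rill erosion: {A:.1f} tons/acre-yr")
    ```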

  11. 48 CFR 47.305-9 - Commodity description and freight classification.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... freight classification. 47.305-9 Section 47.305-9 Federal Acquisition Regulations System FEDERAL... Commodity description and freight classification. (a) Generally, the freight rate for supplies is based on the rating applicable to the freight classification description published in the National Motor...

  12. Linear and Order Statistics Combiners for Pattern Classification

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep; Lau, Sonie (Technical Monitor)

    2001-01-01

    Several researchers have experimentally shown that substantial improvements can be obtained in difficult pattern recognition problems by combining or integrating the outputs of multiple classifiers. This chapter provides an analytical framework to quantify the improvements in classification results due to combining. The results apply to both linear combiners and order statistics combiners. We first show that, to a first order approximation, the error rate obtained over and above the Bayes error rate is directly proportional to the variance of the actual decision boundaries around the Bayes optimum boundary. Combining classifiers in output space reduces this variance, and hence reduces the 'added' error. If N unbiased classifiers are combined by simple averaging, the added error rate can be reduced by a factor of N if the individual errors in approximating the decision boundaries are uncorrelated. Expressions are then derived for linear combiners which are biased or correlated, and the effect of output correlations on ensemble performance is quantified. For order statistics based non-linear combiners, we derive expressions that indicate how much the median, the maximum and in general the i-th order statistic can improve classifier performance. The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions, and combining in output space. Experimental results on several public domain data sets are provided to illustrate the benefits of combining and to support the analytical results.
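
    A brief Monte Carlo sketch of the stated 1/N result: averaging the boundary estimates of N unbiased classifiers with uncorrelated errors reduces the boundary variance, and hence the added error, by roughly a factor of N. The noise level and N are arbitrary choices.

    ```python
    # Hedged Monte Carlo sketch of the stated result: averaging the outputs of N
    # unbiased classifiers with uncorrelated errors shrinks the variance of the
    # estimated decision boundary by roughly 1/N. One-dimensional toy setup.
    import numpy as np

    rng = np.random.default_rng(4)
    true_boundary = 0.0
    sigma = 0.5          # std of each individual classifier's boundary estimate
    N = 10               # classifiers in the ensemble
    trials = 20000

    single = true_boundary + sigma * rng.normal(size=trials)
    ensemble = true_boundary + sigma * rng.normal(size=(trials, N)).mean(axis=1)

    print("single-classifier boundary variance :", single.var())
    print("ensemble (N=10) boundary variance   :", ensemble.var())
    print("variance ratio (expect ~N = 10)     :", single.var() / ensemble.var())
    ```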

  13. Differential declines in syphilis-related mortality in the United States, 2000-2014.

    PubMed

    Barragan, Noel C; Moschetti, Kristin; Smith, Lisa V; Sorvillo, Frank; Kuo, Tony

    2017-04-01

    After reaching an all time low in 2000, the rate of syphilis in the United States has been steadily increasing. Parallel benchmarking of the disease's mortality burden has not been undertaken. Using ICD-10 classification, all syphilis-related deaths in the national Multiple Cause of Death dataset were examined for the period 2000-2014. Descriptive statistics and age-adjusted mortality rates were generated. Poisson regression was performed to analyze trends over time. A matched case-control analysis was conducted to assess the associations between syphilis-related deaths and comorbid conditions listed in the death records. A total of 1,829 deaths were attributed to syphilis; 32% (n = 593) identified syphilis as the underlying cause of death. Most decedents were men (60%) and either black (48%) or white (39%). Decedents aged ≥85 years had the highest average mortality rate (0.47 per 100,000 population; 95% confidence interval [CI], 0.42-0.52). For the sampled period, the average annual decline in mortality was -2.90% (95% CI, -3.93% to -1.87%). However, the average annual percent change varied across subgroups of interest. Declines in U.S. syphilis mortality suggest early detection and improved treatment access likely helped attenuate disease progression; however, increases in the disease rate since 2000 may be offsetting the impact of these advancements. Copyright © 2017 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.

  14. Global warming and extinctions of endemic species from biodiversity hotspots.

    PubMed

    Malcolm, Jay R; Liu, Canran; Neilson, Ronald P; Hansen, Lara; Hannah, Lee

    2006-04-01

    Global warming is a key threat to biodiversity, but few researchers have assessed the magnitude of this threat at the global scale. We used major vegetation types (biomes) as proxies for natural habitats and, based on projected future biome distributions under doubled-CO2 climates, calculated changes in habitat areas and associated extinctions of endemic plant and vertebrate species in biodiversity hotspots. Because of numerous uncertainties in this approach, we undertook a sensitivity analysis of multiple factors that included (1) two global vegetation models, (2) different numbers of biome classes in our biome classification schemes, (3) different assumptions about whether species distributions were biome specific or not, and (4) different migration capabilities. Extinctions were calculated using both species-area and endemic-area relationships. In addition, average required migration rates were calculated for each hotspot assuming a doubled-CO2 climate in 100 years. Projected percent extinctions ranged from <1 to 43% of the endemic biota (average 11.6%), with biome specificity having the greatest influence on the estimates, followed by the global vegetation model and then by migration and biome classification assumptions. Bootstrap comparisons indicated that effects on hotspots as a group were not significantly different from effects on random same-biome collections of grid cells with respect to biome change or migration rates; in some scenarios, however, hotspots exhibited relatively high biome change and low migration rates. Especially vulnerable hotspots were the Cape Floristic Region, Caribbean, Indo-Burma, Mediterranean Basin, Southwest Australia, and Tropical Andes, where plant extinctions per hotspot sometimes exceeded 2000 species. Under the assumption that projected habitat changes were attained in 100 years, estimated global-warming-induced rates of species extinctions in tropical hotspots in some cases exceeded those due to deforestation, supporting suggestions that global warming is one of the most serious threats to the planet's biodiversity.
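
    A minimal sketch of the species-area calculation, assuming a hypothetical habitat-area change and the commonly used exponent z = 0.25: the fraction of species committed to extinction is approximated by 1 - (A_new / A_orig)^z.

    ```python
    # Hedged sketch of the species-area approach to estimating extinctions: with
    # S = c * A**z, the fraction of species eventually lost when habitat area shrinks
    # from A_orig to A_new is approximately 1 - (A_new / A_orig)**z. The exponent z
    # and the area change below are hypothetical placeholders.
    def fraction_extinct(area_original, area_remaining, z=0.25):
        return 1.0 - (area_remaining / area_original) ** z

    endemics = 2000                      # endemic species in a hypothetical hotspot
    loss = fraction_extinct(area_original=100.0, area_remaining=60.0)
    print(f"projected extinction fraction: {loss:.1%}")
    print(f"projected endemic extinctions: {loss * endemics:.0f}")
    ```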

  15. Invasive Cancer Incidence, 2004–2013, and Deaths, 2006–2015, in Nonmetropolitan and Metropolitan Counties — United States

    PubMed Central

    Anderson, Robert N.; Thomas, Cheryll C.; Massetti, Greta M.; Peaker, Brandy; Richardson, Lisa C.

    2017-01-01

    Problem/Condition Previous reports have shown that persons living in nonmetropolitan (rural or urban) areas in the United States have higher death rates from all cancers combined than persons living in metropolitan areas. Disparities might vary by cancer type and between occurrence and death from the disease. This report provides a comprehensive assessment of cancer incidence and deaths by cancer type in nonmetropolitan and metropolitan counties. Reporting Period 2004–2015. Description of System Cancer incidence data from CDC’s National Program of Cancer Registries and the National Cancer Institute’s Surveillance, Epidemiology, and End Results program were used to calculate average annual age-adjusted incidence rates for 2009–2013 and trends in annual age-adjusted incidence rates for 2004–2013. Cancer mortality data from the National Vital Statistics System were used to calculate average annual age-adjusted death rates for 2011–2015 and trends in annual age-adjusted death rates for 2006–2015. For 5-year average annual rates, counties were classified into four categories (nonmetropolitan rural, nonmetropolitan urban, metropolitan with population <1 million, and metropolitan with population ≥1 million). For the trend analysis, which used annual rates, these categories were combined into two categories (nonmetropolitan and metropolitan). Rates by county classification were examined by sex, age, race/ethnicity, U.S. census region, and cancer site. Trends in rates were examined by county classification and cancer site. Results During the most recent 5-year period for which data were available, nonmetropolitan rural areas had lower average annual age-adjusted cancer incidence rates for all anatomic cancer sites combined but higher death rates than metropolitan areas. During 2006–2015, the annual age-adjusted death rates for all cancer sites combined decreased at a slower pace in nonmetropolitan areas (-1.0% per year) than in metropolitan areas (-1.6% per year), increasing the differences in these rates. In contrast, annual age-adjusted incidence rates for all cancer sites combined decreased approximately 1% per year during 2004–2013 both in nonmetropolitan and metropolitan counties. Interpretation This report provides the first comprehensive description of cancer incidence and mortality in nonmetropolitan and metropolitan counties in the United States. Nonmetropolitan rural counties had higher incidence of and deaths from several cancers related to tobacco use and cancers that can be prevented by screening. Differences between nonmetropolitan and metropolitan counties in cancer incidence might reflect differences in risk factors such as cigarette smoking, obesity, and physical inactivity, whereas differences in cancer death rates might reflect disparities in access to health care and timely diagnosis and treatment. Public Health Action Many cancer cases and deaths could be prevented, and public health programs can use evidence-based strategies from the U.S. Preventive Services Task Force and Advisory Committee for Immunization Practices (ACIP) to support cancer prevention and control. The U.S. 
Preventive Services Task Force recommends population-based screening for colorectal, female breast, and cervical cancers among adults at average risk for these cancers and for lung cancer among adults at high risk; screening adults for tobacco use and excessive alcohol use, offering counseling and interventions as needed; and using low-dose aspirin to prevent colorectal cancer among adults considered to be at high risk for cardiovascular disease based on specific criteria. ACIP recommends vaccination against cancer-related infectious diseases including human papillomavirus and hepatitis B virus. The Guide to Community Preventive Services describes program and policy interventions proven to increase cancer screening and vaccination rates and to prevent tobacco use, excessive alcohol use, obesity, and physical inactivity. PMID:28683054

  16. Trends in Kindergarten Rates of Vaccine Exemption and State-Level Policy, 2011–2016

    PubMed Central

    Porter, Rachael M; Allen, Kristen; Salmon, Daniel A; Bednarczyk, Robert A

    2018-01-01

    Background Kindergarten-entry vaccination requirements have played an important role in controlling vaccine-preventable diseases in the United States. Forty-eight states and the District of Columbia offer nonmedical exemptions to vaccines, ranging in stringency. Methods We analyzed state-level exemption data from the 2011–12 through 2015–16 school years. States were categorized by exemption ease and type of exemption allowed. We calculated nonmedical exemption rates for each year in the sample and stratified by exemption ease, type, and 2 trend categories: 2011–12 through 2012–13 and 2013–14 through 2015–16 school years. Using generalized estimating equations, we created regression models estimating (1) the average annual change in nonmedical exemption rates and (2) relative differences in rates by state classification. Results The nonmedical exemption rate was higher during the 2013–2014 through 2015–2016 period (2.25%) compared to 2011–2012 through 2012–2013 (1.75%); more importantly, the average annual change in the latter period plateaued. The nonmedical exemption rate in states allowing philosophical and religious exemptions was 2.41 times as high as in states allowing only religious exemptions (incidence rate ratio = 2.41; 95% confidence interval, 1.71–3.41). Conclusions There was an increase in nonmedical exemption rates through the 2012–2013 school year; however, rates stabilized through the 2015–2016 school year, showing an important shift in trend.

  17. Tanner-Whitehouse Skeletal Ages in Male Youth Soccer Players: TW2 or TW3?

    PubMed

    Malina, Robert M; Coelho-E-Silva, Manuel J; Figueiredo, António J; Philippaerts, Renaat M; Hirose, Norikazu; Peña Reyes, Maria Eugenia; Gilli, Giulio; Benso, Andrea; Vaeyens, Roel; Deprez, Dieter; Guglielmo, Luiz F; Buranarugsa, Rojapon

    2018-04-01

    The Tanner-Whitehouse radius-ulna-short bone protocol (TW2 RUS) for the assessment of skeletal age (SA) is widely used to estimate the biological (skeletal) maturity status of children and adolescents. The scale for converting TW RUS ratings to an SA has been revised (TW3 RUS) and has implications for studies of youth athletes in age-group sports. The aim of this study was to compare TW2 and TW3 RUS SAs in an international sample of male youth soccer players and to compare distributions of players by maturity status defined by each SA protocol. SA assessments with the TW RUS method were collated for 1831 male soccer players aged 11-17 years from eight countries. RUS scores were converted to TW2 and TW3 SAs using the appropriate tables. SAs were related to chronological age (CA) in individual athletes and compared by CA groups. The difference of SA minus CA with TW2 SA and with TW3 SA was used to classify players as late, average, or early maturing with each method. Concordance of maturity classifications was evaluated with Cohen's Kappa coefficients. For the same RUS score, TW3 SAs were systematically and substantially reduced compared with TW2 SAs; mean differences by CA group ranged from - 0.97 to - 1.16 years. Kappa coefficients indicated at best fair concordance of TW2 and TW3 maturity classifications. Across the age range, 42% of players classified as average with TW2 SA were classified as late with TW3 SA, and 64% of players classified as early with TW2 SA were classified as average with TW3 SA. TW3 SAs were systematically lower than corresponding TW2 SAs in male youth soccer players. The differences between scales have major implications for the classification of players by maturity status, which is central to some talent development programs.
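
    A small sketch of the concordance analysis, with hypothetical label sequences: Cohen's kappa between two categorical maturity classifications (late / average / early).

    ```python
    # Hedged sketch: concordance of two categorical maturity classifications
    # (late / average / early) quantified with Cohen's kappa. The label sequences
    # below are hypothetical placeholders, not study data.
    from sklearn.metrics import cohen_kappa_score

    tw2 = ["average", "average", "early", "late", "average", "early", "average", "late"]
    tw3 = ["late",    "average", "average", "late", "late",  "average", "average", "late"]

    kappa = cohen_kappa_score(tw2, tw3)
    print(f"Cohen's kappa (TW2 vs TW3 maturity status): {kappa:.2f}")
    ```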

  18. Temporal comorbidity of mental disorder and ulcerative colitis.

    PubMed

    Cawthorpe, David; Davidson, Marta

    2015-01-01

    Ulcerative colitis is an inflammatory bowel disease that rarely exists in isolation in affected patients. We examined the association of ulcerative colitis and International Classification of Diseases mental disorder, as well as the temporal comorbidity of three broad International Classification of Diseases groupings of mental disorders in patients with ulcerative colitis, to determine if mental disorder is more likely to occur before or after ulcerative colitis. We used physician diagnoses from the regional health zone of Calgary, Alberta, for patient visits for treatment of any presenting concern during fiscal years 1994 to 2009 (763,449 patients) to identify 5113 patients, ranging in age from under 1 year to 92 years (2120 males, average age = 47 years; 2993 females, average age = 48 years), with a diagnosis of ulcerative colitis. The 16-year cumulative prevalence of ulcerative colitis was 0.58%, or 58 cases per 10,000 persons (95% confidence interval = 56-60 per 10,000). Although the cumulative prevalence of mental disorder in the overall sample was 5390 per 10,000 (53.9%), we found that 4192 patients with ulcerative colitis (82%) also had a diagnosis of a mental disorder. When analyzed by annual rate, patients with a mental disorder had a significantly higher annual prevalence of ulcerative colitis. The mental disorder grouping neuroses/depressive disorders was most likely to arise before ulcerative colitis (odds ratio = 1.87 for males; 2.24 for females). A temporal association was observed between specific groups of International Classification of Diseases mental disorder and ulcerative colitis, indicating a possible etiologic relationship between the disorders or their treatments, or both.

  19. A consensus view of fold space: Combining SCOP, CATH, and the Dali Domain Dictionary

    PubMed Central

    Day, Ryan; Beck, David A.C.; Armen, Roger S.; Daggett, Valerie

    2003-01-01

    We have determined consensus protein-fold classifications on the basis of three classification methods, SCOP, CATH, and Dali. These classifications make use of different methods of defining and categorizing protein folds that lead to different views of protein-fold space. Pairwise comparisons of domains on the basis of their fold classifications show that much of the disagreement between the classification systems is due to differing domain definitions rather than assigning the same domain to different folds. However, there are significant differences in the fold assignments between the three systems. These remaining differences can be explained primarily in terms of the breadth of the fold classifications. Many structures may be defined as having one fold in one system, whereas far fewer are defined as having the analogous fold in another system. By comparing these folds for a nonredundant set of proteins, the consensus method breaks up broad fold classifications and combines restrictive fold classifications into metafolds, creating, in effect, an averaged view of fold space. This averaged view requires that the structural similarities between proteins having the same metafold be recognized by multiple classification systems. Thus, the consensus map is useful for researchers looking for fold similarities that are relatively independent of the method used to compare proteins. The 30 most populated metafolds, representing the folds of about half of a nonredundant subset of the PDB, are presented here. The full list of metafolds is presented on the Web. PMID:14500873

  20. A consensus view of fold space: combining SCOP, CATH, and the Dali Domain Dictionary.

    PubMed

    Day, Ryan; Beck, David A C; Armen, Roger S; Daggett, Valerie

    2003-10-01

    We have determined consensus protein-fold classifications on the basis of three classification methods, SCOP, CATH, and Dali. These classifications make use of different methods of defining and categorizing protein folds that lead to different views of protein-fold space. Pairwise comparisons of domains on the basis of their fold classifications show that much of the disagreement between the classification systems is due to differing domain definitions rather than assigning the same domain to different folds. However, there are significant differences in the fold assignments between the three systems. These remaining differences can be explained primarily in terms of the breadth of the fold classifications. Many structures may be defined as having one fold in one system, whereas far fewer are defined as having the analogous fold in another system. By comparing these folds for a nonredundant set of proteins, the consensus method breaks up broad fold classifications and combines restrictive fold classifications into metafolds, creating, in effect, an averaged view of fold space. This averaged view requires that the structural similarities between proteins having the same metafold be recognized by multiple classification systems. Thus, the consensus map is useful for researchers looking for fold similarities that are relatively independent of the method used to compare proteins. The 30 most populated metafolds, representing the folds of about half of a nonredundant subset of the PDB, are presented here. The full list of metafolds is presented on the Web.

  1. 75 FR 68608 - Information Collection; Request for Authorization of Additional Classification and Rate, Standard...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-08

    ... Authorization of Additional Classification and Rate, Standard Form 1444 AGENCY: Department of Defense (DOD... of Additional Classification and Rate, Standard Form 1444. DATES: Comments may be submitted on or.../or business confidential information provided. FOR FURTHER INFORMATION CONTACT: Mr. Ernest Woodson...

  2. Hand posture classification using electrocorticography signals in the gamma band over human sensorimotor brain areas

    NASA Astrophysics Data System (ADS)

    Chestek, Cynthia A.; Gilja, Vikash; Blabe, Christine H.; Foster, Brett L.; Shenoy, Krishna V.; Parvizi, Josef; Henderson, Jaimie M.

    2013-04-01

    Objective. Brain-machine interface systems translate recorded neural signals into command signals for assistive technology. In individuals with upper limb amputation or cervical spinal cord injury, the restoration of a useful hand grasp could significantly improve daily function. We sought to determine if electrocorticographic (ECoG) signals contain sufficient information to select among multiple hand postures for a prosthetic hand, orthotic, or functional electrical stimulation system. Approach. We recorded ECoG signals from subdural macro- and microelectrodes implanted in motor areas of three participants who were undergoing inpatient monitoring for diagnosis and treatment of intractable epilepsy. Participants performed five distinct isometric hand postures, as well as four distinct finger movements. Several control experiments were attempted in order to remove sensory information from the classification results. Online experiments were performed with two participants. Main results. Classification rates were 68%, 84% and 81% for correct identification of 5 isometric hand postures offline. Using 3 potential controls for removing sensory signals, error rates were approximately doubled on average (2.1×). A similar increase in errors (2.6×) was noted when the participant was asked to make simultaneous wrist movements along with the hand postures. In online experiments, fist versus rest was successfully classified on 97% of trials; the classification output drove a prosthetic hand. Online classification performance for a larger number of hand postures remained above chance, but substantially below offline performance. In addition, the long integration windows used would preclude the use of decoded signals for control of a BCI system. Significance. These results suggest that ECoG is a plausible source of command signals for prosthetic grasp selection. Overall, avenues remain for improvement through better electrode designs and placement, better participant training, and characterization of non-stationarities such that ECoG could be a viable signal source for grasp control for amputees or individuals with paralysis.

  3. Internal fixation of pilon fractures of the distal radius.

    PubMed Central

    Trumble, T. E.; Schmitt, S. R.; Vedder, N. B.

    1993-01-01

    When closed manipulation fails to restore articular congruity in comminuted, displaced fractures of the distal radius, open reduction and internal fixation is required. Results of surgical stabilization and articular reconstruction of these injuries are reviewed in this retrospective study of 49 patients with 52 displaced, intra-articular distal radius fractures. Forty-three patients (87%) with a mean age of 37 years (range of 17 to 79 years) were available for evaluation. The mean follow-up time was 38 months (range 22-69 months). When rated according to the Association for the Study of Internal Fixation (ASIF), 19 were type C2 and 21 were type C3. We devised an Injury Score System based on the initial injury radiographs to classify severely comminuted intra-articular fractures and to identify those associated with carpal injury (3 patients). Post-operative fracture alignment, articular congruity, and radial length were significantly improved following surgery (p < .01). Grip strength averaged 69% +/- 22% of the contralateral side, and the range of motion averaged 75% +/- 18% of the contralateral side post-operatively. A combined outcome rating system that included grip strength, range of motion, and pain relief averaged 76% +/- 19% of the contralateral side. There was a statistically significant decrease in the combined rating with more severe fracture patterns as defined by the ASIF system (p < .01), Malone classification (p < .03), and the Injury Score System (p < .001). The Injury Score System presented here, and in particular the number of fracture fragments, correlated most closely with outcome of all the classification systems studied. Operative treatment of these distal radius fractures with reconstruction of the articular congruity and correction of the articular surface alignment with internal fixation and/or external fixation, can significantly improve the radiographic alignment and functional outcome. Furthermore, the degree to which articular stepoff, gap between fragments, and radial shortening are improved by surgery is strongly correlated with improved outcome, even when the results are corrected for severity of initial injury, whereas correction of radial tilt or dorsal tilt did not correlate with improved outcome. PMID:8209554

  4. "Rate My Therapist": Automated Detection of Empathy in Drug and Alcohol Counseling via Speech and Language Processing.

    PubMed

    Xiao, Bo; Imel, Zac E; Georgiou, Panayiotis G; Atkins, David C; Narayanan, Shrikanth S

    2015-01-01

    The technology for evaluating patient-provider interactions in psychotherapy (observational coding) has not changed in 70 years. It is labor-intensive, error prone, and expensive, limiting its use in evaluating psychotherapy in the real world. Engineering solutions from speech and language processing provide new methods for the automatic evaluation of provider ratings from session recordings. The primary data are 200 Motivational Interviewing (MI) sessions from a study on MI training methods with observer ratings of counselor empathy. Automatic Speech Recognition (ASR) was used to transcribe sessions, and the resulting words were used in a text-based predictive model of empathy. Two supporting datasets trained the speech processing tasks including ASR (1200 transcripts from heterogeneous psychotherapy sessions and 153 transcripts and session recordings from 5 MI clinical trials). The accuracy of computationally-derived empathy ratings was evaluated against human ratings for each provider. Computationally-derived empathy scores and classifications (high vs. low) were highly accurate against human-based codes and classifications, with a correlation of 0.65 and F-score (a weighted average of sensitivity and specificity) of 0.86, respectively. Empathy prediction using human transcription as input (as opposed to ASR) resulted in a slight increase in prediction accuracies, suggesting that the fully automatic system with ASR is relatively robust. Using speech and language processing methods, it is possible to generate accurate predictions of provider performance in psychotherapy from audio recordings alone. This technology can support large-scale evaluation of psychotherapy for dissemination and process studies.
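
    A hedged sketch in the spirit of the text-based model, not the authors' pipeline: TF-IDF features over (hypothetical) session transcripts feed a ridge regressor for the empathy score, and a hypothetical cut-off yields a high/low classification.

    ```python
    # Hedged sketch of a text-based rating predictor in the spirit of the paper
    # (not the authors' pipeline): TF-IDF features over session transcripts feed a
    # ridge regressor for the empathy score and a threshold gives a high/low label.
    # Transcripts, ratings, and the 5.0 cut-off are hypothetical placeholders.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import Ridge
    from sklearn.metrics import f1_score

    transcripts = [
        "tell me more about how that felt for you",
        "you need to stop drinking it is that simple",
        "it sounds like this has been really hard",
        "just follow the plan and report back next week",
    ]
    empathy = np.array([6.5, 2.0, 6.0, 2.5])          # observer ratings (hypothetical)

    X = TfidfVectorizer().fit_transform(transcripts)
    model = Ridge(alpha=1.0).fit(X, empathy)
    predicted = model.predict(X)

    high_true = (empathy >= 5.0).astype(int)           # high vs. low empathy
    high_pred = (predicted >= 5.0).astype(int)
    print("correlation:", np.corrcoef(empathy, predicted)[0, 1])
    print("F1 (high vs. low):", f1_score(high_true, high_pred))
    ```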

  5. Low-back electromyography (EMG) data-driven load classification for dynamic lifting tasks.

    PubMed

    Totah, Deema; Ojeda, Lauro; Johnson, Daniel D; Gates, Deanna; Mower Provost, Emily; Barton, Kira

    2018-01-01

    Numerous devices have been designed to support the back during lifting tasks. To improve the utility of such devices, this research explores the use of preparatory muscle activity to classify muscle loading and initiate appropriate device activation. The goal of this study was to determine the earliest time window that enabled accurate load classification during a dynamic lifting task. Nine subjects performed thirty symmetrical lifts, split evenly across three weight conditions (no-weight, 10-lbs and 24-lbs), while low-back muscle activity data was collected. Seven descriptive statistics features were extracted from 100 ms windows of data. A multinomial logistic regression (MLR) classifier was trained and tested, employing leave-one subject out cross-validation, to classify lifted load values. Dimensionality reduction was achieved through feature cross-correlation analysis and greedy feedforward selection. The time of full load support by the subject was defined as load-onset. Regions of highest average classification accuracy started at 200 ms before until 200 ms after load-onset with average accuracies ranging from 80% (±10%) to 81% (±7%). The average recall for each class ranged from 69-92%. These inter-subject classification results indicate that preparatory muscle activity can be leveraged to identify the intent to lift a weight up to 100 ms prior to load-onset. The high accuracies shown indicate the potential to utilize intent classification for assistive device applications. Active assistive devices, e.g. exoskeletons, could prevent back injury by off-loading low-back muscles. Early intent classification allows more time for actuators to respond and integrate seamlessly with the user.
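
    A minimal sketch of the described pipeline with synthetic data: descriptive-statistics features from 100 ms EMG windows, a multinomial logistic regression classifier, and leave-one-subject-out cross-validation.

    ```python
    # Hedged sketch of the pipeline described: descriptive-statistics features from
    # 100 ms EMG windows, a multinomial logistic regression classifier, and
    # leave-one-subject-out cross-validation. Signals, subjects, and labels are synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

    rng = np.random.default_rng(5)
    n_windows, win = 270, 100                   # 100 samples ~ 100 ms at 1 kHz
    loads = rng.integers(0, 3, size=n_windows)  # 0 = no weight, 1 = 10 lb, 2 = 24 lb
    subjects = np.repeat(np.arange(9), 30)      # 9 subjects x 30 lifts
    emg = rng.normal(scale=1.0 + 0.5 * loads[:, None], size=(n_windows, win))

    def window_features(w):
        # A few simple descriptive statistics per window.
        return [w.mean(), w.std(), np.abs(w).mean(), w.min(), w.max(),
                np.median(w), np.percentile(w, 75) - np.percentile(w, 25)]

    X = np.array([window_features(w) for w in emg])
    clf = LogisticRegression(max_iter=2000)     # multinomial for the 3 load classes
    scores = cross_val_score(clf, X, loads, groups=subjects, cv=LeaveOneGroupOut())
    print("leave-one-subject-out accuracy per fold:", np.round(scores, 2))
    print("mean accuracy:", scores.mean())
    ```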

  6. 46 CFR 565.9 - Commission review, suspension and prohibition of rates, charges, classifications, rules or...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., charges, classifications, rules or regulations. 565.9 Section 565.9 Shipping FEDERAL MARITIME COMMISSION... Commission review, suspension and prohibition of rates, charges, classifications, rules or regulations. (a)(1..., charges, classifications, rules or regulations) from the Commission, each controlled carrier shall file a...

  7. 46 CFR 565.9 - Commission review, suspension and prohibition of rates, charges, classifications, rules or...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., charges, classifications, rules or regulations. 565.9 Section 565.9 Shipping FEDERAL MARITIME COMMISSION... Commission review, suspension and prohibition of rates, charges, classifications, rules or regulations. (a)(1..., charges, classifications, rules or regulations) from the Commission, each controlled carrier shall file a...

  8. 78 FR 18252 - Prevailing Rate Systems; North American Industry Classification System Based Federal Wage System...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-26

    ...-AM78 Prevailing Rate Systems; North American Industry Classification System Based Federal Wage System... 2007 North American Industry Classification System (NAICS) codes currently used in Federal Wage System... (OPM) issued a final rule (73 FR 45853) to update the 2002 North American Industry Classification...

  9. 76 FR 53699 - Labor Surplus Area Classification Under Executive Orders 12073 and 10582

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-29

    ... DEPARTMENT OF LABOR Employment and Training Administration Labor Surplus Area Classification Under... estimates provided to ETA by the Bureau of Labor Statistics are used in making these classifications. The... classification criteria include a ``floor unemployment rate'' (6.0%) and a ``ceiling unemployment rate'' (10.0...

  10. Response rate, response time, and economic costs of survey research: A randomized trial of practicing pharmacists.

    PubMed

    Hardigan, Patrick C; Popovici, Ioana; Carvajal, Manuel J

    2016-01-01

    There is a gap between increasing demands from pharmacy journals, publishers, and reviewers for high survey response rates and the actual responses often obtained in the field by survey researchers. Presumably demands have been set high because response rates, times, and costs affect the validity and reliability of survey results. Explore the extent to which survey response rates, average response times, and economic costs are affected by conditions under which pharmacist workforce surveys are administered. A random sample of 7200 U.S. practicing pharmacists was selected. The sample was stratified by delivery method, questionnaire length, item placement, and gender of respondent for a total of 300 observations within each subgroup. A job satisfaction survey was administered during March-April 2012. Delivery method was the only classification showing significant differences in response rates and average response times. The postal mail procedure accounted for the highest response rates of completed surveys, but the email method exhibited the quickest turnaround. A hybrid approach, consisting of a combination of postal and electronic means, showed the least favorable results. Postal mail was 2.9 times more cost effective than the email approach and 4.6 times more cost effective than the hybrid approach. Researchers seeking to increase practicing pharmacists' survey participation and reduce response time and related costs can benefit from the analytical procedures tested here. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. The Apgar score has survived the test of time.

    PubMed

    Finster, Mieczyslaw; Wood, Margaret

    2005-04-01

    In 1953, Virginia Apgar, M.D. published her proposal for a new method of evaluation of the newborn infant. The avowed purpose of this paper was to establish a simple and clear classification of newborn infants which can be used to compare the results of obstetric practices, types of maternal pain relief and the results of resuscitation. Having considered several objective signs pertaining to the condition of the infant at birth she selected five that could be evaluated and taught to the delivery room personnel without difficulty. These signs were heart rate, respiratory effort, reflex irritability, muscle tone and color. Sixty seconds after the complete birth of the baby a rating of zero, one or two was given to each sign, depending on whether it was absent or present. Virginia Apgar reviewed anesthesia records of 1025 infants born alive at Columbia Presbyterian Medical Center during the period of this report. All had been rated by her method. Infants in poor condition scored 0-2, infants in fair condition scored 3-7, while scores 8-10 were achieved by infants in good condition. The most favorable score 1 min after birth was obtained by infants delivered vaginally with the occiput the presenting part (average 8.4). Newborns delivered by version and breech extraction had the lowest score (average 6.3). Infants delivered by cesarean section were more vigorous (average score 8.0) when spinal was the method of anesthesia versus an average score of 5.0 when general anesthesia was used. Correlating the 60 s score with neonatal mortality, Virginia found that mature infants receiving 0, 1 or 2 scores had a neonatal death rate of 14%; those scoring 3, 4, 5, 6 or 7 had a death rate of 1.1%; and those in the 8-10 score group had a death rate of 0.13%. She concluded that the prognosis of an infant is excellent if he receives one of the upper three scores, and poor if one of the lowest three scores.
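
    A minimal sketch of the scoring scheme described: each of the five signs is rated 0, 1, or 2, the ratings are summed, and the total is grouped into the poor (0-2), fair (3-7), or good (8-10) condition bands.

    ```python
    # Minimal sketch of the scoring scheme described: each of the five signs is rated
    # 0, 1, or 2 at 60 seconds, the ratings are summed, and the total is grouped into
    # poor (0-2), fair (3-7), or good (8-10) condition.
    def apgar_score(heart_rate, respiratory_effort, reflex_irritability, muscle_tone, color):
        ratings = (heart_rate, respiratory_effort, reflex_irritability, muscle_tone, color)
        if any(r not in (0, 1, 2) for r in ratings):
            raise ValueError("each sign must be rated 0, 1, or 2")
        return sum(ratings)

    def condition(score):
        if score <= 2:
            return "poor"
        return "fair" if score <= 7 else "good"

    score = apgar_score(heart_rate=2, respiratory_effort=2, reflex_irritability=1,
                        muscle_tone=2, color=1)
    print(score, condition(score))   # 8, good
    ```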

  12. Atraumatic intubation: experience using a 5.0 endotracheal tube without a stylet for laryngeal surgery.

    PubMed

    Moore, Jaime E; Hu, Amanda; Rutt, Amy; Green, Parmis; Hawkshaw, Mary; Sataloff, Robert T

    2015-02-01

    Vocal fold injury is a well-known complication of intubation, with rates reported as high as 69%. Laryngology textbooks recommend the use of a small endotracheal tube (ETT) to help avoid these complications and optimize visualization. Case reports have suggested that the rigid stylet can lead to laryngeal injury. Given the additional risks, intubation without the stylet is our preferred practice. There is limited documentation in the literature regarding this viewpoint. Our study investigated the feasibility of and potential barriers to intubation using a 5.0 ETT without a stylet. Prospective study. Consecutive adult patients undergoing laryngeal surgery were recruited for intubation with a 5.0 ETT without a stylet. Demographic data, specialty and training level of the intubator, and factors that would predict a difficult intubation were recorded. Descriptive statistical analysis was performed. Findings of the participants (n = 67) included average American Society of Anesthesiologists (ASA) physical status classification (2.2), average Mallampati score (1.7), average Cormack-Lehane grade (1.5), and average body mass index (28.0). Five patients (7.4%) required intubation using a stylet, and one of these five participants was intubated initially with a stylet. Of these five participants, 80% required use of a GlideScope (P < .001), and they had significantly higher ASA classification (P = .047) and number of intubation attempts (P = .042). One patient sustained an oropharyngeal injury during intubation with a stylet. No participants had laryngeal injury. Most patients can be intubated successfully using a 5.0 ETT without a stylet. There were no cases of laryngeal trauma with this technique. 2b. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.

  13. 76 FR 5375 - Submission for OMB Review; Request for Authorization of Additional Classification and Rate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-31

    ... for Authorization of Additional Classification and Rate, Standard Form 1444 AGENCIES: Department of... Request for Authorization of Additional Classification and Rate, Standard Form 1444. A notice published in... personal and/or business confidential information provided. FOR FURTHER INFORMATION CONTACT: Ms. Clare...

  14. Identification of coffee bean varieties using hyperspectral imaging: influence of preprocessing methods and pixel-wise spectra analysis.

    PubMed

    Zhang, Chu; Liu, Fei; He, Yong

    2018-02-01

    Hyperspectral imaging was used to identify and to visualize the coffee bean varieties. Spectral preprocessing of pixel-wise spectra was conducted by different methods, including moving average smoothing (MA), wavelet transform (WT) and empirical mode decomposition (EMD). Meanwhile, spatial preprocessing of the gray-scale image at each wavelength was conducted by median filter (MF). Support vector machine (SVM) models using full sample average spectra and pixel-wise spectra, and the selected optimal wavelengths by second derivative spectra all achieved classification accuracy over 80%. Primarily, the SVM models using pixel-wise spectra were used to predict the sample average spectra, and these models obtained over 80% of the classification accuracy. Secondly, the SVM models using sample average spectra were used to predict pixel-wise spectra, but achieved with lower than 50% of classification accuracy. The results indicated that WT and EMD were suitable for pixel-wise spectra preprocessing. The use of pixel-wise spectra could extend the calibration set, and resulted in the good prediction results for pixel-wise spectra and sample average spectra. The overall results indicated the effectiveness of using spectral preprocessing and the adoption of pixel-wise spectra. The results provided an alternative way of data processing for applications of hyperspectral imaging in food industry.
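
    A hedged sketch of one preprocessing/classification combination discussed above, with synthetic spectra: moving-average smoothing of pixel-wise spectra followed by an SVM classifier.

    ```python
    # Hedged sketch: moving-average (MA) smoothing of pixel-wise spectra followed by
    # an SVM classifier, as one of the preprocessing/classification combinations the
    # study compares. Spectra and variety labels are synthetic placeholders.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(6)
    n_pixels, n_bands = 600, 200
    variety = rng.integers(0, 3, size=n_pixels)                    # 3 coffee varieties
    spectra = rng.normal(size=(n_pixels, n_bands)).cumsum(axis=1)  # smooth-ish curves
    spectra += variety[:, None] * 2.0    # crude class-dependent offset so the sketch is separable

    def moving_average(spec, window=5):
        kernel = np.ones(window) / window
        return np.convolve(spec, kernel, mode="same")

    X = np.array([moving_average(s) for s in spectra])
    X_tr, X_te, y_tr, y_te = train_test_split(X, variety, test_size=0.3, random_state=0)
    clf = SVC(kernel="rbf", C=10.0).fit(X_tr, y_tr)
    print("pixel-wise classification accuracy:", clf.score(X_te, y_te))
    ```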

  15. 12 CFR 702.105 - Weighted-average life of investments.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 6 2011-01-01 2011-01-01 false Weighted-average life of investments. 702.105... PROMPT CORRECTIVE ACTION Net Worth Classification § 702.105 Weighted-average life of investments. Except as provided below (Table 3), the weighted-average life of an investment for purposes of §§ 702.106(c...

  16. 12 CFR 702.105 - Weighted-average life of investments.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Weighted-average life of investments. 702.105... PROMPT CORRECTIVE ACTION Net Worth Classification § 702.105 Weighted-average life of investments. Except as provided below (Table 3), the weighted-average life of an investment for purposes of §§ 702.106(c...

  17. 12 CFR 702.105 - Weighted-average life of investments.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 7 2014-01-01 2014-01-01 false Weighted-average life of investments. 702.105... PROMPT CORRECTIVE ACTION Net Worth Classification § 702.105 Weighted-average life of investments. Except as provided below (Table 3), the weighted-average life of an investment for purposes of §§ 702.106(c...

  18. 12 CFR 702.105 - Weighted-average life of investments.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 7 2013-01-01 2013-01-01 false Weighted-average life of investments. 702.105... PROMPT CORRECTIVE ACTION Net Worth Classification § 702.105 Weighted-average life of investments. Except as provided below (Table 3), the weighted-average life of an investment for purposes of §§ 702.106(c...

  19. 12 CFR 702.105 - Weighted-average life of investments.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 7 2012-01-01 2012-01-01 false Weighted-average life of investments. 702.105... PROMPT CORRECTIVE ACTION Net Worth Classification § 702.105 Weighted-average life of investments. Except as provided below (Table 3), the weighted-average life of an investment for purposes of §§ 702.106(c...

  20. A clinical decision-making mechanism for context-aware and patient-specific remote monitoring systems using the correlations of multiple vital signs.

    PubMed

    Forkan, Abdur Rahim Mohammad; Khalil, Ibrahim

    2017-02-01

    In home-based context-aware monitoring, patients' real-time data for multiple vital signs (e.g. heart rate, blood pressure) are continuously generated from wearable sensors. The changes in such vital parameters are highly correlated; they are also patient-centric and can be either recurrent or fluctuating. The objective of this study is to develop an intelligent method for personalized monitoring and clinical decision support through early estimation of patient-specific vital sign values and prediction of anomalies using the interrelation among multiple vital signs. In this paper, multi-label classification algorithms are applied in classifier design to forecast these values and related abnormalities. We propose a new approach to patient-specific vital sign prediction that exploits these correlations. The developed technique can guide healthcare professionals to make accurate clinical decisions. Moreover, our model can support many patients with various clinical conditions concurrently by utilizing the power of cloud computing technology. The developed method also reduces the rate of false predictions in remote monitoring centres. In the experimental settings, the statistical features and correlations of six vital signs are formulated as a multi-label classification problem. Eight multi-label classification algorithms along with three fundamental machine learning algorithms are used and tested on a public dataset of 85 patients. Different multi-label classification evaluation measures such as Hamming score, F1-micro average, and accuracy are used for interpreting the prediction performance of patient-specific situation classifications. We achieved Hamming score values of 90-95% across 24 classifier combinations for the 85 patients used in our experiment. The results are compared with single-label classifiers and with models that ignore the correlations among the vitals. The comparisons show that the multi-label method is the best technique for this problem domain. The evaluation results reveal that multi-label classification techniques using the correlations among multiple vitals are an effective way of early estimation of future values of those vitals. In context-aware remote monitoring, this process can greatly help doctors in quick diagnostic decision making. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
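
    The multi-label formulation above can be reproduced in outline with scikit-learn; the sketch below uses a synthetic feature matrix and abnormality labels (the paper's vital-sign statistics and its eight specific multi-label algorithms are not reproduced) and reports the Hamming score and micro-averaged F1 mentioned in the abstract.

```python
# Minimal sketch of multi-label classification of vital-sign abnormalities.
# Features and labels are synthetic placeholders for the paper's per-patient statistics.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier
from sklearn.metrics import hamming_loss, f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 20))                       # e.g. per-window statistics + correlations
Y = (X[:, :6] + 0.5 * rng.normal(size=(500, 6)) > 0.6).astype(int)   # one flag per vital sign

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.3, random_state=0)
clf = MultiOutputClassifier(RandomForestClassifier(n_estimators=100, random_state=0))
clf.fit(X_tr, Y_tr)
Y_hat = clf.predict(X_te)

print("Hamming score:", 1.0 - hamming_loss(Y_te, Y_hat))
print("F1 (micro):   ", f1_score(Y_te, Y_hat, average="micro"))
```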

  1. An automated Pearson's correlation change classification (APC3) approach for GC/MS metabonomic data using total ion chromatograms (TICs).

    PubMed

    Prakash, Bhaskaran David; Esuvaranathan, Kesavan; Ho, Paul C; Pasikanti, Kishore Kumar; Chan, Eric Chun Yong; Yap, Chun Wei

    2013-05-21

    A fully automated and computationally efficient Pearson's correlation change classification (APC3) approach is proposed and shown to have overall comparable performance, with both an average accuracy and an average AUC of 0.89 ± 0.08, while being 3.9 to 7 times faster, easier to use, and less susceptible to outliers than other dimensionality reduction and classification combinations, using only the total ion chromatogram (TIC) intensities of GC/MS data. The use of only the TIC permits the possible application of APC3 to other metabonomic data such as LC/MS TICs or NMR spectra. A RapidMiner implementation is available for download at http://padel.nus.edu.sg/software/padelapc3.
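
    The APC3 algorithm itself is only summarized above, so the sketch below is not a reimplementation; it merely illustrates the general flavour of correlation-based classification of total ion chromatograms, assigning a TIC to the class whose mean TIC it correlates with most strongly. All data and names are hypothetical.

```python
# Hedged sketch only: the APC3 algorithm is not reproduced here.
# Assign a TIC to the class whose mean TIC it correlates with most strongly.
import numpy as np
from scipy.stats import pearsonr

def classify_by_correlation(tic, class_mean_tics):
    """class_mean_tics: dict {label: mean TIC vector}."""
    scores = {label: pearsonr(tic, mean_tic)[0] for label, mean_tic in class_mean_tics.items()}
    return max(scores, key=scores.get)

rng = np.random.default_rng(2)
base = {"control": np.abs(rng.normal(size=300)), "case": np.abs(rng.normal(size=300))}
query = base["case"] + 0.1 * rng.normal(size=300)    # noisy chromatogram of unknown class
print(classify_by_correlation(query, base))           # expected: "case"
```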

  2. Sleep stage classification by non-contact vital signs indices using Doppler radar sensors.

    PubMed

    Kagawa, Masayuki; Suzumura, Kazuki; Matsui, Takemi

    2016-08-01

    Disturbed sleep has become more common in recent years. To improve the quality of sleep, undergoing sleep observation has gained interest as a means to resolve possible problems. In this paper, we evaluate a non-restrictive and non-contact method for classifying real-time sleep stages and report on its potential applications. The proposed system measures heart rate (HR), heart rate variability (HRV), body movements, and respiratory signals of a sleeping person using two 24-GHz microwave radars placed beneath the mattress. We introduce a method that dynamically selects the window width of the moving average filter to extract the pulse waves from the radar output signals. The Pearson correlation coefficient between the radar-derived overnight HR measurements and the reference polysomnography averaged 88.3%, and the correlation coefficient for HRV parameters averaged 71.2%. For identifying wake and sleep periods, the body-movement index reached a sensitivity of 76.0% and a specificity of 77.0% with 10 participants. Low-frequency (LF) components of HRV and the LF/HF ratio had a high degree of contribution and differed significantly across the three sleep stages (REM, LIGHT, and DEEP; p < 0.01). In contrast, high-frequency (HF) components of HRV were not significantly different across the three sleep stages (p > 0.05). We applied a canonical discriminant analysis to identify wake or sleep periods and to classify the three sleep stages with leave-one-out cross validation. Classification accuracy was 66.4% for simply identifying wake and sleep, 57.1% for three stages (wake, REM, and NREM) and 34% for four stages (wake, REM, LIGHT, and DEEP). This is a novel system for measuring HRs, HRV, body movements, and respiratory intervals and for measuring high sensitivity pulse waves using two radar signals. It simplifies measurement of sleep stages and may be employed at nursing care facilities or by the general public to improve sleep quality.

  3. Sources of error in estimating truck traffic from automatic vehicle classification data

    DOT National Transportation Integrated Search

    1998-10-01

    Truck annual average daily traffic estimation errors resulting from sample classification counts are computed in this paper under two scenarios. One scenario investigates an improper factoring procedure that may be used by highway agencies. The study...

  4. C-fuzzy variable-branch decision tree with storage and classification error rate constraints

    NASA Astrophysics Data System (ADS)

    Yang, Shiueng-Bien

    2009-10-01

    The C-fuzzy decision tree (CFDT), which is based on the fuzzy C-means algorithm, has recently been proposed. The CFDT is grown by selecting the nodes to be split according to its classification error rate. However, the CFDT design does not consider the classification time taken to classify the input vector. Thus, the CFDT can be improved. We propose a new C-fuzzy variable-branch decision tree (CFVBDT) with storage and classification error rate constraints. The design of the CFVBDT consists of two phases: growing and pruning. The CFVBDT is grown by selecting the nodes to be split according to the classification error rate and the classification time in the decision tree. Additionally, the pruning method selects the nodes to prune based on the storage requirement and the classification time of the CFVBDT. Furthermore, the number of branches of each internal node is variable in the CFVBDT. Experimental results indicate that the proposed CFVBDT outperforms the CFDT and other methods.
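
    A minimal sketch of the fuzzy C-means step that underlies C-fuzzy decision trees is given below; the tree growing and pruning logic with storage and classification-time constraints described in the abstract is not reproduced.

```python
# Hedged sketch of fuzzy C-means (the clustering step behind C-fuzzy decision trees).
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, eps=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)             # fuzzy memberships, rows sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2.0 / (m - 1.0)))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.max(np.abs(U_new - U)) < eps:
            U = U_new
            break
        U = U_new
    return centers, U

X = np.vstack([np.random.randn(50, 2) + off for off in ([0, 0], [5, 5], [0, 5])])
centers, U = fuzzy_c_means(X, c=3)
print(centers.round(2))                            # approximate cluster centres
print(U.argmax(axis=1)[:10])                       # hard labels from fuzzy memberships
```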

  5. Drawing a baseline in aesthetic quality assessment

    NASA Astrophysics Data System (ADS)

    Rubio, Fernando; Flores, M. Julia; Puerta, Jose M.

    2018-04-01

    Aesthetic classification of images is an inherently subjective task. There is no validated collection of images/photographs labeled by experts as having good or bad quality. Nowadays, the closest approximation is to use databases of photos where a group of users rates each image. Hence, there is not a unique good/bad label but a rating distribution given by user votes. Due to this peculiarity, it is not possible to state the problem of binary aesthetic supervised classification as directly as in other Computer Vision tasks. Recent literature follows an approach where researchers take the average rating from the users for each image and establish an arbitrary threshold to determine its class or label. In this way, images above the threshold are considered of good quality, while images below the threshold are seen as bad quality. This paper analyzes the current literature and reviews the attributes able to represent an image, divided into three families: specific, general and deep features. Among those which have proved most competitive, we have selected a representative subset, our main goal being to establish a clear experimental framework. Finally, once the features were selected, we used them on the full AVA dataset. For validation we report not only accuracy values, which are not very informative in this case, but also metrics able to evaluate classification power on imbalanced datasets. We have conducted a series of experiments in which distinct well-known classifiers are learned from the data. In this way, the paper provides valid baseline results for the given problem.
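
    The thresholding convention described above is easy to make concrete; the sketch below binarises synthetic average ratings at an assumed threshold and reports imbalance-aware metrics. The features, the threshold value, and the classifier are placeholders rather than the paper's experimental setup.

```python
# Minimal sketch: binarise average user ratings at an arbitrary threshold, then report
# metrics suited to imbalanced classes. Synthetic data; AVA images are not loaded here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score, f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
features = rng.normal(size=(1000, 32))                        # image descriptors (placeholder)
avg_rating = 5.4 + features[:, 0] + 0.3 * rng.normal(size=1000)   # mean of user votes per image

threshold = 5.0                                               # the arbitrary good/bad cut-off
labels = (avg_rating > threshold).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
y_hat = clf.predict(X_te)
print("balanced accuracy:", balanced_accuracy_score(y_te, y_hat))
print("F1:", f1_score(y_te, y_hat))
```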

  6. Spatial Trends and Variability of Vertical Accretion Rates in the Barataria Basin, Louisiana, U.S.A. using Pb-210 and Cs-137 radiochemistry

    NASA Astrophysics Data System (ADS)

    Shrull, S.; Wilson, C.; Snedden, G.; Bentley, S. J.

    2017-12-01

    Barataria Basin on the south Louisiana coast is experiencing some of the greatest coastal land loss in the United States, with rates as high as 23.1 km2 lost per year. In an attempt to help slow or reverse land loss, millions of dollars are being spent to create sediment diversions to increase the amount of inorganic sediment available to these vulnerable coastal marsh areas. A better understanding of the spatial trends and patterns of background accretion rates needs to be established in order to effectively implement such structures. Core samples from 25 Coastwide Reference Monitoring System (CRMS) sites, spanning inland freshwater to coastal saline areas within the basin, were extracted; vertical accretion rates from Cs-137 and Pb-210 radionuclide detection, mineral versus organic sediment composition, grain size distribution, and spatial trends in bulk density will be used to constrain the controls on marsh soil accretion rates. Initial rates range from 0.31 cm/year to 1.02 cm/year, with an average of 0.79 cm/year. Preliminary results suggest that location and proximity to an inorganic sediment source (i.e. river/tributary or open water) have a stronger influence on vertical accretion rates than marsh classification and salinity, with no clear relationship between vertical accretion and salinity. Down-core sediment composition and bulk density analyses at a number of the sites suggest episodic sedimentation and show different vertical accretion rates through time. Frequency and length of inundation (i.e. hydroperiod) and land/marsh classification from the CRMS data set will be further investigated to constrain the spatial variability in vertical accretion for the basin.

  7. A comparative study of breast cancer diagnosis based on neural network ensemble via improved training algorithms.

    PubMed

    Azami, Hamed; Escudero, Javier

    2015-08-01

    Breast cancer is one of the most common types of cancer in women all over the world. Early diagnosis of this kind of cancer can significantly increase the chances of long-term survival. Since diagnosis of breast cancer is a complex problem, neural network (NN) approaches have been used as a promising solution. Considering the low speed of the back-propagation (BP) algorithm to train a feed-forward NN, we consider a number of improved NN training algorithms for the Wisconsin breast cancer dataset: BP with momentum, BP with adaptive learning rate, BP with adaptive learning rate and momentum, Polak-Ribière conjugate gradient algorithm (CGA), Fletcher-Reeves CGA, Powell-Beale CGA, scaled CGA, resilient BP (RBP), one-step secant and quasi-Newton methods. An NN ensemble, which is a learning paradigm to combine a number of NN outputs, is used to improve the accuracy of the classification task. Results demonstrate that NN ensemble-based classification methods have better performance than NN-based algorithms. The highest overall average accuracy is 97.68% obtained by an NN ensemble trained by RBP for the 50%-50% training-test evaluation method.
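
    A minimal sketch of ensemble averaging of neural networks on the publicly available Wisconsin breast cancer data is shown below. The specialised training algorithms listed in the abstract (conjugate gradient variants, resilient BP, quasi-Newton) are not available in scikit-learn, so the default Adam optimiser is used purely as a stand-in; the resulting accuracy will not match the reported 97.68%.

```python
# Minimal sketch: several MLPs trained with different seeds, predicted probabilities averaged.
# Adam is a stand-in for the specialised training algorithms listed in the abstract.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)  # 50%-50% split
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

members = [MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=s).fit(X_tr, y_tr)
           for s in range(5)]
avg_proba = np.mean([m.predict_proba(X_te) for m in members], axis=0)   # ensemble averaging
accuracy = np.mean(avg_proba.argmax(axis=1) == y_te)
print(f"ensemble test accuracy: {accuracy:.4f}")
```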

  8. Integrating Human and Machine Intelligence in Galaxy Morphology Classification Tasks

    NASA Astrophysics Data System (ADS)

    Beck, Melanie Renee

    The large flood of data flowing from observatories presents significant challenges to astronomy and cosmology, challenges that will only be magnified by projects currently under development. Growth in both volume and velocity of astrophysics data is accelerating: whereas the Sloan Digital Sky Survey (SDSS) has produced 60 terabytes of data in the last decade, the upcoming Large Synoptic Survey Telescope (LSST) plans to register 30 terabytes per night starting in the year 2020. Additionally, the Euclid Mission will acquire imaging for 5 x 10^7 resolvable galaxies. The field of galaxy evolution faces a particularly challenging future as complete understanding often cannot be reached without analysis of detailed morphological galaxy features. Historically, morphological analysis has relied on visual classification by astronomers, accessing the human brain's capacity for advanced pattern recognition. However, this accurate but inefficient method falters when confronted with many thousands (or millions) of images. In the SDSS era, efforts to automate morphological classifications of galaxies (e.g., Conselice et al., 2000; Lotz et al., 2004) are reasonably successful and can distinguish between elliptical and disk-dominated galaxies with accuracies of 80%. While this is statistically very useful, a key problem with these methods is that they often cannot say which 80% of their samples are accurate. Furthermore, when confronted with the more complex task of identifying key substructure within galaxies, automated classification algorithms begin to fail. The Galaxy Zoo project uses a highly innovative approach to solving the scalability problem of visual classification. Displaying images of SDSS galaxies to volunteers via a simple and engaging web interface, www.galaxyzoo.org asks people to classify images by eye. Within the first year hundreds of thousands of members of the general public had classified each of the 1 million SDSS galaxies an average of 40 times. Galaxy Zoo thus solved both the visual classification problem of time efficiency and improved accuracy by producing a distribution of independent classifications for each galaxy. While crowd-sourced galaxy classifications have proven their worth, challenges remain before establishing this method as a critical and standard component of the data processing pipelines for the next generation of surveys. In particular, though innovative, crowd-sourcing techniques do not have the capacity to handle the data volume and rates expected in the next generation of surveys. Automated algorithms will instead be delegated to handle the majority of the classification tasks, freeing citizen scientists to contribute their efforts to subtler and more complex assignments. This thesis presents a solution through an integration of visual and automated classifications, preserving the best features of both human and machine. We demonstrate the effectiveness of such a system through a re-analysis of visual galaxy morphology classifications collected during the Galaxy Zoo 2 (GZ2) project. We reprocess the top-level question of the GZ2 decision tree with a Bayesian classification aggregation algorithm dubbed SWAP, originally developed for the Space Warps gravitational lens project. Through a simple binary classification scheme we increase the classification rate nearly 5-fold, classifying 226,124 galaxies in 92 days of GZ2 project time while reproducing labels derived from GZ2 classification data with 95.7% accuracy.
We next combine this with a Random Forest machine learning algorithm that learns on a suite of non-parametric morphology indicators widely used for automated morphologies. We develop a decision engine that delegates tasks between human and machine and demonstrate that the combined system provides a factor of 11.4 increase in the classification rate, classifying 210,803 galaxies in just 32 days of GZ2 project time with 93.1% accuracy. As the Random Forest algorithm requires a minimal amount of computational cost, this result has important implications for galaxy morphology identification tasks in the era of Euclid and other large-scale surveys.

  9. Classification of DNA nucleotides with transverse tunneling currents

    NASA Astrophysics Data System (ADS)

    Nyvold Pedersen, Jonas; Boynton, Paul; Di Ventra, Massimiliano; Jauho, Antti-Pekka; Flyvbjerg, Henrik

    2017-01-01

    It has been theoretically suggested and experimentally demonstrated that fast and low-cost sequencing of DNA, RNA, and peptide molecules might be achieved by passing such molecules between electrodes embedded in a nanochannel. The experimental realization of this scheme faces major challenges, however. In realistic liquid environments, typical currents in tunneling devices are of the order of picoamps. This corresponds to only six electrons per microsecond, and this number affects the integration time required to do current measurements in real experiments. This limits the speed of sequencing, though current fluctuations due to Brownian motion of the molecule average out during the required integration time. Moreover, data acquisition equipment introduces noise, and electronic filters create correlations in time-series data. We discuss how these effects must be included in the analysis of, e.g., the assignment of specific nucleobases to current signals. As the signals from different molecules overlap, unambiguous classification is impossible with a single measurement. We argue that the assignment of molecules to a signal is a standard pattern classification problem and calculation of the error rates is straightforward. The ideas presented here can be extended to other sequencing approaches of current interest.
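
    The six-electrons-per-microsecond figure quoted above follows directly from the elementary charge; the arithmetic for a 1 pA tunneling current integrated over 1 µs is:

```latex
% Current-to-electron-count conversion behind the figure quoted in the abstract.
\[
  N \;=\; \frac{I\,\Delta t}{e}
    \;=\; \frac{\left(1\times10^{-12}\,\mathrm{A}\right)\left(1\times10^{-6}\,\mathrm{s}\right)}{1.602\times10^{-19}\,\mathrm{C}}
    \;\approx\; 6.2\ \text{electrons per microsecond.}
\]
```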

  10. Regional Landslide Mapping Aided by Automated Classification of SqueeSAR™ Time Series (Northern Apennines, Italy)

    NASA Astrophysics Data System (ADS)

    Iannacone, J.; Berti, M.; Allievi, J.; Del Conte, S.; Corsini, A.

    2013-12-01

    Space borne InSAR has proven to be very valuable for landslide detection. In particular, extremely slow landslides (Cruden and Varnes, 1996) can now be clearly identified, thanks to the millimetric precision reached by recent multi-interferometric algorithms. The typical approach in radar interpretation for landslide mapping is based on the average annual velocity of the deformation, which is calculated over the entire time series. The Hotspot and Cluster Analysis (Lu et al., 2012) and the PSI-based matrix approach (Cigna et al., 2013) are examples of landslide mapping techniques based on average annual velocities. However, slope movements can be affected by non-linear deformation trends (i.e. reactivation of dormant landslides, deceleration due to natural or man-made slope stabilization, seasonal activity, etc.). Therefore, analyzing deformation time series is crucial in order to fully characterize slope dynamics. While this is relatively simple to carry out manually for a small dataset, time series analysis over regional-scale datasets requires automated classification procedures. Berti et al. (2013) developed an automatic procedure for the analysis of InSAR time series based on a sequence of statistical tests. The analysis classifies the time series into six distinct target trends (0=uncorrelated; 1=linear; 2=quadratic; 3=bilinear; 4=discontinuous without constant velocity; 5=discontinuous with change in velocity) which are likely to represent different slope processes. The analysis also provides a series of descriptive parameters which can be used to characterize the temporal changes of ground motion. All the classification algorithms were integrated into a Graphical User Interface called PSTime. We investigated an area of about 2000 km2 in the Northern Apennines of Italy by using the SqueeSAR™ algorithm (Ferretti et al., 2011). Two Radarsat-1 data stacks, comprising 112 scenes in descending orbit and 124 scenes in ascending orbit, were processed. The time coverage spans April 2003 to November 2012, with an average temporal frequency of 1 scene/month. Radar interpretation has been carried out by considering average annual velocities as well as acceleration/deceleration trends evidenced by PSTime. Altogether, from ascending and descending geometries respectively, this approach allowed the detection of 115 and 112 potential landslides on the basis of average displacement rate, and of 77 and 79 landslides on the basis of acceleration trends. In conclusion, time series analysis proved to be very valuable for landslide mapping. In particular, it highlighted areas with marked acceleration in a specific period in time while still being affected by low average annual velocity over the entire analysis period. On the other hand, even in areas with high average annual velocity, time series analysis was of primary importance to characterize the slope dynamics in terms of acceleration events.
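
    The PSTime test sequence is not reproduced here, but the basic idea of assigning a displacement time series to a trend class can be sketched by comparing polynomial fits with an information criterion; the classes, thresholds, and synthetic series below are assumptions for illustration only.

```python
# Hedged sketch: assign an InSAR displacement time series to a trend class
# (uncorrelated / linear / quadratic) by comparing polynomial fits with BIC.
import numpy as np

def trend_class(t, y):
    """Return 'uncorrelated', 'linear' or 'quadratic' for time series y(t)."""
    best_label, best_bic = None, np.inf
    for label, degree in (("uncorrelated", 0), ("linear", 1), ("quadratic", 2)):
        coeffs = np.polyfit(t, y, degree)
        resid = y - np.polyval(coeffs, t)
        n, k = len(y), degree + 1
        bic = n * np.log(np.mean(resid ** 2) + 1e-12) + k * np.log(n)
        if bic < best_bic:
            best_label, best_bic = label, bic
    return best_label

t = np.arange(0, 120)                                # months
rng = np.random.default_rng(4)
accelerating = 0.01 * t ** 2 + rng.normal(scale=2.0, size=t.size)   # mm, accelerating motion
print(trend_class(t, accelerating))                  # expected: "quadratic"
```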

  11. Development of classification models to detect Salmonella Enteritidis and Salmonella Typhimurium found in poultry carcass rinses by visible-near infrared hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Seo, Young Wook; Yoon, Seung Chul; Park, Bosoon; Hinton, Arthur; Windham, William R.; Lawrence, Kurt C.

    2013-05-01

    Salmonella is a major cause of foodborne disease outbreaks resulting from the consumption of contaminated food products in the United States. This paper reports the development of a hyperspectral imaging technique for detecting and differentiating two of the most common Salmonella serotypes, Salmonella Enteritidis (SE) and Salmonella Typhimurium (ST), from background microflora that are often found in poultry carcass rinse. Presumptive positive screening of colonies with a traditional direct plating method is a labor intensive and time consuming task. Thus, this paper is concerned with the detection of differences in spectral characteristics among the pure SE, ST, and background microflora grown on brilliant green sulfa (BGS) and xylose lysine tergitol 4 (XLT4) agar media with a spread plating technique. Visible near-infrared hyperspectral imaging, providing the spectral and spatial information unique to each microorganism, was utilized to differentiate SE and ST from the background microflora. A total of 10 classification models, including five machine learning algorithms, each without and with principal component analysis (PCA), were validated and compared to find the best model in classification accuracy. The five machine learning (classification) algorithms used in this study were Mahalanobis distance (MD), k-nearest neighbor (kNN), linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), and support vector machine (SVM). The average classification accuracy of all 10 models on a calibration (or training) set of the pure cultures on BGS agar plates was 98% (Kappa coefficient = 0.95) in determining the presence of SE and/or ST although it was difficult to differentiate between SE and ST. The average classification accuracy of all 10 models on a training set for ST detection on XLT4 agar was over 99% (Kappa coefficient = 0.99) although SE colonies on XLT4 agar were difficult to differentiate from background microflora. The average classification accuracy of all 10 models on a validation set of chicken carcass rinses spiked with SE or ST and incubated on BGS agar plates was 94.45% and 83.73%, without and with PCA for classification, respectively. The best performing classification model on the validation set was QDA without PCA by achieving the classification accuracy of 98.65% (Kappa coefficient=0.98). The overall best performing classification model regardless of using PCA was MD with the classification accuracy of 94.84% (Kappa coefficient=0.88) on the validation set.
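
    A schematic version of the model comparison above (classifiers evaluated with and without a PCA step) is sketched below with synthetic spectra standing in for the hyperspectral colony data; the Mahalanobis-distance classifier has no direct scikit-learn equivalent and is omitted.

```python
# Minimal sketch: a few standard classifiers evaluated with and without PCA.
# Spectra and labels are synthetic placeholders, not the study's hyperspectral data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
y = rng.integers(0, 3, size=300)                     # SE / ST / background (placeholder)
X = rng.normal(size=(300, 40)) + 0.4 * y[:, None]    # pixel spectra (placeholder)

base_models = {
    "kNN": KNeighborsClassifier(5),
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
    "SVM": SVC(kernel="rbf", gamma="scale"),
}
for name, model in base_models.items():
    for use_pca in (False, True):
        pipe = make_pipeline(PCA(n_components=10), model) if use_pca else model
        acc = cross_val_score(pipe, X, y, cv=5).mean()
        print(f"{name:>3} {'with' if use_pca else 'without'} PCA: {acc:.3f}")
```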

  12. Low-back electromyography (EMG) data-driven load classification for dynamic lifting tasks

    PubMed Central

    Ojeda, Lauro; Johnson, Daniel D.; Gates, Deanna; Mower Provost, Emily; Barton, Kira

    2018-01-01

    Objective: Numerous devices have been designed to support the back during lifting tasks. To improve the utility of such devices, this research explores the use of preparatory muscle activity to classify muscle loading and initiate appropriate device activation. The goal of this study was to determine the earliest time window that enabled accurate load classification during a dynamic lifting task. Methods: Nine subjects performed thirty symmetrical lifts, split evenly across three weight conditions (no weight, 10 lbs, and 24 lbs), while low-back muscle activity data were collected. Seven descriptive statistics features were extracted from 100 ms windows of data. A multinomial logistic regression (MLR) classifier was trained and tested, employing leave-one-subject-out cross-validation, to classify lifted load values. Dimensionality reduction was achieved through feature cross-correlation analysis and greedy feedforward selection. The time of full load support by the subject was defined as load-onset. Results: Regions of highest average classification accuracy started 200 ms before load-onset and extended to 200 ms after load-onset, with average accuracies ranging from 80% (±10%) to 81% (±7%). The average recall for each class ranged from 69-92%. Conclusion: These inter-subject classification results indicate that preparatory muscle activity can be leveraged to identify the intent to lift a weight up to 100 ms prior to load-onset. The high accuracies shown indicate the potential to utilize intent classification for assistive device applications. Significance: Active assistive devices, e.g. exoskeletons, could prevent back injury by off-loading low-back muscles. Early intent classification allows more time for actuators to respond and integrate seamlessly with the user. PMID:29447252
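
    The windowed-feature pipeline can be sketched as follows; the sampling rate, feature set, and synthetic EMG used here are assumptions, and leave-one-subject-out validation is replaced by a simple split for brevity.

```python
# Minimal sketch: descriptive statistics from 100 ms windows of (synthetic) low-back EMG
# fed to a multinomial logistic regression; all data and parameters are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

FS = 1000                                      # Hz (assumed sampling rate)
WIN = int(0.100 * FS)                          # 100 ms window

def window_features(emg):
    """A few descriptive statistics of one EMG window."""
    return np.array([emg.mean(), emg.std(), np.abs(emg).mean(),
                     emg.max() - emg.min(), np.sum(np.diff(np.sign(emg)) != 0)])

rng = np.random.default_rng(6)
windows, labels = [], []
for load in (0, 1, 2):                         # no weight, 10 lb, 24 lb
    for _ in range(200):
        emg = rng.normal(scale=1.0 + 0.5 * load, size=WIN)   # toy load-dependent signal
        windows.append(window_features(emg))
        labels.append(load)
X, y = np.vstack(windows), np.array(labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)      # multinomial by default
print("test accuracy:", clf.score(X_te, y_te))
```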

  13. Texture-based segmentation of temperate-zone woodland in panchromatic IKONOS imagery

    NASA Astrophysics Data System (ADS)

    Gagnon, Langis; Bugnet, Pierre; Cavayas, Francois

    2003-08-01

    We have performed a study to identify optimal texture parameters for woodland segmentation in a highly non-homogeneous urban area from a temperate-zone panchromatic IKONOS image. Texture images are produced with sum- and difference-histograms, which depend on two parameters: window size f and displacement step p. The four texture features yielding the best discrimination between classes are the mean, contrast, correlation and standard deviation. The f-p combinations 17-1, 17-2, 35-1 and 35-2 give the best performance, with an average classification rate of 90%.
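
    A hedged sketch of Unser-style sum- and difference-histogram texture features for one window and displacement step is given below; the moment-based approximations used here may differ in detail from the study's implementation.

```python
# Hedged sketch: sum- and difference-based texture features (mean, contrast,
# correlation, standard deviation) for one window and horizontal displacement p.
import numpy as np

def sum_diff_features(window, p=1):
    """Texture features of a square image window for a horizontal displacement p."""
    a = window[:, :-p].astype(float)
    b = window[:, p:].astype(float)
    s, d = a + b, a - b                         # sum and difference images
    mean = 0.5 * s.mean()
    contrast = np.mean(d ** 2)
    var_s = np.mean((s - s.mean()) ** 2)
    variance = 0.5 * (var_s + contrast)
    correlation = 0.5 * (var_s - contrast)
    return {"mean": mean, "contrast": contrast,
            "std": np.sqrt(variance), "correlation": correlation}

rng = np.random.default_rng(7)
texture = rng.normal(loc=120, scale=15, size=(17, 17))   # e.g. an f = 17 window
print(sum_diff_features(texture, p=1))
```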

  14. Automatic identification of individual killer whales.

    PubMed

    Brown, Judith C; Smaragdis, Paris; Nousek-McGregor, Anna

    2010-09-01

    Following the successful use of HMM and GMM models for classification of a set of 75 calls of northern resident killer whales into call types [Brown, J. C., and Smaragdis, P., J. Acoust. Soc. Am. 125, 221-224 (2009)], the use of these same methods has been explored for the identification of vocalizations of the same call type N2 from four individual killer whales. With an average of 20 vocalizations from each individual, the pairwise comparisons have an extremely high success rate of 80 to 100%, and identification within the entire group yields around 78%.

  15. Average Likelihood Methods for Code Division Multiple Access (CDMA)

    DTIC Science & Technology

    2014-05-01

    lengths in the range of 2^2 to 2^13 and possibly higher. Keywords: DS/CDMA signals, classification, balanced CDMA load, synchronous CDMA, decision...likelihood ratio test (ALRT). We begin this classification problem by finding the size of the spreading matrix that generated the DS-CDMA signal. As...Theoretical Background The classification of DS/CDMA signals should not be confused with the problem of multiuser detection. The multiuser detection deals

  16. A Novel Energy-Efficient Approach for Human Activity Recognition.

    PubMed

    Zheng, Lingxiang; Wu, Dihong; Ruan, Xiaoyang; Weng, Shaolin; Peng, Ao; Tang, Biyu; Lu, Hai; Shi, Haibin; Zheng, Huiru

    2017-09-08

    In this paper, we propose a novel energy-efficient approach for a mobile activity recognition system (ARS) to detect human activities. The proposed energy-efficient ARS, using low sampling rates, can achieve high recognition accuracy and low energy consumption. A novel classifier that integrates hierarchical support vector machine and context-based classification (HSVMCC) is presented to achieve a high accuracy of activity recognition when the sampling rate is less than the activity frequency, i.e., the Nyquist sampling theorem is not satisfied. We tested the proposed energy-efficient approach with the data collected from 20 volunteers (14 males and 6 females) and an average recognition accuracy of around 96.0% was achieved. Results show that using a low sampling rate of 1 Hz can save 17.3% and 59.6% of energy compared with sampling rates of 5 Hz and 50 Hz. The proposed low sampling rate approach can greatly reduce the power consumption while maintaining high activity recognition accuracy. The composition of power consumption in online ARS is also investigated in this paper.

  17. Phonologically-based biomarkers for major depressive disorder

    NASA Astrophysics Data System (ADS)

    Trevino, Andrea Carolina; Quatieri, Thomas Francis; Malyska, Nicolas

    2011-12-01

    Of increasing importance in the civilian and military population is the recognition of major depressive disorder at its earliest stages and intervention before the onset of severe symptoms. Toward the goal of more effective monitoring of depression severity, we introduce vocal biomarkers that are derived automatically from phonologically-based measures of speech rate. To assess our measures, we use a 35-speaker free-response speech database of subjects treated for depression over a 6-week duration. We find that dissecting average measures of speech rate into phone-specific characteristics and, in particular, combined phone-duration measures uncovers stronger relationships between speech rate and depression severity than global measures previously reported for a speech-rate biomarker. Results of this study are supported by correlation of our measures with depression severity and classification of depression state with these vocal measures. Our approach provides a general framework for analyzing individual symptom categories through phonological units, and supports the premise that speaking rate can be an indicator of psychomotor retardation severity.

  18. Learning classification trees

    NASA Technical Reports Server (NTRS)

    Buntine, Wray

    1991-01-01

    Algorithms for learning classification trees have had successes in artificial intelligence and statistics over many years. How a tree learning algorithm can be derived from Bayesian decision theory is outlined. This introduces Bayesian techniques for splitting, smoothing, and tree averaging. The splitting rule turns out to be similar to Quinlan's information gain splitting rule, while smoothing and averaging replace pruning. Comparative experiments with reimplementations of a minimum encoding approach, Quinlan's C4, and Breiman et al.'s CART show that the full Bayesian algorithm is consistently as good as, or more accurate than, these other approaches, though at a computational price.
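
    A minimal sketch of the information-gain splitting rule referenced above is shown below; the Bayesian smoothing and tree-averaging machinery is not reproduced.

```python
# Minimal sketch: choose a split threshold for a numeric feature by information gain.
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature, labels, threshold):
    """Gain of splitting a numeric feature at `threshold`."""
    left, right = labels[feature <= threshold], labels[feature > threshold]
    if len(left) == 0 or len(right) == 0:
        return 0.0
    weighted = (len(left) * entropy(left) + len(right) * entropy(right)) / len(labels)
    return entropy(labels) - weighted

rng = np.random.default_rng(8)
x = rng.normal(size=200)
y = (x > 0.3).astype(int)                        # labels largely determined by x
candidates = np.quantile(x, np.linspace(0.1, 0.9, 9))
best = max(candidates, key=lambda t: information_gain(x, y, t))
print("best threshold:", round(best, 3))         # should land near the true cut at 0.3
```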

  19. Spectral-spatial classification using tensor modeling for cancer detection with hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Lu, Guolan; Halig, Luma; Wang, Dongsheng; Chen, Zhuo Georgia; Fei, Baowei

    2014-03-01

    As an emerging technology, hyperspectral imaging (HSI) combines both the chemical specificity of spectroscopy and the spatial resolution of imaging, which may provide a non-invasive tool for cancer detection and diagnosis. Early detection of malignant lesions could improve both survival and quality of life of cancer patients. In this paper, we introduce a tensor-based computation and modeling framework for the analysis of hyperspectral images to detect head and neck cancer. The proposed classification method can distinguish between malignant tissue and healthy tissue with an average sensitivity of 96.97% and an average specificity of 91.42% in tumor-bearing mice. The hyperspectral imaging and classification technology has been demonstrated in animal models and can have many potential applications in cancer research and management.

  20. Classification of E-Nose Aroma Data of Four Fruit Types by ABC-Based Neural Network

    PubMed Central

    Adak, M. Fatih; Yumusak, Nejat

    2016-01-01

    Electronic nose technology is used in many areas, and frequently in the beverage industry for classification and quality-control purposes. In this study, four different aroma data (strawberry, lemon, cherry, and melon) were obtained using a MOSES II electronic nose for the purpose of fruit classification. To improve the performance of the classification, the training phase of the neural network with two hidden layers was optimized using artificial bee colony algorithm (ABC), which is known to be successful in exploration. Test data were given to two different neural networks, each of which were trained separately with backpropagation (BP) and ABC, and average test performances were measured as 60% for the artificial neural network trained with BP and 76.39% for the artificial neural network trained with ABC. Training and test phases were repeated 30 times to obtain these average performance measurements. This level of performance shows that the artificial neural network trained with ABC is successful in classifying aroma data. PMID:26927124

  1. Classification of E-Nose Aroma Data of Four Fruit Types by ABC-Based Neural Network.

    PubMed

    Adak, M Fatih; Yumusak, Nejat

    2016-02-27

    Electronic nose technology is used in many areas, and frequently in the beverage industry for classification and quality-control purposes. In this study, four different aroma data (strawberry, lemon, cherry, and melon) were obtained using a MOSES II electronic nose for the purpose of fruit classification. To improve the performance of the classification, the training phase of the neural network with two hidden layers was optimized using artificial bee colony algorithm (ABC), which is known to be successful in exploration. Test data were given to two different neural networks, each of which were trained separately with backpropagation (BP) and ABC, and average test performances were measured as 60% for the artificial neural network trained with BP and 76.39% for the artificial neural network trained with ABC. Training and test phases were repeated 30 times to obtain these average performance measurements. This level of performance shows that the artificial neural network trained with ABC is successful in classifying aroma data.

  2. Combining Passive Microwave Rain Rate Retrieval with Visible and Infrared Cloud Classification.

    NASA Astrophysics Data System (ADS)

    Miller, Shawn William

    The relation between cloud type and rain rate has been investigated here from different approaches. Previous studies and intercomparisons have indicated that no single passive microwave rain rate algorithm is an optimal choice for all types of precipitating systems. Motivated by the upcoming Tropical Rainfall Measuring Mission (TRMM), an algorithm which combines visible and infrared cloud classification with passive microwave rain rate estimation was developed and analyzed in a preliminary manner using data from the Tropical Ocean Global Atmosphere-Coupled Ocean Atmosphere Response Experiment (TOGA-COARE). Overall correlation with radar rain rate measurements across five case studies showed substantial improvement in the combined algorithm approach when compared to the use of any single microwave algorithm. An automated neural network cloud classifier for use over both land and ocean was independently developed and tested on Advanced Very High Resolution Radiometer (AVHRR) data. The global classifier achieved strict accuracy for 82% of the test samples, while a more localized version achieved strict accuracy for 89% of its own test set. These numbers provide hope for the eventual development of a global automated cloud classifier for use throughout the tropics and the temperate zones. The localized classifier was used in conjunction with gridded 15-minute averaged radar rain rates at 8km resolution produced from the current operational network of National Weather Service (NWS) radars, to investigate the relation between cloud type and rain rate over three regions of the continental United States and adjacent waters. The results indicate a substantially lower amount of available moisture in the Front Range of the Rocky Mountains than in the Midwest or in the eastern Gulf of Mexico.

  3. Multidate mapping of mosquito habitat. [Nebraska, South Dakota

    NASA Technical Reports Server (NTRS)

    Woodzick, T. L.; Maxwell, E. L.

    1977-01-01

    LANDSAT data from three overpasses formed the data base for a multidate classification of 15 ground cover categories in the margins of Lewis and Clark Lake, a fresh water impoundment between South Dakota and Nebraska. When scaled to match topographic maps of the area, the ground cover classification maps were used as a general indicator of potential mosquito-breeding habitat by distinguishing productive wetlands areas from nonproductive nonwetlands areas. The 12 channel multidate classification was found to have an accuracy 23% higher than the average of the three single date 4 channel classifications.

  4. Accurate mobile malware detection and classification in the cloud.

    PubMed

    Wang, Xiaolei; Yang, Yuexiang; Zeng, Yingzhi

    2015-01-01

    As the dominant smartphone operating system, Android has attracted the attention of malware authors and researchers alike. The number of types of Android malware is increasing rapidly despite the considerable number of proposed malware analysis systems. In this paper, by taking advantage of the low false-positive rate of misuse detection and the ability of anomaly detection to detect zero-day malware, we propose a novel hybrid detection system based on a new open-source framework, CuckooDroid, which enables the use of Cuckoo Sandbox's features to analyze Android malware through dynamic and static analysis. Our proposed system mainly consists of two parts: an anomaly detection engine performing abnormal app detection through dynamic analysis, and a signature detection engine performing known malware detection and classification with a combination of static and dynamic analysis. We evaluate our system using 5560 malware samples and 6000 benign samples. Experiments show that our anomaly detection engine with dynamic analysis is capable of detecting zero-day malware with a low false negative rate (1.16%) and an acceptable false positive rate (1.30%); it is worth noting that our signature detection engine with hybrid analysis can accurately classify malware samples with an average positive rate of 98.94%. Considering the intensive computing resources required by the static and dynamic analysis, our proposed detection system should be deployed off-device, such as in the cloud. The app store markets and ordinary users can access our detection system for malware detection through a cloud service.

  5. The relationship between tree growth patterns and likelihood of mortality: A study of two tree species in the Sierra Nevada

    USGS Publications Warehouse

    Das, A.J.; Battles, J.J.; Stephenson, N.L.; van Mantgem, P.J.

    2007-01-01

    We examined mortality of Abies concolor (Gord. & Glend.) Lindl. (white fir) and Pinus lambertiana Dougl. (sugar pine) by developing logistic models using three growth indices obtained from tree rings: average growth, growth trend, and count of abrupt growth declines. For P. lambertiana, models with average growth, growth trend, and count of abrupt declines improved overall prediction (78.6% dead trees correctly classified, 83.7% live trees correctly classified) compared with a model with average recent growth alone (69.6% dead trees correctly classified, 67.3% live trees correctly classified). For A. concolor, counts of abrupt declines and longer time intervals improved overall classification (trees with DBH ???20 cm: 78.9% dead trees correctly classified and 76.7% live trees correctly classified vs. 64.9% dead trees correctly classified and 77.9% live trees correctly classified; trees with DBH <20 cm: 71.6% dead trees correctly classified and 71.0% live trees correctly classified vs. 67.2% dead trees correctly classified and 66.7% live trees correctly classified). In general, count of abrupt declines improved live-tree classification. External validation of A. concolor models showed that they functioned well at stands not used in model development, and the development of size-specific models demonstrated important differences in mortality risk between understory and canopy trees. Population-level mortality-risk models were developed for A. concolor and generated realistic mortality rates at two sites. Our results support the contention that a more comprehensive use of the growth record yields a more robust assessment of mortality risk. ?? 2007 NRC.
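
    A logistic mortality model of the kind described above can be sketched with simulated growth indices; the coefficients, simulated data, and resulting classification rates are illustrative only and do not correspond to the study's trees.

```python
# Minimal sketch: logistic mortality model from tree-ring growth indices
# (average growth, growth trend, count of abrupt declines). Simulated data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(9)
n = 400
avg_growth = rng.gamma(2.0, 0.5, n)              # mean ring-width index
growth_trend = rng.normal(0, 0.1, n)             # slope of recent growth
abrupt_declines = rng.poisson(1.0, n)            # count of abrupt growth declines
logit = 0.5 - 2.5 * avg_growth - 5.0 * growth_trend + 0.7 * abrupt_declines
dead = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = np.column_stack([avg_growth, growth_trend, abrupt_declines])
model = LogisticRegression().fit(X, dead)
pred = model.predict(X)
tn, fp, fn, tp = confusion_matrix(dead, pred).ravel()
print("live correctly classified:", tn / (tn + fp))
print("dead correctly classified:", tp / (tp + fn))
```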

  6. Applying Cost-Sensitive Extreme Learning Machine and Dissimilarity Integration to Gene Expression Data Classification.

    PubMed

    Liu, Yanqiu; Lu, Huijuan; Yan, Ke; Xia, Haixia; An, Chunlin

    2016-01-01

    Embedding cost-sensitive factors into the classifiers increases the classification stability and reduces the classification costs for classifying high-scale, redundant, and imbalanced datasets, such as the gene expression data. In this study, we extend our previous work, that is, Dissimilar ELM (D-ELM), by introducing misclassification costs into the classifier. We name the proposed algorithm as the cost-sensitive D-ELM (CS-D-ELM). Furthermore, we embed rejection cost into the CS-D-ELM to increase the classification stability of the proposed algorithm. Experimental results show that the rejection cost embedded CS-D-ELM algorithm effectively reduces the average and overall cost of the classification process, while the classification accuracy still remains competitive. The proposed method can be extended to classification problems of other redundant and imbalanced data.
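
    A hedged sketch of a cost-sensitive extreme learning machine is given below: a random hidden layer followed by a weighted ridge solution for the output weights, with per-sample weights encoding misclassification costs. The dissimilarity-integration and rejection-cost components of CS-D-ELM are not reproduced.

```python
# Hedged sketch: cost-sensitive ELM = random hidden layer + weighted ridge output weights.
import numpy as np

def elm_train(X, y, n_hidden=100, class_cost=None, ridge=1e-2, seed=0):
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    T = (y[:, None] == classes[None, :]).astype(float)            # one-hot targets
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                                          # random hidden layer
    costs = np.ones(len(y)) if class_cost is None else np.array([class_cost[c] for c in y])
    C = np.diag(costs)                                              # per-sample misclassification cost
    beta = np.linalg.solve(H.T @ C @ H + ridge * np.eye(n_hidden), H.T @ C @ T)
    return (W, b, beta, classes)

def elm_predict(model, X):
    W, b, beta, classes = model
    return classes[np.argmax(np.tanh(X @ W + b) @ beta, axis=1)]

rng = np.random.default_rng(10)
X = np.vstack([rng.normal(0, 1, (180, 5)), rng.normal(1.5, 1, (20, 5))])   # imbalanced classes
y = np.array([0] * 180 + [1] * 20)
model = elm_train(X, y, class_cost={0: 1.0, 1: 9.0})    # heavier cost on the rare class
print((elm_predict(model, X) == y).mean())
```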

  7. Local classification: Locally weighted-partial least squares-discriminant analysis (LW-PLS-DA).

    PubMed

    Bevilacqua, Marta; Marini, Federico

    2014-08-01

    The possibility of devising a simple, flexible and accurate non-linear classification method, by extending the locally weighted partial least squares (LW-PLS) approach to the cases where the algorithm is used in a discriminant way (partial least squares discriminant analysis, PLS-DA), is presented. In particular, to assess which category an unknown sample belongs to, the proposed algorithm operates by identifying which training objects are most similar to the one to be predicted and building a PLS-DA model using these calibration samples only. Moreover, the influence of the selected training samples on the local model can be further modulated by adopting a non-uniform distance-based weighting scheme which allows the farthest calibration objects to have less impact than the closest ones. The performances of the proposed locally weighted-partial least squares-discriminant analysis (LW-PLS-DA) algorithm have been tested on three simulated data sets characterized by a varying degree of non-linearity: in all cases, a classification accuracy higher than 99% on external validation samples was achieved. Moreover, when also applied to a real data set (classification of rice varieties), characterized by a high extent of non-linearity, the proposed method provided an average correct classification rate of about 93% on the test set. Based on the preliminary results shown in this paper, the performance of the proposed LW-PLS-DA approach has proved to be comparable to, and in some cases better than, that obtained by other non-linear methods (k nearest neighbors, kernel-PLS-DA and, in the case of rice, counterpropagation neural networks). Copyright © 2014 Elsevier B.V. All rights reserved.
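
    The local-modelling idea can be sketched as follows: for each query, the k nearest training samples are selected and a small PLS-DA model is fitted on them alone. The distance-based sample weighting described above is omitted because scikit-learn's PLSRegression does not accept sample weights, so this shows only the unweighted local variant.

```python
# Hedged sketch of locally fitted PLS-DA: fit on the k nearest training samples per query.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def lw_plsda_predict(x_query, X_train, y_train, k=30, n_components=2):
    """Classify one query by fitting PLS-DA on its k nearest training samples only."""
    idx = np.argsort(np.linalg.norm(X_train - x_query, axis=1))[:k]   # local calibration set
    local_classes = np.unique(y_train[idx])
    if len(local_classes) == 1:                   # all neighbours agree: no model needed
        return local_classes[0]
    Y_local = (y_train[idx, None] == local_classes[None, :]).astype(float)  # one-hot targets
    pls = PLSRegression(n_components=n_components).fit(X_train[idx], Y_local)
    scores = pls.predict(x_query[None, :])[0]
    return local_classes[np.argmax(scores)]

rng = np.random.default_rng(11)
X = np.vstack([rng.normal(0, 1, (100, 10)), rng.normal(1.5, 1, (100, 10))])
y = np.array([0] * 100 + [1] * 100)
query = rng.normal(1.5, 1, 10)                    # drawn from class 1
print(lw_plsda_predict(query, X, y))               # expected: 1
```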

  8. An on-line BCI for control of hand grasp sequence and holding using adaptive probabilistic neural network.

    PubMed

    Hazrati, Mehrnaz Kh; Erfanian, Abbas

    2008-01-01

    This paper presents a new EEG-based Brain-Computer Interface (BCI) for on-line control of the sequence of hand grasping and holding in a virtual reality environment. The goal of this research is to develop an interaction technique that will allow the BCI to be effective in real-world scenarios for hand grasp control. Moreover, for consistency of the man-machine interface, it is desirable that the intended movement be the one the subject imagines. For this purpose, we developed an on-line BCI based on the classification of EEG associated with imagined hand grasping and the resting state. A classifier based on a probabilistic neural network (PNN) was introduced for classifying the EEG. The PNN is a feedforward neural network that realizes the Bayes decision discriminant function by estimating probability density functions using mixtures of Gaussian kernels. Two types of classification schemes were considered here for on-line hand control: adaptive and static. In contrast to static classification, the adaptive classifier was continuously updated on-line during recording. The experimental evaluation on six subjects on different days demonstrated that by using the static scheme, a classification accuracy as high as the rate obtained by the adaptive scheme can be achieved. In the best case, average classification accuracies of 93.0% and 85.8% were obtained using the adaptive and static schemes, respectively. The results obtained from more than 1500 trials on six subjects showed that an interactive virtual reality environment can be used as an effective tool for subject training in BCI.
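
    A minimal sketch of a probabilistic neural network, i.e. a Parzen-window Bayes classifier with Gaussian kernels, is given below; EEG feature extraction and the adaptive on-line updating described in the abstract are not shown, and the data are synthetic.

```python
# Minimal sketch of a probabilistic neural network (Gaussian-kernel Parzen classifier).
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        sq_dist = np.sum((X_train - x) ** 2, axis=1)
        kernel = np.exp(-sq_dist / (2.0 * sigma ** 2))        # Gaussian kernel per pattern unit
        # summation layer: average kernel activation per class (≈ class-conditional density)
        class_scores = [kernel[y_train == c].mean() for c in classes]
        preds.append(classes[int(np.argmax(class_scores))])   # decision layer
    return np.array(preds)

rng = np.random.default_rng(12)
X_train = np.vstack([rng.normal(0, 1, (60, 4)), rng.normal(2, 1, (60, 4))])
y_train = np.array([0] * 60 + [1] * 60)                       # e.g. rest vs. imagined grasp
X_test = np.vstack([rng.normal(0, 1, (20, 4)), rng.normal(2, 1, (20, 4))])
y_test = np.array([0] * 20 + [1] * 20)
print("accuracy:", (pnn_predict(X_train, y_train, X_test) == y_test).mean())
```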

  9. An automatic device for detection and classification of malaria parasite species in thick blood film.

    PubMed

    Kaewkamnerd, Saowaluck; Uthaipibull, Chairat; Intarapanich, Apichart; Pannarut, Montri; Chaotheing, Sastra; Tongsima, Sissades

    2012-01-01

    Current malaria diagnosis relies primarily on microscopic examination of Giemsa-stained thick and thin blood films. This method requires vigorously trained technicians to efficiently detect and classify the malaria parasite species such as Plasmodium falciparum (Pf) and Plasmodium vivax (Pv) for an appropriate drug administration. However, accurate classification of parasite species is difficult to achieve because of inherent technical limitations and human inconsistency. To improve performance of malaria parasite classification, many researchers have proposed automated malaria detection devices using digital image analysis. These image processing tools, however, focus on detection of parasites on thin blood films, which may not detect the existence of parasites due to the parasite scarcity on the thin blood film. The problem is aggravated with low parasitemia condition. Automated detection and classification of parasites on thick blood films, which contain more numbers of parasite per detection area, would address the previous limitation. The prototype of an automatic malaria parasite identification system is equipped with mountable motorized units for controlling the movements of objective lens and microscope stage. This unit was tested for its precision to move objective lens (vertical movement, z-axis) and microscope stage (in x- and y-horizontal movements). The average precision of x-, y- and z-axes movements were 71.481 ± 7.266 μm, 40.009 ± 0.000 μm, and 7.540 ± 0.889 nm, respectively. Classification of parasites on 60 Giemsa-stained thick blood films (40 blood films containing infected red blood cells and 20 control blood films of normal red blood cells) was tested using the image analysis module. By comparing our results with the ones verified by trained malaria microscopists, the prototype detected parasite-positive and parasite-negative blood films at the rate of 95% and 68.5% accuracy, respectively. For classification performance, the thick blood films with Pv parasite was correctly classified with the success rate of 75% while the accuracy of Pf classification was 90%. This work presents an automatic device for both detection and classification of malaria parasite species on thick blood film. The system is based on digital image analysis and featured with motorized stage units, designed to easily be mounted on most conventional light microscopes used in the endemic areas. The constructed motorized module could control the movements of objective lens and microscope stage at high precision for effective acquisition of quality images for analysis. The analysis program could accurately classify parasite species, into Pf or Pv, based on distribution of chromatin size.

  10. An experiment in multispectral, multitemporal crop classification using relaxation techniques

    NASA Technical Reports Server (NTRS)

    Davis, L. S.; Wang, C.-Y.; Xie, H.-C

    1983-01-01

    The paper describes the result of an experimental study concerning the use of probabilistic relaxation for improving pixel classification rates. Two LACIE sites were used in the study and in both cases, relaxation resulted in a marked improvement in classification rates.

  11. Fatty infiltration of the minor salivary glands is a selective feature of aging but not Sjögren's syndrome.

    PubMed

    Leehan, Kerry M; Pezant, Nathan P; Rasmussen, Astrid; Grundahl, Kiely; Moore, Jacen S; Radfar, Lida; Lewis, David M; Stone, Donald U; Lessard, Christopher J; Rhodus, Nelson L; Segal, Barbara M; Kaufman, C Erick; Scofield, R Hal; Sivils, Kathy L; Montgomery, Courtney; Farris, A Darise

    2017-12-01

    Determine the presence and assess the extent of fatty infiltration of the minor salivary glands (SG) of primary SS patients (pSS) as compared to those with non-SS sicca (nSS). Minor SG biopsy samples from 134 subjects with pSS (n = 72) or nSS (n = 62) were imaged. Total area and fatty replacement area for each glandular cross-section (n = 4-6 cross-sections per subject) were measured using Image J (National Institutes of Health, Bethesda, MD). The observer was blinded to subject classification status. The average area of fatty infiltration calculated per subject was evaluated by logistic regression and general linearized models (GLM) to assess relationships between fatty infiltration and clinical exam results, extent of fibrosis and age. The average area of fatty infiltration for subjects with pSS (median % (range): 4.97 (0.05-30.2)) was not significantly different from that of those with nSS (3.75 (0.087-41.9)). Infiltration severity varied widely, and subjects with fatty replacement greater than 6% were equivalently distributed between pSS and nSS participants (χ² p = .50). Age accounted for all apparent relationships between fatty infiltration and fibrosis or reduced saliva flow. The all-inclusive GLM for prediction of pSS versus non-SS classification including fibrosis, age, fatty replacement, and focus score was not significantly different from any desaturated model. In no iteration of the model did fatty replacement exert a significant effect on the capacity to predict pSS classification. Fatty infiltration is an age-associated phenomenon and not a selective feature of Sjögren's syndrome. Sicca patients who do not fulfil pSS criteria have similar rates of fatty infiltration of the minor SG.

  12. [Prevalence of hearing impairment in northwestern Germany. Results of an epidemiological study on hearing status (HÖRSTAT)].

    PubMed

    von Gablenz, P; Holube, I

    2015-03-01

    A pure-tone average of 0.5, 1, 2, and 4 kHz in the better ear (PTA-4) is the international standard criterion set by the World Health Organization (WHO) to describe hearing loss. Presently, there are no epidemiological data on hearing loss in Germany based on this criterion. A representative sample of adults from Oldenburg and Emden were invited for a hearing assessment. This article analyzes the association between hearing loss and age, sex, noise, occupation, and educational level. Age- and sex-specific prevalence rates following the WHO classification are compared with international findings. According to the WHO classification, the prevalence of hearing impairment in the study cohort (n=1,866) is approx. 16%. In men, who more commonly work in noisy jobs, a higher prevalence rate is observed than in women of the same age. Nevertheless, sex differences in the present study are smaller than those reported in most international studies. PTA-4 is approximately the same for men and women when effects of occupational noise are controlled, but differences in prevalence between occupational areas are still significant. Compared with international findings, age-specific prevalence rates in HÖRSTAT are low. In the synopsis of epidemiological studies of the past 25 years, a trend toward decreasing prevalence in middle and higher age groups can be observed.

  13. An opto-electronic joint detection system based on DSP aiming at early cervical cancer screening

    NASA Astrophysics Data System (ADS)

    Wang, Weiya; Jia, Mengyu; Gao, Feng; Yang, Lihong; Qu, Pengpeng; Zou, Changping; Liu, Pengxi; Zhao, Huijuan

    2015-02-01

    Cervical cancer screening at a pre-cancerous stage helps reduce mortality among women. An opto-electronic joint detection system based on a DSP and aimed at early cervical cancer screening is introduced in this paper. In this system, three electrodes alternately discharge into the cervical tissue and three light-emitting diodes of different wavelengths alternately irradiate the cervical tissue. The relative optical reflectance and the electrical voltage attenuation curve are then obtained by optical and electrical detection, respectively. The system is based on a DSP to keep the instrument portable and inexpensive. Using the relative reflectance and the voltage attenuation constant, a classification algorithm based on a Support Vector Machine (SVM) discriminates abnormal cervical tissue from normal tissue. We use particle swarm optimization to optimize the two key parameters of the SVM, i.e. the kernel parameter and the cost factor. Clinical data were collected from 313 patients to build a clinical database of tissue responses under optical and electrical stimulation, with the histopathologic examination as the gold standard. The classification results show that the opto-electronic joint detection has a higher total coincidence rate than separate optical detection or separate electrical detection. The sensitivity, specificity, and total coincidence rate increase with the number of samples in the training set. The average total coincidence rate of the system reaches 85.1% compared with the histopathologic examination.
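
    A hedged sketch of tuning an RBF-SVM's cost factor and kernel parameter with a basic particle swarm optimiser is given below; the feature set, swarm settings, and dataset are assumptions, and the study's exact PSO variant is not reproduced. The search is done in log10 space with cross-validated accuracy as the fitness.

```python
# Hedged sketch: basic PSO over (log10 C, log10 gamma) for an RBF-SVM; synthetic data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=6, n_informative=4, random_state=0)

def fitness(pos):
    C, gamma = 10.0 ** pos[0], 10.0 ** pos[1]
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=5).mean()

rng = np.random.default_rng(13)
n_particles, n_iter = 10, 20
pos = rng.uniform([-1, -4], [3, 0], size=(n_particles, 2))    # bounds on log10(C), log10(gamma)
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[np.argmax(pbest_fit)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)   # standard update
    pos = np.clip(pos + vel, [-1, -4], [3, 0])
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[np.argmax(pbest_fit)].copy()

print("best log10(C), log10(gamma):", gbest.round(2), "CV accuracy:", pbest_fit.max().round(3))
```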

  14. Estimating the Classification Efficiency of a Test Battery.

    ERIC Educational Resources Information Center

    De Corte, Wilfried

    2000-01-01

    Shows how a theorem proven by H. Brogden (1951, 1959) can be used to estimate the allocation average (a predictor based classification of a test battery) assuming that the predictor intercorrelations and validities are known and that the predictor variables have a joint multivariate normal distribution. (SLD)

  15. "Rate My Therapist": Automated Detection of Empathy in Drug and Alcohol Counseling via Speech and Language Processing

    PubMed Central

    Xiao, Bo; Imel, Zac E.; Georgiou, Panayiotis G.; Atkins, David C.; Narayanan, Shrikanth S.

    2015-01-01

    The technology for evaluating patient-provider interactions in psychotherapy (observational coding) has not changed in 70 years. It is labor-intensive, error-prone, and expensive, limiting its use in evaluating psychotherapy in the real world. Engineering solutions from speech and language processing provide new methods for the automatic evaluation of provider ratings from session recordings. The primary data are 200 Motivational Interviewing (MI) sessions from a study on MI training methods with observer ratings of counselor empathy. Automatic Speech Recognition (ASR) was used to transcribe sessions, and the resulting words were used in a text-based predictive model of empathy. Two supporting datasets trained the speech processing tasks, including ASR (1200 transcripts from heterogeneous psychotherapy sessions and 153 transcripts and session recordings from 5 MI clinical trials). The accuracy of computationally derived empathy ratings was evaluated against human ratings for each provider. Computationally derived empathy scores and classifications (high vs. low) were highly accurate against human-based codes and classifications, with a correlation of 0.65 and an F-score (a weighted average of sensitivity and specificity) of 0.86, respectively. Empathy prediction using human transcription as input (as opposed to ASR) resulted in a slight increase in prediction accuracies, suggesting that the fully automatic system with ASR is relatively robust. Using speech and language processing methods, it is possible to generate accurate predictions of provider performance in psychotherapy from audio recordings alone. This technology can support large-scale evaluation of psychotherapy for dissemination and process studies. PMID:26630392
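
    The evaluation step alone can be sketched as below, assuming one human-rated and one computationally derived empathy score per provider (the arrays here are synthetic placeholders): a Pearson correlation compares the continuous scores, and an F-score compares high/low classifications obtained by a median split.

    ```python
    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.metrics import f1_score

    rng = np.random.default_rng(0)
    human_scores = rng.uniform(1.0, 7.0, size=200)                  # placeholder observer ratings
    model_scores = human_scores + rng.normal(0.0, 1.0, size=200)    # placeholder model predictions

    r, _ = pearsonr(human_scores, model_scores)                     # agreement on the continuous scale

    threshold = np.median(human_scores)                             # split providers into high vs. low empathy
    f1 = f1_score(human_scores >= threshold, model_scores >= threshold)
    print(f"correlation r = {r:.2f}, F-score = {f1:.2f}")
    ```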

  16. Image-classification-based global dimming algorithm for LED backlights in LCDs

    NASA Astrophysics Data System (ADS)

    Qibin, Feng; Huijie, He; Dong, Han; Lei, Zhang; Guoqiang, Lv

    2015-07-01

    Backlight dimming can help LCDs reduce power consumption and improve the contrast ratio (CR). With fixed parameters, a dimming algorithm cannot achieve satisfactory results for all kinds of images. This paper introduces an image-classification-based global dimming algorithm. The proposed classification method, designed specifically for backlight dimming, is based on the luminance and CR of the input images, and the parameters for the backlight dimming level and pixel compensation are adapted to the image class. Simulation results show that the classification-based dimming algorithm achieves an 86.13% improvement in power reduction compared with dimming without classification, with almost the same display quality. A prototype was developed, and no distortions are perceived when playing videos. The practical average power reduction of the prototype TV is 18.72% compared with a common TV without dimming.

  17. 7 CFR 400.304 - Nonstandard Classification determinations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... changes are necessary in assigned yields or premium rates under the conditions set forth in § 400.304(f... Classification determinations. (a) Nonstandard Classification determinations can affect a change in assigned yields, premium rates, or both from those otherwise prescribed by the insurance actuarial tables. (b...

  18. Regulation of IAP (Inhibitor of Apoptosis) Gene Expression by the p53 Tumor Suppressor Protein

    DTIC Science & Technology

    2005-05-01

    [Abstract text not available for this record; only report-form keywords (adenovirus, gene therapy, polymorphism) and figure-legend fragments (averaged results of three independent experiments with standard error; level of p53 in infected cells detected with antibody Ab-6; oligomerized BAK in highly purified mitochondria) are recoverable.]

  19. Differentiation of several interstitial lung disease patterns in HRCT images using support vector machine: role of databases on performance

    NASA Astrophysics Data System (ADS)

    Kale, Mandar; Mukhopadhyay, Sudipta; Dash, Jatindra K.; Garg, Mandeep; Khandelwal, Niranjan

    2016-03-01

    Interstitial lung disease (ILD) is a complicated group of pulmonary disorders. High Resolution Computed Tomography (HRCT) is considered the best imaging technique for the analysis of different pulmonary disorders. HRCT findings can be categorised into several patterns, viz. consolidation, emphysema, ground glass opacity, nodular, and normal, based on their texture-like appearance. Clinicians often find it difficult to diagnose these patterns because of their complex nature; in such a scenario, a computer-aided diagnosis system can help clinicians identify them. Several approaches have been proposed for the classification of ILD patterns, typically combining the computation of textural features with the training and testing of classifiers such as artificial neural networks (ANN) and support vector machines (SVM). In this paper, wavelet features are calculated from two different ILD databases, the publicly available MedGIFT ILD database and a private ILD database, followed by performance evaluation of ANN and SVM classifiers in terms of average accuracy. The average classification accuracy of the SVM is greater than that of the ANN when the classifiers are trained and tested on the same database. The investigation was extended to test the variation in accuracy when training and testing are performed on alternate databases, and when the classifiers are trained and tested on a database formed by merging samples of the same class from the two individual databases. The average classification accuracy drops when two independent databases are used for training and testing, respectively, and improves significantly when the classifiers are trained and tested on the merged database, which indicates the dependency of classification accuracy on the training data. It is also observed that the SVM outperforms the ANN when the same database is used for training and testing.
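
    A minimal sketch of this feature/classifier pairing, not the authors' pipeline: wavelet sub-band energies are computed for each HRCT patch and fed to SVM and ANN classifiers evaluated with cross-validation on a single database. The patch size, wavelet, classifier settings, and the random placeholder data are illustrative assumptions.

    ```python
    import numpy as np
    import pywt
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC

    def wavelet_energy_features(patch, wavelet="db4", level=2):
        """Mean absolute coefficient of each wavelet sub-band of a 2-D patch."""
        coeffs = pywt.wavedec2(patch, wavelet, level=level)
        feats = [np.mean(np.abs(coeffs[0]))]                  # approximation band
        for cH, cV, cD in coeffs[1:]:                         # detail bands at each level
            feats += [np.mean(np.abs(c)) for c in (cH, cV, cD)]
        return np.array(feats)

    rng = np.random.default_rng(0)
    patches = rng.random((200, 32, 32))                       # placeholder ILD texture patches
    labels = rng.integers(0, 5, 200)                          # placeholder pattern labels

    X = np.array([wavelet_energy_features(p) for p in patches])
    for name, clf in [("SVM", SVC()), ("ANN", MLPClassifier(max_iter=1000))]:
        acc = cross_val_score(clf, X, labels, cv=5).mean()
        print(f"{name}: average accuracy = {acc:.3f}")
    ```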

  20. Correlation of the Rock Mass Rating (RMR) System with the Unified Soil Classification System (USCS): Introduction of the Weak Rock Mass Rating System (W-RMR)

    NASA Astrophysics Data System (ADS)

    Warren, Sean N.; Kallu, Raj R.; Barnard, Chase K.

    2016-11-01

    Underground gold mines in Nevada are exploiting increasingly deeper ore bodies composed of weak to very weak rock masses. The Rock Mass Rating (RMR) classification system is widely used at underground gold mines in Nevada and is applicable in fair to good-quality rock masses, but it is difficult to apply and loses reliability in very weak rock mass to soil-like material. Because very weak rock masses are transition materials that border engineering rock mass and soil classification systems, soil classification may sometimes be easier and more appropriate for providing insight into material behavior and properties. The Unified Soil Classification System (USCS) is the most likely choice for the classification of very weak rock mass to soil-like material because of its accepted use in tunnel engineering projects and its ability to predict soil-like material behavior underground. A correlation between the RMR and USCS systems was developed by comparing underground geotechnical RMR mapping to laboratory testing of bulk samples from the same locations, thereby assigning a numeric RMR value to the USCS classification that can be used in spreadsheet calculations and geostatistical analyses. The geotechnical classification system presented in this paper, including a USCS-RMR correlation, RMR rating equations, and the Geo-Pick Strike Index, is collectively introduced as the Weak Rock Mass Rating System (W-RMR). It is the authors' hope that this system will aid in the classification of weak rock masses and in the development of more usable design tools based on the RMR system. More broadly, the RMR-USCS correlation and the W-RMR system help define the transition between engineering soil and rock mass classification systems and may provide insight for geotechnical design in very weak rock masses.

  1. Effect of radiance-to-reflectance transformation and atmosphere removal on maximum likelihood classification accuracy of high-dimensional remote sensing data

    NASA Technical Reports Server (NTRS)

    Hoffbeck, Joseph P.; Landgrebe, David A.

    1994-01-01

    Many analysis algorithms for high-dimensional remote sensing data require that the remotely sensed radiance spectra be transformed to approximate reflectance to allow comparison with a library of laboratory reflectance spectra. In maximum likelihood classification, however, the remotely sensed spectra are compared to training samples, thus a transformation to reflectance may or may not be helpful. The effect of several radiance-to-reflectance transformations on maximum likelihood classification accuracy is investigated in this paper. We show that the empirical line approach, LOWTRAN7, flat-field correction, single spectrum method, and internal average reflectance are all non-singular affine transformations, and that non-singular affine transformations have no effect on discriminant analysis feature extraction and maximum likelihood classification accuracy. (An affine transformation is a linear transformation with an optional offset.) Since the Atmosphere Removal Program (ATREM) and the log residue method are not affine transformations, experiments with Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data were conducted to determine the effect of these transformations on maximum likelihood classification accuracy. The average classification accuracy of the data transformed by ATREM and the log residue method was slightly less than the accuracy of the original radiance data. Since the radiance-to-reflectance transformations allow direct comparison of remotely sensed spectra with laboratory reflectance spectra, they can be quite useful in labeling the training samples required by maximum likelihood classification, but these transformations have only a slight effect or no effect at all on discriminant analysis and maximum likelihood classification accuracy.
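
    The affine-invariance claim can be checked numerically with a small sketch: apply a non-singular affine transform y = Ax + b to synthetic "spectra" and confirm that Gaussian maximum likelihood (quadratic discriminant) class assignments are unchanged, because the class means and covariances transform consistently and the extra log-determinant term is common to all classes. The data below are random stand-ins, not AVIRIS imagery.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

    rng = np.random.default_rng(1)
    d, n = 10, 300
    X = np.vstack([rng.normal(0.0, 1.0, (n, d)), rng.normal(0.8, 1.2, (n, d))])
    y = np.repeat([0, 1], n)

    A = rng.normal(size=(d, d)) + 5 * np.eye(d)     # non-singular linear part
    b = rng.normal(size=d)                          # offset
    X_affine = X @ A.T + b                          # affinely transformed "reflectance" data

    pred_orig = QuadraticDiscriminantAnalysis().fit(X, y).predict(X)
    pred_tran = QuadraticDiscriminantAnalysis().fit(X_affine, y).predict(X_affine)
    print("agreement between class assignments:", np.mean(pred_orig == pred_tran))
    ```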

  2. Outcome of Surgical Fixation of Lateral Column Distal Humerus Fractures.

    PubMed

    Von Keudell, Arvind; Kachooei, Amir R; Moradi, Ali; Jupiter, Jesse B

    2016-05-01

    The purpose of this study was to report the long-term outcomes and complications of surgically fixated lateral unicondylar distal humerus fractures. Retrospective review. Two level 1 trauma centers, Massachusetts General Hospital and Brigham and Women's Hospital. Between 2002 and 2014, 24 patients treated with open reduction and internal fixation for lateral unicondylar distal humerus fractures (OTA/AO type B1 fractures) were retrospectively reviewed. Open reduction and internal fixation. Union rates, early complications, functional outcome, and the range of elbow motion were evaluated, along with the Disabilities of the Arm, Shoulder, and Hand (DASH) score, the Mayo Elbow Performance Index, satisfaction, a pain scale, and the American Shoulder and Elbow Surgeons score. The mean age of patients was 46 ± 23 years at the time of surgery. The average final flexion/extension arc of motion was 108°. Reoperations were performed in 9 of 24 elbows after an average of 21 ± 31 months. Twenty of the 24 patients were available for clinical follow-up at an average of 70 months (range: 16-144 months). At final follow-up, the DASH score averaged 10.8 ± 11.7 points, satisfaction 9.5 ± 1.2, and the American Shoulder and Elbow Surgeons score 88.5 ± 13.3 points. Based on the functional classification proposed by Jupiter, 16 patients demonstrated good to excellent results, 2 fair, and 2 poor results. Open reduction and internal fixation of isolated lateral column distal humerus fractures can result in high union rates with acceptable outcome scores and high patient satisfaction despite a high reoperation rate. Therapeutic Level IV. See Instructions for Authors for a complete description of levels of evidence.

  3. A nearest neighbor approach for automated transporter prediction and categorization from protein sequences.

    PubMed

    Li, Haiquan; Dai, Xinbin; Zhao, Xuechun

    2008-05-01

    Membrane transport proteins play a crucial role in the import and export of ions, small molecules or macromolecules across biological membranes. Currently, there are a limited number of published computational tools which enable the systematic discovery and categorization of transporters prior to costly experimental validation. To approach this problem, we utilized a nearest neighbor method which seamlessly integrates homologous search and topological analysis into a machine-learning framework. Our approach satisfactorily distinguished 484 transporter families in the Transporter Classification Database, a curated and representative database for transporters. A five-fold cross-validation on the database achieved a positive classification rate of 72.3% on average. Furthermore, this method successfully detected transporters in seven model and four non-model organisms, ranging from archaean to mammalian species. A preliminary literature-based validation has cross-validated 65.8% of our predictions on the 11 organisms, including 55.9% of our predictions overlapping with 83.6% of the predicted transporters in TransportDB.

  4. Feature Extraction and Selection for Myoelectric Control Based on Wearable EMG Sensors.

    PubMed

    Phinyomark, Angkoon; N Khushaba, Rami; Scheme, Erik

    2018-05-18

    Specialized myoelectric sensors have been used in prosthetics for decades, but, with recent advancements in wearable sensors, wireless communication and embedded technologies, wearable electromyographic (EMG) armbands are now commercially available for the general public. Due to physical, processing, and cost constraints, however, these armbands typically sample EMG signals at a lower frequency (e.g., 200 Hz for the Myo armband) than their clinical counterparts. It remains unclear whether existing EMG feature extraction methods, which largely evolved based on EMG signals sampled at 1000 Hz or above, are still effective for use with these emerging lower-bandwidth systems. In this study, the effects of sampling rate (low: 200 Hz vs. high: 1000 Hz) on the classification of hand and finger movements were evaluated for twenty-six different individual features and eight sets of multiple features using a variety of datasets comprised of both able-bodied and amputee subjects. The results show that, on average, classification accuracies drop significantly ( p.

  5. 77 FR 39747 - Changes in Postal Rates

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-05

    ... with the Commission of a proposal characterized as a minor classification change under 39 CFR parts 3090 and 3091, along with a conforming revision to the Mail Classification Schedule (MCS).\\1\\ The... Flat Rate Envelope options. \\1\\ Notice of United States Postal Service of Classification Changes, June...

  6. Perforated peptic ulcer: clinical presentation, surgical outcomes, and the accuracy of the Boey scoring system in predicting postoperative morbidity and mortality.

    PubMed

    Lohsiriwat, Varut; Prapasrivorakul, Siriluck; Lohsiriwat, Darin

    2009-01-01

    The purposes of this study were to determine clinical presentations and surgical outcomes of perforated peptic ulcer (PPU), and to evaluate the accuracy of the Boey scoring system in predicting mortality and morbidity. We carried out a retrospective study of patients undergoing emergency surgery for PPU between 2001 and 2006 in a university hospital. Clinical presentations and surgical outcomes were analyzed. Adjusted odds ratio (OR) of each Boey score on morbidity and mortality rate was compared with zero risk score. Receiver-operating characteristic curve analysis was used to compare the predictive ability between Boey score, American Society of Anesthesiologists (ASA) classification, and Mannheim Peritonitis Index (MPI). The study included 152 patients with average age of 52 years (range: 15-88 years), and 78% were male. The most common site of PPU was the prepyloric region (74%). Primary closure and omental graft was the most common procedure performed. Overall mortality rate was 9% and the complication rate was 30%. The mortality rate increased progressively with increasing numbers of the Boey score: 1%, 8% (OR=2.4), 33% (OR=3.5), and 38% (OR=7.7) for 0, 1, 2, and 3 scores, respectively (p<0.001). The morbidity rates for 0, 1, 2, and 3 Boey scores were 11%, 47% (OR=2.9), 75% (OR=4.3), and 77% (OR=4.9), respectively (p<0.001). Boey score and ASA classification appeared to be better than MPI for predicting the poor surgical outcomes. Perforated peptic ulcer is associated with high rates of mortality and morbidity. The Boey risk score serves as a simple and precise predictor for postoperative mortality and morbidity.

  7. Average speaking fundamental frequency in soprano singers with and without symptoms of vocal attrition.

    PubMed

    Drew, R; Sapir, S

    1995-06-01

    Nineteen trained soprano singers aged 18-30 years vocalized tasks designed to assess average speaking fundamental frequency (SFF) during spontaneous speaking and reading. Vocal range and perceptual characteristics while singing with low intensity and high frequency were also assessed, and subjects completed a survey of vocal habits/symptoms. Recorded signals were digitized prior to being analyzed for SFF using the Kay Computerized Speech Lab program. Subjects were assigned to a normal voice or impaired voice group based on ratings of perceptual tasks and survey results. Data analysis showed group differences in mean SFF, no differences in vocal range, higher mean SFF values for reading than speaking, and 58% ability to perceive speaking in low pitch. The role of speaking in too low pitch as causal for vocal symptoms and need for voice classification differentiation in vocal performance studies are discussed.

  8. Does ASA classification impact success rates of endovascular aneurysm repairs?

    PubMed

    Conners, Michael S; Tonnessen, Britt H; Sternbergh, W Charles; Carter, Glen; Yoselevitz, Moises; Money, Samuel R

    2002-09-01

    The purpose of this study was to evaluate the technical success, clinical success, postoperative complication rate, need for a secondary procedure, and mortality rate with endovascular aneurysm repair (EAR), based on the physical status classification scheme advocated by the American Society of Anesthesiologists (ASA). At a single institution 167 patients underwent attempted EAR. Query of a prospectively maintained database supplemented with a retrospective review of medical records was used to gather statistics pertaining to patient demographics and outcome. In patients selected for EAR on the basis of acceptable anatomy, technical and clinical success rates were not significantly different among the different ASA classifications. Importantly, postoperative complication and 30-day mortality rates do not appear to significantly differ among the different ASA classifications in this patient population.

  9. Compensatory neurofuzzy model for discrete data classification in biomedical

    NASA Astrophysics Data System (ADS)

    Ceylan, Rahime

    2015-03-01

    Biomedical data fall into two main categories: signals and discrete data, so studies in this area address either biomedical signal classification or biomedical discrete data classification. Artificial intelligence models exist for the classification of ECG, EMG, or EEG signals; likewise, many models in the literature classify discrete data, i.e., sample values such as the results of blood analyses or biopsies in the medical process. Not every algorithm achieves a high accuracy rate in classifying both signals and discrete data. In this study, a compensatory neurofuzzy network model is presented for the classification of discrete data in biomedical pattern recognition. The compensatory neurofuzzy network is a hybrid, binary classifier in which the parameters of the fuzzy systems are updated by the backpropagation algorithm. The classifier model was applied to two benchmark datasets (the Wisconsin Breast Cancer dataset and the Pima Indian Diabetes dataset). Experimental studies show that the compensatory neurofuzzy network model achieved a 96.11% accuracy rate on the breast cancer dataset and a 69.08% accuracy rate on the diabetes dataset with only 10 iterations.

  10. Autonomous target recognition using remotely sensed surface vibration measurements

    NASA Astrophysics Data System (ADS)

    Geurts, James; Ruck, Dennis W.; Rogers, Steven K.; Oxley, Mark E.; Barr, Dallas N.

    1993-09-01

    The remotely measured surface vibration signatures of tactical military ground vehicles are investigated for use in target classification and identification friend or foe (IFF) systems. The use of remote surface vibration sensing by a laser radar reduces the effects of partial occlusion, concealment, and camouflage experienced by automatic target recognition systems using traditional imagery in a tactical battlefield environment. Linear Predictive Coding (LPC) efficiently represents the vibration signatures and nearest neighbor classifiers exploit the LPC feature set using a variety of distortion metrics. Nearest neighbor classifiers achieve an 88 percent classification rate in an eight class problem, representing a classification performance increase of thirty percent from previous efforts. A novel confidence figure of merit is implemented to attain a 100 percent classification rate with less than 60 percent rejection. The high classification rates are achieved on a target set which would pose significant problems to traditional image-based recognition systems. The targets are presented to the sensor in a variety of aspects and engine speeds at a range of 1 kilometer. The classification rates achieved demonstrate the benefits of using remote vibration measurement in a ground IFF system. The signature modeling and classification system can also be used to identify rotary and fixed-wing targets.
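
    As an illustrative sketch (not the original implementation), LPC coefficients can be estimated from a vibration signature with the autocorrelation (Yule-Walker) method and classified with a 1-nearest-neighbour rule; the signal contents, LPC order, and Euclidean metric below are assumptions.

    ```python
    import numpy as np
    from scipy.linalg import solve_toeplitz
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    def lpc_coefficients(signal, order=12):
        """LPC coefficients via the autocorrelation method (solves the Yule-Walker equations)."""
        signal = signal - np.mean(signal)
        r = np.correlate(signal, signal, mode="full")[len(signal) - 1:]   # autocorrelation r[0..]
        return solve_toeplitz(r[:order], r[1:order + 1])

    rng = np.random.default_rng(0)
    signatures = rng.normal(size=(80, 2048))        # placeholder vibration signatures
    classes = rng.integers(0, 8, 80)                # eight vehicle classes

    X = np.array([lpc_coefficients(s) for s in signatures])
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=1), X, classes, cv=5).mean()
    print("cross-validated classification rate:", acc)
    ```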

  11. Performance Evaluation of Frequency Transform Based Block Classification of Compound Image Segmentation Techniques

    NASA Astrophysics Data System (ADS)

    Selwyn, Ebenezer Juliet; Florinabel, D. Jemi

    2018-04-01

    Compound image segmentation plays a vital role in the compression of computer screen images, which mix textual, graphical, and pictorial content. In this paper, we compare two transform-based block classification methods for compound images using metrics such as classification speed, precision, and recall rate. Block-based classification approaches normally divide the compound image into fixed-size, non-overlapping blocks; a frequency transform such as the Discrete Cosine Transform (DCT) or the Discrete Wavelet Transform (DWT) is then applied to each block. The mean and standard deviation are computed for each 8 × 8 block and used as the feature set to classify blocks as text/graphics or picture/background. The classification accuracy of the block-classification-based segmentation techniques is measured with precision and recall rates. Compound images with smooth and complex backgrounds containing text of varying size, colour, and orientation are considered for testing. Experimental evidence shows that DWT-based segmentation improves the recall and precision rates by approximately 2.3% over DCT-based segmentation, at the cost of an increase in block classification time, for both smooth and complex background images.
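
    The block-classification step can be sketched as follows: split the compound image into non-overlapping 8 × 8 blocks, apply a 2-D DCT (a DWT could be substituted via pywt), and use the mean and standard deviation of each block's coefficients as the feature pair for a text/graphics versus picture/background decision. The threshold and the random image below are placeholders.

    ```python
    import numpy as np
    from scipy.fft import dctn

    def block_features(image, block=8):
        """Mean and standard deviation of the DCT coefficients of each block."""
        h, w = image.shape
        feats = []
        for i in range(0, h - h % block, block):
            for j in range(0, w - w % block, block):
                coeffs = dctn(image[i:i + block, j:j + block], norm="ortho")
                feats.append((coeffs.mean(), coeffs.std()))
        return np.array(feats)

    rng = np.random.default_rng(0)
    screen = rng.random((64, 64))                        # placeholder compound image
    features = block_features(screen)
    is_text_block = features[:, 1] > 0.15                # hypothetical threshold on the std deviation
    print(f"{int(is_text_block.sum())} of {len(features)} blocks classified as text/graphics")
    ```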

  12. Comparative Approach of MRI-Based Brain Tumor Segmentation and Classification Using Genetic Algorithm.

    PubMed

    Bahadure, Nilesh Bhaskarrao; Ray, Arun Kumar; Thethi, Har Pal

    2018-01-17

    The detection of a brain tumor and its classification from modern imaging modalities is a primary concern, but it is time-consuming and tedious work for radiologists or clinical supervisors. The accuracy with which radiologists detect and classify tumor stages depends on their experience alone, so computer-aided technology is important for improving diagnostic accuracy. In this study, to improve the performance of tumor detection, we investigated a comparative approach of different segmentation techniques and selected the best one by comparing their segmentation scores. Further, to improve classification accuracy, a genetic algorithm is employed for the automatic classification of the tumor stage. The classification decision is supported by extracting relevant features and calculating the tumor area. The experimental results of the proposed technique are evaluated and validated for performance and quality on magnetic resonance brain images, based on segmentation score, accuracy, sensitivity, specificity, and the Dice similarity index coefficient. The experiments achieved 92.03% accuracy, 91.42% specificity, 92.36% sensitivity, and an average segmentation score between 0.82 and 0.93, demonstrating the effectiveness of the proposed technique for identifying normal and abnormal tissues in brain MR images. The experiments also obtained an average Dice similarity index coefficient of 93.79%, which indicates good overlap between the automatically extracted tumor regions and the tumor regions manually extracted by radiologists.
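
    The Dice similarity index used above can be sketched in a few lines, assuming binary masks for the automatically segmented and the manually delineated tumour regions.

    ```python
    import numpy as np

    def dice_coefficient(auto_mask, manual_mask):
        """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
        auto_mask = np.asarray(auto_mask, dtype=bool)
        manual_mask = np.asarray(manual_mask, dtype=bool)
        intersection = np.logical_and(auto_mask, manual_mask).sum()
        total = auto_mask.sum() + manual_mask.sum()
        return 2.0 * intersection / total if total else 1.0

    # A value close to 1 (e.g., the reported 93.79% expressed as 0.9379) indicates strong overlap.
    ```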

  13. Epidemiology of Hospitalizations Associated with Invasive Candidiasis, United States, 2002–20121

    PubMed Central

    Strollo, Sara; Lionakis, Michail S.; Adjemian, Jennifer; Steiner, Claudia A.

    2017-01-01

    Invasive candidiasis is a major nosocomial fungal disease in the United States associated with high rates of illness and death. We analyzed inpatient hospitalization records from the Healthcare Cost and Utilization Project to estimate incidence of invasive candidiasis–associated hospitalizations in the United States. We extracted data for 33 states for 2002–2012 by using codes from the International Classification of Diseases, 9th Revision, Clinical Modification, for invasive candidiasis; we excluded neonatal cases. The overall age-adjusted average annual rate was 5.3 hospitalizations/100,000 population. Highest risk was for adults >65 years of age, particularly men. Median length of hospitalization was 21 days; 22% of patients died during hospitalization. Median unadjusted associated cost for inpatient care was $46,684. Age-adjusted annual rates decreased during 2005–2012 for men (annual change –3.9%) and women (annual change –4.5%) and across nearly all age groups. We report a high mortality rate and decreasing incidence of hospitalizations for this disease. PMID:27983497

  14. Epidemiology of Hospitalizations Associated with Invasive Candidiasis, United States, 2002-20121.

    PubMed

    Strollo, Sara; Lionakis, Michail S; Adjemian, Jennifer; Steiner, Claudia A; Prevots, D Rebecca

    2016-01-01

    Invasive candidiasis is a major nosocomial fungal disease in the United States associated with high rates of illness and death. We analyzed inpatient hospitalization records from the Healthcare Cost and Utilization Project to estimate incidence of invasive candidiasis-associated hospitalizations in the United States. We extracted data for 33 states for 2002-2012 by using codes from the International Classification of Diseases, 9th Revision, Clinical Modification, for invasive candidiasis; we excluded neonatal cases. The overall age-adjusted average annual rate was 5.3 hospitalizations/100,000 population. Highest risk was for adults >65 years of age, particularly men. Median length of hospitalization was 21 days; 22% of patients died during hospitalization. Median unadjusted associated cost for inpatient care was $46,684. Age-adjusted annual rates decreased during 2005-2012 for men (annual change -3.9%) and women (annual change -4.5%) and across nearly all age groups. We report a high mortality rate and decreasing incidence of hospitalizations for this disease.

  15. Effect of Subliminal Lexical Priming on the Subjective Perception of Images: A Machine Learning Approach.

    PubMed

    Mohan, Dhanya Menoth; Kumar, Parmod; Mahmood, Faisal; Wong, Kian Foong; Agrawal, Abhishek; Elgendi, Mohamed; Shukla, Rohit; Ang, Natania; Ching, April; Dauwels, Justin; Chan, Alice H D

    2016-01-01

    The purpose of the study is to examine the effect of subliminal priming in terms of the perception of images influenced by words with positive, negative, and neutral emotional content, through electroencephalograms (EEGs). Participants were instructed to rate how much they like the stimuli images, on a 7-point Likert scale, after being subliminally exposed to masked lexical prime words that exhibit positive, negative, and neutral connotations with respect to the images. Simultaneously, the EEGs were recorded. Statistical tests such as repeated measures ANOVAs and two-tailed paired-samples t-tests were performed to measure significant differences in the likability ratings among the three prime affect types; the results showed a strong shift in the likeness judgment for the images in the positively primed condition compared to the other two. The acquired EEGs were examined to assess the difference in brain activity associated with the three different conditions. The consistent results obtained confirmed the overall priming effect on participants' explicit ratings. In addition, machine learning algorithms such as support vector machines (SVMs), and AdaBoost classifiers were applied to infer the prime affect type from the ERPs. The highest classification rates of 95.0% and 70.0% obtained respectively for average-trial binary classifier and average-trial multi-class further emphasize that the ERPs encode information about the different kinds of primes.

  16. Effect of Subliminal Lexical Priming on the Subjective Perception of Images: A Machine Learning Approach

    PubMed Central

    Mahmood, Faisal; Wong, Kian Foong; Agrawal, Abhishek; Elgendi, Mohamed; Shukla, Rohit; Ang, Natania; Ching, April; Dauwels, Justin; Chan, Alice H. D.

    2016-01-01

    The purpose of the study is to examine the effect of subliminal priming in terms of the perception of images influenced by words with positive, negative, and neutral emotional content, through electroencephalograms (EEGs). Participants were instructed to rate how much they like the stimuli images, on a 7-point Likert scale, after being subliminally exposed to masked lexical prime words that exhibit positive, negative, and neutral connotations with respect to the images. Simultaneously, the EEGs were recorded. Statistical tests such as repeated measures ANOVAs and two-tailed paired-samples t-tests were performed to measure significant differences in the likability ratings among the three prime affect types; the results showed a strong shift in the likeness judgment for the images in the positively primed condition compared to the other two. The acquired EEGs were examined to assess the difference in brain activity associated with the three different conditions. The consistent results obtained confirmed the overall priming effect on participants’ explicit ratings. In addition, machine learning algorithms such as support vector machines (SVMs), and AdaBoost classifiers were applied to infer the prime affect type from the ERPs. The highest classification rates of 95.0% and 70.0% obtained respectively for average-trial binary classifier and average-trial multi-class further emphasize that the ERPs encode information about the different kinds of primes. PMID:26866807

  17. Recursive heuristic classification

    NASA Technical Reports Server (NTRS)

    Wilkins, David C.

    1994-01-01

    The author describes a new problem-solving approach called recursive heuristic classification, whereby a subproblem of heuristic classification is itself formulated and solved by heuristic classification. This allows the construction of more knowledge-intensive classification programs in a way that yields a clean organization. Further, standard knowledge acquisition and learning techniques for heuristic classification can be used to create, refine, and maintain the knowledge base associated with the recursively called classification expert system. The method of recursive heuristic classification was used in the Minerva blackboard shell for heuristic classification. Minerva recursively calls itself on every problem-solving cycle to solve the important blackboard scheduler task, which involves assigning a desirability rating to alternative problem-solving actions. Knowing these ratings is critical to the use of an expert system as a component of a critiquing or apprenticeship tutoring system. One innovation of this research is a method called dynamic heuristic classification, which allows selection among dynamically generated classification categories instead of requiring them to be enumerated in advance.

  18. Patients' age, myoma size, myoma location, and interval between myomectomy and pregnancy may influence the pregnancy rate and live birth rate after myomectomy.

    PubMed

    Zhang, Ying; Hua, Ke Qin

    2014-02-01

    To investigate which clinical characteristics will influence the pregnancy rate and live birth rate after myomectomy. Data of clinical characteristics and reproductive outcome from 471 patients who wished to conceive and who underwent abdominal or laparoscopic myomectomy in the Obstetrics and Gynecology Hospital of Fudan University from January 2008 to June 2012 were retrospectively analyzed. Average age in the pregnancy group (30.0±3.7 years) and the nonpregnancy group (31.2±4.1 years) was statistically different (P=.000). The diameter of the biggest myoma had a positive relationship with the pregnancy rate when it was <10 cm (rs=0.095, P=.039). Abortions before myomectomy, operation type, number, location, and classification of myomas, uterine cavity penetration, and uterine volume seemed not to influence the pregnancy rate (P>.05). The location of the myoma may influence the live birth rate after myomectomy (rs=0.198, P=.002). Anterior and posterior myomas were associated with higher live birth rates than other locations (P=.001). The average interval between myomectomy and pregnancy was 16.0±8.7 months, and there was no difference between the abdominal (17.2±8.6 months) and laparoscopic (15.2±8.8 months) groups (P=.102). The interval in the live birth group was 15.0±8.4 months, and that in the non-live birth group was 18.9±9.3 months; the difference was significant (P=.005). Patients' age, myoma size and location, and interval between myomectomy and pregnancy may influence the pregnancy rate and live birth rate after myomectomy.

  19. Time-Shift Correlation Algorithm for P300 Event Related Potential Brain-Computer Interface Implementation

    PubMed Central

    Liu, Ju-Chi; Chou, Hung-Chyun; Chen, Chien-Hsiu; Lin, Yi-Tseng

    2016-01-01

    A highly efficient time-shift correlation algorithm was proposed to deal with the peak-time uncertainty of the P300 evoked potential for a P300-based brain-computer interface (BCI). The time-shift correlation series data were collected as the input nodes of an artificial neural network (ANN), and the classification of four LED visual stimuli was selected as the output node. Two operating modes, a fast-recognition mode (FM) and an accuracy-recognition mode (AM), were realized. The proposed BCI system was implemented on an embedded system for commanding an adult-size humanoid robot, and its performance was evaluated by investigating the ground-truth trajectories of the robot. When the humanoid robot walked in a spacious area, the FM was used to control the robot with a higher information transfer rate (ITR). When the robot walked in a crowded area, the AM was used for high recognition accuracy to reduce the risk of collision. The experimental results showed that, in 100 trials, the accuracy rate of the FM was 87.8% and the average ITR was 52.73 bits/min. In addition, the accuracy rate improved to 92% for the AM, and the average ITR decreased to 31.27 bits/min because of the stricter recognition constraints. PMID:27579033

  20. Time-Shift Correlation Algorithm for P300 Event Related Potential Brain-Computer Interface Implementation.

    PubMed

    Liu, Ju-Chi; Chou, Hung-Chyun; Chen, Chien-Hsiu; Lin, Yi-Tseng; Kuo, Chung-Hsien

    2016-01-01

    A highly efficient time-shift correlation algorithm was proposed to deal with the peak-time uncertainty of the P300 evoked potential for a P300-based brain-computer interface (BCI). The time-shift correlation series data were collected as the input nodes of an artificial neural network (ANN), and the classification of four LED visual stimuli was selected as the output node. Two operating modes, a fast-recognition mode (FM) and an accuracy-recognition mode (AM), were realized. The proposed BCI system was implemented on an embedded system for commanding an adult-size humanoid robot, and its performance was evaluated by investigating the ground-truth trajectories of the robot. When the humanoid robot walked in a spacious area, the FM was used to control the robot with a higher information transfer rate (ITR). When the robot walked in a crowded area, the AM was used for high recognition accuracy to reduce the risk of collision. The experimental results showed that, in 100 trials, the accuracy rate of the FM was 87.8% and the average ITR was 52.73 bits/min. In addition, the accuracy rate improved to 92% for the AM, and the average ITR decreased to 31.27 bits/min because of the stricter recognition constraints.
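
    A time-shift correlation series of the kind described above can be sketched as follows (illustrative, not the published implementation): correlate each EEG epoch with a P300 template at a range of lags and use the resulting correlation values as the input nodes of a neural network. The template, epoch, and lag range are assumptions.

    ```python
    import numpy as np

    def time_shift_correlation(epoch, template, max_shift=50, step=5):
        """Pearson correlation between `epoch` and `template` shifted by -max_shift..max_shift samples."""
        series = []
        for shift in range(-max_shift, max_shift + 1, step):
            series.append(np.corrcoef(epoch, np.roll(template, shift))[0, 1])
        return np.array(series)

    rng = np.random.default_rng(0)
    template = np.sin(np.linspace(0.0, np.pi, 200))                 # placeholder P300-like template
    epoch = np.roll(template, 12) + 0.3 * rng.normal(size=200)      # epoch with uncertain peak time

    features = time_shift_correlation(epoch, template)              # would feed the ANN input layer
    print("index of the best-matching shift:", int(features.argmax()))
    ```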

  1. Automated Detection of Atrial Fibrillation Based on Time-Frequency Analysis of Seismocardiograms.

    PubMed

    Hurnanen, Tero; Lehtonen, Eero; Tadi, Mojtaba Jafari; Kuusela, Tom; Kiviniemi, Tuomas; Saraste, Antti; Vasankari, Tuija; Airaksinen, Juhani; Koivisto, Tero; Pankaala, Mikko

    2017-09-01

    In this paper, a novel method to detect atrial fibrillation (AFib) from a seismocardiogram (SCG) is presented. The proposed method is based on linear classification of the spectral entropy and a heart rate variability index computed from the SCG. The performance of the developed algorithm is demonstrated on data gathered from 13 patients in clinical setting. After motion artifact removal, in total 119 min of AFib data and 126 min of sinus rhythm data were considered for automated AFib detection. No other arrhythmias were considered in this study. The proposed algorithm requires no direct heartbeat peak detection from the SCG data, which makes it tolerant against interpersonal variations in the SCG morphology, and noise. Furthermore, the proposed method relies solely on the SCG and needs no complementary electrocardiography to be functional. For the considered data, the detection method performs well even on relatively low quality SCG signals. Using a majority voting scheme that takes five randomly selected segments from a signal and classifies these segments using the proposed algorithm, we obtained an average true positive rate of [Formula: see text] and an average true negative rate of [Formula: see text] for detecting AFib in leave-one-out cross-validation. This paper facilitates adoption of microelectromechanical sensor based heart monitoring devices for arrhythmia detection.
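
    The two features named above can be sketched under illustrative assumptions: spectral entropy of an SCG segment computed from a Welch power spectrum, and a simple heart-rate variability index (here the coefficient of variation of beat-to-beat intervals); a linear rule on these two values would then flag atrial fibrillation. The sampling rate, segment contents, and the particular HRV index are placeholders.

    ```python
    import numpy as np
    from scipy.signal import welch

    def spectral_entropy(segment, fs=800):
        """Normalised Shannon entropy of the Welch power spectral density (0 = peaked, 1 = flat)."""
        _, psd = welch(segment, fs=fs)
        p = psd / psd.sum()
        return -np.sum(p * np.log2(p + 1e-12)) / np.log2(len(p))

    def hrv_index(beat_intervals_s):
        """Coefficient of variation of beat-to-beat intervals, one possible variability index."""
        beat_intervals_s = np.asarray(beat_intervals_s)
        return beat_intervals_s.std() / beat_intervals_s.mean()

    rng = np.random.default_rng(0)
    scg_segment = rng.normal(size=8000)                 # placeholder SCG samples (10 s at 800 Hz)
    intervals = rng.normal(0.8, 0.15, size=12)          # placeholder beat-to-beat intervals in seconds
    print(spectral_entropy(scg_segment), hrv_index(intervals))
    ```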

  2. Functional outcome of open reduction and internal fixation for completely unstable pelvic ring fractures (type C): a report of 40 cases.

    PubMed

    Kabak, Sevki; Halici, Mehmet; Tuncel, Mehmet; Avsarogullari, Levent; Baktir, Ali; Basturk, Mustafa

    2003-09-01

    To evaluate functional outcomes, morbidity and mortality rates, and psychological and psychosomatic status in patients treated for completely unstable pelvic injuries (Tile class C). Prospective clinical study. University hospital. Forty patients treated with anterior and posterior internal fixation for unstable pelvic ring fractures between January 1992 and August 1999. Open reduction and anterior and posterior internal fixation of the pelvic ring. The data were analyzed as follows: pelvic fracture classification, Tile classification; severity of trauma, Injury Severity Score (ISS); functional outcomes, the Majeed Outcome Scale; psychological and psychosomatic status, Hamilton Depression and Anxiety Rating Score (HDARS). Preoperatively the average ISS was 29.4 (range 12-66). There was a statistically significant positive correlation between anxiety and ISS (r = 0.536, P < 0.01). Two patients died during the early postoperative period. Two additional patients were lost to follow-up, leaving 36 patients followed for an average of 45 months (range 21-116 months). Deep infections developed in three patients with a posterior pelvic ring injury who had been treated with percutaneous fixation techniques. These were treated successfully with débridement. Nine patients complained of pain of pelvic origin. Nerve deficits recovered completely in four of the seven patients with preoperative neurologic deficiency. Moderate or major depression was diagnosed in sexually dysfunctional patients in the 12th postoperative month according to HDARS (r = -0.559, P < 0.001). At the last visit, there was an inverse correlation between ability to work and depression and anxiety (r = -0.551, r = -0.391). An inverse correlation was found between pain and ability to work (r = 0.597, P < 0.001). Of the 36 patients, 26 returned to their original jobs at the last follow-up visit. Morbidity and mortality rates are higher in patients with a completely unstable pelvic ring injury. Emergency department stabilization and reconstruction of the pelvic ring with optimal operative techniques in these patients can reduce morbidity and mortality rates. Anterior and posterior internal fixation results in satisfactory clinical and radiologic outcomes. The affective status of patients is an important aspect that should be considered during the entire care of the patient.

  3. Prognostic Value and Reproducibility of Pretreatment CT Texture Features in Stage III Non-Small Cell Lung Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fried, David V.; Graduate School of Biomedical Sciences, The University of Texas Health Science Center at Houston, Houston, Texas; Tucker, Susan L.

    2014-11-15

    Purpose: To determine whether pretreatment CT texture features can improve patient risk stratification beyond conventional prognostic factors (CPFs) in stage III non-small cell lung cancer (NSCLC). Methods and Materials: We retrospectively reviewed 91 cases with stage III NSCLC treated with definitive chemoradiation therapy. All patients underwent pretreatment diagnostic contrast enhanced computed tomography (CE-CT) followed by 4-dimensional CT (4D-CT) for treatment simulation. We used the average-CT and expiratory (T50-CT) images from the 4D-CT along with the CE-CT for texture extraction. Histogram, gradient, co-occurrence, gray tone difference, and filtration-based techniques were used for texture feature extraction. Penalized Cox regression implementing cross-validation was used for covariate selection and modeling. Models incorporating texture features from the 33 image types and CPFs were compared to models incorporating CPFs alone for overall survival (OS), local-regional control (LRC), and freedom from distant metastases (FFDM). Predictive Kaplan-Meier curves were generated using leave-one-out cross-validation. Patients were stratified based on whether their predicted outcome was above or below the median. Reproducibility of texture features was evaluated using test-retest scans from independent patients and quantified using concordance correlation coefficients (CCC). We compared models incorporating the reproducibility seen on test-retest scans to our original models and determined the classification reproducibility. Results: Models incorporating both texture features and CPFs demonstrated a significant improvement in risk stratification compared to models using CPFs alone for OS (P=.046), LRC (P=.01), and FFDM (P=.005). The average CCCs were 0.89, 0.91, and 0.67 for texture features extracted from the average-CT, T50-CT, and CE-CT, respectively. Incorporating reproducibility within our models yielded 80.4% (±3.7% SD), 78.3% (±4.0% SD), and 78.8% (±3.9% SD) classification reproducibility in terms of OS, LRC, and FFDM, respectively. Conclusions: Pretreatment tumor texture may provide prognostic information beyond that obtained from CPFs. Models incorporating feature reproducibility achieved classification rates of ∼80%. External validation would be required to establish texture as a prognostic factor.

  4. Diagnostic classification of macular ganglion cell and retinal nerve fiber layer analysis: differentiation of false-positives from glaucoma.

    PubMed

    Kim, Ko Eun; Jeoung, Jin Wook; Park, Ki Ho; Kim, Dong Myung; Kim, Seok Hwan

    2015-03-01

    To investigate the rate and associated factors of false-positive diagnostic classification of ganglion cell analysis (GCA) and retinal nerve fiber layer (RNFL) maps, and characteristic false-positive patterns on optical coherence tomography (OCT) deviation maps. Prospective, cross-sectional study. A total of 104 healthy eyes of 104 normal participants. All participants underwent peripapillary and macular spectral-domain (Cirrus-HD, Carl Zeiss Meditec Inc, Dublin, CA) OCT scans. False-positive diagnostic classification was defined as yellow or red color-coded areas for GCA and RNFL maps. Univariate and multivariate logistic regression analyses were used to determine associated factors. Eyes with abnormal OCT deviation maps were categorized on the basis of the shape and location of abnormal color-coded area. Differences in clinical characteristics among the subgroups were compared. (1) The rate and associated factors of false-positive OCT maps; (2) patterns of false-positive, color-coded areas on the GCA deviation map and associated clinical characteristics. Of the 104 healthy eyes, 42 (40.4%) and 32 (30.8%) showed abnormal diagnostic classifications on any of the GCA and RNFL maps, respectively. Multivariate analysis revealed that false-positive GCA diagnostic classification was associated with longer axial length and larger fovea-disc angle, whereas longer axial length and smaller disc area were associated with abnormal RNFL maps. Eyes with abnormal GCA deviation map were categorized as group A (donut-shaped round area around the inner annulus), group B (island-like isolated area), and group C (diffuse, circular area with an irregular inner margin in either). The axial length showed a significant increasing trend from group A to C (P=0.001), and likewise, the refractive error was more myopic in group C than in groups A (P=0.015) and B (P=0.014). Group C had thinner average ganglion cell-inner plexiform layer thickness compared with other groups (group A=B>C, P=0.004). Abnormal OCT diagnostic classification should be interpreted with caution, especially in eyes with long axial lengths, large fovea-disc angles, and small optic discs. Our findings suggest that the characteristic patterns of OCT deviation map can provide useful clues to distinguish glaucomatous changes from false-positive findings. Copyright © 2015 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.

  5. Probability-based classifications for spatially characterizing the water temperatures and discharge rates of hot springs in the Tatun Volcanic Region, Taiwan.

    PubMed

    Jang, Cheng-Shin

    2015-05-01

    Accurately classifying the spatial features of the water temperatures and discharge rates of hot springs is crucial for environmental resources use and management. This study spatially characterized classifications of the water temperatures and discharge rates of hot springs in the Tatun Volcanic Region of Northern Taiwan by using indicator kriging (IK). The water temperatures and discharge rates of the springs were first assigned to high, moderate, and low categories according to the two thresholds of the proposed spring classification criteria. IK was then used to model the occurrence probabilities of the water temperatures and discharge rates of the springs and probabilistically determine their categories. Finally, nine combinations were acquired from the probability-based classifications for the spatial features of the water temperatures and discharge rates of the springs. Moreover, various combinations of spring water features were examined according to seven subzones of spring use in the study region. The research results reveal that probability-based classifications using IK provide practicable insights related to propagating the uncertainty of classifications according to the spatial features of the water temperatures and discharge rates of the springs. The springs in the Beitou (BT), Xingyi Road (XYR), Zhongshanlou (ZSL), and Lengshuikeng (LSK) subzones are suitable for supplying tourism hotels with a sufficient quantity of spring water because they have high or moderate discharge rates. Furthermore, natural hot springs in riverbeds and valleys should be developed in the Dingbeitou (DBT), ZSL, Xiayoukeng (XYK), and Macao (MC) subzones because of low discharge rates and low or moderate water temperatures.

  6. Voltammetric Electronic Tongue and Support Vector Machines for Identification of Selected Features in Mexican Coffee

    PubMed Central

    Domínguez, Rocio Berenice; Moreno-Barón, Laura; Muñoz, Roberto; Gutiérrez, Juan Manuel

    2014-01-01

    This paper describes a new method based on a voltammetric electronic tongue (ET) for the recognition of distinctive features in coffee samples. An ET was directly applied to different samples from the main Mexican coffee regions without any pretreatment before the analysis. The resulting electrochemical information was modeled with two different mathematical tools, namely Linear Discriminant Analysis (LDA) and Support Vector Machines (SVM). Growing conditions (i.e., organic or non-organic practices and altitude of crops) were considered for a first classification. LDA results showed an average discrimination rate of 88% ± 6.53% while SVM successfully accomplished an overall accuracy of 96.4% ± 3.50% for the same task. A second classification based on geographical origin of samples was carried out. Results showed an overall accuracy of 87.5% ± 7.79% for LDA and a superior performance of 97.5% ± 3.22% for SVM. Given the complexity of coffee samples, the high accuracy percentages achieved by ET coupled with SVM in both classification problems suggested a potential applicability of ET in the assessment of selected coffee features with a simpler and faster methodology along with a null sample pretreatment. In addition, the proposed method can be applied to authentication assessment while improving cost, time and accuracy of the general procedure. PMID:25254303

  7. Voltammetric electronic tongue and support vector machines for identification of selected features in Mexican coffee.

    PubMed

    Domínguez, Rocio Berenice; Moreno-Barón, Laura; Muñoz, Roberto; Gutiérrez, Juan Manuel

    2014-09-24

    This paper describes a new method based on a voltammetric electronic tongue (ET) for the recognition of distinctive features in coffee samples. An ET was directly applied to different samples from the main Mexican coffee regions without any pretreatment before the analysis. The resulting electrochemical information was modeled with two different mathematical tools, namely Linear Discriminant Analysis (LDA) and Support Vector Machines (SVM). Growing conditions (i.e., organic or non-organic practices and altitude of crops) were considered for a first classification. LDA results showed an average discrimination rate of 88% ± 6.53% while SVM successfully accomplished an overall accuracy of 96.4% ± 3.50% for the same task. A second classification based on geographical origin of samples was carried out. Results showed an overall accuracy of 87.5% ± 7.79% for LDA and a superior performance of 97.5% ± 3.22% for SVM. Given the complexity of coffee samples, the high accuracy percentages achieved by ET coupled with SVM in both classification problems suggested a potential applicability of ET in the assessment of selected coffee features with a simpler and faster methodology along with a null sample pretreatment. In addition, the proposed method can be applied to authentication assessment while improving cost, time and accuracy of the general procedure.

  8. Improving EEG-Based Motor Imagery Classification for Real-Time Applications Using the QSA Method.

    PubMed

    Batres-Mendoza, Patricia; Ibarra-Manzano, Mario A; Guerra-Hernandez, Erick I; Almanza-Ojeda, Dora L; Montoro-Sanjose, Carlos R; Romero-Troncoso, Rene J; Rostro-Gonzalez, Horacio

    2017-01-01

    We present an improvement to the quaternion-based signal analysis (QSA) technique to extract electroencephalography (EEG) signal features with a view to developing real-time applications, particularly in motor imagery (IM) cognitive processes. The proposed methodology (iQSA, improved QSA) extracts features such as the average, variance, homogeneity, and contrast of EEG signals related to motor imagery in a more efficient manner (i.e., by reducing the number of samples needed to classify the signal and improving the classification percentage) compared to the original QSA technique. Specifically, we can sample the signal in variable time periods (from 0.5 s to 3 s, in half-a-second intervals) to determine the relationship between the number of samples and their effectiveness in classifying signals. In addition, to strengthen the classification process a number of boosting-technique-based decision trees were implemented. The results show an 82.30% accuracy rate for 0.5 s samples and 73.16% for 3 s samples. This is a significant improvement compared to the original QSA technique that offered results from 33.31% to 40.82% without sampling window and from 33.44% to 41.07% with sampling window, respectively. We can thus conclude that iQSA is better suited to develop real-time applications.

  9. Improving EEG-Based Motor Imagery Classification for Real-Time Applications Using the QSA Method

    PubMed Central

    Batres-Mendoza, Patricia; Guerra-Hernandez, Erick I.; Almanza-Ojeda, Dora L.; Montoro-Sanjose, Carlos R.

    2017-01-01

    We present an improvement to the quaternion-based signal analysis (QSA) technique to extract electroencephalography (EEG) signal features with a view to developing real-time applications, particularly in motor imagery (IM) cognitive processes. The proposed methodology (iQSA, improved QSA) extracts features such as the average, variance, homogeneity, and contrast of EEG signals related to motor imagery in a more efficient manner (i.e., by reducing the number of samples needed to classify the signal and improving the classification percentage) compared to the original QSA technique. Specifically, we can sample the signal in variable time periods (from 0.5 s to 3 s, in half-a-second intervals) to determine the relationship between the number of samples and their effectiveness in classifying signals. In addition, to strengthen the classification process a number of boosting-technique-based decision trees were implemented. The results show an 82.30% accuracy rate for 0.5 s samples and 73.16% for 3 s samples. This is a significant improvement compared to the original QSA technique that offered results from 33.31% to 40.82% without sampling window and from 33.44% to 41.07% with sampling window, respectively. We can thus conclude that iQSA is better suited to develop real-time applications. PMID:29348744

  10. Impact of oesophagitis classification in evaluating healing of erosive oesophagitis after therapy with proton pump inhibitors: a pooled analysis.

    PubMed

    Yaghoobi, Mohammad; Padol, Sara; Yuan, Yuhong; Hunt, Richard H

    2010-05-01

    The results of clinical trials with proton pump inhibitors (PPIs) are usually based on the Hetzel-Dent (HD), Savary-Miller (SM), or Los Angeles (LA) classifications to describe the severity and assess the healing of erosive oesophagitis. However, it is not known whether these classifications are comparable. The aim of this study was to review systematically the literature to compare the healing rates of erosive oesophagitis with PPIs in clinical trials assessed by the HD, SM, or LA classifications. A recursive, English language literature search in PubMed and Cochrane databases to December 2006 was performed. Double-blind randomized control trials comparing a PPI with another PPI, an H2-RA or placebo using endoscopic assessment of the healing of oesophagitis by the HD, SM or LA, or their modified classifications at 4 or 8 weeks, were included in the study. The healing rates on treatment with the same PPI(s), and same endoscopic grade(s) were pooled and compared between different classifications using Fisher's exact test or chi2 test where appropriate. Forty-seven studies from 965 potential citations met inclusion criteria. Seventy-eight PPI arms were identified, with 27 using HD, 29 using SM, and 22 using LA for five marketed PPIs. There was insufficient data for rabeprazole and esomeprazole (week 4 only) to compare because they were evaluated by only one classification. When data from all PPIs were pooled, regardless of baseline oesophagitis grades, the LA healing rate was significantly higher than SM and HD at both 4 and 8 weeks (74, 71, and 68% at 4 weeks and 89, 84, and 83% at 8 weeks, respectively). The distribution of different grades in study population was available only for pantoprazole where it was not significantly different between LA and SM subgroups. When analyzing data for PPI and dose, the LA classification showed a higher healing rate for omeprazole 20 mg/day and pantoprazole 40 mg/day (significant at 8 weeks), whereas healing by SM classification was significantly higher for omeprazole 40 mg/day (no data for LA) and lansoprazole 30 mg/day at 4 and 8 weeks. The healing rate by individual oesophagitis grade was not always available or robust enough for meaningful analysis. However, a difference between classifications remained. There is a significant, but not always consistent, difference in oesophagitis healing rates with the same PPI(s) reported by the LA, SM, or HD classifications. The possible difference between grading classifications should be considered when interpreting or comparing healing rates for oesophagitis from different studies.

  11. A feasibility study of treatment verification using EPID cine images for hypofractionated lung radiotherapy

    NASA Astrophysics Data System (ADS)

    Tang, Xiaoli; Lin, Tong; Jiang, Steve

    2009-09-01

    We propose a novel approach for potential online treatment verification using cine EPID (electronic portal imaging device) images for hypofractionated lung radiotherapy based on a machine learning algorithm. Hypofractionated radiotherapy requires high precision. It is essential to effectively monitor the target to ensure that the tumor is within the beam aperture. We modeled the treatment verification problem as a two-class classification problem and applied an artificial neural network (ANN) to classify the cine EPID images acquired during the treatment into corresponding classes—with the tumor inside or outside of the beam aperture. Training samples were generated for the ANN using digitally reconstructed radiographs (DRRs) with artificially added shifts in the tumor location—to simulate cine EPID images with different tumor locations. Principal component analysis (PCA) was used to reduce the dimensionality of the training samples and cine EPID images acquired during the treatment. The proposed treatment verification algorithm was tested on five hypofractionated lung patients in a retrospective fashion. On average, our proposed algorithm achieved a 98.0% classification accuracy, a 97.6% recall rate and a 99.7% precision rate. This work was first presented at the Seventh International Conference on Machine Learning and Applications, San Diego, CA, USA, 11-13 December 2008.
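
    A compact sketch of the PCA-plus-neural-network pipeline described above, with synthetic stand-ins for the shifted DRRs and the cine EPID frames (image size, number of components, and network size are illustrative assumptions):

        # Sketch: PCA dimensionality reduction followed by an ANN classifier.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(2)
        drr_imgs = rng.normal(size=(500, 64 * 64))  # flattened DRRs with simulated tumor shifts
        drr_lbls = rng.integers(0, 2, size=500)     # 1 = tumor inside aperture, 0 = outside
        epid_imgs = rng.normal(size=(50, 64 * 64))  # flattened cine EPID frames from treatment

        model = make_pipeline(PCA(n_components=20),
                              MLPClassifier(hidden_layer_sizes=(30,), max_iter=500, random_state=0))
        model.fit(drr_imgs, drr_lbls)
        inside_beam = model.predict(epid_imgs)      # per-frame inside/outside decision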

  12. Authentication of Organically and Conventionally Grown Basils by Gas Chromatography/Mass Spectrometry Chemical Profiles

    PubMed Central

    Wang, Zhengfang; Chen, Pei; Yu, Liangli; Harrington, Peter de B.

    2013-01-01

    Basil plants cultivated by organic and conventional farming practices were accurately classified by pattern recognition of gas chromatography/mass spectrometry (GC/MS) data. A novel extraction procedure was devised to extract characteristic compounds from ground basil powders. Two in-house fuzzy classifiers, i.e., the fuzzy rule-building expert system (FuRES) and, for the first time, the fuzzy optimal associative memory (FOAM), were used to build classification models. Two crisp classifiers, i.e., soft independent modeling by class analogy (SIMCA) and partial least-squares discriminant analysis (PLS-DA), were used as control methods. Prior to data processing, baseline correction and retention time alignment were performed. Classifiers were built with the two-way data sets, the total ion chromatogram representation of data sets, and the total mass spectrum representation of data sets, separately. Bootstrapped Latin partition (BLP) was used as an unbiased evaluation of the classifiers. By using two-way data sets, average classification rates with FuRES, FOAM, SIMCA, and PLS-DA were 100 ± 0%, 94.4 ± 0.4%, 93.3 ± 0.4%, and 100 ± 0%, respectively, for 100 independent evaluations. The established classifiers were used to classify a new validation set collected 2.5 months later with no parametric changes except that the training set and validation set were individually mean-centered. For the new two-way validation set, classification rates with FuRES, FOAM, SIMCA, and PLS-DA were 100%, 83%, 97%, and 100%, respectively. Thereby, GC/MS analysis was demonstrated to be a viable approach for organic basil authentication. It is the first time that a FOAM has been applied to classification, and a novel baseline correction method was also used for the first time. FuRES and FOAM are demonstrated to be powerful tools for modeling and classifying GC/MS data of complex samples, and the data pretreatments are shown to improve the performance of the classifiers. PMID:23398171
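
    FuRES and FOAM are in-house methods and are not reproduced here, but one of the control classifiers, PLS-DA, is widely available. A minimal sketch assuming each sample has been reduced to a total-ion-chromatogram-style feature vector (synthetic data, arbitrary number of latent variables):

        # Sketch: PLS-DA = PLS regression on a one-hot class membership matrix, then argmax.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(3)
        X = rng.normal(size=(80, 300))        # placeholder TIC intensities
        y = rng.integers(0, 2, size=80)       # 0 = conventional, 1 = organic (synthetic labels)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
        Y_tr = np.eye(2)[y_tr]                # one-hot class membership matrix

        pls = PLSRegression(n_components=5)   # PLS centers the training data internally
        pls.fit(X_tr, Y_tr)
        pred = pls.predict(X_te).argmax(axis=1)
        print("classification rate:", (pred == y_te).mean())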

  13. Scaling up the evaluation of psychotherapy: evaluating motivational interviewing fidelity via statistical text classification

    PubMed Central

    2014-01-01

    Background Behavioral interventions such as psychotherapy are leading, evidence-based practices for a variety of problems (e.g., substance abuse), but the evaluation of provider fidelity to behavioral interventions is limited by the need for human judgment. The current study evaluated the accuracy of statistical text classification in replicating human-based judgments of provider fidelity in one specific psychotherapy—motivational interviewing (MI). Method Participants (n = 148) came from five previously conducted randomized trials and were either primary care patients at a safety-net hospital or university students. To be eligible for the original studies, participants met criteria for either problematic drug or alcohol use. All participants received a type of brief motivational interview, an evidence-based intervention for alcohol and substance use disorders. The Motivational Interviewing Skills Code is a standard measure of MI provider fidelity based on human ratings that was used to evaluate all therapy sessions. A text classification approach called a labeled topic model was used to learn associations between human-based fidelity ratings and MI session transcripts. It was then used to generate codes for new sessions. The primary comparison was the accuracy of model-based codes with human-based codes. Results Receiver operating characteristic (ROC) analyses of model-based codes showed reasonably strong sensitivity and specificity with those from human raters (range of area under ROC curve (AUC) scores: 0.62 – 0.81; average AUC: 0.72). Agreement with human raters was evaluated based on talk turns as well as code tallies for an entire session. Generated codes had higher reliability with human codes for session tallies and also varied strongly by individual code. Conclusion To scale up the evaluation of behavioral interventions, technological solutions will be required. The current study demonstrated preliminary, encouraging findings regarding the utility of statistical text classification in bridging this methodological gap. PMID:24758152

  14. Joint Concept Correlation and Feature-Concept Relevance Learning for Multilabel Classification.

    PubMed

    Zhao, Xiaowei; Ma, Zhigang; Li, Zhi; Li, Zhihui

    2018-02-01

    In recent years, multilabel classification has attracted significant attention in multimedia annotation. However, most multilabel classification methods focus only on the inherent correlations existing among multiple labels and concepts and ignore the relevance between features and the target concepts. To obtain more robust multilabel classification results, we propose a new multilabel classification method that aims to capture the correlations among multiple concepts by leveraging a hypergraph, which has been proved to be beneficial for relational learning. Moreover, we consider mining feature-concept relevance, which is often overlooked by many multilabel learning algorithms. To better show the feature-concept relevance, we impose a sparsity constraint on the proposed method. We compare the proposed method with several other multilabel classification methods and evaluate the classification performance by mean average precision on several data sets. The experimental results show that the proposed method outperforms the state-of-the-art methods.
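
    Mean average precision over concepts, the metric used above, is straightforward to compute once a classifier produces per-concept confidence scores. A sketch with a generic one-vs-rest stand-in model (not the hypergraph method proposed in the paper):

        # Sketch: macro-averaged (mean) average precision for multilabel predictions.
        from sklearn.datasets import make_multilabel_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import average_precision_score
        from sklearn.model_selection import train_test_split
        from sklearn.multiclass import OneVsRestClassifier

        X, Y = make_multilabel_classification(n_samples=300, n_classes=5, random_state=0)
        X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.3, random_state=0)

        clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X_tr, Y_tr)
        scores = clf.predict_proba(X_te)      # one confidence score per concept
        print("mean average precision:", average_precision_score(Y_te, scores, average="macro"))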

  15. Vitamin D3 Analogues with Low Vitamin D Receptor Binding Affinity Regulate Chondrocyte Proliferation, Proteoglycan Synthesis, and Protein Kinase C Activity

    DTIC Science & Technology

    1997-07-11


  16. Provisional in-silico biopharmaceutics classification (BCS) to guide oral drug product development

    PubMed Central

    Wolk, Omri; Agbaria, Riad; Dahan, Arik

    2014-01-01

    The main objective of this work was to investigate in-silico predictions of physicochemical properties, in order to guide oral drug development by provisional biopharmaceutics classification system (BCS). Four in-silico methods were used to estimate LogP: group contribution (CLogP) using two different software programs, atom contribution (ALogP), and element contribution (KLogP). The correlations (r2) of CLogP, ALogP and KLogP versus measured LogP data were 0.97, 0.82, and 0.71, respectively. The classification of drugs with reported intestinal permeability in humans was correct for 64.3%–72.4% of the 29 drugs on the dataset, and for 81.82%–90.91% of the 22 drugs that are passively absorbed using the different in-silico algorithms. Similar permeability classification was obtained with the various in-silico methods. The in-silico calculations, along with experimental melting points, were then incorporated into a thermodynamic equation for solubility estimations that largely matched the reference solubility values. It was revealed that the effect of melting point on the solubility is minor compared to the partition coefficient, and an average melting point (162.7°C) could replace the experimental values, with similar results. The in-silico methods classified 20.76% (±3.07%) as Class 1, 41.51% (±3.32%) as Class 2, 30.49% (±4.47%) as Class 3, and 6.27% (±4.39%) as Class 4. In conclusion, in-silico methods can be used for BCS classification of drugs in early development, from merely their molecular formula and without foreknowledge of their chemical structure, which will allow for the improved selection, engineering, and developability of candidates. These in-silico methods could enhance success rates, reduce costs, and accelerate oral drug products development. PMID:25284986
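
    The weak influence of the melting point noted above can be seen directly in the form of the solubility estimate. One widely used relationship of this kind is the general solubility equation, log S = 0.5 - 0.01(MP - 25) - log P; the paper's exact equation and coefficients are not reproduced here, and the compound values below are hypothetical:

        # Sketch: aqueous solubility estimate from logP and melting point (general solubility equation).
        def estimate_log_solubility(log_p: float, melting_point_c: float) -> float:
            """Estimated log molar solubility from partition coefficient and melting point (deg C)."""
            return 0.5 - 0.01 * (melting_point_c - 25.0) - log_p

        # Swapping the average melting point mentioned above (162.7 deg C) for a hypothetical
        # experimental value changes the estimate far less than a one-unit change in logP.
        print(estimate_log_solubility(log_p=2.5, melting_point_c=162.7))
        print(estimate_log_solubility(log_p=2.5, melting_point_c=140.0))
        print(estimate_log_solubility(log_p=3.5, melting_point_c=162.7))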

  17. Global dengue death before and after the new World Health Organization 2009 case classification: A systematic review and meta-regression analysis.

    PubMed

    Low, Gary Kim-Kuan; Ogston, Simon A; Yong, Mun-Hin; Gan, Seng-Chiew; Chee, Hui-Yee

    2018-06-01

    Since the introduction of the 2009 WHO dengue case classification, no literature was found regarding its effect on dengue death. This study aimed to evaluate the effect of the 2009 WHO dengue case classification on the dengue case fatality rate. Various databases were used to search relevant articles since 1995. Studies included were cohort and cross-sectional studies that enrolled patients with dengue infection and reported the number of deaths or the case fatality rate. The Joanna Briggs Institute appraisal checklist was used to evaluate the risk of bias of the full texts. The studies were grouped according to the classification adopted: WHO 1997 and WHO 2009. Meta-regression was employed using a logistic transformation (log-odds) of the case fatality rate. The result of the meta-regression was the adjusted case fatality rate and odds ratio on the explanatory variables. A total of 77 studies were included in the meta-regression analysis. The case fatality rate for all studies combined was 1.14% with a 95% confidence interval (CI) of 0.82-1.58%. The combined (unadjusted) case fatality rate for the 69 studies which adopted the WHO 1997 dengue case classification was 1.09% with a 95% CI of 0.77-1.55%, and for the eight studies with WHO 2009 it was 1.62% with a 95% CI of 0.64-4.02%. The unadjusted and adjusted odds ratios of case fatality using the WHO 2009 dengue case classification were 1.49 (95% CI: 0.52, 4.24) and 0.83 (95% CI: 0.26, 2.63) respectively, compared to the WHO 1997 dengue case classification. There was an apparent increasing trend in case fatality rate over the years 1992-2016. Neither odds ratio was statistically significant. The WHO 2009 dengue case classification might have no effect on the case fatality rate, although the adjusted results indicated a lower case fatality rate. Future studies are required to update the meta-regression analysis and confirm the findings. Copyright © 2018 Elsevier B.V. All rights reserved.
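
    A minimal sketch of a meta-regression on the log-odds of the case fatality rate, with the classification scheme as the explanatory variable. The study-level counts are synthetic, and the zero-cell correction and sample-size weights are crude simplifications rather than the paper's exact model:

        # Sketch: weighted regression of logit(case fatality rate) on classification scheme.
        import numpy as np
        import statsmodels.api as sm

        deaths  = np.array([3.0, 5.0, 0.5, 12.0, 7.0])   # 0.5 = continuity-corrected zero cell
        cases   = np.array([400.0, 620.0, 150.0, 900.0, 510.0])
        who2009 = np.array([0, 0, 1, 1, 1])              # 1 = WHO 2009 classification adopted

        logit_cfr = np.log(deaths / (cases - deaths))    # log-odds of case fatality
        design = sm.add_constant(who2009)
        fit = sm.WLS(logit_cfr, design, weights=cases).fit()
        print("odds ratio, WHO 2009 vs WHO 1997:", np.exp(fit.params[1]))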

  18. Comparison on three classification techniques for sex estimation from the bone length of Asian children below 19 years old: an analysis using different group of ages.

    PubMed

    Darmawan, M F; Yusuf, Suhaila M; Kadir, M R Abdul; Haron, H

    2015-02-01

    Sex estimation is used in forensic anthropology to assist the identification of individual remains. However, the estimation techniques tend to be unique and applicable only to a certain population. This paper analyzed sex estimation in living individuals below 19 years old using the lengths of 19 bones of the left hand with three classification techniques: Discriminant Function Analysis (DFA), Support Vector Machine (SVM) and Artificial Neural Network (ANN) multilayer perceptron. These techniques were carried out on X-ray images of the left hand taken from an Asian population data set. All 19 bones of the left hand were measured using Free Image software, and all the techniques were performed using MATLAB. The "16-19" and "7-9" year age groups could be used for sex estimation, as their average accuracy was above 80%. The ANN model was the best classification technique, with the highest average accuracy in these two age groups compared to the other classification techniques. The results also show that each classification technique achieves its best accuracy in a different age group. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  19. Classification of rollovers according to crash severity.

    PubMed

    Digges, K; Eigen, A

    2006-01-01

    NASS/CDS 1995-2004 was used to classify rollovers according to severity. The rollovers were partitioned into two classes - rollover as the first event and rollover preceded by an impact with a fixed or non-fixed object. The populations of belted and unbelted were examined separately and combined. The average injury rate for the unbelted was five times that for the belted. Approximately 21% of the severe injuries suffered by belted occupants were in crashes with harmful events prior to the rollover that produced severe damage to the vehicle. This group carried a much higher injury risk than the average. A planar damage measure in addition to the rollover measure was required to adequately capture the crash severity of this population. For rollovers as the first event, approximately 1% of the serious injuries to belted occupants occurred during the first quarter-turn. Rollovers that were arrested during the 1st quarter-turn carried a higher injury rate than average. The number of quarter-turns were grouped in various ways including the number of times the vehicle roof faces the ground (number of vehicle inversions). The number of vehicle inversions was found to be a statistically significant injury predictor for 78% of the belted and unbelted populations with MAIS 3+F injuries in rollovers. The remaining 22% required crash severity metrics in addition to the number of vehicle inversions.

  20. Classification of Rollovers According to Crash Severity

    PubMed Central

    Digges, K.; Eigen, A.

    2006-01-01

    NASS/CDS 1995–2004 was used to classify rollovers according to severity. The rollovers were partitioned into two classes – rollover as the first event and rollover preceded by an impact with a fixed or non-fixed object. The populations of belted and unbelted were examined separately and combined. The average injury rate for the unbelted was five times that for the belted. Approximately 21% of the severe injuries suffered by belted occupants were in crashes with harmful events prior to the rollover that produced severe damage to the vehicle. This group carried a much higher injury risk than the average. A planar damage measure in addition to the rollover measure was required to adequately capture the crash severity of this population. For rollovers as the first event, approximately 1% of the serious injuries to belted occupants occurred during the first quarter-turn. Rollovers that were arrested during the 1st quarter-turn carried a higher injury rate than average. The number of quarter-turns were grouped in various ways including the number of times the vehicle roof faces the ground (number of vehicle inversions). The number of vehicle inversions was found to be a statistically significant injury predictor for 78% of the belted and unbelted populations with MAIS 3+F injuries in rollovers. The remaining 22% required crash severity metrics in addition to the number of vehicle inversions. PMID:16968634

  1. Binning in Gaussian Kernel Regularization

    DTIC Science & Technology

    2005-04-01

    Using the OSU-SVM Matlab package, the SVM trained on 966 bins has a comparable test classification rate to the SVM trained on 27,179 samples (versus 71.40% on 966 randomly sampled data), while reducing the amount of training data required.

  2. A long-term perspective on deforestation rates in the Brazilian Amazon

    NASA Astrophysics Data System (ADS)

    Velasco Gomez, M. D.; Beuchle, R.; Shimabukuro, Y.; Grecchi, R.; Simonetti, D.; Eva, H. D.; Achard, F.

    2015-04-01

    Monitoring tropical forest cover is central to biodiversity preservation, terrestrial carbon stocks, essential ecosystem and climate functions, and ultimately, sustainable economic development. The Amazon forest is the Earth's largest rainforest, and despite intensive studies on current deforestation rates, relatively little is known as to how these compare to historic (pre 1985) deforestation rates. We quantified land cover change between 1975 and 2014 in the so-called Arc of Deforestation of the Brazilian Amazon, covering the southern stretch of the Amazon forest and part of the Cerrado biome. We applied a consistent method that made use of data from Landsat sensors: Multispectral Scanner (MSS), Thematic Mapper (TM), Enhanced Thematic Mapper Plus (ETM+) and Operational Land Imager (OLI). We acquired suitable images from the US Geological Survey (USGS) for five epochs: 1975, 1990, 2000, 2010, and 2014. We then performed land cover analysis for each epoch using a systematic sample of 156 sites, each one covering 10 km x 10 km, located at the confluence point of integer degree latitudes and longitudes. An object-based classification of the images was performed with five land cover classes: tree cover, tree cover mosaic, other wooded land, other land cover, and water. The automatic classification results were corrected by visual interpretation, and, when available, by comparison with higher resolution imagery. Our results show a decrease of forest cover of 24.2% in the last 40 years in the Brazilian Arc of Deforestation, with an average yearly net forest cover change rate of -0.71% for the 39 years considered.

  3. Large Scale Crop Classification in Ukraine using Multi-temporal Landsat-8 Images with Missing Data

    NASA Astrophysics Data System (ADS)

    Kussul, N.; Skakun, S.; Shelestov, A.; Lavreniuk, M. S.

    2014-12-01

    At present, there are no globally available Earth observation (EO) derived products on crop maps. This issue is being addressed within the Sentinel-2 for Agriculture initiative where a number of test sites (including from JECAM) participate to provide coherent protocols and best practices for various global agriculture systems, and subsequently crop maps from Sentinel-2. One of the problems in dealing with optical images for large territories (more than 10,000 sq. km) is the presence of clouds and shadows that result in having missing values in data sets. In this abstract, a new approach to classification of multi-temporal optical satellite imagery with missing data due to clouds and shadows is proposed. First, self-organizing Kohonen maps (SOMs) are used to restore missing pixel values in a time series of satellite imagery. SOMs are trained for each spectral band separately using non-missing values. Missing values are restored through a special procedure that substitutes input sample's missing components with neuron's weight coefficients. After missing data restoration, a supervised classification is performed for multi-temporal satellite images. For this, an ensemble of neural networks, in particular multilayer perceptrons (MLPs), is proposed. Ensembling of neural networks is done by the technique of average committee, i.e. to calculate the average class probability over classifiers and select the class with the highest average posterior probability for the given input sample. The proposed approach is applied for large scale crop classification using multi temporal Landsat-8 images for the JECAM test site in Ukraine [1-2]. It is shown that ensemble of MLPs provides better performance than a single neural network in terms of overall classification accuracy and kappa coefficient. The obtained classification map is also validated through estimated crop and forest areas and comparison to official statistics. 1. A.Yu. Shelestov et al., "Geospatial information system for agricultural monitoring," Cybernetics Syst. Anal., vol. 49, no. 1, pp. 124-132, 2013. 2. J. Gallego et al., "Efficiency Assessment of Different Approaches to Crop Classification Based on Satellite and Ground Observations," J. Autom. Inform. Scie., vol. 44, no. 5, pp. 67-80, 2012.
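
    The "average committee" step described above is simple to express in code: train several MLPs, average their predicted class probabilities, and select the class with the highest mean posterior. The sketch below uses synthetic features in place of the gap-filled multi-temporal Landsat-8 bands:

        # Sketch: average-committee ensemble of multilayer perceptrons.
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(4)
        X = rng.normal(size=(600, 12))        # placeholder multi-temporal band features
        y = rng.integers(0, 5, size=600)      # placeholder crop class labels

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        members = [MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=s).fit(X_tr, y_tr)
                   for s in range(5)]

        avg_proba = np.mean([m.predict_proba(X_te) for m in members], axis=0)
        ensemble_pred = members[0].classes_[avg_proba.argmax(axis=1)]
        print("overall accuracy:", (ensemble_pred == y_te).mean())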

  4. Stability and bias of classification rates in biological applications of discriminant analysis

    USGS Publications Warehouse

    Williams, B.K.; Titus, K.; Hines, J.E.

    1990-01-01

    We assessed the sampling stability of classification rates in discriminant analysis by using a factorial design with factors for multivariate dimensionality, dispersion structure, configuration of group means, and sample size. A total of 32,400 discriminant analyses were conducted, based on data from simulated populations with appropriate underlying statistical distributions. Simulation results indicated strong bias in correct classification rates when group sample sizes were small and when overlap among groups was high. We also found that stability of the correct classification rates was influenced by these factors, indicating that the number of samples required for a given level of precision increases with the amount of overlap among groups. In a review of 60 published studies, we found that 57% of the articles presented results on classification rates, though few of them mentioned potential biases in their results. Wildlife researchers should choose the total number of samples per group to be at least 2 times the number of variables to be measured when overlap among groups is low. Substantially more samples are required as the overlap among groups increases

  5. Characterization of Escherichia coli isolates from different fecal sources by means of classification tree analysis of fatty acid methyl ester (FAME) profiles.

    PubMed

    Seurinck, Sylvie; Deschepper, Ellen; Deboch, Bishaw; Verstraete, Willy; Siciliano, Steven

    2006-03-01

    Microbial source tracking (MST) methods need to be rapid, inexpensive and accurate. Unfortunately, many MST methods provide a wealth of information that is difficult to interpret by the regulators who use this information to make decisions. This paper describes the use of classification tree analysis to interpret the results of a MST method based on fatty acid methyl ester (FAME) profiles of Escherichia coli isolates, and to present results in a format readily interpretable by water quality managers. Raw sewage E. coli isolates and animal E. coli isolates from cow, dog, gull, and horse were isolated and their FAME profiles collected. Correct classification rates determined with leave-one-out cross-validation resulted in an overall low correct classification rate of 61%. A higher overall correct classification rate of 85% was obtained when the animal isolates were pooled together and compared to the raw sewage isolates. Bootstrap aggregation, or adaptive resampling and combining, of the FAME profile data increased correct classification rates substantially. Other MST methods may be better suited to differentiate between different fecal sources, but classification tree analysis has enabled us to distinguish raw sewage from animal E. coli isolates, which previously had not been possible with other multivariate methods such as principal component analysis and cluster analysis.
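
    A brief sketch of the two evaluation ideas above, leave-one-out cross-validation of a classification tree and bootstrap aggregation (bagging) of trees, using synthetic stand-ins for the FAME profiles:

        # Sketch: single tree vs. bagged trees, scored by leave-one-out cross-validation.
        import numpy as np
        from sklearn.ensemble import BaggingClassifier
        from sklearn.model_selection import LeaveOneOut, cross_val_score
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(5)
        X = rng.normal(size=(60, 25))        # placeholder FAME profiles
        y = rng.integers(0, 2, size=60)      # 0 = animal, 1 = raw sewage (synthetic labels)

        single = DecisionTreeClassifier(random_state=0)
        bagged = BaggingClassifier(DecisionTreeClassifier(), n_estimators=25, random_state=0)
        for name, clf in [("single tree", single), ("bagged trees", bagged)]:
            rate = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
            print(f"{name}: correct classification rate {rate:.2f}")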

  6. Bayes Error Rate Estimation Using Classifier Ensembles

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep

    2003-01-01

    The Bayes error rate gives a statistical lower bound on the error achievable for a given classification problem and the associated choice of features. By reliably estimating this rate, one can assess the usefulness of the feature set that is being used for classification. Moreover, by comparing the accuracy achieved by a given classifier with the Bayes rate, one can quantify how effective that classifier is. Classical approaches for estimating or finding bounds for the Bayes error, in general, yield rather weak results for small sample sizes, unless the problem has some simple characteristics, such as Gaussian class-conditional likelihoods. This article shows how the outputs of a classifier ensemble can be used to provide reliable and easily obtainable estimates of the Bayes error with negligible extra computation. Three methods of varying sophistication are described. First, we present a framework that estimates the Bayes error when multiple classifiers, each providing an estimate of the a posteriori class probabilities, are combined through averaging. Second, we bolster this approach by adding an information theoretic measure of output correlation to the estimate. Finally, we discuss a more general method that just looks at the class labels indicated by ensemble members and provides error estimates based on the disagreements among classifiers. The methods are illustrated for artificial data, a difficult four-class problem involving underwater acoustic data, and two benchmark problems. For data sets with known Bayes error, the combiner-based methods introduced in this article outperform existing methods. The estimates obtained by the proposed methods also seem quite reliable for the real-life data sets for which the true Bayes rates are unknown.
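
    A minimal sketch in the spirit of the first method above: average the posterior estimates of several classifiers and plug them into the Bayes-error expression E[1 - max_k p(k|x)]. This illustrates the idea rather than reproducing the article's estimators:

        # Sketch: plug-in Bayes-error estimate from ensemble-averaged posteriors.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier

        X, y = make_classification(n_samples=2000, n_informative=5, n_classes=3, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

        ensemble = [LogisticRegression(max_iter=1000).fit(X_tr, y_tr),
                    MLPClassifier(hidden_layer_sizes=(25,), max_iter=800, random_state=0).fit(X_tr, y_tr)]
        avg_post = np.mean([m.predict_proba(X_te) for m in ensemble], axis=0)

        print("plug-in Bayes error estimate:", np.mean(1.0 - avg_post.max(axis=1)))
        print("averaged-ensemble test error:", np.mean(avg_post.argmax(axis=1) != y_te))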

  7. A Novel Energy-Efficient Approach for Human Activity Recognition

    PubMed Central

    Zheng, Lingxiang; Wu, Dihong; Ruan, Xiaoyang; Weng, Shaolin; Tang, Biyu; Lu, Hai; Shi, Haibin

    2017-01-01

    In this paper, we propose a novel energy-efficient approach for a mobile activity recognition system (ARS) to detect human activities. The proposed energy-efficient ARS, using low sampling rates, can achieve high recognition accuracy and low energy consumption. A novel classifier that integrates a hierarchical support vector machine and context-based classification (HSVMCC) is presented to achieve high accuracy of activity recognition when the sampling rate is less than the activity frequency, i.e., the Nyquist sampling theorem is not satisfied. We tested the proposed energy-efficient approach with data collected from 20 volunteers (14 males and six females), and an average recognition accuracy of around 96.0% was achieved. Results show that using a low sampling rate of 1 Hz can save 17.3% and 59.6% of energy compared with sampling rates of 5 Hz and 50 Hz. The proposed low sampling rate approach can greatly reduce the power consumption while maintaining high activity recognition accuracy. The composition of power consumption in online ARS is also investigated in this paper. PMID:28885560

  8. Tissue classification and segmentation of pressure injuries using convolutional neural networks.

    PubMed

    Zahia, Sofia; Sierra-Sosa, Daniel; Garcia-Zapirain, Begonya; Elmaghraby, Adel

    2018-06-01

    This paper presents a new approach for automatic tissue classification in pressure injuries. These wounds are localized skin damage that needs frequent diagnosis and treatment. Therefore, reliable and accurate systems for segmentation and tissue type identification are needed in order to achieve better treatment results. Our proposed system is based on a Convolutional Neural Network (CNN) devoted to performing optimized segmentation of the different tissue types present in pressure injuries (granulation, slough, and necrotic tissues). A preprocessing step removes the flash light and creates a set of 5x5 sub-images which are used as input for the CNN network. The network output classifies every sub-image of the validation set into one of the three classes studied. The metrics used to evaluate our approach show an overall average classification accuracy of 92.01%, an average total weighted Dice Similarity Coefficient of 91.38%, and an average precision per class of 97.31% for granulation tissue, 96.59% for necrotic tissue, and 77.90% for slough tissue. Our system has been proven to make recognition of complicated structures in biomedical images feasible. Copyright © 2018 Elsevier B.V. All rights reserved.
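
    A minimal PyTorch sketch of a network that maps 5x5 RGB sub-images to the three tissue classes named above; the layer sizes are illustrative assumptions rather than the architecture reported in the paper:

        # Sketch: tiny CNN classifying 5x5 sub-images into granulation, slough, or necrotic tissue.
        import torch
        import torch.nn as nn

        class PatchCNN(nn.Module):
            def __init__(self, n_classes: int = 3):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 5x5 -> 5x5
                    nn.ReLU(),
                    nn.Conv2d(16, 32, kernel_size=3),            # 5x5 -> 3x3
                    nn.ReLU(),
                )
                self.classifier = nn.Linear(32 * 3 * 3, n_classes)

            def forward(self, x):
                return self.classifier(self.features(x).flatten(1))

        model = PatchCNN()
        patches = torch.randn(8, 3, 5, 5)   # a batch of synthetic 5x5 RGB sub-images
        predicted_class = model(patches).argmax(dim=1)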

  9. Optimal Methods for Classification of Digitally Modulated Signals

    DTIC Science & Technology

    2013-03-01

    Instead of using a ratio of likelihood functions, the proposed approach uses the Kullback-Leibler (KL) divergence. Blind demodulation was used to develop classification algorithms for a wider set of signal types, and two methodologies were used: a likelihood ratio test and the KL information divergence.

  10. Landsat TM Classifications For SAFIS Using FIA Field Plots

    Treesearch

    William H. Cooke; Andrew J. Hartsell

    2001-01-01

    Wall-to-wall Landsat Thematic Mapper (TM) classification efforts in Georgia require field validation. We developed a new crown modeling procedure based on Forest Health Monitoring (FHM) data to test Forest Inventory and Analysis (FIA) data. These models simulate the proportion of tree crowns that reflect light on a FIA subplot basis. We averaged subplot crown...

  11. Classification Agreement Analysis of Cross-Battery Assessment in the Identification of Specific Learning Disorders in Children and Youth

    ERIC Educational Resources Information Center

    Kranzler, John H.; Floyd, Randy G.; Benson, Nicholas; Zaboski, Brian; Thibodaux, Lia

    2016-01-01

    The Cross-Battery Assessment (XBA) approach to identifying a specific learning disorder (SLD) is based on the postulate that deficits in cognitive abilities in the presence of otherwise average general intelligence are causally related to academic achievement weaknesses. To examine this postulate, we conducted a classification agreement analysis…

  12. Hyperspectral microscopic analysis of normal, benign and carcinoma microarray tissue sections

    NASA Astrophysics Data System (ADS)

    Maggioni, Mauro; Davis, Gustave L.; Warner, Frederick J.; Geshwind, Frank B.; Coppi, Andreas C.; DeVerse, Richard A.; Coifman, Ronald R.

    2006-02-01

    We apply a unique micro-optoelectromechanical tuned light source and new algorithms to the hyper-spectral microscopic analysis of human colon biopsies. The tuned light prototype (Plain Sight Systems Inc.) transmits any combination of light frequencies, range 440 nm-700 nm, trans-illuminating H and E stained tissue sections of normal (N), benign adenoma (B) and malignant carcinoma (M) colon biopsies, through a Nikon Biophot microscope. Hyper-spectral photomicrographs, randomly collected at 400X magnification, are obtained with a CCD camera (Sensovation) from 59 different patient biopsies (20 N, 19 B, 20 M) mounted as a microarray on a single glass slide. The spectra of each pixel are normalized and analyzed to discriminate among tissue features: gland nuclei, gland cytoplasm and lamina propria/lumens. Spectral features permit the automatic extraction of 3298 nuclei with classification as N, B or M. When nuclei are extracted from each of the 59 biopsies the average classification among N, B and M nuclei is 97.1%; classification of the biopsies, based on the average nuclei classification, is 100%. However, when the nuclei are extracted from a subset of biopsies, and the prediction is made on nuclei in the remaining biopsies, there is a marked decrement in performance to 60% across the 3 classes. Similarly the biopsy classification drops to 54%. In spite of these classification differences, which we believe are due to instrument and biopsy normalization issues, hyper-spectral analysis has the potential to achieve the diagnostic efficiency needed for objective microscopic diagnosis.

  13. Development and content validity testing of a comprehensive classification of diagnoses for pediatric nurse practitioners.

    PubMed

    Burns, C

    1991-01-01

    Pediatric nurse practitioners (PNPs) need an integrated, comprehensive classification that includes nursing, disease, and developmental diagnoses to effectively describe their practice. No such classification exists. Further, methodologic studies to help evaluate the content validity of any nursing taxonomy are unavailable. A conceptual framework was derived. Then 178 diagnoses from the North American Nursing Diagnosis Association (NANDA) 1986 list, selected diagnoses from the International Classification of Diseases, the Diagnostic and Statistical Manual, Third Revision, and others were selected. This framework identified and listed, with definitions, three domains of diagnoses: Developmental Problems, Diseases, and Daily Living Problems. The diagnoses were ranked using a 4-point scale (4 = highly related to 1 = not related) and were placed into the three domains. The rating scale was assigned by a panel of eight expert pediatric nurses. Diagnoses that were assigned to the Daily Living Problems domain were then sorted into the 11 Functional Health patterns described by Gordon (1987). Reliability was measured using proportions of agreement and Kappas. Content validity of the groups created was measured using indices of content validity and average congruency percentages. The experts used a new method to sort the diagnoses in a new way that decreased overlaps among the domains. The Developmental and Disease domains were judged reliable and valid. The Daily Living domain of nursing diagnoses showed marginally acceptable validity with acceptable reliability. Six Functional Health Patterns were judged reliable and valid, mixed results were determined for four categories, and the Coping/Stress Tolerance category was judged reliable but not valid using either test. There were considerable differences between the panel's, Gordon's (1987), and NANDA's clustering of NANDA diagnoses. This study defines the diagnostic practice of nurses from a holistic, patient-centered perspective. It is the first study to use quantitative methods to test a diagnostic classification system for nursing. The classification model could also be adapted for other nurse specialties.

  14. Reverse correlating love: highly passionate women idealize their partner's facial appearance.

    PubMed

    Gunaydin, Gul; DeLong, Jordan E

    2015-01-01

    A defining feature of passionate love is idealization--evaluating romantic partners in an overly favorable light. Although passionate love can be expected to color how favorably individuals represent their partner in their mind, little is known about how passionate love is linked with visual representations of the partner. Using reverse correlation techniques for the first time to study partner representations, the present study investigated whether women who are passionately in love represent their partner's facial appearance more favorably than individuals who are less passionately in love. In a within-participants design, heterosexual women completed two forced-choice classification tasks, one for their romantic partner and one for a male acquaintance, and a measure of passionate love. In each classification task, participants saw two faces superimposed with noise and selected the face that most resembled their partner (or an acquaintance). Classification images for each of high passion and low passion groups were calculated by averaging across noise patterns selected as resembling the partner or the acquaintance and superimposing the averaged noise on an average male face. A separate group of women evaluated the classification images on attractiveness, trustworthiness, and competence. Results showed that women who feel high (vs. low) passionate love toward their partner tend to represent his face as more attractive and trustworthy, even when controlling for familiarity effects using the acquaintance representation. Using an innovative method to study partner representations, these findings extend our understanding of cognitive processes in romantic relationships.

  15. Ranking of predictor variables based on effect size criterion provides an accurate means of automatically classifying opinion column articles

    NASA Astrophysics Data System (ADS)

    Legara, Erika Fille; Monterola, Christopher; Abundo, Cheryl

    2011-01-01

    We demonstrate an accurate procedure based on linear discriminant analysis that allows automatic authorship classification of opinion column articles. First, we extract the following stylometric features of 157 column articles from four authors: statistics on high frequency words, number of words per sentence, and number of sentences per paragraph. Then, by systematically ranking these features based on an effect size criterion, we show that we can achieve an average classification accuracy of 93% for the test set. In comparison, frequency size based ranking has an average accuracy of 80%. The highest possible average classification accuracy of our data merely relying on chance is ∼31%. By carrying out sensitivity analysis, we show that the effect size criterion is superior to frequency ranking because there exist low frequency words that significantly contribute to successful author discrimination. Consistent results are seen when the procedure is applied in classifying the undisputed Federalist papers of Alexander Hamilton and James Madison. To the best of our knowledge, this work is the first attempt to classify opinion column articles, which, by virtue of being shorter (as compared to novels or short stories), are more prone to over-fitting issues. The near perfect classification for the longer papers supports this claim. Our results provide an important insight on authorship attribution that has been overlooked in previous studies: ranking discriminant variables based on word frequency counts is not necessarily an optimal procedure.
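
    The ranking-plus-LDA procedure described above can be sketched directly: each stylometric feature is scored with a one-way-ANOVA effect size (eta squared here), and the top-ranked subset feeds a linear discriminant classifier. The feature matrix below is synthetic, not the 157 column articles:

        # Sketch: effect-size feature ranking followed by linear discriminant analysis.
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(6)
        X = rng.normal(size=(157, 60))       # placeholder stylometric features
        y = rng.integers(0, 4, size=157)     # placeholder author labels

        def eta_squared(feature, labels):
            """Between-group sum of squares divided by total sum of squares."""
            grand = feature.mean()
            ss_total = ((feature - grand) ** 2).sum()
            ss_between = sum((labels == g).sum() * (feature[labels == g].mean() - grand) ** 2
                             for g in np.unique(labels))
            return ss_between / ss_total

        ranking = np.argsort([eta_squared(X[:, j], y) for j in range(X.shape[1])])[::-1]
        top = ranking[:15]                   # keep the most discriminative features
        acc = cross_val_score(LinearDiscriminantAnalysis(), X[:, top], y, cv=5).mean()
        print(f"average classification accuracy: {acc:.2f}")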

  16. Reverse Correlating Love: Highly Passionate Women Idealize Their Partner’s Facial Appearance

    PubMed Central

    Gunaydin, Gul; DeLong, Jordan E.

    2015-01-01

    A defining feature of passionate love is idealization—evaluating romantic partners in an overly favorable light. Although passionate love can be expected to color how favorably individuals represent their partner in their mind, little is known about how passionate love is linked with visual representations of the partner. Using reverse correlation techniques for the first time to study partner representations, the present study investigated whether women who are passionately in love represent their partner’s facial appearance more favorably than individuals who are less passionately in love. In a within-participants design, heterosexual women completed two forced-choice classification tasks, one for their romantic partner and one for a male acquaintance, and a measure of passionate love. In each classification task, participants saw two faces superimposed with noise and selected the face that most resembled their partner (or an acquaintance). Classification images for each of high passion and low passion groups were calculated by averaging across noise patterns selected as resembling the partner or the acquaintance and superimposing the averaged noise on an average male face. A separate group of women evaluated the classification images on attractiveness, trustworthiness, and competence. Results showed that women who feel high (vs. low) passionate love toward their partner tend to represent his face as more attractive and trustworthy, even when controlling for familiarity effects using the acquaintance representation. Using an innovative method to study partner representations, these findings extend our understanding of cognitive processes in romantic relationships. PMID:25806540

  17. Staging of chronic myeloid leukemia in the imatinib era: an evaluation of the World Health Organization proposal.

    PubMed

    Cortes, Jorge E; Talpaz, Moshe; O'Brien, Susan; Faderl, Stefan; Garcia-Manero, Guillermo; Ferrajoli, Alessandra; Verstovsek, Srdan; Rios, Mary B; Shan, Jenny; Kantarjian, Hagop M

    2006-03-15

    Several staging classification systems, all of which were designed in the preimatinib era, are used for chronic myeloid leukemia (CML). The World Health Organization (WHO) recently proposed a new classification system that has not been validated clinically. The authors investigated the significance of the WHO classification system and compared it with the classification systems used to date in imatinib trials ("standard definition") to determine its impact in establishing the outcome of patients after therapy with imatinib. In total, 809 patients who received imatinib for CML were classified into chronic phase (CP), accelerated phase (AP), and blast phase (BP) based on standard definitions and then were reclassified according to the new WHO classification system. Their outcomes with imatinib therapy were compared, and the value of individual components of these classification systems was determined. With the WHO classification, 78 patients (10%) were reclassified: 45 patients (6%) were reclassified from CP to AP, 14 patients (2%) were reclassified from AP to CP, and 19 patients (2%) were reclassified from AP to BP. The rates of complete cytogenetic response for patients in CP, AP, and BP according to the standard definition were 72%, 45%, and 8%, respectively. After these patients were reclassified according to WHO criteria, the response rates were 77% (P = 0.07), 39% (P = 0.28), and 11% (P = 0.61), respectively. The 3-year survival rates were 91%, 65%, and 10%, respectively, according to the standard classification and 95% (P = 0.05), 63% (P = 0.76), and 16% (P = 0.18), respectively, according to the WHO classification. Patients who had a blast percentage of 20-29%, which is considered CML-BP according to the WHO classification, had a better response rate (21% vs. 8%; P = 0.11) and a significantly better 3-year survival rate (42% vs. 10%; P = 0.0001) compared with patients who had blasts ≥ 30%. Different classification systems had an impact on the outcome of patients, and some prognostic features had different prognostic implications in the imatinib era. The authors believe that a new, uniform staging system for CML is warranted, and they propose such a system. © 2006 American Cancer Society.

  18. An international survey of classification and treatment choices for group D retinoblastoma

    PubMed Central

    Scelfo, Christina; Francis, Jasmine H; Khetan, Vikas; Jenkins, Thomas; Marr, Brian; Abramson, David H; Shields, Carol L; Pe'er, Jacob; Munier, Francis; Berry, Jesse; Harbour, J. William; Yarovoy, Andrey; Lucena, Evandro; Murray, Timothy G; Bhagia, Pooja; Paysse, Evelyn; Tuncer, Samuray; Chantada, Guillermo L; Moll, Annette C; Ushakova, Tatiana; Plager, David A; Ziyovuddin, Islamov; Leal, Carlos A; Materin, Miguel A; Ji, Xun-Da; Cursino, Jose W; Polania, Rodrigo; Kiratli, Hayyam; All-Ericsson, Charlotta; Kebudi, Rejin; Honavar, Santosh G; Vishnevskia-Dai, Vicktoria; Epelman, Sidnel; Daniels, Anthony B; Ling, Jeanie D; Traore, Fousseyni; Ramirez-Ortiz, Marco A

    2017-01-01

    AIM To determine which IIRC scheme was used by retinoblastoma centers worldwide and the percentage of group D eyes treated primarily with enucleation versus globe-salvaging therapies, as well as to correlate trends in treatment choice with the IIRC version used and geographic region. METHODS An anonymized electronic survey was offered to 115 physicians at 39 retinoblastoma centers worldwide asking about IIRC classification schemes and treatment patterns used between 2008 and 2012. Participants were asked to record which version of the IIRC was used for classification, how many group D eyes were diagnosed, and how many eyes were treated with enucleation versus globe-salvaging therapies. Averages of eyes per treatment modality were calculated and stratified by both IIRC version and geographic region. Statistical significance was determined by Chi-square, ANOVA and Kruskal-Wallis tests using Prism. RESULTS The survey was completed by 29% of the physicians invited to participate. In total, 1807 group D eyes were diagnosed. Regarding the IIRC system, 27% of centers used the Children's Hospital of Los Angeles (CHLA) version, 33% used the Children's Oncology Group (COG) version, 23% used the Philadelphia version, and 17% were unsure. The rate of primary enucleation varied between 0 and 100% and the mean was 29%. By IIRC version, primary enucleation rates were: Philadelphia, 8%; COG, 34%; and CHLA, 37%. By geographic region, primary enucleation rates were: Latin America, 57%; Asia, 40%; Europe, 36%; Africa, 10%; US, 8%; and Middle East, 8%. However, systemic chemoreduction was used more often than enucleation in all regions except Latin America, with a mean of 57% per center (P<0.0001). CONCLUSION Worldwide there is no consensus on which IIRC version is used; systemic chemoreduction was the most frequently used initial treatment during the study period, followed by enucleation; and the primary treatment modality, especially enucleation, varied greatly with regard to IIRC version used and geographic region. PMID:28730089

  19. Slopeland utilizable limitation classification using landslide inventory

    NASA Astrophysics Data System (ADS)

    Tsai, Shu Fen; Lin, Chao Yuan

    2016-04-01

    In 1976, the "Slopeland Conservation and Utilization Act" was promulgated, and the criteria for slopeland utilization limitation classification (SULC), i.e., average slope, effective soil depth, degree of soil erosion, and parent rock, became standardized. Because development areas on slopeland have steadily increased and extreme rainfall events occur frequently, the areas affected by landslides have also increased year by year. According to the act, land damaged by disaster must be categorized as conservation land and requires rehabilitation. Nevertheless, the scale of slopeland disasters and the limited number of SWCB officers constrain field investigation. Therefore, establishing an ongoing inspection procedure for post-disaster SULC using remote sensing was essential. The A-Li-Shan, Ai-Liao, and Tai-Ma-Li watersheds were selected as case studies in this project. Spatial data, i.e., a Digital Elevation Model (DEM), soil maps, and satellite images, were integrated with Geographic Information Systems (GIS) and applied to post-disaster SULC. A landslide inventory of collapse and deposition areas, delineated by vegetation recovery rate, was established for cadastral units combined with watershed units. The results were verified by field survey, with an accuracy of 97%. The landslide inventory can serve as an effective reference for sediment disaster investigation and as practical evidence when judging expropriation. The results showed that the ongoing inspection procedure for post-disaster SULC is practicable. Of the four criteria, average slope was the major factor. Non-uniform slopes, especially those derived from cadastral units, often produce significant slope differences and lead to errors in evaluating the average slope. Therefore, grid-based DEM slope derivation is recommended as the standard method to calculate the average slope. The other criteria were originally required to classify farmland for tax purposes; however, as a result of environmental change and advancements in farm machinery, those criteria now seem inappropriate for agricultural land. In conclusion, soil and water conservation works, which have been strengthened for disaster prevention under climate change, should reconsider the SULC criteria. The average slope derived from the DEM and the sediment disasters recorded in the landslide inventory are suggested as adequate for SULC.
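
    Grid-based slope derivation, recommended above as the standard way to obtain the average slope, can be sketched with finite differences on the DEM grid (cell size and elevations below are hypothetical):

        # Sketch: average slope of a unit from a gridded DEM via finite differences.
        import numpy as np

        cell_size = 20.0                                       # assumed DEM resolution (m)
        rng = np.random.default_rng(7)
        dem = np.cumsum(rng.normal(size=(100, 100)), axis=0)   # synthetic elevations (m)

        dz_dy, dz_dx = np.gradient(dem, cell_size)             # first-order finite differences
        slope_percent = np.hypot(dz_dx, dz_dy) * 100.0         # rise over run, in percent
        print("average slope of this unit: %.1f%%" % slope_percent.mean())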

  20. Importance of ICD-10 coding directive change for acute gastroenteritis (unspecified) for rotavirus vaccine impact studies: illustration from a population-based cohort study from Ontario, Canada.

    PubMed

    Wilson, Sarah E; Deeks, Shelley L; Rosella, Laura C

    2015-09-15

    In Ontario, Canada, we conducted an evaluation of rotavirus (RV) vaccine impact on hospitalizations and Emergency Department (ED) visits for acute gastroenteritis (AGE). In our original analysis, any one of the following International Classification of Diseases, Version 10 (ICD-10) codes was used for outcome ascertainment: RV-specific (A08.0), viral (A08.3, A08.4, A08.5), and unspecified infectious gastroenteritis (A09). Annual age-specific rates per 10,000 population were calculated. The average monthly rate of AGE hospitalization for children under age two increased from 0.82 per 10,000 from January 2003 to March 2009, to 2.35 over the period of April 2009 to March 31, 2013. Similar trends were found for ED consultations and in other age groups. A rise in events corresponding to the A09 code was found when the outcome definition was disaggregated by ICD-10 code. Documentation obtained from the World Health Organization confirmed that a change in directive for the classification of unspecified gastroenteritis occurred with the release of ICD-10 in April 2009. AGE events previously classified under the code K52.9 are now classified under code A09.9. Based on this change in the classification of unspecified gastroenteritis, we modified our outcome definition to also include unspecified non-infectious gastroenteritis (K52.9). We recommend other investigators consider using both the A09.9 and K52.9 ICD-10 codes for outcome ascertainment in future rotavirus vaccine impact studies to ensure that all unspecified cases of AGE are captured, especially if the study period spans 2009.

  1. Exploring the Impact of Target Eccentricity and Task Difficulty on Covert Visual Spatial Attention and Its Implications for Brain Computer Interfacing

    PubMed Central

    Roijendijk, Linsey; Farquhar, Jason; van Gerven, Marcel; Jensen, Ole; Gielen, Stan

    2013-01-01

    Objective Covert visual spatial attention is a relatively new task used in brain computer interfaces (BCIs) and little is known about the characteristics which may affect performance in BCI tasks. We investigated whether eccentricity and task difficulty affect alpha lateralization and BCI performance. Approach We conducted a magnetoencephalography study with 14 participants who performed a covert orientation discrimination task at an easy or difficult stimulus contrast at either a near (3.5°) or far (7°) eccentricity. Task difficulty was manipulated block wise and subjects were aware of the difficulty level of each block. Main Results Grand average analyses revealed a significantly larger hemispheric lateralization of posterior alpha power in the difficult condition than in the easy condition, while surprisingly no difference was found for eccentricity. The difference between task difficulty levels was significant in the interval between 1.85 s and 2.25 s after cue onset and originated from a stronger decrease in the contralateral hemisphere. No significant effect of eccentricity was found. Additionally, single-trial classification analysis revealed a higher classification rate in the difficult (65.9%) than in the easy task condition (61.1%). No effect of eccentricity was found in classification rate. Significance Our results indicate that manipulating the difficulty of a task gives rise to variations in alpha lateralization and that using a more difficult task improves covert visual spatial attention BCI performance. The variations in the alpha lateralization could be caused by different factors such as an increased mental effort or a higher visual attentional demand. Further research is necessary to discriminate between them. We did not discover any effect of eccentricity in contrast to results of previous research. PMID:24312477
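
    The hemispheric alpha-lateralization measure underlying the analysis above can be sketched as a simple index of band power in the two hemispheres; the band limits, sampling rate, and signals below are illustrative assumptions, not the study's MEG pipeline:

        # Sketch: posterior alpha power per hemisphere and a lateralization index.
        import numpy as np

        fs = 250                                    # assumed sampling rate (Hz)
        rng = np.random.default_rng(8)
        left = rng.normal(size=4 * fs)              # posterior left-hemisphere signal, one trial
        right = rng.normal(size=4 * fs)             # posterior right-hemisphere signal, one trial

        def alpha_power(x, fs, band=(8.0, 12.0)):
            """Mean power in the alpha band from a periodogram estimate."""
            spectrum = np.abs(np.fft.rfft(x)) ** 2 / len(x)
            freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
            mask = (freqs >= band[0]) & (freqs <= band[1])
            return spectrum[mask].mean()

        p_left, p_right = alpha_power(left, fs), alpha_power(right, fs)
        lateralization_index = (p_right - p_left) / (p_right + p_left)
        print(f"alpha lateralization index: {lateralization_index:+.3f}")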

  2. Exploring the impact of target eccentricity and task difficulty on covert visual spatial attention and its implications for brain computer interfacing.

    PubMed

    Roijendijk, Linsey; Farquhar, Jason; van Gerven, Marcel; Jensen, Ole; Gielen, Stan

    2013-01-01

    Covert visual spatial attention is a relatively new task used in brain computer interfaces (BCIs) and little is known about the characteristics which may affect performance in BCI tasks. We investigated whether eccentricity and task difficulty affect alpha lateralization and BCI performance. We conducted a magnetoencephalography study with 14 participants who performed a covert orientation discrimination task at an easy or difficult stimulus contrast at either a near (3.5°) or far (7°) eccentricity. Task difficulty was manipulated block wise and subjects were aware of the difficulty level of each block. Grand average analyses revealed a significantly larger hemispheric lateralization of posterior alpha power in the difficult condition than in the easy condition, while surprisingly no difference was found for eccentricity. The difference between task difficulty levels was significant in the interval between 1.85 s and 2.25 s after cue onset and originated from a stronger decrease in the contralateral hemisphere. No significant effect of eccentricity was found. Additionally, single-trial classification analysis revealed a higher classification rate in the difficult (65.9%) than in the easy task condition (61.1%). No effect of eccentricity was found in classification rate. Our results indicate that manipulating the difficulty of a task gives rise to variations in alpha lateralization and that using a more difficult task improves covert visual spatial attention BCI performance. The variations in the alpha lateralization could be caused by different factors such as an increased mental effort or a higher visual attentional demand. Further research is necessary to discriminate between them. We did not discover any effect of eccentricity in contrast to results of previous research.

  3. Environmental Gradient Analysis, Ordination, and Classification in Environmental Impact Assessments.

    DTIC Science & Technology

    1987-09-01

    agglomerative clustering algorithms for mainframe computers: (1) the unweighted pair-group method that uses arithmetic averages (UPGMA), (2) the...hierarchical agglomerative unweighted pair-group method using arithmetic averages (UPGMA), which is also called average linkage clustering. This method was...dendrograms produced by weighted clustering (93). Sneath and Sokal (94), Romesburg (84), and Seber (90) also strongly recommend the UPGMA. A dendrogram

  4. Structural brain changes versus self-report: machine-learning classification of chronic fatigue syndrome patients.

    PubMed

    Sevel, Landrew S; Boissoneault, Jeff; Letzen, Janelle E; Robinson, Michael E; Staud, Roland

    2018-05-30

    Chronic fatigue syndrome (CFS) is a disorder associated with fatigue, pain, and structural/functional abnormalities seen during magnetic resonance brain imaging (MRI). Therefore, we evaluated the performance of structural MRI (sMRI) abnormalities in the classification of CFS patients versus healthy controls and compared it to machine learning (ML) classification based upon self-report (SR). Participants included 18 CFS patients and 15 healthy controls (HC). All subjects underwent T1-weighted sMRI and provided visual analogue-scale ratings of fatigue, pain intensity, anxiety, depression, anger, and sleep quality. sMRI data were segmented using FreeSurfer and 61 regions based on functional and structural abnormalities previously reported in patients with CFS. Classification was performed in RapidMiner using a linear support vector machine and bootstrap optimism correction. We compared ML classifiers based on (1) 61 a priori sMRI regional estimates and (2) SR ratings. The sMRI model achieved 79.58% classification accuracy. The SR (accuracy = 95.95%) outperformed both sMRI models. Estimates from multiple brain areas related to cognition, emotion, and memory contributed strongly to group classification. This is the first ML-based group classification of CFS. Our findings suggest that sMRI abnormalities are useful for discriminating CFS patients from HC, but SR ratings remain most effective in classification tasks.

  5. Organ transplant AN-DRGs: modifying the exceptions hierarchy in casemix classification.

    PubMed

    Antioch, K; Zhang, X

    2000-01-01

    The study described in this article sought to develop AN-DRG Version 3 classification revisions for organ transplantation through statistical analyses of recommendations formulated by the Australian Casemix Clinical Committee. Two separate analyses of variance were undertaken for AN-DRG Version 2 and for the proposed Version 3 AN-DRGs, using average length of stay as the dependent variable. The committee made four key recommendations which were accepted and incorporated into AN-DRG Versions 3 and 3.1. This article focuses on the classification revisions for organ transplantation.

  6. LANDSAT applications to wetlands classification in the upper Mississippi River Valley. Ph.D. Thesis. Final Report

    NASA Technical Reports Server (NTRS)

    Lillesand, T. M.; Werth, L. F. (Principal Investigator)

    1980-01-01

    A 25% improvement in average classification accuracy was realized by processing double-date vs. single-date data. Under the spectrally and spatially complex site conditions characterizing the geographical area used, further improvement in wetland classification accuracy is apparently precluded by the spectral and spatial resolution restrictions of the LANDSAT MSS. Full scene analysis of scanning densitometer data extracted from small-scale infrared photography failed to permit discrimination of many wetland and nonwetland cover types. When classification of photographic data was limited to wetland areas only, much more detailed and accurate classification could be made. The integration of conventional image interpretation (to simply delineate wetland boundaries) and machine assisted classification (to discriminate among cover types present within the wetland areas) appears to warrant further research to study the feasibility and cost of extending this methodology over a large area using LANDSAT and/or small scale photography.

  7. Site Classification using Multichannel Channel Analysis of Surface Wave (MASW) method on Soft and Hard Ground

    NASA Astrophysics Data System (ADS)

    Ashraf, M. A. M.; Kumar, N. S.; Yusoh, R.; Hazreek, Z. A. M.; Aziman, M.

    2018-04-01

    Site classification based on the average shear wave velocity over the upper 30 meters of ground, Vs(30), is a widely used approach. Numerous geophysical methods have been proposed for estimating shear wave velocity, using a variety of testing configurations, processing methods, and inversion algorithms. The Multichannel Analysis of Surface Waves (MASW) method is widely practised by geotechnical specialists and professionals for local site characterization and classification. This study aims to determine the site classification on soft and hard ground using the MASW method. The subsurface classification was made using the National Earthquake Hazards Reduction Program (NEHRP) and International Building Code (IBC) schemes. Two sites were chosen for shear wave velocity acquisition: a soft-soil site in the state of Pulau Pinang and a hard-rock site in Perlis. The results indicate that the MASW technique can be used to map the spatial distribution of shear wave velocity, Vs(30), in soil and rock to characterize areas.
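
    The Vs(30) parameter underlying the NEHRP/IBC site classes is the time-averaged shear wave velocity over the top 30 m of the profile. A minimal sketch of this calculation is shown below; the layer thicknesses, velocities, and class boundaries in the comments are illustrative assumptions, not values from the study.

    ```python
    # Minimal sketch (not from the paper): computing Vs(30) from a layered
    # shear-wave velocity profile, as used in NEHRP/IBC site classification.
    # Layer thicknesses and velocities below are made-up example values.

    def vs30(thicknesses_m, velocities_mps):
        """Time-averaged shear wave velocity over the top 30 m:
        Vs30 = 30 / sum(d_i / Vs_i), where the d_i sum to 30 m."""
        depth = 0.0
        travel_time = 0.0
        for d, v in zip(thicknesses_m, velocities_mps):
            d = min(d, 30.0 - depth)       # clip the profile at 30 m depth
            if d <= 0:
                break
            travel_time += d / v
            depth += d
        return depth / travel_time         # equals 30/sum(d_i/Vs_i) when depth == 30

    # Example: three layers (5 m, 10 m, 20 m) with increasing stiffness.
    vs = vs30([5, 10, 20], [180, 350, 760])
    print(round(vs, 1))  # e.g. NEHRP class D if 180 <= Vs30 < 360, C if 360 <= Vs30 < 760
    ```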

  8. Interobserver and intraobserver reliability of the modified Waldenström classification system for staging of Legg-Calvé-Perthes disease.

    PubMed

    Hyman, Joshua E; Trupia, Evan P; Wright, Margaret L; Matsumoto, Hiroko; Jo, Chan-Hee; Mulpuri, Kishore; Joseph, Benjamin; Kim, Harry K W

    2015-04-15

    The absence of a reliable classification system for Legg-Calvé-Perthes disease has contributed to difficulty in establishing consistent management strategies and in interpreting outcome studies. The purpose of this study was to assess interobserver and intraobserver reliability of the modified Waldenström classification system among a large and diverse group of pediatric orthopaedic surgeons. Twenty surgeons independently completed the first two rounds of staging: two assessments of forty deidentified radiographs of patients with Legg-Calvé-Perthes disease in various stages. Ten of the twenty surgeons completed another two rounds of staging after the addition of a second pair of radiographs in sequence. Kappa values were calculated within and between each of the rounds. Interobserver kappa values for the classification for surveys 1, 2, 3, and 4 were 0.81, 0.82, 0.76, and 0.80, respectively (with 0.61 to 0.80 considered substantial agreement and 0.81 to 1.0, nearly perfect agreement). Intraobserver agreement for the classification was an average of 0.88 (range, 0.77 to 0.96) between surveys 1 and 2 and an average of 0.87 (range, 0.81 to 0.94) between surveys 3 and 4. The modified Waldenström classification system for staging of Legg-Calvé-Perthes disease demonstrated substantial to almost perfect agreement between and within observers across multiple rounds of study. In doing so, the results of this study provide a foundation for future validation studies, in which the classification stage will be associated with clinical outcomes. Copyright © 2015 by The Journal of Bone and Joint Surgery, Incorporated.
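
    The kappa statistic reported above corrects observed agreement for chance agreement, kappa = (p_o - p_e) / (1 - p_e). The snippet below is a hedged two-rater illustration using scikit-learn's cohen_kappa_score; the stage labels and ratings are invented, and the study's multi-rater analysis would call for a multi-rater statistic such as Fleiss' kappa.

    ```python
    # Illustrative only: interobserver Cohen's kappa for two raters' staging of
    # the same radiographs (stage labels and ratings are invented).
    from sklearn.metrics import cohen_kappa_score

    rater_a = ["I", "IIa", "IIa", "IIb", "IIIa", "IIIb", "IV", "IIa", "I", "IIIa"]
    rater_b = ["I", "IIa", "IIa", "IIa", "IIIa", "IIIb", "IV", "IIb", "I", "IIIa"]

    # kappa = (p_o - p_e) / (1 - p_e): observed agreement corrected for chance
    kappa = cohen_kappa_score(rater_a, rater_b)
    print(round(kappa, 2))  # 0.61-0.80 substantial, 0.81-1.00 almost perfect agreement
    ```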

  9. High temporal resolution of extreme rainfall rate variability and the acoustic classification of rainfall

    NASA Astrophysics Data System (ADS)

    Nystuen, Jeffrey A.; Amitai, Eyal

    2003-04-01

    The underwater sound generated by raindrop splashes on a water surface is loud and unique allowing detection, classification and quantification of rainfall. One of the advantages of the acoustic measurement is that the listening area, an effective catchment area, is proportional to the depth of the hydrophone and can be orders of magnitude greater than other in situ rain gauges. This feature allows high temporal resolution of the rainfall measurement. A series of rain events with extremely high rainfall rates, over 100 mm/hr, is examined acoustically. Rapid onset and cessation of rainfall intensity are detected within the convective cells of these storms with maximum 5-s resolution values exceeding 1000 mm/hr. The probability distribution functions (pdf) for rainfall rate occurrence and water volume using the longer temporal resolutions typical of other instruments do not include these extreme values. The variance of sound intensity within different acoustic frequency bands can be used as an aid to classify rainfall type. Objective acoustic classification algorithms are proposed. Within each rainfall classification the relationship between sound intensity and rainfall rate is nearly linear. The reflectivity factor, Z, also has a linear relationship with rainfall rate, R, for each rainfall classification.

  10. Molecular classifications of breast carcinoma with similar terminology and different definitions: are they the same?

    PubMed

    Tang, Ping; Wang, Jianmin; Bourne, Patria

    2008-04-01

    There are 4 major molecular classifications in the literature that divide breast carcinoma into basal and nonbasal subtypes, with basal subtypes associated with poor prognosis. Basal subtype is defined as positive for cytokeratin (CK) 5/6, CK14, and/or CK17 in CK classification; negative for ER, PR, and HER2 in triple negative (TN) classification; negative for ER and negative or positive for HER2 in ER/HER2 classification; and positive for CK5/6, CK14, CK17, and/or EGFR and negative for ER, PR, and HER2 in CK/TN classification. These classifications use similar terminology but different definitions; it is critical to understand the precise relationship between them. We compared these 4 classifications in 195 breast carcinomas and found that (1) the rates of basal subtypes varied from 5% to 36% for ductal carcinoma in situ and 14% to 40% for invasive ductal carcinoma. (2) The rates of basal subtypes varied from 19% to 76% for HG carcinoma and 1% to 7% for NHG carcinoma. (3) The rates of basal subtypes were strongly associated with tumor grades (P < .001) in all classifications and associated with tumor types (in situ versus invasive ductal carcinomas) in TN (P < .001) and CK/TN classifications (P = .035). (4) These classifications were related but not interchangeable (kappa ranges from 0.140 to 0.658 for HG carcinoma and from 0.098 to 0.654 for NHG carcinoma). In conclusion, although these classifications all divide breast carcinoma into basal and nonbasal subtypes, they are not interchangeable. More studies are needed to evaluate their value in predicting prognosis and guiding individualized therapy.

  11. Modelling the Happiness Classification of Addicted, Addiction Risk, Threshold and Non-Addicted Groups on Internet Usage

    ERIC Educational Resources Information Center

    Sapmaz, Fatma; Totan, Tarik

    2018-01-01

    The aim of this study is to model the happiness classification of university students--grouped as addicted, addiction risk, threshold and non-addicted to internet usage--with compatibility analysis on a map as happiness, average and unhappiness. The participants in this study were 400 university students from Turkey. According to the results of…

  12. Wheat cultivation: Identifying and estimating area by means of LANDSAT data

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Mendonca, F. J.; Cottrell, D. A.; Tardin, A. T.; Lee, D. C. L.; Shimabukuro, Y. E.; Moreira, M. A.; Delima, A. M.; Maia, F. C. S.

    1981-01-01

    Automatic classification of LANDSAT data supported by aerial photography for identification and estimation of wheat growing areas was evaluated. Data covering three regions in the State of Rio Grande do Sul, Brazil were analyzed. The average correct classification of IMAGE-100 data was 51.02% and 63.30%, respectively, for the periods of July and of September/October, 1979.

  13. Subject-Adaptive Real-Time Sleep Stage Classification Based on Conditional Random Field

    PubMed Central

    Luo, Gang; Min, Wanli

    2007-01-01

    Sleep staging is the pattern recognition task of classifying sleep recordings into sleep stages. This task is one of the most important steps in sleep analysis. It is crucial for the diagnosis and treatment of various sleep disorders, and also relates closely to brain-machine interfaces. We report an automatic, online sleep stager using electroencephalogram (EEG) signal based on a recently-developed statistical pattern recognition method, conditional random field, and novel potential functions that have explicit physical meanings. Using sleep recordings from human subjects, we show that the average classification accuracy of our sleep stager almost approaches the theoretical limit and is about 8% higher than that of existing systems. Moreover, for a new subject s_new with limited training data D_new, we perform subject adaptation to improve classification accuracy. Our idea is to use the knowledge learned from old subjects to obtain from D_new a regulated estimate of CRF’s parameters. Using sleep recordings from human subjects, we show that even without any D_new, our sleep stager can achieve an average classification accuracy of 70% on s_new. This accuracy increases with the size of D_new and eventually becomes close to the theoretical limit. PMID:18693884

  14. Classification of Pelteobagrus fish in Poyang Lake based on mitochondrial COI gene sequence.

    PubMed

    Zhong, Bin; Chen, Ting-Ting; Gong, Rui-Yue; Zhao, Zhe-Xia; Wang, Binhua; Fang, Chunlin; Mao, Hui-Ling

    2016-11-01

    We used DNA molecular marker technology to correct the deficiencies of traditional morphological taxonomy. A total of 770 Pelteobagrus fish from Poyang Lake were collected. After preliminary morphological classification, eight samples of each species were randomly selected for DNA extraction. The mitochondrial COI gene sequence was cloned with universal primers and sequenced. The results showed that there are four species of Pelteobagrus living in Poyang Lake. The average intraspecific genetic distance was 0.003, while the average interspecific genetic distance was 0.128; the interspecific genetic distance is thus far greater than the intraspecific distance. Besides, phylogenetic tree analysis revealed that the molecular systematics was in accord with the morphological classification, indicating that the COI gene is an effective DNA molecular marker for Pelteobagrus classification. Surprisingly, the intraspecific difference of some individuals (P. e6, P. n6, P. e5, and P. v4) from their originally assigned species exceeded the species threshold (2%), so these should be reclassified as Pelteobagrus fulvidraco. Another individual, P. v3, was markedly different, with a genetic distance of over 8.4% from its originally assigned species, Pelteobagrus vachelli; its taxonomic status remains to be studied further.

  15. On the effect of subliminal priming on subjective perception of images: a machine learning approach.

    PubMed

    Kumar, Parmod; Mahmood, Faisal; Mohan, Dhanya Menoth; Wong, Ken; Agrawal, Abhishek; Elgendi, Mohamed; Shukla, Rohit; Dauwels, Justin; Chan, Alice H D

    2014-01-01

    The research presented in this article investigates the influence of subliminal prime words on people's judgment of images, through electroencephalograms (EEGs). In this cross-domain priming paradigm, the participants are asked to rate how much they like the stimulus images, on a 7-point Likert scale, after being subliminally exposed to masked lexical prime words, with EEG recorded simultaneously. Statistical analysis tools are used to analyze the effect of priming on behavior, and machine learning techniques to infer the primes from EEGs. The experiment reveals strong effects of subliminal priming on the participants' explicit rating of images. The priming-induced shift in subjective judgment is also visible in the event-related potentials (ERPs): results show larger ERP amplitudes for negative primes compared with positive and neutral primes. In addition, Support Vector Machine (SVM) based classifiers are proposed to infer the prime types from the average ERPs, yielding a classification rate of 70%.

  16. Particle Swarm Optimization approach to defect detection in armour ceramics.

    PubMed

    Kesharaju, Manasa; Nagarajah, Romesh

    2017-03-01

    In this research, various extracted features were used in the development of an automated ultrasonic sensor based inspection system that enables defect classification in each ceramic component prior to despatch to the field. Classification is an important task, and the large number of irrelevant, redundant features commonly introduced into a dataset reduces the classifier's performance. Feature selection aims to reduce the dimensionality of the dataset while improving the performance of a classification system. In the context of a multi-criteria optimization problem (i.e. minimizing the classification error rate and reducing the number of features) such as the one discussed in this research, the literature suggests that evolutionary algorithms offer good results. It is also noted that Particle Swarm Optimization (PSO) has been little explored in the field of classification of high-frequency ultrasonic signals. Hence, a binary coded Particle Swarm Optimization (BPSO) technique is investigated for feature subset selection and to optimize the classification error rate. In the proposed method, the population data is used as input to an Artificial Neural Network (ANN) based classification system to obtain the error rate, as the ANN serves as the evaluator of the PSO fitness function. Copyright © 2016. Published by Elsevier B.V.
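
    To make the BPSO-plus-ANN loop concrete, the sketch below shows a binary particle swarm in which each particle encodes a feature mask and the fitness is the cross-validated error rate of a small MLP trained on the selected features. The dataset, swarm size, and PSO constants are placeholder assumptions; this is not the authors' implementation.

    ```python
    # Hedged sketch (not the authors' code): binary PSO feature selection where an
    # MLP classifier's cross-validated error rate serves as the fitness function.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X, y = load_breast_cancer(return_X_y=True)          # stand-in dataset
    n_particles, n_features, n_iter = 8, X.shape[1], 10
    w, c1, c2 = 0.7, 1.5, 1.5                            # inertia and acceleration constants

    def error_rate(mask):
        if not mask.any():                               # penalise empty feature subsets
            return 1.0
        clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=300, random_state=0)
        return 1.0 - cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

    pos = rng.integers(0, 2, (n_particles, n_features))  # binary feature masks
    vel = rng.normal(0, 1, (n_particles, n_features))
    pbest = pos.copy()
    pbest_fit = np.array([error_rate(p) for p in pos])
    gbest = pbest[pbest_fit.argmin()].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random(vel.shape), rng.random(vel.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = (rng.random(vel.shape) < 1 / (1 + np.exp(-vel))).astype(int)  # sigmoid transfer
        fit = np.array([error_rate(p) for p in pos])
        improved = fit < pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmin()].copy()

    print("selected features:", int(gbest.sum()), "error rate:", round(pbest_fit.min(), 3))
    ```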

  17. The quality of life of Brazilian adolescents with asthma: associated clinical and sociodemographic factors.

    PubMed

    Amaral, Lígia Menezes do; Moratelli, Lucas; Palma, Pamella Valente; Leite, Isabel Cristina Gonçalves

    2014-08-01

    Asthma is the most common chronic disease among adolescents. This study assessed the quality of life (QOL) related to health in adolescents with asthma and its determining factors (demographic, socioeconomic, and clinical). We also separately evaluated each of the parameters that comprised the asthma control classification. This was an observational, cross-sectional study of 114 adolescents who had doctor-diagnosed asthma. QOL was assessed using a version of the Pediatric Asthma Quality of Life Questionnaire (PAQLQ) that was adapted and validated for Brazil, and higher scores indicated a better QOL. The level of asthma control was assessed using the rating system proposed by the Global Initiative for Asthma, and sociodemographic factors were evaluated. When the averages of the PAQLQ domains and overall scores were compared to the potentially explanatory variables, significantly lower average PAQLQ scores were obtained for individuals with an inadequate level of asthma control (p < 0.001). Of the control components, daytime symptoms, nighttime symptoms, and limited physical activity were related to QOL. However, the use of the β2 agonist and the peak flow functional parameter were not related to QOL. The level of asthma control was related to QOL, but this association manifested mainly in the subjective control domains, such as nighttime and daytime symptoms and physical activity limitations. The objective domain for control classification, represented by pulmonary function, was not an independent predictor or determinant of the QOL of adolescent asthma patients.

  18. 48 CFR 47.305-9 - Commodity description and freight classification.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... of previously shipped items, and different freight classifications may apply, the contracting officer... freight classification. 47.305-9 Section 47.305-9 Federal Acquisition Regulations System FEDERAL... Commodity description and freight classification. (a) Generally, the freight rate for supplies is based on...

  19. 48 CFR 47.305-9 - Commodity description and freight classification.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... of previously shipped items, and different freight classifications may apply, the contracting officer... freight classification. 47.305-9 Section 47.305-9 Federal Acquisition Regulations System FEDERAL... Commodity description and freight classification. (a) Generally, the freight rate for supplies is based on...

  20. 48 CFR 47.305-9 - Commodity description and freight classification.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... of previously shipped items, and different freight classifications may apply, the contracting officer... freight classification. 47.305-9 Section 47.305-9 Federal Acquisition Regulations System FEDERAL... Commodity description and freight classification. (a) Generally, the freight rate for supplies is based on...

  1. 48 CFR 47.305-9 - Commodity description and freight classification.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... of previously shipped items, and different freight classifications may apply, the contracting officer... freight classification. 47.305-9 Section 47.305-9 Federal Acquisition Regulations System FEDERAL... Commodity description and freight classification. (a) Generally, the freight rate for supplies is based on...

  2. Multilingual vocal emotion recognition and classification using back propagation neural network

    NASA Astrophysics Data System (ADS)

    Kayal, Apoorva J.; Nirmal, Jagannath

    2016-03-01

    This work implements classification of different emotions in different languages using Artificial Neural Networks (ANN). Mel Frequency Cepstral Coefficients (MFCC) and Short Term Energy (STE) have been considered for creation of feature set. An emotional speech corpus consisting of 30 acted utterances per emotion has been developed. The emotions portrayed in this work are Anger, Joy and Neutral in each of English, Marathi and Hindi languages. Different configurations of Artificial Neural Networks have been employed for classification purposes. The performance of the classifiers has been evaluated by False Negative Rate (FNR), False Positive Rate (FPR), True Positive Rate (TPR) and True Negative Rate (TNR).
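
    A hedged sketch of such a pipeline is given below: MFCC and short-term energy (STE) features are summarised per utterance and fed to a backpropagation (MLP) network. The file names, labels, and network size are hypothetical, and librosa is assumed for feature extraction; this is not the paper's code.

    ```python
    # Illustrative sketch only: MFCC + short-term energy features per utterance,
    # classified with a backpropagation neural network. Paths/labels are placeholders.
    import numpy as np
    import librosa
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    def features(path, n_mfcc=13, frame=1024, hop=512):
        y, sr = librosa.load(path, sr=None)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)          # (n_mfcc, frames)
        frames = librosa.util.frame(y, frame_length=frame, hop_length=hop)
        ste = np.sum(frames ** 2, axis=0)                                # short-term energy
        # summarise each utterance by statistics so all feature vectors have equal length
        return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1),
                               [ste.mean(), ste.std()]])

    # hypothetical corpus: (wav file, emotion label) pairs, 30 utterances per emotion
    corpus = [("anger_en_01.wav", "anger"), ("joy_hi_01.wav", "joy"),
              ("neutral_mr_01.wav", "neutral")]  # ...
    X = np.array([features(p) for p, _ in corpus])
    y = np.array([lab for _, lab in corpus])

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000).fit(X_tr, y_tr)
    print("accuracy:", net.score(X_te, y_te))  # TPR/FPR/TNR/FNR follow from the confusion matrix
    ```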

  3. [Effect factors analysis of knee function recovery after distal femoral fracture operation].

    PubMed

    Bei, Chaoyong; Wang, Ruiying; Tang, Jicun; Li, Qiang

    2009-09-01

    To investigate the factors affecting knee function recovery after operation for distal femoral fractures. From January 2001 to May 2007, 92 cases of distal femoral fracture were treated. There were 50 males and 42 females, aged 20-77 years (average 46.7 years). Fracture was caused by traffic accident in 48 cases, by falling from height in 26 cases, by bruise in 12 cases and by tumble in 6 cases. According to Müller's fracture classification, there were 29 cases of type A, 12 cases of type B and 51 cases of type C. According to the American Society of Anesthesiologists (ASA) classification, there were 21 cases of grade I, 39 cases of grade II, 24 cases of grade III, and 8 cases of grade IV. The time from injury to operation was 4 hours to 24 days with an average of 7 days. An anatomical plate was used in 43 cases, a retrograde interlocking intramedullary nail in 37 cases, and bone screws, bolts and internal fixation with Kirschner pins in 12 cases. After operation, the HSS knee function score was used to evaluate efficacy. Ten factors potentially related to knee function recovery after operation were analysed statistically: age, sex, preoperative ASA classification, time from injury to surgery, fracture type, treatment, reduction quality, functional exercise after operation, whether or not CPM functional training was performed, and postoperative complications. Wounds healed by first intention in 88 cases; infection occurred in 4 cases. All patients were followed up for 16-32 months with an average of 23.1 months. Clinical union of the fracture was achieved within 3-7 months after operation. Extensor device adhesions with a range of motion of <80 degrees occurred in 29 cases, traumatic arthritis in 25 cases, postoperative fracture displacement in 6 cases, mild knee varus or valgus in 7 cases and implant loosening in 6 cases. According to the HSS knee function score, the results were excellent in 52 cases, good in 15 cases, fair in 10 cases and poor in 15 cases, with an excellent and good rate of 72.83%. Single factor analysis showed that age, preoperative ASA classification, fracture type, reduction quality, whether or not CPM functional exercise was performed, and postoperative complications were significantly associated with knee function recovery (P < 0.05). Logistic regression analysis showed that fracture type, quality of reduction, whether or not CPM functional exercise was performed, and age were the major factors in knee joint function recovery. Age, preoperative ASA classification, fracture type, reduction quality, CPM functional training and postoperative complications may affect knee joint function recovery. Optimizing the patient's preoperative physical status, achieving anatomic reduction and firm fixation of the fracture, starting early postoperative active and passive functional exercises, and reducing postoperative complications can maximize the restoration of knee joint function.

  4. GPU-Based Point Cloud Superpositioning for Structural Comparisons of Protein Binding Sites.

    PubMed

    Leinweber, Matthias; Fober, Thomas; Freisleben, Bernd

    2018-01-01

    In this paper, we present a novel approach to solve the labeled point cloud superpositioning problem for performing structural comparisons of protein binding sites. The solution is based on a parallel evolution strategy that operates on large populations and runs on GPU hardware. The proposed evolution strategy reduces the likelihood of getting stuck in a local optimum of the multimodal real-valued optimization problem represented by labeled point cloud superpositioning. The performance of the GPU-based parallel evolution strategy is compared to a previously proposed CPU-based sequential approach for labeled point cloud superpositioning, indicating that the GPU-based parallel evolution strategy leads to qualitatively better results and significantly shorter runtimes, with speed improvements of up to a factor of 1,500 for large populations. Binary classification tests based on the ATP, NADH, and FAD protein subsets of CavBase, a database containing putative binding sites, show average classification rate improvements from about 92 percent (CPU) to 96 percent (GPU). Further experiments indicate that the proposed GPU-based labeled point cloud superpositioning approach can be superior to traditional protein comparison approaches based on sequence alignments.

  5. Automatic Extraction of Road Markings from Mobile Laser-Point Cloud Using Intensity Data

    NASA Astrophysics Data System (ADS)

    Yao, L.; Chen, Q.; Qin, C.; Wu, H.; Zhang, S.

    2018-04-01

    With the development of intelligent transportation, high-precision road information has been widely applied in many fields. This paper proposes a concise and practical way to extract road marking information from point cloud data collected by a mobile mapping system (MMS). The method contains three steps. Firstly, the road surface is segmented through edge detection on scan lines. Then an intensity image is generated by inverse distance weighted (IDW) interpolation, and the road markings are extracted using adaptive threshold segmentation based on an integral image, without intensity calibration; noise is reduced by removing small plaque-like pixel groups from the binary image. Finally, the point cloud mapped back from the binary image is clustered into marking objects according to Euclidean distance, and a series of algorithms including template matching and feature attribute filtering is used for the classification of linear markings, arrow markings and guidelines. Processing of the point cloud data collected by a RIEGL VUX-1 in the case area shows that the F-score of marking extraction is 0.83 and the average classification rate is 0.9.
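
    The adaptive thresholding step can be done efficiently with an integral image, which yields the local mean of any window from four table look-ups. Below is a minimal numpy sketch under assumed window and offset parameters; it illustrates the technique and is not the paper's implementation.

    ```python
    # Minimal sketch, not the paper's code: mean-based adaptive thresholding of an
    # intensity image using an integral image (summed-area table).
    import numpy as np

    def adaptive_threshold(img, win=31, offset=5.0):
        """Mark pixels that are brighter than their local mean by `offset`."""
        h, w = img.shape
        # integral image with a zero row/column so box sums become simple differences
        ii = np.zeros((h + 1, w + 1), dtype=np.float64)
        ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)

        r = win // 2
        ys, xs = np.mgrid[0:h, 0:w]
        y0, y1 = np.clip(ys - r, 0, h), np.clip(ys + r + 1, 0, h)
        x0, x1 = np.clip(xs - r, 0, w), np.clip(xs + r + 1, 0, w)

        box_sum = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
        area = (y1 - y0) * (x1 - x0)
        local_mean = box_sum / area
        return (img > local_mean + offset).astype(np.uint8)  # 1 = candidate marking pixel

    # Example on a synthetic intensity image with a bright "marking" stripe.
    img = np.random.normal(60, 3, (200, 200))
    img[:, 95:105] += 40
    binary = adaptive_threshold(img)
    print(binary[:, 95:105].mean(), binary[:, :50].mean())  # stripe mostly 1, background mostly 0
    ```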

  6. Statistical analysis of texture in trunk images for biometric identification of tree species.

    PubMed

    Bressane, Adriano; Roveda, José A F; Martins, Antônio C G

    2015-04-01

    The identification of tree species is a key step for sustainable management plans of forest resources, as well as for several other applications that are based on such surveys. However, the presently available techniques depend on the presence of tree structures, such as flowers, fruits, and leaves, limiting the identification process to certain periods of the year. Therefore, this article introduces a study on the application of statistical parameters for texture classification of tree trunk images. For that, 540 samples from five Brazilian native deciduous species were acquired, and measures of entropy, uniformity, smoothness, asymmetry (third moment), mean, and standard deviation were obtained from the presented textures. Using a decision tree, a biometric species identification system was constructed and resulted in a 0.84 average precision rate for species classification, with 0.83 accuracy and 0.79 agreement. Thus, the use of texture present in trunk images can represent an important advance in tree identification, since the limitations of the current techniques can be overcome.
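
    The six first-order texture measures listed above can all be derived from the normalized gray-level histogram of an image patch. The sketch below follows common textbook definitions (e.g. normalizing smoothness and the third moment by (L-1)^2); the patch is synthetic and the exact normalizations used in the study are an assumption.

    ```python
    # Hedged sketch (not the study's code): first-order statistical texture measures
    # computed from a grayscale trunk-image patch via its normalized histogram.
    import numpy as np

    def texture_stats(patch, levels=256):
        hist = np.bincount(patch.ravel(), minlength=levels).astype(float)
        p = hist / hist.sum()                  # normalized gray-level histogram
        g = np.arange(levels, dtype=float)
        mean = np.sum(g * p)
        var = np.sum((g - mean) ** 2 * p)
        return {
            "mean": mean,
            "std": np.sqrt(var),
            "smoothness": 1.0 - 1.0 / (1.0 + var / (levels - 1) ** 2),       # R = 1 - 1/(1+sigma^2)
            "third_moment": np.sum((g - mean) ** 3 * p) / (levels - 1) ** 2,  # asymmetry
            "uniformity": np.sum(p ** 2),
            "entropy": -np.sum(p[p > 0] * np.log2(p[p > 0])),
        }

    patch = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in for a trunk patch
    features = texture_stats(patch)
    # These six values would feed a decision-tree classifier over the species labels.
    print({k: round(v, 4) for k, v in features.items()})
    ```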

  7. Classification of motor activities through derivative dynamic time warping applied on accelerometer data.

    PubMed

    Muscillo, Rossana; Conforto, Silvia; Schmid, Maurizio; Caselli, Paolo; D'Alessio, Tommaso

    2007-01-01

    In the context of tele-monitoring, great interest is presently devoted to physical activity, mainly of elderly people or people with disabilities. In this context, many researchers have studied the recognition of activities of daily living by using accelerometers. The present work proposes a novel algorithm for activity recognition that accounts for variability in movement speed by using dynamic programming. This objective is realized by means of a matching and recognition technique that determines the distance between the input signal and a set of previously defined templates. Two different approaches are presented here, one based on Dynamic Time Warping (DTW) and the other based on Derivative Dynamic Time Warping (DDTW). The algorithm was applied to the recognition of gait, climbing and descending stairs, using a biaxial accelerometer placed on the shin. The results for DDTW, obtained using only one sensor channel on the shin, showed an average recognition score of 95%, higher than the values obtained with DTW (around 85%). Both DTW and DDTW consistently show higher classification rates than classical Linear Time Warping (LTW).
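
    For reference, the core of DTW is a simple dynamic-programming recursion, and DDTW applies the same recursion to estimated derivatives of the signals. The sketch below is an illustration with synthetic templates, not the authors' code; the derivative estimate follows the commonly used Keogh-style formula.

    ```python
    # Illustrative sketch of the underlying technique: DTW distance between two
    # 1-D accelerometer sequences; DDTW runs the same recursion on derivatives.
    import numpy as np

    def dtw_distance(a, b):
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    def derivative(x):
        # Keogh-style derivative estimate used by DDTW
        x = np.asarray(x, dtype=float)
        return ((x[1:-1] - x[:-2]) + (x[2:] - x[:-2]) / 2.0) / 2.0

    # Classify a new movement by the nearest template (labels are illustrative).
    templates = {"gait": np.sin(np.linspace(0, 6, 80)),
                 "stairs_up": np.sin(np.linspace(0, 9, 80))}
    signal = np.sin(np.linspace(0, 6.2, 90))   # stand-in for a shin accelerometer trace
    label = min(templates,
                key=lambda k: dtw_distance(derivative(signal), derivative(templates[k])))
    print(label)
    ```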

  8. The impact of ICD-9 revascularization procedure codes on estimates of racial disparities in ischemic stroke.

    PubMed

    Boan, Andrea D; Voeks, Jenifer H; Feng, Wuwei Wayne; Bachman, David L; Jauch, Edward C; Adams, Robert J; Ovbiagele, Bruce; Lackland, Daniel T

    2014-01-01

    The use of International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9) diagnostic codes can identify racial disparities in ischemic stroke hospitalizations; however, inclusion of revascularization procedure codes as acute stroke events may affect the magnitude of the risk difference. This study assesses the impact of excluding revascularization procedure codes in the ICD-9 definition of ischemic stroke, compared with the traditional inclusive definition, on racial disparity estimates for stroke incidence and recurrence. Patients discharged with a diagnosis of ischemic stroke (ICD-9 codes 433.00-434.91 and 436) were identified from a statewide inpatient discharge database from 2010 to 2012. Race-age specific disparity estimates of stroke incidence and recurrence and 1-year cumulative recurrent stroke rates were compared between the routinely used traditional classification and a modified classification of stroke that excluded primary ICD-9 cerebral revascularization procedures codes (38.12, 00.61, and 00.63). The traditional classification identified 7878 stroke hospitalizations, whereas the modified classification resulted in 18% fewer hospitalizations (n = 6444). The age-specific black to white rate ratios were significantly higher in the modified than in the traditional classification for stroke incidence (rate ratio, 1.50; 95% confidence interval [CI], 1.43-1.58 vs. rate ratio, 1.24; 95% CI, 1.18-1.30, respectively). In whites, the 1-year cumulative recurrence rate was significantly reduced by 46% (45-64 years) and 49% (≥ 65 years) in the modified classification, largely explained by a higher rate of cerebral revascularization procedures among whites. There were nonsignificant reductions of 14% (45-64 years) and 19% (≥ 65 years) among blacks. Including cerebral revascularization procedure codes overestimates hospitalization rates for ischemic stroke and significantly underestimates the racial disparity estimates in stroke incidence and recurrence. Copyright © 2014 National Stroke Association. Published by Elsevier Inc. All rights reserved.

  9. Using Gaussian mixture models to detect and classify dolphin whistles and pulses.

    PubMed

    Peso Parada, Pablo; Cardenal-López, Antonio

    2014-06-01

    In recent years, a number of automatic detection systems for free-ranging cetaceans have been proposed that aim to detect not just surfaced, but also submerged, individuals. These systems are typically based on pattern-recognition techniques applied to underwater acoustic recordings. Using a Gaussian mixture model, a classification system was developed that detects sounds in recordings and classifies them as one of four types: background noise, whistles, pulses, and combined whistles and pulses. The classifier was tested using a database of underwater recordings made off the Spanish coast during 2011. Using cepstral-coefficient-based parameterization, a sound detection rate of 87.5% was achieved for a 23.6% classification error rate. To improve these results, two parameters computed using the multiple signal classification algorithm and an unpredictability measure were included in the classifier. These parameters, which helped to classify the segments containing whistles, increased the detection rate to 90.3% and reduced the classification error rate to 18.1%. Finally, the potential of the multiple signal classification algorithm and unpredictability measure for estimating whistle contours and classifying cetacean species was also explored, with promising results.
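
    A common way to realize such a classifier is to fit one Gaussian mixture model per sound class and assign a segment to the class with the highest average log-likelihood. The sketch below illustrates this with synthetic cepstral-style features; the feature dimensions, mixture sizes, and data are assumptions rather than details from the study.

    ```python
    # Minimal sketch under stated assumptions (not the study's implementation):
    # one GMM per sound class, classification by maximum average log-likelihood.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    classes = ["noise", "whistle", "pulse", "whistle+pulse"]

    # Stand-in feature matrices (n_frames x n_cepstral_coeffs) per class;
    # real features would come from the parameterized acoustic segments.
    train = {c: rng.normal(loc=i, scale=1.0, size=(200, 12)) for i, c in enumerate(classes)}

    models = {c: GaussianMixture(n_components=4, covariance_type="diag",
                                 random_state=0).fit(X) for c, X in train.items()}

    def classify(segment_features):
        # average log-likelihood of the segment's frames under each class model
        scores = {c: m.score_samples(segment_features).mean() for c, m in models.items()}
        return max(scores, key=scores.get)

    test_segment = rng.normal(loc=1.0, scale=1.0, size=(30, 12))  # should resemble "whistle"
    print(classify(test_segment))
    ```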

  10. Virtual Sensor of Surface Electromyography in a New Extensive Fault-Tolerant Classification System.

    PubMed

    de Moura, Karina de O A; Balbinot, Alexandre

    2018-05-01

    A few prosthetic control systems in the scientific literature obtain pattern recognition algorithms adapted to changes that occur in the myoelectric signal over time and, frequently, such systems are not natural and intuitive. These are some of the several challenges for myoelectric prostheses for everyday use. The concept of the virtual sensor, which has as its fundamental objective to estimate unavailable measures based on other available measures, is being used in other fields of research. The virtual sensor technique applied to surface electromyography can help to minimize these problems, typically related to the degradation of the myoelectric signal that usually leads to a decrease in the classification accuracy of the movements characterized by computational intelligent systems. This paper presents a virtual sensor in a new extensive fault-tolerant classification system to maintain the classification accuracy after the occurrence of the following contaminants: ECG interference, electrode displacement, movement artifacts, power line interference, and saturation. The Time-Varying Autoregressive Moving Average (TVARMA) and Time-Varying Kalman filter (TVK) models are compared to define the most robust model for the virtual sensor. Movement classification results are presented comparing the usual classification techniques with the methods of degraded-signal replacement and classifier retraining. The experimental results were evaluated for these five noise types in 16 surface electromyography (sEMG) channel degradation case studies. Without classifier retraining, the proposed system recovered 4% to 38% of mean classification accuracy for electrode displacement, movement artifacts, and saturation noise. The best mean classification across all signal contaminants and channel combinations evaluated was obtained with the retraining method, replacing the degraded channel with the virtual sensor TVARMA model. This method recovered the classification accuracy after the degradations, reaching an average of 5.7% below the accuracy for the clean signal, that is, the original signal without contaminants. Moreover, the proposed intelligent technique minimizes the impact on motion classification caused by signal contamination related to degrading events over time. The virtual sensor model and the algorithm optimization still need further development to increase the clinical applicability of myoelectric prostheses, but the approach already presents robust results that enable research with virtual sensors on biological signals with stochastic behavior.

  11. Virtual Sensor of Surface Electromyography in a New Extensive Fault-Tolerant Classification System

    PubMed Central

    Balbinot, Alexandre

    2018-01-01

    A few prosthetic control systems in the scientific literature obtain pattern recognition algorithms adapted to changes that occur in the myoelectric signal over time and, frequently, such systems are not natural and intuitive. These are some of the several challenges for myoelectric prostheses for everyday use. The concept of the virtual sensor, which has as its fundamental objective to estimate unavailable measures based on other available measures, is being used in other fields of research. The virtual sensor technique applied to surface electromyography can help to minimize these problems, typically related to the degradation of the myoelectric signal that usually leads to a decrease in the classification accuracy of the movements characterized by computational intelligent systems. This paper presents a virtual sensor in a new extensive fault-tolerant classification system to maintain the classification accuracy after the occurrence of the following contaminants: ECG interference, electrode displacement, movement artifacts, power line interference, and saturation. The Time-Varying Autoregressive Moving Average (TVARMA) and Time-Varying Kalman filter (TVK) models are compared to define the most robust model for the virtual sensor. Movement classification results are presented comparing the usual classification techniques with the methods of degraded-signal replacement and classifier retraining. The experimental results were evaluated for these five noise types in 16 surface electromyography (sEMG) channel degradation case studies. Without classifier retraining, the proposed system recovered 4% to 38% of mean classification accuracy for electrode displacement, movement artifacts, and saturation noise. The best mean classification across all signal contaminants and channel combinations evaluated was obtained with the retraining method, replacing the degraded channel with the virtual sensor TVARMA model. This method recovered the classification accuracy after the degradations, reaching an average of 5.7% below the accuracy for the clean signal, that is, the original signal without contaminants. Moreover, the proposed intelligent technique minimizes the impact on motion classification caused by signal contamination related to degrading events over time. The virtual sensor model and the algorithm optimization still need further development to increase the clinical applicability of myoelectric prostheses, but the approach already presents robust results that enable research with virtual sensors on biological signals with stochastic behavior. PMID:29723994

  12. Classification of CT brain images based on deep learning networks.

    PubMed

    Gao, Xiaohong W; Hui, Rui; Tian, Zengmin

    2017-01-01

    While computerised tomography (CT) may have been the first imaging tool to study the human brain, it has not yet been implemented into the clinical decision-making process for the diagnosis of Alzheimer's disease (AD). On the other hand, being prevalent, inexpensive and non-invasive, CT does present diagnostic features of AD to a great extent. This study explores the significance and impact of applying the burgeoning deep learning techniques to the classification of CT brain images, in particular utilising convolutional neural networks (CNN), aiming at providing supplementary information for the early diagnosis of Alzheimer's disease. Towards this end, the CT images (N = 285) are clustered into three groups: AD, lesion (e.g. tumour) and normal ageing. In addition, considering the larger slice thickness of this collection along the depth (z) direction (~3-5 mm), an advanced CNN architecture is established that integrates both 2D and 3D CNN networks. The two networks are fused by averaging the softmax scores obtained from the network consolidating 2D images along the spatial axial direction and the network operating on 3D segmented blocks, respectively. As a result, the classification accuracy rates rendered by this CNN architecture are 85.2%, 80% and 95.3% for the AD, lesion and normal classes respectively, with an average of 87.6%. Additionally, this improved CNN network outperforms a 2D-only CNN as well as a number of state-of-the-art hand-crafted approaches, which deliver accuracy rates (in percent) of 86.3, 85.6 ± 1.10, 86.3 ± 1.04, 85.2 ± 1.60 and 83.1 ± 0.35 for 2D CNN, 2D SIFT, 2D KAZE, 3D SIFT and 3D KAZE respectively. The two major contributions of the paper are a new 3D deep learning approach that extracts signature information rooted in both 2D slices and 3D blocks of CT images, and an elaborated hand-crafted 3D KAZE approach. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
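
    The fusion step amounts to averaging the per-class softmax scores of the two networks after consolidating each network's outputs over its slices or blocks. A small numpy illustration is given below; the numbers of slices and blocks and the logits are hypothetical, not values from the study.

    ```python
    # Hedged illustration (array shapes and values are assumptions): fusing a
    # 2D-slice CNN and a 3D-block CNN by averaging their softmax scores per scan.
    import numpy as np

    classes = ["AD", "lesion", "normal"]

    def softmax(logits, axis=-1):
        z = logits - logits.max(axis=axis, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=axis, keepdims=True)

    # Hypothetical network outputs for one scan: the 2D network scores each axial
    # slice, the 3D network scores each segmented block.
    logits_2d = np.random.randn(24, 3)   # 24 slices x 3 classes
    logits_3d = np.random.randn(6, 3)    # 6 blocks  x 3 classes

    p2d = softmax(logits_2d).mean(axis=0)   # consolidate slices into one 2D score vector
    p3d = softmax(logits_3d).mean(axis=0)   # consolidate blocks into one 3D score vector
    fused = (p2d + p3d) / 2.0               # average the two networks' softmax scores

    print(classes[int(np.argmax(fused))], np.round(fused, 3))
    ```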

  13. Improvement of Information Transfer Rates Using a Hybrid EEG-NIRS Brain-Computer Interface with a Short Trial Length: Offline and Pseudo-Online Analyses.

    PubMed

    Shin, Jaeyoung; Kim, Do-Won; Müller, Klaus-Robert; Hwang, Han-Jeong

    2018-06-05

    Electroencephalography (EEG) and near-infrared spectroscopy (NIRS) are non-invasive neuroimaging methods that record the electrical and metabolic activity of the brain, respectively. Hybrid EEG-NIRS brain-computer interfaces (hBCIs) that use complementary EEG and NIRS information to enhance BCI performance have recently emerged to overcome the limitations of existing unimodal BCIs, such as vulnerability to motion artifacts for EEG-BCI or low temporal resolution for NIRS-BCI. However, with respect to NIRS-BCI, in order to fully induce a task-related brain activation, a relatively long trial length (≥10 s) is selected owing to the inherent hemodynamic delay that lowers the information transfer rate (ITR; bits/min). To alleviate the ITR degradation, we propose a more practical hBCI operated by intuitive mental tasks, such as mental arithmetic (MA) and word chain (WC) tasks, performed within a short trial length (5 s). In addition, the suitability of the WC as a BCI task was assessed, which has so far rarely been used in the BCI field. In this experiment, EEG and NIRS data were simultaneously recorded while participants performed MA and WC tasks without preliminary training and remained relaxed (baseline; BL). Each task was performed for 5 s, which was a shorter time than previous hBCI studies. Subsequently, a classification was performed to discriminate MA-related or WC-related brain activations from BL-related activations. By using hBCI in the offline/pseudo-online analyses, average classification accuracies of 90.0 ± 7.1/85.5 ± 8.1% and 85.8 ± 8.6/79.5 ± 13.4% for MA vs. BL and WC vs. BL, respectively, were achieved. These were significantly higher than those of the unimodal EEG- or NIRS-BCI in most cases. Given the short trial length and improved classification accuracy, the average ITRs were improved by more than 96.6% for MA vs. BL and 87.1% for WC vs. BL, respectively, compared to those reported in previous studies. The suitability of implementing a more practical hBCI based on intuitive mental tasks without preliminary training and with a shorter trial length was validated when compared to previous studies.
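
    The ITR figures discussed above depend on the number of classes, the classification accuracy, and the trial length. A common definition is Wolpaw's formula, sketched below; whether the study used exactly this definition is an assumption, and the example values are illustrative only.

    ```python
    # Hedged sketch: Wolpaw information transfer rate (bits/min) as a function of
    # the number of classes N, accuracy P and trial length T seconds.
    from math import log2

    def itr_bits_per_min(n_classes, accuracy, trial_seconds):
        p, n = accuracy, n_classes
        if p >= 1.0:
            bits = log2(n)
        else:
            # B = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))
            bits = log2(n) + p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))
        return bits * 60.0 / trial_seconds

    # Shorter trials raise ITR at the same accuracy (2-class example, invented values).
    print(round(itr_bits_per_min(2, 0.90, 5.0), 2))    # 5-s trial
    print(round(itr_bits_per_min(2, 0.90, 10.0), 2))   # 10-s trial typical of earlier NIRS-BCIs
    ```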

  14. The Validity of Two Neuromotor Assessments for Predicting Motor Performance at 12 Months in Preterm Infants.

    PubMed

    Song, You Hong; Chang, Hyun Jung; Shin, Yong Beom; Park, Young Sook; Park, Yun Hee; Cho, Eun Sol

    2018-04-01

    To evaluate the validity of the Test of Infant Motor Performance (TIMP) and general movements (GMs) assessment for predicting Alberta Infant Motor Scale (AIMS) score at 12 months in preterm infants. A total of 44 preterm infants who underwent the GMs and TIMP at 1 month and 3 months of corrected age (CA) and whose motor performance was evaluated using AIMS at 12 months CA were included. GMs were judged as abnormal on basis of poor repertoire or cramped-synchronized movements at 1 month CA and abnormal or absent fidgety movement at 3 months CA. TIMP and AIMS scores were categorized as normal (average and low average and >5th percentile, respectively) or abnormal (below average and far below average or <5th percentile, respectively). Correlations between GMs and TIMP scores at 1 month and 3 months CA and the AIMS classification at 12 months CA were examined. The TIMP score at 3 months CA and GMs at 1 month and 3 months CA were significantly correlated with the motor performance at 12 months CA. However, the TIMP score at 1 month CA did not correlate with the AIMS classification at 12 months CA. For infants with normal GMs at 3 months CA, the TIMP score at 3 months CA correlated significantly with the AIMS classification at 12 months CA. Our findings suggest that neuromotor assessment using GMs and TIMP could be useful to identify preterm infants who are likely to benefit from intervention.

  15. A statistically harmonized alignment-classification in image space enables accurate and robust alignment of noisy images in single particle analysis.

    PubMed

    Kawata, Masaaki; Sato, Chikara

    2007-06-01

    In determining the three-dimensional (3D) structure of macromolecular assemblies in single particle analysis, a large representative dataset of two-dimensional (2D) average images derived from a huge number of raw images is key to high resolution. Because alignments prior to averaging are computationally intensive, currently available multireference alignment (MRA) software does not survey every possible alignment. This leads to misaligned images, creating blurred averages and reducing the quality of the final 3D reconstruction. We present a new method, in which multireference alignment is harmonized with classification (multireference multiple alignment: MRMA). This method enables a statistical comparison of multiple alignment peaks, reflecting the similarities between each raw image and a set of reference images. Among the selected alignment candidates for each raw image, misaligned images are statistically excluded, based on the principle that aligned raw images of similar projections have a dense distribution around the correctly aligned coordinates in image space. This newly developed method was examined for accuracy and speed using model image sets with various signal-to-noise ratios, and with electron microscope images of the Transient Receptor Potential C3 and the sodium channel. In every data set, the newly developed method outperformed conventional methods in robustness against noise and in speed, creating 2D average images of higher quality. This statistically harmonized alignment-classification combination should greatly improve the quality of single particle analysis.

  16. [Operative treatment for complex tibial plateau fractures].

    PubMed

    Song, Qi-Zhi; Li, Tao

    2012-03-01

    To explore the surgical methods and clinical evaluation of complex tibial plateau fractures resulting from high-energy injuries. From March 2006 to May 2009, 48 cases of complex tibial plateau fracture were treated with open reduction and plate fixation, including 37 males and 11 females, with an average age of 37 years (range, 18 to 63 years). According to the Schatzker classification, 16 cases were type IV, 20 cases type V and 12 cases type VI. All patients were examined by X-ray film and CT scan. The function of the knee joint was evaluated according to postoperative follow-up X-rays and the Knee Merchant Rating. Forty-eight patients were followed up with a mean time of 14 months. According to the Knee Merchant Rating, 24 cases got excellent results, 16 cases good, 6 cases fair and 2 cases poor. Appropriate operation timing, anatomical reduction, suitable bone grafting and reasonable rehabilitation exercises can maximize recovery of knee joint function.

  17. 42 CFR 412.60 - DRG classification and weighting factors.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 2 2011-10-01 2011-10-01 false DRG classification and weighting factors. 412.60... Determining Prospective Payment Federal Rates for Inpatient Operating Costs § 412.60 DRG classification and weighting factors. (a) Diagnosis-related groups. CMS establishs a classification of inpatient hospital...

  18. 42 CFR 412.60 - DRG classification and weighting factors.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false DRG classification and weighting factors. 412.60... Determining Prospective Payment Federal Rates for Inpatient Operating Costs § 412.60 DRG classification and weighting factors. (a) Diagnosis-related groups. CMS establishs a classification of inpatient hospital...

  19. On-line analysis of algae in water by discrete three-dimensional fluorescence spectroscopy.

    PubMed

    Zhao, Nanjing; Zhang, Xiaoling; Yin, Gaofang; Yang, Ruifang; Hu, Li; Chen, Shuang; Liu, Jianguo; Liu, Wenqing

    2018-03-19

    To address the problem of on-line algae classification, a method for algae classification and concentration determination based on discrete three-dimensional fluorescence spectra was studied in this work. The discrete three-dimensional fluorescence spectra of twelve common species of algae belonging to five categories were analyzed, discrete three-dimensional standard spectra of the five categories were built, and recognition, classification and concentration prediction of the algae categories were realized by the discrete three-dimensional fluorescence spectra coupled with non-negative weighted least squares linear regression analysis. The results show that similarities between the discrete three-dimensional standard spectra of different categories were reduced and the accuracies of recognition, classification and concentration prediction of the algae categories were significantly improved. Compared with the chlorophyll a fluorescence excitation spectra method, the recognition accuracy rate for pure samples with discrete three-dimensional fluorescence spectra is improved by 1.38%, and the recovery rate and classification accuracy for pure diatom samples by 34.1% and 46.8%, respectively; the recognition accuracy rate for mixed samples is enhanced by 26.1%, the recovery rate for mixed samples with Chlorophyta by 37.8%, and the classification accuracy for mixed samples with diatoms by 54.6%.
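
    The category recognition described above can be illustrated by unmixing a measured spectrum against the category standard spectra with non-negative least squares. The sketch below uses scipy's unweighted NNLS on random stand-in spectra; the weighting scheme and the real standard spectra from the paper are not reproduced here.

    ```python
    # Illustrative sketch, not the authors' code: unmixing a measured fluorescence
    # spectrum into category concentrations with non-negative least squares.
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(1)
    n_channels, n_categories = 120, 5        # flattened excitation-emission pairs x categories
    standards = np.abs(rng.normal(size=(n_channels, n_categories)))  # columns = standard spectra

    true_conc = np.array([0.8, 0.0, 0.3, 0.0, 0.0])                  # only two categories present
    measured = standards @ true_conc + rng.normal(scale=0.01, size=n_channels)

    conc, residual = nnls(standards, measured)   # non-negative coefficient per category
    print(np.round(conc, 2))                     # the dominant category gives the classification
    ```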

  20. Classification of Birds and Bats Using Flight Tracks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cullinan, Valerie I.; Matzner, Shari; Duberstein, Corey A.

    Classification of birds and bats that use areas targeted for offshore wind farm development and the inference of their behavior is essential to evaluating the potential effects of development. The current approach to assessing the number and distribution of birds at sea involves transect surveys using trained individuals in boats or airplanes or using high-resolution imagery. These approaches are costly and have safety concerns. Based on a limited annotated library extracted from a single-camera thermal video, we provide a framework for building models that classify birds and bats and their associated behaviors. As an example, we developed a discriminant model for theoretical flight paths and applied it to data (N = 64 tracks) extracted from 5-min video clips. The agreement between model- and observer-classified path types was initially only 41%, but it increased to 73% when small-scale jitter was censored and path types were combined. Classification of 46 tracks of bats, swallows, gulls, and terns on average was 82% accurate, based on a jackknife cross-validation. Model classification of bats and terns (N = 4 and 2, respectively) was 94% and 91% correct, respectively; however, the variance associated with the tracks from these targets is poorly estimated. Model classification of gulls and swallows (N ≥ 18) was on average 73% and 85% correct, respectively. The models developed here should be considered preliminary because they are based on a small data set both in terms of the numbers of species and the identified flight tracks. Future classification models would be greatly improved by including a measure of distance between the camera and the target.

  1. Covert photo classification by fusing image features and visual attributes.

    PubMed

    Lang, Haitao; Ling, Haibin

    2015-10-01

    In this paper, we study a novel problem of classifying covert photos, whose acquisition processes are intentionally concealed from the subjects being photographed. Covert photos are often privacy invasive and, if distributed over the Internet, can cause serious consequences. Automatic identification of such photos, therefore, serves as an important initial step toward further privacy protection operations. The problem is, however, very challenging due to the large semantic similarity between covert and noncovert photos, the enormous diversity in the photographing process and environment of covert photos, and the difficulty of collecting an effective data set for the study. Attacking these challenges, we make three consecutive contributions. First, we collect a large data set containing 2500 covert photos, each of them verified rigorously and carefully. Second, we conduct a user study on how humans distinguish covert photos from noncovert ones. The user study not only provides an important evaluation baseline, but also suggests fusing heterogeneous information for an automatic solution. Our third contribution is a covert photo classification algorithm that fuses various image features and visual attributes in the multiple kernel learning framework. We evaluate the proposed approach on the collected data set in comparison with other modern image classifiers. The results show that our approach achieves an average classification rate (1-EER) of 0.8940, which significantly outperforms other competitors as well as human performance.
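
    The general idea of fusing heterogeneous features in a kernel framework can be sketched as combining one kernel per feature type into a single Gram matrix for an SVM. The example below uses fixed kernel weights on synthetic features; genuine multiple kernel learning, as used in the paper, would learn those weights, so this is only a simplified illustration.

    ```python
    # A hedged sketch of the general idea (not the authors' MKL solver): combine
    # feature-specific kernels into one Gram matrix and train an SVM on it.
    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel, chi2_kernel
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n = 200
    color_hist = np.abs(rng.normal(size=(n, 64)))   # stand-in colour-histogram features
    texture = rng.normal(size=(n, 32))              # stand-in texture features
    attributes = rng.random(size=(n, 10))           # stand-in visual-attribute scores
    y = rng.integers(0, 2, n)                       # 1 = covert, 0 = non-covert (toy labels)

    def combined_kernel(idx_a, idx_b, weights=(0.4, 0.3, 0.3)):
        k1 = chi2_kernel(color_hist[idx_a], color_hist[idx_b])
        k2 = rbf_kernel(texture[idx_a], texture[idx_b])
        k3 = rbf_kernel(attributes[idx_a], attributes[idx_b])
        return weights[0] * k1 + weights[1] * k2 + weights[2] * k3

    train, test = np.arange(0, 150), np.arange(150, n)
    clf = SVC(kernel="precomputed").fit(combined_kernel(train, train), y[train])
    pred = clf.predict(combined_kernel(test, train))   # test rows vs. training columns
    print("accuracy:", (pred == y[test]).mean())
    ```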

  2. A user-friendly SSVEP-based brain-computer interface using a time-domain classifier.

    PubMed

    Luo, An; Sullivan, Thomas J

    2010-04-01

    We introduce a user-friendly steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) system. Single-channel EEG is recorded using a low-noise dry electrode. Compared to traditional gel-based multi-sensor EEG systems, a dry sensor proves to be more convenient, comfortable and cost effective. A hardware system was built that displays four LED light panels flashing at different frequencies and synchronizes with EEG acquisition. The visual stimuli have been carefully designed such that potential risk to photosensitive people is minimized. We describe a novel stimulus-locked inter-trace correlation (SLIC) method for SSVEP classification using EEG time-locked to stimulus onsets. We studied how the performance of the algorithm is affected by different selection of parameters. Using the SLIC method, the average light detection rate is 75.8% with very low error rates (an 8.4% false positive rate and a 1.3% misclassification rate). Compared to a traditional frequency-domain-based method, the SLIC method is more robust (resulting in less annoyance to the users) and is also suitable for irregular stimulus patterns.

  3. Intelligent detection and identification in fiber-optical perimeter intrusion monitoring system based on the FBG sensor network

    NASA Astrophysics Data System (ADS)

    Wu, Huijuan; Qian, Ya; Zhang, Wei; Li, Hanyu; Xie, Xin

    2015-12-01

    A real-time intelligent fiber-optic perimeter intrusion detection system (PIDS) based on a fiber Bragg grating (FBG) sensor network is presented in this paper. To distinguish the effects of different intrusion events, a novel real-time behavior impact classification method is proposed based on the essential statistical characteristics of the signal profile in the time domain. The features are extracted by principal component analysis (PCA) and are then used to identify the event with a K-nearest neighbor classifier. Simulation and field tests are both carried out to validate its effectiveness. The average identification rate (IR) for five sample signals in the simulation test is as high as 96.67%, and the recognition rate for eight typical signals in the field test reaches 96.52%, covering both fence-mounted and ground-buried sensing signals. In addition, a high detection rate (DR) and a low false alarm rate (FAR) can be obtained simultaneously based on autocorrelation characteristics analysis and a hierarchical detection and identification flow.
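
    The pipeline described above, statistical time-domain features reduced by PCA and classified with a K-nearest neighbor classifier, can be sketched in a few lines of scikit-learn. The feature matrix and event labels below are synthetic stand-ins for the FBG sensor data.

        # Minimal sketch: PCA feature reduction followed by a K-nearest neighbor classifier.
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 50))      # placeholder time-domain signal statistics
        y = rng.integers(0, 5, size=200)    # placeholder for five intrusion-event classes

        clf = make_pipeline(StandardScaler(), PCA(n_components=10),
                            KNeighborsClassifier(n_neighbors=5))
        print("CV identification rate:", cross_val_score(clf, X, y, cv=5).mean())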

  4. Levels of Evidence in Cosmetic Surgery: Analysis and Recommendations Using a New CLEAR Classification

    PubMed Central

    2013-01-01

    Background: The Level of Evidence rating was introduced in 2011 to grade the quality of publications. This system evaluates study design but does not assess several other quality indicators. This study introduces a new “Cosmetic Level of Evidence And Recommendation” (CLEAR) classification that includes additional methodological criteria and compares this new classification with the existing system. Methods: All rated publications in the Cosmetic Section of Plastic and Reconstructive Surgery, July 2011 through June 2013, were evaluated. The published Level of Evidence rating (1–5) and criteria relevant to study design and methodology for each study were tabulated. A new CLEAR rating was assigned to each article, including a recommendation grade (A–D). The published Level of Evidence rating (1–5) was compared with the recommendation grade determined using the CLEAR classification. Results: Among the 87 cosmetic articles, 48 studies (55%) were designated as level 4. Three articles were assigned a level 1, but they contained deficiencies sufficient to undermine the conclusions. The correlation between the published Level of Evidence classification (1–5) and CLEAR Grade (A–D) was weak (ρ = 0.11, not significant). Only 41 studies (48%) evaluated consecutive patients or consecutive patients meeting inclusion criteria. Conclusions: The CLEAR classification considers methodological factors in evaluating study reliability. A prospective study among consecutive patients meeting eligibility criteria, with a reported inclusion rate, the use of contemporaneous controls when indicated, and consideration of confounders is a realistic goal. Such measures are likely to improve study quality. PMID:25289261

  5. Radon safety in terms of energy efficiency classification of buildings

    NASA Astrophysics Data System (ADS)

    Vasilyev, A.; Yarmoshenko, I.; Zhukovsky, M.

    2017-06-01

    According to the results of a survey in Ekaterinburg, Russia, indoor radon concentrations above the city average were found in each of the studied buildings with a high energy efficiency class. Measures taken to increase energy efficiency were confirmed to decrease the air exchange rate and to lead to the accumulation of high radon concentrations indoors. Despite recommendations to use mechanical ventilation with heat recovery as the main scenario for reducing elevated radon concentrations in energy-efficient buildings, the use of such systems did not show an obvious advantage. In practice, mechanical ventilation systems are not used properly in either automatic or manual mode, which removes any clear advantage over natural ventilation in the climate of the Middle Urals in Ekaterinburg. The significant number of buildings with a high energy efficiency class built using modern space-planning decisions contributes to an increase in the average radon concentration. This situation contradicts the "as low as reasonably achievable" principle of radiation protection.

  6. A Classification Method for Seed Viability Assessment with Infrared Thermography.

    PubMed

    Men, Sen; Yan, Lei; Liu, Jiaxin; Qian, Hua; Luo, Qinjuan

    2017-04-12

    This paper presents a viability assessment method for Pisum sativum L. seeds based on the infrared thermography technique. In this work, different artificial treatments were conducted to prepare seed samples with different viability. Thermal images and visible images were recorded every five minutes during the standard five-day germination test. After the test, the root length of each sample was measured and used as the viability index of that seed. Each individual seed area in the visible images was segmented with an edge detection method, and the average temperature of the corresponding area in the infrared images was calculated as the representative temperature for that seed at that time. The temperature curve of each seed during germination was plotted. Thirteen characteristic parameters extracted from the temperature curve were analyzed to show the differences in temperature fluctuation between seed samples with different viability. With the above parameters, a support vector machine (SVM) was used to classify the seed samples into three categories according to root length: viable, aged, and dead; the classification accuracy rate was 95%. On this basis, using the temperature data from only the first three hours of germination, another SVM model was proposed to classify the seed samples, and the accuracy rate was about 91.67%. These experimental results show that infrared thermography, combined with the SVM algorithm, can be applied to the prediction of seed viability.
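
    A minimal sketch of the final classification step, an SVM separating viable, aged, and dead seeds from temperature-curve features, is given below using scikit-learn; the 13 features per seed are random placeholders rather than the thermographic measurements.

        # Minimal sketch: three-class SVM on temperature-curve features (placeholder data).
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score

        rng = np.random.default_rng(1)
        X = rng.normal(size=(120, 13))       # 13 temperature-curve features per seed
        y = rng.integers(0, 3, size=120)     # 0 = viable, 1 = aged, 2 = dead

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
        model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        model.fit(X_tr, y_tr)
        print("accuracy:", accuracy_score(y_te, model.predict(X_te)))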

  7. Frog sound identification using extended k-nearest neighbor classifier

    NASA Astrophysics Data System (ADS)

    Mukahar, Nordiana; Affendi Rosdi, Bakhtiar; Athiar Ramli, Dzati; Jaafar, Haryati

    2017-09-01

    Frog sound identification based on vocalization is important for biological research and environmental monitoring. As a result, different types of feature extraction and classifiers have been employed to evaluate the accuracy of frog sound identification. This paper presents frog sound identification with an Extended k-Nearest Neighbor (EKNN) classifier. The EKNN classifier integrates the nearest neighbors and mutual sharing of neighborhood concepts, with the aim of improving classification performance. It makes a prediction based on which samples are the nearest neighbors of the testing sample and which samples consider the testing sample as their nearest neighbor. To evaluate classification performance in frog sound identification, the EKNN classifier is compared with the competing classifiers k-Nearest Neighbor (KNN), Fuzzy k-Nearest Neighbor (FKNN), k-General Nearest Neighbor (KGNN), and Mutual k-Nearest Neighbor (MKNN) on recorded sounds of 15 frog species obtained in Malaysian forests. The recorded sounds were segmented using Short Time Energy and Short Time Average Zero Crossing Rate (STE+STAZCR), sinusoidal modeling (SM), manual segmentation, and the combination of Energy (E) and Zero Crossing Rate (ZCR) (E+ZCR), while the features were extracted with Mel Frequency Cepstrum Coefficients (MFCC). The experimental results show that the EKNN classifier exhibits the best performance in terms of accuracy compared to the competing classifiers KNN, FKNN, KGNN, and MKNN in all cases.
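
    The sketch below illustrates the general feature-plus-classifier setup with MFCC features and a plain KNN baseline; it does not reproduce the paper's extended KNN variants. It assumes librosa is available for MFCC extraction, and two synthetic tones stand in for segmented frog calls.

        # Minimal sketch: MFCC features plus a plain KNN baseline for call classification.
        import numpy as np
        import librosa                                  # assumed available for MFCC extraction
        from sklearn.neighbors import KNeighborsClassifier

        sr = 22050
        t = np.linspace(0, 1.0, sr, endpoint=False)
        calls = [np.sin(2 * np.pi * f * t) for f in (800.0, 800.0, 2500.0, 2500.0)]
        labels = ["species_A", "species_A", "species_B", "species_B"]

        # Summarize each call by the mean of its 13 MFCCs across frames.
        X = np.vstack([librosa.feature.mfcc(y=c.astype(np.float32), sr=sr, n_mfcc=13).mean(axis=1)
                       for c in calls])
        knn = KNeighborsClassifier(n_neighbors=1).fit(X, labels)
        print(knn.predict(X[2:3]))                      # -> ['species_B']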

  8. Evaluation of a New Software Version of the RTVue Optical Coherence Tomograph for Image Segmentation and Detection of Glaucoma in High Myopia.

    PubMed

    Holló, Gábor; Shu-Wei, Hsu; Naghizadeh, Farzaneh

    2016-06-01

    To compare the current (6.3) and a novel software version (6.12) of the RTVue-100 optical coherence tomograph (RTVue-OCT) for ganglion cell complex (GCC) and retinal nerve fiber layer thickness (RNFLT) image segmentation and detection of glaucoma in high myopia. RNFLT and GCC scans were acquired with software version 6.3 of the RTVue-OCT on 51 highly myopic eyes (spherical refractive error ≤-6.0 D) of 51 patients, and were analyzed with both software versions. Twenty-two eyes were nonglaucomatous, 13 were ocular hypertensive and 16 eyes had glaucoma. No difference was seen for any RNFLT or average GCC parameter between the software versions (paired t test, P≥0.084). Global loss volume was significantly lower (more normal) with version 6.12 than with version 6.3 (Wilcoxon signed-rank test, P<0.001). The percentage agreement (κ) between the clinical (normal and ocular hypertensive vs. glaucoma) and the software-provided classifications (normal and borderline vs. outside normal limits) was 0.3219 and 0.4442 for average RNFLT, and 0.2926 and 0.4977 for average GCC, with versions 6.3 and 6.12, respectively (McNemar symmetry test, P≥0.289). No difference between the software versions was found in average RNFLT and GCC classification (McNemar symmetry test, P≥0.727) or in the number of eyes with at least 1 segmentation error (P≥0.109). Although GCC segmentation was improved with software version 6.12 compared with the current version in highly myopic eyes, this did not result in a significant change of the average RNFLT and GCC values, and did not significantly improve the software-provided classification for glaucoma.
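
    The agreement statistic used above, kappa between the clinical status and the software-provided classification, can be computed directly with scikit-learn. The toy labels below only illustrate the collapsing of categories (normal and borderline vs. outside normal limits); they are not the study data.

        # Minimal sketch: kappa agreement between clinical status and software classification.
        from sklearn.metrics import cohen_kappa_score

        clinical = ["glaucoma", "normal", "normal", "glaucoma", "normal", "glaucoma"]
        software = ["outside",  "normal", "border", "outside",  "normal", "normal"]

        # Collapse categories as in the study: normal/borderline vs. outside normal limits.
        software_binary = ["glaucoma" if s == "outside" else "normal" for s in software]
        print("kappa:", cohen_kappa_score(clinical, software_binary))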

  9. Spatial Characteristics of Geothermal Spring Temperatures and Discharge Rates in the Tatun Volcanic Area, Taiwan

    NASA Astrophysics Data System (ADS)

    Jang, C. S.; Liu, C. W.

    2014-12-01

    The Tatun volcanic area is the only potential volcanic geothermal region on the island of Taiwan and is abundant in hot spring resources owing to stream water mixing with fumarolic gases. According to Meinzer's classification, spring temperatures and discharge rates are the most important properties for characterizing spring classifications. This study attempted to spatially characterize spring temperatures and discharge rates in the Tatun volcanic area, Taiwan, using indicator kriging (IK). First, data on spring temperatures and discharge rates, collected from surveys by the Taipei City Government, were divided into high, moderate, and low categories according to spring classification criteria, and the various categories were regarded as estimation thresholds. Then, IK was adopted to model occurrence probabilities of specified temperatures and discharge rates in springs, and to determine their classifications based on estimated probabilities. Finally, nine combinations were obtained from the classifications of temperatures and discharge rates in springs. Moreover, the combinations and features of spring water were spatially quantified according to seven sub-zones of spring utilization, and a suitable and sustainable development strategy for the spring area was proposed for each sub-zone based on the probability-based combinations and features of spring water. The results reveal that probability-based classification using IK provides an excellent insight into the uncertainty of spatial features in springs and can provide Taiwanese government administrators with detailed information on sustainable spring utilization and conservation in overexploited spring tourism areas. The sub-zones BT (Beitou), RXY (Rd. Xingyi), ZSL (Zhongshanlou), and LSK (Lengshuikeng), with high or moderate discharge rates, are suitable for supplying spring water to tourism hotels. Local natural hot springs should be planned in the sub-zones DBT (Dingbeitou), ZSL, XYK (Xiayoukeng), and MC (Macao), which have low discharge rates and low or moderate temperatures, particularly in riverbeds or valleys. Keywords: spring; temperature; discharge rate; indicator kriging; uncertainty.

  10. Preoperative classification assessment reliability and influence on the length of intertrochanteric fracture operations.

    PubMed

    Shen, Jing; Hu, FangKe; Zhang, LiHai; Tang, PeiFu; Bi, ZhengGang

    2013-04-01

    The accuracy of intertrochanteric fracture classification is important; indeed, patient outcomes depend on the classification. The aim of this study was to use the AO classification system to evaluate the variation in classification between X-ray and computed tomography (CT)/3D CT images. Then, differences in the length of surgery were evaluated based on the two examinations. Intertrochanteric fractures were reviewed and surgeons were interviewed. The rates of correct discrimination and the probabilities of misclassification (overestimates and underestimates) were determined. The impact of misclassification on length of surgery was also evaluated. In total, 370 patients and four surgeons were included in the study. All patients had X-ray images and 210 patients had CT/3D CT images. Of them, 214 and 156 patients were treated by intramedullary and extramedullary fixation systems, respectively. The mean length of surgery was 62.1 ± 17.7 min. The overall rate of correct discrimination was 83.8%, and the rates for classification of A1, A2, and A3 fractures were 80.0, 85.7, and 82.4%, respectively. The rate of misclassification showed no significant difference between stable and unstable fractures (21.3 vs 13.1%, P = 0.173). The overall rates of overestimates and underestimates were significantly different (5 vs 11.25%, P = 0.041). The difference obtained by subtracting the rate of overestimates from the rate of underestimates was positively correlated with prolonged surgery and showed a significant difference with intramedullary fixation (P < 0.001). Classification based on the AO system was good in terms of consistency. CT/3D CT examination was more reliable and more helpful for preoperative assessment, especially for performance of an intramedullary fixation.

  11. Evidence of emotion-antecedent appraisal checks in electroencephalography and facial electromyography

    PubMed Central

    Scherer, Klaus R.; Schuller, Björn W.

    2018-01-01

    In the present study, we applied Machine Learning (ML) methods to identify psychobiological markers of cognitive processes involved in the process of emotion elicitation as postulated by the Component Process Model (CPM). In particular, we focused on the automatic detection of five appraisal checks—novelty, intrinsic pleasantness, goal conduciveness, control, and power—in electroencephalography (EEG) and facial electromyography (EMG) signals. We also evaluated the effects on classification accuracy of averaging the raw physiological signals over different numbers of trials, and whether the use of minimal sets of EEG channels localized over specific scalp regions of interest are sufficient to discriminate between appraisal checks. We demonstrated the effectiveness of our approach on two data sets obtained from previous studies. Our results show that novelty and power appraisal checks can be consistently detected in EEG signals above chance level (binary tasks). For novelty, the best classification performance in terms of accuracy was achieved using features extracted from the whole scalp, and by averaging across 20 individual trials in the same experimental condition (UAR = 83.5 ± 4.2; N = 25). For power, the best performance was obtained by using the signals from four pre-selected EEG channels averaged across all trials available for each participant (UAR = 70.6 ± 5.3; N = 24). Together, our results indicate that accurate classification can be achieved with a relatively small number of trials and channels, but that averaging across a larger number of individual trials is beneficial for the classification for both appraisal checks. We were not able to detect any evidence of the appraisal checks under study in the EMG data. The proposed methodology is a promising tool for the study of the psychophysiological mechanisms underlying emotional episodes, and their application to the development of computerized tools (e.g., Brain-Computer Interface) for the study of cognitive processes involved in emotions. PMID:29293572

  12. Evidence of emotion-antecedent appraisal checks in electroencephalography and facial electromyography.

    PubMed

    Coutinho, Eduardo; Gentsch, Kornelia; van Peer, Jacobien; Scherer, Klaus R; Schuller, Björn W

    2018-01-01

    In the present study, we applied Machine Learning (ML) methods to identify psychobiological markers of cognitive processes involved in the process of emotion elicitation as postulated by the Component Process Model (CPM). In particular, we focused on the automatic detection of five appraisal checks-novelty, intrinsic pleasantness, goal conduciveness, control, and power-in electroencephalography (EEG) and facial electromyography (EMG) signals. We also evaluated the effects on classification accuracy of averaging the raw physiological signals over different numbers of trials, and whether the use of minimal sets of EEG channels localized over specific scalp regions of interest are sufficient to discriminate between appraisal checks. We demonstrated the effectiveness of our approach on two data sets obtained from previous studies. Our results show that novelty and power appraisal checks can be consistently detected in EEG signals above chance level (binary tasks). For novelty, the best classification performance in terms of accuracy was achieved using features extracted from the whole scalp, and by averaging across 20 individual trials in the same experimental condition (UAR = 83.5 ± 4.2; N = 25). For power, the best performance was obtained by using the signals from four pre-selected EEG channels averaged across all trials available for each participant (UAR = 70.6 ± 5.3; N = 24). Together, our results indicate that accurate classification can be achieved with a relatively small number of trials and channels, but that averaging across a larger number of individual trials is beneficial for the classification for both appraisal checks. We were not able to detect any evidence of the appraisal checks under study in the EMG data. The proposed methodology is a promising tool for the study of the psychophysiological mechanisms underlying emotional episodes, and their application to the development of computerized tools (e.g., Brain-Computer Interface) for the study of cognitive processes involved in emotions.
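
    The UAR values reported above are unweighted average recall, i.e. recall averaged over classes without weighting by class size. A short sketch with scikit-learn, using toy labels:

        # Minimal sketch: unweighted average recall (UAR) as macro-averaged recall.
        from sklearn.metrics import recall_score, balanced_accuracy_score

        y_true = [0, 0, 0, 1, 1, 1, 1, 1]
        y_pred = [0, 1, 0, 1, 1, 0, 1, 1]

        print("UAR:", recall_score(y_true, y_pred, average="macro"))
        print("same via balanced accuracy:", balanced_accuracy_score(y_true, y_pred))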

  13. Rate of caesarean sections according to the Robson classification: Analysis in a French perinatal network - Interest and limitations of the French medico-administrative data (PMSI).

    PubMed

    Lafitte, A-S; Dolley, P; Le Coutour, X; Benoist, G; Prime, L; Thibon, P; Dreyfus, M

    2018-02-01

    The objective of our study was to determine, in accordance with WHO recommendations, the rates of Caesarean sections in a French perinatal network according to the Robson classification and to determine the benefit of the medico-administrative data (PMSI) for collecting this indicator. This study aimed to identify the main groups contributing to local variations in the rates of Caesarean sections. A descriptive multicentric study was conducted in 13 maternity units of a French perinatal network. The rates of Caesarean sections and the contribution of each group of the Robson classification were calculated for all Caesarean sections performed in 2014. The agreement between the classification of Caesarean sections according to Robson using medico-administrative data and using data collected from the patient records was measured by the Kappa index. We also analysed a simplified 6-group Robson classification using only data from the PMSI, which do not record parity or onset of labour. The rate of Caesarean sections was 19% (14.5-33.2) in 2014 (2924 out of 15413 deliveries). The most important contributors to the total rates were groups 1, 2 and 5, representing respectively 14.3%, 16.7% and 32.1% of the Caesarean sections. The rates were significantly different in level 1, 2b and 3 maternity units in groups 1 to 4, level 2a maternity units in group 5, and level 3 maternity units in groups 6 and 7. The agreement between the simplified Robson classification produced using the medical records and the medico-administrative data was excellent, with a Kappa index of 0.985 (0.980-0.990). To reduce the rates of Caesarean sections, audits should be conducted on groups 1, 2 and 5 and local protocols developed. Simply by collecting the parity data, the excellent metrological quality of the medico-administrative data would allow systematisation of the Robson classification for each hospital. Copyright © 2017. Published by Elsevier Masson SAS.

  14. Micromachined cascade virtual impactor with a flow rate distributor for wide range airborne particle classification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Yong-Ho; Maeng, Jwa-Young; Park, Dongho

    2007-07-23

    This letter reports a module for airborne particle classification, which consists of a micromachined three-stage virtual impactor for classifying airborne particles according to their size and a flow rate distributor for supplying the required flow rate to the virtual impactor. Dioctyl sebacate particles, 100-600 nm in diameter, and carbon particles, 0.6-10 µm in diameter, were used for particle classification. The collection efficiency and cutoff diameter were examined. The measured cutoff diameters of the first, second, and third stages were 135 nm, 1.9 µm, and 4.8 µm, respectively.

  15. Sample size, library composition, and genotypic diversity among natural populations of Escherichia coli from different animals influence accuracy of determining sources of fecal pollution.

    PubMed

    Johnson, LeeAnn K; Brown, Mary B; Carruthers, Ethan A; Ferguson, John A; Dombek, Priscilla E; Sadowsky, Michael J

    2004-08-01

    A horizontal, fluorophore-enhanced, repetitive extragenic palindromic-PCR (rep-PCR) DNA fingerprinting technique (HFERP) was developed and evaluated as a means to differentiate human from animal sources of Escherichia coli. Box A1R primers and PCR were used to generate 2,466 rep-PCR and 1,531 HFERP DNA fingerprints from E. coli strains isolated from fecal material from known human and 12 animal sources: dogs, cats, horses, deer, geese, ducks, chickens, turkeys, cows, pigs, goats, and sheep. HFERP DNA fingerprinting reduced within-gel grouping of DNA fingerprints and improved alignment of DNA fingerprints between gels, relative to that achieved using rep-PCR DNA fingerprinting. Jackknife analysis of the complete rep-PCR DNA fingerprint library, done using Pearson's product-moment correlation coefficient, indicated that animal and human isolates were assigned to the correct source groups with an 82.2% average rate of correct classification. However, when only unique isolates (isolates from a single animal having a unique DNA fingerprint) were examined, Jackknife analysis showed that isolates were assigned to the correct source groups with a 60.5% average rate of correct classification. The percentages of correctly classified isolates were about 15 and 17% greater for rep-PCR and HFERP, respectively, when analyses were done using the curve-based Pearson's product-moment correlation coefficient rather than the band-based Jaccard algorithm. Rarefaction analysis indicated that, despite the relatively large size of the known-source database, genetic diversity in E. coli was very great, most likely accounting for our inability to correctly classify many environmental E. coli isolates. Our data indicate that removal of duplicate genotypes within DNA fingerprint libraries, increased database size, proper methods of statistical analysis, and correct alignment of band data within and between gels improve the accuracy of microbial source tracking methods.
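
    A minimal sketch of the jackknife (leave-one-out) source classification and of the curve-based versus band-based distance comparison discussed above, using scikit-learn's nearest-neighbor classifier; the fingerprint matrix and source labels are synthetic.

        # Minimal sketch: leave-one-out (jackknife) 1-NN source classification of DNA
        # fingerprints using a correlation-based and a Jaccard (band-based) distance.
        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import LeaveOneOut, cross_val_score

        rng = np.random.default_rng(2)
        curves = rng.random((60, 200))            # densitometric curves (correlation metric)
        bands = (curves > 0.8)                    # presence/absence band calls (Jaccard metric)
        sources = rng.integers(0, 4, size=60)     # host source groups

        loo = LeaveOneOut()
        for name, X, metric in [("correlation", curves, "correlation"),
                                ("Jaccard", bands, "jaccard")]:
            clf = KNeighborsClassifier(n_neighbors=1, metric=metric, algorithm="brute")
            rate = cross_val_score(clf, X, sources, cv=loo).mean()
            print(name, "correct classification rate:", round(rate, 3))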

  16. Reducing unnecessary lab testing in the ICU with artificial intelligence.

    PubMed

    Cismondi, F; Celi, L A; Fialho, A S; Vieira, S M; Reti, S R; Sousa, J M C; Finkelstein, S N

    2013-05-01

    To reduce unnecessary lab testing by predicting when a proposed future lab test is likely to contribute information gain and thereby influence clinical management in patients with gastrointestinal bleeding. Recent studies have demonstrated that frequent laboratory testing does not necessarily relate to better outcomes. Data preprocessing, feature selection, and classification were performed and an artificial intelligence tool, fuzzy modeling, was used to identify lab tests that do not contribute an information gain. There were 11 input variables in total. Ten of these were derived from bedside monitor trends (heart rate, oxygen saturation, respiratory rate, temperature, blood pressure, and urine collections) as well as infusion products and transfusions. The final input variable was a previous value from one of the eight lab tests being predicted: calcium, PTT, hematocrit, fibrinogen, lactate, platelets, INR and hemoglobin. The outcome for each test was a binary framework defining whether a test result contributed information gain or not. Predictive modeling was applied to recognize unnecessary lab tests in a real world ICU database extract comprising 746 patients with gastrointestinal bleeding. Classification accuracy of necessary and unnecessary lab tests of greater than 80% was achieved for all eight lab tests. Sensitivity and specificity were satisfactory for all the outcomes. An average reduction of 50% of the lab tests was obtained. This is an improvement over similar previously reported studies, in which the average reduction was 37% [1-3]. Reducing frequent lab testing and the potential clinical and financial implications are an important issue in intensive care. In this work we present an artificial intelligence method to predict the benefit of proposed future laboratory tests. Using ICU data from 746 patients with gastrointestinal bleeding, and eleven measurements, we demonstrate high accuracy in predicting the likely information to be gained from proposed future lab testing for eight common GI related lab tests. Future work will explore applications of this approach to a range of underlying medical conditions and laboratory tests. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  17. Reducing unnecessary lab testing in the ICU with artificial intelligence

    PubMed Central

    Cismondi, F.; Celi, L.A.; Fialho, A.S.; Vieira, S.M.; Reti, S.R.; Sousa, J.M.C.; Finkelstein, S.N.

    2017-01-01

    Objectives To reduce unnecessary lab testing by predicting when a proposed future lab test is likely to contribute information gain and thereby influence clinical management in patients with gastrointestinal bleeding. Recent studies have demonstrated that frequent laboratory testing does not necessarily relate to better outcomes. Design Data preprocessing, feature selection, and classification were performed and an artificial intelligence tool, fuzzy modeling, was used to identify lab tests that do not contribute an information gain. There were 11 input variables in total. Ten of these were derived from bedside monitor trends (heart rate, oxygen saturation, respiratory rate, temperature, blood pressure, and urine collections) as well as infusion products and transfusions. The final input variable was a previous value from one of the eight lab tests being predicted: calcium, PTT, hematocrit, fibrinogen, lactate, platelets, INR and hemoglobin. The outcome for each test was a binary framework defining whether a test result contributed information gain or not. Patients Predictive modeling was applied to recognize unnecessary lab tests in a real world ICU database extract comprising 746 patients with gastrointestinal bleeding. Main results Classification accuracy of necessary and unnecessary lab tests of greater than 80% was achieved for all eight lab tests. Sensitivity and specificity were satisfactory for all the outcomes. An average reduction of 50% of the lab tests was obtained. This is an improvement over similar previously reported studies, in which the average reduction was 37% [1–3]. Conclusions Reducing frequent lab testing and the potential clinical and financial implications are an important issue in intensive care. In this work we present an artificial intelligence method to predict the benefit of proposed future laboratory tests. Using ICU data from 746 patients with gastrointestinal bleeding, and eleven measurements, we demonstrate high accuracy in predicting the likely information to be gained from proposed future lab testing for eight common GI related lab tests. Future work will explore applications of this approach to a range of underlying medical conditions and laboratory tests. PMID:23273628

  18. Validation of American Thyroid Association Ultrasound Risk Assessment of Thyroid Nodules Selected for Ultrasound Fine-Needle Aspiration.

    PubMed

    Tang, Alice L; Falciglia, Mercedes; Yang, Huaitao; Mark, Jonathan R; Steward, David L

    2017-08-01

    The aim of this study was to validate the American Thyroid Association (ATA) sonographic risk assessment of thyroid nodules. The ATA sonographic risk assessment was prospectively applied to 206 thyroid nodules selected for ultrasound-guided fine-needle aspiration (US-FNA), and analyzed with The Bethesda System for Reporting Thyroid Cytopathology (TBSRTC), as well as surgical pathology for the subset undergoing surgical excision. The analysis included 206 thyroid nodules averaging 2.4 cm (range 1-7 cm; standard error of the mean 0.07). Using the ATA US pattern risk assessment, nodules were classified as high (4%), intermediate (31%), low (38%), and very low (26%) risk of malignancy. Nodule size was inversely correlated with sonographic risk assessment, as lower risk nodules were larger on average (p < 0.0001). Malignancy rates determined by cytology/surgical pathology were high 100%, intermediate 11%, low 8%, and very low 2%, which were closely aligned with ATA malignancy risk estimates (high 70-90%, intermediate 10-20%, low 5-10%, and very low 3%). ATA US pattern risk assessment also appropriately predicted the proportion of nodules classified as malignant or suspicious for malignancy through TBSRTC classification-high (77%), intermediate (6%), low (1%), and very low 0%-as well as benign TBSRTC classification-high (0%), intermediate (47%), low (61%), and very low (70%) (p < 0.0001). Malignancy rates of surgically excised, cytologically indeterminate nodules followed ATA sonographic risk stratification (high 100%, intermediate 21%, low 17%, and very low 12%; p = 0.003). This prospective study supports the new ATA sonographic pattern risk assessment for selection of thyroid nodules for US-FNA based upon TBSRTC and surgical pathology results. In the setting of indeterminate cytopathology, nodules categorized as atypia of undetermined significance/follicular lesion of undetermined significance with ATA high-risk sonographic patterns have a high likelihood of being malignant.

  19. Features analysis for identification of date and party hubs in protein interaction network of Saccharomyces Cerevisiae.

    PubMed

    Mirzarezaee, Mitra; Araabi, Babak N; Sadeghi, Mehdi

    2010-12-19

    It has been understood that biological networks have modular organizations, which are the sources of their observed complexity. Analysis of networks and motifs has shown that two types of hubs, party hubs and date hubs, are responsible for this complexity. Party hubs are local coordinators because of their high co-expression with their partners, whereas date hubs display low co-expression and are assumed to be global connectors. However, there is no general agreement on these concepts in the related literature, with different studies reporting their results on different data sets. We investigated whether there is a relation between the biological features of Saccharomyces cerevisiae proteins and their roles as non-hubs, intermediately connected, party hubs, and date hubs. We propose a classifier that separates these four classes. We extracted different biological characteristics including amino acid sequences, domain contents, repeated domains, functional categories, biological processes, cellular compartments, disordered regions, and position specific scoring matrix from various sources. Several classifiers are examined and the best feature-sets based on average correct classification rate and correlation coefficients of the results are selected. We show that fusion of five feature-sets including domains, Position Specific Scoring Matrix-400, cellular compartments level one, and composition pairs with two and one gaps provides the best discrimination, with an average correct classification rate of 77%. We study a variety of known biological feature-sets of the proteins and show that there is a relation between domains, Position Specific Scoring Matrix-400, cellular compartments level one, and composition pairs with two and one gaps of Saccharomyces cerevisiae proteins, and their roles in the protein interaction network as non-hubs, intermediately connected, party hubs and date hubs. This study also confirms the possibility of predicting non-hubs, party hubs and date hubs based on their biological features with acceptable accuracy. If such a hypothesis is correct for other species as well, similar methods can be applied to predict the roles of proteins in those species.

  20. Activity recognition of assembly tasks using body-worn microphones and accelerometers.

    PubMed

    Ward, Jamie A; Lukowicz, Paul; Tröster, Gerhard; Starner, Thad E

    2006-10-01

    In order to provide relevant information to mobile users, such as workers engaging in the manual tasks of maintenance and assembly, a wearable computer requires information about the user's specific activities. This work focuses on the recognition of activities that are characterized by a hand motion and an accompanying sound. Suitable activities can be found in assembly and maintenance work. Here, we provide an initial exploration into the problem domain of continuous activity recognition using on-body sensing. We use a mock "wood workshop" assembly task to ground our investigation. We describe a method for the continuous recognition of activities (sawing, hammering, filing, drilling, grinding, sanding, opening a drawer, tightening a vise, and turning a screwdriver) using microphones and three-axis accelerometers mounted at two positions on the user's arms. Potentially "interesting" activities are segmented from continuous streams of data using an analysis of the sound intensity detected at the two different locations. Activity classification is then performed on these detected segments using linear discriminant analysis (LDA) on the sound channel and hidden Markov models (HMMs) on the acceleration data. Four different methods of classifier fusion are compared for improving these classifications. Using user-dependent training, we obtain continuous average recall and precision rates (for positive activities) of 78 percent and 74 percent, respectively. Using user-independent training (leave-one-out across five users), we obtain recall rates of 66 percent and precision rates of 63 percent. In isolation, these activities were recognized with accuracies of 98 percent, 87 percent, and 95 percent for the user-dependent, user-independent, and user-adapted cases, respectively.

  1. Activity classification using the GENEA: optimum sampling frequency and number of axes.

    PubMed

    Zhang, Shaoyan; Murray, Peter; Zillmer, Ruediger; Eston, Roger G; Catt, Michael; Rowlands, Alex V

    2012-11-01

    The GENEA shows high accuracy for classification of sedentary, household, walking, and running activities when sampling at 80 Hz on three axes. It is not known whether it is possible to decrease this sampling frequency and/or the number of axes without detriment to classification accuracy. The purpose of this study was to compare the classification rate of activities on the basis of data from a single axis, two axes, and three axes, with sampling rates ranging from 5 to 80 Hz. Sixty participants (age 49.4 ± 6.5 yr; BMI 24.6 ± 3.4 kg·m⁻²) completed 10-12 semistructured activities in the laboratory and outdoor environment while wearing a GENEA accelerometer on the right wrist. We analyzed data from single axis, dual axes, and three axes at sampling rates of 5, 10, 20, 40, and 80 Hz. Mathematical models based on features extracted from mean, SD, fast Fourier transform, and wavelet decomposition were built, which combined one of the numbers of axes with one of the sampling rates to classify activities into sedentary, household, walking, and running. Classification accuracy was high irrespective of the number of axes for data collected at 80 Hz (96.93% ± 0.97%), 40 Hz (97.4% ± 0.73%), 20 Hz (96.86% ± 1.12%), and 10 Hz (97.01% ± 1.01%) but dropped for data collected at 5 Hz (94.98% ± 1.36%). Sampling frequencies >10 Hz and/or more than one axis of measurement were not associated with greater classification accuracy. Lower sampling rates and measurement of a single axis would result in a lower data load, longer battery life, and higher efficiency of data processing. Further research should investigate whether a lower sampling rate and a single axis affects classification accuracy when considering a wider range of activities.
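
    The sketch below illustrates, on a synthetic wrist-acceleration signal, how a recording made at 80 Hz can be decimated to a lower rate and how some of the simple features mentioned above (mean, SD, and a spectral feature from the FFT) are extracted at each rate; the wavelet features are omitted for brevity.

        # Minimal sketch: downsample an 80 Hz accelerometer axis and extract basic features.
        import numpy as np
        from scipy.signal import decimate

        fs = 80.0
        t = np.arange(0, 10, 1 / fs)
        axis_z = np.sin(2 * np.pi * 2.0 * t) + 0.1 * np.random.default_rng(3).normal(size=t.size)

        def features(sig, fs):
            spectrum = np.abs(np.fft.rfft(sig - sig.mean()))       # drop the DC component
            freqs = np.fft.rfftfreq(sig.size, d=1 / fs)
            return {"mean": sig.mean(), "sd": sig.std(), "dominant_hz": freqs[spectrum.argmax()]}

        print("80 Hz:", features(axis_z, fs))
        sig_10 = decimate(axis_z, 8)                                # 80 Hz -> 10 Hz
        print("10 Hz:", features(sig_10, fs / 8))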

  2. Development of the Average Likelihood Function for Code Division Multiple Access (CDMA) Using BPSK and QPSK Symbols

    DTIC Science & Technology

    2015-01-01

    This research has the purpose of establishing a foundation for new classification and estimation of CDMA signals. Keywords: DS/CDMA signals, BPSK, QPSK.

  3. Comparative Study of SSVEP- and P300-Based Models for the Telepresence Control of Humanoid Robots.

    PubMed

    Zhao, Jing; Li, Wei; Li, Mengfan

    2015-01-01

    In this paper, we evaluate the control performance of SSVEP (steady-state visual evoked potential)- and P300-based models using Cerebot, a mind-controlled humanoid robot platform. Seven subjects with diverse experience participated in experiments concerning the open-loop and closed-loop control of a humanoid robot via brain signals. The visual stimuli of both the SSVEP- and P300-based models were implemented on a LCD computer monitor with a refresh frequency of 60 Hz. Considering operation safety, we set a classification accuracy above 90.0% as the most important requirement for telepresence control of the humanoid robot. The open-loop experiments demonstrated that the SSVEP model achieved an average accuracy of about 90% with at most four stimulus targets, whereas the P300 model achieved accuracies over 90.0% with six or more stimulus targets and five repetitions per trial. Therefore, the four SSVEP stimuli were used to control four types of robot behavior, while the six P300 stimuli were chosen to control six types of robot behavior. The 4-class SSVEP and 6-class P300 models achieved average success rates of 90.3% and 91.3%, average response times of 3.65 s and 6.6 s, and average information transfer rates (ITR) of 24.7 bits/min and 18.8 bits/min, respectively. The closed-loop experiments addressed the telepresence control of the robot; the objective was to cause the robot to walk along a white lane marked in an office environment using live video feedback. Comparative studies reveal that the SSVEP model yielded faster response to the subject's mental activity with less reliance on channel selection, whereas the P300 model was found to be suitable for more classifiable targets and required less training. To conclude, we discuss the existing SSVEP and P300 models for the control of humanoid robots, including the models proposed in this paper.
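
    The information transfer rates quoted above are conventionally computed with the Wolpaw formula, which combines the number of classes, the classification accuracy, and the time per selection. A small Python sketch follows; the printed values are approximate, since they depend on how the trial time is defined, and so need not match the paper's figures exactly.

        # Minimal sketch: Wolpaw information transfer rate (ITR) in bits per minute.
        import math

        def itr_bits_per_min(n_classes, accuracy, seconds_per_selection):
            p, n = accuracy, n_classes
            bits = math.log2(n)
            if 0 < p < 1:
                bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
            return bits * 60.0 / seconds_per_selection

        print(itr_bits_per_min(4, 0.903, 3.65))   # 4-class SSVEP setting from the study
        print(itr_bits_per_min(6, 0.913, 6.6))    # 6-class P300 setting from the study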

  4. Comparative Study of SSVEP- and P300-Based Models for the Telepresence Control of Humanoid Robots

    PubMed Central

    Li, Mengfan

    2015-01-01

    In this paper, we evaluate the control performance of SSVEP (steady-state visual evoked potential)- and P300-based models using Cerebot, a mind-controlled humanoid robot platform. Seven subjects with diverse experience participated in experiments concerning the open-loop and closed-loop control of a humanoid robot via brain signals. The visual stimuli of both the SSVEP- and P300-based models were implemented on a LCD computer monitor with a refresh frequency of 60 Hz. Considering operation safety, we set a classification accuracy above 90.0% as the most important requirement for telepresence control of the humanoid robot. The open-loop experiments demonstrated that the SSVEP model achieved an average accuracy of about 90% with at most four stimulus targets, whereas the P300 model achieved accuracies over 90.0% with six or more stimulus targets and five repetitions per trial. Therefore, the four SSVEP stimuli were used to control four types of robot behavior, while the six P300 stimuli were chosen to control six types of robot behavior. The 4-class SSVEP and 6-class P300 models achieved average success rates of 90.3% and 91.3%, average response times of 3.65 s and 6.6 s, and average information transfer rates (ITR) of 24.7 bits/min and 18.8 bits/min, respectively. The closed-loop experiments addressed the telepresence control of the robot; the objective was to cause the robot to walk along a white lane marked in an office environment using live video feedback. Comparative studies reveal that the SSVEP model yielded faster response to the subject's mental activity with less reliance on channel selection, whereas the P300 model was found to be suitable for more classifiable targets and required less training. To conclude, we discuss the existing SSVEP and P300 models for the control of humanoid robots, including the models proposed in this paper. PMID:26562524

  5. Analysis and application of classification methods of complex carbonate reservoirs

    NASA Astrophysics Data System (ADS)

    Li, Xiongyan; Qin, Ruibao; Ping, Haitao; Wei, Dan; Liu, Xiaomei

    2018-06-01

    There are abundant carbonate reservoirs from the Cenozoic to Mesozoic era in the Middle East. Due to variation in the sedimentary environment and diagenetic processes of carbonate reservoirs, several porosity types coexist in these reservoirs. As a result, because of the complex lithologies and pore types as well as the impact of microfractures, the pore structure is very complicated, and it is difficult to accurately calculate the reservoir parameters. In order to accurately evaluate carbonate reservoirs, based on the pore structure evaluation of carbonate reservoirs, classification methods for carbonate reservoirs are analyzed based on capillary pressure curves and flow units. Although carbonate reservoirs can be classified based on capillary pressure curves, the relationship between porosity and permeability after classification is not ideal. On the basis of flow units, a high-precision functional relationship between porosity and permeability can be established after classification. Therefore, carbonate reservoirs can be quantitatively evaluated based on the classification of flow units. In the dolomite reservoirs, the average absolute error of calculated permeability decreases from 15.13 to 7.44 mD. Similarly, the average absolute error of calculated permeability of limestone reservoirs is reduced from 20.33 to 7.37 mD. Only by accurately characterizing pore structures and classifying reservoir types can reservoir parameters be calculated accurately. Therefore, characterizing pore structures and classifying reservoir types are very important to accurate evaluation of complex carbonate reservoirs in the Middle East.
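
    One common way to define flow units, which the abstract does not specify, is the flow zone indicator (FZI) computed from porosity and permeability; samples with similar FZI are grouped into a unit and a porosity-permeability relationship is fitted per unit. A hedged sketch of the FZI step, with permeability in mD and porosity as a fraction:

        # Minimal sketch: grouping samples into flow units with the flow zone indicator (FZI),
        # one common flow-unit definition (not necessarily the method used in the paper).
        import numpy as np

        def fzi(k_md, phi):
            rqi = 0.0314 * np.sqrt(k_md / phi)      # reservoir quality index, micrometres
            phi_z = phi / (1.0 - phi)               # normalized porosity
            return rqi / phi_z

        k = np.array([0.5, 5.0, 50.0, 300.0])       # permeability, mD
        phi = np.array([0.08, 0.12, 0.18, 0.22])    # fractional porosity
        units = np.digitize(np.log10(fzi(k, phi)), bins=[-0.5, 0.0, 0.5])  # coarse flow-unit bins
        print(fzi(k, phi), units)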

  6. Blood vessel classification into arteries and veins in retinal images

    NASA Astrophysics Data System (ADS)

    Kondermann, Claudia; Kondermann, Daniel; Yan, Michelle

    2007-03-01

    The prevalence of diabetes is expected to increase dramatically in coming years; already today it accounts for a major proportion of the health care budget in many countries. Diabetic Retinopathy (DR), a microvascular complication very often seen in diabetes patients, is the most common cause of visual loss in the working-age population of developed countries today. Since the possibility of slowing or even stopping the progress of this disease depends on the early detection of DR, an automatic analysis of fundus images would be of great help to the ophthalmologist, due to the small size of the symptoms and the large number of patients. An important symptom of DR is abnormally wide veins, leading to an unusually low ratio of the average diameter of arteries to veins (AVR). Other diseases, such as high blood pressure or diseases of the pancreas, also have an abnormal AVR value as one symptom. To determine it, a classification of vessels as arteries or veins is indispensable. To our knowledge, despite its importance, there have been only two previous approaches to vessel classification. We therefore propose an improved method. We compare two feature extraction methods and two classification methods based on support vector machines and neural networks. Given a hand segmentation of the vessels, our approach classifies 95.32% of vessel pixels correctly. This value decreases by 10% on average if the result of a segmentation algorithm is used as the basis for classification.

  7. Classifying visuomotor workload in a driving simulator using subject specific spatial brain patterns

    PubMed Central

    Dijksterhuis, Chris; de Waard, Dick; Brookhuis, Karel A.; Mulder, Ben L. J. M.; de Jong, Ritske

    2013-01-01

    A passive Brain Computer Interface (BCI) is a system that responds to the spontaneously produced brain activity of its user and could be used to develop interactive task support. A human-machine system that could benefit from brain-based task support is the driver-car interaction system. To investigate the feasibility of such a system to detect changes in visuomotor workload, 34 drivers were exposed to several levels of driving demand in a driving simulator. Driving demand was manipulated by varying driving speed and by asking the drivers to comply to individually set lane keeping performance targets. Differences in the individual driver's workload levels were classified by applying the Common Spatial Pattern (CSP) and Fisher's linear discriminant analysis to frequency filtered electroencephalogram (EEG) data during an off line classification study. Several frequency ranges, EEG cap configurations, and condition pairs were explored. It was found that classifications were most accurate when based on high frequencies, larger electrode sets, and the frontal electrodes. Depending on these factors, classification accuracies across participants reached about 95% on average. The association between high accuracies and high frequencies suggests that part of the underlying information did not originate directly from neuronal activity. Nonetheless, average classification accuracies up to 75–80% were obtained from the lower EEG ranges that are likely to reflect neuronal activity. For a system designer, this implies that a passive BCI system may use several frequency ranges for workload classifications. PMID:23970851
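
    A minimal sketch of the classification chain described above, CSP spatial filtering followed by Fisher's linear discriminant analysis, is shown below. It assumes MNE-Python is available for the CSP step, and the band-pass-filtered EEG epochs are simulated.

        # Minimal sketch: Common Spatial Pattern features plus LDA for two workload levels.
        import numpy as np
        from mne.decoding import CSP                     # assumed available (MNE-Python)
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(4)
        epochs = rng.normal(size=(80, 32, 256))          # (trials, EEG channels, samples)
        labels = np.repeat([0, 1], 40)                   # low vs. high driving demand

        clf = make_pipeline(CSP(n_components=6), LinearDiscriminantAnalysis())
        print("accuracy:", cross_val_score(clf, epochs, labels, cv=5).mean())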

  8. The Role of Facial Attractiveness and Facial Masculinity/Femininity in Sex Classification of Faces

    PubMed Central

    Hoss, Rebecca A.; Ramsey, Jennifer L.; Griffin, Angela M.; Langlois, Judith H.

    2005-01-01

    We tested whether adults (Experiment 1) and 4–5-year-old children (Experiment 2) identify the sex of high attractive faces faster and more accurately than low attractive faces in a reaction time task. We also assessed whether facial masculinity/femininity facilitated identification of sex. Results showed that attractiveness facilitated adults’ sex classification of both female and male faces and children’s sex classification of female, but not male, faces. Moreover, attractiveness affected the speed and accuracy of sex classification independent of masculinity/femininity. High masculinity in male faces, but not high femininity in female faces, also facilitated sex classification for both adults and children. These findings provide important new data on how the facial cues of attractiveness and masculinity/femininity contribute to the task of sex classification and provide evidence for developmental differences in how adults and children use these cues. Additionally, these findings provide support for Langlois and Roggman’s (1990) averageness theory of attractiveness. PMID:16457167

  9. Multimodal Signal Processing for Personnel Detection and Activity Classification for Indoor Surveillance

    DTIC Science & Technology

    2013-11-15

    …features and designed a classifier that achieves up to 95% classification accuracy on classifying the occupancy with indoor footstep data. MDL-based…

  10. Practical UXO Classification: Enhanced Data Processing Strategies for Technology Transition - Fort Ord: Dynamic and Cued Metalmapper Processing and Classification

    DTIC Science & Technology

    2017-06-06

    Keywords: Geophysical Mapping, Electromagnetic Induction, Instrument Verification Strip, Time Domain Electromagnetic, Unexploded Ordnance.

  11. Reduction of Topographic Effect for Curve Number Estimated from Remotely Sensed Imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Wen-Yan; Lin, Chao-Yuan

    2016-04-01

    The Soil Conservation Service Curve Number (SCS-CN) method is commonly used in hydrology to estimate direct runoff volume. The CN is an empirical parameter corresponding to land use/land cover, hydrologic soil group, and antecedent soil moisture condition. In large watersheds with complex topography, satellite remote sensing is an appropriate approach for acquiring land use change information. However, topographic effects are commonly found in remotely sensed imagery and degrade land use classification. This research selected summer and winter Landsat-5 TM scenes from 2008 to classify land use in the Chen-You-Lan Watershed, Taiwan. The b-correction, an empirical topographic correction method, was applied to the Landsat-5 TM data. Land use was categorized using K-means classification into four groups: forest, grassland, agriculture, and river. Accuracy assessment of the image classification was performed against the national land use map. The results showed that after topographic correction, the overall classification accuracy increased from 68.0% to 74.5%. The average CN estimated from the remotely sensed imagery decreased from 48.69 to 45.35, whereas the average CN estimated from the national LULC map was 44.11. Therefore, the topographic correction method is recommended to normalize the topographic effect in satellite remote sensing data before estimating the CN.
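
    The practical consequence of the CN differences reported above can be seen by running the curve numbers through the SCS-CN runoff equation, Q = (P - Ia)^2 / (P - Ia + S) with Ia = 0.2S and S = 25400/CN - 254 in millimetres. A short sketch:

        # Minimal sketch: SCS curve number runoff (metric form) for the CN values above.
        def scs_runoff_mm(rainfall_mm, cn):
            s = 25400.0 / cn - 254.0                # potential maximum retention, mm
            ia = 0.2 * s                            # initial abstraction
            if rainfall_mm <= ia:
                return 0.0
            return (rainfall_mm - ia) ** 2 / (rainfall_mm - ia + s)

        for cn in (48.69, 45.35, 44.11):
            print(cn, round(scs_runoff_mm(100.0, cn), 1), "mm of direct runoff from 100 mm rain")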

  12. Sleep staging with movement-related signals.

    PubMed

    Jansen, B H; Shankar, K

    1993-05-01

    Body movement related signals (i.e., activity due to postural changes and the ballistocardiac effect) were recorded from six normal volunteers using the static-charge-sensitive bed (SCSB). Visual sleep staging was performed on the basis of simultaneously recorded EEG, EMG, and EOG signals. A statistical classification technique was used to determine whether reliable sleep staging could be performed using only the SCSB signal. Classification rates of between 52% and 75% were obtained for staging into the five conventional sleep stages and the awake state. These rates improved to between 78% and 89% for classification into awake, REM, and non-REM sleep, and to between 86% and 98% for awake versus asleep classification.

  13. Improvement of an algorithm for recognition of liveness using perspiration in fingerprint devices

    NASA Astrophysics Data System (ADS)

    Parthasaradhi, Sujan T.; Derakhshani, Reza; Hornak, Lawrence A.; Schuckers, Stephanie C.

    2004-08-01

    Previous work in our laboratory and elsewhere has demonstrated that spoof fingers made of a variety of materials, including silicone, Play-Doh, clay, and gelatin (gummy fingers), can be scanned and verified when compared to a live enrolled finger. Liveness detection, i.e., determining whether the introduced biometric comes from a live source, has been suggested as a means to circumvent attacks that use spoof fingers. We developed a new liveness method based on perspiration changes in the fingerprint image. Recent results showed approximately 90% classification rates using different classification methods for various technologies, including optical, electro-optical, and capacitive DC, with a shorter time window and a diverse dataset. This paper focuses on improving the live classification rate by using a weight decay method during the training phase in order to improve generalization and reduce the variance of the neural-network-based classifier. The dataset included fingerprint images from 33 live subjects, 33 spoofs created with dental impression material and Play-Doh, and fourteen cadaver fingers. 100% live classification was achieved with 81.8% to 100% spoof classification, depending on the device technology. The weight decay method improves upon past reports by increasing the live and spoof classification rates.
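
    Weight decay, the regularization technique named above, adds an L2 penalty on the network weights during training. A minimal sketch using scikit-learn's MLPClassifier, where the penalty strength is the alpha parameter; the perspiration features and live/spoof labels are random placeholders, not the paper's data.

        # Minimal sketch: weight decay (L2 penalty) on a small neural-network classifier.
        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(5)
        X = rng.normal(size=(300, 12))              # placeholder perspiration-change features
        y = rng.integers(0, 2, size=300)            # 1 = live finger, 0 = spoof/cadaver

        for alpha in (1e-4, 1e-2, 1.0):             # heavier decay -> smoother decision boundary
            clf = MLPClassifier(hidden_layer_sizes=(10,), alpha=alpha,
                                max_iter=2000, random_state=0)
            print(alpha, cross_val_score(clf, X, y, cv=5).mean())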

  14. A canonical correlation analysis based EMG classification algorithm for eliminating electrode shift effect.

    PubMed

    Zhe Fan; Zhong Wang; Guanglin Li; Ruomei Wang

    2016-08-01

    Motion classification systems based on surface electromyography (sEMG) pattern recognition have achieved good results under experimental conditions, but clinical implementation and practical application remain a challenge. Many factors contribute to the difficulty of clinical use of EMG-based dexterous control. The most obvious and important is the noise in the EMG signal caused by electrode shift, muscle fatigue, motion artifact, the inherent instability of the signal, and biological signals such as the electrocardiogram. In this paper, a novel method based on Canonical Correlation Analysis (CCA) was developed to eliminate the reduction in classification accuracy caused by electrode shift. The average classification accuracy of our method was above 95% for the healthy subjects. In the process, we validated the influence of electrode shift on motion classification accuracy and found a strong correlation (correlation coefficient > 0.9) between shifted-position data and normal-position data.
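
    A minimal sketch of the core idea, canonical correlation analysis between features recorded at the normal electrode position and at a shifted position, using scikit-learn's CCA; the feature matrices are simulated, and the full classification stage of the paper is not reproduced.

        # Minimal sketch: CCA between shifted-position and normal-position EMG features.
        import numpy as np
        from sklearn.cross_decomposition import CCA

        rng = np.random.default_rng(6)
        normal_feats = rng.normal(size=(200, 8))                  # features at the original position
        shifted_feats = normal_feats @ rng.normal(size=(8, 8)) + 0.1 * rng.normal(size=(200, 8))

        cca = CCA(n_components=4)
        u, v = cca.fit_transform(shifted_feats, normal_feats)
        # Correlation of each canonical pair; values near 1 suggest the shifted data can be
        # projected back toward the normal-position space before classification.
        print([round(np.corrcoef(u[:, i], v[:, i])[0, 1], 3) for i in range(4)])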

  15. Corn and soybean Landsat MSS classification performance as a function of scene characteristics

    NASA Technical Reports Server (NTRS)

    Batista, G. T.; Hixson, M. M.; Bauer, M. E.

    1982-01-01

    In order to fully utilize remote sensing to inventory crop production, it is important to identify the factors that affect the accuracy of Landsat classifications. The objective of this study was to investigate the effect of scene characteristics involving crop, soil, and weather variables on the accuracy of Landsat classifications of corn and soybeans. Segments sampling the U.S. Corn Belt were classified using a Gaussian maximum likelihood classifier on multitemporally registered data from two key acquisition periods. Field size had a strong effect on classification accuracy with small fields tending to have low accuracies even when the effect of mixed pixels was eliminated. Other scene characteristics accounting for variability in classification accuracy included proportions of corn and soybeans, crop diversity index, proportion of all field crops, soil drainage, slope, soil order, long-term average soybean yield, maximum yield, relative position of the segment in the Corn Belt, weather, and crop development stage.
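
    A Gaussian maximum likelihood classifier of the kind used above assigns each pixel to the class whose multivariate normal density, estimated per class from training fields, is highest; with explicit priors this is what scikit-learn implements as quadratic discriminant analysis. A small sketch on placeholder band values:

        # Minimal sketch: Gaussian maximum likelihood classification via per-class normal densities.
        import numpy as np
        from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

        rng = np.random.default_rng(7)
        pixels = rng.normal(size=(500, 8))          # two acquisitions x four MSS bands (placeholder)
        labels = rng.integers(0, 3, size=500)       # e.g. corn, soybeans, other

        gml = QuadraticDiscriminantAnalysis(priors=[1/3, 1/3, 1/3]).fit(pixels, labels)
        print("training accuracy:", gml.score(pixels, labels))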

  16. A generalized approach for producing, quantifying, and validating citizen science data from wildlife images.

    PubMed

    Swanson, Alexandra; Kosmala, Margaret; Lintott, Chris; Packer, Craig

    2016-06-01

    Citizen science has the potential to expand the scope and scale of research in ecology and conservation, but many professional researchers remain skeptical of data produced by nonexperts. We devised an approach for producing accurate, reliable data from untrained, nonexpert volunteers. On the citizen science website www.snapshotserengeti.org, more than 28,000 volunteers classified 1.51 million images taken in a large-scale camera-trap survey in Serengeti National Park, Tanzania. Each image was circulated to, on average, 27 volunteers, and their classifications were aggregated using a simple plurality algorithm. We validated the aggregated answers against a data set of 3829 images verified by experts and calculated 3 certainty metrics to measure confidence that an aggregated answer was correct: level of agreement among classifications (evenness), fraction of classifications supporting the aggregated answer (fraction support), and fraction of classifiers who reported "nothing here" for an image that was ultimately classified as containing an animal (fraction blank). Overall, aggregated volunteer answers agreed with the expert-verified data on 98% of images, but accuracy differed by species commonness such that rare species had higher rates of false positives and false negatives. Easily calculated analysis of variance and post-hoc Tukey tests indicated that the certainty metrics were significant indicators of whether each image was correctly classified or classifiable. Thus, the certainty metrics can be used to identify images for expert review. Bootstrapping analyses further indicated that 90% of images were correctly classified with just 5 volunteers per image. Species classifications based on the plurality vote of multiple citizen scientists can provide a reliable foundation for large-scale monitoring of African wildlife. © 2016 The Authors. Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.
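
    The plurality aggregation and the three certainty metrics described above can be sketched in a few lines of Python; the vote format below is an assumption for illustration, and evenness is computed as Pielou's index on the vote distribution.

      import math
      from collections import Counter

      def aggregate(votes):
          """votes: one image's volunteer labels, e.g. species names or "nothing here"."""
          counts = Counter(votes)
          answer, top = counts.most_common(1)[0]
          n = len(votes)
          frac_support = top / n
          frac_blank = counts.get("nothing here", 0) / n
          probs = [c / n for c in counts.values()]
          shannon = -sum(p * math.log(p) for p in probs)
          evenness = shannon / math.log(len(counts)) if len(counts) > 1 else 0.0
          return answer, frac_support, frac_blank, evenness

      print(aggregate(["wildebeest"] * 20 + ["zebra"] * 5 + ["nothing here"] * 2))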

  17. A generalized approach for producing, quantifying, and validating citizen science data from wildlife images

    PubMed Central

    Kosmala, Margaret; Lintott, Chris; Packer, Craig

    2016-01-01

    Abstract Citizen science has the potential to expand the scope and scale of research in ecology and conservation, but many professional researchers remain skeptical of data produced by nonexperts. We devised an approach for producing accurate, reliable data from untrained, nonexpert volunteers. On the citizen science website www.snapshotserengeti.org, more than 28,000 volunteers classified 1.51 million images taken in a large‐scale camera‐trap survey in Serengeti National Park, Tanzania. Each image was circulated to, on average, 27 volunteers, and their classifications were aggregated using a simple plurality algorithm. We validated the aggregated answers against a data set of 3829 images verified by experts and calculated 3 certainty metrics—level of agreement among classifications (evenness), fraction of classifications supporting the aggregated answer (fraction support), and fraction of classifiers who reported “nothing here” for an image that was ultimately classified as containing an animal (fraction blank)—to measure confidence that an aggregated answer was correct. Overall, aggregated volunteer answers agreed with the expert‐verified data on 98% of images, but accuracy differed by species commonness such that rare species had higher rates of false positives and false negatives. Easily calculated analysis of variance and post‐hoc Tukey tests indicated that the certainty metrics were significant indicators of whether each image was correctly classified or classifiable. Thus, the certainty metrics can be used to identify images for expert review. Bootstrapping analyses further indicated that 90% of images were correctly classified with just 5 volunteers per image. Species classifications based on the plurality vote of multiple citizen scientists can provide a reliable foundation for large‐scale monitoring of African wildlife. PMID:27111678

  18. Classification of orbital morphology for decompression surgery in Graves' orbitopathy: two-dimensional versus three-dimensional orbital parameters.

    PubMed

    Borumandi, Farzad; Hammer, Beat; Noser, Hansrudi; Kamer, Lukas

    2013-05-01

    Three-dimensional (3D) CT reconstruction of the bony orbit for accurate measurement and classification of the complex orbital morphology may not be suitable for daily practice. We present an easily measurable two-dimensional (2D) reference dataset of the bony orbit for study of individual orbital morphology prior to decompression surgery in Graves' orbitopathy. CT images of 70 European adults (140 orbits) with unaffected orbits were included. On axial views, the following orbital dimensions were assessed: orbital length (OL), globe length (GL), GL/OL ratio and cone angle. Postprocessed CT data were required to measure the corresponding 3D orbital parameters. The 2D and 3D orbital parameters were correlated. The 2D orbital parameters were significantly correlated to the corresponding 3D parameters (significant at the 0.01 level). The average GL was 25 mm (SD±1.0), the average OL was 42 mm (SD±2.0) and the average GL/OL ratio was 0.6 (SD±0.03). The posterior cone angle was, on average, 50.2° (SD±4.1). Three orbital sizes were classified: short (OL≤40 mm), medium (OL>40 to <45 mm) and large (OL≥45 mm). We present easily measurable reference data for the orbit that can be used for preoperative study and classification of individual orbital morphology. A short and shallow orbit may require a different decompression technique than a large and deep orbit. Prospective clinical trials are needed to demonstrate how individual orbital morphology affects the outcome of decompression surgery.

  19. Health behaviour modelling for prenatal diagnosis in Australia: a geodemographic framework for health service utilisation and policy development

    PubMed Central

    Muggli, Evelyne E; McCloskey, David; Halliday, Jane L

    2006-01-01

    Background Despite the wide availability of prenatal screening and diagnosis, a number of studies have reported no decrease in the rate of babies born with Down syndrome. The objective of this study was to investigate the geodemographic characteristics of women who have prenatal diagnosis in Victoria, Australia, by applying a novel consumer behaviour modelling technique in the analysis of health data. Methods A descriptive analysis of data on all prenatal diagnostic tests, births (1998 and 2002) and births of babies with Down syndrome (1998 to 2002) was undertaken using a Geographic Information System and socioeconomic lifestyle segmentation classifications. Results Most metropolitan women in Victoria have average or above State average levels of uptake of prenatal diagnosis. Inner city women residing in high socioeconomic lifestyle segments who have high rates of prenatal diagnosis spend 20% more on specialist physician's fees when compared to those whose rates are average. Rates of prenatal diagnosis are generally low amongst women in rural Victoria, with the lowest rates observed in farming districts. Reasons for this are likely to be a combination of lack of access to services (remoteness) and individual opportunity (lack of transportation, low levels of support and income). However, there are additional reasons for low uptake rates in farming areas that could not be explained by the behaviour modelling. These may relate to women's attitudes and choices. Conclusion A lack of statewide geodemographic consistency in uptake of prenatal diagnosis implies that there is a need to target health professionals and pregnant women in specific areas to ensure there is increased equity of access to services and that all pregnant women can make informed choices that are best for them. Equally as important is appropriate health service provision for families of children with Down syndrome. Our findings show that these potential interventions are particularly relevant in rural areas. Classifying data to lifestyle segments allowed for practical comparisons of the geodemographic characteristics of women having prenatal diagnosis in Australia at a population level. This methodology may in future be a feasible and cost-effective tool for service planners and policy developers. PMID:16945156

  20. 19 CFR 141.90 - Notation of tariff classification and value on invoice.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 19 Customs Duties 2 2010-04-01 2010-04-01 false Notation of tariff classification and value on... classification and value on invoice. (a) [Reserved] (b) Classification and rate of duty. The importer or customs... invoice value which have been made to arrive at the aggregate entered value. In addition, the entered unit...

  1. Supervised classification of brain tissues through local multi-scale texture analysis by coupling DIR and FLAIR MR sequences

    NASA Astrophysics Data System (ADS)

    Poletti, Enea; Veronese, Elisa; Calabrese, Massimiliano; Bertoldo, Alessandra; Grisan, Enrico

    2012-02-01

    The automatic segmentation of brain tissues in magnetic resonance (MR) is usually performed on T1-weighted images, due to their high spatial resolution. The T1w sequence, however, has some major downsides when brain lesions are present: the altered appearance of diseased tissues causes errors in tissue classification. In order to overcome these drawbacks, we employed two different MR sequences: fluid attenuated inversion recovery (FLAIR) and double inversion recovery (DIR). The former highlights both gray matter (GM) and white matter (WM), while the latter highlights GM alone. We propose here a supervised classification scheme that does not require any anatomical a priori information to identify the 3 classes, "GM", "WM", and "background". Features are extracted by means of a local multi-scale texture analysis, computed for each pixel of the DIR and FLAIR sequences. The 9 textures considered are average, standard deviation, kurtosis, entropy, contrast, correlation, energy, homogeneity, and skewness, evaluated on neighborhoods of 3×3, 5×5, and 7×7 pixels. Hence, the total number of features associated with a pixel is 56 (9 textures × 3 scales × 2 sequences + 2 original pixel values). The classifier employed is a Support Vector Machine with a Radial Basis Function kernel. From each of the 4 brain volumes evaluated, a DIR and a FLAIR slice have been selected and manually segmented by 2 expert neurologists, providing 1st and 2nd human reference observations which agree with an average accuracy of 99.03%. SVM performance has been assessed with 4-fold cross-validation, yielding an average classification accuracy of 98.79%.
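
    A rough sketch of per-pixel, multi-scale texture features followed by an RBF-kernel SVM is given below. It uses a reduced statistic set (mean, standard deviation, skewness, kurtosis, histogram entropy, and a simple contrast surrogate) rather than the full nine textures, and the image arrays, labels, and border handling are assumptions for illustration.

      import numpy as np
      from scipy.stats import kurtosis, skew
      from sklearn.svm import SVC

      def patch_stats(img, r, c, w):
          """Texture statistics of the w x w neighbourhood centred on pixel (r, c);
          assumes (r, c) lies at least w // 2 pixels away from the image border."""
          h = w // 2
          patch = img[r - h:r + h + 1, c - h:c + h + 1].astype(float).ravel()
          hist, _ = np.histogram(patch, bins=16)
          p = hist[hist > 0] / hist.sum()
          entropy = -np.sum(p * np.log2(p))
          return [patch.mean(), patch.std(), skew(patch), kurtosis(patch),
                  entropy, patch.max() - patch.min()]

      def pixel_features(dir_img, flair_img, r, c, scales=(3, 5, 7)):
          feats = []
          for img in (dir_img, flair_img):
              for w in scales:
                  feats += patch_stats(img, r, c, w)
              feats.append(float(img[r, c]))            # original intensity of the pixel
          return feats

      # Hypothetical training step, given labelled pixel coordinates `coords` and labels `y`:
      # X = [pixel_features(dir_img, flair_img, r, c) for r, c in coords]
      # clf = SVC(kernel="rbf", gamma="scale").fit(X, y)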

  2. Detection of motor imagery of swallow EEG signals based on the dual-tree complex wavelet transform and adaptive model selection

    NASA Astrophysics Data System (ADS)

    Yang, Huijuan; Guan, Cuntai; Sui Geok Chua, Karen; San Chok, See; Wang, Chuan Chu; Kok Soon, Phua; Tang, Christina Ka Yin; Keng Ang, Kai

    2014-06-01

    Objective. Detection of motor imagery of hand/arm has been extensively studied for stroke rehabilitation. This paper firstly investigates the detection of motor imagery of swallow (MI-SW) and motor imagery of tongue protrusion (MI-Ton) in an attempt to find a novel solution for post-stroke dysphagia rehabilitation. Detection of MI-SW from a simple yet relevant modality such as MI-Ton is then investigated, motivated by the similarity in activation patterns between tongue movements and swallowing and there being fewer movement artifacts in performing tongue movements compared to swallowing. Approach. Novel features were extracted based on the coefficients of the dual-tree complex wavelet transform to build multiple training models for detecting MI-SW. The session-to-session classification accuracy was boosted by adaptively selecting the training model to maximize the ratio of between-classes distances versus within-class distances, using features of training and evaluation data. Main results. Our proposed method yielded averaged cross-validation (CV) classification accuracies of 70.89% and 73.79% for MI-SW and MI-Ton for ten healthy subjects, which are significantly better than the results from existing methods. In addition, averaged CV accuracies of 66.40% and 70.24% for MI-SW and MI-Ton were obtained for one stroke patient, demonstrating the detectability of MI-SW and MI-Ton from the idle state. Furthermore, averaged session-to-session classification accuracies of 72.08% and 70% were achieved for ten healthy subjects and one stroke patient using the MI-Ton model. Significance. These results and the subjectwise strong correlations in classification accuracies between MI-SW and MI-Ton demonstrated the feasibility of detecting MI-SW from MI-Ton models.
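
    The adaptive model-selection rule described above can be illustrated with a simple separability score: for each candidate training model, compute the ratio of between-class to within-class scatter of its features and keep the model that maximizes it. The numpy sketch below is a generic rendering of that idea, not the paper's implementation.

      import numpy as np

      def separability(X, y):
          """Ratio of between-class to within-class scatter for features X and labels y."""
          X, y = np.asarray(X, dtype=float), np.asarray(y)
          mu = X.mean(axis=0)
          between = sum(np.sum(y == c) * np.sum((X[y == c].mean(axis=0) - mu) ** 2)
                        for c in np.unique(y))
          within = sum(np.sum((X[y == c] - X[y == c].mean(axis=0)) ** 2)
                       for c in np.unique(y))
          return between / within

      # Given hypothetical candidate models, each exposing the features and labels it
      # would use for the current session, keep the most separable one:
      # best = max(candidate_models, key=lambda m: separability(m.features, m.labels))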

  3. Rapid authentication of adulteration of olive oil by near-infrared spectroscopy using support vector machines

    NASA Astrophysics Data System (ADS)

    Wu, Jingzhu; Dong, Jingjing; Dong, Wenfei; Chen, Yan; Liu, Cuiling

    2016-10-01

    A support vector machine classification method with a linear kernel was employed to authenticate genuine olive oil based on near-infrared spectroscopy. Three types of olive oil adulteration were examined in the study: the adulterants were soybean oil, rapeseed oil, and a mixture of soybean and rapeseed oil, respectively. The average recognition rate of the second experiment was more than 90%, and that of the third experiment reached 100%. The results showed that the method performed well in distinguishing genuine olive oil from adulterated oil, even with a small variation range of adulterant concentration, and that it is a promising and rapid technique for the detection of oil adulteration and fraud in the food industry.

  4. Blind identification of image manipulation type using mixed statistical moments

    NASA Astrophysics Data System (ADS)

    Jeong, Bo Gyu; Moon, Yong Ho; Eom, Il Kyu

    2015-01-01

    We present a blind identification of image manipulation types such as blurring, scaling, sharpening, and histogram equalization. Motivated by the fact that image manipulations can change the frequency characteristics of an image, we introduce three types of feature vectors composed of statistical moments. The proposed statistical moments are generated from separated wavelet histograms, the characteristic functions of the wavelet variance, and the characteristic functions of the spatial image. Our method can solve the n-class classification problem. Through experimental simulations, we demonstrate that our proposed method can achieve high performance in manipulation type detection. The average rate of the correctly identified manipulation types is as high as 99.22%, using 10,800 test images and six manipulation types including the authentic image.
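
    In the spirit of the feature design above, the sketch below computes low-order moments from wavelet subband coefficients; it relies on the PyWavelets package, and the wavelet choice, decomposition level, and exact statistics are illustrative assumptions rather than the paper's feature definitions.

      import numpy as np
      import pywt

      def wavelet_moment_features(image, wavelet="db4", level=2):
          """Mean, standard deviation, and skewness of every wavelet subband."""
          coeffs = pywt.wavedec2(np.asarray(image, dtype=float), wavelet, level=level)
          subbands = [coeffs[0]] + [band for detail in coeffs[1:] for band in detail]
          feats = []
          for band in subbands:
              x = band.ravel()
              mu, sigma = x.mean(), x.std() + 1e-12
              feats += [float(mu), float(sigma), float(np.mean(((x - mu) / sigma) ** 3))]
          return np.asarray(feats)

      # These vectors would then feed any multi-class classifier trained to separate the
      # manipulation types (blurring, scaling, sharpening, histogram equalization, ...).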

  5. AVHRR channel selection for land cover classification

    USGS Publications Warehouse

    Maxwell, S.K.; Hoffer, R.M.; Chapman, P.L.

    2002-01-01

    Mapping land cover of large regions often requires processing of satellite images collected from several time periods at many spectral wavelength channels. However, manipulating and processing large amounts of image data increases the complexity and time, and hence the cost, that it takes to produce a land cover map. Very few studies have evaluated the importance of individual Advanced Very High Resolution Radiometer (AVHRR) channels for discriminating cover types, especially the thermal channels (channels 3, 4 and 5). Studies rarely perform a multi-year analysis to determine the impact of inter-annual variability on the classification results. We evaluated 5 years of AVHRR data using combinations of the original AVHRR spectral channels (1-5) to determine which channels are most important for cover type discrimination, yet stabilize inter-annual variability. Particular attention was placed on the channels in the thermal portion of the spectrum. Fourteen cover types over the entire state of Colorado were evaluated using a supervised classification approach on all two-, three-, four- and five-channel combinations for seven AVHRR biweekly composite datasets covering the entire growing season for each of 5 years. Results show that all three of the major portions of the electromagnetic spectrum represented by the AVHRR sensor are required to discriminate cover types effectively and stabilize inter-annual variability. Of the two-channel combinations, channels 1 (red visible) and 2 (near-infrared) had, by far, the highest average overall accuracy (72.2%), yet the inter-annual classification accuracies were highly variable. Including a thermal channel (channel 4) significantly increased the average overall classification accuracy by 5.5% and stabilized interannual variability. Each of the thermal channels gave similar classification accuracies; however, because of the problems in consistently interpreting channel 3 data, either channel 4 or 5 was found to be a more appropriate choice. Substituting the thermal channel with a single elevation layer resulted in equivalent classification accuracies and inter-annual variability.
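
    The channel-combination evaluation described above amounts to scoring every subset of the five AVHRR channels with a supervised classifier. The sketch below illustrates that loop, with quadratic discriminant analysis standing in for the Gaussian maximum likelihood classifier; the per-pixel matrix X, labels y, and cross-validation scheme are assumptions for illustration.

      from itertools import combinations

      from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score

      def score_channel_subsets(X, y, n_channels=5):
          """Mean cross-validated accuracy for every 2- to 5-channel combination."""
          results = {}
          for k in range(2, n_channels + 1):
              for subset in combinations(range(n_channels), k):
                  clf = QuadraticDiscriminantAnalysis()    # Gaussian-ML-style classifier
                  results[subset] = cross_val_score(clf, X[:, list(subset)], y, cv=5).mean()
          return sorted(results.items(), key=lambda item: -item[1])

      # score_channel_subsets(X, y)[:5] would list the five best-performing channel subsets.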

  6. Classification of octet AB-type binary compounds using dynamical charges: A materials informatics perspective

    DOE PAGES

    Pilania, G.; Gubernatis, J. E.; Lookman, T.

    2015-12-03

    The role of dynamical (or Born effective) charges in classification of octet AB-type binary compounds between four-fold (zincblende/wurtzite crystal structures) and six-fold (rocksalt crystal structure) coordinated systems is discussed. We show that the difference in the dynamical charges of the fourfold and sixfold coordinated structures, in combination with Harrison's polarity, serves as an excellent feature to classify the coordination of 82 sp-bonded binary octet compounds. We use a support vector machine classifier to estimate the average classification accuracy and the associated variance in our model, where a decision boundary is learned in a supervised manner. Lastly, we compare the out-of-sample classification accuracy achieved by our feature pair with those reported previously.
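
    A minimal sketch of such a two-feature SVM, with a repeated cross-validation estimate of the mean accuracy and its spread, is shown below; the feature values and labels are randomly generated placeholders, not the 82 compounds from this record.

      import numpy as np
      from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(3)
      X = rng.normal(size=(82, 2))                      # [charge difference, polarity] placeholders
      y = (X @ np.array([1.0, 0.8]) > 0).astype(int)    # placeholder fourfold vs sixfold labels

      cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=20, random_state=0)
      model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
      scores = cross_val_score(model, X, y, cv=cv)
      print(f"accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")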

  7. 34 CFR 222.68 - What tax rates does the Secretary use if two or more different classifications of real property...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 1 2010-07-01 2010-07-01 false What tax rates does the Secretary use if two or more different classifications of real property are taxed at different rates? 222.68 Section 222.68 Education Regulations of the Offices of the Department of Education OFFICE OF ELEMENTARY AND SECONDARY EDUCATION...

  8. A Comparison of Tactical Leader Decision Making Between Automated and Live Counterparts in a Virtual Environment

    DTIC Science & Technology

    2014-06-01

    Thesis by Scott A. Patton, June 2014. Thesis Advisor: Quinn Kennedy; Second Reader: Jonathan Alt. 119 pages; security classification of report: Unclassified.

  9. Semantic labeling of digital photos by classification

    NASA Astrophysics Data System (ADS)

    Ciocca, Gianluigi; Cusano, Claudio; Schettini, Raimondo; Brambilla, Carla

    2003-01-01

    The paper addresses the problem of annotating photographs with broad semantic labels. To cope with the great variety of photos available on the WEB we have designed a hierarchical classification strategy which first classifies images as pornographic or not-pornographic. Not-pornographic images are then classified as indoor, outdoor, or close-up. On a database of over 9000 images, mostly downloaded from the web, our method achieves an average accuracy of close to 90%.

  10. Automatic breast density classification using a convolutional neural network architecture search procedure

    NASA Astrophysics Data System (ADS)

    Fonseca, Pablo; Mendoza, Julio; Wainer, Jacques; Ferrer, Jose; Pinto, Joseph; Guerrero, Jorge; Castaneda, Benjamin

    2015-03-01

    Breast parenchymal density is considered a strong indicator of breast cancer risk and therefore useful for preventive tasks. Measurement of breast density is often qualitative and requires the subjective judgment of radiologists. Here we explore an automatic breast composition classification workflow based on convolutional neural networks for feature extraction in combination with a support vector machines classifier. This is compared to the assessments of seven experienced radiologists. The experiments yielded an average kappa value of 0.58 when using the mode of the radiologists' classifications as ground truth. Individual radiologist performance against this ground truth yielded kappa values between 0.56 and 0.79.
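
    The agreement computation described above, Cohen's kappa against the mode of the radiologists' assessments, can be sketched as follows; the rating matrix and the automatic predictions are placeholders, not data from the study.

      import numpy as np
      from scipy import stats
      from sklearn.metrics import cohen_kappa_score

      # Rows are mammograms, columns are the density classes assigned by each radiologist.
      ratings = np.array([[1, 2, 2, 2, 3],
                          [3, 3, 4, 3, 3],
                          [2, 2, 2, 1, 2],
                          [4, 4, 3, 4, 4]])
      consensus = stats.mode(ratings, axis=1, keepdims=False).mode   # per-case majority class

      auto_pred = np.array([2, 4, 2, 4])               # hypothetical automatic predictions
      print("kappa vs. consensus:", cohen_kappa_score(consensus, auto_pred))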

  11. Cooperative Learning for Distributed In-Network Traffic Classification

    NASA Astrophysics Data System (ADS)

    Joseph, S. B.; Loo, H. R.; Ismail, I.; Andromeda, T.; Marsono, M. N.

    2017-04-01

    Inspired by the concept of autonomic distributed/decentralized network management schemes, we consider the issue of information exchange among distributed network nodes to improve network performance and promote scalability for in-network monitoring. In this paper, we propose a cooperative learning algorithm for the propagation and synchronization of network information among autonomic distributed network nodes for online traffic classification. The results show that network nodes with sharing capability perform better, with a higher average accuracy of 89.21% (sharing data) and 88.37% (sharing clusters) compared to 88.06% for nodes without cooperative learning capability. The overall performance indicates that cooperative learning is promising for distributed in-network traffic classification.

  12. A semi-automatic method for quantification and classification of erythrocytes infected with malaria parasites in microscopic images.

    PubMed

    Díaz, Gloria; González, Fabio A; Romero, Eduardo

    2009-04-01

    Visual quantification of parasitemia in thin blood films is a very tedious, subjective and time-consuming task. This study presents an original method for quantification and classification of erythrocytes in stained thin blood films infected with Plasmodium falciparum. The proposed approach is composed of three main phases: a preprocessing step, which corrects luminance differences; a segmentation step, which uses the normalized RGB color space to classify pixels as either erythrocyte or background, followed by an Inclusion-Tree representation that structures the pixel information into objects from which erythrocytes are found; and, finally, a two-step classification process that identifies infected erythrocytes and differentiates the infection stage using a trained bank of classifiers. Additionally, user intervention is allowed when the approach cannot make a proper decision. Four hundred fifty malaria images were used for training and evaluating the method. Automatic identification of infected erythrocytes showed a specificity of 99.7% and a sensitivity of 94%. The infection stage was determined with an average sensitivity of 78.8% and average specificity of 91.2%.
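
    As a hedged illustration of the normalized-RGB segmentation idea above, the snippet below converts an image to chromaticity coordinates and thresholds the red channel to obtain a candidate erythrocyte mask; the threshold value and helper names are assumptions, not the paper's trained pixel classifier.

      import numpy as np

      def normalized_rgb(image):
          """image: H x W x 3 array; returns per-pixel chromaticity coordinates (r, g, b)."""
          rgb = image.astype(float)
          return rgb / (rgb.sum(axis=2, keepdims=True) + 1e-6)

      def candidate_erythrocyte_mask(image, r_min=0.40):
          # Stained erythrocytes are assumed here to show a higher red chromaticity than
          # the background; a real system would learn this decision rule from data.
          return normalized_rgb(image)[..., 0] > r_min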

  13. Analysis of calibrated seafloor backscatter for habitat classification methodology and case study of 158 spots in the Bay of Biscay and Celtic Sea

    NASA Astrophysics Data System (ADS)

    Fezzani, Ridha; Berger, Laurent

    2018-06-01

    An automated signal-based method was developed to analyse the seafloor backscatter data logged by a calibrated multibeam echosounder. The processing consists, first, of clustering each survey sub-area into a small number of homogeneous sediment types based on the average backscatter level at one or several incidence angles. Second, it uses their local average angular response to extract discriminant descriptors, obtained by fitting the field data to the Generic Seafloor Acoustic Backscatter parametric model. Third, the descriptors are used for seafloor-type classification. The method was tested on multi-year data recorded by a calibrated 90-kHz Simrad ME70 multibeam sonar operated in the Bay of Biscay, France, and the Celtic Sea, Ireland. It was applied, for seafloor-type classification into 12 classes, to a dataset of 158 spots surveyed for demersal and benthic fauna study and monitoring. Qualitative analyses and the clusters classified using the extracted parameters show good discriminatory potential, indicating the robustness of this approach.

  14. Optimal two-stage dynamic treatment regimes from a classification perspective with censored survival data.

    PubMed

    Hager, Rebecca; Tsiatis, Anastasios A; Davidian, Marie

    2018-05-18

    Clinicians often make multiple treatment decisions at key points over the course of a patient's disease. A dynamic treatment regime is a sequence of decision rules, each mapping a patient's observed history to the set of available, feasible treatment options at each decision point, and thus formalizes this process. An optimal regime is one leading to the most beneficial outcome on average if used to select treatment for the patient population. We propose a method for estimation of an optimal regime involving two decision points when the outcome of interest is a censored survival time, which is based on maximizing a locally efficient, doubly robust, augmented inverse probability weighted estimator for average outcome over a class of regimes. By casting this optimization as a classification problem, we exploit well-studied classification techniques such as support vector machines to characterize the class of regimes and facilitate implementation via a backward iterative algorithm. Simulation studies of performance and application of the method to data from a sequential, multiple assignment randomized clinical trial in acute leukemia are presented. © 2018, The International Biometric Society.

  15. Determination of geographical origin of alcoholic beverages using ultraviolet, visible and infrared spectroscopy: A review

    NASA Astrophysics Data System (ADS)

    Uríčková, Veronika; Sádecká, Jana

    2015-09-01

    The identification of the geographical origin of beverages is one of the most important issues in food chemistry. Spectroscopic methods provide a relative rapid and low cost alternative to traditional chemical composition or sensory analyses. This paper reviews the current state of development of ultraviolet (UV), visible (Vis), near infrared (NIR) and mid infrared (MIR) spectroscopic techniques combined with pattern recognition methods for determining geographical origin of both wines and distilled drinks. UV, Vis, and NIR spectra contain broad band(s) with weak spectral features limiting their discrimination ability. Despite this expected shortcoming, each of the three spectroscopic ranges (NIR, Vis/NIR and UV/Vis/NIR) provides average correct classification higher than 82%. Although average correct classification is similar for NIR and MIR regions, in some instances MIR data processing improves prediction. Advantage of using MIR is that MIR peaks are better defined and more easily assigned than NIR bands. In general, success in a classification depends on both spectral range and pattern recognition methods. The main problem still remains the construction of databanks needed for all of these methods.

  16. Caesarean Section in Peru: Analysis of Trends Using the Robson Classification System

    PubMed Central

    2016-01-01

    Introduction Cesarean section rates continue to increase worldwide while the reasons appear to be multiple, complex and, in many cases, country specific. Over the last decades, several classification systems for caesarean section have been created and proposed to monitor and compare caesarean section rates in a standardized, reliable, consistent and action-oriented manner with the aim to understand the drivers and contributors of this trend. The aims of the present study were to conduct an analysis in the three Peruvian geographical regions to assess levels and trends of delivery by caesarean section using the Robson classification for caesarean section, identify the groups of women with the highest caesarean section rates and assess variation of maternal and perinatal outcomes according to caesarean section levels in each group over time. Material and Methods Data from 549,681 pregnant women included in the Peruvian Perinatal Information System database from 43 maternal facilities in three Peruvian geographical regions from 2000 and 2010 were studied. The data were analyzed using the Robson classification and women were studied in the ten groups in the classification. The Cochran-Armitage test was used to evaluate time trends in caesarean section rates, and logistic regression was used to evaluate risk for each classification. Results The caesarean section rate was 27%, and a yearly increase in the overall caesarean section rate from 23.5% in 2000 to 30% in 2010 (time trend p<0.001) was observed. Robson groups 1, 3 (nulliparous and multiparas, respectively, with a single cephalic term pregnancy in spontaneous labour), 5 (multiparas with a previous uterine scar with a single, cephalic, term pregnancy) and 7 (multiparas with a single breech pregnancy with or without previous scars) showed an increase in the caesarean section rates over time. Robson groups 1 and 3 were significantly associated with stillbirths (OR 1.43, CI95% 1.17–1.72; OR 3.53, CI95% 2.95–4.2) and maternal mortality (OR 3.39, CI95% 1.59–7.22; OR 8.05, CI95% 3.34–19.41). Discussion The caesarean section rates increased in recent years as a result of increased caesarean sections in groups with spontaneous labor and in the group of multiparas with a scarred uterus. Women included in groups 1 and 3 were associated with maternal and perinatal complications. Women with previous cesarean section constitute the most important determinant of overall cesarean section rates. The Robson classification is a useful tool for monitoring cesarean section in countries with a low human development index. PMID:26840693
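
    For reference, the Cochran-Armitage trend test mentioned above can be computed directly from the standard score-test formula; the numpy sketch below does so, and the yearly counts in the example are placeholders rather than the study's data.

      import numpy as np
      from scipy.stats import norm

      def cochran_armitage_trend(successes, totals, scores=None):
          """Test for a linear trend in proportions across ordered groups (e.g. years)."""
          r = np.asarray(successes, dtype=float)
          n = np.asarray(totals, dtype=float)
          s = np.arange(len(r), dtype=float) if scores is None else np.asarray(scores, float)
          p_bar = r.sum() / n.sum()
          u = np.sum(s * (r - n * p_bar))
          var_u = p_bar * (1 - p_bar) * (np.sum(n * s ** 2) - np.sum(n * s) ** 2 / n.sum())
          z = u / np.sqrt(var_u)
          return z, 2 * norm.sf(abs(z))                 # z statistic and two-sided p-value

      # Hypothetical example: caesarean sections out of all deliveries in consecutive years.
      z, p = cochran_armitage_trend([235, 260, 300], [1000, 1000, 1000])
      print(f"z = {z:.2f}, p = {p:.3g}")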

  17. Gynecomastia Classification for Surgical Management: A Systematic Review and Novel Classification System.

    PubMed

    Waltho, Daniel; Hatchell, Alexandra; Thoma, Achilleas

    2017-03-01

    Gynecomastia is a common deformity of the male breast, where certain cases warrant surgical management. There are several surgical options, which vary depending on the breast characteristics. To guide surgical management, several classification systems for gynecomastia have been proposed. A systematic review was performed to (1) identify all classification systems for the surgical management of gynecomastia, and (2) determine the adequacy of these classification systems to appropriately categorize the condition for surgical decision-making. The search yielded 1012 articles, and 11 articles were included in the review. Eleven classification systems in total were ascertained, and a total of 10 unique features were identified: (1) breast size, (2) skin redundancy, (3) breast ptosis, (4) tissue predominance, (5) upper abdominal laxity, (6) breast tuberosity, (7) nipple malposition, (8) chest shape, (9) absence of sternal notch, and (10) breast skin elasticity. On average, classification systems included two or three of these features. Breast size and ptosis were the most commonly included features. Based on their review of the current classification systems, the authors believe the ideal classification system should be universal and cater to all causes of gynecomastia; be surgically useful and easy to use; and should include a comprehensive set of clinically appropriate patient-related features, such as breast size, breast ptosis, tissue predominance, and skin redundancy. None of the current classification systems appears to fulfill these criteria.

  18. Understanding the local public health workforce: labels versus substance.

    PubMed

    Merrill, Jacqueline A; Keeling, Jonathan W

    2014-11-01

    The workforce is a key component of the nation's public health (PH) infrastructure, but little is known about the skills of local health department (LHD) workers to guide policy and planning. To profile a sample of LHD workers using classification schemes for PH work (the substance of what is done) and PH job titles (the labeling of what is done) to determine if work content is consistent with job classifications. A secondary analysis was conducted on data collected from 2,734 employees from 19 LHDs using a taxonomy of 151 essential tasks performed, knowledge possessed, and resources available. Each employee was classified by job title using a schema developed by PH experts. The inter-rater agreement was calculated within job classes and congruence on tasks, knowledge, and resources for five exemplar classes was examined. The average response rate was 89%. Overall, workers exhibited moderate agreement on tasks and poor agreement on knowledge and resources. Job classes with higher agreement included agency directors and community workers; those with lower agreement were mid-level managers such as program directors. Findings suggest that local PH workers within a job class perform similar tasks but vary in training and access to resources. Job classes that are specific and focused have higher agreement whereas job classes that perform in many roles show less agreement. The PH worker classification may not match employees' skill sets or how LHDs allocate resources, which may be a contributor to unexplained fluctuation in public health system performance. Copyright © 2014. Published by Elsevier Inc.

  19. Does semi-automatic bone-fragment segmentation improve the reproducibility of the Letournel acetabular fracture classification?

    PubMed

    Boudissa, M; Orfeuvre, B; Chabanas, M; Tonetti, J

    2017-09-01

    The Letournel classification of acetabular fracture shows poor reproducibility in inexperienced observers, despite the introduction of 3D imaging. We therefore developed a method of semi-automatic segmentation based on CT data. The present prospective study aimed to assess: (1) whether semi-automatic bone-fragment segmentation increased the rate of correct classification; (2) if so, in which fracture types; and (3) feasibility using the open-source itksnap 3.0 software package without incurring extra cost for users. Semi-automatic segmentation of acetabular fractures significantly increases the rate of correct classification by orthopedic surgery residents. Twelve orthopedic surgery residents classified 23 acetabular fractures. Six used conventional 3D reconstructions provided by the center's radiology department (conventional group) and 6 others used reconstructions obtained by semi-automatic segmentation using the open-source itksnap 3.0 software package (segmentation group). Bone fragments were identified by specific colors. Correct classification rates were compared between groups on Chi 2 test. Assessment was repeated 2 weeks later, to determine intra-observer reproducibility. Correct classification rates were significantly higher in the "segmentation" group: 114/138 (83%) versus 71/138 (52%); P<0.0001. The difference was greater for simple (36/36 (100%) versus 17/36 (47%); P<0.0001) than complex fractures (79/102 (77%) versus 54/102 (53%); P=0.0004). Mean segmentation time per fracture was 27±3min [range, 21-35min]. The segmentation group showed excellent intra-observer correlation coefficients, overall (ICC=0.88), and for simple (ICC=0.92) and complex fractures (ICC=0.84). Semi-automatic segmentation, identifying the various bone fragments, was effective in increasing the rate of correct acetabular fracture classification on the Letournel system by orthopedic surgery residents. It may be considered for routine use in education and training. III: prospective case-control study of a diagnostic procedure. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
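
    The group comparison reported above (correct versus incorrect classifications in the segmentation and conventional groups) can be reproduced with a chi-square test on a 2 x 2 table built from the counts given in the abstract:

      from scipy.stats import chi2_contingency

      table = [[114, 138 - 114],    # segmentation group: correct, incorrect
               [71, 138 - 71]]      # conventional group: correct, incorrect
      chi2, p, dof, expected = chi2_contingency(table)
      print(f"chi2 = {chi2:.1f}, p = {p:.2g}")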

  20. Effects of uncertainty and variability on population declines and IUCN Red List classifications.

    PubMed

    Rueda-Cediel, Pamela; Anderson, Kurt E; Regan, Tracey J; Regan, Helen M

    2018-01-22

    The International Union for Conservation of Nature (IUCN) Red List Categories and Criteria is a quantitative framework for classifying species according to extinction risk. Population models may be used to estimate extinction risk or population declines. Uncertainty and variability arise in threat classifications through measurement and process error in empirical data and uncertainty in the models used to estimate extinction risk and population declines. Furthermore, species traits are known to affect extinction risk. We investigated the effects of measurement and process error, model type, population growth rate, and age at first reproduction on the reliability of IUCN Red List classifications based on projected population declines. We used an age-structured population model to simulate true population trajectories with different growth rates, reproductive ages and levels of variation, and subjected them to measurement error. We evaluated the ability of scalar and matrix models parameterized with these simulated time series to accurately capture the IUCN Red List classification generated with true population declines. Under all levels of measurement error tested and low process error, classifications were reasonably accurate; scalar and matrix models yielded roughly the same rate of misclassifications, but the distribution of errors differed; matrix models led to greater overestimation of extinction risk than underestimations; process error tended to contribute to misclassifications to a greater extent than measurement error; and more misclassifications occurred for fast, rather than slow, life histories. These results indicate that classifications of highly threatened taxa (i.e., taxa with low growth rates) under criterion A are more likely to be reliable than for less threatened taxa when assessed with population models. Greater scrutiny needs to be placed on data used to parameterize population models for species with high growth rates, particularly when available evidence indicates a potential transition to higher risk categories. © 2018 Society for Conservation Biology.
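
    A hedged sketch of the simulation idea above: project an age-structured (Leslie matrix) population, add lognormal measurement error to each census, and classify the observed decline against IUCN criterion A-style thresholds. The matrix entries, error level, and exact thresholds below are illustrative assumptions, not the parameter sets used in the study.

      import numpy as np

      rng = np.random.default_rng(7)
      leslie = np.array([[0.0, 0.8, 1.0],    # age-specific fecundities
                         [0.5, 0.0, 0.0],    # survival, age 0 -> 1
                         [0.0, 0.6, 0.0]])   # survival, age 1 -> 2

      def classify_decline(decline):
          """Criterion A-style thresholds (illustrative): 80% / 50% / 30% over the window."""
          if decline >= 0.80:
              return "Critically Endangered"
          if decline >= 0.50:
              return "Endangered"
          if decline >= 0.30:
              return "Vulnerable"
          return "Below threatened thresholds"

      n = np.array([100.0, 60.0, 40.0])
      true_n = [n.sum()]
      observed_n = [n.sum() * rng.lognormal(0.0, 0.2)]
      for _ in range(10):                                    # ten annual projections
          n = leslie @ n
          true_n.append(n.sum())
          observed_n.append(n.sum() * rng.lognormal(0.0, 0.2))    # measurement error

      true_decline = 1 - true_n[-1] / true_n[0]
      observed_decline = 1 - observed_n[-1] / observed_n[0]
      print(classify_decline(true_decline), "|", classify_decline(observed_decline))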

  1. The new Epstein gleason score classification significantly reduces upgrading in prostate cancer patients.

    PubMed

    De Nunzio, Cosimo; Pastore, Antonio Luigi; Lombardo, Riccardo; Simone, Giuseppe; Leonardo, Costantino; Mastroianni, Riccardo; Collura, Devis; Muto, Giovanni; Gallucci, Michele; Carbone, Antonio; Fuschi, Andrea; Dutto, Lorenzo; Witt, Joern Heinrich; De Dominicis, Carlo; Tubaro, Andrea

    2018-06-01

    To evaluate the differences between the old and the new Gleason score classification systems in upgrading and downgrading rates. Between 2012 and 2015, we identified 9703 patients treated with retropubic radical prostatectomy (RP) in four tertiary centers. Biopsy specimens as well as radical prostatectomy specimens were graded according to both 2005 Gleason and 2014 ISUP five-tier Gleason grading system (five-tier GG system). Upgrading and downgrading rates on radical prostatectomy were first recorded for both classifications and then compared. The accuracy of the biopsy for each histological classification was determined by using the kappa coefficient of agreement and by assessing sensitivity, specificity, positive and negative predictive value. The five-tier GG system presented a lower clinically significant upgrading rate (1895/9703: 19.5% vs 2332/9703: 24.0%; p = .001) and a similar clinically significant downgrading rate (756/9703: 7.7% vs 779/9703: 8%; p = .267) when compared to the 2005 ISUP classification. When evaluating their accuracy, the new five-tier GG system presented a better specificity (91% vs 83%) and a better negative predictive value (78% vs 60%). The kappa-statistics measures of agreement between needle biopsy and radical prostatectomy specimens were poor and good respectively for the five-tier GG system and for the 2005 Gleason score (k = 0.360 ± 0.007 vs k = 0.426 ± 0.007). The new Epstein classification significantly reduces upgrading events. The implementation of this new classification could better define prostate cancer aggressiveness with important clinical implications, particularly in prostate cancer management. Copyright © 2018 Elsevier Ltd, BASO ~ The Association for Cancer Surgery, and the European Society of Surgical Oncology. All rights reserved.

  2. Classification of the intention to generate a shoulder versus elbow torque by means of a time frequency synthesized spatial patterns BCI algorithm

    NASA Astrophysics Data System (ADS)

    Deng, Jie; Yao, Jun; Dewald, Julius P. A.

    2005-12-01

    In this paper, we attempt to determine a subject's intention of generating torque at the shoulder or elbow, two neighboring joints, using scalp electroencephalogram signals from 163 electrodes for a brain-computer interface (BCI) application. To achieve this goal, we have applied a time-frequency synthesized spatial patterns (TFSP) BCI algorithm with a presorting procedure. Using this method, we were able to achieve an average recognition rate of 89% in four healthy subjects, which is comparable to the highest rates reported in the literature but now for tasks with much closer spatial representations on the motor cortex. This result demonstrates, for the first time, that the TFSP BCI method can be applied to separate intentions between generating static shoulder versus elbow torque. Furthermore, in this study, the potential application of this BCI algorithm for brain-injured patients was tested in one chronic hemiparetic stroke subject. A recognition rate of 76% was obtained, suggesting that this BCI method can provide a potential control signal for neural prostheses or other movement coordination improving devices for patients following brain injury.

  3. Classification of urban features using airborne hyperspectral data

    NASA Astrophysics Data System (ADS)

    Ganesh Babu, Bharath

    Accurate mapping and modeling of urban environments are critical for their efficient and successful management. Superior understanding of complex urban environments is made possible by using modern geospatial technologies. This research focuses on thematic classification of urban land use and land cover (LULC) using 248 bands of 2.0 meter resolution hyperspectral data acquired from an airborne imaging spectrometer (AISA+) on 24th July 2006 in and near Terre Haute, Indiana. Three distinct study areas including two commercial classes, two residential classes, and two urban parks/recreational classes were selected for classification and analysis. Four commonly used classification methods -- maximum likelihood (ML), extraction and classification of homogeneous objects (ECHO), spectral angle mapper (SAM), and iterative self-organizing data analysis (ISODATA) -- were applied to each data set. Accuracy assessment was conducted and overall accuracies were compared between the twenty-four resulting thematic maps. With the exception of SAM and ISODATA in a complex commercial area, all methods employed classified the designated urban features with more than 80% accuracy. The thematic classification from ECHO showed the best agreement with ground reference samples. The residential area with relatively homogeneous composition was classified consistently with the highest accuracy by all four of the classification methods used. The average accuracy amongst the classifiers was 93.60% for this area. When individually observed, the complex recreational area (Deming Park) was classified with the highest accuracy by ECHO, with an accuracy of 96.80% and 96.10% Kappa. The average accuracy amongst all the classifiers was 92.07%. The commercial area with relatively high complexity was classified with the least accuracy by all classifiers. The lowest accuracy was achieved by SAM at 63.90% with 59.20% Kappa. This was also the lowest accuracy in the entire analysis. This study demonstrates the potential for using the visible and near infrared (VNIR) bands from AISA+ hyperspectral data in urban LULC classification. Based on their performance, the need for further research using ECHO and SAM is underscored. The importance of incorporating imaging spectrometer data in high resolution urban feature mapping is emphasized.
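
    Of the classifiers compared above, the spectral angle mapper is the simplest to state: each pixel is assigned to the reference spectrum with which it forms the smallest spectral angle. A compact numpy sketch follows; the array shapes are assumptions for illustration.

      import numpy as np

      def spectral_angles(pixels, refs):
          """pixels: (n_pixels, n_bands); refs: (n_classes, n_bands). Angles in radians."""
          p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
          r = refs / np.linalg.norm(refs, axis=1, keepdims=True)
          return np.arccos(np.clip(p @ r.T, -1.0, 1.0))

      def sam_classify(pixels, refs):
          """Label each pixel with the index of the closest reference spectrum."""
          return np.argmin(spectral_angles(pixels, refs), axis=1)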

  4. Application of visible and near-infrared spectroscopy to classification of Miscanthus species

    DOE PAGES

    Jin, Xiaoli; Chen, Xiaoling; Xiao, Liang; ...

    2017-04-03

    Here, the feasibility of visible and near infrared (NIR) spectroscopy as a tool to classify Miscanthus samples was explored in this study. Three types of Miscanthus plants, namely, M. sinensis, M. sacchariflorus and M. floridulus, were analyzed using a NIR spectrophotometer. Several classification models based on the NIR spectra data were developed using linear discriminant analysis (LDA), partial least squares (PLS), least squares support vector machine regression (LSSVR), radial basis function (RBF) and neural network (NN). The principal component analysis (PCA) presented rough classification with overlapping samples, while the models of Line_LSSVR, RBF_LSSVR and RBF_NN presented almost the same calibration and validation results. Due to the higher speed of Line_LSSVR than RBF_LSSVR and RBF_NN, we selected the line_LSSVR model as a representative. In our study, the model based on line_LSSVR showed higher accuracy than LDA and PLS models. The total correct classification rates of 87.79 and 96.51% were observed based on LDA and PLS model in the testing set, respectively, while the line_LSSVR showed 99.42% of total correct classification rate. Meanwhile, the lin_LSSVR model in the testing set showed correct classification rates of 100, 100 and 96.77% for M. sinensis, M. sacchariflorus and M. floridulus, respectively. The lin_LSSVR model assigned 99.42% of samples to the right groups, except one M. floridulus sample. The results demonstrated that NIR spectra combined with a preliminary morphological classification could be an effective and reliable procedure for the classification of Miscanthus species.
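
    For the discriminant-analysis step compared above, a minimal scikit-learn sketch is given below; compressing the spectra with a few principal components before LDA is a common practice and an assumption here, not necessarily the preprocessing used in the paper.

      from sklearn.decomposition import PCA
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline

      def species_classifier(n_components=10):
          """PCA scores of the NIR spectra feeding a linear discriminant classifier."""
          return make_pipeline(PCA(n_components=n_components), LinearDiscriminantAnalysis())

      # X: (n_samples, n_wavelengths) spectra; y: species labels (M. sinensis, ...).
      # scores = cross_val_score(species_classifier(), X, y, cv=5)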

  5. Application of visible and near-infrared spectroscopy to classification of Miscanthus species

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Xiaoli; Chen, Xiaoling; Xiao, Liang

    Here, the feasibility of visible and near infrared (NIR) spectroscopy as a tool to classify Miscanthus samples was explored in this study. Three types of Miscanthus plants, namely, M. sinensis, M. sacchariflorus and M. floridulus, were analyzed using a NIR spectrophotometer. Several classification models based on the NIR spectra data were developed using linear discriminant analysis (LDA), partial least squares (PLS), least squares support vector machine regression (LSSVR), radial basis function (RBF) and neural network (NN). The principal component analysis (PCA) presented rough classification with overlapping samples, while the models of Line_LSSVR, RBF_LSSVR and RBF_NN presented almost the same calibration and validation results. Due to the higher speed of Line_LSSVR than RBF_LSSVR and RBF_NN, we selected the line_LSSVR model as a representative. In our study, the model based on line_LSSVR showed higher accuracy than LDA and PLS models. The total correct classification rates of 87.79 and 96.51% were observed based on LDA and PLS model in the testing set, respectively, while the line_LSSVR showed 99.42% of total correct classification rate. Meanwhile, the lin_LSSVR model in the testing set showed correct classification rates of 100, 100 and 96.77% for M. sinensis, M. sacchariflorus and M. floridulus, respectively. The lin_LSSVR model assigned 99.42% of samples to the right groups, except one M. floridulus sample. The results demonstrated that NIR spectra combined with a preliminary morphological classification could be an effective and reliable procedure for the classification of Miscanthus species.

  6. Application of visible and near-infrared spectroscopy to classification of Miscanthus species.

    PubMed

    Jin, Xiaoli; Chen, Xiaoling; Xiao, Liang; Shi, Chunhai; Chen, Liang; Yu, Bin; Yi, Zili; Yoo, Ji Hye; Heo, Kweon; Yu, Chang Yeon; Yamada, Toshihiko; Sacks, Erik J; Peng, Junhua

    2017-01-01

    The feasibility of visible and near infrared (NIR) spectroscopy as a tool to classify Miscanthus samples was explored in this study. Three types of Miscanthus plants, namely, M. sinensis, M. sacchariflorus and M. floridulus, were analyzed using a NIR spectrophotometer. Several classification models based on the NIR spectra data were developed using linear discriminant analysis (LDA), partial least squares (PLS), least squares support vector machine regression (LSSVR), radial basis function (RBF) and neural network (NN). The principal component analysis (PCA) presented rough classification with overlapping samples, while the models of Line_LSSVR, RBF_LSSVR and RBF_NN presented almost the same calibration and validation results. Due to the higher speed of Line_LSSVR than RBF_LSSVR and RBF_NN, we selected the line_LSSVR model as a representative. In our study, the model based on line_LSSVR showed higher accuracy than LDA and PLS models. The total correct classification rates of 87.79 and 96.51% were observed based on LDA and PLS model in the testing set, respectively, while the line_LSSVR showed 99.42% of total correct classification rate. Meanwhile, the lin_LSSVR model in the testing set showed correct classification rates of 100, 100 and 96.77% for M. sinensis, M. sacchariflorus and M. floridulus, respectively. The lin_LSSVR model assigned 99.42% of samples to the right groups, except one M. floridulus sample. The results demonstrated that NIR spectra combined with a preliminary morphological classification could be an effective and reliable procedure for the classification of Miscanthus species.

  7. Application of visible and near-infrared spectroscopy to classification of Miscanthus species

    PubMed Central

    Shi, Chunhai; Chen, Liang; Yu, Bin; Yi, Zili; Yoo, Ji Hye; Heo, Kweon; Yu, Chang Yeon; Yamada, Toshihiko; Sacks, Erik J.; Peng, Junhua

    2017-01-01

    The feasibility of visible and near infrared (NIR) spectroscopy as a tool to classify Miscanthus samples was explored in this study. Three types of Miscanthus plants, namely, M. sinensis, M. sacchariflorus and M. floridulus, were analyzed using a NIR spectrophotometer. Several classification models based on the NIR spectra data were developed using linear discriminant analysis (LDA), partial least squares (PLS), least squares support vector machine regression (LSSVR), radial basis function (RBF) and neural network (NN). The principal component analysis (PCA) presented rough classification with overlapping samples, while the models of Line_LSSVR, RBF_LSSVR and RBF_NN presented almost the same calibration and validation results. Due to the higher speed of Line_LSSVR than RBF_LSSVR and RBF_NN, we selected the line_LSSVR model as a representative. In our study, the model based on line_LSSVR showed higher accuracy than LDA and PLS models. The total correct classification rates of 87.79 and 96.51% were observed based on LDA and PLS model in the testing set, respectively, while the line_LSSVR showed 99.42% of total correct classification rate. Meanwhile, the lin_LSSVR model in the testing set showed correct classification rates of 100, 100 and 96.77% for M. sinensis, M. sacchariflorus and M. floridulus, respectively. The lin_LSSVR model assigned 99.42% of samples to the right groups, except one M. floridulus sample. The results demonstrated that NIR spectra combined with a preliminary morphological classification could be an effective and reliable procedure for the classification of Miscanthus species. PMID:28369059

  8. How Well Do Molecular and Pedigree Relatedness Correspond, in Populations with Diverse Mating Systems, and Various Types and Quantities of Molecular and Demographic Data?

    PubMed

    Kopps, Anna M; Kang, Jungkoo; Sherwin, William B; Palsbøll, Per J

    2015-06-30

    Kinship analyses are important pillars of ecological and conservation genetic studies with potentially far-reaching implications. There is a need for power analyses that address a range of possible relationships. Nevertheless, such analyses are rarely applied, and studies that use genetic-data-based-kinship inference often ignore the influence of intrinsic population characteristics. We investigated 11 questions regarding the correct classification rate of dyads to relatedness categories (relatedness category assignments; RCA) using an individual-based model with realistic life history parameters. We investigated the effects of the number of genetic markers; marker type (microsatellite, single nucleotide polymorphism SNP, or both); minor allele frequency; typing error; mating system; and the number of overlapping generations under different demographic conditions. We found that (i) an increasing number of genetic markers increased the correct classification rate of the RCA so that up to >80% first cousins can be correctly assigned; (ii) the minimum number of genetic markers required for assignments with 80 and 95% correct classifications differed between relatedness categories, mating systems, and the number of overlapping generations; (iii) the correct classification rate was improved by adding additional relatedness categories and age and mitochondrial DNA data; and (iv) a combination of microsatellite and single-nucleotide polymorphism data increased the correct classification rate if <800 SNP loci were available. This study shows how intrinsic population characteristics, such as mating system and the number of overlapping generations, life history traits, and genetic marker characteristics, can influence the correct classification rate of an RCA study. Therefore, species-specific power analyses are essential for empirical studies. Copyright © 2015 Kopps et al.

  9. Analysis and Recognition of Traditional Chinese Medicine Pulse Based on the Hilbert-Huang Transform and Random Forest in Patients with Coronary Heart Disease

    PubMed Central

    Wang, Yiqin; Yan, Hanxia; Yan, Jianjun; Yuan, Fengyin; Xu, Zhaoxia; Liu, Guoping; Xu, Wenjie

    2015-01-01

    Objective. This research provides objective and quantitative parameters of the traditional Chinese medicine (TCM) pulse conditions for distinguishing between patients with the coronary heart disease (CHD) and normal people by using the proposed classification approach based on Hilbert-Huang transform (HHT) and random forest. Methods. The energy and the sample entropy features were extracted by applying the HHT to TCM pulse by treating these pulse signals as time series. By using the random forest classifier, the extracted two types of features and their combination were, respectively, used as input data to establish classification model. Results. Statistical results showed that there were significant differences in the pulse energy and sample entropy between the CHD group and the normal group. Moreover, the energy features, sample entropy features, and their combination were inputted as pulse feature vectors; the corresponding average recognition rates were 84%, 76.35%, and 90.21%, respectively. Conclusion. The proposed approach could be appropriately used to analyze pulses of patients with CHD, which can lay a foundation for research on objective and quantitative criteria on disease diagnosis or Zheng differentiation. PMID:26180536
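
    A hedged sketch of the feature and classification stages described above is given below. It assumes that the intrinsic mode functions (IMFs) of each pulse signal have already been obtained from an EMD/HHT routine and are supplied as a list of 1-D arrays; the simplified sample entropy implementation and the forest settings are generic choices, not the paper's.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      def sample_entropy(x, m=2, r_factor=0.2):
          """Simplified sample entropy; O(N^2), intended for short signal segments."""
          x = np.asarray(x, dtype=float)
          r = r_factor * x.std()

          def matches(length):
              templates = np.array([x[i:i + length] for i in range(len(x) - length)])
              dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
              return np.sum(dist <= r) - len(templates)    # drop self-matches
          b, a = matches(m), matches(m + 1)
          return -np.log(a / b) if a > 0 and b > 0 else np.inf

      def pulse_features(imfs):
          """Energy and sample entropy of each IMF, concatenated into one feature vector."""
          energy = [float(np.sum(imf ** 2)) for imf in imfs]
          entropy = [sample_entropy(imf) for imf in imfs]
          return np.array(energy + entropy)

      # Hypothetical usage, with `all_imfs` a list of per-subject IMF lists and `y` the labels:
      # X = np.vstack([pulse_features(imfs) for imfs in all_imfs])
      # clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)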

  10. Analysis and Recognition of Traditional Chinese Medicine Pulse Based on the Hilbert-Huang Transform and Random Forest in Patients with Coronary Heart Disease.

    PubMed

    Guo, Rui; Wang, Yiqin; Yan, Hanxia; Yan, Jianjun; Yuan, Fengyin; Xu, Zhaoxia; Liu, Guoping; Xu, Wenjie

    2015-01-01

    Objective. This research provides objective and quantitative parameters of the traditional Chinese medicine (TCM) pulse conditions for distinguishing between patients with the coronary heart disease (CHD) and normal people by using the proposed classification approach based on Hilbert-Huang transform (HHT) and random forest. Methods. The energy and the sample entropy features were extracted by applying the HHT to TCM pulse by treating these pulse signals as time series. By using the random forest classifier, the extracted two types of features and their combination were, respectively, used as input data to establish classification model. Results. Statistical results showed that there were significant differences in the pulse energy and sample entropy between the CHD group and the normal group. Moreover, the energy features, sample entropy features, and their combination were inputted as pulse feature vectors; the corresponding average recognition rates were 84%, 76.35%, and 90.21%, respectively. Conclusion. The proposed approach could be appropriately used to analyze pulses of patients with CHD, which can lay a foundation for research on objective and quantitative criteria on disease diagnosis or Zheng differentiation.

  11. Portable bacterial identification system based on elastic light scatter patterns.

    PubMed

    Bae, Euiwon; Ying, Dawei; Kramer, Donald; Patsekin, Valery; Rajwa, Bartek; Holdman, Cheryl; Sturgis, Jennifer; Davisson, V Jo; Robinson, J Paul

    2012-08-28

    Conventional diagnosis and identification of bacteria require shipment of samples to a laboratory for genetic and biochemical analysis. This process can take days and imposes significant delay to action in situations where timely intervention can save lives and reduce associated costs. To enable faster response to an outbreak, a low-cost, small-footprint, portable microbial-identification instrument using forward scatterometry has been developed. This device, weighing 9 lb and measuring 12 × 6 × 10.5 in., utilizes elastic light scatter (ELS) patterns to accurately capture bacterial colony characteristics and delivers the classification results via wireless access. The overall system consists of two CCD cameras, one rotational and one translational stage, and a 635-nm laser diode. Software algorithms such as the Hough transform, 2-D geometric moments, and the traveling salesman problem (TSP) have been implemented to provide colony count and circularity, centering, and minimized travel time among colonies. Experiments were conducted with four bacteria genera using pure and mixed plates, and as proof of principle a field test was conducted in four different locations, where the average classification rate ranged between 95 and 100%.

  12. Sitting Posture Monitoring System Based on a Low-Cost Load Cell Using Machine Learning

    PubMed Central

    Roh, Jongryun; Park, Hyeong-jun; Lee, Kwang Jin; Hyeong, Joonho; Kim, Sayup

    2018-01-01

    Sitting posture monitoring systems (SPMSs) help assess the posture of a seated person in real time and improve sitting posture. To date, reported SPMS studies have required many sensors mounted on the backrest plate and seat plate of a chair. The present study, therefore, developed a system that measures a total of six sitting postures, including a posture that applies a load to the backrest plate, with four load cells mounted only on the seat plate. Various machine learning algorithms were applied to the body weight ratios measured by the developed SPMS to identify the method that most accurately classified the actual sitting posture of the seated person. After classifying the sitting postures using several classifiers, average and maximum classification rates of 97.20% and 97.94%, respectively, were obtained from nine subjects with a support vector machine using the radial basis function kernel; the results obtained by this classifier showed a statistically significant difference from the results of multiple classifications using other classifiers. The proposed SPMS was able to classify six sitting postures, including the posture with loading on the backrest, and showed the possibility of classifying sitting posture even with a reduced number of sensors. PMID:29329261
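
    The following is an illustrative sketch, not the study's code: it trains an RBF-kernel SVM on placeholder four-column weight-ratio data standing in for the seat-plate load cells; the synthetic data and parameter values are assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: one row per sample, four columns = body-weight ratio on each seat load cell
# y: one of six sitting-posture labels (0..5); values below are placeholders
rng = np.random.default_rng(0)
X = rng.random((180, 4))
X = X / X.sum(axis=1, keepdims=True)   # normalise to weight ratios
y = rng.integers(0, 6, size=180)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
scores = cross_val_score(clf, X, y, cv=5)
print("mean classification rate: %.2f%%" % (100 * scores.mean()))
```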

  13. Polyphonic sonification of electrocardiography signals for diagnosis of cardiac pathologies

    NASA Astrophysics Data System (ADS)

    Kather, Jakob Nikolas; Hermann, Thomas; Bukschat, Yannick; Kramer, Tilmann; Schad, Lothar R.; Zöllner, Frank Gerrit

    2017-03-01

    Electrocardiography (ECG) data are multidimensional temporal data with ubiquitous applications in the clinic. Conventionally, these data are presented visually. It is presently unclear to what degree data sonification (auditory display) can enable the detection of clinically relevant cardiac pathologies in ECG data. In this study, we introduce a method for polyphonic sonification of ECG data, whereby different ECG channels are simultaneously represented by sound of different pitch. We retrospectively applied this method to 12 samples from a publicly available ECG database. We and colleagues from our professional environment then analyzed these data in a blinded way. Based on these analyses, we found that the sonification technique can be intuitively understood after a short training session. On average, the correct classification rate for observers trained in cardiology was 78%, compared to 68% and 50% for observers not trained in cardiology or not trained in medicine at all, respectively. These values compare to an expected random guessing performance of 25%. Strikingly, 27% of all observers had a classification accuracy over 90%, indicating that sonification can be very successfully used by talented individuals. These findings can serve as a baseline for potential clinical applications of ECG sonification.
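
    The paper does not give its exact mapping, so the sketch below only illustrates the general idea of polyphonic sonification: each ECG channel modulates a sine tone of its own pitch and the tones are mixed. The base frequencies, sampling rates, and the `sonify_ecg` helper are hypothetical.

```python
import numpy as np

def sonify_ecg(channels, fs_ecg=250, fs_audio=22050, base_freqs=(220.0, 330.0, 440.0)):
    """Render each ECG channel as an amplitude-modulated sine at its own pitch
    and mix the channels into one polyphonic audio signal."""
    duration = channels.shape[1] / fs_ecg
    t = np.linspace(0.0, duration, int(duration * fs_audio), endpoint=False)
    audio = np.zeros_like(t)
    for ch, f0 in zip(channels, base_freqs):
        env = (ch - ch.min()) / (np.ptp(ch) + 1e-12)           # normalise channel to [0, 1]
        env = np.interp(t, np.arange(len(ch)) / fs_ecg, env)   # resample to audio rate
        audio += env * np.sin(2 * np.pi * f0 * t)
    return audio / max(np.abs(audio).max(), 1e-12)

# three synthetic channels, 10 s at 250 Hz; write out with scipy.io.wavfile if desired
ecg = np.random.randn(3, 2500)
mix = sonify_ecg(ecg)
```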

  14. Forestry Expansion during the Last Decades in the Paraiba do Sul Basin - Brazil

    NASA Astrophysics Data System (ADS)

    Carriello, F.; Rezende, F. S.; Neves, O. M. S.; Rodriguez, D. A.

    2016-06-01

    from 1986 to 2010. This region contains the most important and largest remnant of the Mata Atlântica biome, which has been one of the most exploited Brazilian biomes since 1500, when Brazilian colonization began. To achieve this goal, we used the GIS "SPRING" and Landsat 5 TM images from 1986, 1990, 1995, 2000, 2005 and 2010, distributed by the Brazilian National Institute for Space Research (INPE). Non-supervised classification was applied to the images in order to produce land use and land cover maps. We then intersected each classification with that of the preceding date, so that the paths of each land use change could be analyzed, focusing on forestry expansion into native Mata Atlântica areas. The results show that eucalyptus plantations in the region have expanded mostly over fragments of Mata Atlântica. About 99,389 hectares of Mata Atlântica were converted to forestry in 25 years, an average rate of about 4,000 ha per year. Clear-cutting was greatest between 1990 and 1995, when 22,810 hectares of rain forest were cut, and between 1995 and 2000, when 21,430 hectares were cut.

  15. An extension of the receiver operating characteristic curve and AUC-optimal classification.

    PubMed

    Takenouchi, Takashi; Komori, Osamu; Eguchi, Shinto

    2012-10-01

    While most proposed methods for solving classification problems focus on minimization of the classification error rate, we are interested in the receiver operating characteristic (ROC) curve, which provides more information about classification performance than the error rate does. The area under the ROC curve (AUC) is a natural measure for overall assessment of a classifier based on the ROC curve. We discuss a class of concave functions for AUC maximization in which a boosting-type algorithm including RankBoost is considered, and the Bayesian risk consistency and the lower bound of the optimum function are discussed. A procedure derived by maximizing a specific optimum function has high robustness, based on gross error sensitivity. Additionally, we focus on the partial AUC, which is the partial area under the ROC curve. For example, in medical screening, a high true-positive rate at a fixed low false-positive rate is preferable, and thus the partial AUC corresponding to low false-positive rates is much more important than the remaining AUC. We extend the class of concave optimum functions for partial AUC optimality with the boosting algorithm. We investigated the validity of the proposed method through several experiments with data sets in the UCI repository.
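
    As context for the quantities being optimized (not the boosting algorithm itself), the snippet below computes the full AUC and a partial AUC restricted to low false-positive rates with scikit-learn; the logistic-regression scorer and the 0.1 FPR cut-off are arbitrary choices for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

scores = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).decision_function(X_te)

full_auc = roc_auc_score(y_te, scores)
# partial AUC restricted to false-positive rates below 0.1 (standardised by sklearn)
partial_auc = roc_auc_score(y_te, scores, max_fpr=0.1)
print(f"AUC = {full_auc:.3f}, partial AUC (FPR <= 0.1) = {partial_auc:.3f}")
```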

  16. Optimization of the ANFIS using a genetic algorithm for physical work rate classification.

    PubMed

    Habibi, Ehsanollah; Salehi, Mina; Yadegarfar, Ghasem; Taheri, Ali

    2018-03-13

    Recently, a new method was proposed for physical work rate classification based on an adaptive neuro-fuzzy inference system (ANFIS). This study aims to present a genetic algorithm (GA)-optimized ANFIS model for a highly accurate classification of physical work rate. Thirty healthy men participated in this study. Directly measured heart rate and oxygen consumption of the participants in the laboratory were used for training the ANFIS classifier model in MATLAB version 8.0.0 using a hybrid algorithm. A similar process was done using the GA as an optimization technique. The accuracy, sensitivity and specificity of the ANFIS classifier model were increased successfully. The mean accuracy of the model was increased from 92.95 to 97.92%. Also, the calculated root mean square error of the model was reduced from 5.4186 to 3.1882. The maximum estimation error of the optimized ANFIS during the network testing process was ± 5%. The GA can be effectively used for ANFIS optimization and leads to an accurate classification of physical work rate. In addition to high accuracy, simple implementation and inter-individual variability consideration are two other advantages of the presented model.

  17. Outcomes of an antimicrobial control program in a teaching hospital.

    PubMed

    Gentry, C A; Greenfield, R A; Slater, L N; Wack, M; Huycke, M M

    2000-02-01

    The clinical outcomes and cost-effectiveness of an antimicrobial control program (ACP) were studied. The impact of an ACP in a teaching hospital was analyzed by comparing clinical outcomes and intravenous antimicrobial costs over two two-year periods, the two years before the program and the first two years after the program's inception. Admission baseline data, length of stay, mortality, and readmission rates were gathered for each patient. Patients were identified by using the International Classification of Diseases. Multivariate logistic regression models were constructed for mortality and for lengths of stay of 12 or more days. The acquisition costs of intravenous antimicrobial agents for the second baseline year and the entire program period were tabulated and compared. The average daily inpatient census was determined. The ACP was associated with a 2.4-day decrease in length of stay and a reduction in mortality from 8.28% to 6.61%. Rates of readmission for infection within 30 days of discharge remained about the same. Inpatient pharmacy costs other than intravenous antimicrobials decreased an average of only 5.7% over the two program years, but the acquisition cost of intravenous antimicrobials for both program years yielded a total cost saving of $291,885, a reduction of 30.8%. The institution's average daily census fell 19% between the second baseline year and the second program year. An ACP directed by a clinical pharmacist trained in infectious diseases was associated with improvements in inpatient length of stay and mortality. The ACP decreased intravenous antimicrobial costs and facilitated the approval process for restricted and nonformulary antimicrobial agents.

  18. Learning semantic histopathological representation for basal cell carcinoma classification

    NASA Astrophysics Data System (ADS)

    Gutiérrez, Ricardo; Rueda, Andrea; Romero, Eduardo

    2013-03-01

    Diagnosis of a histopathology glass slide is a complex process that involves accurate recognition of several structures, their function in the tissue and their relation with other structures. The way in which the pathologist represents the image content and the relations between those objects yields better and more accurate diagnoses. Therefore, an appropriate semantic representation of the image content will be useful in several analysis tasks such as cancer classification, tissue retrieval and histopathological image analysis, among others. Nevertheless, automatically recognizing those structures and extracting their inner semantic meaning are still very challenging tasks. In this paper we introduce a new semantic representation that allows histopathological concepts suitable for classification to be described. The approach identifies local concepts using a dictionary learning approach, i.e., the algorithm learns the most representative atoms from a set of randomly sampled patches, and then models the spatial relations among them by counting the co-occurrence between atoms, while penalizing the spatial distance. The proposed approach was compared with a bag-of-features representation in a tissue classification task. For this purpose, 240 histological microscopical fields of view, 24 per tissue class, were collected. These images fed a Support Vector Machine classifier per class, using 120 images as the training set and the remaining ones for testing, maintaining the same proportion of each concept in the training and test sets. The classification results, averaged over 100 random partitions of training and test sets, show that our approach is on average almost 6% more sensitive than the bag-of-features representation.

  19. Hyperspectral image classification by a variable interval spectral average and spectral curve matching combined algorithm

    NASA Astrophysics Data System (ADS)

    Senthil Kumar, A.; Keerthi, V.; Manjunath, A. S.; Werff, Harald van der; Meer, Freek van der

    2010-08-01

    Classification of hyperspectral images has been receiving considerable attention with many new applications reported from commercial and military sectors. Hyperspectral images are composed of a large number of spectral channels, and have the potential to deliver a great deal of information about a remotely sensed scene. However, in addition to high dimensionality, hyperspectral image classification is compounded with a coarse ground pixel size of the sensor for want of adequate sensor signal to noise ratio within a fine spectral passband. This results in multiple ground features jointly occupying a single pixel. Spectral mixture analysis typically begins with pixel classification with spectral matching techniques, followed by the use of spectral unmixing algorithms for estimating endmember abundance values in the pixel. The spectral matching techniques are analogous to supervised pattern recognition approaches, and try to estimate some similarity between spectral signatures of the pixel and reference target. In this paper, we propose a spectral matching approach by combining two schemes—the variable interval spectral average (VISA) method and the spectral curve matching (SCM) method. The VISA method helps to detect transient spectral features at different scales of spectral windows, while the SCM method finds a match between these features of the pixel and one of the library spectra by least square fitting. Here we also compare the performance of the combined algorithm with other spectral matching techniques using simulated and AVIRIS hyperspectral data sets. Our results indicate that the proposed combination technique exhibits a stronger performance over the other methods in the classification of both the pure and mixed class pixels simultaneously.
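
    A hedged sketch of the combined matching idea, under the assumption that VISA can be approximated by moving averages at several window scales and SCM by a least-squares fit of the pixel features to each library spectrum; `interval_averages` and `best_match` are illustrative names, not the authors' implementation.

```python
import numpy as np

def interval_averages(spectrum, window_sizes=(3, 5, 9)):
    """Stack moving averages of the spectrum at several window scales
    (a stand-in for the variable interval spectral average, VISA)."""
    feats = []
    for w in window_sizes:
        kernel = np.ones(w) / w
        feats.append(np.convolve(spectrum, kernel, mode="same"))
    return np.concatenate(feats)

def best_match(pixel, library):
    """Spectral curve matching: least-squares fit of the pixel features
    against each library spectrum's features; return best index and residual."""
    px = interval_averages(pixel)
    residuals = []
    for ref in library:
        rf = interval_averages(ref)
        # fit px ~ a * rf + b and record the residual sum of squares
        A = np.vstack([rf, np.ones_like(rf)]).T
        coef, res, *_ = np.linalg.lstsq(A, px, rcond=None)
        rss = res[0] if res.size else np.sum((A @ coef - px) ** 2)
        residuals.append(rss)
    return int(np.argmin(residuals)), float(min(residuals))

# library: (n_classes, n_bands) reference spectra; pixel: (n_bands,) observed spectrum
library = np.random.rand(4, 200)
pixel = library[2] + 0.02 * np.random.randn(200)
print(best_match(pixel, library))     # expected to pick index 2
```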

  20. Classification of complementary and alternative medical practices: Family physicians' ratings of effectiveness.

    PubMed

    Fries, Christopher J

    2008-11-01

    Objective: To develop a classification of complementary and alternative medicine (CAM) practices widely available in Canada based on physicians' effectiveness ratings of the therapies. Design: A self-administered postal questionnaire asking family physicians to rate their "belief in the degree of therapeutic effectiveness" of 15 CAM therapies. Setting: Province of Alberta. Participants: A total of 875 family physicians. Main outcome measures: Descriptive statistics of physicians' awareness of and effectiveness ratings for each of the therapies; factor analysis was applied to the ratings of the 15 therapies in order to explore whether or not the data support the proposed classification of CAM practices into categories of accepted and rejected. Results: Physicians believed that acupuncture, massage therapy, chiropractic care, relaxation therapy, biofeedback, and spiritual or religious healing were effective when used in conjunction with biomedicine to treat chronic or psychosomatic indications. Physicians attributed little effectiveness to homeopathy or naturopathy, Feldenkrais or Alexander technique, Rolfing, herbal medicine, traditional Chinese medicine, and reflexology. The factor analysis revealed an underlying dimensionality to physicians' effectiveness ratings of the CAM therapies that supports the classification of these practices as either accepted or rejected. Conclusion: This study provides Canadian family physicians with information concerning which CAM therapies are generally accepted by their peers as effective and which are not.

  1. The Rural Inpatient Mortality Study: Does Urban-Rural County Classification Predict Hospital Mortality in California?

    PubMed

    Linnen, Daniel T; Kornak, John; Stephens, Caroline

    2018-03-28

    Evidence suggests an association between rurality and decreased life expectancy. To determine whether rural hospitals have higher hospital mortality, given that very sick patients may be transferred to regional hospitals. In this ecologic study, we combined Medicare hospital mortality ratings (N = 1267) with US census data, critical access hospital classification, and National Center for Health Statistics urban-rural county classifications. Ratings included mortality for coronary artery bypass grafting, stroke, chronic obstructive pulmonary disease, heart attack, heart failure, and pneumonia across 277 California hospitals between July 2011 and June 2014. We used generalized estimating equations to evaluate the association of urban-rural county classifications on mortality ratings. Unfavorable Medicare hospital mortality rating "worse than the national rate" compared with "better" or "same." Compared with large central "metro" (metropolitan) counties, hospitals in medium-sized metro counties had 6.4 times the odds of rating "worse than the national rate" for hospital mortality (95% confidence interval = 2.8-14.8, p < 0.001). For hospitals in small metro counties, the odds of having such a rating were 3.7 times greater (95% confidence interval = 0.7-23.4, p = 0.12), although not statistically significant. Few ratings were provided for rural counties, and analysis of rural counties was underpowered. Hospitals in medium-sized metro counties are associated with unfavorable Medicare mortality ratings, but current methods to assign mortality ratings may hinder fair comparisons. Patient transfers from rural locations to regional medical centers may contribute to these results, a potential factor that future research should examine.

  2. Assessment of statistical methods used in library-based approaches to microbial source tracking.

    PubMed

    Ritter, Kerry J; Carruthers, Ethan; Carson, C Andrew; Ellender, R D; Harwood, Valerie J; Kingsley, Kyle; Nakatsu, Cindy; Sadowsky, Michael; Shear, Brian; West, Brian; Whitlock, John E; Wiggins, Bruce A; Wilbur, Jayson D

    2003-12-01

    Several commonly used statistical methods for fingerprint identification in microbial source tracking (MST) were examined to assess the effectiveness of pattern-matching algorithms to correctly identify sources. Although numerous statistical methods have been employed for source identification, no widespread consensus exists as to which is most appropriate. A large-scale comparison of several MST methods, using identical fecal sources, presented a unique opportunity to assess the utility of several popular statistical methods. These included discriminant analysis, nearest neighbour analysis, maximum similarity and average similarity, along with several measures of distance or similarity. Threshold criteria for excluding uncertain or poorly matched isolates from final analysis were also examined for their ability to reduce false positives and increase prediction success. Six independent libraries used in the study were constructed from indicator bacteria isolated from fecal materials of humans, seagulls, cows and dogs. Three of these libraries were constructed using the rep-PCR technique and three relied on antibiotic resistance analysis (ARA). Five of the libraries were constructed using Escherichia coli and one using Enterococcus spp. (ARA). Overall, the outcome of this study suggests a high degree of variability across statistical methods. Despite large differences in correct classification rates among the statistical methods, no single statistical approach emerged as superior. Thresholds failed to consistently increase rates of correct classification and improvement was often associated with substantial effective sample size reduction. Recommendations are provided to aid in selecting appropriate analyses for these types of data.

  3. Assessing LULC changes over Chilika Lake watershed in Eastern India using Driving Force Analysis

    NASA Astrophysics Data System (ADS)

    Jadav, S.; Syed, T. H.

    2017-12-01

    Rapid population growth and industrial development have brought about significant changes in the Land Use Land Cover (LULC) of many developing countries. This study investigates LULC changes in the Chilika Lake watershed of Eastern India for the period 1988 to 2016. The methodology involves pre-processing and classification of Landsat satellite images using the support vector machine (SVM) supervised classification algorithm. Results reveal that `Cropland', `Emergent Vegetation' and `Settlement' have expanded over the study period by 284.61 km², 106.83 km² and 98.83 km², respectively. Contemporaneously, `Lake Area', `Vegetation' and `Scrub Land' have decreased by 121.62 km², 96.05 km² and 80.29 km², respectively. This study also analyzes five major socio-economic and climatological driving-force variables of LULC change through a bivariate logistic regression model. The model yields a credible relative operating characteristic (ROC) value of 0.76, indicating a good fit of the logistic regression. Distance to the drainage network and average annual rainfall have negative regression coefficients, indicating a decreased rate of LULC change, whereas population density, distance to road, and distance to railway have positive coefficients, indicating an increased rate of LULC change. Results from this study will be crucial for planning and restoration of this vital lake water body, which has major implications for society and the environment at large.
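
    A small illustration of the driving-force step, assuming a pixel-level table of change/no-change labels and the five driver variables named in the abstract; the data below are synthetic placeholders and scikit-learn's logistic regression stands in for the study's bivariate model.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import StandardScaler

# Hypothetical pixel-level table: 1 = LULC changed, 0 = unchanged, plus driver variables
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "changed":          rng.integers(0, 2, 5000),
    "pop_density":      rng.gamma(2.0, 50.0, 5000),
    "dist_to_road":     rng.exponential(500.0, 5000),
    "dist_to_railway":  rng.exponential(2000.0, 5000),
    "dist_to_drainage": rng.exponential(300.0, 5000),
    "annual_rainfall":  rng.normal(1400.0, 150.0, 5000),
})

X = StandardScaler().fit_transform(df.drop(columns="changed").values)
y = df["changed"].values

model = LogisticRegression(max_iter=1000).fit(X, y)
print("ROC value:", round(roc_auc_score(y, model.predict_proba(X)[:, 1]), 2))
# coefficient signs indicate whether a driver increases or decreases change probability
print(dict(zip(df.columns[1:], model.coef_[0].round(3))))
```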

  4. 26 CFR 48.4071-2 - Determination of weight.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... EXCISE TAXES MANUFACTURERS AND RETAILERS EXCISE TAXES Motor Vehicles, Tires, Tubes, Tread Rubber, and... each type, size, grade, and classification. The average weights must be established in accordance with...

  5. Comparison of Feature Selection Techniques in Machine Learning for Anatomical Brain MRI in Dementia.

    PubMed

    Tohka, Jussi; Moradi, Elaheh; Huttunen, Heikki

    2016-07-01

    We present a comparative split-half resampling analysis of various data driven feature selection and classification methods for the whole brain voxel-based classification analysis of anatomical magnetic resonance images. We compared support vector machines (SVMs), with or without filter based feature selection, several embedded feature selection methods and stability selection. While comparisons of the accuracy of various classification methods have been reported previously, the variability of the out-of-training sample classification accuracy and the set of selected features due to independent training and test sets have not been previously addressed in a brain imaging context. We studied two classification problems: 1) Alzheimer's disease (AD) vs. normal control (NC) and 2) mild cognitive impairment (MCI) vs. NC classification. In AD vs. NC classification, the variability in the test accuracy due to the subject sample did not vary between different methods and exceeded the variability due to different classifiers. In MCI vs. NC classification, particularly with a large training set, embedded feature selection methods outperformed SVM-based ones with the difference in the test accuracy exceeding the test accuracy variability due to the subject sample. The filter and embedded methods produced divergent feature patterns for MCI vs. NC classification that suggests the utility of the embedded feature selection for this problem when linked with the good generalization performance. The stability of the feature sets was strongly correlated with the number of features selected, weakly correlated with the stability of classification accuracy, and uncorrelated with the average classification accuracy.
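
    One way to quantify the feature-set stability discussed above is a split-half resampling loop with pairwise Jaccard overlap of the selected features; the sketch below uses a simple univariate filter on synthetic data as a stand-in for the paper's SVM-based and embedded selectors.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import StratifiedShuffleSplit

X, y = make_classification(n_samples=200, n_features=500, n_informative=20, random_state=0)

k = 50
splitter = StratifiedShuffleSplit(n_splits=20, train_size=0.5, random_state=0)
selected = []
for idx, _ in splitter.split(X, y):
    sel = SelectKBest(f_classif, k=k).fit(X[idx], y[idx])
    selected.append(set(np.flatnonzero(sel.get_support())))

# pairwise Jaccard overlap of the selected feature sets across resamples
jaccards = [len(a & b) / len(a | b)
            for i, a in enumerate(selected) for b in selected[i + 1:]]
print("mean feature-set stability (Jaccard): %.2f" % np.mean(jaccards))
```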

  6. Galaxy Zoo: quantitative visual morphological classifications for 48 000 galaxies from CANDELS

    NASA Astrophysics Data System (ADS)

    Simmons, B. D.; Lintott, Chris; Willett, Kyle W.; Masters, Karen L.; Kartaltepe, Jeyhan S.; Häußler, Boris; Kaviraj, Sugata; Krawczyk, Coleman; Kruk, S. J.; McIntosh, Daniel H.; Smethurst, R. J.; Nichol, Robert C.; Scarlata, Claudia; Schawinski, Kevin; Conselice, Christopher J.; Almaini, Omar; Ferguson, Henry C.; Fortson, Lucy; Hartley, William; Kocevski, Dale; Koekemoer, Anton M.; Mortlock, Alice; Newman, Jeffrey A.; Bamford, Steven P.; Grogin, N. A.; Lucas, Ray A.; Hathi, Nimish P.; McGrath, Elizabeth; Peth, Michael; Pforr, Janine; Rizer, Zachary; Wuyts, Stijn; Barro, Guillermo; Bell, Eric F.; Castellano, Marco; Dahlen, Tomas; Dekel, Avishai; Ownsworth, Jamie; Faber, Sandra M.; Finkelstein, Steven L.; Fontana, Adriano; Galametz, Audrey; Grützbauch, Ruth; Koo, David; Lotz, Jennifer; Mobasher, Bahram; Mozena, Mark; Salvato, Mara; Wiklind, Tommy

    2017-02-01

    We present quantified visual morphologies of approximately 48 000 galaxies observed in three Hubble Space Telescope legacy fields by the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS) and classified by participants in the Galaxy Zoo project. 90 per cent of galaxies have z ≤ 3 and are observed in rest-frame optical wavelengths by CANDELS. Each galaxy received an average of 40 independent classifications, which we combine into detailed morphological information on galaxy features such as clumpiness, bar instabilities, spiral structure, and merger and tidal signatures. We apply a consensus-based classifier weighting method that preserves classifier independence while effectively down-weighting significantly outlying classifications. After analysing the effect of varying image depth on reported classifications, we also provide depth-corrected classifications which both preserve the information in the deepest observations and also enable the use of classifications at comparable depths across the full survey. Comparing the Galaxy Zoo classifications to previous classifications of the same galaxies shows very good agreement; for some applications, the high number of independent classifications provided by Galaxy Zoo provides an advantage in selecting galaxies with a particular morphological profile, while in others the combination of Galaxy Zoo with other classifications is a more promising approach than using any one method alone. We combine the Galaxy Zoo classifications of `smooth' galaxies with parametric morphologies to select a sample of featureless discs at 1 ≤ z ≤ 3, which may represent a dynamically warmer progenitor population to the settled disc galaxies seen at later epochs.

  7. Statistical sensor fusion of ECG data using automotive-grade sensors

    NASA Astrophysics Data System (ADS)

    Koenig, A.; Rehg, T.; Rasshofer, R.

    2015-11-01

    Driver states such as fatigue, stress, aggression, distraction or even medical emergencies continue to cause severe driving mistakes and promote accidents. A pathway towards improving driver state assessment can be found in psycho-physiological measures that directly quantify the driver's state from physiological recordings. Although heart rate is a well-established physiological variable that reflects cognitive stress, obtaining heart rate contactlessly and reliably is a challenging task in an automotive environment. Our aim was to investigate how sensory fusion of two automotive-grade sensors would influence the accuracy of automatic classification of cognitive stress levels. We induced cognitive stress in subjects and estimated stress levels from their heart rate signals, acquired from automotive-ready ECG sensors. Using signal quality indices and Kalman filters, we were able to decrease the root mean squared error (RMSE) of heart rate recordings by 10 beats per minute. We then trained a neural network to classify the cognitive workload state of subjects from heart rate and compared classification performance for the ground truth, the individual sensors and the fused heart rate signal. We obtained 5% higher correct classification by fusing the signals compared with the individual sensors, staying only 4% below the maximum classification accuracy achievable from the ground truth. These results are a first step towards real-world applications of psycho-physiological measurements in vehicle settings. Future implementations of driver state modeling will be able to draw from a larger pool of data sources, such as additional physiological values or vehicle-related data, which can be expected to drive classification to significantly higher values.
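
    A minimal sketch of the fusion step, assuming each sensor's measurement variance is available per sample (e.g., from a signal quality index) and that heart rate follows a random-walk model; the `fuse_heart_rate` helper and all numeric settings are illustrative, not the authors' filter design.

```python
import numpy as np

def fuse_heart_rate(z1, z2, q1, q2, process_var=0.5):
    """Fuse two noisy heart-rate streams (beats/min) with a 1-D Kalman filter.
    q1, q2: per-sample measurement variances, e.g. derived from signal-quality indices."""
    hr_est = np.empty_like(z1, dtype=float)
    x, p = z1[0], 10.0                      # initial state and variance
    for k in range(len(z1)):
        p += process_var                    # predict (random-walk heart-rate model)
        for z, r in ((z1[k], q1[k]), (z2[k], q2[k])):
            gain = p / (p + r)              # update with each sensor in turn
            x += gain * (z - x)
            p *= (1.0 - gain)
        hr_est[k] = x
    return hr_est

# synthetic example: true heart rate drifting around 70 bpm, two sensors of unequal quality
rng = np.random.default_rng(0)
true_hr = 70 + np.cumsum(rng.normal(0, 0.2, 300))
s1 = true_hr + rng.normal(0, 5.0, 300)      # noisier sensor
s2 = true_hr + rng.normal(0, 2.0, 300)
fused = fuse_heart_rate(s1, s2, np.full(300, 25.0), np.full(300, 4.0))
print("RMSE s1 %.2f  s2 %.2f  fused %.2f" % tuple(
    np.sqrt(np.mean((x - true_hr) ** 2)) for x in (s1, s2, fused)))
```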

  8. 29 CFR 697.2 - Industry wage rates and effective dates.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... of goods for commerce, as these terms are defined in section 3 of the Fair Labor Standards Act of... classifications in which such employee is engaged. Industry Minimum wage Effective October 3, 2005 Effective October...) Classification A 4.09 4.09 4.09 (2) Classification B 3.92 3.92 3.92 (3) Classification C 3.88 3.88 3.88 (e...

  9. Classification Accuracy and Acceptability of the Integrated Screening and Intervention System Teacher Rating Form

    ERIC Educational Resources Information Center

    Daniels, Brian; Volpe, Robert J.; Fabiano, Gregory A.; Briesch, Amy M.

    2017-01-01

    This study examines the classification accuracy and teacher acceptability of a problem-focused screener for academic and disruptive behavior problems, which is directly linked to evidence-based intervention. Participants included 39 classroom teachers from 2 public school districts in the Northeastern United States. Teacher ratings were obtained…

  10. An Examination of the Changing Rates of Autism in Special Education

    ERIC Educational Resources Information Center

    Brock, Stephen E.

    2006-01-01

    Using U.S. Department of Education data, the current study examined changes in the rates of special education eligibility classifications. This was done to determine if classification substitution might be an explanation for increases in the number of students being found eligible for special education using the Autism criteria. Results reveal…

  11. Classification and Sequential Pattern Analysis for Improving Managerial Efficiency and Providing Better Medical Service in Public Healthcare Centers

    PubMed Central

    Chung, Sukhoon; Rhee, Hyunsill; Suh, Yongmoo

    2010-01-01

    Objectives This study sought to find answers to the following questions: 1) Can we predict whether a patient will revisit a healthcare center? 2) Can we anticipate the diseases of patients who revisit the center? Methods For the first question, we applied 5 classification algorithms (decision tree, artificial neural network, logistic regression, Bayesian networks, and Naïve Bayes) and the stacking-bagging method for building classification models. To address the second question, we performed sequential pattern analysis. Results We determined that: 1) In general, the most influential variables affecting whether a patient of a public healthcare center will revisit it are personal burden, insurance bill, period of prescription, age, systolic pressure, name of disease, and postal code. 2) The best plain classification model depends on the dataset. 3) Based on average classification accuracy, the proposed stacking-bagging method outperformed all traditional classification models, and our sequential pattern analysis revealed 16 sequential patterns. Conclusions Classification models and sequential patterns can help public healthcare centers plan and implement healthcare service programs and businesses that are more appropriate to local residents, encouraging them to revisit public health centers. PMID:21818426
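
    A rough approximation of the stacking-bagging idea on placeholder data: the listed base learners are stacked under a logistic-regression meta-learner, and the stacked model is then bagged; the exact composition and tuning used in the paper are not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Placeholder data standing in for the health-care-center revisit records
X, y = make_classification(n_samples=600, n_features=15, random_state=0)

base_learners = [
    ("tree", DecisionTreeClassifier(max_depth=5)),
    ("nb", GaussianNB()),
    ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000)),
    ("logit", LogisticRegression(max_iter=1000)),
]
# stack the base learners, then bag the stacked model for extra variance reduction
stacked = StackingClassifier(estimators=base_learners,
                             final_estimator=LogisticRegression(max_iter=1000))
stack_bag = BaggingClassifier(stacked, n_estimators=5, random_state=0)

print("stacking+bagging accuracy: %.3f" % cross_val_score(stack_bag, X, y, cv=5).mean())
```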

  12. Core courses in public health laboratory science and practice: findings from 2006 and 2011 surveys.

    PubMed

    DeBoy, John M; Beck, Angela J; Boulton, Matthew L; Kim, Deborah H; Wichman, Michael D; Luedtke, Patrick F

    2013-01-01

    We identified academic training courses or topics most important to the careers of U.S. public health, environmental, and agricultural laboratory (PHEAL) scientist-managers and directors, and determined what portions of the national PHEAL workforce completed these courses. We conducted electronic national surveys in 2006 and 2011, and analyzed data using numerical ranking, Chi-square tests comparing rates, and Spearman's formula measuring rank correlation. In 2006, 40 of 50 PHEAL directors identified 56 course topics as either important, useful, or not needed for someone in their position. These course topics were then ranked to provide a list of 31 core courses. In 2011, 1,659 of approximately 5,555 PHEAL scientific and technical staff, using a subset of 25 core courses, evidenced higher core course completion rates associated with higher-level job classification, advanced academic degree, and age. The 2011 survey showed that 287 PHEAL scientist-managers and directors, on average, completed 37.7% (n=5/13) of leadership/managerial core courses and 51.7% (n=6/12) of scientific core courses. For 1,659 laboratorians in all scientific and technical classifications, core-subject completion rates were higher in local laboratories (42.8%, n=11/25) than in state (36.0%, n=9/25), federal (34.4%, n=9/25), and university (31.2%, n=8/25) laboratories. There is a definable range of scientific, leadership, and managerial core courses needed by PHEAL scientist-managers and directors to function effectively in their positions. Potential PHEAL scientist-managers and directors need greater and continuing access to these courses, and academic and practice entities supporting development of this workforce should adopt curricula and core competencies aligned with these course topics.

  13. Core Courses in Public Health Laboratory Science and Practice: Findings from 2006 and 2011 Surveys

    PubMed Central

    Beck, Angela J.; Boulton, Matthew L.; Kim, Deborah H.; Wichman, Michael D.; Luedtke, Patrick F.

    2013-01-01

    Objectives We identified academic training courses or topics most important to the careers of U.S. public health, environmental, and agricultural laboratory (PHEAL) scientist-managers and directors, and determined what portions of the national PHEAL workforce completed these courses. Methods We conducted electronic national surveys in 2006 and 2011, and analyzed data using numerical ranking, Chi-square tests comparing rates, and Spearman's formula measuring rank correlation. Results In 2006, 40 of 50 PHEAL directors identified 56 course topics as either important, useful, or not needed for someone in their position. These course topics were then ranked to provide a list of 31 core courses. In 2011, 1,659 of approximately 5,555 PHEAL scientific and technical staff, using a subset of 25 core courses, evidenced higher core course completion rates associated with higher-level job classification, advanced academic degree, and age. The 2011 survey showed that 287 PHEAL scientist-managers and directors, on average, completed 37.7% (n=5/13) of leadership/managerial core courses and 51.7% (n=6/12) of scientific core courses. For 1,659 laboratorians in all scientific and technical classifications, core-subject completion rates were higher in local laboratories (42.8%, n=11/25) than in state (36.0%, n=9/25), federal (34.4%, n=9/25), and university (31.2%, n=8/25) laboratories. Conclusions There is a definable range of scientific, leadership, and managerial core courses needed by PHEAL scientist-managers and directors to function effectively in their positions. Potential PHEAL scientist-managers and directors need greater and continuing access to these courses, and academic and practice entities supporting development of this workforce should adopt curricula and core competencies aligned with these course topics. PMID:23997310

  14. Use of feature extraction techniques for the texture and context information in ERTS imagery: Spectral and textural processing of ERTS imagery. [classification of Kansas land use

    NASA Technical Reports Server (NTRS)

    Haralick, R. H. (Principal Investigator); Bosley, R. J.

    1974-01-01

    The author has identified the following significant results. A procedure was developed to extract cross-band textural features from ERTS MSS imagery. Evolving from a single image texture extraction procedure which uses spatial dependence matrices to measure relative co-occurrence of nearest neighbor grey tones, the cross-band texture procedure uses the distribution of neighboring grey tone N-tuple differences to measure the spatial interrelationships, or co-occurrences, of the grey tone N-tuples present in a texture pattern. In both procedures, texture is characterized in such a way as to be invariant under linear grey tone transformations. However, the cross-band procedure complements the single image procedure by extracting texture information and spectral information contained in ERTS multi-images. Classification experiments show that when used alone, without spectral processing, the cross-band texture procedure extracts more information than the single image texture analysis. Results show an improvement in average correct classification from 86.2% to 88.8% for ERTS image no. 1021-16333 with the cross-band texture procedure. However, when used together with spectral features, the single image texture plus spectral features perform better than the cross-band texture plus spectral features, with an average correct classification of 93.8% and 91.6%, respectively.
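
    For orientation, the sketch below computes a single-band grey-tone spatial-dependence (co-occurrence) matrix and a few classic texture measures from it; the cross-band N-tuple difference extension described in the abstract is not shown, and the quantization level and offset are arbitrary.

```python
import numpy as np

def cooccurrence_matrix(img, dx=1, dy=0, levels=16):
    """Grey-tone spatial-dependence (co-occurrence) matrix for one pixel offset."""
    q = (img.astype(float) / (img.max() + 1e-12) * (levels - 1)).astype(int)
    P = np.zeros((levels, levels))
    rows, cols = q.shape
    for r in range(rows - dy):
        for c in range(cols - dx):
            P[q[r, c], q[r + dy, c + dx]] += 1
    P += P.T                       # make symmetric (count both directions)
    return P / P.sum()

def texture_features(P):
    """A few classic texture measures derived from the co-occurrence matrix."""
    i, j = np.indices(P.shape)
    contrast = np.sum((i - j) ** 2 * P)
    energy = np.sum(P ** 2)
    homogeneity = np.sum(P / (1.0 + np.abs(i - j)))
    return np.array([contrast, energy, homogeneity])

band = np.random.randint(0, 256, size=(64, 64))   # stand-in for one MSS band
print(texture_features(cooccurrence_matrix(band)))
```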

  15. Classifier ensemble construction with rotation forest to improve medical diagnosis performance of machine learning algorithms.

    PubMed

    Ozcift, Akin; Gulten, Arif

    2011-12-01

    Improving the accuracy of machine learning algorithms is vital in designing high-performance computer-aided diagnosis (CADx) systems. Research has shown that a base classifier's performance might be enhanced by ensemble classification strategies. In this study, we construct rotation forest (RF) ensemble classifiers of 30 machine learning algorithms to evaluate their classification performance on Parkinson's, diabetes and heart disease datasets from the literature. In the experiments, the feature dimension of the three datasets is first reduced using the correlation-based feature selection (CFS) algorithm. Second, the classification performance of the 30 machine learning algorithms is calculated for the three datasets. Third, 30 classifier ensembles are constructed based on the RF algorithm to assess the performance of the respective classifiers on the same disease data. All experiments are carried out with a leave-one-out validation strategy, and the performance of the 60 algorithms is evaluated using three metrics: classification accuracy (ACC), kappa error (KE) and area under the receiver operating characteristic (ROC) curve (AUC). Base classifiers achieved average accuracies of 72.15%, 77.52% and 84.43% for the diabetes, heart and Parkinson's datasets, respectively. The RF classifier ensembles produced average accuracies of 74.47%, 80.49% and 87.13% for the respective diseases. RF, a newly proposed classifier ensemble algorithm, might be used to improve the accuracy of miscellaneous machine learning algorithms to design advanced CADx systems. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  16. New KF-PP-SVM classification method for EEG in brain-computer interfaces.

    PubMed

    Yang, Banghua; Han, Zhijun; Zan, Peng; Wang, Qian

    2014-01-01

    Classification methods are a crucial direction in the current study of brain-computer interfaces (BCIs). To improve the classification accuracy for electroencephalogram (EEG) signals, a novel KF-PP-SVM (kernel Fisher, posterior probability, and support vector machine) classification method is developed. Its detailed process entails the use of common spatial patterns to obtain features, based on which the within-class scatter is calculated. The scatter is then added into the kernel function of a radial basis function to construct a new kernel function. This new kernel is integrated into the SVM to obtain a new classification model. Finally, the output of the SVM is calculated based on posterior probability and the final recognition result is obtained. To evaluate the effectiveness of the proposed KF-PP-SVM method, EEG data collected in the laboratory are processed with four different classification schemes (KF-PP-SVM, KF-SVM, PP-SVM, and SVM). The results showed that the overall average improvements arising from the use of the KF-PP-SVM scheme as opposed to the KF-SVM, PP-SVM and SVM schemes are 2.49%, 5.83% and 6.49%, respectively.
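
    One plausible (but not necessarily the authors') reading of "adding the within-class scatter into the RBF kernel" is a Mahalanobis-style RBF weighted by the inverse within-class scatter matrix. The sketch below implements that reading as a custom scikit-learn kernel with Platt-scaled posterior probabilities, and omits the CSP feature-extraction step.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# within-class scatter matrix computed on the training set
Sw = sum(np.cov(X_tr[y_tr == c], rowvar=False) for c in np.unique(y_tr))
Sw_inv = np.linalg.pinv(Sw)

def scatter_rbf(A, B):
    """RBF kernel whose distance is weighted by the inverse within-class scatter."""
    diff = A[:, None, :] - B[None, :, :]
    d2 = np.einsum("ijk,kl,ijl->ij", diff, Sw_inv, diff)
    return np.exp(-0.5 * d2)

clf = SVC(kernel=scatter_rbf, probability=True).fit(X_tr, y_tr)
posterior = clf.predict_proba(X_te)            # posterior probabilities per class
pred = posterior.argmax(axis=1)
print("accuracy: %.3f" % np.mean(pred == y_te))
```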

  17. Towards a consensus on a hearing preservation classification system.

    PubMed

    Skarzynski, Henryk; van de Heyning, P; Agrawal, S; Arauz, S L; Atlas, M; Baumgartner, W; Caversaccio, M; de Bodt, M; Gavilan, J; Godey, B; Green, K; Gstoettner, W; Hagen, R; Han, D M; Kameswaran, M; Karltorp, E; Kompis, M; Kuzovkov, V; Lassaletta, L; Levevre, F; Li, Y; Manikoth, M; Martin, J; Mlynski, R; Mueller, J; O'Driscoll, M; Parnes, L; Prentiss, S; Pulibalathingal, S; Raine, C H; Rajan, G; Rajeswaran, R; Rivas, J A; Rivas, A; Skarzynski, P H; Sprinzl, G; Staecker, H; Stephan, K; Usami, S; Yanov, Y; Zernotti, M E; Zimmermann, K; Lorens, A; Mertens, G

    2013-01-01

    The comprehensive Hearing Preservation classification system presented in this paper is suitable for use with all cochlear implant users with measurable pre-operative residual hearing. If adopted as a universal reporting standard, as it was designed to be, it should prove highly beneficial by enabling future studies to quickly and easily compare the results of previous studies and meta-analyze their data. The aim was to develop a comprehensive Hearing Preservation classification system suitable for all cochlear implant users with measurable pre-operative residual hearing. The HEARRING group discussed and reviewed a number of different proposed HP classification systems and critical appraisals in order to develop a qualitative system in accordance with the prerequisites. The Hearing Preservation Classification System proposed herein fulfills the following necessary criteria: 1) the classification is independent of users' initial hearing; 2) it is appropriate for all cochlear implant users with measurable pre-operative residual hearing; 3) it covers the whole range of pure tone averages from 0 to 120 dB; and 4) it is easy to use and easy to understand.

  18. Variability of undetermined manner of death classification in the US.

    PubMed

    Breiding, M J; Wiersema, B

    2006-12-01

    To better understand variations in classification of deaths of undetermined intent among states in the National Violent Death Reporting System (NVDRS). Data from the NVDRS and the National Vital Statistics System were used to compare differences among states. Percentages of deaths assigned undetermined intent, rates of deaths of undetermined intent, rates of fatal poisonings broken down by cause of death, composition of poison types within the undetermined-intent classification. Three states within NVDRS (Maryland, Massachusetts, and Rhode Island) evidenced increased numbers of deaths of undetermined intent. These same states exhibited high rates of undetermined death and, more specifically, high rates of undetermined poisoning deaths. Further, these three states evidenced correspondingly lower rates of unintentional poisonings. The types of undetermined poisonings present in these states, but not present in other states, are typically the result of a combination of recreational drugs, alcohol, or prescription drugs. The differing classification among states of many poisoning deaths has implications for the analysis of undetermined deaths within the NVDRS and for the examination of possible/probable suicides contained within the undetermined- or accidental-intent classifications. The NVDRS does not collect information on unintentional poisonings, so in most states data are not collected on these possible/probable suicides. The authors believe this is an opportunity missed to understand the full range of self-harm deaths in the greater detail provided by the NVDRS system. They advocate a broader interpretation of suicide to include the full continuum of deaths resulting from self-harm.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Honorio, J.; Goldstein, R.

    We propose a simple, well grounded classification technique which is suited for group classification on brain fMRI data sets that have high dimensionality, a small number of subjects, high noise level, high subject variability, imperfect registration and capture subtle cognitive effects. We propose threshold-split region as a new feature selection method and majority vote as the classification technique. Our method does not require a predefined set of regions of interest. We use averages across sessions, only one feature per experimental condition, a feature independence assumption, and simple classifiers. The seemingly counter-intuitive approach of using a simple design is supported by signal processing and statistical theory. Experimental results in two block-design data sets that capture brain function under distinct monetary rewards for cocaine-addicted and control subjects show that our method exhibits increased generalization accuracy compared to commonly used feature selection and classification techniques.
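
    A toy sketch of the simple-design idea, per-feature threshold rules combined by majority vote, on synthetic data; `fit_thresholds` and `majority_vote_predict` are hypothetical helpers and do not reproduce the paper's threshold-split region procedure.

```python
import numpy as np

def majority_vote_predict(X, thresholds, polarities):
    """Each feature votes via a simple threshold rule; the class with the most
    votes wins. polarities[j] = +1 means 'above threshold votes for class 1'."""
    votes = (X > thresholds) == (polarities > 0)          # per-feature class-1 votes
    return (votes.mean(axis=1) >= 0.5).astype(int)

def fit_thresholds(X, y):
    """Pick, per feature, the polarity that gives the best training accuracy
    for a median-split threshold rule."""
    thresholds = np.median(X, axis=0)                     # simple split point
    polarities = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        acc_pos = np.mean((X[:, j] > thresholds[j]).astype(int) == y)
        polarities[j] = 1 if acc_pos >= 0.5 else -1
    return thresholds, polarities

# toy stand-in for per-condition, per-region fMRI features (one feature per condition)
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 12)); y = rng.integers(0, 2, 40)
X[y == 1, :4] += 0.8                                      # group effect in a few features
thr, pol = fit_thresholds(X, y)
print("training accuracy:", np.mean(majority_vote_predict(X, thr, pol) == y))
```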

  20. Classification of large-scale fundus image data sets: a cloud-computing framework.

    PubMed

    Roychowdhury, Sohini

    2016-08-01

    Large medical image data sets with high dimensionality require substantial amount of computation time for data creation and data processing. This paper presents a novel generalized method that finds optimal image-based feature sets that reduce computational time complexity while maximizing overall classification accuracy for detection of diabetic retinopathy (DR). First, region-based and pixel-based features are extracted from fundus images for classification of DR lesions and vessel-like structures. Next, feature ranking strategies are used to distinguish the optimal classification feature sets. DR lesion and vessel classification accuracies are computed using the boosted decision tree and decision forest classifiers in the Microsoft Azure Machine Learning Studio platform, respectively. For images from the DIARETDB1 data set, 40 of its highest-ranked features are used to classify four DR lesion types with an average classification accuracy of 90.1% in 792 seconds. Also, for classification of red lesion regions and hemorrhages from microaneurysms, accuracies of 85% and 72% are observed, respectively. For images from STARE data set, 40 high-ranked features can classify minor blood vessels with an accuracy of 83.5% in 326 seconds. Such cloud-based fundus image analysis systems can significantly enhance the borderline classification performances in automated screening systems.
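
    An illustrative stand-in for the rank-then-classify step, using scikit-learn's gradient boosting in place of the Azure ML boosted decision tree: features are ranked by importance on a first fit and the 40 highest-ranked ones feed the final classifier; data and counts are placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# placeholder for region- and pixel-based fundus features (hundreds of candidates)
X, y = make_classification(n_samples=800, n_features=200, n_informative=40, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# rank features with a first boosted-tree fit, then keep the 40 highest-ranked ones
ranker = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
top40 = np.argsort(ranker.feature_importances_)[::-1][:40]

clf = GradientBoostingClassifier(random_state=0).fit(X_tr[:, top40], y_tr)
print("accuracy with 40 ranked features: %.3f" % clf.score(X_te[:, top40], y_te))
```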

  1. Cough event classification by pretrained deep neural network.

    PubMed

    Liu, Jia-Ming; You, Mingyu; Wang, Zheng; Li, Guo-Zheng; Xu, Xianghuai; Qiu, Zhongmin

    2015-01-01

    Cough is an essential symptom in respiratory diseases. For the measurement of cough severity, an accurate and objective cough monitor is desired by the respiratory disease community. This paper aims to introduce a better-performing algorithm, the pretrained deep neural network (DNN), to the cough classification problem, which is a key step in the cough monitor. The deep neural network models are built in two steps, pretraining and fine-tuning, followed by a Hidden Markov Model (HMM) decoder to capture temporal information in the audio signals. By unsupervised pretraining of a deep belief network, a good initialization for a deep neural network is learned. The fine-tuning step is then a back-propagation pass tuning the neural network so that it can predict the observation probability associated with each HMM state, where the HMM states are originally obtained by forced alignment with a Gaussian Mixture Model Hidden Markov Model (GMM-HMM) on the training samples. Three cough HMMs and one non-cough HMM are employed to model coughs and non-coughs, respectively. The final decision is made based on the Viterbi decoding algorithm, which generates the most likely HMM sequence for each sample. A sample is labeled as cough if a cough HMM is found in the sequence. The experiments were conducted on a dataset collected from 22 patients with respiratory diseases. Patient-dependent (PD) and patient-independent (PI) experimental settings were used to evaluate the models. Five criteria, sensitivity, specificity, F1, macro average and micro average, are used to depict different aspects of the models. On the overall evaluation criteria, the DNN-based methods are superior to the traditional GMM-HMM based method on F1 and micro average, with maximal error reductions of 14% and 11% in PD and 7% and 10% in PI, while keeping similar performance on macro average. They also surpass the GMM-HMM model on specificity with a maximal 14% error reduction on both PD and PI. In this paper, we applied a pretrained deep neural network to the cough classification problem. Our results showed that compared with the conventional GMM-HMM framework, the HMM-DNN achieved better overall performance on the cough classification task.
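
    A compact sketch of the decoding stage only: given per-frame state log-likelihoods (which the DNN would supply in the paper's setup), Viterbi decoding recovers the most likely state path, and the sample is labeled a cough if any cough state appears; the two-state topology and all probabilities below are illustrative.

```python
import numpy as np

def viterbi(log_obs, log_trans, log_init):
    """Most likely state path given per-frame state log-likelihoods (T x S),
    a state-transition log matrix (S x S) and initial log probabilities (S,)."""
    T, S = log_obs.shape
    delta = log_init + log_obs[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans          # (from state, to state)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_obs[t]
    path = np.empty(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return path

# two states: 0 = non-cough, 1 = cough; log_obs would come from the DNN's
# per-frame observation probabilities in the paper's setup
rng = np.random.default_rng(0)
log_obs = np.log(rng.dirichlet([1, 1], size=100))
log_trans = np.log(np.array([[0.95, 0.05], [0.10, 0.90]]))
log_init = np.log(np.array([0.9, 0.1]))
path = viterbi(log_obs, log_trans, log_init)
print("labelled as cough event:", bool((path == 1).any()))
```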

  2. Use of data mining techniques to classify soil CO2 emission induced by crop management in sugarcane field.

    PubMed

    Farhate, Camila Viana Vieira; Souza, Zigomar Menezes de; Oliveira, Stanley Robson de Medeiros; Tavares, Rose Luiza Moraes; Carvalho, João Luís Nunes

    2018-01-01

    Soil CO2 emissions are regarded as one of the largest flows of the global carbon cycle, and small changes in their magnitude can have a large effect on the CO2 concentration in the atmosphere. Thus, a better understanding of this attribute would enable the identification of promoters and the development of strategies to mitigate the risks of climate change. Therefore, our study aimed at using data mining techniques to predict the soil CO2 emission induced by crop management in sugarcane areas in Brazil. To do so, we used different variable selection methods (correlation, chi-square, wrapper) and classification methods (decision tree, Bayesian models, neural networks, support vector machine, bagging with logistic regression), and finally we tested the efficiency of the different approaches through the Receiver Operating Characteristic (ROC) curve. The original dataset consisted of 19 variables (18 independent variables and one dependent, or response, variable). The association of cover crops with minimum tillage is an effective strategy to promote the mitigation of soil CO2 emissions, with average CO2 emissions of 63 kg ha-1 day-1. The variables soil moisture, soil temperature (Ts), rainfall, pH, and organic carbon were most frequently selected for soil CO2 emission classification across the different attribute selection methods. According to the results of the ROC curve, the best approaches for soil CO2 emission classification were: (I) the Multilayer Perceptron classifier with attribute selection through the wrapper method, which presented a false-positive rate of 13.50%, a true-positive rate of 94.20%, and an area under the curve (AUC) of 89.90%; and (II) the Bagging classifier with logistic regression with attribute selection through the chi-square method, which presented a false-positive rate of 13.50%, a true-positive rate of 94.20%, and an AUC of 89.90%. However, approach (I) stands out in relation to (II) for its higher positive-class accuracy (high CO2 emission) and lower computational cost.
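
    A rough reconstruction of the comparison on synthetic data, with the wrapper selection step replaced by a chi-square filter for brevity; classifier settings and feature counts are assumptions, so the numbers will not match the paper's.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

# placeholder data standing in for the 18 soil/management predictors
X, y = make_classification(n_samples=600, n_features=18, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pipelines = {
    "MLP + chi2": make_pipeline(
        MinMaxScaler(), SelectKBest(chi2, k=8),
        MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000)),
    "Bagging(LogReg) + chi2": make_pipeline(
        MinMaxScaler(), SelectKBest(chi2, k=8),
        BaggingClassifier(LogisticRegression(max_iter=1000), n_estimators=25)),
}
for name, pipe in pipelines.items():
    prob = pipe.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    print(f"{name}: AUC = {roc_auc_score(y_te, prob):.3f}")
```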

  3. Use of data mining techniques to classify soil CO2 emission induced by crop management in sugarcane field

    PubMed Central

    de Souza, Zigomar Menezes; Oliveira, Stanley Robson de Medeiros; Tavares, Rose Luiza Moraes; Carvalho, João Luís Nunes

    2018-01-01

    Soil CO2 emissions are regarded as one of the largest flows of the global carbon cycle, and small changes in their magnitude can have a large effect on the CO2 concentration in the atmosphere. Thus, a better understanding of this attribute would enable the identification of promoters and the development of strategies to mitigate the risks of climate change. Therefore, our study aimed at using data mining techniques to predict the soil CO2 emission induced by crop management in sugarcane areas in Brazil. To do so, we used different variable selection methods (correlation, chi-square, wrapper) and classification methods (decision tree, Bayesian models, neural networks, support vector machine, bagging with logistic regression), and finally we tested the efficiency of the different approaches through the Receiver Operating Characteristic (ROC) curve. The original dataset consisted of 19 variables (18 independent variables and one dependent, or response, variable). The association of cover crops with minimum tillage is an effective strategy to promote the mitigation of soil CO2 emissions, with average CO2 emissions of 63 kg ha-1 day-1. The variables soil moisture, soil temperature (Ts), rainfall, pH, and organic carbon were most frequently selected for soil CO2 emission classification across the different attribute selection methods. According to the results of the ROC curve, the best approaches for soil CO2 emission classification were: (I) the Multilayer Perceptron classifier with attribute selection through the wrapper method, which presented a false-positive rate of 13.50%, a true-positive rate of 94.20%, and an area under the curve (AUC) of 89.90%; and (II) the Bagging classifier with logistic regression with attribute selection through the chi-square method, which presented a false-positive rate of 13.50%, a true-positive rate of 94.20%, and an AUC of 89.90%. However, approach (I) stands out in relation to (II) for its higher positive-class accuracy (high CO2 emission) and lower computational cost. PMID:29513765

  4. Body build classes as a method for systematization of age-related anthropometric changes in girls aged 7-8 and 17-18 years.

    PubMed

    Kasmel, Jaan; Kaarma, Helje; Koskel, Säde; Tiit, Ene-Margit

    2004-03-01

    A total of 462 schoolgirls aged 7-8 and 17-18 years were examined anthropometrically (45 body measurements and 10 skinfolds) in a cross-sectional study. The data were processed in two age groups: 7-8-year-olds (n = 205) and 17-18-year-olds (n = 257). Relying on average height and weight in the groups, both groups were divided into five body build classes: small, medium, large, pyknomorphous and leptomorphous. Within these classes, the differences in all other body measurements were compared, and in both age groups analogous systematic differences were found in length, width and depth measurements and circumferences. This enabled us to compare proportional changes in body measurements over ten years, using ratios of the averages of basic measurements and measurement groups within the same body build classes. Statistical analysis by the sign test revealed statistically significant differences between various body build classes in the growth of the averages. Girls belonging to the small class differed from girls of the large class by an essentially greater increase in their measurements. Our results suggest that the growth rate of body measurements of girls with different body builds can be studied with the help of body build classification.

  5. 40 CFR 86.085-20 - Incomplete vehicles, classification.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., classification. (a) An incomplete truck less than 8,500 pounds gross vehicle weight rating shall be classified by... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Incomplete vehicles, classification... PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES General...

  6. The relationship between twelve-month home stimulation and school achievement.

    PubMed

    van Doorninck, W J; Caldwell, B M; Wright, C; Frankenburg, W K

    1981-09-01

    Home Observation for Measurement of the Environment (HOME) was designed to reflect parental support of early cognitive and socioemotional development. 12-month HOME scores were correlated with elementary school achievement, 5--9 years later. 50 low-income children were rank ordered by a weighted average of centile estimates of achievement test scores, letter grades, and curriculum levels in reading and math. 24 children were classified as having significant school achievement problems. The HOME total score correlated significantly, r = .37, with school centile scores among the low-income families. The statistically more appropriate contingency table analysis revealed a 68% correct classification rate and a significantly reduced error rate over random or blanket prediction. The results supported the predictive value of the 12-month HOME for school achievement among low-income families. In an additional sample of 21 middle-income families, there was insufficient variability among HOME scores to allow prediction. The HOME total scores were highly correlated, r = .86, among siblings tested at least 10 months apart.

  7. Trends in Clinical Diagnoses of Rocky Mountain Spotted Fever among American Indians, 2001–2008

    PubMed Central

    Folkema, Arianne M.; Holman, Robert C.; McQuiston, Jennifer H.; Cheek, James E.

    2012-01-01

    American Indians are at greater risk for Rocky Mountain spotted fever (RMSF) than the general U.S. population. The epidemiology of RMSF among American Indians was examined by using Indian Health Service inpatient and outpatient records with an RMSF International Classification of Diseases, Ninth Revision, Clinical Modification diagnosis. For 2001–2008, 958 American Indian patients with clinical diagnoses of RMSF were reported. The average annual RMSF incidence was 94.6 per 1,000,000 persons, with a significant increasing incidence trend from 24.2 in 2001 to 139.4 in 2008 (P = 0.006). Most (89%) RMSF hospital visits occurred in the Southern Plains and Southwest regions, where the average annual incidence rates were 277.2 and 49.4, respectively. Only the Southwest region had a significant increasing incidence trend (P = 0.005), likely linked to the emergence of brown dog ticks as an RMSF vector in eastern Arizona. It is important to continue monitoring RMSF infection to inform public health interventions that target RMSF reduction in high-risk populations. PMID:22232466

  8. Trends in clinical diagnoses of Rocky Mountain spotted fever among American Indians, 2001-2008.

    PubMed

    Folkema, Arianne M; Holman, Robert C; McQuiston, Jennifer H; Cheek, James E

    2012-01-01

    American Indians are at greater risk for Rocky Mountain spotted fever (RMSF) than the general U.S. population. The epidemiology of RMSF among American Indians was examined by using Indian Health Service inpatient and outpatient records with an RMSF International Classification of Diseases, Ninth Revision, Clinical Modification diagnosis. For 2001-2008, 958 American Indian patients with clinical diagnoses of RMSF were reported. The average annual RMSF incidence was 94.6 per 1,000,000 persons, with a significant increasing incidence trend from 24.2 in 2001 to 139.4 in 2008 (P = 0.006). Most (89%) RMSF hospital visits occurred in the Southern Plains and Southwest regions, where the average annual incidence rates were 277.2 and 49.4, respectively. Only the Southwest region had a significant increasing incidence trend (P = 0.005), likely linked to the emergence of brown dog ticks as an RMSF vector in eastern Arizona. It is important to continue monitoring RMSF infection to inform public health interventions that target RMSF reduction in high-risk populations.

  9. Increasing Intelligence in Inter-Vehicle Communications to Reduce Traffic Congestions: Experiments in Urban and Highway Environments.

    PubMed

    Meneguette, Rodolfo I; Filho, Geraldo P R; Guidoni, Daniel L; Pessin, Gustavo; Villas, Leandro A; Ueyama, Jó

    2016-01-01

    Intelligent Transportation Systems (ITS) rely on Inter-Vehicle Communication (IVC) to streamline the operation of vehicles by managing vehicle traffic, assisting drivers with safety and sharing information, as well as providing appropriate services for passengers. Traffic congestion is an urban mobility problem, which causes stress to drivers and economic losses. In this context, this work proposes a solution for the detection, dissemination and control of congested roads based on inter-vehicle communication, called INCIDEnT. The main goal of the proposed solution is to reduce the average trip time, CO emissions and fuel consumption by allowing motorists to avoid congested roads. The simulation results show that our proposed solution leads to short delays and a low overhead. Moreover, it is efficient with regard to the coverage of the event and the distance to which the information can be propagated. The findings of the investigation show that the proposed solution leads to (i) high hit rate in the classification of the level of congestion, (ii) a reduction in average trip time, (iii) a reduction in fuel consumption, and (iv) reduced CO emissions.

  10. Increasing Intelligence in Inter-Vehicle Communications to Reduce Traffic Congestions: Experiments in Urban and Highway Environments

    PubMed Central

    Filho, Geraldo P. R.; Guidoni, Daniel L.; Pessin, Gustavo; Villas, Leandro A.; Ueyama, Jó

    2016-01-01

    Intelligent Transportation Systems (ITS) rely on Inter-Vehicle Communication (IVC) to streamline the operation of vehicles by managing vehicle traffic, assisting drivers with safety and sharing information, as well as providing appropriate services for passengers. Traffic congestion is an urban mobility problem, which causes stress to drivers and economic losses. In this context, this work proposes a solution for the detection, dissemination and control of congested roads based on inter-vehicle communication, called INCIDEnT. The main goal of the proposed solution is to reduce the average trip time, CO emissions and fuel consumption by allowing motorists to avoid congested roads. The simulation results show that our proposed solution leads to short delays and a low overhead. Moreover, it is efficient with regard to the coverage of the event and the distance to which the information can be propagated. The findings of the investigation show that the proposed solution leads to (i) high hit rate in the classification of the level of congestion, (ii) a reduction in average trip time, (iii) a reduction in fuel consumption, and (iv) reduced CO emissions. PMID:27526048

  11. Using recorded sound spectra profile as input data for real-time short-term urban road-traffic-flow estimation.

    PubMed

    Torija, Antonio J; Ruiz, Diego P

    2012-10-01

    Road traffic has a heavy impact on the urban sound environment, constituting the main source of noise and widely dominating its spectral composition. In this context, our research investigates the use of recorded sound spectra as input data for the development of real-time short-term road traffic flow estimation models. For this, a series of models based on the use of Multilayer Perceptron Neural Networks, multiple linear regression, and the Fisher linear discriminant were implemented to estimate road traffic flow as well as to classify it according to the composition of heavy vehicles and motorcycles/mopeds. In view of the results, the use of the 50-400 Hz and 1-2.5 kHz frequency ranges as input variables in multilayer perceptron-based models successfully estimated urban road traffic flow with an average percentage of explained variance equal to 86%, while the classification of the urban road traffic flow gave an average success rate of 96.1%.

  12. A Method for Application of Classification Tree Models to Map Aquatic Vegetation Using Remotely Sensed Images from Different Sensors and Dates

    PubMed Central

    Jiang, Hao; Zhao, Dehua; Cai, Ying; An, Shuqing

    2012-01-01

    In previous attempts to identify aquatic vegetation from remotely-sensed images using classification trees (CT), the images used to apply CT models to different times or locations necessarily originated from the same satellite sensor as that from which the original images used in model development came, greatly limiting the application of CT. We have developed an effective normalization method to improve the robustness of CT models when applied to images originating from different sensors and dates. A total of 965 ground-truth samples of aquatic vegetation types were obtained in 2009 and 2010 in Taihu Lake, China. Using relevant spectral indices (SI) as classifiers, we manually developed a stable CT model structure and then applied a standard CT algorithm to obtain quantitative (optimal) thresholds from 2009 ground-truth data and images from Landsat7-ETM+, HJ-1B-CCD, Landsat5-TM and ALOS-AVNIR-2 sensors. Optimal CT thresholds produced average classification accuracies of 78.1%, 84.7% and 74.0% for emergent vegetation, floating-leaf vegetation and submerged vegetation, respectively. However, the optimal CT thresholds for different sensor images differed from each other, with an average relative variation (RV) of 6.40%. We developed and evaluated three new approaches to normalizing the images. The best-performing method (Method of 0.1% index scaling) normalized the SI images using tailored percentages of extreme pixel values. Using the images normalized by Method of 0.1% index scaling, CT models for a particular sensor in which thresholds were replaced by those from the models developed for images originating from other sensors provided average classification accuracies of 76.0%, 82.8% and 68.9% for emergent vegetation, floating-leaf vegetation and submerged vegetation, respectively. Applying the CT models developed for normalized 2009 images to 2010 images resulted in high classification (78.0%–93.3%) and overall (92.0%–93.1%) accuracies. Our results suggest that Method of 0.1% index scaling provides a feasible way to apply CT models directly to images from sensors or time periods that differ from those of the images used to develop the original models.
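
    One plausible reading of the best-performing normalization ("Method of 0.1% index scaling") is a rescaling of each spectral-index image by its extreme percentiles; the sketch below encodes that assumption and is not the authors' implementation.

      import numpy as np

      def index_scale(si_image, tail_pct=0.1):
          # Rescale a spectral-index image to [0, 1] using its lower and
          # upper tail percentiles (assumed interpretation of the paper's
          # "0.1% index scaling").
          lo = np.percentile(si_image, tail_pct)
          hi = np.percentile(si_image, 100.0 - tail_pct)
          clipped = np.clip(si_image, lo, hi)
          return (clipped - lo) / (hi - lo)

      # Example on a synthetic NDVI-like index image
      si = np.random.uniform(-0.2, 0.9, size=(100, 100))
      scaled = index_scale(si)
      print(scaled.min(), scaled.max())   # 0.0 ... 1.0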

  13. EEG Sleep Stages Classification Based on Time Domain Features and Structural Graph Similarity.

    PubMed

    Diykh, Mohammed; Li, Yan; Wen, Peng

    2016-11-01

    Electroencephalogram (EEG) signals are commonly used in diagnosing and treating sleep disorders. Many existing methods for sleep stage classification mainly depend on the analysis of EEG signals in the time or frequency domain to obtain a high classification accuracy. In this paper, the statistical features in time domain, the structural graph similarity and the K-means (SGSKM) are combined to identify six sleep stages using single channel EEG signals. Firstly, each EEG segment is partitioned into sub-segments. The size of a sub-segment is determined empirically. Secondly, statistical features are extracted, sorted into different sets of features and forwarded to the SGSKM to classify EEG sleep stages. We have also investigated the relationships between sleep stages and the time domain features of the EEG data used in this paper. The experimental results show that the proposed method yields better classification results than four other existing methods and the support vector machine (SVM) classifier. A 95.93% average classification accuracy is achieved by using the proposed method.

  14. Improved opponent color local binary patterns: an effective local image descriptor for color texture classification

    NASA Astrophysics Data System (ADS)

    Bianconi, Francesco; Bello-Cerezo, Raquel; Napoletano, Paolo

    2018-01-01

    Texture classification plays a major role in many computer vision applications. Local binary patterns (LBP) encoding schemes have largely been proven to be very effective for this task. Improved LBP (ILBP) are conceptually simple, easy to implement, and highly effective LBP variants based on a point-to-average thresholding scheme instead of a point-to-point one. We propose the use of this encoding scheme for extracting intra- and interchannel features for color texture classification. We experimentally evaluated the resulting improved opponent color LBP alone and in concatenation with the ILBP of the local color contrast map on a set of image classification tasks over 9 datasets of generic color textures and 11 datasets of biomedical textures. The proposed approach outperformed other grayscale and color LBP variants in nearly all the datasets considered and proved competitive even against image features from last generation convolutional neural networks, particularly for the classification of biomedical images.
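
    The point-to-average thresholding that distinguishes ILBP from classical LBP can be illustrated on a single 3x3 neighbourhood; the function below is a simplified, assumed sketch of that encoding step, not the full descriptor used in the paper.

      import numpy as np

      def ilbp_code(patch3x3):
          # Improved LBP on one 3x3 neighbourhood: threshold all nine
          # pixels (centre included) against the neighbourhood mean,
          # instead of thresholding the eight neighbours against the
          # centre pixel as in classical LBP.
          bits = (patch3x3.ravel() >= patch3x3.mean()).astype(int)
          return int("".join(map(str, bits)), 2)

      patch = np.array([[52, 60, 61],
                        [55, 58, 70],
                        [40, 42, 80]])
      print(ilbp_code(patch))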

  15. Automotive System for Remote Surface Classification.

    PubMed

    Bystrov, Aleksandr; Hoare, Edward; Tran, Thuy-Yung; Clarke, Nigel; Gashinova, Marina; Cherniakov, Mikhail

    2017-04-01

    In this paper we shall discuss a novel approach to road surface recognition, based on the analysis of backscattered microwave and ultrasonic signals. The novelty of our method lies in sonar and polarimetric radar data fusion, the extraction of features for separate swathes of the illuminated surface (segmentation), and the use of a multi-stage artificial neural network for surface classification. The developed system consists of a 24 GHz radar and a 40 kHz ultrasonic sensor. Features are extracted from the backscattered signals, and the procedures of principal component analysis and supervised classification are then applied to the feature data. Special attention is paid to the multi-stage artificial neural network, which allows an overall increase in classification accuracy. The proposed technique was tested for recognition of a large number of real surfaces in different weather conditions, with an average correct classification accuracy of 95%. The obtained results thereby demonstrate that the proposed system architecture and statistical methods allow for reliable discrimination of various road surfaces in real conditions.

  16. Automotive System for Remote Surface Classification

    PubMed Central

    Bystrov, Aleksandr; Hoare, Edward; Tran, Thuy-Yung; Clarke, Nigel; Gashinova, Marina; Cherniakov, Mikhail

    2017-01-01

    In this paper we shall discuss a novel approach to road surface recognition, based on the analysis of backscattered microwave and ultrasonic signals. The novelty of our method lies in sonar and polarimetric radar data fusion, the extraction of features for separate swathes of the illuminated surface (segmentation), and the use of a multi-stage artificial neural network for surface classification. The developed system consists of a 24 GHz radar and a 40 kHz ultrasonic sensor. Features are extracted from the backscattered signals, and the procedures of principal component analysis and supervised classification are then applied to the feature data. Special attention is paid to the multi-stage artificial neural network, which allows an overall increase in classification accuracy. The proposed technique was tested for recognition of a large number of real surfaces in different weather conditions, with an average correct classification accuracy of 95%. The obtained results thereby demonstrate that the proposed system architecture and statistical methods allow for reliable discrimination of various road surfaces in real conditions. PMID:28368297

  17. Pattern Recognition of Momentary Mental Workload Based on Multi-Channel Electrophysiological Data and Ensemble Convolutional Neural Networks.

    PubMed

    Zhang, Jianhua; Li, Sunan; Wang, Rubin

    2017-01-01

    In this paper, we deal with the Mental Workload (MWL) classification problem based on the measured physiological data. First we discussed the optimal depth (i.e., the number of hidden layers) and parameter optimization algorithms for the Convolutional Neural Networks (CNN). The base CNNs designed were tested according to five classification performance indices, namely Accuracy, Precision, F-measure, G-mean, and required training time. Then we developed an Ensemble Convolutional Neural Network (ECNN) to enhance the accuracy and robustness of the individual CNN model. For the ECNN design, three model aggregation approaches (weighted averaging, majority voting and stacking) were examined and a resampling strategy was used to enhance the diversity of individual CNN models. The results of MWL classification performance comparison indicated that the proposed ECNN framework can effectively improve MWL classification performance and is featured by entirely automatic feature extraction and MWL classification, when compared with traditional machine learning methods.
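
    As an illustration of the majority-voting aggregation examined above, the sketch below combines a few generic scikit-learn base models on synthetic data; it stands in for, and does not reproduce, the paper's CNN ensemble.

      # Majority voting over several base models (illustrative data and
      # learners, not the physiological workload data used in the study).
      from sklearn.datasets import make_classification
      from sklearn.ensemble import VotingClassifier, RandomForestClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import cross_val_score

      X, y = make_classification(n_samples=400, n_features=20, random_state=1)
      vote = VotingClassifier(
          estimators=[("lr", LogisticRegression(max_iter=1000)),
                      ("rf", RandomForestClassifier(random_state=1)),
                      ("mlp", MLPClassifier(max_iter=1000, random_state=1))],
          voting="hard")  # majority voting; "soft" would average probabilities
      print(cross_val_score(vote, X, y, cv=5).mean())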

  18. Modelling and Analysis of the Excavation Phase by the Theory of Blocks Method of Tunnel 4 Kherrata Gorge, Algeria

    NASA Astrophysics Data System (ADS)

    Boukarm, Riadh; Houam, Abdelkader; Fredj, Mohammed; Boucif, Rima

    2017-12-01

    The aim of our work is to check stability during tunnel excavation work in the rock mass of Kherrata, connecting the cities of Bejaia and Setif. Characterization through the Q system (Barton's method) and RMR (Bieniawski classification) allowed us to conclude that the quality of the rock mass is average in limestone and poor in fractured limestone. Modelling of the excavation phase using the theory of blocks method (UNWEDGE software), with parameters taken from the recommendations of these classifications, then allowed us to check stability and to conclude that the use of geomechanical classification and the theory of blocks can be considered reliable in preliminary design.

  19. Correlation-based pattern recognition for implantable defibrillators.

    PubMed Central

    Wilkins, J.

    1996-01-01

    An estimated 300,000 Americans die each year from cardiac arrhythmias. Historically, drug therapy or surgery were the only treatment options available for patients suffering from arrhythmias. Recently, implantable arrhythmia management devices have been developed. These devices allow abnormal cardiac rhythms to be sensed and corrected in vivo. Proper arrhythmia classification is critical to selecting the appropriate therapeutic intervention. The classification problem is made more challenging by the power/computation constraints imposed by the short battery life of implantable devices. Current devices utilize heart rate-based classification algorithms. Although easy to implement, rate-based approaches have unacceptably high error rates in distinguishing supraventricular tachycardia (SVT) from ventricular tachycardia (VT). Conventional morphology assessment techniques used in ECG analysis often require too much computation to be practical for implantable devices. In this paper, a computationally-efficient, arrhythmia classification architecture using correlation-based morphology assessment is presented. The architecture classifies individual heart beats by assessing similarity between an incoming cardiac signal vector and a series of prestored class templates. A series of these beat classifications is used to make an overall rhythm assessment. The system makes use of several new results in the field of pattern recognition. The resulting system achieved excellent accuracy in discriminating SVT and VT. PMID:8947674
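
    The correlation-based morphology assessment can be sketched as a nearest-template search under Pearson correlation; the templates and test beat below are synthetic placeholders, not recorded cardiac signals.

      import numpy as np

      def classify_beat(beat, templates):
          # Assign an incoming beat to the class whose stored template it
          # correlates with most strongly (illustrative sketch of
          # correlation-based morphology assessment).
          best_label, best_r = None, -1.0
          for label, tmpl in templates.items():
              r = np.corrcoef(beat, tmpl)[0, 1]
              if r > best_r:
                  best_label, best_r = label, r
          return best_label, best_r

      t = np.linspace(0, 1, 200)
      templates = {"SVT": np.sin(2 * np.pi * 3 * t),
                   "VT": np.sign(np.sin(2 * np.pi * 3 * t))}
      beat = np.sin(2 * np.pi * 3 * t) + 0.1 * np.random.randn(200)
      print(classify_beat(beat, templates))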

  20. Risk factors and classification of stillbirth in a Middle Eastern population: a retrospective study.

    PubMed

    Kunjachen Maducolil, Mariam; Abid, Hafsa; Lobo, Rachael Marian; Chughtai, Ambreen Qayyum; Afzal, Arjumand Muhammad; Saleh, Huda Abdullah Hussain; Lindow, Stephen W

    2017-12-21

    To estimate the incidence of stillbirth, explore the associated maternal and fetal factors and to evaluate the most appropriate classification of stillbirth for a multiethnic population. This is a retrospective population-based study of stillbirth in a large tertiary unit. Data of each stillbirth with a gestational age ≥24 weeks in the year 2015 were collected from electronic medical records and analyzed. The stillbirth rate for our multiethnic population is 7.81 per 1000 births. Maternal medical factors comprised 52.4% in which the rates of hypertensive disorders, diabetes and other medical disorders were 22.5%, 20.8% and 8.3%, respectively. The most common fetal factor was intrauterine growth restriction (IUGR) (22.5%) followed by congenital anomalies (21.6%). All cases were categorized using the Wigglesworth, Aberdeen, Tulip, ReCoDe and International Classification of Diseases-perinatal mortality (ICD-PM) classifications and the rates of unclassified stillbirths were 59.2%, 46.6%, 16.6%, 11.6% and 7.5%, respectively. An autopsy was performed in 9.1% of cases reflecting local religious and cultural sensitivities. This study highlighted the modifiable risk factors among the Middle Eastern population. The most appropriate classification was the ICD-PM. The low rates of autopsy prevented a detailed evaluation of stillbirths, therefore it is suggested that a minimally invasive autopsy [postmortem magnetic resonance imaging (MRI)] may improve the quality of care.

  1. A fast image retrieval method based on SVM and imbalanced samples in filtering multimedia message spam

    NASA Astrophysics Data System (ADS)

    Chen, Zhang; Peng, Zhenming; Peng, Lingbing; Liao, Dongyi; He, Xin

    2011-11-01

    With the rapid development of the Multimedia Messaging Service (MMS), filtering Multimedia Message (MM) spam effectively in real time has become an urgent task. Because most MMs contain images or videos, this paper presents an image-retrieval-based method for filtering MM spam. The detection method is a combination of skin-color detection, texture detection, and face detection, and the classifier for this imbalanced problem is a very fast multi-class scheme combining a support vector machine (SVM) with a unilateral binary decision tree. Experiments on 3 test sets show that the proposed method is effective, with an interception rate of up to 60% and an average detection time of less than 1 second per image.

  2. Quantitative ultrasound assessment of breast tumor response to chemotherapy using a multi-parameter approach

    PubMed Central

    Tadayyon, Hadi; Sannachi, Lakshmanan; Gangeh, Mehrdad; Sadeghi-Naini, Ali; Tran, William; Trudeau, Maureen E.; Pritchard, Kathleen; Ghandi, Sonal; Verma, Sunil; Czarnota, Gregory J.

    2016-01-01

    Purpose This study demonstrated the ability of quantitative ultrasound (QUS) parameters in providing an early prediction of tumor response to neoadjuvant chemotherapy (NAC) in patients with locally advanced breast cancer (LABC). Methods Using a 6-MHz array transducer, ultrasound radiofrequency (RF) data were collected from 58 LABC patients prior to NAC treatment and at weeks 1, 4, and 8 of their treatment, and prior to surgery. QUS parameters including midband fit (MBF), spectral slope (SS), spectral intercept (SI), spacing among scatterers (SAS), attenuation coefficient estimate (ACE), average scatterer diameter (ASD), and average acoustic concentration (AAC) were determined from the tumor region of interest. Ultrasound data were compared with the ultimate clinical and pathological response of the patient's tumor to treatment and patient recurrence-free survival. Results Multi-parameter discriminant analysis using the κ-nearest-neighbor classifier demonstrated that the best response classification could be achieved using the combination of MBF, SS, and SAS, with an accuracy of 60 ± 10% at week 1, 77 ± 8% at week 4 and 75 ± 6% at week 8. Furthermore, when the QUS measurements at each time (week) were combined with pre-treatment (week 0) QUS values, the classification accuracies improved (70 ± 9% at week 1, 80 ± 5% at week 4, and 81 ± 6% at week 8). Finally, the multi-parameter QUS model demonstrated a significant difference in survival rates of responding and non-responding patients at weeks 1 and 4 (p=0.035, and 0.027, respectively). Conclusion This study demonstrated for the first time, using new parameters tested on relatively large patient cohort and leave-one-out classifier evaluation, that a hybrid QUS biomarker including MBF, SS, and SAS could, with relatively high sensitivity and specificity, detect the response of LABC tumors to NAC as early as after 4 weeks of therapy. The findings of this study also suggested that incorporating pre-treatment QUS parameters of a tumor improved the classification results. This work demonstrated the potential of QUS and machine learning methods for the early assessment of breast tumor response to NAC and providing personalized medicine with regards to the treatment planning of refractory patients. PMID:27105515

  3. Quantitative ultrasound assessment of breast tumor response to chemotherapy using a multi-parameter approach.

    PubMed

    Tadayyon, Hadi; Sannachi, Lakshmanan; Gangeh, Mehrdad; Sadeghi-Naini, Ali; Tran, William; Trudeau, Maureen E; Pritchard, Kathleen; Ghandi, Sonal; Verma, Sunil; Czarnota, Gregory J

    2016-07-19

    This study demonstrated the ability of quantitative ultrasound (QUS) parameters in providing an early prediction of tumor response to neoadjuvant chemotherapy (NAC) in patients with locally advanced breast cancer (LABC). Using a 6-MHz array transducer, ultrasound radiofrequency (RF) data were collected from 58 LABC patients prior to NAC treatment and at weeks 1, 4, and 8 of their treatment, and prior to surgery. QUS parameters including midband fit (MBF), spectral slope (SS), spectral intercept (SI), spacing among scatterers (SAS), attenuation coefficient estimate (ACE), average scatterer diameter (ASD), and average acoustic concentration (AAC) were determined from the tumor region of interest. Ultrasound data were compared with the ultimate clinical and pathological response of the patient's tumor to treatment and patient recurrence-free survival. Multi-parameter discriminant analysis using the κ-nearest-neighbor classifier demonstrated that the best response classification could be achieved using the combination of MBF, SS, and SAS, with an accuracy of 60 ± 10% at week 1, 77 ± 8% at week 4 and 75 ± 6% at week 8. Furthermore, when the QUS measurements at each time (week) were combined with pre-treatment (week 0) QUS values, the classification accuracies improved (70 ± 9% at week 1, 80 ± 5% at week 4, and 81 ± 6% at week 8). Finally, the multi-parameter QUS model demonstrated a significant difference in survival rates of responding and non-responding patients at weeks 1 and 4 (p=0.035, and 0.027, respectively). This study demonstrated for the first time, using new parameters tested on relatively large patient cohort and leave-one-out classifier evaluation, that a hybrid QUS biomarker including MBF, SS, and SAS could, with relatively high sensitivity and specificity, detect the response of LABC tumors to NAC as early as after 4 weeks of therapy. The findings of this study also suggested that incorporating pre-treatment QUS parameters of a tumor improved the classification results. This work demonstrated the potential of QUS and machine learning methods for the early assessment of breast tumor response to NAC and providing personalized medicine with regards to the treatment planning of refractory patients.
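
    A minimal sketch of leave-one-out evaluation of a k-nearest-neighbour classifier on a small three-feature set, assuming scikit-learn and synthetic data in place of the MBF/SS/SAS measurements reported above.

      from sklearn.datasets import make_classification
      from sklearn.model_selection import LeaveOneOut, cross_val_score
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      # 58 "patients", 3 features standing in for MBF, SS and SAS
      X, y = make_classification(n_samples=58, n_features=3, n_informative=3,
                                 n_redundant=0, random_state=0)
      knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=3))
      acc = cross_val_score(knn, X, y, cv=LeaveOneOut()).mean()
      print(f"leave-one-out accuracy: {acc:.2f}")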

  4. A Preliminary Statistical Investigation into the Impact of an N-Gram Analysis Approach Based on Word Syntactic Categories Toward Text Author Classification

    DTIC Science & Technology

    2000-06-01

    Keywords: N-gram, Shakespeare, Middleton, Wardigo, Funeral Elegy, Author Classification. The study applies an N-gram analysis based on word syntactic categories to the problem of text author classification, including the Funeral Elegy signed "W. S.", whose authorship was still in question at the time of the investigation.

  5. Derivative spectra matching for wetland vegetation identification and classification by hyperspectral image

    NASA Astrophysics Data System (ADS)

    Wang, Jinnian; Zheng, Lanfen; Tong, Qingxi

    1998-08-01

    In this paper, we report research results on applying hyperspectral remote sensing data to the identification and classification of wetland plant species and associations. Hyperspectral data were acquired by the Modular Airborne Imaging Spectrometer (MAIS) over the Poyang Lake wetland, China. A derivative spectral matching algorithm was used for hyperspectral vegetation analysis, with field-measured spectra serving as references. In the study area, seven wetland plant associations were identified and classified with an overall average accuracy of 84.03%.

  6. Comparing classification methods for diffuse reflectance spectra to improve tissue specific laser surgery.

    PubMed

    Engelhardt, Alexander; Kanawade, Rajesh; Knipfer, Christian; Schmid, Matthias; Stelzle, Florian; Adler, Werner

    2014-07-16

    In the field of oral and maxillofacial surgery, newly developed laser scalpels have multiple advantages over traditional metal scalpels. However, they lack haptic feedback. This is dangerous near e.g. nerve tissue, which has to be preserved during surgery. One solution to this problem is to train an algorithm that analyzes the reflected light spectra during surgery and can classify these spectra into different tissue types, in order to ultimately send a warning or temporarily switch off the laser when critical tissue is about to be ablated. Various machine learning algorithms are available for this task, but a detailed analysis is needed to assess the most appropriate algorithm. In this study, a small data set is used to simulate many larger data sets according to a multivariate Gaussian distribution. Various machine learning algorithms are then trained and evaluated on these data sets. The algorithms' performance is subsequently evaluated and compared by averaged confusion matrices and ultimately by boxplots of misclassification rates. The results are validated on the smaller, experimental data set. Most classifiers have a median misclassification rate below 0.25 in the simulated data. The most notable performance was observed for the Penalized Discriminant Analysis, with a misclassification rate of 0.00 in the simulated data, and an average misclassification rate of 0.02 in a 10-fold cross validation on the original data. The results suggest a Penalized Discriminant Analysis is the most promising approach, most probably because it considers the functional, correlated nature of the reflectance spectra. The results of this study improve the accuracy of real-time tissue discrimination and are an essential step towards improving the safety of oral laser surgery.
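
    The simulation design described above (class-conditional multivariate Gaussian spectra, classifiers compared by misclassification rate) can be sketched as follows; the dimensions, covariance structure and the two classifiers shown are assumptions, not the study's settings.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n, d = 100, 30
      mean_a, mean_b = np.zeros(d), np.full(d, 0.5)
      cov = 0.5 * np.eye(d) + 0.5          # correlated "bands", as in spectra
      X = np.vstack([rng.multivariate_normal(mean_a, cov, n),
                     rng.multivariate_normal(mean_b, cov, n)])
      y = np.array([0] * n + [1] * n)

      # Compare two generic classifiers by cross-validated misclassification rate
      for name, clf in [("LDA", LinearDiscriminantAnalysis()), ("SVM", SVC())]:
          err = 1 - cross_val_score(clf, X, y, cv=10).mean()
          print(name, "misclassification rate:", round(err, 3))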

  7. Comparative Analysis of RF Emission Based Fingerprinting Techniques for ZigBee Device Classification

    DTIC Science & Technology

    quantify the differences in various RF fingerprinting techniques via comparative analysis of MDA/ML classification results. The findings herein demonstrate...correct classification rates followed by COR-DNA and then RF-DNA in most test cases and especially in low Eb/N0 ranges, where ZigBee is designed to operate.

  8. 78 FR 15377 - Agency Information Collection Activities; Submission for OMB Review; Comment Request; Requests To...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-11

    ... for OMB Review; Comment Request; Requests To Approve Conformed Wage Classifications and Unconventional... Classifications and Unconventional Fringe Benefit Plans Under the Davis-Bacon and Related Acts and Contract Work... collection consist of: (A) Reports of conformed classifications and wage rates and (B) requests for approval...

  9. 48 CFR 52.222-6 - Davis-Bacon Act.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... defined in paragraph (a)(1)(i), or the “secondary site of the work” as defined in paragraph (a)(1)(ii) of... the classification of work actually performed, without regard to skill, except as provided in the... classification may be compensated at the rate specified for each classification for the time actually worked...

  10. 48 CFR 52.222-6 - Davis-Bacon Act.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... defined in paragraph (a)(1)(i), or the “secondary site of the work” as defined in paragraph (a)(1)(ii) of... the classification of work actually performed, without regard to skill, except as provided in the... classification may be compensated at the rate specified for each classification for the time actually worked...

  11. [Application of rafting K-wire technique for tibial plateau fractures].

    PubMed

    Zhang, Xing-zhou; Yu, Wei-zhong; Li, Yun-feng; Liu, Yan-hui

    2015-12-01

    To summarize the application of the rafting K-wire technique for tibial plateau fractures. From January 2013 to January 2015, 45 patients with tibial plateau fractures were treated by locking plate with rafting K-wires, including 33 males and 12 females with an average age of 44.2 years (range, 22 to 56 years). According to the Schatzker classification, 6 cases were type II, 8 were type III, 4 were type IV, 4 were type V, and 5 were type VI. Allogeneic bone grafts were performed for bone defects. All patients were fixed with two to five K-wires. Partial weight loading was encouraged at 3 months after operation, and full weight loading at 5 months after operation. Postoperative complications were observed, and the Rasmussen clinical and radiological assessment was used to evaluate clinical results. All patients were followed up for 10 to 23 months, with an average of 14 months. According to the Rasmussen clinical and radiological assessment, clinical scores were 23.58 ± 6.33 and radiological scores were 14.00 ± 6.33; the excellent and good rates were 82.2% and 77.8%, respectively. Four patients developed severe osteoporosis and collapse of the articular surface; 5 patients developed traumatic arthritis. The rafting K-wire technique with an anatomical plate can effectively fix and support collapsed plateau and joint bone fragments, increase the support surface area and reduce the rate of postoperative reduction loss.

  12. St. Louis demonstration final report: refuse processing plant equipment, facilities, and environmental evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fiscus, D.E.; Gorman, P.G.; Schrag, M.P.

    1977-09-01

    The results are presented of processing plant evaluations of the St. Louis-Union Electric Refuse Fuel Project, including equipment and facilities as well as assessment of environmental emissions at both the processing and the power plants. Data on plant material flows and operating parameters, plant operating costs, characteristics of plant material flows, and emissions from various processing operations were obtained during a testing program encompassing 53 calendar weeks. Refuse derived fuel (RDF) is the major product (80.6% by weight) of the refuse processing plant, the other being ferrous metal scrap, a marketable by-product. Average operating costs for the entire evaluation period were $8.26/Mg ($7.49/ton). The average overall processing rate for the period was 168 Mg/8-h day (185.5 tons/8-h day) at 31.0 Mg/h (34.2 tons/h). Future plants using an air classification system of the type used at the St. Louis demonstration plant will need an emissions control device for particulates from the large de-entrainment cyclone. Also in the air exhaust from the cyclone were total counts of bacteria and viruses several times higher than those of suburban ambient air. No water effluent or noise exposure problems were encountered, although landfill leachate mixed with ground water could result in contamination, given low dilution rates.

  13. Gait Phase Recognition for Lower-Limb Exoskeleton with Only Joint Angular Sensors

    PubMed Central

    Liu, Du-Xin; Wu, Xinyu; Du, Wenbin; Wang, Can; Xu, Tiantian

    2016-01-01

    Gait phase is widely used for gait trajectory generation, gait control and gait evaluation on lower-limb exoskeletons. So far, a variety of methods have been developed to identify the gait phase for lower-limb exoskeletons. Angular sensors on lower-limb exoskeletons are essential for joint closed-loop control; however, other types of sensors, such as plantar pressure, attitude or inertial measurement unit sensors, are not indispensable. Therefore, to make full use of existing sensors, we propose a novel gait phase recognition method for lower-limb exoskeletons using only joint angular sensors. The method consists of two procedures. Firstly, the gait deviation distances during walking are calculated and classified by Fisher's linear discriminant method, and one gait cycle is divided into eight gait phases. The validity of the classification results is also verified based on large gait samples. Secondly, we build a gait phase recognition model based on multilayer perceptron and train it with the phase-labeled gait data. The experimental result of cross-validation shows that the model has a 94.45% average correct rate of set (CRS) and an 87.22% average correct rate of phase (CRP) on the testing set, and it can predict the gait phase accurately. The novel method avoids installing additional sensors on the exoskeleton or human body and simplifies the sensory system of the lower-limb exoskeleton. PMID:27690023

  14. Voice based gender classification using machine learning

    NASA Astrophysics Data System (ADS)

    Raahul, A.; Sapthagiri, R.; Pankaj, K.; Vijayarajan, V.

    2017-11-01

    Gender identification is one of the major problems in speech analysis today: tracing gender from acoustic data such as pitch, median and frequency. Machine learning gives promising results for classification problems across research domains, and several performance metrics are available to evaluate algorithms. We propose a comparative model for evaluating five different machine learning algorithms on eight different metrics for gender classification from acoustic data: Linear Discriminant Analysis (LDA), K-Nearest Neighbour (KNN), Classification and Regression Trees (CART), Random Forest (RF), and Support Vector Machine (SVM). The main criterion in evaluating any algorithm is its performance; in classification problems the misclassification rate must be low, which means the accuracy rate must be high. Location and gender of a person have become very important in economic markets in the form of AdSense. With this comparative model we assess the different ML algorithms and find the best fit for gender classification of acoustic data.
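
    A hedged sketch of the comparative evaluation described above, assuming scikit-learn, synthetic features in place of the acoustic data, and cross-validated accuracy as the single metric shown (the paper uses eight metrics).

      from sklearn.datasets import make_classification
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      X, y = make_classification(n_samples=600, n_features=20, random_state=2)
      models = {"LDA": LinearDiscriminantAnalysis(),
                "KNN": KNeighborsClassifier(),
                "CART": DecisionTreeClassifier(random_state=2),
                "RF": RandomForestClassifier(random_state=2),
                "SVM": SVC()}
      for name, model in models.items():
          print(name, round(cross_val_score(model, X, y, cv=10).mean(), 3))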

  15. Cesarean Section Rate Analysis in University Hospital Tuzla - According to Robson's Classification.

    PubMed

    Fatusic, Jasenko; Hudic, Igor; Fatusic, Zlatan; Zildzic-Moralic, Aida; Zivkovic, Milorad

    2016-06-01

    For the last decades, there has been public concern about increasing Cesarean Section (CS) rates, and it is an issue of international public health concern. According to the World Health Organisation (WHO) there is no justification for more than 10-15% of births to be by CS. WHO proposes the Robson ten-group classification as a global standard for assessing, monitoring and comparing cesarean section rates. The aim of this study was to investigate the Cesarean section rate at University Hospital Tuzla, Bosnia and Herzegovina. A cross-sectional study was conducted for a one-year period, 2015. Statistical analysis and graph-table presentation were performed using Excel 2010 and Microsoft Office programs. Out of 3,672 births, a total of 936 were performed by CS. The percentage of CS relative to the total number of births was 25.47%. According to the Robson classification, the largest was group 5, with a relative contribution of 29.80%. In second and third place were groups 1 and 2, with relative contributions of 26.06% and 15.78%, respectively. Groups 1, 2 and 5 together accounted for a relative contribution of 71.65%; all other groups contributed the remaining 28.35%. The Robson 10-group classification provides an easy way of collecting information about the CS rate. It is important that efforts to reduce the overall CS rate focus on reducing primary CS. Data from our study confirm this attitude.

  16. Mechanisms of starch digestion by α-amylase-Structural basis for kinetic properties.

    PubMed

    Dhital, Sushil; Warren, Frederick J; Butterworth, Peter J; Ellis, Peter R; Gidley, Michael J

    2017-03-24

    Recent studies of the mechanisms determining the rate and extent of starch digestion by α-amylase are reviewed in the light of current widely-used classifications for (a) the proportions of rapidly-digestible (RDS), slowly-digestible (SDS), and resistant starch (RS) based on in vitro digestibility, and (b) the types of resistant starch (RS 1,2,3,4…) based on physical and/or chemical form. Based on methodological advances and new mechanistic insights, it is proposed that both classification systems should be modified. Kinetic analysis of digestion profiles provides a robust set of parameters that should replace the classification of starch as a combination of RDS, SDS, and RS from a single enzyme digestion experiment. This should involve determination of the minimum number of kinetic processes needed to describe the full digestion profile, together with the proportion of starch involved in each process, and the kinetic properties of each process. The current classification of resistant starch types as RS1,2,3,4 should be replaced by one which recognizes the essential kinetic nature of RS (enzyme digestion rate vs. small intestinal passage rate), and that there are two fundamental origins for resistance based on (i) rate-determining access/binding of enzyme to substrate and (ii) rate-determining conversion of substrate to product once bound.

  17. [Application of damage control concept in severe limbs fractures combining with multiple trauma].

    PubMed

    Bayin, Er-gu-le; Jin, Hong-bing; Li, Ming

    2015-09-01

    To discuss the application and clinical effect of the damage control concept in the treatment of severe limb fractures combined with multiple trauma. From July 2009 to July 2012, 30 patients with severe limb fractures combined with multiple trauma were treated according to the damage control concept, including 20 males and 10 females with an average age of (34.03 ± 12.81) years (range, 20 to 60 years); the ISS averaged (35.00 ± 12.81) points (range, 26 to 54 points). The control group contained 30 patients with severe limb fractures combined with multiple trauma treated by traditional operation from June 2006 to June 2009; there were 23 males and 7 females with an average age of (34.23 ± 11.04) years (range, 18 to 65 years), and the ISS averaged (35.56 ± 11.04) points (range, 26 to 51 points). Age, gender, ISS, Gustilo classification, operation time, intraoperative blood loss, blood transfusion, postoperative complications and mortality rate were observed and compared. In the damage control group, 28 cases survived and 2 cases (6.7%) died; 6 cases had postoperative complications, including 2 cases of adult respiratory distress syndrome, 1 case of multiple organ failure, 1 case of disseminated intravascular coagulation and 2 cases of wound infection. In the control group, 22 cases survived and 8 cases (26.7%) died; 13 cases had postoperative complications, including 4 cases of adult respiratory distress syndrome, 2 cases of multiple organ failure, 2 cases of disseminated intravascular coagulation and 3 cases of wound infection. There were no statistically significant differences between the two groups in age, gender, ISS, Gustilo classification or complications (P > 0.05); however, there were statistically significant differences in mortality rate, operation time, blood loss and blood transfusion between the two groups (P < 0.05). The damage control concept, used to treat severe limb fractures combined with multiple trauma, provides rapid and effective therapy and can improve the survival rate and reduce complications.

  18. Individually adapted imagery improves brain-computer interface performance in end-users with disability.

    PubMed

    Scherer, Reinhold; Faller, Josef; Friedrich, Elisabeth V C; Opisso, Eloy; Costa, Ursula; Kübler, Andrea; Müller-Putz, Gernot R

    2015-01-01

    Brain-computer interfaces (BCIs) translate oscillatory electroencephalogram (EEG) patterns into action. Different mental activities modulate spontaneous EEG rhythms in various ways. Non-stationarity and inherent variability of EEG signals, however, make reliable recognition of modulated EEG patterns challenging. Able-bodied individuals who use a BCI for the first time achieve - on average - binary classification performance of about 75%. Performance in users with central nervous system (CNS) tissue damage is typically lower. User training generally enhances reliability of EEG pattern generation and thus also robustness of pattern recognition. In this study, we investigated the impact of mental tasks on binary classification performance in BCI users with central nervous system (CNS) tissue damage such as persons with stroke or spinal cord injury (SCI). Motor imagery (MI), that is the kinesthetic imagination of movement (e.g. squeezing a rubber ball with the right hand), is the "gold standard" and mainly used to modulate EEG patterns. Based on our recent results in able-bodied users, we hypothesized that pair-wise combination of "brain-teaser" (e.g. mental subtraction and mental word association) and "dynamic imagery" (e.g. hand and feet MI) tasks significantly increases classification performance of induced EEG patterns in the selected end-user group. Within-day (How stable is the classification within a day?) and between-day (How well does a model trained on day one perform on unseen data of day two?) analysis of variability of mental task pair classification in nine individuals confirmed the hypothesis. We found that the use of the classical MI task pair hand vs. feet leads to significantly lower classification accuracy - on average up to 15% less - in most users with stroke or SCI. User-specific selection of task pairs was again essential to enhance performance. We expect that the gained evidence will significantly contribute to making imagery-based BCI technology accessible to a larger population of users including individuals with special needs due to CNS damage.

  19. Kinematic Analysis of Javelin Throw Performed by Wheelchair Athletes of Different Functional Classes

    PubMed Central

    Chow, John W.; Kuenster, Ann F.; Lim, Young-tae

    2003-01-01

    The purpose of this study was to identify those kinematic characteristics that are most closely related to the functional classification of a wheelchair athlete and measured distance of a javelin throw. Two S-VHS camcorders (60 field·s-1) were used to record the performance of 15 males of different classes. Each subject performed 6-10 throws and the best two legal throws from each subject were selected for analysis. Three-dimensional kinematics of the javelin and upper body segments at the instant of release and during the throw (delivery) were determined. The selection of kinematic parameters that were analyzed in this study was based on a javelin throw model showing the factors that determine the measured distance of a throw. The average of two throws for each subject was used to compute Spearman rank correlation coefficients between selected parameters and measured distance, and between selected parameters and the functional classification. The speeds and angles of the javelin at release, which ranged from 9.1 to 14.7 m·s-1 and 29.6 to 35.8°, respectively, were smaller than those exhibited by elite male able-bodied throwers. As expected, the speed of the javelin at release was significantly correlated to both the classification (p<0.01) and measured distance (p<0.001). Of the segmental kinematic parameters, significant correlations were found between the trunk inclination at release and classification and between the angular speed at release and measured distance (p<0.01 for both). The angular speed of the shoulder girdle at release and the average angular speeds of the shoulder girdle during the delivery were significantly correlated to both the classification and measured distance (p<0.05). The results indicate that shoulder girdle movement during the delivery is an important determinant of classification and measured distance. PMID:24616609
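
    The Spearman rank correlation used above can be computed with scipy; the release speeds and distances below are made-up placeholders for 15 athletes, shown only to illustrate the call.

      import numpy as np
      from scipy.stats import spearmanr

      # Hypothetical release speeds (m/s) and measured distances (m)
      release_speed = np.array([9.1, 9.8, 10.2, 10.9, 11.3, 11.8, 12.0, 12.4,
                                12.7, 13.0, 13.3, 13.7, 14.0, 14.4, 14.7])
      distance = np.array([12.5, 14.0, 13.2, 16.1, 17.0, 18.2, 17.5, 19.3,
                           20.1, 21.0, 20.4, 22.5, 23.1, 24.0, 25.2])
      rho, p = spearmanr(release_speed, distance)
      print(f"rho = {rho:.2f}, p = {p:.4f}")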

  20. Individually Adapted Imagery Improves Brain-Computer Interface Performance in End-Users with Disability

    PubMed Central

    Scherer, Reinhold; Faller, Josef; Friedrich, Elisabeth V. C.; Opisso, Eloy; Costa, Ursula; Kübler, Andrea; Müller-Putz, Gernot R.

    2015-01-01

    Brain-computer interfaces (BCIs) translate oscillatory electroencephalogram (EEG) patterns into action. Different mental activities modulate spontaneous EEG rhythms in various ways. Non-stationarity and inherent variability of EEG signals, however, make reliable recognition of modulated EEG patterns challenging. Able-bodied individuals who use a BCI for the first time achieve - on average - binary classification performance of about 75%. Performance in users with central nervous system (CNS) tissue damage is typically lower. User training generally enhances reliability of EEG pattern generation and thus also robustness of pattern recognition. In this study, we investigated the impact of mental tasks on binary classification performance in BCI users with central nervous system (CNS) tissue damage such as persons with stroke or spinal cord injury (SCI). Motor imagery (MI), that is the kinesthetic imagination of movement (e.g. squeezing a rubber ball with the right hand), is the "gold standard" and mainly used to modulate EEG patterns. Based on our recent results in able-bodied users, we hypothesized that pair-wise combination of "brain-teaser" (e.g. mental subtraction and mental word association) and "dynamic imagery" (e.g. hand and feet MI) tasks significantly increases classification performance of induced EEG patterns in the selected end-user group. Within-day (How stable is the classification within a day?) and between-day (How well does a model trained on day one perform on unseen data of day two?) analysis of variability of mental task pair classification in nine individuals confirmed the hypothesis. We found that the use of the classical MI task pair hand vs. feet leads to significantly lower classification accuracy - on average up to 15% less - in most users with stroke or SCI. User-specific selection of task pairs was again essential to enhance performance. We expect that the gained evidence will significantly contribute to making imagery-based BCI technology accessible to a larger population of users including individuals with special needs due to CNS damage. PMID:25992718

  1. Which catchment characteristics control the temporal dependence structure of daily river flows?

    NASA Astrophysics Data System (ADS)

    Chiverton, Andrew; Hannaford, Jamie; Holman, Ian; Corstanje, Ron; Prudhomme, Christel; Bloomfield, John; Hess, Tim

    2014-05-01

    A hydrological classification system would provide information about the dominant processes in the catchment enabling information to be transferred between catchments. Currently there is no widely-agreed upon system for classifying river catchments. This paper developed a novel approach to assess the influence that catchment characteristics have on the precipitation-to-flow relationship, using a catchment classification based on the average temporal dependence structure in daily river flow data over the period 1980 to 2010. Temporal dependence in river flow data is driven by the flow pathways, connectivity and storage within the catchment. Temporal dependence was analysed by creating temporally averaged semi-variograms for a set of 116 near-natural catchments (in order to prevent direct anthropogenic disturbances influencing the results) distributed throughout the UK. Cluster analysis, using the variogram, classified the catchments into four well defined clusters driven by the interaction of catchment characteristics, predominantly characteristics which influence the precipitation-to-flow relationship. Geology, depth to gleyed layer in soils, slope of the catchment and the percentage of arable land were significantly different between the clusters. These characteristics drive the temporal dependence structure by influencing the rate at which water moves through the catchment and / or the storage in the catchment. Arable land is correlated with several other variables, hence is a proxy indicating the residence time of the water in the catchment. Finally, quadratic discriminant analysis was used to show that a model with five catchment characteristics is able to predict the temporal dependence structure for un-gauged catchments. This work demonstrates that a variogram-based approach is a powerful and flexible methodology for grouping catchments based on the precipitation-to-flow relationship which could be applied to any set of catchments with a relatively complete daily river flow record.
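
    A simplified sketch of an empirical temporal semivariogram for a daily flow series, under the usual definition of semivariance at lag h; the synthetic record and lag range are assumptions, not the 116-catchment dataset used above.

      import numpy as np

      def temporal_semivariogram(flow, max_lag):
          # Empirical semivariance of a daily flow series at lags
          # 1..max_lag days: gamma(h) = 0.5 * mean((q[t+h] - q[t])^2).
          flow = np.asarray(flow, dtype=float)
          gamma = []
          for h in range(1, max_lag + 1):
              diffs = flow[h:] - flow[:-h]
              gamma.append(0.5 * np.mean(diffs ** 2))
          return np.array(gamma)

      # Example on a synthetic, autocorrelated flow record
      rng = np.random.default_rng(0)
      q = np.cumsum(rng.normal(size=365)) + 50
      print(temporal_semivariogram(q, max_lag=5))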

  2. Wheat Ear Detection in Plots by Segmenting Mobile Laser Scanner Data

    NASA Astrophysics Data System (ADS)

    Velumani, K.; Oude Elberink, S.; Yang, M. Y.; Baret, F.

    2017-09-01

    The use of Light Detection and Ranging (LiDAR) to study agricultural crop traits is becoming popular. Wheat plant traits such as crop height, biomass fractions and plant population are of interest to agronomists and biologists for the assessment of a genotype's performance in the environment. Among these performance indicators, plant population in the field is still widely estimated through manual counting which is a tedious and labour intensive task. The goal of this study is to explore the suitability of LiDAR observations to automate the counting process by the individual detection of wheat ears in the agricultural field. However, this is a challenging task owing to the random cropping pattern and noisy returns present in the point cloud. The goal is achieved by first segmenting the 3D point cloud followed by the classification of segments into ears and non-ears. In this study, two segmentation techniques: a) voxel-based segmentation and b) mean shift segmentation were adapted to suit the segmentation of plant point clouds. An ear classification strategy was developed to distinguish the ear segments from leaves and stems. Finally, the ears extracted by the automatic methods were compared with reference ear segments prepared by manual segmentation. Both the methods had an average detection rate of 85 %, aggregated over different flowering stages. The voxel-based approach performed well for late flowering stages (wheat crops aged 210 days or more) with a mean percentage accuracy of 94 % and takes less than 20 seconds to process 50,000 points with an average point density of 16  points/cm2. Meanwhile, the mean shift approach showed comparatively better counting accuracy of 95% for early flowering stage (crops aged below 225 days) and takes approximately 4 minutes to process 50,000 points.

  3. A Framework of Temporal-Spatial Descriptors-Based Feature Extraction for Improved Myoelectric Pattern Recognition.

    PubMed

    Khushaba, Rami N; Al-Timemy, Ali H; Al-Ani, Ahmed; Al-Jumaily, Adel

    2017-10-01

    The extraction of accurate and efficient descriptors of muscular activity plays an important role in tackling the challenging problem of myoelectric control of powered prostheses. In this paper, we present a new feature extraction framework that aims to give an enhanced representation of muscular activities through increasing the amount of information that can be extracted from individual and combined electromyogram (EMG) channels. We propose to use time-domain descriptors (TDDs) in estimating the EMG signal power spectrum characteristics; a step that preserves the computational power required for the construction of spectral features. Subsequently, TDD is used in a process that involves: 1) representing the temporal evolution of the EMG signals by progressively tracking the correlation between the TDD extracted from each analysis time window and a nonlinearly mapped version of it across the same EMG channel and 2) representing the spatial coherence between the different EMG channels, which is achieved by calculating the correlation between the TDD extracted from the differences of all possible combinations of pairs of channels and their nonlinearly mapped versions. The proposed temporal-spatial descriptors (TSDs) are validated on multiple sparse and high-density (HD) EMG data sets collected from a number of intact-limbed subjects and amputees performing a large number of hand and finger movements. Classification results showed significant reductions in the achieved error rates in comparison to other methods, with an improvement of at least 8% on average across all subjects. Additionally, the proposed TSDs performed particularly well on HD-EMG problems, with average classification errors of <5% across all subjects using window lengths of only 50 ms.

  4. Recognition of skin melanoma through dermoscopic image analysis

    NASA Astrophysics Data System (ADS)

    Gómez, Catalina; Herrera, Diana Sofia

    2017-11-01

    Melanoma skin cancer diagnosis can be challenging due to the similarities of the early stage symptoms with regular moles. Standardized visual parameters can be determined and characterized to suspect a melanoma cancer type. The automation of this diagnosis could have an impact in the medical field by providing a tool to support the specialists with high accuracy. The objective of this study is to develop an algorithm trained to distinguish a highly probable melanoma from a non-dangerous mole by the segmentation and classification of dermoscopic mole images. We evaluate our approach on the dataset provided by the International Skin Imaging Collaboration used in the International Challenge Skin Lesion Analysis Towards Melanoma Detection. For the segmentation task, we apply a preprocessing algorithm and use Otsu's thresholding in the best performing color space; the average Jaccard Index in the test dataset is 70.05%. For the subsequent classification stage, we use joint histograms in the YCbCr color space, an RBF Gaussian SVM trained with five features concerning circularity and irregularity of the segmented lesion, and Gray Level Co-occurrence Matrix features for texture analysis. These features are combined to obtain an average classification accuracy of 63.3% in the test dataset.
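
    As a hedged sketch of the segmentation step and its evaluation (the paper's preprocessing and colour-space selection are omitted here, and a synthetic image stands in for a dermoscopic channel):

```python
import numpy as np
from skimage.filters import threshold_otsu

def segment_lesion(gray):
    """Otsu thresholding of a single-channel dermoscopic image.
    The best-performing colour channel is chosen empirically in the
    paper; here a generic grayscale array is assumed."""
    t = threshold_otsu(gray)
    return gray < t  # lesions are typically darker than the surrounding skin

def jaccard(pred, truth):
    """Jaccard index between a predicted and a reference binary mask."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

# Hypothetical example with a synthetic image and ground-truth mask.
rng = np.random.default_rng(1)
img = rng.random((128, 128))
truth = img < 0.3
print(jaccard(segment_lesion(img), truth))
```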

  5. Forecasting Daily Volume and Acuity of Patients in the Emergency Department.

    PubMed

    Calegari, Rafael; Fogliatto, Flavio S; Lucini, Filipe R; Neyeloff, Jeruza; Kuchenbecker, Ricardo S; Schaan, Beatriz D

    2016-01-01

    This study aimed at analyzing the performance of four forecasting models in predicting the demand for medical care in terms of daily visits in an emergency department (ED) that handles high complexity cases, testing the influence of climatic and calendrical factors on demand behavior. We tested different mathematical models to forecast ED daily visits at Hospital de Clínicas de Porto Alegre (HCPA), which is a tertiary care teaching hospital located in Southern Brazil. Model accuracy was evaluated using mean absolute percentage error (MAPE), considering forecasting horizons of 1, 7, 14, 21, and 30 days. The demand time series was stratified according to patient classification using the Manchester Triage System's (MTS) criteria. Models tested were the simple seasonal exponential smoothing (SS), seasonal multiplicative Holt-Winters (SMHW), seasonal autoregressive integrated moving average (SARIMA), and multivariate autoregressive integrated moving average (MSARIMA). Performance of models varied according to patient classification, such that SS was the best choice when all types of patients were jointly considered, and SARIMA was the most accurate for modeling demands of very urgent (VU) and urgent (U) patients. The MSARIMA models taking into account climatic factors did not improve the performance of the SARIMA models, independent of patient classification.
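
    MAPE, the accuracy measure used above, is straightforward to compute; a minimal sketch with illustrative visit counts (placeholders, not the HCPA data):

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, the accuracy measure used to
    compare the forecasting models."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# Hypothetical daily ED visit counts and one-week-ahead forecasts.
visits   = [230, 245, 260, 220, 210, 250, 240]
forecast = [225, 250, 255, 230, 205, 245, 250]
print(f"MAPE = {mape(visits, forecast):.1f}%")
```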

  6. Forecasting Daily Volume and Acuity of Patients in the Emergency Department

    PubMed Central

    Fogliatto, Flavio S.; Neyeloff, Jeruza; Kuchenbecker, Ricardo S.; Schaan, Beatriz D.

    2016-01-01

    This study aimed at analyzing the performance of four forecasting models in predicting the demand for medical care in terms of daily visits in an emergency department (ED) that handles high complexity cases, testing the influence of climatic and calendrical factors on demand behavior. We tested different mathematical models to forecast ED daily visits at Hospital de Clínicas de Porto Alegre (HCPA), which is a tertiary care teaching hospital located in Southern Brazil. Model accuracy was evaluated using mean absolute percentage error (MAPE), considering forecasting horizons of 1, 7, 14, 21, and 30 days. The demand time series was stratified according to patient classification using the Manchester Triage System's (MTS) criteria. Models tested were the simple seasonal exponential smoothing (SS), seasonal multiplicative Holt-Winters (SMHW), seasonal autoregressive integrated moving average (SARIMA), and multivariate autoregressive integrated moving average (MSARIMA). Performance of models varied according to patient classification, such that SS was the best choice when all types of patients were jointly considered, and SARIMA was the most accurate for modeling demands of very urgent (VU) and urgent (U) patients. The MSARIMA models taking into account climatic factors did not improve the performance of the SARIMA models, independent of patient classification. PMID:27725842

  7. Classification of EEG signals using a genetic-based machine learning classifier.

    PubMed

    Skinner, B T; Nguyen, H T; Liu, D K

    2007-01-01

    This paper investigates the efficacy of the genetic-based learning classifier system XCS for the classification of noisy, artefact-inclusive human electroencephalogram (EEG) signals represented using large condition strings (108 bits). EEG signals from three participants were recorded while they performed four mental tasks designed to elicit hemispheric responses. Autoregressive (AR) models and Fast Fourier Transform (FFT) methods were used to form feature vectors with which mental tasks can be discriminated. XCS achieved a maximum classification accuracy of 99.3% and a best average of 88.9%. The relative classification performance of XCS was then compared against four non-evolutionary classifier systems originating from different learning techniques. The experimental results will be used as part of our larger research effort investigating the feasibility of using EEG signals as an interface to allow paralysed persons to control a powered wheelchair or other devices.

  8. Mortality with musculoskeletal disorders as underlying cause in Sweden 1997-2013: a time trend aggregate level study.

    PubMed

    Kiadaliri, Aliasghar A; Englund, Martin

    2016-04-14

    The aim was to assess the time trend of mortality with musculoskeletal disorders (MSD) as the underlying cause of death in Sweden from 1997 to 2013. We obtained data on MSD as underlying cause of death across age and sex groups from the National Board of Health and Welfare's Cause of Death Register. Age-standardized mortality rates per million population for all MSD, its six major subgroups, and all other ICD-10 (International Classification of Disease) chapters were calculated. We computed the average annual percent change (AAPC) in the mortality rates across age/sex groups using joinpoint regression analysis, fitting a regression line to the natural logarithm of the age-standardized mortality rates with calendar year as a predictor. There were a total of 7,976 deaths (0.5% of all-cause deaths) with MSD as the underlying cause of death (32.5% of these deaths were caused by rheumatoid arthritis [RA]). The overall age-standardized mortality rates (95% CI) were 16.0 (15.4 to 16.7) and 24.9 (24.1 to 25.7) per million among men and women, respectively (women/men rate ratio 1.55; 95% CI 1.47 to 1.63). On average, the mortality rate declined by 2.3% per year, and only circulatory system mortality had a more favourable decline than mortality with MSD as underlying cause. Among MSD subgroups, the highest decline was observed for RA (3.7% per year) during the study period. Across age groups, while there were generally stable or declining trends, spondylopathies and osteoporosis mortality among people ≥ 75 years increased by 2% and 1.5% per year, respectively. Overall, mortality with MSD as the underlying cause has declined in Sweden over the last two decades, with the highest decline for RA. However, there are variations across MSD subgroups which warrant further investigation.
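
    The AAPC reported above follows from the slope of a log-linear regression of the age-standardized rates on calendar year; a minimal sketch with an illustrative series (not the Swedish register data):

```python
import numpy as np

def aapc(years, rates):
    """Average annual percent change from a log-linear trend:
    fit ln(rate) = a + b*year and report 100*(exp(b) - 1)."""
    b, _ = np.polyfit(years, np.log(rates), 1)
    return 100.0 * (np.exp(b) - 1.0)

# Illustrative series: a rate declining roughly 2.3% per year.
years = np.arange(1997, 2014)
rates = 25.0 * 0.977 ** (years - 1997)
print(f"AAPC = {aapc(years, rates):.1f}% per year")
```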

  9. Two Approaches to Estimation of Classification Accuracy Rate under Item Response Theory

    ERIC Educational Resources Information Center

    Lathrop, Quinn N.; Cheng, Ying

    2013-01-01

    Within the framework of item response theory (IRT), there are two recent lines of work on the estimation of classification accuracy (CA) rate. One approach estimates CA when decisions are made based on total sum scores, the other based on latent trait estimates. The former is referred to as the Lee approach, and the latter, the Rudner approach,…

  10. Rt-Space: A Real-Time Stochastically-Provisioned Adaptive Container Environment

    DTIC Science & Technology

    2017-08-04

    This project was directed at component-based soft real-time (SRT) systems implemented on multicore platforms. To facilitate...upon average-case or near-average-case task execution times. The main intellectual contribution of this project was the development of methods for...allocating CPU time to components and associated analysis for validating SRT correctness.

  11. Development of microwave rainfall retrieval algorithm for climate applications

    NASA Astrophysics Data System (ADS)

    KIM, J. H.; Shin, D. B.

    2014-12-01

    With satellite datasets accumulated over decades, satellite-based data can contribute to sustained climate applications. Level-3 products from microwave sensors for climate applications can be obtained from several algorithms. For example, the Microwave Emission brightness Temperature Histogram (METH) algorithm produces level-3 rainfalls directly, whereas the Goddard profiling (GPROF) algorithm first generates instantaneous rainfalls and then a temporal and spatial averaging process leads to level-3 products. The rainfall algorithm developed in this study follows a similar approach of averaging instantaneous rainfalls. However, the algorithm is designed to produce instantaneous rainfalls at an optimal resolution showing reduced non-linearity in brightness temperature (TB)-rain rate (R) relations. This resolution tends to effectively utilize emission channels, whose footprints are relatively larger than those of scattering channels. This algorithm is mainly composed of a-priori databases (DBs) and a Bayesian inversion module. The DB contains massive pairs of simulated microwave TBs and rain rates, obtained by WRF (version 3.4) and RTTOV (version 11.1) simulations. To improve the accuracy and efficiency of the retrieval process, a data mining technique is additionally employed. The entire DB is classified into eight types based on Köppen climate classification criteria using reanalysis data. Among these sub-DBs, only the sub-DB that presents the most similar physical characteristics is selected by considering the thermodynamics of the input data. When the Bayesian inversion is applied to the selected DB, instantaneous rain rates at 6-hour intervals are retrieved. The retrieved monthly mean rainfalls are statistically compared with CMAP and GPCP.
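
    A minimal sketch of the Bayesian inversion step, in which the retrieved rain rate is a likelihood-weighted average of database rain rates; the Gaussian likelihood, the diagonal observation-error standard deviation, the channel count, and the synthetic database are all assumptions.

```python
import numpy as np

def bayesian_rain_rate(tb_obs, tb_db, rr_db, sigma=2.0):
    """Retrieve a rain rate by Bayesian weighting of a-priori database
    entries. tb_obs: observed TBs (n_channels,); tb_db: simulated TBs
    (n_entries, n_channels); rr_db: simulated rain rates (n_entries,).
    A diagonal observation-error covariance with std `sigma` K is assumed."""
    d2 = np.sum((tb_db - tb_obs) ** 2, axis=1) / sigma ** 2
    w = np.exp(-0.5 * d2)
    return np.sum(w * rr_db) / np.sum(w)

# Hypothetical 4-channel database with 1,000 simulated profiles.
rng = np.random.default_rng(2)
tb_db = rng.normal(250.0, 15.0, size=(1000, 4))
rr_db = rng.gamma(2.0, 2.0, size=1000)
print(bayesian_rain_rate(tb_db[0] + 1.0, tb_db, rr_db))
```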

  12. Sleep stage classification with low complexity and low bit rate.

    PubMed

    Virkkala, Jussi; Värri, Alpo; Hasan, Joel; Himanen, Sari-Leena; Müller, Kiti

    2009-01-01

    Standard sleep stage classification is based on visual analysis of central (usually also frontal and occipital) EEG, two-channel EOG, and submental EMG signals. The process is complex, using multiple electrodes, and is usually based on relatively high (200-500 Hz) sampling rates. Also, at least 12-bit analog-to-digital conversion is recommended (with 16-bit storage), resulting in a total bit rate of at least 12.8 kbit/s. This is not a problem for in-house laboratory sleep studies, but in the case of online wireless self-applicable ambulatory sleep studies, lower complexity and lower bit rates are preferred. In this study we further developed an earlier single-channel facial EMG/EOG/EEG-based automatic sleep stage classification. An algorithm with a simple decision tree separated 30 s epochs into wakefulness, SREM, S1/S2 and SWS using 18-45 Hz beta power and 0.5-6 Hz amplitude. Improvements included low-complexity recursive digital filtering. We also evaluated the effects of a reduced sampling rate, reduced number of quantization steps and reduced dynamic range on the sleep data of 132 training and 131 testing subjects. With the studied algorithm, it was possible to reduce the sampling rate to 50 Hz (having a low pass filter at 90 Hz), and the dynamic range to 244 microV, with an 8-bit resolution resulting in a bit rate of 0.4 kbit/s. Facial electrodes and a low bit rate enable the use of smaller devices for sleep stage classification in home environments.
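
    The quoted bit rates follow directly from sampling rate, sample resolution, and channel count; the short check below assumes a minimum four-channel montage for the conventional figure (an assumption consistent with the quoted 12.8 kbit/s) and a single facial channel for the proposed setup.

```python
def bit_rate_kbps(sampling_hz, bits_per_sample, channels=1):
    """Raw signal bit rate in kbit/s."""
    return sampling_hz * bits_per_sample * channels / 1000.0

# Conventional recording: assumed 4 channels at 200 Hz stored with 16 bits.
print(bit_rate_kbps(200, 16, channels=4))   # 12.8 kbit/s
# Proposed single facial channel: 50 Hz at 8 bits.
print(bit_rate_kbps(50, 8, channels=1))     # 0.4 kbit/s
```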

  13. Efficacy measures associated to a plantar pressure based classification system in diabetic foot medicine.

    PubMed

    Deschamps, Kevin; Matricali, Giovanni Arnoldo; Desmet, Dirk; Roosen, Philip; Keijsers, Noel; Nobels, Frank; Bruyninckx, Herman; Staes, Filip

    2016-09-01

    The concept of 'classification' has, as with many other diseases, been found to be fundamental in the field of diabetic medicine. In the current study, we aimed at determining efficacy measures of a recently published plantar pressure based classification system. Technical efficacy of the classification system was investigated by applying a high resolution, pixel-level analysis on the normalized plantar pressure pedobarographic fields of the original experimental dataset consisting of 97 patients with diabetes and 33 persons without diabetes. Clinical efficacy was assessed by considering the occurrence of foot ulcers at the plantar aspect of the forefoot in this dataset. Classification efficacy was assessed by determining the classification recognition rate as well as its sensitivity and specificity using cross-validation subsets of the experimental dataset together with a novel cohort of 12 patients with diabetes. Pixel-level comparison of the four groups associated with the classification system highlighted distinct regional differences. Retrospective analysis showed the occurrence of eleven foot ulcers in the experimental dataset since their gait analysis. Eight out of the eleven ulcers developed in a region of the foot which had the highest forces. The overall classification recognition rate exceeded 90% for all cross-validation subsets. Sensitivity and specificity of the four groups associated with the classification system exceeded the 0.7 and 0.8 levels, respectively, in all cross-validation subsets. The results of the current study support the use of the novel plantar pressure based classification system in diabetic foot medicine. It may particularly serve in communication, diagnosis and clinical decision making. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Three-Way Analysis of Spectrospatial Electromyography Data: Classification and Interpretation

    PubMed Central

    Kauppi, Jukka-Pekka; Hahne, Janne; Müller, Klaus-Robert; Hyvärinen, Aapo

    2015-01-01

    Classifying multivariate electromyography (EMG) data is an important problem in prosthesis control as well as in neurophysiological studies and diagnosis. With modern high-density EMG sensor technology, it is possible to capture the rich spectrospatial structure of the myoelectric activity. We hypothesize that multi-way machine learning methods can efficiently utilize this structure in classification as well as reveal interesting patterns in it. To this end, we investigate the suitability of existing three-way classification methods to EMG-based hand movement classification in the spectrospatial domain, as well as extend these methods by sparsification and regularization. We propose to use Fourier-domain independent component analysis as preprocessing to improve classification and interpretability of the results. In high-density EMG experiments on hand movements across 10 subjects, three-way classification yielded higher average performance compared with state-of-the-art classification based on temporal features, suggesting that the three-way analysis approach can efficiently utilize detailed spectrospatial information of high-density EMG. Phase and amplitude patterns of features selected by the classifier in finger-movement data were found to be consistent with known physiology. Thus, our approach can accurately resolve hand and finger movements on the basis of detailed spectrospatial information, and at the same time allows for physiological interpretation of the results. PMID:26039100

  15. Combining various types of classifiers and features extracted from magnetic resonance imaging data in schizophrenia recognition.

    PubMed

    Janousova, Eva; Schwarz, Daniel; Kasparek, Tomas

    2015-06-30

    We investigated a combination of three classification algorithms, namely the modified maximum uncertainty linear discriminant analysis (mMLDA), the centroid method, and the average linkage, with three types of features extracted from three-dimensional T1-weighted magnetic resonance (MR) brain images, specifically MR intensities, grey matter densities, and local deformations, for distinguishing 49 first-episode schizophrenia male patients from 49 healthy male subjects. The feature sets were reduced using intersubject principal component analysis before classification. By combining the classifiers, we were able to obtain slightly improved results when compared with single classifiers. The best classification performance (81.6% accuracy, 75.5% sensitivity, and 87.8% specificity) was significantly better than classification by chance. We also showed that classifiers based on features calculated using more computation-intensive image preprocessing perform better; mMLDA with a classification boundary calculated as the weighted mean of the groups' discriminative scores had improved sensitivity but similar accuracy compared to the original MLDA; and reducing the number of eigenvectors during data reduction did not always lead to higher classification accuracy, since noise as well as signal important for classification was removed. Our findings provide important information for schizophrenia research and may improve the accuracy of computer-aided diagnostics of neuropsychiatric diseases. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  16. Influence of Texture and Colour in Breast TMA Classification

    PubMed Central

    Fernández-Carrobles, M. Milagro; Bueno, Gloria; Déniz, Oscar; Salido, Jesús; García-Rojo, Marcial; González-López, Lucía

    2015-01-01

    Breast cancer diagnosis is still done by observation of biopsies under the microscope. The development of automated methods for breast TMA classification would reduce diagnostic time. This paper is a step towards solving this problem and presents a complete study of breast TMA classification based on colour models and texture descriptors. The TMA images were divided into four classes: i) benign stromal tissue with cellularity, ii) adipose tissue, iii) benign and benign anomalous structures, and iv) ductal and lobular carcinomas. A relevant set of features was obtained on eight different colour models from first and second order Haralick statistical descriptors obtained from the intensity image, Fourier, Wavelets, Multiresolution Gabor, M-LBP and textons descriptors. Furthermore, four types of classification experiments were performed using six different classifiers: (1) classification per colour model individually, (2) classification by combination of colour models, (3) classification by combination of colour models and descriptors, and (4) classification by combination of colour models and descriptors with a previous feature set reduction. The best result shows an average of 99.05% accuracy and 98.34% positive predictive value. These results have been obtained by means of a bagging tree classifier with a combination of six colour models and the use of 1719 non-correlated (correlation threshold of 97%) textural features based on Statistical, M-LBP, Gabor and Spatial textons descriptors. PMID:26513238
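
    A minimal sketch of one ingredient of the feature set above, second-order Haralick statistics from a grey-level co-occurrence matrix, using scikit-image (whose functions are spelled `greycomatrix`/`greycoprops` in older releases); the patch, distances, and angles are assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def haralick_features(gray_uint8):
    """Second-order Haralick-style statistics from a grey-level
    co-occurrence matrix, one of the texture descriptor families
    used in the study."""
    glcm = graycomatrix(gray_uint8, distances=[1],
                        angles=[0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    return {p: graycoprops(glcm, p).mean()
            for p in ("contrast", "correlation", "energy", "homogeneity")}

# Hypothetical 8-bit TMA image patch.
rng = np.random.default_rng(3)
patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
print(haralick_features(patch))
```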

  17. A systematic review of the Robson classification for caesarean section: what works, doesn't work and how to improve it.

    PubMed

    Betrán, Ana Pilar; Vindevoghel, Nadia; Souza, Joao Paulo; Gülmezoglu, A Metin; Torloni, Maria Regina

    2014-01-01

    Caesarean section (CS) rates continue to increase worldwide without a clear understanding of the main drivers and consequences. The lack of a standardized internationally-accepted classification system to monitor and compare CS rates is one of the barriers to a better understanding of this trend. The Robson 10-group classification is based on simple obstetrical parameters (parity, previous CS, gestational age, onset of labour, fetal presentation and number of fetuses) and does not involve the indication for CS. This classification has become very popular over recent years in many countries. We conducted a systematic review to synthesize the experience of users on the implementation of this classification and proposed adaptations. Four electronic databases were searched. A three-step thematic synthesis approach and a qualitative metasummary method were used. 232 unique reports were identified, 97 were selected for full-text evaluation and 73 were included. These publications reported on the use of Robson's classification in over 33 million women from 31 countries. According to users, the main strengths of the classification are its simplicity, robustness, reliability and flexibility. However, missing data, misclassification of women and lack of definition or consensus on core variables of the classification are challenges. To improve the classification for local use and to decrease heterogeneity within groups, several subdivisions in each of the 10 groups have been proposed. Group 5 (women with previous CS) received the largest number of suggestions. The use of the Robson classification is increasing rapidly and spontaneously worldwide. Despite some limitations, this classification is easy to implement and interpret. Several suggested modifications could be useful to help facilities and countries as they work towards its implementation.

  18. A Systematic Review of the Robson Classification for Caesarean Section: What Works, Doesn't Work and How to Improve It

    PubMed Central

    Betrán, Ana Pilar; Vindevoghel, Nadia; Souza, Joao Paulo; Gülmezoglu, A. Metin; Torloni, Maria Regina

    2014-01-01

    Background Caesarean section (CS) rates continue to increase worldwide without a clear understanding of the main drivers and consequences. The lack of a standardized internationally-accepted classification system to monitor and compare CS rates is one of the barriers to a better understanding of this trend. The Robson 10-group classification is based on simple obstetrical parameters (parity, previous CS, gestational age, onset of labour, fetal presentation and number of fetuses) and does not involve the indication for CS. This classification has become very popular over recent years in many countries. We conducted a systematic review to synthesize the experience of users on the implementation of this classification and proposed adaptations. Methods Four electronic databases were searched. A three-step thematic synthesis approach and a qualitative metasummary method were used. Results 232 unique reports were identified, 97 were selected for full-text evaluation and 73 were included. These publications reported on the use of Robson's classification in over 33 million women from 31 countries. According to users, the main strengths of the classification are its simplicity, robustness, reliability and flexibility. However, missing data, misclassification of women and lack of definition or consensus on core variables of the classification are challenges. To improve the classification for local use and to decrease heterogeneity within groups, several subdivisions in each of the 10 groups have been proposed. Group 5 (women with previous CS) received the largest number of suggestions. Conclusions The use of the Robson classification is increasing rapidly and spontaneously worldwide. Despite some limitations, this classification is easy to implement and interpret. Several suggested modifications could be useful to help facilities and countries as they work towards its implementation. PMID:24892928

  19. A dictionary learning approach for human sperm heads classification.

    PubMed

    Shaker, Fariba; Monadjemi, S Amirhassan; Alirezaie, Javad; Naghsh-Nilchi, Ahmad Reza

    2017-12-01

    To diagnose infertility in men, semen analysis is conducted in which sperm morphology is one of the factors that are evaluated. Since manual assessment of sperm morphology is time-consuming and subjective, automatic classification methods are being developed. Automatic classification of sperm heads is a complicated task due to the intra-class differences and inter-class similarities of class objects. In this research, a Dictionary Learning (DL) technique is utilized to construct a dictionary of sperm head shapes. This dictionary is used to classify the sperm heads into four different classes. Square patches are extracted from the sperm head images. Columnized patches from each class of sperm are used to learn class-specific dictionaries. The patches from a test image are reconstructed using each class-specific dictionary and the overall reconstruction error for each class is used to select the best matching class. Average accuracy, precision, recall, and F-score are used to evaluate the classification method. The method is evaluated using two publicly available datasets of human sperm head shapes. The proposed DL based method achieved an average accuracy of 92.2% on the HuSHeM dataset, and an average recall of 62% on the SCIAN-MorphoSpermGS dataset. The results show a significant improvement compared to a previously published shape-feature-based method. We have achieved high-performance results. In addition, our proposed approach offers a more balanced classifier in which all four classes are recognized with high precision and recall. In this paper, we use a Dictionary Learning approach in classifying human sperm heads. It is shown that the Dictionary Learning method is far more effective in classifying human sperm heads than classifiers using shape-based features. Also, a dataset of human sperm head shapes is introduced to facilitate future research. Copyright © 2017 Elsevier Ltd. All rights reserved.
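
    The decision rule described above (learn one dictionary per class, then assign a test image to the class whose dictionary reconstructs its patches with the smallest error) can be sketched with an off-the-shelf dictionary learner; the patch size, dictionary size, sparsity level, and the random data standing in for real sperm-head patches are all assumptions.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

# Train one dictionary per sperm-head class from columnized patches
# (random data stands in for the real patch matrices).
rng = np.random.default_rng(4)
train_patches = {c: rng.standard_normal((500, 25)) for c in range(4)}  # 5x5 patches

dictionaries = {}
for c, X in train_patches.items():
    dl = MiniBatchDictionaryLearning(n_components=32, transform_algorithm="omp",
                                     transform_n_nonzero_coefs=5, random_state=0)
    dl.fit(X)
    dictionaries[c] = dl

def classify(patches, dictionaries):
    """Assign the class whose dictionary reconstructs the test patches
    with the smallest total error."""
    errors = {}
    for c, dl in dictionaries.items():
        codes = dl.transform(patches)
        recon = codes @ dl.components_
        errors[c] = np.sum((patches - recon) ** 2)
    return min(errors, key=errors.get)

test_patches = rng.standard_normal((100, 25))
print(classify(test_patches, dictionaries))
```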

  20. Typing mineral deposits using their grades and tonnages in an artificial neural network

    USGS Publications Warehouse

    Singer, Donald A.; Kouda, Ryoichi

    2003-01-01

    A test of the ability of a probabilistic neural network to classify deposits into types on the basis of deposit tonnage and average Cu, Mo, Ag, Au, Zn, and Pb grades is conducted. The purpose is to examine whether this type of system might serve as a basis for integrating geoscience information available in large mineral databases to classify sites by deposit type. Benefits of proper classification of many sites in large regions are relatively rapid identification of terranes permissive for deposit types and recognition of specific sites perhaps worthy of exploring further. Total tonnages and average grades of 1,137 well-explored deposits identified in published grade and tonnage models representing 13 deposit types were used to train and test the network. Tonnages were transformed by logarithms and grades by square roots to reduce effects of skewness. All values were scaled by subtracting the variable's mean and dividing by its standard deviation. Half of the deposits were selected randomly to be used in training the probabilistic neural network and the other half were used for independent testing. Tests were performed with a probabilistic neural network employing a Gaussian kernel and separate sigma weights for each class (type) and each variable (grade or tonnage). Deposit types were selected to challenge the neural network. For many types, tonnages or average grades are significantly different from other types, but individual deposits may plot in the grade and tonnage space of more than one type. Porphyry Cu, porphyry Cu-Au, and porphyry Cu-Mo types have similar tonnages and relatively small differences in grades. Redbed Cu deposits typically have tonnages that could be confused with porphyry Cu deposits, and they also contain Cu and, in some situations, Ag. Cyprus and kuroko massive sulfide types have about the same tonnages and Cu, Zn, Ag, and Au grades. Polymetallic vein, sedimentary exhalative Zn-Pb, and Zn-Pb skarn types contain many of the same metals. Sediment-hosted Au, Comstock Au-Ag, and low-sulfide Au-quartz vein types are principally Au deposits with differing amounts of Ag. Given the intent to test the neural network under the most difficult conditions, an overall 75% agreement between the experts and the neural network is considered excellent. Among the largest classification errors are skarn Zn-Pb and Cyprus massive sulfide deposits classed by the neural network as kuroko massive sulfides (24% and 63% error, respectively). Other large errors are the classification of 92% of porphyry Cu-Mo as porphyry Cu deposits. Most of the larger classification errors involve 25 or fewer training deposits, suggesting that some errors might be the result of small sample size. About 91% of the gold deposit types were classed properly and 98% of porphyry Cu deposits were classed as some type of porphyry Cu deposit. An experienced economic geologist would not make many of the classification errors that were made by the neural network because the geologic settings of deposits would be used to reduce errors. In a separate test, the probabilistic neural network correctly classed 93% of 336 deposits in eight deposit types when trained with the presence or absence of 58 minerals and six generalized rock types. The overall success rate of the probabilistic neural network when trained on tonnage and average grades would probably be more than 90% with additional information on the presence of a few rock types.
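
    As a rough sketch of the described preprocessing and of a Gaussian-kernel probabilistic neural network decision rule (simplified here to one per-class bandwidth vector rather than the paper's full per-class, per-variable sigma-weight scheme), with synthetic tonnage and grade data:

```python
import numpy as np

def preprocess(tonnage, grades):
    """Log-transform tonnage, square-root-transform grades, then
    standardize each variable, as described for the network inputs."""
    X = np.column_stack([np.log10(tonnage), np.sqrt(grades)])
    return (X - X.mean(axis=0)) / X.std(axis=0)

def pnn_predict(x, train_X, train_y, sigmas):
    """Probabilistic neural network with a Gaussian kernel; `sigmas`
    maps each class to a per-variable bandwidth vector (a simplification
    of the separate sigma weights mentioned in the abstract)."""
    scores = {}
    for c in np.unique(train_y):
        Xc = train_X[train_y == c]
        d2 = np.sum(((Xc - x) / sigmas[c]) ** 2, axis=1)
        scores[c] = np.mean(np.exp(-0.5 * d2))
    return max(scores, key=scores.get)

# Hypothetical training set: tonnage plus one grade, two deposit types.
rng = np.random.default_rng(5)
tonnage = rng.lognormal(3.0, 1.0, 200)
grades = rng.uniform(0.1, 2.0, 200)
X = preprocess(tonnage, grades)
y = (tonnage > np.median(tonnage)).astype(int)
sigmas = {0: np.array([0.5, 0.5]), 1: np.array([0.5, 0.5])}
print(pnn_predict(X[0], X, y, sigmas))
```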

  1. Application of Sal classification to parotid gland fine-needle aspiration cytology: 10-year retrospective analysis of 312 patients.

    PubMed

    Kilavuz, Ahmet Erdem; Songu, Murat; İmre, Abdulkadir; Arslanoğlu, Secil; Özkul, Yilmaz; Pinar, Ercan; Ateş, Düzgün

    2018-05-01

    The accuracy of fine-needle aspiration biopsy (FNAB) is controversial in parotid tumors. We aimed to compare FNAB results with the final histopathological diagnosis and to apply the "Sal classification" to our data and discuss its results and its place in parotid gland cytology. The FNAB cytological findings and final histological diagnosis were assessed retrospectively in 2 different scenarios based on the distribution of nondefinitive cytology, and we applied the Sal classification and determined malignancy rate, sensitivity, and specificity for each category. In 2 different scenarios FNAB sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were found to be 81%, 87%, 54.7%, and 96.1%; and 65.3%, 100%, 100%, and 96.1%, respectively. The malignancy rates and sensitivity and specificity were also calculated and discussed for each Sal category. We believe that the Sal classification has a great potential to be a useful tool in classification of parotid gland cytology. © 2018 Wiley Periodicals, Inc.

  2. Multi-objective evolutionary algorithms for fuzzy classification in survival prediction.

    PubMed

    Jiménez, Fernando; Sánchez, Gracia; Juárez, José M

    2014-03-01

    This paper presents a novel rule-based fuzzy classification methodology for survival/mortality prediction in severe burnt patients. Due to the ethical aspects involved in this medical scenario, physicians tend not to accept a computer-based evaluation unless they understand why and how such a recommendation is given. Therefore, any fuzzy classifier model must be both accurate and interpretable. The proposed methodology is a three-step process: (1) multi-objective constrained optimization of a patient data set, using Pareto-based elitist multi-objective evolutionary algorithms to maximize accuracy and minimize the complexity (number of rules) of classifiers, subject to interpretability constraints; this step produces a set of alternative (Pareto) classifiers; (2) linguistic labeling, which assigns a linguistic label to each fuzzy set of the classifiers; this step is essential to the interpretability of the classifiers; (3) decision making, whereby a classifier is chosen, if it is satisfactory, according to the preferences of the decision maker. If no classifier is satisfactory for the decision maker, the process starts again in step (1) with a different input parameter set. The performance of three multi-objective evolutionary algorithms, the niched pre-selection multi-objective algorithm, the elitist Pareto-based multi-objective evolutionary algorithm for diversity reinforcement (ENORA) and the non-dominated sorting genetic algorithm (NSGA-II), was tested using a patient data set from an intensive care burn unit and a standard machine learning data set from a machine learning repository. The results are compared using the hypervolume multi-objective metric. In addition, the results were compared with other non-evolutionary techniques and validated with a multi-objective cross-validation technique. Our proposal improves the classification rate obtained by other non-evolutionary techniques (decision trees, artificial neural networks, Naive Bayes, and case-based reasoning), obtaining with ENORA a classification rate of 0.9298, specificity of 0.9385, and sensitivity of 0.9364, with 14.2 interpretable fuzzy rules on average. Our proposal improves the accuracy and interpretability of the classifiers, compared with other non-evolutionary techniques. We also conclude that ENORA outperforms the niched pre-selection and NSGA-II algorithms. Moreover, given that our multi-objective evolutionary methodology is non-combinatorial and based on real-parameter optimization, the time cost is significantly reduced compared with other evolutionary approaches in the literature based on combinatorial optimization. Copyright © 2014 Elsevier B.V. All rights reserved.
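
    The paper's evolutionary algorithms are not reproduced here; the sketch below only illustrates the underlying Pareto idea, extracting the non-dominated set when classification error and rule count are both minimized, with hypothetical candidate classifiers.

```python
def pareto_front(solutions):
    """Return the non-dominated solutions when every objective is
    minimized. Each solution is (label, (obj1, obj2, ...))."""
    front = []
    for name, objs in solutions:
        dominated = any(
            all(o2 <= o1 for o1, o2 in zip(objs, other)) and objs != other
            for _, other in solutions
        )
        if not dominated:
            front.append((name, objs))
    return front

# Hypothetical classifiers scored by (classification error, number of rules).
candidates = [
    ("A", (0.07, 20)),
    ("B", (0.08, 12)),
    ("C", (0.10, 14)),   # dominated by B
    ("D", (0.12, 8)),
]
print(pareto_front(candidates))
```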

  3. Early pest detection in soy plantations from hyperspectral measurements: a case study for caterpillar detection

    NASA Astrophysics Data System (ADS)

    Tailanián, Matías; Castiglioni, Enrique; Musé, Pablo; Fernández Flores, Germán.; Lema, Gabriel; Mastrángelo, Pedro; Almansa, Mónica; Fernández Liñares, Ignacio; Fernández Liñares, Germán.

    2015-10-01

    Soybean producers suffer from caterpillar damage in many areas of the world. Estimated average economic losses are 500 million USD annually in Brazil, Argentina, Paraguay and Uruguay. Designing efficient pest control management using selective and targeted pesticide applications is extremely important both from economic and environmental perspectives. With that in mind, we conducted a research program during the 2013-2014 and 2014-2015 planting seasons in a 4,000 ha soybean farm, seeking to achieve early pest detection. Currently, pest presence is evaluated using manual, labor-intensive counting methods based on sampling strategies that are time-consuming and imprecise. The experiment was conducted as follows. Using manual counting methods as ground truth, a spectrometer capturing reflectance from 400 to 1100 nm was used to measure the reflectance of soy plants. A first conclusion, resulting from measuring the spectral response at the leaf level, was that stress is a property of the whole plant, since different leaves with different levels of damage yielded the same spectral response. Then, to assess the applicability of classifying plants as healthy, biotic-stressed or abiotic-stressed, a pipeline combining feature extraction and selection from leaf spectral signatures with a Support Vector Machine (SVM) classifier was designed. Optimization of the SVM parameters using grid search with cross-validation, along with classification evaluation by ten-fold cross-validation, showed a correct classification rate of 95%, consistently across both seasons. Controlled experiments using cages with different numbers of caterpillars--including caterpillar-free plants--were also conducted to evaluate consistency in trends of the spectral response as well as the extracted features.
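
    A minimal sketch of the tuning and evaluation protocol named above (grid search with cross-validation followed by ten-fold cross-validation), using scikit-learn on placeholder data standing in for the extracted spectral features:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data standing in for the extracted spectral features and
# the healthy / biotic-stressed / abiotic-stressed labels.
X, y = make_classification(n_samples=300, n_features=20, n_informative=8,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

pipe = make_pipeline(StandardScaler(), SVC())
grid = GridSearchCV(
    pipe,
    param_grid={"svc__C": [0.1, 1, 10, 100], "svc__gamma": ["scale", 0.01, 0.1]},
    cv=5,
)
grid.fit(X, y)

# Ten-fold cross-validation of the tuned model, as in the evaluation protocol.
scores = cross_val_score(grid.best_estimator_, X, y, cv=10)
print(grid.best_params_, scores.mean())
```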

  4. The Land Cover Dynamics and Conversion of Agricultural Land in Northwestern Bangladesh, 1973-2003.

    NASA Astrophysics Data System (ADS)

    Pervez, M.; Seelan, S. K.; Rundquist, B. C.

    2006-05-01

    The importance of land cover information describing the nature and extent of land resources and changes over time is increasing; this is especially true in Bangladesh, where land cover is changing rapidly. This paper presents research into the land cover dynamics of northwestern Bangladesh for the period 1973-2003 using Landsat satellite images in combination with field survey data collected in January and February 2005. Land cover maps were produced for eight different years during the study period with an average 73 percent overall classification accuracy. The classification results and post-classification change analysis showed that agriculture is the dominant land cover (occupying 74.5 percent of the study area) and is being reduced at a rate of about 3,000 ha per year. In addition, 6.7 percent of the agricultural land is vulnerable to temporary water logging annually. Despite this loss of agricultural land, irrigated agriculture increased substantially until 2000, but has since declined because of diminishing water availability and uncontrolled extraction of groundwater driven by population pressures and the extended need for food. A good agreement (r = 0.73) was found between increases in irrigated land and the depletion of the shallow groundwater table, a factor affecting widely practiced small-scale irrigation in northwestern Bangladesh. Results quantified the land cover change patterns and the stresses placed on natural resources; additionally, they demonstrated an accurate and economical means to map and analyze changes in land cover over time at a regional scale, which can assist decision makers in land and natural resources management decisions.

  5. Classification of Regional Radiographic Emphysematous Patterns Using Low-Attenuation Gap Length Matrix

    NASA Astrophysics Data System (ADS)

    Tan, Kok Liang; Tanaka, Toshiyuki; Nakamura, Hidetoshi; Shirahata, Toru; Sugiura, Hiroaki

    The standard computed-tomography-based method for measuring emphysema uses the percentage of area of low attenuation, which is called the pixel index (PI). However, the PI method is susceptible to an averaging effect, and this causes a discrepancy between what the PI method describes and what radiologists observe. Knowing that visual recognition of the different types of regional radiographic emphysematous tissues in a CT image can be fuzzy, this paper proposes a low-attenuation gap length matrix (LAGLM) based algorithm for classifying regional radiographic lung tissues into four emphysema types, distinguishing, in particular, radiographic patterns that imply obvious or subtle bullous emphysema from those that imply diffuse emphysema or minor destruction of airway walls. A neural network is used for discrimination. The proposed LAGLM method is inspired by, but different from, former texture-based methods such as the gray level run length matrix (GLRLM) and gray level gap length matrix (GLGLM). The proposed algorithm is successfully validated by classifying 105 lung regions that are randomly selected from 270 images. The lung regions are hand-annotated by radiologists beforehand. The average four-class classification accuracies for the proposed algorithm, PI, GLRLM and GLGLM methods are 89.00%, 82.97%, 52.90% and 51.36%, respectively. The p-values from the correlation analyses between the classification results of 270 images and pulmonary function test results are generally less than 0.01. The classification results are useful for follow-up studies, especially for monitoring morphological changes with the progression of pulmonary disease.
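
    The conventional PI baseline mentioned above is simply the share of lung pixels below a low-attenuation threshold; a sketch with an assumed -950 HU cut-off and synthetic data:

```python
import numpy as np

def pixel_index(hu_image, lung_mask, threshold=-950):
    """Percentage of lung pixels whose CT attenuation falls below a
    low-attenuation threshold (the conventional PI measure). The
    -950 HU cut-off is a commonly used value, assumed here for
    illustration."""
    lung = hu_image[lung_mask]
    return 100.0 * np.mean(lung < threshold)

# Hypothetical CT slice in Hounsfield units with a lung mask.
rng = np.random.default_rng(6)
ct = rng.normal(-850, 80, size=(256, 256))
mask = np.ones_like(ct, dtype=bool)
print(f"PI = {pixel_index(ct, mask):.1f}%")
```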

  6. Infant homicide and accidental death in the United States, 1940-2005: ethics and epidemiological classification.

    PubMed

    Riggs, Jack E; Hobbs, Gerald R

    2011-07-01

    Potential ethical issues can arise during the process of epidemiological classification. For example, unnatural infant deaths are classified as accidental deaths or homicides. Societal sensitivity to the physical abuse and neglect of children has increased over recent decades. This enhanced sensitivity could impact reported infant homicide rates. Infant homicide and accident mortality rates in boys and girls in the USA from 1940 to 2005 were analysed. In 1940, infant accident mortality rates were over 20 times greater than infant homicide rates in both boys and girls. After about 1980, when the ratio of infant accident mortality rates to infant homicide rates decreased to less than five, and the sum of infant accident and homicide rates became relatively constant, further decreases in infant accident mortality rates were associated with increases in reported infant homicide rates. These findings suggest that the dramatic decline of accidental infant mortality and recent increased societal sensitivity to child abuse may be related to the increased infant homicide rates observed in the USA since 1980 rather than an actual increase in societal violence directed against infants. Ethical consequences of epidemiological classification, involving the principles of beneficence, non-maleficence and justice, are suggested by observed patterns in infant accidental deaths and homicides in the USA from 1940 to 2005.

  7. Convolutional Neural Network for Histopathological Analysis of Osteosarcoma.

    PubMed

    Mishra, Rashika; Daescu, Ovidiu; Leavey, Patrick; Rakheja, Dinesh; Sengupta, Anita

    2018-03-01

    Pathologists often deal with high complexity and sometimes disagreement over osteosarcoma tumor classification due to cellular heterogeneity in the dataset. Segmentation and classification of histology tissue in H&E stained tumor image datasets is a challenging task because of intra-class variations, inter-class similarity, crowded context, and noisy data. In recent years, deep learning approaches have led to encouraging results in breast cancer and prostate cancer analysis. In this article, we propose a convolutional neural network (CNN) as a tool to improve the efficiency and accuracy of osteosarcoma tumor classification into tumor classes (viable tumor, necrosis) versus nontumor. The proposed CNN architecture contains eight learned layers: three sets of two stacked convolutional layers interspersed with max pooling layers for feature extraction, and two fully connected layers; data augmentation strategies are used to boost performance. The network achieves an average classification accuracy of 92%. We compare the proposed architecture with three existing and proven CNN architectures for image classification: AlexNet, LeNet, and VGGNet. We also provide a pipeline to calculate the percentage of necrosis in a given whole slide image. We conclude that the use of neural networks can assure both high accuracy and efficiency in osteosarcoma classification.
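
    The abstract fixes the layer layout but not the kernel sizes, channel widths, or input resolution, so the following PyTorch sketch fills those in with assumed values purely to illustrate the stated structure (three blocks of two convolutions plus max pooling, then two fully connected layers) for a three-class output.

```python
import torch
import torch.nn as nn

class OsteosarcomaCNN(nn.Module):
    """Sketch of the described layout: three blocks of two convolutional
    layers, each block followed by max pooling, then two fully connected
    layers. Channel widths, kernel sizes, and the 128x128 RGB input are
    assumptions, not values from the paper."""
    def __init__(self, n_classes=3):
        super().__init__()
        def block(c_in, c_out):
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
                nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
            )
        self.features = nn.Sequential(block(3, 32), block(32, 64), block(64, 128))
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 256), nn.ReLU(),
            nn.Linear(256, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = OsteosarcomaCNN()
print(model(torch.randn(1, 3, 128, 128)).shape)  # torch.Size([1, 3])
```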

  8. EEG classification for motor imagery and resting state in BCI applications using multi-class Adaboost extreme learning machine

    NASA Astrophysics Data System (ADS)

    Gao, Lin; Cheng, Wei; Zhang, Jinhua; Wang, Jue

    2016-08-01

    Brain-computer interface (BCI) systems provide an alternative communication and control approach for people with limited motor function. Therefore, the feature extraction and classification approach should differentiate the relatively unusual state of motion intention from the common resting state. In this paper, we sought a novel approach for multi-class classification in BCI applications. We collected electroencephalographic (EEG) signals registered by electrodes placed over the scalp during left hand motor imagery, right hand motor imagery, and resting state for ten healthy human subjects. We proposed using the Kolmogorov complexity (Kc) for feature extraction and a multi-class Adaboost classifier with an extreme learning machine as the base classifier, in order to classify the three-class EEG samples. An average classification accuracy of 79.5% was obtained for ten subjects, which greatly outperformed commonly used approaches. Thus, it is concluded that the proposed method could improve the performance of classification of motor imagery tasks for multi-class samples. It could be applied in further studies to generate the control commands to initiate the movement of a robotic exoskeleton or orthosis, which would ultimately facilitate the rehabilitation of disabled people.

  9. Ground-based cloud classification by learning stable local binary patterns

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Shi, Cunzhao; Wang, Chunheng; Xiao, Baihua

    2018-07-01

    Feature selection and extraction is the first step in implementing pattern classification. The same is true for ground-based cloud classification. Histogram features based on local binary patterns (LBPs) are widely used to classify texture images. However, the conventional uniform LBP approach cannot capture all the dominant patterns in cloud texture images, thereby resulting in low classification performance. In this study, a robust feature extraction method by learning stable LBPs is proposed, based on the averaged ranks of the occurrence frequencies of all rotation-invariant patterns defined in the LBPs of cloud images. The proposed method is validated with a ground-based cloud classification database comprising five cloud types. Experimental results demonstrate that the proposed method achieves significantly higher classification accuracy than the uniform LBP, local texture patterns (LTP), dominant LBP (DLBP), completed LBP (CLBP) and salient LBP (SaLBP) methods on this cloud image database and under different noise conditions. The performance of the proposed method is comparable with that of the popular deep convolutional neural network (DCNN) method, but with lower computational complexity. Furthermore, the proposed method also achieves superior performance on an independent test data set.
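
    As a hedged illustration of the underlying LBP ingredient (the paper's learning of stable patterns by ranking average occurrence frequencies is not reproduced), the sketch below builds a rotation-invariant LBP code histogram for a cloud image with scikit-image; the neighbourhood size and radius are assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, n_points=8, radius=1):
    """Rotation-invariant LBP code histogram for a cloud image. The
    paper keeps a learned subset of stable patterns ranked by their
    averaged occurrence frequencies; here the full histogram is shown."""
    codes = local_binary_pattern(gray, n_points, radius, method="ror")
    hist, _ = np.histogram(codes, bins=np.arange(2 ** n_points + 1), density=True)
    return hist

# Hypothetical 8-bit grayscale sky image.
rng = np.random.default_rng(7)
sky = (rng.random((128, 128)) * 255).astype(np.uint8)
print(lbp_histogram(sky).shape)
```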

  10. Plus disease in retinopathy of prematurity: a continuous spectrum of vascular abnormality as basis of diagnostic variability

    PubMed Central

    Campbell, J. Peter; Kalpathy-Cramer, Jayashree; Erdogmus, Deniz; Tian, Peng; Kedarisetti, Dharanish; Moleta, Chace; Reynolds, James D.; Hutcheson, Kelly; Shapiro, Michael J.; Repka, Michael X.; Ferrone, Philip; Drenser, Kimberly; Horowitz, Jason; Sonmez, Kemal; Swan, Ryan; Ostmo, Susan; Jonas, Karyn E.; Chan, R.V. Paul; Chiang, Michael F.

    2016-01-01

    Objective To identify patterns of inter-expert discrepancy in plus disease diagnosis in retinopathy of prematurity (ROP). Design We developed two datasets of clinical images of varying disease severity (100 images and 34 images) as part of the Imaging and Informatics in ROP study, and determined a consensus reference standard diagnosis (RSD) for each image, based on 3 independent image graders and the clinical exam. We recruited 8 expert ROP clinicians to classify these images and compared the distribution of classifications between experts and the RSD. Subjects, Participants, and/or Controls Images obtained during routine ROP screening in neonatal intensive care units. 8 participating experts with >10 years of clinical ROP experience and >5 peer-reviewed ROP publications. Methods, Intervention, or Testing Expert classification of images of plus disease in ROP. Main Outcome Measures Inter-expert agreement (weighted kappa statistic), and agreement and bias on ordinal classification between experts (ANOVA) and the RSD (percent agreement). Results There was variable inter-expert agreement on diagnostic classifications between the 8 experts and the RSD (weighted kappa 0 – 0.75, mean 0.30). RSD agreement ranged from 80 – 94% agreement for the dataset of 100 images, and 29 – 79% for the dataset of 34 images. However, when images were ranked in order of disease severity (by average expert classification), the pattern of expert classification revealed a consistent systematic bias for each expert consistent with unique cut points for the diagnosis of plus disease and pre-plus disease. The two-way ANOVA model suggested a highly significant effect of both image and user on the average score (P<0.05, adjusted R2=0.82 for dataset A, and P< 0.05 and adjusted R2 =0.6615 for dataset B). Conclusions and Relevance There is wide variability in the classification of plus disease by ROP experts, which occurs because experts have different “cut-points” for the amounts of vascular abnormality required for presence of plus and pre-plus disease. This has important implications for research, teaching and patient care for ROP, and suggests that a continuous ROP plus disease severity score may more accurately reflect the behavior of expert ROP clinicians, and may better standardize classification in the future. PMID:27591053

  11. Plus Disease in Retinopathy of Prematurity: A Continuous Spectrum of Vascular Abnormality as a Basis of Diagnostic Variability.

    PubMed

    Campbell, J Peter; Kalpathy-Cramer, Jayashree; Erdogmus, Deniz; Tian, Peng; Kedarisetti, Dharanish; Moleta, Chace; Reynolds, James D; Hutcheson, Kelly; Shapiro, Michael J; Repka, Michael X; Ferrone, Philip; Drenser, Kimberly; Horowitz, Jason; Sonmez, Kemal; Swan, Ryan; Ostmo, Susan; Jonas, Karyn E; Chan, R V Paul; Chiang, Michael F

    2016-11-01

    To identify patterns of interexpert discrepancy in plus disease diagnosis in retinopathy of prematurity (ROP). We developed 2 datasets of clinical images as part of the Imaging and Informatics in ROP study and determined a consensus reference standard diagnosis (RSD) for each image based on 3 independent image graders and the clinical examination results. We recruited 8 expert ROP clinicians to classify these images and compared the distribution of classifications between experts and the RSD. Eight participating experts with more than 10 years of clinical ROP experience and more than 5 peer-reviewed ROP publications who analyzed images obtained during routine ROP screening in neonatal intensive care units. Expert classification of images of plus disease in ROP. Interexpert agreement (weighted κ statistic) and agreement and bias on ordinal classification between experts (analysis of variance [ANOVA]) and the RSD (percent agreement). There was variable interexpert agreement on diagnostic classifications between the 8 experts and the RSD (weighted κ, 0-0.75; mean, 0.30). The RSD agreement ranged from 80% to 94% for the dataset of 100 images and from 29% to 79% for the dataset of 34 images. However, when images were ranked in order of disease severity (by average expert classification), the pattern of expert classification revealed a consistent systematic bias for each expert consistent with unique cut points for the diagnosis of plus disease and preplus disease. The 2-way ANOVA model suggested a highly significant effect of both image and user on the average score (dataset A: P < 0.05 and adjusted R² = 0.82; and dataset B: P < 0.05 and adjusted R² = 0.6615). There is wide variability in the classification of plus disease by ROP experts, which occurs because experts have different cut points for the amounts of vascular abnormality required for presence of plus and preplus disease. This has important implications for research, teaching, and patient care for ROP and suggests that a continuous ROP plus disease severity score may reflect more accurately the behavior of expert ROP clinicians and may better standardize classification in the future. Copyright © 2016 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.

  12. Automated classification of optical coherence tomography images of human atrial tissue

    NASA Astrophysics Data System (ADS)

    Gan, Yu; Tsay, David; Amir, Syed B.; Marboe, Charles C.; Hendon, Christine P.

    2016-10-01

    Tissue composition of the atria plays a critical role in the pathology of cardiovascular disease, tissue remodeling, and arrhythmogenic substrates. Optical coherence tomography (OCT) has the ability to capture the tissue composition information of the human atria. In this study, we developed a region-based automated method to classify tissue compositions within human atria samples within OCT images. We segmented regional information without prior information about the tissue architecture and subsequently extracted features within each segmented region. A relevance vector machine model was used to perform automated classification. Segmentation of human atrial ex vivo datasets was correlated with trichrome histology and our classification algorithm had an average accuracy of 80.41% for identifying adipose, myocardium, fibrotic myocardium, and collagen tissue compositions.

  13. Incidence Rates and Trend of Serious Farm-Related Injury in Minnesota, 2000-2011.

    PubMed

    Landsteiner, Adrienne M K; McGovern, Patricia M; Alexander, Bruce H; Lindgren, Paula G; Williams, Allan N

    2015-01-01

    Although only about 2% of Minnesota's workers were employed in agriculture for the years 2005-2012, this small portion of the workforce accounted for 31% of the 563 work-related deaths that occurred in Minnesota during that same time period. Agricultural fatalities in Minnesota and elsewhere are well documented; however, nonfatal injuries are not. To explore the burden of injury, Minnesota hospital discharge data were used to examine rates and trends of farm injury for the years 2000-2011. Cases were identified through the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM), injury codes and external cause of injury codes (E codes). Probable cases were defined as E code E849.1 (occurred on a farm) or E919.0 (involving agricultural machinery). Possible cases were based on five less specific E codes primarily involving animals or pesticides. Multiple data sources were used to estimate the agricultural population. An annual average of over 500 cases was identified as probable, whereas 2,000 cases were identified as possible. Trend analysis of all identified cases indicated a small but significant average annual increase of 1.5% for the time period 2000-2011. Probable cases were predominantly male (81.5%), whereas possible cases were predominantly female (63.9%). The average age of an injury case was 38.5 years, with the majority of injuries occurring in late summer and fall months. Despite the undercount of less serious injuries, hospital discharge data provide a meaningful data source for the identification and surveillance of nonfatal agricultural injuries. These methods could be utilized by other states for ongoing surveillance of nonfatal agricultural injuries.

  14. Percutaneous locking plates for fractures of the distal tibia: our experience and a review of the literature.

    PubMed

    Ahmad, Mudussar Abrar; Sivaraman, Alagappan; Zia, Ahmed; Rai, Amarjit; Patel, Amratlal D

    2012-02-01

    Distal tibial metaphyseal fractures pose many complexities. This study assessed the outcomes of distal tibial fractures treated with medial locking plates. Eighteen patients were selected based on the fracture pattern, classified using the AO classification, and stabilized with an AO medial tibial locking plate. Time to fracture union, complications, and outcomes were assessed with the American Orthopedic Foot and Ankle Society ankle score at 12 months. Sixteen of the 18 patients achieved fracture union, with 1 patient lost to follow-up. Twelve fractures united within 24 weeks, with an average union time of 23.1 weeks. There were three delayed unions: two at 28 weeks and one at 56 weeks. The average time to union was 32 weeks in the smokers and 15.3 weeks in the nonsmokers. Five of the 18 patients (27%) developed complications: one superficial wound infection, and one chronic wound infection resulting in nonunion at 56 weeks and requiring revision. Two patients required plate removal, one after sustaining an open fracture at the proximal end of the plate 6 months after surgery (post fracture union) and the other for painful hardware. One patient had implant failure of three proximal diaphyseal locking screws at the screwhead/neck junction, but successful fracture union. The average American Orthopedic Foot and Ankle Society ankle score was 88.8 overall, and 92.1 in fractures that united within 24 weeks. Distal tibial locking plates have high fracture union rates, minimal soft tissue complications, and good functional outcomes. The literature shows similar fracture union and complication rates in locking and nonlocking plates. Copyright © 2012 by Lippincott Williams & Wilkins

  15. Spectral dependence of texture features integrated with hyperspectral data for area target classification improvement

    NASA Astrophysics Data System (ADS)

    Bangs, Corey F.; Kruse, Fred A.; Olsen, Chris R.

    2013-05-01

    Hyperspectral data were assessed to determine the effect of integrating spectral data and extracted texture feature data on classification accuracy. Four separate spectral ranges (hundreds of spectral bands total) were used from the Visible and Near Infrared (VNIR) and Shortwave Infrared (SWIR) portions of the electromagnetic spectrum. Haralick texture features (contrast, entropy, and correlation) were extracted from the average gray-level image for each of the four spectral ranges studied. A maximum likelihood classifier was trained using a set of ground truth regions of interest (ROIs) and applied separately to the spectral data, the texture data, and a fused dataset containing both. Classification accuracy was measured by comparing results to a separate verification set of test ROIs. Analysis indicates that the spectral range (the source of the gray-level image) used to extract the texture feature data has a significant effect on classification accuracy. This result applies to texture-only classifications as well as to the classification of integrated spectral and texture feature datasets. Overall classification improvement for the integrated datasets was near 1%. Individual improvement for integrated spectral and texture classification of the "Urban" class showed an approximately 9% accuracy increase over spectral-only classification. Texture-only classification accuracy was highest for the "Dirt Path" class, at approximately 92% for the spectral range from 947 to 1343 nm. This research demonstrates the effectiveness of texture feature data for more accurate analysis of hyperspectral data and the importance of selecting the correct spectral range for the gray-level image from which these features are extracted.
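
    As a rough illustration of the texture-extraction step, the sketch below computes Haralick contrast, correlation, and entropy from a gray-level co-occurrence matrix with scikit-image. The band averaging, the quantization to 64 levels, and the single offset are assumptions, not the authors' exact settings:

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops

      def haralick_features(gray, levels=64, distance=1, angle=0.0):
          """Contrast, correlation, and entropy from a GLCM of a gray-level image."""
          # Quantize the (assumed float) gray-level image to a fixed number of levels.
          q = np.digitize(gray, np.linspace(gray.min(), gray.max(), levels)) - 1
          glcm = graycomatrix(q.astype(np.uint8), [distance], [angle],
                              levels=levels, symmetric=True, normed=True)
          contrast = graycoprops(glcm, "contrast")[0, 0]
          correlation = graycoprops(glcm, "correlation")[0, 0]
          p = glcm[:, :, 0, 0]
          entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))  # entropy is not built into graycoprops
          return contrast, correlation, entropy

      # Example: average a hypothetical VNIR band subset into one gray-level image first.
      # cube has shape (rows, cols, bands); the band indices here are placeholders.
      cube = np.random.rand(128, 128, 50)
      gray = cube[:, :, 10:30].mean(axis=2)
      print(haralick_features(gray))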

  16. Improving the mapping of crop types in the Midwestern U.S. by fusing Landsat and MODIS satellite data

    NASA Astrophysics Data System (ADS)

    Zhu, Likai; Radeloff, Volker C.; Ives, Anthony R.

    2017-06-01

    Mapping crop types is of great importance for assessing agricultural production, land-use patterns, and the environmental effects of agriculture. Indeed, both the radiometric and spatial resolution of Landsat sensors are optimized for cropland monitoring. However, accurate mapping of crop types requires frequent cloud-free images during the growing season, which are often not available, and this raises the question of whether Landsat data can be combined with data from other satellites. Here, our goal is to evaluate to what degree fusing Landsat with MODIS Nadir Bidirectional Reflectance Distribution Function (BRDF)-Adjusted Reflectance (NBAR) data can improve crop-type classification. Choosing either one or two images from all cloud-free Landsat observations available for the Arlington Agricultural Research Station area in Wisconsin from 2010 to 2014, we generated 87 combinations of images and used each combination as input to the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) algorithm to predict Landsat-like images at the nominal dates of each 8-day MODIS NBAR product. Both the original Landsat and STARFM-predicted images were then classified with a support vector machine (SVM), and we compared the classification errors of three scenarios: 1) classifying the one or two original Landsat images of each combination only, 2) classifying the one or two original Landsat images plus all STARFM-predicted images, and 3) classifying the one or two original Landsat images together with STARFM-predicted images for key dates. Our results indicated that using two Landsat images as input to STARFM did not significantly improve the STARFM predictions compared to using only one, and predictions using Landsat images between July and August as input were most accurate. Including all STARFM-predicted images together with the Landsat images significantly increased average classification error by 4 percentage points (from 21% to 25%) compared to using only Landsat images. However, incorporating only STARFM-predicted images for key dates decreased average classification error by 2 percentage points (from 21% to 19%) compared to using only Landsat images. In particular, if only a single Landsat image was available, adding STARFM predictions for key dates significantly decreased the average classification error by 4 percentage points, from 30% to 26% (p < 0.05). We conclude that adding STARFM-predicted images can be effective for improving crop-type classification when only limited Landsat observations are available, but carefully selecting images from a full set of STARFM predictions is crucial. We developed an approach to identify the optimal subsets of all STARFM predictions, which offers an alternative approach to feature selection for future research.
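
    The per-pixel SVM classification step can be sketched with scikit-learn as follows. The feature layout (four image dates of six bands each), the RBF kernel and its parameters, and the five crop classes are placeholders; the abstract does not state the actual SVM configuration:

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import train_test_split

      # Hypothetical per-pixel feature stack: rows = labelled pixels, columns =
      # reflectance bands from the Landsat image(s) plus any STARFM-predicted
      # images chosen for key dates.
      rng = np.random.default_rng(0)
      X = rng.random((5000, 6 * 4))        # e.g. 4 image dates x 6 bands (assumed)
      y = rng.integers(0, 5, size=5000)    # 5 hypothetical crop classes

      X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

      # RBF-kernel SVM; the abstract does not state the kernel or parameters used.
      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
      clf.fit(X_train, y_train)

      error = 1.0 - clf.score(X_test, y_test)
      print(f"classification error: {error:.2%}")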

  17. Real-Time Fault Classification for Plasma Processes

    PubMed Central

    Yang, Ryan; Chen, Rongshun

    2011-01-01

    Plasma process tools, which usually cost several million US dollars, are often used in the semiconductor fabrication etching process. If the plasma process is halted by a process fault, productivity is reduced and cost increases. In order to maximize product/wafer yield and tool productivity, timely and effective fault detection is required in a plasma reactor. The classification of fault events helps users quickly identify faulty processes and thus reduces downtime of the plasma tool. In this work, optical emission spectroscopy (OES) is employed as the metrology sensor for in-situ process monitoring. The matching-rate indicator from our previous work (Yang, R.; Chen, R.S. Sensors 2010, 10, 5703–5723), split into twelve match rates by spectral band, is used to detect faulty processes. Based on the match data, real-time classification of plasma faults is achieved by a novel method developed in this study. Experiments were conducted to validate the novel fault classification. From the experimental results, we conclude that the proposed method is feasible, as the overall classification accuracy for fault event shifts is 27 out of 28, or about 96.4%. PMID:22164001
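
    The abstract does not describe the classification rule itself, so the sketch below is only a generic illustration of assigning a fault class from a vector of twelve per-band match rates, here with a nearest-centroid rule; the fault classes and reference signatures are invented for the example and are not the authors' method:

      import numpy as np

      # Hypothetical reference match-rate signatures (12 spectral bands each) for
      # previously observed fault classes; these values are invented for illustration.
      reference_signatures = {
          "gas_flow_fault": np.array([0.91, 0.85, 0.40, 0.88, 0.92, 0.35,
                                      0.90, 0.87, 0.89, 0.41, 0.93, 0.86]),
          "rf_power_fault": np.array([0.50, 0.88, 0.90, 0.45, 0.89, 0.91,
                                      0.48, 0.90, 0.52, 0.87, 0.49, 0.90]),
      }

      def classify_fault(match_rates):
          """Assign the fault class whose reference signature is nearest (Euclidean)."""
          match_rates = np.asarray(match_rates)
          return min(reference_signatures,
                     key=lambda name: np.linalg.norm(match_rates - reference_signatures[name]))

      # Example: twelve match rates from one monitored run.
      observed = [0.89, 0.84, 0.42, 0.86, 0.90, 0.37, 0.91, 0.85, 0.88, 0.43, 0.92, 0.84]
      print(classify_fault(observed))   # -> 'gas_flow_fault'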

  18. A multiresolution hierarchical classification algorithm for filtering airborne LiDAR data

    NASA Astrophysics Data System (ADS)

    Chen, Chuanfa; Li, Yanyan; Li, Wei; Dai, Honglei

    2013-08-01

    We presented a multiresolution hierarchical classification (MHC) algorithm for differentiating ground from non-ground points in a LiDAR point cloud based on point residuals from an interpolated raster surface. MHC includes three levels of hierarchy, with a simultaneous increase of cell resolution and residual threshold from the low to the high level of the hierarchy. At each level, the surface is iteratively interpolated towards the ground using a thin plate spline (TPS) until no additional ground points are classified, and the classified ground points are used to update the surface in the next iteration. Fifteen groups of benchmark datasets, provided by the International Society for Photogrammetry and Remote Sensing (ISPRS), were used to compare the performance of MHC with that of 17 other published filtering methods. Results indicated that MHC, with an average total error of 4.11% and an average Cohen's kappa coefficient of 86.27%, performs better than all the other filtering methods.
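
    A single MHC-style iteration can be sketched with SciPy's thin-plate-spline interpolator: fit the surface to the current ground points, compute residuals for the unclassified points, and accept those below the residual threshold as ground. The threshold and the toy data are assumptions for illustration:

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      def classify_iteration(ground_xyz, candidate_xyz, residual_threshold):
          """One iteration: points whose residual from the TPS surface, fitted to
          the current ground points, is below the threshold become ground."""
          tps = RBFInterpolator(ground_xyz[:, :2], ground_xyz[:, 2],
                                kernel="thin_plate_spline", smoothing=0.0)
          residuals = candidate_xyz[:, 2] - tps(candidate_xyz[:, :2])
          is_ground = residuals < residual_threshold
          return candidate_xyz[is_ground], candidate_xyz[~is_ground]

      # Toy example: near-flat ground seeds plus one high, vegetation-like point.
      rng = np.random.default_rng(1)
      seed_ground = np.column_stack([rng.random((50, 2)) * 100, rng.normal(0, 0.05, 50)])
      candidates = np.array([[10.0, 10.0, 0.02], [50.0, 50.0, 6.0]])
      new_ground, non_ground = classify_iteration(seed_ground, candidates, residual_threshold=0.5)
      print(len(new_ground), len(non_ground))   # -> 1 1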

  19. Reliability of classification for post-traumatic ankle osteoarthritis.

    PubMed

    Claessen, Femke M A P; Meijer, Diederik T; van den Bekerom, Michel P J; Gevers Deynoot, Barend D J; Mallee, Wouter H; Doornberg, Job N; van Dijk, C Niek

    2016-04-01

    The purpose of this study was to identify the most reliable classification system for categorizing post-traumatic osteoarthritis after ankle fracture in clinical outcome studies. A total of 118 orthopaedic surgeons and residents, gathered in the Ankle Platform Study Collaborative Science of Variation Group, evaluated 128 anteroposterior and lateral radiographs of patients after a bi- or trimalleolar ankle fracture on a Web-based platform in order to rate post-traumatic osteoarthritis according to the classification systems of (1) van Dijk, (2) Kellgren, and (3) Takakura. Reliability was evaluated with Siegel and Castellan's multirater kappa measure. Differences between classification systems were compared using the two-sample Z-test. Interobserver agreement of the surgeons who participated in the survey was fair for the van Dijk osteoarthritis scale (k = 0.24) and poor for the Takakura (k = 0.19) and Kellgren (k = 0.18) systems according to the categorical rating of Landis and Koch. This difference of one categorical rating was found to be significant (p < 0.001, CI 0.046-0.053), given the high numbers of observers and cases available. This study documents fair interobserver agreement for the van Dijk osteoarthritis scale and poor interobserver agreement for the Takakura and Kellgren osteoarthritis classification systems. Because of the low interobserver agreement for the van Dijk, Kellgren, and Takakura classification systems, these systems cannot be used for clinical decision-making. Development of diagnostic criteria on the basis of consecutive patients, Level II.
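
    A multirater kappa of this kind can be computed with the Fleiss kappa implementation in statsmodels; the sketch below uses Fleiss' formulation as a stand-in for the Siegel and Castellan measure, and the ratings matrix is invented (the real data would be 128 radiographs by 118 observers):

      import numpy as np
      from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

      # Invented example: 6 radiographs (rows) rated by 5 observers (columns),
      # with osteoarthritis grades coded 0-3.
      ratings = np.array([
          [0, 0, 1, 0, 0],
          [2, 2, 2, 3, 2],
          [1, 0, 1, 1, 2],
          [3, 3, 3, 3, 3],
          [1, 2, 1, 2, 2],
          [0, 1, 0, 0, 1],
      ])

      # aggregate_raters converts subject-by-rater grades into subject-by-category counts.
      counts, _categories = aggregate_raters(ratings)
      kappa = fleiss_kappa(counts, method="fleiss")
      print(f"multirater kappa: {kappa:.2f}")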

  20. Automatic classification for mammogram backgrounds based on bi-rads complexity definition and on a multi content analysis framework

    NASA Astrophysics Data System (ADS)

    Wu, Jie; Besnehard, Quentin; Marchessoux, Cédric

    2011-03-01

    Clinical studies for the validation of new medical imaging devices require hundreds of images. An important step in creating and tuning the study protocol is the classification of images into "difficult" and "easy" cases. This consists of classifying each image based on features such as the complexity of the background and the visibility of the disease (lesions). An automatic background classification tool for mammograms would therefore help in such clinical studies. This classification tool is based on a multi-content analysis (MCA) framework, which was first developed to recognize the image content of computer screenshots. With the implementation of new texture features and a defined breast density scale, the MCA framework is able to automatically classify digital mammograms with satisfactory accuracy. The BI-RADS (Breast Imaging Reporting and Data System) density scale, which standardizes mammography reporting terminology as well as assessment and recommendation categories, is used for grouping the mammograms. Selected features are input into a decision tree classification scheme in the MCA framework, the so-called "weak classifier" (any classifier with a global error rate below 50%). With the AdaBoost iteration algorithm, these "weak classifiers" are combined into a "strong classifier" (a classifier with a low global error rate) for classifying one category. The classification results for one "strong classifier" show good accuracy with high true positive rates. For the four categories the results are: TP = 90.38%, TN = 67.88%, FP = 32.12%, and FN = 9.62%.
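
    The weak-to-strong boosting step can be illustrated with AdaBoost over depth-one decision trees in scikit-learn (version 1.2 or later for the estimator keyword). The feature matrix and the one-vs-rest labels for a single BI-RADS category are placeholders, not the authors' data:

      import numpy as np
      from sklearn.ensemble import AdaBoostClassifier
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.model_selection import train_test_split

      # Placeholder texture/density features for mammogram backgrounds and a binary
      # label: 1 if the image belongs to one BI-RADS density category, 0 otherwise.
      rng = np.random.default_rng(42)
      X = rng.random((600, 20))
      y = (X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.1, 600) > 0.7).astype(int)

      X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

      # Each depth-1 tree is a "weak classifier"; AdaBoost combines them into a
      # "strong classifier" for this one category.
      strong = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=1),
                                  n_estimators=100, random_state=0)
      strong.fit(X_train, y_train)

      pred = strong.predict(X_test)
      tp = np.mean(pred[y_test == 1] == 1)
      tn = np.mean(pred[y_test == 0] == 0)
      print(f"TP rate: {tp:.2%}, TN rate: {tn:.2%}")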
