Science.gov

Sample records for accurate exposure classification

  1. TIME-INTEGRATED EXPOSURE MEASURES TO IMPROVE THE PREDICTIVE POWER OF EXPOSURE CLASSIFICATION FOR EPIDEMIOLOGIC STUDIES

    EPA Science Inventory

    Accurate exposure classification tools are required to link exposure with health effects in epidemiological studies. Although long-term integrated exposure measurements are a critical component of exposure assessment, the ability to include these measurements into epidemiologic...

  2. Accurate molecular classification of cancer using simple rules

    PubMed Central

    Wang, Xiaosheng; Gotoh, Osamu

    2009-01-01

    Background One intractable problem with using microarray data analysis for cancer classification is how to reduce the extremely high-dimensionality gene feature data to remove the effects of noise. Feature selection is often used to address this problem by selecting informative genes from among thousands or tens of thousands of genes. However, most of the existing methods of microarray-based cancer classification utilize too many genes to achieve accurate classification, which often hampers the interpretability of the models. For a better understanding of the classification results, it is desirable to develop simpler rule-based models with as few marker genes as possible. Methods We screened a small number of informative single genes and gene pairs on the basis of their depended degrees proposed in rough sets. Applying the decision rules induced by the selected genes or gene pairs, we constructed cancer classifiers. We tested the efficacy of the classifiers by leave-one-out cross-validation (LOOCV) of training sets and classification of independent test sets. Results We applied our methods to five cancerous gene expression datasets: leukemia (acute lymphoblastic leukemia [ALL] vs. acute myeloid leukemia [AML]), lung cancer, prostate cancer, breast cancer, and leukemia (ALL vs. mixed-lineage leukemia [MLL] vs. AML). Accurate classification outcomes were obtained by utilizing just one or two genes. Some genes that correlated closely with the pathogenesis of relevant cancers were identified. In terms of both classification performance and algorithm simplicity, our approach outperformed or at least matched existing methods. Conclusion In cancerous gene expression datasets, a small number of genes, even one or two if selected correctly, is capable of achieving an ideal cancer classification effect. This finding also means that very simple rules may perform well for cancerous class prediction. PMID:19874631
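
    To illustrate the flavor of this approach (a minimal sketch, not the paper's rough-set "depended degree" criterion), the following fragment classifies samples with a single-gene threshold rule and scores it by leave-one-out cross-validation; the data are synthetic placeholders.

    ```python
    # Sketch: a one-gene threshold rule evaluated by LOOCV (toy data, not the
    # paper's rough-set method).
    import numpy as np

    def loocv_single_gene(expr, labels, gene_idx):
        """expr: (n_samples, n_genes) array; labels: (n_samples,) 0/1 array."""
        n = len(labels)
        correct = 0
        for i in range(n):
            train = np.arange(n) != i                  # hold out sample i
            x_tr = expr[train, gene_idx]
            y_tr = labels[train]
            m0, m1 = x_tr[y_tr == 0].mean(), x_tr[y_tr == 1].mean()
            thr = (m0 + m1) / 2.0                      # midpoint decision rule
            if m1 > m0:
                pred = int(expr[i, gene_idx] > thr)
            else:
                pred = int(expr[i, gene_idx] <= thr)
            correct += int(pred == labels[i])
        return correct / n

    rng = np.random.default_rng(0)
    expr = rng.normal(size=(40, 100))                  # toy expression matrix
    labels = rng.integers(0, 2, size=40)
    expr[labels == 1, 7] += 2.0                        # make gene 7 informative
    print(loocv_single_gene(expr, labels, gene_idx=7)) # high accuracy expected
    ```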

  3. Automatic classification and accurate size measurement of blank mask defects

    NASA Astrophysics Data System (ADS)

    Bhamidipati, Samir; Paninjath, Sankaranarayanan; Pereira, Mark; Buck, Peter

    2015-07-01

    complexity of defects encountered. The variety arises from factors such as the defect's nature, size, shape and composition, and the optical phenomena occurring around the defect. This paper focuses on preliminary characterization results, in terms of classification and size estimation, obtained by the Calibre MDPAutoClassify tool on a variety of mask blank defects. It primarily highlights the challenges faced in achieving these results, with reference to the variety of defects observed on blank mask substrates and the underlying complexities that make accurate defect size measurement an important and challenging task.

  4. Accurate mobile malware detection and classification in the cloud.

    PubMed

    Wang, Xiaolei; Yang, Yuexiang; Zeng, Yingzhi

    2015-01-01

    As the dominant player in the smartphone operating system market, Android has attracted the attention of malware authors and researchers alike. The number of types of Android malware is increasing rapidly despite the considerable number of proposed malware analysis systems. In this paper, by taking advantage of the low false-positive rate of misuse detection and the ability of anomaly detection to detect zero-day malware, we propose a novel hybrid detection system based on a new open-source framework, CuckooDroid, which enables the use of Cuckoo Sandbox's features to analyze Android malware through dynamic and static analysis. Our proposed system mainly consists of two parts: an anomaly detection engine performing abnormal app detection through dynamic analysis, and a signature detection engine performing known malware detection and classification with a combination of static and dynamic analysis. We evaluate our system using 5560 malware samples and 6000 benign samples. Experiments show that our anomaly detection engine with dynamic analysis is capable of detecting zero-day malware with a low false-negative rate (1.16%) and an acceptable false-positive rate (1.30%); it is worth noting that our signature detection engine with hybrid analysis can accurately classify malware samples with an average positive rate of 98.94%. Considering the intensive computing resources required by static and dynamic analysis, our proposed detection system should be deployed off-device, such as in the cloud. App store markets and ordinary users can access our detection system for malware detection through a cloud service. PMID:26543718

  5. Accurate phylogenetic classification of DNA fragments based on sequence composition

    SciTech Connect

    McHardy, Alice C.; Garcia Martin, Hector; Tsirigos, Aristotelis; Hugenholtz, Philip; Rigoutsos, Isidore

    2006-05-01

    Metagenome studies have retrieved vast amounts of sequence out of a variety of environments, leading to novel discoveries and great insights into the uncultured microbial world. Except for very simple communities, diversity makes sequence assembly and analysis a very challenging problem. To understand the structure and function of microbial communities, a taxonomic characterization of the obtained sequence fragments is highly desirable, yet currently limited mostly to those sequences that contain phylogenetic marker genes. We show that for clades at the rank of domain down to genus, sequence composition allows the very accurate phylogenetic characterization of genomic sequence. We developed a composition-based classifier, PhyloPythia, for de novo phylogenetic sequence characterization and have trained it on a data set of 340 genomes. By extensive evaluation experiments we show that the method is accurate across all taxonomic ranks considered, even for sequences that originate from novel organisms and are as short as 1 kb. Application to two metagenome datasets obtained from samples of phosphorus-removing sludge showed that the method allows the accurate classification at genus level of most sequence fragments from the dominant populations, while at the same time correctly characterizing even larger parts of the samples at higher taxonomic levels.

  6. Accurate cortical tissue classification on MRI by modeling cortical folding patterns.

    PubMed

    Kim, Hosung; Caldairou, Benoit; Hwang, Ji-Wook; Mansi, Tommaso; Hong, Seok-Jun; Bernasconi, Neda; Bernasconi, Andrea

    2015-09-01

    Accurate tissue classification is a crucial prerequisite to MRI morphometry. Automated methods based on intensity histograms constructed from the entire volume are challenged by regional intensity variations due to local radiofrequency artifacts as well as disparities in tissue composition, laminar architecture and folding patterns. The current work proposes a novel anatomy-driven method in which parcels conforming to cortical folding are regionally extracted from the brain. Each parcel is subsequently classified using nonparametric mean shift clustering. Evaluation was carried out on manually labeled images from two datasets acquired at 3.0 Tesla (n = 15) and 1.5 Tesla (n = 20). In both datasets, we observed high tissue classification accuracy of the proposed method (Dice index >97.6% at 3.0 Tesla, and >89.2% at 1.5 Tesla). Moreover, our method consistently outperformed state-of-the-art classification routines available in SPM8 and FSL-FAST, as well as a recently proposed local classifier that partitions the brain into cubes. Contour-based analyses showed that the proposed framework classified the white matter-gray matter (GM) interface more accurately than the other algorithms, particularly in central and occipital cortices, which generally display bright GM due to their high degree of myelination. Excellent accuracy was maintained, even in the absence of correction for intensity inhomogeneity. The presented anatomy-driven local classification algorithm may significantly improve cortical boundary definition, with possible benefits for morphometric inference and biomarker discovery. PMID:26037453
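
    A minimal sketch, assuming numpy, of the Dice index used in the evaluation (2|A∩B| / (|A| + |B|)) for comparing an automated tissue label map against a manual one; the toy volumes below are placeholders, not the study's MRI data.

    ```python
    # Sketch: Dice overlap between two label volumes for one tissue class.
    import numpy as np

    def dice(seg_a, seg_b, label):
        a = seg_a == label
        b = seg_b == label
        denom = a.sum() + b.sum()
        return 2.0 * (a & b).sum() / denom if denom else 1.0

    rng = np.random.default_rng(0)
    manual = rng.integers(0, 3, size=(64, 64, 64))   # toy labels: 0=CSF, 1=GM, 2=WM
    auto = manual.copy()
    auto[:4] = 0                                     # simulate errors in 4 slices
    print(f"GM Dice: {dice(manual, auto, label=1):.4f}")
    ```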

  7. Accurate crop classification using hierarchical genetic fuzzy rule-based systems

    NASA Astrophysics Data System (ADS)

    Topaloglou, Charalampos A.; Mylonas, Stelios K.; Stavrakoudis, Dimitris G.; Mastorocostas, Paris A.; Theocharis, John B.

    2014-10-01

    This paper investigates the effectiveness of an advanced classification system for accurate crop classification using very high resolution (VHR) satellite imagery. Specifically, a recently proposed genetic fuzzy rule-based classification system (GFRBCS) is employed, namely, the Hierarchical Rule-based Linguistic Classifier (HiRLiC). HiRLiC's model comprises a small set of simple IF-THEN fuzzy rules, easily interpretable by humans. One of its most important attributes is that its learning algorithm requires minimum user interaction, since the most important learning parameters affecting the classification accuracy are determined by the learning algorithm automatically. HiRLiC is applied in a challenging crop classification task, using a SPOT5 satellite image over an intensively cultivated area in a lake-wetland ecosystem in northern Greece. A rich set of higher-order spectral and textural features is derived from the initial bands of the (pan-sharpened) image, resulting in an input space comprising 119 features. The experimental analysis proves that HiRLiC compares favorably to other interpretable classifiers of the literature, both in terms of structural complexity and classification accuracy. Its testing accuracy was very close to that obtained by complex state-of-the-art classification systems, such as the support vector machine (SVM) and random forest (RF) classifiers. Nevertheless, visual inspection of the derived classification maps shows that HiRLiC has better generalization properties, providing more homogeneous classifications than the competitors. Moreover, the runtime required to produce the thematic map was orders of magnitude lower than that of the competitors.

  8. Highly accurate recognition of human postures and activities through classification with rejection.

    PubMed

    Tang, Wenlong; Sazonov, Edward S

    2014-01-01

    Monitoring of postures and activities is used in many clinical and research applications, some of which may require highly reliable posture and activity recognition with desired accuracy well above the 99% mark. This paper suggests a method for performing highly accurate recognition of postures and activities from data collected by a wearable shoe monitor (SmartShoe) through classification with rejection. Signals from pressure and acceleration sensors embedded in SmartShoe are used either as raw sensor data or after feature extraction. A support vector machine (SVM) and a multilayer perceptron (MLP) are used to implement classification with rejection. Unreliable observations are rejected by measuring the distance from the decision boundary and eliminating those observations that reside below a rejection threshold. The results show a significant improvement (from 97.3% ± 2.3% to 99.8% ± 0.1%) in the classification accuracy after the rejection, using MLP with raw sensor data and rejecting 31.6% of observations. The results also demonstrate that the MLP outperformed the SVM, and that classification accuracy based on raw sensor data was higher than accuracy based on extracted features. The proposed approach will be especially beneficial in applications where high accuracy of recognition is desired while not all observations need to be assigned a class label. PMID:24403429
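
    A minimal sketch, assuming scikit-learn, of classification with rejection as described: observations too close to the decision boundary are left unlabeled. Here the class-probability margin stands in for the boundary distance, and the threshold and dataset are illustrative, not the SmartShoe data.

    ```python
    # Sketch: reject low-confidence observations, then measure accuracy on the
    # accepted subset only.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=600, n_features=10, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                        random_state=0).fit(X[:400], y[:400])

    proba = clf.predict_proba(X[400:])
    margin = np.abs(proba[:, 1] - 0.5)     # proxy for distance from the boundary
    accept = margin >= 0.4                 # reject observations below threshold
    pred = clf.predict(X[400:])
    acc_all = (pred == y[400:]).mean()
    acc_kept = (pred[accept] == y[400:][accept]).mean()
    print(f"accuracy without rejection: {acc_all:.3f}")
    print(f"accuracy after rejecting {100 * (1 - accept.mean()):.1f}%: {acc_kept:.3f}")
    ```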

  9. Change in BMI Accurately Predicted by Social Exposure to Acquaintances

    PubMed Central

    Oloritun, Rahman O.; Ouarda, Taha B. M. J.; Moturu, Sai; Madan, Anmol; Pentland, Alex (Sandy); Khayal, Inas

    2013-01-01

    Research has mostly focused on obesity and not on processes of BMI change more generally, although these may be key factors that lead to obesity. Studies have suggested that obesity is affected by social ties. However, these studies used survey-based data collection techniques that may be biased toward selecting only close friends and relatives. In this study, mobile phone sensing techniques were used to routinely capture social interaction data in an undergraduate dorm. By automating the capture of social interaction data, the limitations of self-reported social exposure data are avoided. This study attempts to understand and develop a model that best describes the change in BMI using social interaction data. We evaluated a cohort of 42 college students in a co-located university dorm, combining social interaction data captured automatically via mobile phones with survey-based health-related information. We determined the most predictive variables for change in BMI using the least absolute shrinkage and selection operator (LASSO) method. The selected variables, together with gender, healthy diet category, and ability to manage stress, were used to build multiple linear regression models that estimate the effect of exposure and individual factors on change in BMI. We identified the best model using the Akaike Information Criterion (AIC) and R2. This study found a model that explains 68% (p<0.0001) of the variation in change in BMI. The model combined social interaction data, especially from acquaintances, with personal health-related information to explain change in BMI. This is the first study taking into account both interactions at different levels of social contact and personal health-related information. Social interactions with acquaintances accounted for more than half the variation in change in BMI. This suggests the importance of not only individual health information but also of social interactions with the people we are exposed to, even people we may not consider close friends. PMID
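
    A minimal sketch, assuming scikit-learn and statsmodels, of the two-stage analysis described: LASSO selects predictors of BMI change, and an ordinary multiple linear regression on the selected variables is then scored by AIC and R2. The data below are synthetic placeholders, not the dorm study's measurements.

    ```python
    # Sketch: LASSO variable selection followed by OLS on the retained columns.
    import numpy as np
    import statsmodels.api as sm
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(1)
    n = 42                                        # cohort size from the abstract
    X = rng.normal(size=(n, 12))                  # hypothetical exposure/health predictors
    beta = np.zeros(12)
    beta[[0, 3, 5]] = [0.8, -0.5, 0.6]            # only a few true effects
    y = X @ beta + rng.normal(scale=0.5, size=n)  # synthetic "change in BMI"

    lasso = LassoCV(cv=5, random_state=0).fit(X, y)
    selected = np.flatnonzero(lasso.coef_)        # predictors LASSO retained

    ols = sm.OLS(y, sm.add_constant(X[:, selected])).fit()
    print("selected columns:", selected)
    print(f"AIC = {ols.aic:.1f}, R^2 = {ols.rsquared:.2f}")
    ```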

  10. Accurate measurement of RF exposure from emerging wireless communication systems

    NASA Astrophysics Data System (ADS)

    Letertre, Thierry; Monebhurrun, Vikass; Toffano, Zeno

    2013-04-01

    Isotropic broadband probes or spectrum analyzers (SAs) may be used for the measurement of rapidly varying electromagnetic fields generated by emerging wireless communication systems. In this paper, this problem is investigated by comparing the responses measured by two different isotropic broadband probes typically used to perform electric field (E-field) evaluations. The broadband probes are subjected to signals with variable duty cycles (DC) and crest factors (CF), either with or without Orthogonal Frequency Division Multiplexing (OFDM) modulation but with the same root-mean-square (RMS) power. The two probes do not provide sufficiently accurate results for deterministic signals such as Worldwide Interoperability for Microwave Access (WiMAX) or Long Term Evolution (LTE), or for non-deterministic signals such as Wireless Fidelity (WiFi). The legacy measurement protocols should be adapted to cope with emerging wireless communication technologies based on the OFDM modulation scheme. This is not easily achieved except when the statistics of the RF emission are well known. In this case the measurement errors are shown to be systematic, and a correction factor or calibration can be applied to obtain a good approximation of the total RMS power.

  11. Accurate, Rapid Taxonomic Classification of Fungal Large-Subunit rRNA Genes

    PubMed Central

    Liu, Kuan-Liang; Porras-Alfaro, Andrea; Eichorst, Stephanie A.

    2012-01-01

    Taxonomic and phylogenetic fingerprinting based on sequence analysis of gene fragments from the large-subunit rRNA (LSU) gene or the internal transcribed spacer (ITS) region is becoming an integral part of fungal classification. The lack of an accurate and robust classification tool trained by a validated sequence database for taxonomic placement of fungal LSU genes is a severe limitation in taxonomic analysis of fungal isolates or large data sets obtained from environmental surveys. Using a hand-curated set of 8,506 fungal LSU gene fragments, we determined the performance characteristics of a naïve Bayesian classifier across multiple taxonomic levels and compared the classifier performance to that of a sequence similarity-based (BLASTN) approach. The naïve Bayesian classifier was computationally more rapid (>460-fold with our system) than the BLASTN approach, and it provided equal or superior classification accuracy. Classifier accuracies were compared using sequence fragments of 100 bp and 400 bp and two different PCR primer anchor points to mimic sequence read lengths commonly obtained using current high-throughput sequencing technologies. Accuracy was higher with 400-bp sequence reads than with 100-bp reads. It was also significantly affected by sequence location across the 1,400-bp test region. The highest accuracy was obtained across either the D1 or D2 variable region. The naïve Bayesian classifier provides an effective and rapid means to classify fungal LSU sequences from large environmental surveys. The training set and tool are publicly available through the Ribosomal Database Project (http://rdp.cme.msu.edu/classifier/classifier.jsp). PMID:22194300
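
    A minimal sketch of the general approach (not the RDP training set or its exact feature scheme): a naive Bayesian classifier over k-mer counts of gene fragments. The sequences and taxon labels are toy placeholders.

    ```python
    # Sketch: multinomial naive Bayes over 4-mer counts of sequence fragments.
    from itertools import product
    import numpy as np
    from sklearn.naive_bayes import MultinomialNB

    K = 4
    KMER_INDEX = {"".join(p): i for i, p in enumerate(product("ACGT", repeat=K))}

    def kmer_counts(seq):
        v = np.zeros(len(KMER_INDEX))
        for i in range(len(seq) - K + 1):
            idx = KMER_INDEX.get(seq[i:i + K])
            if idx is not None:
                v[idx] += 1
        return v

    train_seqs = ["ACGT" * 25, "GGCC" * 25, "ATAT" * 25]   # toy gene fragments
    train_taxa = ["Ascomycota", "Basidiomycota", "Mucoromycota"]
    X = np.array([kmer_counts(s) for s in train_seqs])
    clf = MultinomialNB().fit(X, train_taxa)
    print(clf.predict([kmer_counts("ACGTACGTACGT")]))      # -> ['Ascomycota']
    ```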

  12. GPD: a graph pattern diffusion kernel for accurate graph classification with applications in cheminformatics.

    PubMed

    Smalter, Aaron; Huan, Jun Luke; Jia, Yi; Lushington, Gerald

    2010-01-01

    Graph data mining is an active research area. Graphs are general modeling tools to organize information from heterogeneous sources and have been applied in many scientific, engineering, and business fields. With the fast accumulation of graph data, building highly accurate predictive models for graph data emerges as a new challenge that has not been fully explored in the data mining community. In this paper, we demonstrate a novel technique called the graph pattern diffusion (GPD) kernel. Our idea is to leverage existing frequent pattern discovery methods and to explore the application of kernel classifiers (e.g., support vector machines) to building highly accurate graph classification models. In our method, we first identify all frequent patterns from a graph database. We then map subgraphs to graphs in the graph database and use a process we call "pattern diffusion" to label nodes in the graphs. Finally, we design a graph alignment algorithm to compute the inner product of two graphs. We have tested our algorithm on a number of chemical structure datasets. The experimental results demonstrate that our method is significantly better than competing methods such as kernel functions based on paths, cycles, and subgraphs. PMID:20431140
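
    A minimal sketch, assuming scikit-learn, of the final step the abstract describes: once a graph kernel provides an inner product between graphs, an SVM with a precomputed Gram matrix performs the classification. The toy inner product below is a stand-in for the GPD kernel itself.

    ```python
    # Sketch: SVM classification from a precomputed graph Gram matrix.
    import numpy as np
    from sklearn.svm import SVC

    def graph_inner_product(fa, fb):
        """Stand-in for GPD: inner product of per-graph pattern-frequency vectors."""
        return float(np.dot(fa, fb))

    rng = np.random.default_rng(0)
    feats = rng.random((30, 16))                  # 30 graphs x 16 frequent patterns
    y = (feats[:, 0] > 0.5).astype(int)           # toy activity labels

    gram = np.array([[graph_inner_product(a, b) for b in feats] for a in feats])
    clf = SVC(kernel="precomputed").fit(gram, y)
    print(f"training accuracy: {clf.score(gram, y):.2f}")
    ```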

  13. Accurate multi-source forest species mapping using the multiple spectral-spatial classification approach

    NASA Astrophysics Data System (ADS)

    Stavrakoudis, Dimitris; Gitas, Ioannis; Karydas, Christos; Kolokoussis, Polychronis; Karathanassi, Vassilia

    2015-10-01

    This paper proposes an efficient methodology for combining multiple remotely sensed images, in order to increase the classification accuracy in complex forest species mapping tasks. The proposed scheme follows a decision fusion approach, whereby each image is first classified separately by means of a pixel-wise Fuzzy-Output Support Vector Machine (FO-SVM) classifier. Subsequently, the multiple results are fused according to the so-called multiple spectral-spatial classifier using the minimum spanning forest (MSSC-MSF) approach, which constitutes an effective post-regularization procedure for enhancing the result of a single pixel-based classification. For this purpose, the original MSSC-MSF has been extended in order to handle multiple classifications. In particular, the fuzzy outputs of the pixel-based classifiers are stacked and used to grow the MSF, whereas the markers are also determined considering both classifications. The proposed methodology has been tested on a challenging forest species mapping task in northern Greece, considering a multispectral (GeoEye) and a hyperspectral (CASI) image. The pixel-wise classifications resulted in overall accuracies (OA) of 68.71% for the GeoEye and 77.95% for the CASI images, respectively; both are characterized by high levels of speckle noise. Applying the proposed multi-source MSSC-MSF fusion, the OA climbs to 90.86%, which is attributed both to the ability of MSSC-MSF to tackle the salt-and-pepper effect and to the fact that the fusion approach exploits the relative advantages of both information sources.

  14. Classification algorithms with multi-modal data fusion could accurately distinguish neuromyelitis optica from multiple sclerosis.

    PubMed

    Eshaghi, Arman; Riyahi-Alam, Sadjad; Saeedi, Roghayyeh; Roostaei, Tina; Nazeri, Arash; Aghsaei, Aida; Doosti, Rozita; Ganjgahi, Habib; Bodini, Benedetta; Shakourirad, Ali; Pakravan, Manijeh; Ghana'ati, Hossein; Firouznia, Kavous; Zarei, Mojtaba; Azimi, Amir Reza; Sahraian, Mohammad Ali

    2015-01-01

    Neuromyelitis optica (NMO) exhibits substantial similarities to multiple sclerosis (MS) in clinical manifestations and imaging results and has long been considered a variant of MS. With the advent of a specific biomarker in NMO, known as anti-aquaporin 4, this assumption has changed; however, the differential diagnosis remains challenging and it is still not clear whether a combination of neuroimaging and clinical data could be used to aid clinical decision-making. Computer-aided diagnosis is a rapidly evolving process that holds great promise to facilitate objective differential diagnoses of disorders that show similar presentations. In this study, we aimed to use a powerful method for multi-modal data fusion, known as multi-kernel learning, and performed automatic diagnosis of subjects. We included 30 patients with NMO, 25 patients with MS and 35 healthy volunteers and performed multi-modal imaging with T1-weighted high resolution scans, diffusion tensor imaging (DTI) and resting-state functional MRI (fMRI). In addition, subjects underwent clinical examinations and cognitive assessments. We included 18 a priori predictors from neuroimaging, clinical and cognitive measures in the initial model. We used 10-fold cross-validation to learn the importance of each modality, train and finally test the model performance. The mean accuracy in differentiating between MS and NMO was 88%, where visible white matter lesion load, normal appearing white matter (DTI) and functional connectivity had the most important contributions to the final classification. In a multi-class classification problem we distinguished between all 3 groups (MS, NMO and healthy controls) with an average accuracy of 84%. In this classification, visible white matter lesion load, functional connectivity, and cognitive scores were the 3 most important modalities. Our work provides preliminary evidence that computational tools can be used to help make an objective differential diagnosis of NMO and MS.
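
    A minimal sketch of the multi-kernel idea: one kernel per modality, combined as a weighted sum and fed to a precomputed-kernel SVM. In true multi-kernel learning the weights are learned; here they are fixed placeholders and the data are synthetic, not the study's imaging measures.

    ```python
    # Sketch: weighted sum of per-modality RBF kernels, classified with an SVM.
    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n = 55                                        # 30 NMO + 25 MS, as in the abstract
    modalities = [rng.normal(size=(n, d)) for d in (5, 8, 10)]   # toy feature blocks
    y = np.r_[np.zeros(30), np.ones(25)]
    modalities[0][y == 1] += 1.0                  # make one modality informative

    weights = [0.5, 0.25, 0.25]                   # placeholder (normally learned) weights
    K = sum(w * rbf_kernel(m) for w, m in zip(weights, modalities))

    clf = SVC(kernel="precomputed").fit(K, y)
    print(f"training accuracy: {clf.score(K, y):.2f}")
    ```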

  15. Classification algorithms with multi-modal data fusion could accurately distinguish neuromyelitis optica from multiple sclerosis

    PubMed Central

    Eshaghi, Arman; Riyahi-Alam, Sadjad; Saeedi, Roghayyeh; Roostaei, Tina; Nazeri, Arash; Aghsaei, Aida; Doosti, Rozita; Ganjgahi, Habib; Bodini, Benedetta; Shakourirad, Ali; Pakravan, Manijeh; Ghana'ati, Hossein; Firouznia, Kavous; Zarei, Mojtaba; Azimi, Amir Reza; Sahraian, Mohammad Ali

    2015-01-01

    Neuromyelitis optica (NMO) exhibits substantial similarities to multiple sclerosis (MS) in clinical manifestations and imaging results and has long been considered a variant of MS. With the advent of a specific biomarker in NMO, known as anti-aquaporin 4, this assumption has changed; however, the differential diagnosis remains challenging and it is still not clear whether a combination of neuroimaging and clinical data could be used to aid clinical decision-making. Computer-aided diagnosis is a rapidly evolving process that holds great promise to facilitate objective differential diagnoses of disorders that show similar presentations. In this study, we aimed to use a powerful method for multi-modal data fusion, known as multi-kernel learning, and performed automatic diagnosis of subjects. We included 30 patients with NMO, 25 patients with MS and 35 healthy volunteers and performed multi-modal imaging with T1-weighted high resolution scans, diffusion tensor imaging (DTI) and resting-state functional MRI (fMRI). In addition, subjects underwent clinical examinations and cognitive assessments. We included 18 a priori predictors from neuroimaging, clinical and cognitive measures in the initial model. We used 10-fold cross-validation to learn the importance of each modality, train and finally test the model performance. The mean accuracy in differentiating between MS and NMO was 88%, where visible white matter lesion load, normal appearing white matter (DTI) and functional connectivity had the most important contributions to the final classification. In a multi-class classification problem we distinguished between all 3 groups (MS, NMO and healthy controls) with an average accuracy of 84%. In this classification, visible white matter lesion load, functional connectivity, and cognitive scores were the 3 most important modalities. Our work provides preliminary evidence that computational tools can be used to help make an objective differential diagnosis of NMO and MS.

  16. Use B-factor related features for accurate classification between protein binding interfaces and crystal packing contacts

    PubMed Central

    2014-01-01

    Background Distinguishing true protein interactions from crystal packing contacts is important for structural bioinformatics studies, which must respond to the need for accurate classification of the rapidly increasing number of protein structures. There are many unannotated crystal contacts and there also exist false annotations in this rapidly expanding volume of data. Previous tools have been proposed to address this problem. However, challenging issues still remain, such as low performance when the training and test data contain mixed interfaces having diverse sizes of contact areas. Methods and results The B factor is a measure of the vibrational motion of an atom, and a more relevant feature than interface size for characterizing protein binding. We propose to use three features related to the B factor for the classification between biological interfaces and crystal packing contacts. The first feature is the sum of the normalized B factors of the interfacial atoms in the contact area, the second is the average of the interfacial B factor per residue in the chain, and the third is the average number of interfacial atoms with a negative normalized B factor per residue in the chain. We investigate the distribution properties of these basic features and a compound feature on four datasets of biological binding and crystal packing, and on a protein binding-only dataset with known binding affinity. We also compare the cross-dataset classification performance of these features with existing methods and with interface area, a widely used and highly effective feature. The results demonstrate that our features markedly outperform the interface area approach and the existing prediction methods on many tests across all of these datasets. Conclusions The proposed B factor related features are more effective than interface area for distinguishing crystal packing from biological binding interfaces. Our computational methods have a potential for large-scale and accurate identification of biological
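
    A minimal sketch of the three proposed features, computed from normalized B factors of interfacial atoms. Which atoms are interfacial and the chain's residue count are assumed to come from an upstream structure parser; the inputs here are toy values.

    ```python
    # Sketch: the three B-factor features described in the abstract.
    import numpy as np

    def b_factor_features(interfacial_norm_b, n_chain_residues):
        """interfacial_norm_b: normalized B factor of each interfacial atom;
        n_chain_residues: number of residues in the chain."""
        f1 = interfacial_norm_b.sum()                      # sum over the contact area
        f2 = interfacial_norm_b.sum() / n_chain_residues   # average per chain residue
        f3 = (interfacial_norm_b < 0).sum() / n_chain_residues  # negative-B atoms per residue
        return f1, f2, f3

    norm_b = np.array([-0.3, 0.1, -0.8, 0.4, -0.2])        # toy interfacial atoms
    print(b_factor_features(norm_b, n_chain_residues=120))
    ```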

  17. Two fast and accurate heuristic RBF learning rules for data classification.

    PubMed

    Rouhani, Modjtaba; Javan, Dawood S

    2016-03-01

    This paper presents new Radial Basis Function (RBF) learning methods for classification problems. The proposed methods use heuristics to determine the spreads, the centers and the number of hidden neurons of the network in such a way that higher efficiency is achieved with fewer neurons, while the learning algorithm remains fast and simple. To keep the network size limited, neurons are added to the network recursively until a termination condition is met. Each neuron covers some of the training data. The termination condition is to cover all training data or to reach the maximum number of neurons. In each step, the center and spread of the new neuron are selected to maximize its coverage. Maximizing the coverage of the neurons leads to a network with fewer neurons, and hence a lower VC dimension and better generalization properties. Using the power exponential distribution function as the activation function of the hidden neurons, and in the light of the new learning approaches, it is proved that all data become linearly separable in the space of hidden-layer outputs, which implies that there exist linear output-layer weights with zero training error. The proposed methods are applied to some well-known datasets and the simulation results, compared with SVM and some other leading RBF learning methods, show their satisfactory and comparable performance. PMID:26797472
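
    A minimal sketch of the coverage-driven idea (the paper's exact heuristics are not reproduced): neurons are added one at a time, each centered on the uncovered sample whose candidate neuron covers the most same-class uncovered points, with the spread set to the distance to the nearest opposite-class sample.

    ```python
    # Sketch: greedy coverage-based selection of RBF centers and spreads.
    import numpy as np

    def greedy_rbf_centers(X, y, max_neurons=20):
        """Add neurons until all training data are covered or the budget is hit."""
        covered = np.zeros(len(y), dtype=bool)
        neurons = []                                       # (center, spread) pairs
        while not covered.all() and len(neurons) < max_neurons:
            best = None
            for i in np.flatnonzero(~covered):
                # Spread: distance to the nearest opposite-class sample.
                spread = np.linalg.norm(X[y != y[i]] - X[i], axis=1).min()
                dist = np.linalg.norm(X - X[i], axis=1)
                n_cov = ((dist < spread) & (y == y[i]) & ~covered).sum()
                if best is None or n_cov > best[0]:
                    best = (n_cov, i, spread)
            _, i, spread = best
            neurons.append((X[i], spread))
            covered |= (np.linalg.norm(X - X[i], axis=1) < spread) & (y == y[i])
        return neurons

    rng = np.random.default_rng(0)
    X = np.r_[rng.normal(0, 1, (30, 2)), rng.normal(4, 1, (30, 2))]
    y = np.r_[np.zeros(30), np.ones(30)]
    print(f"{len(greedy_rbf_centers(X, y))} neurons cover the training set")
    ```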

  18. Exposure Classification and Temporal Variability in Urinary Bisphenol A Concentrations among Couples in Utah—The HOPE Study

    PubMed Central

    Cox, Kyley J.; Porucznik, Christina A.; Anderson, David J.; Brozek, Eric M.; Szczotka, Kathryn M.; Bailey, Nicole M.; Wilkins, Diana G.; Stanford, Joseph B.

    2015-01-01

    Background: Bisphenol A (BPA) is an endocrine disruptor and potential reproductive toxicant, but results of epidemiologic studies have been mixed and have been criticized for inadequate exposure assessment that often relies on a single measurement. Objective: Our goal was to describe the distribution of BPA concentrations in serial urinary specimens, assess temporal variability, and provide estimates of exposure classification when randomly selected samples are used to predict average exposure. Methods: We collected and analyzed 2,614 urine specimens from 83 Utah couples beginning in 2012. Female participants collected daily first-morning urine specimens during one to two menstrual cycles and male partners collected specimens during the woman’s fertile window for each cycle. We measured urinary BPA concentrations and calculated geometric means (GM) for each cycle, characterized the distribution of observed values and temporal variability using intraclass correlation coefficients, and performed surrogate category analyses to determine how well repeat samples could classify exposure. Results: The GM urine BPA concentration was 2.78 ng/mL among males and 2.44 ng/mL among females. BPA had a high degree of variability among both males (ICC = 0.18; 95% CI: 0.11, 0.26) and females (ICC = 0.11; 95% CI: 0.08, 0.16). Based on our more stringent surrogate category analysis, to reach proportions ≥ 0.80 for sensitivity, specificity, and positive predictive value (PPV) among females, 6 and 10 repeat samples for the high and low tertiles, respectively, were required. For the medium tertile, specificity reached 0.87 with 10 repeat samples, but even with 11 samples, sensitivity and PPV did not exceed 0.36. Five repeat samples, among males, yielded sensitivity and PPV values ≥ 0.75 for the high and low tertiles, but, similar to females, classification for the medium tertile was less accurate. Conclusion: Repeated urinary specimens are required to characterize typical BPA
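
    A minimal sketch of a one-way random-effects intraclass correlation coefficient, the statistic used here to quantify temporal variability across repeated urine samples; this is the ICC(1,1) estimator from one-way ANOVA, and the data below are synthetic, not the HOPE study's measurements.

    ```python
    # Sketch: ICC(1,1) from one-way ANOVA mean squares.
    import numpy as np

    def icc_oneway(data):
        """data: (n_subjects, k_repeats) array, e.g. log BPA concentrations."""
        n, k = data.shape
        grand = data.mean()
        ms_between = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)
        ms_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
        return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

    rng = np.random.default_rng(0)
    subject_means = rng.normal(1.0, 0.3, size=(83, 1))        # 83 participants
    data = subject_means + rng.normal(0, 0.9, size=(83, 10))  # high within-subject noise
    print(f"ICC = {icc_oneway(data):.2f}")                    # low ICC, as in the study
    ```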

  19. Classification

    NASA Astrophysics Data System (ADS)

    Oza, Nikunj

    2012-03-01

    would represent one sunspot’s classification (y_i) and the corresponding set of measurements (x_i). The output of a supervised learning algorithm is a model h that approximates the unknown mapping from the inputs to the outputs. In our example, h would map from the sunspot measurements to the type of sunspot. We may have a test set S—a set of examples not used in training that we use to test how well the model h predicts the outputs on new examples. Just as with the examples in T, the examples in S are assumed to be independent and identically distributed (i.i.d.) draws from the distribution D. We measure the error of h on the test set as the proportion of test cases that h misclassifies: 1/|S| Σ_{(x,y) ∈ S} I(h(x) ≠ y), where I(v) is the indicator function—it returns 1 if v is true and 0 otherwise. In our sunspot classification example, we would identify additional examples of sunspots that were not used in generating the model, and use these to determine how accurate the model is—the fraction of the test samples that the model classifies correctly. An example of a classification model is the decision tree shown in Figure 23.1. We will discuss the decision tree learning algorithm in more detail later—for now, we assume that, given a training set with examples of sunspots, this decision tree is derived. This can be used to classify previously unseen examples of sunspots. For example, if a new sunspot’s inputs indicate that its "Group Length" is in the range 10-15, then the decision tree would classify the sunspot as being of type "E," whereas if the "Group Length" is "NULL," the "Magnetic Type" is "bipolar," and the "Penumbra" is "rudimentary," then it would be classified as type "C." In this chapter, we will add to the above description of classification problems. We will discuss decision trees and several other classification models. In particular, we will discuss the learning algorithms that generate these classification models, how to use them to
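
    The test-error computation above translates directly to code; a minimal sketch follows, with a scikit-learn decision tree and synthetic data standing in for the sunspot model and measurements.

    ```python
    # Sketch: test error as the fraction of held-out examples h misclassifies,
    # i.e. 1/|S| * sum over (x, y) in S of I(h(x) != y).
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def test_error(h, S):
        """h: trained model with a predict method; S: iterable of (x, y) pairs."""
        X = np.array([x for x, _ in S])
        labels = np.array([lab for _, lab in S])
        return float(np.mean(h.predict(X) != labels))   # mean of the 0/1 indicator

    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(100, 3)), rng.integers(0, 2, size=100)
    tree = DecisionTreeClassifier(max_depth=3).fit(X[:70], y[:70])   # train on T
    S = list(zip(X[70:], y[70:]))                                    # held-out test set
    print(f"test error: {test_error(tree, S):.2f}")
    ```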

  20. Response to “Accurate Risk-Based Chemical Screening Relies on Robust Exposure Estimates”

    EPA Science Inventory

    This is a correspondence (letter to the editor) with reference to comments by Rudel and Perovich on the article "Integration of Dosimetry, Exposure, and High-Throughput Screening Data in Chemical Toxicity Assessment". Article Reference: SI # 238882

  1. Molecular-genetic analysis is essential for accurate classification of renal carcinoma resembling Xp11.2 translocation carcinoma.

    PubMed

    Hayes, Malcolm; Peckova, Kvetoslava; Martinek, Petr; Hora, Milan; Kalusova, Kristyna; Straka, Lubomir; Daum, Ondrej; Kokoskova, Bohuslava; Rotterova, Pavla; Pivovarčikova, Kristyna; Branzovsky, Jindrich; Dubova, Magdalena; Vesela, Pavla; Michal, Michal; Hes, Ondrej

    2015-03-01

    Xp11.2-translocation renal carcinoma (TRCC) is suspected when a renal carcinoma occurs in young patients, in patients with a prior history of exposure to chemotherapy, and when the neoplasm has morphological features suggestive of that entity. We retrieved 20 renal tumours (from 17,500 archival cases) whose morphology raised suspicion of TRCC. In nine cases, TFE3 translocation was confirmed by fluorescence in situ hybridisation analysis. In 9 of the remaining 11 TRCC-like cases (7 male, 4 female, aged 22-84 years), material was available for further study. The morphological spectrum was diverse. Six tumours showed a mixture of cells with eosinophilic or clear cytoplasm in tubular, acinar and papillary architecture. One case was high grade with epithelioid, spindle cell and sarcomatoid areas. Another showed tubular, solid, and papillary areas and foci containing spindle cells reminiscent of mucinous tubular and spindle cell carcinoma. The third showed dyscohesive nests of large epithelioid and histiocytoid cells in a background of dense lymphoplasmacytic infiltrate. By immunohistochemistry, keratin AE1/AE3 was diffusely positive in three tumours, while CK7 strongly stained one tumour and another focally and weakly. CD10 and Pax8 were expressed by eight, AMACR and vimentin by seven, CA-IX by four and TFE3 and cathepsin K by two tumours. Of the two TFE3-positive tumours, one showed polysomy of chromosome 7 and the other of 17; they were VHL normal and diagnosed as unclassifiable RCC. Of the seven TFE3-negative tumours, three showed polysomy of 7/17 and VHL abnormality and were diagnosed as combined clear cell RCC/papillary RCC. One TFE3-negative tumour with normal 7/17 but LOH 3p (VHL abnormality) was diagnosed as clear cell RCC. One TFE3-negative tumour with polysomy 7/17 but normal VHL was diagnosed as papillary RCC, and two with normal chromosomes 7/17 and VHL gene were considered unclassifiable. As morphological features and IHC are heterogeneous, TRCC-like renal

  2. Classification

    ERIC Educational Resources Information Center

    Clary, Renee; Wandersee, James

    2013-01-01

    In this article, Renee Clary and James Wandersee describe the beginnings of "Classification," which lies at the very heart of science and depends upon pattern recognition. Clary and Wandersee approach patterns by first telling the story of the "Linnaean classification system," introduced by Carl Linnaeus (1707-1778), who is…

  3. Analysis of continuous oxygen saturation data for accurate representation of retinal exposure to oxygen in the preterm infant.

    PubMed

    Cirelli, Josie; McGregor, Carolyn; Graydon, Brenda; James, Andrew

    2013-01-01

    Maintaining blood oxygen saturation within the intended target range for preterm infants receiving neonatal intensive care is challenging. Supplemental oxygen is believed to lead to an increased risk of retinopathy of prematurity, and hence managing the level of oxygen within this population is an important part of their care. Current quality improvement activities use coarse hourly spot readings to measure supplemental oxygen levels against target ranges that vary based on gestational age. In this research we use Artemis, a real-time online healthcare analytics platform, to ascertain whether the collection of second-by-second data provides a better representation of retinal exposure to oxygen than an infrequent, intermittent spot reading. We show that Artemis is capable of producing more accurate information from the higher-frequency data, as it includes all the episodic events in the activity of the hour, which provides a better understanding of the oxygen fluctuation ranges that affect the physiological status of the infant. PMID:23388268
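
    A minimal sketch, assuming pandas, of the contrast drawn here: hourly spot readings versus second-by-second data. The target range and simulated signal are illustrative placeholders, not Artemis output.

    ```python
    # Sketch: per-second SpO2 samples capture time outside the target range
    # that coarse hourly spot readings can miss.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    idx = pd.date_range("2013-01-01", periods=6 * 3600, freq="1s")   # 6 h at 1 Hz
    spo2 = pd.Series(np.clip(92 + rng.normal(0, 3, len(idx)), 70, 100), index=idx)

    lo, hi = 88, 95                                    # illustrative target range
    spot = spo2.resample("1h").first()                 # coarse hourly spot readings
    print(f"spot readings outside range: {((spot < lo) | (spot > hi)).mean():.0%}")
    print(f"per-second samples outside range: {((spo2 < lo) | (spo2 > hi)).mean():.0%}")
    ```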

  4. Exposure assessment within a Total Diet Study: a comparison of the use of the pan-European classification system FoodEx-1 with national food classification systems.

    PubMed

    Akhandaf, Y; Van Klaveren, J; De Henauw, S; Van Donkersgoed, G; Van Gorcum, T; Papadopoulos, A; Sirot, V; Kennedy, M; Pinchen, H; Ruprich, J; Rehurkova, I; Perelló, G; Sioen, I

    2015-04-01

    A Total Diet Study (TDS) consists of selecting, collecting and preparing commonly consumed foods purchased at retail level and analysing them for harmful and/or beneficial chemical substances. A food classification system is needed to link food consumption data with the contaminant concentration data obtained in the TDS for the exposure assessment. In this study a comparison was made between the use of a national food classification systems and the use of FoodEx-1, developed and recommended by the European Food Safety Authority (EFSA). The work was performed using data of six European countries: Belgium, Czech Republic, France, The Netherlands, Spain and the UK. For each population, exposure to contaminant A (organic compounds) and/or contaminant B (inorganic compound) was assessed by the Monte Carlo Risk Assessment (MCRA) software using the national classification system and FoodEx-1 for food consumption data and for TDS laboratory results. Minimal differences between both approaches were observed. This observation applied for both contaminant A and contaminant B. In general risk assessment will be similar for both approaches; however, this is not guaranteed. FoodEx-1 proved to be a valuable hierarchic classification system in order to harmonise exposure assessment based on existing TDS results throughout Europe. PMID:25662864

  5. A non-contact method based on the multiple signal classification algorithm to reduce the measurement time for accurate heart rate detection.

    PubMed

    Bechet, P; Mitran, R; Munteanu, M

    2013-08-01

    Non-contact methods for the assessment of vital signs are of great interest for specialists due to the benefits obtained in both medical and special applications, such as those for surveillance, monitoring, and search and rescue. This paper investigates the possibility of implementing a digital processing algorithm based on the MUSIC (Multiple Signal Classification) parametric spectral estimation in order to reduce the observation time needed to accurately measure the heart rate. It demonstrates that, by properly dimensioning the signal subspace, the MUSIC algorithm can be optimized in order to accurately assess the heart rate during an 8-28 s time interval. The validation of the processing algorithm performance was achieved by minimizing the mean error of the heart rate after performing simultaneous comparative measurements on several subjects. In order to calculate the error, the reference value of the heart rate was measured using a classic measurement system through direct contact. PMID:24007088
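
    A minimal sketch, assuming numpy, of MUSIC pseudospectrum estimation applied to a short noisy oscillation; the sampling rate, window length, and subspace dimension are illustrative choices, not the paper's acquisition chain.

    ```python
    # Sketch: MUSIC pseudospectrum via the noise subspace of the sample
    # autocorrelation matrix; the peak locates the dominant frequency.
    import numpy as np

    def music_spectrum(x, m, p, freqs, fs):
        """x: signal; m: correlation-matrix order; p: signal-subspace dimension."""
        N = len(x)
        segs = np.array([x[i:i + m] for i in range(N - m)])   # sliding windows
        R = segs.T @ segs / (N - m)                           # sample autocorrelation
        w, V = np.linalg.eigh(R)                              # ascending eigenvalues
        En = V[:, : m - p]                                    # noise subspace
        n = np.arange(m)
        spec = np.empty(len(freqs))
        for k, f in enumerate(freqs):
            a = np.exp(-2j * np.pi * f / fs * n)              # steering vector
            spec[k] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
        return spec

    fs = 20.0                                     # Hz, illustrative sampling rate
    t = np.arange(0, 10, 1 / fs)                  # 10 s observation window
    hr = 1.2                                      # 1.2 Hz, i.e. 72 beats per minute
    rng = np.random.default_rng(0)
    x = np.cos(2 * np.pi * hr * t) + 0.5 * rng.normal(size=len(t))
    freqs = np.linspace(0.8, 3.0, 441)            # plausible heart-rate band
    est = freqs[np.argmax(music_spectrum(x, m=24, p=2, freqs=freqs, fs=fs))]
    print(f"estimated heart rate: {est * 60:.1f} bpm")
    ```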

  6. A non-contact method based on the multiple signal classification algorithm to reduce the measurement time for accurate heart rate detection

    NASA Astrophysics Data System (ADS)

    Bechet, P.; Mitran, R.; Munteanu, M.

    2013-08-01

    Non-contact methods for the assessment of vital signs are of great interest for specialists due to the benefits obtained in both medical and special applications, such as those for surveillance, monitoring, and search and rescue. This paper investigates the possibility of implementing a digital processing algorithm based on the MUSIC (Multiple Signal Classification) parametric spectral estimation in order to reduce the observation time needed to accurately measure the heart rate. It demonstrates that, by properly dimensioning the signal subspace, the MUSIC algorithm can be optimized in order to accurately assess the heart rate during an 8-28 s time interval. The validation of the processing algorithm performance was achieved by minimizing the mean error of the heart rate after performing simultaneous comparative measurements on several subjects. In order to calculate the error, the reference value of the heart rate was measured using a classic measurement system through direct contact.

  7. Is the SIOP-2001 Classification of Renal Tumors of Childhood accurate with regard to prognosis? A problem revisited

    PubMed Central

    Taran, Katarzyna; Młynarski, Wojciech; Sitkiewicz, Anna

    2012-01-01

    Introduction The goal of this study was to analyze morbidity and mortality of Wilms’ tumor based on the revised SIOP-2001 classification. Material and methods Sixty-four patients with unilateral Wilms’ tumor, 33 girls (51.5%) and 31 boys (48.5%), aged 1 to 144 months (mean: 42.8 months) were treated between 1993 and 2009. All patients underwent multimodal therapy according to the SIOP protocols. The follow-up period ranged from 2 to 18 years (mean: 11.6 years). Results Thirty-three patients (51.6%) had intermediate-risk, 6 (9.4%) low-risk and 25 (39%) high-risk tumors. Stage I disease was diagnosed in 28 (43.7%), stage II in 19 (29.7%), stage III in 8 (12.5%) and stage IV in 9 patients (14.1%). Event-free survival (EFS) in the entire group was 78.1% and overall survival (OS) was 92.2%. The EFS in stage IV (44.4%) was significantly lower than in stage I (82.1%, p = 0.04), stage II (89.5%, p = 0.02) and in the entire group (78.1%, p = 0.04). Sixteen complications were observed in 14 children (21.9%): metastases in 7 cases (10.9%), relapses in 8 (12.5%) and deaths in 5 (7.8%). Blastemal (20/24 – 83.3%) and anaplastic (3/24 – 12.5%) subtypes were responsible for mortality in high-risk tumors (OS – 87.5%), while poorly differentiated epithelial (7/34 – 20.6%) and regressive (8/34 – 23.5%) subtypes decreased OS (94.1%) in the intermediate-risk tumors. Conclusions The results of our study show that epithelial and regressive subtypes were responsible for mortality in the intermediate-risk Wilms’ tumors. PMID:23056081

  8. 16S classifier: a tool for fast and accurate taxonomic classification of 16S rRNA hypervariable regions in metagenomic datasets.

    PubMed

    Chaudhary, Nikhil; Sharma, Ashok K; Agarwal, Piyush; Gupta, Ankit; Sharma, Vineet K

    2015-01-01

    The diversity of microbial species in a metagenomic study is commonly assessed using 16S rRNA gene sequencing. With the rapid developments in genome sequencing technologies, the focus has shifted towards sequencing the hypervariable regions of the 16S rRNA gene instead of the full-length gene. Therefore, 16S Classifier was developed using a machine learning method, Random Forest, for fast and accurate taxonomic classification of short hypervariable regions of 16S rRNA sequences. It displayed precision values of up to 0.91 on training datasets and precision values of up to 0.98 on the test dataset. On real metagenomic datasets, it showed up to 99.7% accuracy at the phylum level and up to 99.0% accuracy at the genus level. 16S Classifier is available freely at http://metagenomics.iiserb.ac.in/16Sclassifier and http://metabiosys.iiserb.ac.in/16Sclassifier. PMID:25646627
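
    A minimal sketch of the tool's general strategy (machine-learned taxonomy assignment for short regions) using a Random Forest over tetranucleotide frequencies; the training sequences and genus labels are toy stand-ins, not the 16S Classifier's curated database or exact features.

    ```python
    # Sketch: Random Forest on 4-mer frequency features of short sequences.
    from itertools import product
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    KMERS = ["".join(p) for p in product("ACGT", repeat=4)]

    def features(seq):
        counts = dict.fromkeys(KMERS, 0)
        for i in range(len(seq) - 3):
            k = seq[i:i + 4]
            if k in counts:
                counts[k] += 1
        total = max(len(seq) - 3, 1)
        return [counts[k] / total for k in KMERS]

    rng = np.random.default_rng(0)

    def toy_region(base):
        s = list(base * 30)                       # ~150 bp toy hypervariable region
        for i in rng.integers(0, len(s), size=10):
            s[i] = "ACGT"[rng.integers(4)]        # sprinkle random point mutations
        return "".join(s)

    genera = {"Bacillus": "ACGGT", "Vibrio": "TTACG"}
    X = [features(toy_region(b)) for b in genera.values() for _ in range(20)]
    y = [g for g in genera for _ in range(20)]
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    print(clf.predict([features(toy_region("ACGGT"))]))   # expected: ['Bacillus']
    ```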

  9. Classification

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.

    2011-01-01

    A supervised learning task involves constructing a mapping from input data (normally described by several features) to the appropriate outputs. Within supervised learning, one type of task is a classification learning task, in which each output is one or more classes to which the input belongs. In supervised learning, a set of training examples---examples with known output values---is used by a learning algorithm to generate a model. This model is intended to approximate the mapping between the inputs and outputs. This model can be used to generate predicted outputs for inputs that have not been seen before. For example, we may have data consisting of observations of sunspots. In a classification learning task, our goal may be to learn to classify sunspots into one of several types. Each example may correspond to one candidate sunspot with various measurements or just an image. A learning algorithm would use the supplied examples to generate a model that approximates the mapping between each supplied set of measurements and the type of sunspot. This model can then be used to classify previously unseen sunspots based on the candidate's measurements. This chapter discusses methods to perform machine learning, with examples involving astronomy.

  10. Solar ultraviolet and the occupational radiant exposure of Queensland school teachers: A comparative study between teaching classifications and behavior patterns.

    PubMed

    Downs, Nathan J; Harrison, Simone L; Chavez, Daniel R Garzon; Parisi, Alfio V

    2016-05-01

    Classroom teachers located in Queensland, Australia are exposed to high levels of ambient solar ultraviolet as part of the occupational requirement to provide supervision of children during lunch and break times. We investigated the relationship between periods of outdoor occupational radiant exposure and available ambient solar radiation across different teaching classifications and schools relative to the daily occupational solar ultraviolet radiation (HICNIRP) protection standard of 30 J/m². Self-reported daily sun exposure habits (n=480) and personal radiant exposures were monitored using calibrated polysulphone dosimeters (n=474) in 57 teaching staff from 6 different schools located in tropical north and southern Queensland. Daily radiant exposure patterns among teaching groups were compared to the ambient UV-Index. Personal sun exposures were stratified among teaching classifications, school location, school ownership (government vs non-government), and type (primary vs secondary). Median daily radiant exposures were 15 J/m² and 5 J/m² HICNIRP for schools located in northern and southern Queensland respectively. Of the 474 analyzed dosimeter-days, 23.0% were found to exceed the solar radiation protection standard, with the highest prevalence found among physical education teachers (57.4% dosimeter-days), followed by teacher aides (22.6% dosimeter-days) and classroom teachers (18.1% dosimeter-days). In Queensland, peak outdoor exposure times of teaching staff correspond with periods of extreme UV-Index. The daily occupational HICNIRP radiant exposure standard was exceeded in all schools and in all teaching classifications. PMID:26963432

  11. Effect of Item Selection on Item Exposure Rates within a Computerized Classification Test.

    ERIC Educational Resources Information Center

    Kalohn, John C.; Spray, Judith A.

    The purpose of many certification or licensure tests is to identify candidates who possess some level of minimum competence to practice their profession. In general, this type of test is referred to as classification testing. When this type of test is administered with a computer, the test is a computerized classification test (CCT). This paper…

  12. A Stochastic Method for Balancing Item Exposure Rates in Computerized Classification Tests

    ERIC Educational Resources Information Center

    Huebner, Alan; Li, Zhushan

    2012-01-01

    Computerized classification tests (CCTs) classify examinees into categories such as pass/fail, master/nonmaster, and so on. This article proposes the use of stochastic methods from sequential analysis to address item overexposure, a practical concern in operational CCTs. Item overexposure is traditionally dealt with in CCTs by the Sympson-Hetter…

  13. A Single-Exposure Dual-Energy Computed Radiography Technique for Improved Nodule Detection and Classification in Chest Imaging

    NASA Astrophysics Data System (ADS)

    Zink, Frank Edward

    The detection and classification of pulmonary nodules is of great interest in chest radiography. Nodules are often indicative of primary cancer, and their detection is particularly important in asymptomatic patients. The ability to classify nodules as calcified or non-calcified is important because calcification is a positive indicator that the nodule is benign. Dual-energy methods offer the potential to improve both the detection and classification of nodules by allowing the formation of material-selective images. Tissue-selective images can improve detection by virtue of the elimination of obscuring rib structure. Bone-selective images are essentially calcium images, allowing classification of the nodule. A dual-energy technique is introduced which uses a computed radiography system to acquire dual-energy chest radiographs in a single exposure. All aspects of the dual-energy technique are described, with particular emphasis on scatter-correction, beam-hardening correction, and noise-reduction algorithms. The adaptive noise-reduction algorithm employed improves material-selective signal-to-noise ratio by up to a factor of seven with minimal sacrifice in selectivity. A clinical comparison study is described, undertaken to compare the dual-energy technique to conventional chest radiography for the tasks of nodule detection and classification. Observer performance data were collected using the Free Response Observer Characteristic (FROC) method and the bi-normal Alternative FROC (AFROC) performance model. Results of the comparison study, analyzed using two common multiple observer statistical models, showed that the dual-energy technique was superior to conventional chest radiography for detection of nodules at a statistically significant level (p < .05). Discussion of the comparison study emphasizes the unique combination of data collection and analysis techniques employed, as well as the limitations of comparison techniques in the larger context of technology

  14. Encoding negative events under stress: high subjective arousal is related to accurate emotional memory despite misinformation exposure.

    PubMed

    Hoscheidt, Siobhan M; LaBar, Kevin S; Ryan, Lee; Jacobs, W Jake; Nadel, Lynn

    2014-07-01

    Stress at encoding affects memory processes, typically enhancing, or preserving, memory for emotional information. These effects have interesting implications for eyewitness accounts, which in real-world contexts typically involve encoding an aversive event under stressful conditions followed by potential exposure to misinformation. The present study investigated memory for a negative event encoded under stress and subsequent misinformation endorsement. Healthy young adults participated in a between-groups design with three experimental sessions conducted 48 h apart. Session one consisted of a psychosocial stress induction (or control task) followed by incidental encoding of a negative slideshow. During session two, participants were asked questions about the slideshow, during which a random subgroup was exposed to misinformation. Memory for the slideshow was tested during the third session. Assessment of memory accuracy across stress and no-stress groups revealed that stress induced just prior to encoding led to significantly better memory for the slideshow overall. The classic misinformation effect was also observed - participants exposed to misinformation were significantly more likely to endorse false information during memory testing. In the stress group, however, memory accuracy and misinformation effects were moderated by arousal experienced during encoding of the negative event. Misinformed-stress group participants who reported that the negative slideshow elicited high arousal during encoding were less likely to endorse misinformation for the most aversive phase of the story. Furthermore, these individuals showed better memory for components of the aversive slideshow phase that had been directly misinformed. Results from the current study provide evidence that stress and high subjective arousal elicited by a negative event act concomitantly during encoding to enhance emotional memory such that the most aversive aspects of the event are well remembered and

  15. How accurate and precise are limited sampling strategies in estimating exposure to mycophenolic acid in people with autoimmune disease?

    PubMed

    Abd Rahman, Azrin N; Tett, Susan E; Staatz, Christine E

    2014-03-01

    maximum a posteriori (MAP) Bayesian analysis. Although mean bias was less when data were analysed using multiple linear regression, MAP Bayesian analysis is preferable because of its flexibility with respect to sample timing. Estimation of MPA AUC12 following EC-MPS administration using a limited sampling strategy with samples drawn within 3 h post-dose resulted in biased and imprecise results, likely due to a longer time to reach a peak MPA concentration (t max) with this formulation and more variable pharmacokinetic profiles. Inclusion of later sampling time points that capture enterohepatic recirculation and t max improved the predictive performance of strategies to predict EC-MPS exposure. Given the considerable pharmacokinetic variability associated with mycophenolate therapy, limited sampling strategies may potentially help in individualizing patient dosing. However, a compromise needs to be made between the predictive performance of the strategy and its clinical feasibility. An opportunity exists to combine research efforts globally to create an open-source database for MPA (AUC, concentrations and outcomes) that can be used and prospectively evaluated for AUC target-controlled dosing of MPA in autoimmune diseases. PMID:24327238

  16. Relationship between skin color and sun exposure history: a statistical classification approach.

    PubMed

    Rubegni, P; Cevenini, G; Flori, M L; Fimiani, M; Stanghellini, E; Molinu, A; Barbini, P; Andreassi, L

    1997-02-01

    In this study, our aim was to determine the biophysical values of constitutive skin color in Caucasians and to define the correlation between skin color and phototype assessed according to the Fitzpatrick method. Constitutive skin color was measured on the buttock, with a Minolta CR-200 colorimeter, in a population of 557 consecutive subjects belonging to phototype categories I, II, III and IV. The colorimeter expresses the results in five different color systems. We used the "Yxy" and L*a*b* systems, which are the most widespread in dermatology. Statistical analysis of the data showed that the "Yxy" system is even more discriminant than the L*a*b* system when the Fitzpatrick classification scheme is adopted as the reference, and shows a poor ability to correctly classify the intermediate phototypes (II and III). On the contrary, the "Yxy" system performs well in distinguishing phototypes I and IV. To establish whether this low discriminating capacity for phototypes II and III reflects a low discriminating capacity of the Fitzpatrick method or of our procedure, an objective technique (minimal erythemal dose) should be used to evaluate the percentage classification errors of both the Fitzpatrick method and the instrumental measurement of skin color. The results of such a study would be extremely important because the evaluation of skin color is objective, simple and has potential applications in dermatology and cosmetology. PMID:9066310

  17. EXPOSURE ASSESSMENT METHODS DEVELOPMENT PILOTS FOR THE NATIONAL CHILDREN'S STUDY

    EPA Science Inventory

    Accurate exposure classification tools are needed to link exposure with health effects. EPA began methods development pilot studies in 2000 to address general questions about exposures and outcome measures. Selected pilot studies are highlighted in this poster. The “Literature Re...

  18. Evidence that bisphenol A (BPA) can be accurately measured without contamination in human serum and urine, and that BPA causes numerous hazards from multiple routes of exposure

    PubMed Central

    vom Saal, Frederick S.; Welshons, Wade V.

    2016-01-01

    There is extensive evidence that bisphenol A (BPA) is related to a wide range of adverse health effects based on both human and experimental animal studies. However, a number of regulatory agencies have ignored all hazard findings. Reports of high levels of unconjugated (bioactive) serum BPA in dozens of human biomonitoring studies have also been rejected based on the prediction that the findings are due to assay contamination and that virtually all ingested BPA is rapidly converted to inactive metabolites. NIH and industry-sponsored round robin studies have demonstrated that serum BPA can be accurately assayed without contamination, while the FDA lab has acknowledged uncontrolled assay contamination. In reviewing the published BPA biomonitoring data, we find that assay contamination is, in fact, well controlled in most labs, and cannot be used as the basis for discounting evidence that significant and virtually continuous exposure to BPA must be occurring from multiple sources. PMID:25304273

  20. Accurate age classification of 6 and 12 month-old infants based on resting-state functional connectivity magnetic resonance imaging data

    PubMed Central

    Pruett, John R.; Kandala, Sridhar; Hoertel, Sarah; Snyder, Abraham Z.; Elison, Jed T.; Nishino, Tomoyuki; Feczko, Eric; Dosenbach, Nico U.F.; Nardos, Binyam; Power, Jonathan D.; Adeyemo, Babatunde; Botteron, Kelly N.; McKinstry, Robert C.; Evans, Alan C.; Hazlett, Heather C.; Dager, Stephen R.; Paterson, Sarah; Schultz, Robert T.; Collins, D. Louis; Fonov, Vladimir S.; Styner, Martin; Gerig, Guido; Das, Samir; Kostopoulos, Penelope; Constantino, John N.; Estes, Annette M.; Petersen, Steven E.; Schlaggar, Bradley L.; Piven, Joseph

    2015-01-01

    Human large-scale functional brain networks are hypothesized to undergo significant changes over development. Little is known about these functional architectural changes, particularly during the second half of the first year of life. We used multivariate pattern classification of resting-state functional connectivity magnetic resonance imaging (fcMRI) data obtained in an on-going, multi-site, longitudinal study of brain and behavioral development to explore whether fcMRI data contained information sufficient to classify infant age. Analyses carefully account for the effects of fcMRI motion artifact. Support vector machines (SVMs) classified 6 versus 12 month-old infants (128 datasets) above chance based on fcMRI data alone. Results demonstrate significant changes in measures of brain functional organization that coincide with a special period of dramatic change in infant motor, cognitive, and social development. Explorations of the most different correlations used for SVM lead to two different interpretations about functional connections that support 6 versus 12-month age categorization. PMID:25704288
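
    A minimal sketch, with synthetic stand-in data, of the analysis style described in this record: a linear SVM classifying 6- versus 12-month infants from vectors of functional connectivity values, evaluated by leave-one-out cross-validation. The feature count and effect size are invented; real inputs would be motion-scrubbed fcMRI correlation matrices.

```python
# Toy age classification from stand-in functional connectivity features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(2)
n_subjects, n_connections = 128, 300
X = rng.normal(size=(n_subjects, n_connections))   # stand-in fcMRI features
age = np.repeat([6, 12], n_subjects // 2)
X[age == 12, :20] += 0.4   # small age-related shift in a few connections

svm = SVC(kernel="linear", C=1.0)
accuracy = cross_val_score(svm, X, age, cv=LeaveOneOut()).mean()
print(f"LOOCV accuracy: {accuracy:.2f} (chance = 0.50)")
```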

  1. Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the ‘Extreme Learning Machine’ Algorithm

    PubMed Central

    McDonnell, Mark D.; Tissera, Migel D.; Vladusich, Tony; van Schaik, André; Tapson, Jonathan

    2015-01-01

    Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the ‘Extreme Learning Machine’ (ELM) approach, which also enables a very rapid training time (∼ 10 minutes). Adding distortions, as is common practice for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden unit operates only on a randomly sized and positioned patch of each image. This form of random ‘receptive field’ sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden units required to achieve a particular performance. Our close to state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration either as a standalone method for simpler problems, or as the final classification stage in deep neural networks applied to more difficult problems. PMID:26262687
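
    A sketch of the core idea described here: fixed random sparse input weights with random 'receptive field' patches, a nonlinear hidden layer, and output weights solved in closed form by ridge-regularised least squares. The patch sizes, ridge constant, and tanh nonlinearity are illustrative assumptions, not the paper's exact recipe.

```python
# Minimal ELM with random receptive-field input weights.
import numpy as np

def elm_train(X, Y, n_hidden=1000, img_side=28, ridge=1e-2, seed=0):
    """X: (n, img_side**2) images; Y: (n, n_classes) one-hot labels."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(size=(d, n_hidden))
    # Random receptive fields: zero each hidden unit's weights outside a
    # randomly sized and positioned square patch, making W sparse.
    for j in range(n_hidden):
        s = rng.integers(6, 18)                     # patch side length
        r, c = rng.integers(0, img_side - s, size=2)
        mask = np.zeros((img_side, img_side))
        mask[r:r + s, c:c + s] = 1.0
        W[:, j] *= mask.ravel()
    H = np.tanh(X @ W)                              # hidden activations
    # Closed-form output weights (no iterative training of W).
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ Y)
    return W, beta

def elm_predict(X, W, beta):
    return np.argmax(np.tanh(X @ W) @ beta, axis=1)
```

    Usage would be, for MNIST-like data scaled to [0, 1]: `W, beta = elm_train(X_train, Y_onehot)` followed by `elm_predict(X_test, W, beta)`.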

  2. Rapid and accurate taxonomic classification of insect (class Insecta) cytochrome c oxidase subunit 1 (COI) DNA barcode sequences using a naïve Bayesian classifier

    PubMed Central

    Porter, Teresita M; Gibson, Joel F; Shokralla, Shadi; Baird, Donald J; Golding, G Brian; Hajibabaei, Mehrdad

    2014-01-01

    Current methods to identify unknown insect (class Insecta) cytochrome c oxidase (COI barcode) sequences often rely on thresholds of distances that can be difficult to define, sequence similarity cut-offs, or monophyly. Some of the most commonly used metagenomic classification methods do not provide a measure of confidence for the taxonomic assignments they provide. The aim of this study was to use a naïve Bayesian classifier (Wang et al. Applied and Environmental Microbiology, 2007; 73: 5261) to automate taxonomic assignments for large batches of insect COI sequences such as data obtained from high-throughput environmental sequencing. This method provides rank-flexible taxonomic assignments with an associated bootstrap support value, and it is faster than the BLAST-based methods commonly used in environmental sequence surveys. We have developed and rigorously tested the performance of three different training sets using leave-one-out cross-validation, two field data sets, and targeted testing of Lepidoptera, Diptera and Mantodea sequences obtained from the Barcode of Life Data system. We found that type I error rates, incorrect taxonomic assignments with a high bootstrap support, were already relatively low but could be lowered further by ensuring that all query taxa are actually present in the reference database. Choosing bootstrap support cut-offs according to query length and summarizing taxonomic assignments to more inclusive ranks can also help to reduce error while retaining the maximum number of assignments. Additionally, we highlight gaps in the taxonomic and geographic representation of insects in public sequence databases that will require further work by taxonomists to improve the quality of assignments generated using any method.
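
    A toy sketch of the naïve Bayesian classifier idea (after Wang et al. 2007): score each reference taxon by summed log probabilities of the query's k-mers ('words'), then attach a bootstrap support value by re-classifying random subsamples of those words. The k-mer length, smoothing constants, and 1/8-sized subsamples follow the RDP-style recipe but are assumptions here; `ref_seqs` stands in for a real training set of (taxon, COI sequence) pairs.

```python
# Word-based naive Bayes with bootstrap support, uniform taxon priors.
import numpy as np
from collections import defaultdict

K = 8  # k-mer length

def kmers(seq):
    return {seq[i:i + K] for i in range(len(seq) - K + 1)}

def train(ref_seqs):
    counts = defaultdict(lambda: defaultdict(int))  # taxon -> word -> count
    totals = defaultdict(int)                       # taxon -> n sequences
    for taxon, seq in ref_seqs:
        totals[taxon] += 1
        for w in kmers(seq):
            counts[taxon][w] += 1
    return counts, totals

def classify(query, counts, totals, n_boot=100, seed=0):
    rng = np.random.default_rng(seed)
    words = np.array(list(kmers(query)))

    def best(subset):
        scores = {t: sum(np.log((counts[t][w] + 0.5) / (totals[t] + 1.0))
                         for w in subset) for t in totals}
        return max(scores, key=scores.get)

    assignment = best(words)
    # Bootstrap support: fraction of 1/8-sized word subsamples agreeing.
    size = max(1, len(words) // 8)
    hits = sum(best(rng.choice(words, size=size)) == assignment
               for _ in range(n_boot))
    return assignment, hits / n_boot
```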

  3. Classification of personal exposure to radio frequency electromagnetic fields (RF-EMF) for epidemiological research: Evaluation of different exposure assessment methods.

    PubMed

    Frei, Patrizia; Mohler, Evelyn; Bürgi, Alfred; Fröhlich, Jürg; Neubauer, Georg; Braun-Fahrländer, Charlotte; Röösli, Martin

    2010-10-01

    The use of personal exposure meters (exposimeters) has been recommended for measuring personal exposure to radio frequency electromagnetic fields (RF-EMF) from environmental far-field sources in everyday life. However, it is unclear to what extent exposimeter readings are affected by measurements taken when personal mobile and cordless phones are used. In addition, the use of exposimeters in large epidemiological studies is limited due to high costs and large effort for study participants. In the current analysis we aimed to investigate the impact of personal phone use on exposimeter readings and to evaluate different exposure assessment methods potentially useful in epidemiological studies. We collected personal exposimeter measurements during one week and diary data from 166 study participants. Moreover, we collected spot measurements in the participants' bedrooms and data on self-estimated exposure, assessed residential exposure to fixed site transmitters by calculating the geo-coded distance and mean RF-EMF from a geospatial propagation model, and developed an exposure prediction model based on the propagation model and exposure relevant behavior. The mean personal exposure was 0.13 mW/m² when measurements during personal phone calls were excluded and 0.15 mW/m² when such measurements were included. The Spearman correlation with personal exposure (without personal phone calls) was 0.42 (95%-CI: 0.29 to 0.55) for the spot measurements, -0.03 (95%-CI: -0.18 to 0.12) for the geo-coded distance, 0.28 (95%-CI: 0.14 to 0.42) for the geospatial propagation model, 0.50 (95%-CI: 0.37 to 0.61) for the full exposure prediction model and 0.06 (95%-CI: -0.10 to 0.21) for self-estimated exposure. In conclusion, personal exposure measured with exposimeters correlated best with the full exposure prediction model and spot measurements. Self-estimated exposure and geo-coded distance turned out to be poor surrogates for personal exposure. PMID:20538340
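
    A sketch of the comparison metric used above: the Spearman rank correlation between an exposure surrogate and measured personal exposure, with an approximate 95% CI from the Fisher z-transform. The simulated exposures are placeholders on roughly the reported scale (~0.13 mW/m²).

```python
# Spearman correlation with an approximate Fisher-transform 95% CI.
import numpy as np
from scipy import stats

def spearman_ci(x, y, alpha=0.05):
    rho, _ = stats.spearmanr(x, y)
    z, se = np.arctanh(rho), 1.0 / np.sqrt(len(x) - 3)
    zcrit = stats.norm.ppf(1 - alpha / 2)
    return rho, (np.tanh(z - zcrit * se), np.tanh(z + zcrit * se))

rng = np.random.default_rng(3)
personal = rng.lognormal(mean=np.log(0.13), sigma=0.6, size=166)   # mW/m^2
surrogate = personal * rng.lognormal(0.0, 0.8, size=166)           # noisy model
rho, (lo, hi) = spearman_ci(surrogate, personal)
print(f"rho = {rho:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```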

  4. Improvement of the Cramer classification for oral exposure using the database TTC RepDose - A strategy description

    EPA Science Inventory

    The present report describes a strategy to refine the current Cramer classification of the TTC concept using a broad database (DB) termed TTC RepDose. Cramer classes 1-3 overlap to some extent, indicating a need for a better separation of structural classes likely to be toxic, mo...

  5. Malingering in Toxic Exposure. Classification Accuracy of Reliable Digit Span and WAIS-III Digit Span Scaled Scores

    ERIC Educational Resources Information Center

    Greve, Kevin W.; Springer, Steven; Bianchini, Kevin J.; Black, F. William; Heinly, Matthew T.; Love, Jeffrey M.; Swift, Douglas A.; Ciota, Megan A.

    2007-01-01

    This study examined the sensitivity and false-positive error rate of reliable digit span (RDS) and the WAIS-III Digit Span (DS) scaled score in persons alleging toxic exposure and determined whether error rates differed from published rates in traumatic brain injury (TBI) and chronic pain (CP). Data were obtained from the files of 123 persons…

  6. Ozone exposure and cardiovascular-related mortality in the Canadian Census Health and Environment Cohort (CANCHEC) by spatial synoptic classification zone.

    PubMed

    Cakmak, Sabit; Hebbern, Chris; Vanos, Jennifer; Crouse, Dan L; Burnett, Rick

    2016-07-01

    Our objective is to analyse the association between long term ozone exposure and cardiovascular related mortality while accounting for climate, location, and socioeconomic factors. We assigned subjects with 16 years of follow-up in the Canadian Census Health and Environment Cohort (CanCHEC) to one of seven regions based on spatial synoptic classification (SSC) weather types and examined the interaction of exposure to both fine particulate matter (PM2.5) and ground level ozone and cause of death using survival analysis, while adjusting for socioeconomic characteristics and individual confounders. Correlations between ozone and PM2.5 varied across SSC zones from -0.02 to 0.7. Comparing zones using the most populated SSC zone as a reference, a 10 ppb increase in ozone exposure was associated with increases in hazard ratios (HRs) that ranged from 1.007 (95% CI 0.99, 1.015) to 1.03 (95% CI 1.02, 1.041) for cardiovascular disease, 1.013 (95% CI 0.996, 1.03) to 1.058 (95% CI 1.034, 1.082) for cerebrovascular disease, and 1.02 (95% CI 1.006, 1.034) for ischemic heart disease. HRs remained significant after adjustment for PM2.5. Long term exposure to ozone is related to an increased risk of mortality from cardiovascular and cerebrovascular diseases; the risk varies by location across Canada and is not attenuated by adjustment for PM2.5. This research shows that the SSC can be used to define geographic regions and it demonstrates the importance of accounting for that spatial variability when studying the long term health effects of air pollution. PMID:27131819
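
    A sketch of the survival analysis described, on simulated data: a Cox proportional hazards model of cardiovascular death against long-term ozone, adjusted for PM2.5, reporting the hazard ratio per 10 ppb ozone. The lifelines package is one common implementation choice; the simulated effect sizes and 16-year follow-up are assumptions mirroring the record.

```python
# Cox PH hazard ratio per 10 ppb ozone, PM2.5-adjusted, on simulated data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(4)
n = 5000
ozone = rng.normal(30.0, 8.0, n)      # ppb
pm25 = rng.normal(8.0, 3.0, n)        # ug/m^3
# Exponential event times with a true log-HR of 0.002 per ppb ozone.
rate = 0.01 * np.exp(0.002 * ozone + 0.01 * pm25)
t = rng.exponential(1.0 / rate)
df = pd.DataFrame({"time": np.minimum(t, 16.0),
                   "event": (t <= 16.0).astype(int),
                   "ozone": ozone, "pm25": pm25})

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
hr_per_10ppb = np.exp(10 * cph.params_["ozone"])
print(f"HR per 10 ppb ozone, PM2.5-adjusted: {hr_per_10ppb:.3f}")
```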

  7. Glacial retreat between the Late-Glacial and Early Holocene sequences in the Southern French Alps : definition of an accurate pattern by new Cosmic Ray Exposure ages.

    NASA Astrophysics Data System (ADS)

    Cossart, Etienne; Fort, Monique; Bourlès, Didier; Braucher, Régis; Carcaillet, Julien; Perrier, Romain; Siame, Lionel; Gribenski, Natacha

    2010-05-01

    The Southern French Alps, characterized by many climatic influences (oceanic, continental and Mediterranean), remain a scientific problem for palaeo-environmental studies. Indeed, the lack of chronological benchmarks has hitherto hampered the definition of sequences of glacier variations since the Last Glacial Maximum (LGM), even if a scenario was based upon extensive fieldwork carried out in the Ubaye valley. This scenario was then considered a regional model by many geomorphologists, but this valley is not necessarily representative of the entire region. Firstly, this valley is the driest area within the Southern French Alps due to the sheltering effect of relief against humid fluxes. Secondly, the topography (altitudes, slopes and shapes) of the upper parts of the watersheds is not particularly prone to snow accumulation in the cirques. The established scenario is as follows: glaciers shrank and decayed between the LGM and the Late-Glacial periods, and glaciers were restricted to cirque areas during the Late-Glacial and Holocene glaciations. We discuss this model using geomorphic investigations and new chronological benchmarks acquired in the Briançonnais area, in the upper part of the Durance watershed. The upper part of the Durance watershed was chosen because it corresponds to the accumulation zone of the main glacier of the Southern French Alps during the LGM. Thanks to extensive fieldwork and geomorphic mapping of remnants of past glaciations, and thanks to new chronological data (about 35 cosmic ray exposure (CRE) ages, acquired in 2004 and 2009), we propose here the first absolute scenario established in the very upper part of the catchment. To assess CRE ages, we sampled glacially polished surfaces, along both longitudinal and transverse valley cross-sections, in order to assess both the retreat of the front and the thinning rate of the glacial tongue. We also paid attention to knobs located at the outlet of glacial cirques, and some morainic ridges. The…

  8. Photometric brown-dwarf classification. II. A homogeneous sample of 1361 L and T dwarfs brighter than J = 17.5 with accurate spectral types

    NASA Astrophysics Data System (ADS)

    Skrzypek, N.; Warren, S. J.; Faherty, J. K.

    2016-04-01

    We present a homogeneous sample of 1361 L and T dwarfs brighter than J = 17.5 (of which 998 are new), from an effective area of 3070 deg2, classified by the photo-type method to an accuracy of one spectral sub-type using izYJHKW1W2 photometry from SDSS+UKIDSS+WISE. Other than a small bias in the early L types, the sample is shown to be effectively complete to the magnitude limit, for all spectral types L0 to T8. The nature of the bias is an incompleteness estimated at 3% because peculiar blue L dwarfs of type L4 and earlier are classified late M. There is a corresponding overcompleteness because peculiar red (likely young) late M dwarfs are classified early L. Contamination of the sample is confirmed to be small: so far spectroscopy has been obtained for 19 sources in the catalogue and all are confirmed to be ultracool dwarfs. We provide coordinates and izYJHKW1W2 photometry of all sources. We identify an apparent discontinuity, Δm ~ 0.4 mag, in the Y - K colour between spectral types L7 and L8. We present near-infrared spectra of nine sources identified by photo-type as peculiar, including a new low-gravity source ULAS J005505.68+013436.0, with spectroscopic classification L2γ. We provide revised izYJHKW1W2 template colours for late M dwarfs, types M7 to M9. The catalogue is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/589/A49

  9. Classification Options

    ERIC Educational Resources Information Center

    Exceptional Children, 1978

    1978-01-01

    The interview presents opinions of Nicholas Hobbs on the classification of exceptional children, including topics such as ecologically oriented classification systems, the role of parents, and the need for revision of teacher preparation programs. (IM)

  10. Endodontic classification.

    PubMed

    Morse, D R; Seltzer, S; Sinai, I; Biron, G

    1977-04-01

    Clinical and histopathologic findings are mixed in current endodontic classifications. A new system, based on symptomatology, may be more useful in clinical practice. The classifications are vital asymptomatic, hypersensitive dentin, inflamed-reversible, inflamed/degenerating without area-irreversible, inflamed/degenerating with area-irreversible, necrotic without area, and necrotic with area. PMID:265327

  11. Short Time Exposure (STE) test in conjunction with Bovine Corneal Opacity and Permeability (BCOP) assay including histopathology to evaluate correspondence with the Globally Harmonized System (GHS) eye irritation classification of textile dyes.

    PubMed

    Oliveira, Gisele Augusto Rodrigues; Ducas, Rafael do Nascimento; Teixeira, Gabriel Campos; Batista, Aline Carvalho; Oliveira, Danielle Palma; Valadares, Marize Campos

    2015-09-01

    Eye irritation evaluation is mandatory for predicting health risks in consumers exposed to textile dyes. The two dyes, Reactive Orange 16 (RO16) and Reactive Green 19 (RG19), are classified as Category 2A (irritating to eyes) based on the UN Globally Harmonized System for classification (UN GHS), according to the Draize test. On the other hand, animal welfare considerations and the enforcement of a new regulation in the EU are drawing much attention to reducing or replacing animal experiments with alternative methods. This study evaluated the eye irritation of the two dyes RO16 and RG19 by combining the Short Time Exposure (STE) and the Bovine Corneal Opacity and Permeability (BCOP) assays and then comparing them with in vivo data from the GHS classification. The STE test (first level screening) categorized both dyes as GHS Category 1 (severe irritant). In the BCOP, dye RG19 was also classified as GHS Category 1, while for dye RO16 the GHS outcome was "no prediction can be made". Both dyes caused damage to the corneal tissue as confirmed by histopathological analysis. Our findings demonstrated that the STE test did not contribute to arriving at a better conclusion about the eye irritation potential of the dyes when used in conjunction with the BCOP test. Adding histopathology to the BCOP test could be an appropriate tool for a more meaningful prediction of the eye irritation potential of dyes. PMID:26026500

  12. Grading More Accurately

    ERIC Educational Resources Information Center

    Rom, Mark Carl

    2011-01-01

    Grades matter. College grading systems, however, are often ad hoc and prone to mistakes. This essay focuses on one factor that contributes to high-quality grading systems: grading accuracy (or "efficiency"). I proceed in several steps. First, I discuss the elements of "efficient" (i.e., accurate) grading. Next, I present analytical results…

  13. Accurate monotone cubic interpolation

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1991-01-01

    Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema where accuracy degenerates to second-order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants, which preserve monotonicity as well as uniform third and fourth-order accuracy, are presented. The gain of accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
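
    A sketch of the accuracy trade-off this record describes: near interior extrema a standard monotone cubic interpolant (here SciPy's PCHIP, a common monotone scheme, not the paper's algorithm) is less accurate than an unconstrained cubic spline, which in turn may overshoot.

```python
# Compare a monotone cubic interpolant with an unconstrained cubic spline.
import numpy as np
from scipy.interpolate import PchipInterpolator, CubicSpline

x = np.linspace(0.0, 1.0, 9)
f = lambda t: np.sin(2 * np.pi * t)   # smooth data with interior extrema
xx = np.linspace(0.0, 1.0, 1001)

pchip = PchipInterpolator(x, f(x))    # monotonicity-preserving
spline = CubicSpline(x, f(x))         # no shape constraint

print("max |error|, PCHIP :", np.max(np.abs(pchip(xx) - f(xx))))
print("max |error|, spline:", np.max(np.abs(spline(xx) - f(xx))))
```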

  14. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.

  15. Allergic reactions following contrast material administration: nomenclature, classification, and mechanisms.

    PubMed

    Palmiere, Cristian; Comment, Lionel; Mangin, Patrice

    2014-01-01

    In routine forensic pathology, fatal cases of contrast agent exposure can occasionally be encountered. In such situations, beyond the difficulties inherent in establishing the cause of death due to nonspecific or absent autopsy and histology findings as well as limited laboratory investigations, pathologists may face other problems in formulating exhaustive, complete reports and scientifically accurate conclusions. Indeed, terminology concerning adverse drug reactions and allergy nomenclature is confusing. Some terms, still utilized in forensic and radiological reports, are outdated and should be avoided. Additionally, not all forensic pathologists master contrast material classification and the pathogenesis of contrast agent reactions. We present a review of the literature covering allergic reactions to contrast material exposure in order to update the terminology used, explain the pathophysiology, and list currently available laboratory investigations for diagnosis in the forensic setting. PMID:24061700

  16. Remote sensing (normalized difference vegetation index) classification of risk versus minimal risk habitats for human exposure to Ixodes pacificus (Acari: Ixodidae) nymphs in Mendocino County, California.

    PubMed

    Eisen, Rebecca J; Eisen, Lars; Lane, Robert S

    2005-01-01

    In California, Ixodes pacificus Cooley & Kohls nymphs have been implicated as the primary bridging vectors to humans of the spirochetal bacterium causing Lyme disease (Borrelia burgdorferi). Because the nymphs typically do not ascend emergent vegetation, risk of human exposure is minimal in grasslands, chaparral, and woodland-grass. Instead, woodlands with a ground cover dominated by leaf litter (hereinafter referred to as woodland-leaf) have emerged as a primary risk habitat for exposure to B. burgdorferi-infected nymphs. As a means of differentiating woodland-leaf habitats from others with minimal risk (e.g., chaparral, grassland, and woodland-grass), we constructed a maximum likelihood model of these habitat types within a 7,711-ha area in southeastern Mendocino County based on the normalized difference vegetation index derived from Landsat 5 Thematic Mapper imagery (based on a 30 by 30-m pixel size) over four seasons. The overall accuracy of the model to discriminate woodland-leaf, woodland-grass, open grassland, and chaparral was 83.85% (Kappa coefficient of 0.78). Validation of the accuracy of the model to classify woodland-leaf yielded high values both for producer accuracy (93.33% of validated woodland-leaf pixels correctly classified by the model) and user accuracy (96.55% of model-classified validation pixels correctly categorized as woodland-leaf). Woodland-leaf habitats were found to be highly aggregated within the examined area. In conclusion, our model successfully used remotely sensed data as a predictor of habitats where humans are at risk for Lyme disease in the far-western United States. PMID:15691012
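
    A sketch of the classification idea in this record: one NDVI value per season per pixel, and a maximum likelihood (Gaussian) classifier over the four-value seasonal NDVI profile. The class means and covariance below are invented placeholders, not the study's fitted signatures.

```python
# NDVI computation plus a per-class Gaussian maximum likelihood classifier.
import numpy as np
from scipy.stats import multivariate_normal

def ndvi(nir, red):
    return (nir - red) / (nir + red)

classes = {  # hypothetical seasonal NDVI means (spring, summer, fall, winter)
    "woodland-leaf":  [0.70, 0.75, 0.55, 0.45],
    "woodland-grass": [0.65, 0.45, 0.35, 0.40],
    "grassland":      [0.55, 0.25, 0.20, 0.35],
    "chaparral":      [0.45, 0.40, 0.35, 0.35],
}
cov = np.eye(4) * 0.004  # shared covariance, for simplicity

def classify(profile):
    scores = {name: multivariate_normal.logpdf(profile, mu, cov)
              for name, mu in classes.items()}
    return max(scores, key=scores.get)

nir = np.array([0.42, 0.48, 0.33, 0.29])   # per-season NIR reflectance
red = np.array([0.07, 0.08, 0.10, 0.11])   # per-season red reflectance
print(classify(ndvi(nir, red)))            # -> 'woodland-leaf'
```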

  17. Accurate measurement of time

    NASA Astrophysics Data System (ADS)

    Itano, Wayne M.; Ramsey, Norman F.

    1993-07-01

    The paper discusses current methods for accurate measurements of time by conventional atomic clocks, with particular attention given to the principles of operation of atomic-beam frequency standards, atomic hydrogen masers, and atomic fountains, and to the potential use of strings of trapped mercury ions as a time device more stable than conventional atomic clocks. The areas of application of the ultraprecise and ultrastable time-measuring devices that tax the capacity of modern atomic clocks include radio astronomy and tests of relativity. The paper also discusses practical applications of ultraprecise clocks, such as navigation of space vehicles and pinpointing the exact position of ships and other objects on Earth using GPS.

  18. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  19. Classification of spatially unresolved objects

    NASA Technical Reports Server (NTRS)

    Nalepka, R. F.; Horwitz, H. M.; Hyde, P. D.; Morgenstern, J. P.

    1972-01-01

    A proportion estimation technique for the classification of multispectral scanner images is reported that uses data point averaging to extract and compute estimated proportions for a single average data point, in order to classify spatially unresolved areas. Example extraction calculations of spectral signatures for bare soil, weeds, alfalfa, and barley prove quite accurate.

  20. AN INDUSTRIAL HYGIENE SAMPLING STRATEGY TO QUANTIFY EMPLOYEE EXPOSURE

    SciTech Connect

    Thompson, Aaron L.; Hylko, James M.

    2003-02-27

    Depending on the invasive nature of performing waste management activities, excessive concentrations of mists, vapors, gases, dusts or fumes may be present, thus creating hazards to the employee from either inhalation into the lungs or absorption through the skin. To address these hazards, similar exposure groups and an exposure profile result consisting of: (1) a hazard index (concentration); (2) an exposure rating (monitoring results or exposure probabilities); and (3) a frequency rating (hours of potential exposure per week) are used to assign an exposure risk rating (ERR). The ERR determines if the potential hazards pose significant risks to employees, linking potential exposure and breathing zone (BZ) monitoring requirements. Three case studies, consisting of (1) a hazard-task approach; (2) a hazard-job classification-task approach; and (3) a hazard approach, demonstrate how to conduct exposure assessments using this methodology. Environment, safety and health professionals can then categorize levels of risk and evaluate the need for BZ monitoring, thereby quantifying employee exposure levels accurately.
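
    A minimal sketch of the ERR logic described above. The 1-4 ordinal scales and the monitoring threshold are illustrative assumptions; the record's actual rating tables would be needed for real use.

```python
# Combine hazard, exposure, and frequency ratings into an ERR.
def exposure_risk_rating(hazard_index, exposure_rating, frequency_rating):
    """Combine the three ratings (each assumed on a 1=low .. 4=high scale)."""
    return hazard_index * exposure_rating * frequency_rating

def bz_monitoring_required(err, threshold=12):
    """Flag similar exposure groups whose ERR meets or exceeds a threshold."""
    return err >= threshold

err = exposure_risk_rating(hazard_index=3, exposure_rating=2, frequency_rating=3)
print(err, bz_monitoring_required(err))  # 18 True -> monitor this group
```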

  1. Form classification

    NASA Astrophysics Data System (ADS)

    Reddy, K. V. Umamaheswara; Govindaraju, Venu

    2008-01-01

    The problem of form classification is to assign a single-page form image to one of a set of predefined form types or classes. We classify the form images using low-level pixel density information from the binary images of the documents. In this paper, we solve the form classification problem with a classifier based on the k-means algorithm, supported by adaptive boosting. Our classification method is tested on the NIST scanned tax form databases (special forms databases 2 and 6), which include machine-typed and handwritten documents. Our method improves the performance over published results on the same databases, while still using a simple set of image features.

  2. The Influence of Second-Hand Cigarette Smoke Exposure during Childhood and Active Cigarette Smoking on Crohn’s Disease Phenotype Defined by the Montreal Classification Scheme in a Western Cape Population, South Africa

    PubMed Central

    Chivese, Tawanda; Esterhuizen, Tonya M.; Basson, Abigail Raffner

    2015-01-01

    Background Smoking may worsen the disease outcomes in patients with Crohn’s disease (CD), however the effect of exposure to second-hand cigarette smoke during childhood is unclear. In South Africa, no such literature exists. The aim of this study was to investigate whether disease phenotype, at time of diagnosis of CD, was associated with exposure to second-hand cigarette during childhood and active cigarette smoking habits. Methods A cross sectional examination of all consecutive CD patients seen during the period September 2011-January 2013 at 2 large inflammatory bowel disease centers in the Western Cape, South Africa was performed. Data were collected via review of patient case notes, interviewer-administered questionnaire and clinical examination by the attending gastroenterologist. Disease phenotype (behavior and location) was evaluated at time of diagnosis, according to the Montreal Classification scheme. In addition, disease behavior was stratified as ‘complicated’ or ‘uncomplicated’, using predefined definitions. Passive cigarette smoke exposure was evaluated during 3 age intervals: 0–5, 6–10, and 11–18 years. Results One hundred and ninety four CD patients were identified. Cigarette smoking during the 6 months prior to, or at time of diagnosis was significantly associated with ileo-colonic (L3) disease (RRR = 3.63; 95%CI, 1.32–9.98, p = 0.012) and ileal (L1) disease (RRR = 3.54; 95%CI, 1.06–11.83, p = 0.040) compared with colonic disease. In smokers, childhood passive cigarette smoke exposure during the 0–5 years age interval was significantly associated with ileo-colonic CD location (RRR = 21.3; 95%CI, 1.16–391.55, p = 0.040). No significant association between smoking habits and disease behavior at diagnosis, whether defined by the Montreal scheme, or stratified as ‘complicated’ vs ‘uncomplicated’, was observed. Conclusion Smoking habits were associated with ileo-colonic (L3) and ileal (L1) disease at time of diagnosis in

  3. Learning classification trees

    NASA Technical Reports Server (NTRS)

    Buntine, Wray

    1991-01-01

    Algorithms for learning classification trees have had successes in artificial intelligence and statistics over many years. How a tree learning algorithm can be derived from Bayesian decision theory is outlined. This introduces Bayesian techniques for splitting, smoothing, and tree averaging. The splitting rule turns out to be similar to Quinlan's information gain splitting rule, while smoothing and averaging replace pruning. Comparative experiments with reimplementations of a minimum encoding approach, Quinlan's C4, and Breiman et al.'s CART show that the full Bayesian algorithm is consistently as good as, or more accurate than, these other approaches, though at a computational price.

  4. An accurate registration technique for distorted images

    NASA Technical Reports Server (NTRS)

    Delapena, Michele; Shaw, Richard A.; Linde, Peter; Dravins, Dainis

    1990-01-01

    Accurate registration of International Ultraviolet Explorer (IUE) images is crucial because the variability of the geometrical distortions that are introduced by the SEC-Vidicon cameras ensures that raw science images are never perfectly aligned with the Intensity Transfer Functions (ITFs) (i.e., graded floodlamp exposures that are used to linearize and normalize the camera response). A technique for precisely registering IUE images which uses a cross correlation of the fixed pattern that exists in all raw IUE images is described.
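
    A sketch of registration by cross-correlation of a fixed pattern: the integer shift between two images is read off the peak of their FFT-based phase correlation. Sub-pixel refinement and the IUE-specific fixed pattern are omitted; the test images are synthetic.

```python
# FFT phase correlation for shift registration.
import numpy as np

def register_shift(reference, moving):
    """Return the (row, col) shift that maps `reference` onto `moving`."""
    F = np.conj(np.fft.fft2(reference)) * np.fft.fft2(moving)
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real  # phase correlation
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), float)
    size = np.array(corr.shape)
    wrap = peak > size / 2
    peak[wrap] -= size[wrap]  # convert wrapped peaks to signed shifts
    return peak

rng = np.random.default_rng(5)
img = rng.normal(size=(128, 128))
shifted = np.roll(img, (7, -3), axis=(0, 1))
print(register_shift(img, shifted))  # ~ [ 7. -3.]
```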

  5. Classifying Classification

    ERIC Educational Resources Information Center

    Novakowski, Janice

    2009-01-01

    This article describes the experience of a group of first-grade teachers as they tackled the science process of classification, a targeted learning objective for the first grade. While the two-year process was not easy and required teachers to teach in a new, more investigation-oriented way, the benefits were great. The project helped teachers and…

  6. Neuromuscular disease classification system

    NASA Astrophysics Data System (ADS)

    Sáez, Aurora; Acha, Begoña; Montero-Sánchez, Adoración; Rivas, Eloy; Escudero, Luis M.; Serrano, Carmen

    2013-06-01

    Diagnosis of neuromuscular diseases is based on subjective visual assessment of biopsies from patients by a specialist pathologist. A system for objective analysis and classification of muscular dystrophies and neurogenic atrophies from muscle biopsy images of fluorescence microscopy is presented. The procedure starts with an accurate segmentation of the muscle fibers using mathematical morphology and a watershed transform. A feature extraction step is carried out in two parts: 24 features that pathologists take into account to diagnose the diseases and 58 structural features that the human eye cannot see, based on the assumption that the biopsy is considered as a graph, where the nodes are represented by each fiber, and two nodes are connected if two fibers are adjacent. A feature selection using sequential forward selection and sequential backward selection methods, a classification using a Fuzzy ARTMAP neural network, and a study of grading the severity are performed on these two sets of features. A database consisting of 91 images was used: 71 images for the training step and 20 as the test. A classification error of 0% was obtained. It is concluded that the addition of features undetectable by human visual inspection improves the categorization of atrophic patterns.

  7. Multisensor classification of sedimentary rocks

    NASA Technical Reports Server (NTRS)

    Evans, Diane

    1988-01-01

    A comparison is made between linear discriminant analysis and supervised classification results based on signatures from the Landsat TM, the Thermal Infrared Multispectral Scanner (TIMS), and airborne SAR, alone and combined into extended spectral signatures for seven sedimentary rock units exposed on the margin of the Wind River Basin, Wyoming. Results from a linear discriminant analysis showed that training-area classification accuracies based on the multisensor data were improved an average of 15 percent over TM alone, 24 percent over TIMS alone, and 46 percent over SAR alone, with similar improvement resulting when supervised multisensor classification maps were compared to supervised, individual sensor classification maps. When training area signatures were used to map spectrally similar materials in an adjacent area, the average classification accuracy improved 19 percent using the multisensor data over TM alone, 2 percent over TIMS alone, and 11 percent over SAR alone. It is concluded that certain sedimentary lithologies may be accurately mapped using a single sensor, but classification of a variety of rock types can be improved using multisensor data sets that are sensitive to different characteristics such as mineralogy and surface roughness.

  8. 78 FR 33633 - Human Exposure to Radiofrequency Electromagnetic Fields

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-04

    ..., including labeling and other requirements for occupational exposure classification, clarification of... approval. 3. Pinna (Outer Ear) Classification as an Extremity 11. In the NPRM, the Commission requested... classification, and it amends Sec. 1.1310 of its rules to subject the pinnae to the same RF exposure...

  9. Classification in Australia.

    ERIC Educational Resources Information Center

    McKinlay, John

    Despite some inroads by the Library of Congress Classification and short-lived experimentation with Universal Decimal Classification and Bliss Classification, Dewey Decimal Classification, with its ability in recent editions to be hospitable to local needs, remains the most widely used classification system in Australia. Although supplemented at…

  10. Classification and knowledge

    NASA Technical Reports Server (NTRS)

    Kurtz, Michael J.

    1989-01-01

    Automated procedures to classify objects are discussed. The classification problem is reviewed, and the relation of epistemology and classification is considered. The classification of stellar spectra and of resolved images of galaxies is addressed.

  11. Remote Sensing Information Classification

    NASA Technical Reports Server (NTRS)

    Rickman, Douglas L.

    2008-01-01

    This viewgraph presentation reviews the classification of Remote Sensing data in relation to epidemiology. Classification is a way to reduce the dimensionality and precision to something a human can understand. Classification changes SCALAR data into NOMINAL data.

  12. Soil classifications systems review. Final report

    SciTech Connect

    1997-11-01

    Systems used to classify soils are discussed and compared. Major types of classification systems that are reviewed include natural systems, technical systems, the FAO/UNESCO world soil map, soil survey map units, and numerical taxonomy. Natural classification systems discussed in detail are the United States system, Soil Taxonomy, and the Russian and Canadian systems. Included in the section on technical classification systems are reviews of the AASHO and Unified (ASTM) classification systems. The review of soil classification systems was conducted to establish improved availability of accurate ground thermal conductivity and other heat-transfer-related property information. These data are intended to help in the design of closed-loop ground heat exchange systems.

  13. Accurate shear measurement with faint sources

    SciTech Connect

    Zhang, Jun; Foucaud, Sebastien; Luo, Wentao

    2015-01-01

    For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work of this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finiteness of source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys.

  14. NNLOPS accurate associated HW production

    NASA Astrophysics Data System (ADS)

    Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia

    2016-06-01

    We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross section Working Group.

  15. How to accurately bypass damage

    PubMed Central

    Broyde, Suse; Patel, Dinshaw J.

    2016-01-01

    Ultraviolet radiation can cause cancer through DNA damage — specifically, by linking adjacent thymine bases. Crystal structures show how the enzyme DNA polymerase η accurately bypasses such lesions, offering protection. PMID:20577203

  16. Accurate Evaluation of Quantum Integrals

    NASA Technical Reports Server (NTRS)

    Galant, David C.; Goorvitch, D.

    1994-01-01

    Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving Schrödinger's equation. Important results are that error estimates are provided, and that one can extrapolate expectation values rather than the wavefunctions to obtain highly accurate expectation values. We discuss the eigenvalues, the error growth in repeated Richardson's extrapolation, and show that the expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.
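
    A sketch of the idea in this record: a second-order central difference is promoted to higher order by Richardson extrapolation, and disagreement between successive extrapolation levels supplies the error estimate. The step size and level count are arbitrary choices.

```python
# Richardson extrapolation of a central-difference derivative.
import numpy as np

def richardson_derivative(f, x, h=0.1, levels=4):
    """Romberg-style table of central differences at h, h/2, h/4, ..."""
    T = np.zeros((levels, levels))
    for i in range(levels):
        hi = h / 2.0**i
        T[i, 0] = (f(x + hi) - f(x - hi)) / (2.0 * hi)
        for j in range(1, i + 1):
            # Central-difference error is even in h: orders 2, 4, 6, ...
            T[i, j] = T[i, j - 1] + (T[i, j - 1] - T[i - 1, j - 1]) / (4.0**j - 1.0)
    estimate = T[-1, -1]
    error_estimate = abs(T[-1, -1] - T[-2, -2])
    return estimate, error_estimate

print(richardson_derivative(np.sin, 1.0))  # ~ (cos(1.0), tiny error estimate)
```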

  17. Evaluation of rock mass classification schemes: a case study from the Bowen Basin, Australia

    NASA Astrophysics Data System (ADS)

    Brook, Martin; Hebblewhite, Bruce; Mitra, Rudrajit

    2016-04-01

    The development of an accurate engineering geological model and adequate knowledge of spatial variation in rock mass conditions are important prerequisites for slope stability analyses, tunnel design, mine planning and risk management. Rock mass classification schemes such as Rock Mass Rating (RMR), Coal Mine Roof Rating (CMRR), Q-system and Roof Strength Index (RSI) have been used for a range of engineering geological applications, including transport tunnels, "hard rock" mining and underground and open-cut coal mines. Often, rock mass classification schemes have been evaluated on subaerial exposures, where weathering has affected joint characteristics and intact strength. In contrast, the focus of this evaluation of the above classification schemes is an underground coal mine in the Bowen Basin, central Queensland, Australia, 15 km east of the town of Moranbah. Rock mass classification was undertaken at 68 sites across the mine. Both the target coal seam and overlying rock show marked spatial variability in terms of RMR, CMRR and Q, but RSI showed limited sensitivity to changes in rock mass condition. Relationships were developed between different parameters with varying degrees of success. A mine-wide analysis of faulting was undertaken, and compared with in situ stress field and local-scale measurements of joint and cleat. While there are no unequivocal relationships between rock mass classification parameters and faulting, a central graben zone shows heterogeneous rock mass properties. The corollary is that if geological features can be accurately defined by remote sensing technologies, then this can assist in predicting rock mass conditions and risk management ahead of development and construction.

  18. Learning accurate very fast decision trees from uncertain data streams

    NASA Astrophysics Data System (ADS)

    Liang, Chunquan; Zhang, Yang; Shi, Peng; Hu, Zhengguo

    2015-12-01

    Most existing works on data stream classification assume the streaming data is precise and definite. Such assumption, however, does not always hold in practice, since data uncertainty is ubiquitous in data stream applications due to imprecise measurement, missing values, privacy protection, etc. The goal of this paper is to learn accurate decision tree models from uncertain data streams for classification analysis. On the basis of very fast decision tree (VFDT) algorithms, we proposed an algorithm for constructing an uncertain VFDT tree with classifiers at tree leaves (uVFDTc). The uVFDTc algorithm can exploit uncertain information effectively and efficiently in both the learning and the classification phases. In the learning phase, it uses Hoeffding bound theory to learn from uncertain data streams and yield fast and reasonable decision trees. In the classification phase, at tree leaves it uses uncertain naive Bayes (UNB) classifiers to improve the classification performance. Experimental results on both synthetic and real-life datasets demonstrate the strong ability of uVFDTc to classify uncertain data streams. The use of UNB at tree leaves has improved the performance of uVFDTc, especially the any-time property, the benefit of exploiting uncertain information, and the robustness against uncertainty.
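
    The Hoeffding bound at the heart of VFDT-style algorithms: after n observations of a quantity with range R, the sample mean is within epsilon of the true mean with probability 1 - delta, so a split can be committed once the gap between the two best candidate attributes exceeds epsilon. The delta and the example gain values below are illustrative.

```python
# Hoeffding bound used for split decisions in VFDT-style trees.
import math

def hoeffding_epsilon(R, n, delta=1e-7):
    return math.sqrt(R * R * math.log(1.0 / delta) / (2.0 * n))

n = 5000
eps = hoeffding_epsilon(R=1.0, n=n)   # info gain on 2 classes: R = 1 bit
best, second = 0.32, 0.25             # hypothetical candidate gains
print(f"epsilon = {eps:.4f}; split now: {best - second > eps}")
```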

  19. Hyperspectral image classification for mapping agricultural tillage practices

    Technology Transfer Automated Retrieval System (TEKTRAN)

    An efficient classification framework for mapping agricultural tillage practice using hyperspectral remote sensing imagery is proposed, which has the potential to be implemented practically to provide rapid, accurate, and objective surveying data for precision agricultural management and appraisal f...

  20. Terrain classification for a UGV

    NASA Astrophysics Data System (ADS)

    Sarwal, Alok; Baker, Chris; Rosenblum, Mark

    2005-05-01

    This work addresses the issue of terrain classification that can be applied to path planning for an Unmanned Ground Vehicle (UGV) platform. We are interested in classification of features such as rocks, bushes, trees and dirt roads. Currently, the data are acquired from a color camera mounted on the UGV; range data from a second sensor may be added in the future. The classification is accomplished by first coarsely segmenting a frame and then refining the initial segmentations through a convenient user interface. After the first frame, temporal information is exploited to improve the quality of the image segmentation and help classification adapt to changes due to ambient lighting, shadows, and scene changes as the platform moves. The Mean Shift Classifier algorithm provides segmentation of the current frame data. We have tested the above algorithms with four sequences of frames acquired in an environment with terrain representative of the type we expect to see in the field. A comparison of the results from this algorithm was made with accurate manually segmented (ground-truth) data for each frame in the sequence.

  1. Supervised Machine Learning for Classification of the Electrophysiological Effects of Chronotropic Drugs on Human Induced Pluripotent Stem Cell-Derived Cardiomyocytes

    PubMed Central

    Heylman, Christopher; Datta, Rupsa; Sobrino, Agua

    2015-01-01

    Supervised machine learning can be used to predict which drugs human cardiomyocytes have been exposed to. Using electrophysiological data collected from human cardiomyocytes with known exposure to different drugs, a supervised machine learning algorithm can be trained to recognize and classify cells that have been exposed to an unknown drug. Furthermore, the learning algorithm provides information on the relative contribution of each data parameter to the overall classification. Probabilities and confidence in the accuracy of each classification may also be determined by the algorithm. In this study, the electrophysiological effects of β-adrenergic drugs, propranolol and isoproterenol, on cardiomyocytes derived from human induced pluripotent stem cells (hiPS-CM) were assessed. The electrophysiological data were collected using high temporal resolution 2-photon microscopy of voltage sensitive dyes as a reporter of membrane voltage. The results demonstrate the ability of our algorithm to accurately assess, classify, and predict hiPS-CM membrane depolarization following exposure to chronotropic drugs. PMID:26695765

  2. CHILDREN'S DIETARY EXPOSURES TO CHEMICAL CONTAMINANTS

    EPA Science Inventory

    The Food Quality Protection Act of 1996 requires EPA to more accurately assess children's aggregate exposures to environmental contaminants. Children have unstructured eating behaviors which cause excess exposures as a result of their activities. Determining total dietary intak...

  3. Nanotechnology and Exposure Science

    PubMed Central

    LIOY, PAUL J.; NAZARENKO, YEVGEN; HAN, TAE WON; LIOY, MARY JEAN; MAINELIS, GEDIMINAS

    2014-01-01

    This article discusses the gaps in our understanding of human exposures to nanoparticles stemming from the use of nanotechnology-based consumer products by the general public. It also describes a series of steps that could be taken to characterize such exposures. The suggested steps include classification of the nanotechnology-based products, simulation of realistic exposure patterns, characterization of emissions, analysis of the duration of activities resulting in exposures, and consideration of the bioaccessibility of nanoparticles. In addition, we present a preliminary study with nanotechnology-based cosmetic powders where particle release was studied under realistic powder application conditions. The data demonstrated that when nanotechnology-based cosmetic powders were used, there was a potential for inhaling airborne particles ranging in size from tens of nanometers to tens of micrometers. PMID:21222382

  4. Classification and designation of emissions

    NASA Astrophysics Data System (ADS)

    Luther, W. A.

    1981-08-01

    The world's community of frequency administrators has been able to reach agreement on a modern, useful method of designating emissions (transmissions) according to their necessary bandwidth and their classification. The method will become effective on January 1, 1982. With the new system, 480 times as many emissions can be accurately classified as with the old. It is believed that the optimum method has now been found. The new method should be the easiest for all administrations to adopt, while providing the accuracy of designation needed in today's state of the technology.

  5. New Classification of Headache

    PubMed Central

    Gawel, Marek J.

    1992-01-01

    The Headache Classification Committee of the International Headache Society has developed a new classification system for headache, cranial neuralgia, and facial pain. The value of the classification for the practising clinician is that it forces him or her to take a more careful history in order to determine the nature of the headache. This article reviews the classification system and gives examples of case histories and subsequent diagnoses. PMID:21221276

  6. Classification of articulators.

    PubMed

    Rihani, A

    1980-03-01

    A simple classification in familiar terms with definite, clear characteristics can be adopted. This classification system is based on the number of records used and the adjustments necessary for the articulator to accept these records. The classification divides the articulators into nonadjustable, semiadjustable, and fully adjustable articulators (Table I). PMID:6928204

  7. Government Classification: An Overview.

    ERIC Educational Resources Information Center

    Brown, Karen M.

    Classification of government documents (confidential, secret, top secret) is a system used by the executive branch to, in part, protect national security and foreign policy interests. The systematic use of classification markings with precise definitions was established during World War I, and since 1936 major changes in classification have…

  8. Aircraft Operations Classification System

    NASA Technical Reports Server (NTRS)

    Harlow, Charles; Zhu, Weihong

    2001-01-01

    Accurate data is important in the aviation planning process. In this project we consider systems for measuring aircraft activity at airports. This would include determining the type of aircraft such as jet, helicopter, single engine, and multiengine propeller. Some of the issues involved in deploying technologies for monitoring aircraft operations are cost, reliability, and accuracy. In addition, the system must be field portable and acceptable at airports. A comparison of technologies was conducted and it was decided that an aircraft monitoring system should be based upon acoustic technology. A multimedia relational database was established for the study. The information contained in the database consists of airport information, runway information, acoustic records, photographic records, a description of the event (takeoff, landing), aircraft type, and environmental information. We extracted features from the time signal and the frequency content of the signal. A multi-layer feed-forward neural network was chosen as the classifier. Training and testing results were obtained. We were able to obtain classification results of over 90 percent for training and testing for takeoff events.

  9. Cirrhosis Classification Based on Texture Classification of Random Features

    PubMed Central

    Shao, Ying; Guo, Dongmei; Zheng, Yuanjie; Zhao, Zuowei; Qiu, Tianshuang

    2014-01-01

    Accurate staging of hepatic cirrhosis is important in investigating the cause and slowing down the effects of cirrhosis. Computer-aided diagnosis (CAD) can provide doctors with an alternative second opinion and assist them in choosing a specific treatment based on an accurate cirrhosis stage. MRI has many advantages, including high resolution for soft tissue, no radiation, and multiparameter imaging modalities. So in this paper, multisequence MRI, including T1-weighted, T2-weighted, arterial, portal venous, and equilibrium phases, is applied. However, CAD does not yet meet the clinical needs of cirrhosis staging, and few researchers have addressed it at present. Cirrhosis is characterized by the presence of widespread fibrosis and regenerative nodules in the liver, leading to different texture patterns at different stages. So, extracting texture features is the primary task. Compared with typical gray level cooccurrence matrix (GLCM) features, texture classification from random features provides an effective way forward, and we adopt it and propose CCTCRF for triple classification (normal, early, and middle and advanced stage). CCTCRF does not need strong assumptions except the sparse character of the image, contains sufficient texture information, involves a concise and effective process, and makes case decisions with high accuracy. Experimental results also illustrate its satisfying performance, and it is compared with a typical neural network using GLCM features. PMID:24707317

  10. Classification: Something to Think About.

    ERIC Educational Resources Information Center

    Isenberg, Joan P.; Jacobs, Judith E.

    1981-01-01

    Advocates the use of classification activities in the elementary school curriculum as a means of developing thinking skills in children. Critical preclassification skills, classification activities (including simple and multiple classification), and classification tasks and materials are discussed. (Author/RH)

  11. Fast and accurate estimation for astrophysical problems in large databases

    NASA Astrophysics Data System (ADS)

    Richards, Joseph W.

    2010-10-01

    A recent flood of astronomical data has created much demand for sophisticated statistical and machine learning tools that can rapidly draw accurate inferences from large databases of high-dimensional data. In this Ph.D. thesis, methods for statistical inference in such databases will be proposed, studied, and applied to real data. I use methods for low-dimensional parametrization of complex, high-dimensional data that are based on the notion of preserving the connectivity of data points in the context of a Markov random walk over the data set. I show how this simple parameterization of data can be exploited to: define appropriate prototypes for use in complex mixture models, determine data-driven eigenfunctions for accurate nonparametric regression, and find a set of suitable features to use in a statistical classifier. In this thesis, methods for each of these tasks are built up from simple principles, compared to existing methods in the literature, and applied to data from astronomical all-sky surveys. I examine several important problems in astrophysics, such as estimation of star formation history parameters for galaxies, prediction of redshifts of galaxies using photometric data, and classification of different types of supernovae based on their photometric light curves. Fast methods for high-dimensional data analysis are crucial in each of these problems because they all involve the analysis of complicated high-dimensional data in large, all-sky surveys. Specifically, I estimate the star formation history parameters for the nearly 800,000 galaxies in the Sloan Digital Sky Survey (SDSS) Data Release 7 spectroscopic catalog, determine redshifts for over 300,000 galaxies in the SDSS photometric catalog, and estimate the types of 20,000 supernovae as part of the Supernova Photometric Classification Challenge. Accurate predictions and classifications are imperative in each of these examples because these estimates are utilized in broader inference problems
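
    The connectivity-preserving parametrization described here is, in essence, a diffusion map: coordinates come from the leading nontrivial eigenvectors of a Markov random-walk matrix built over the data. A minimal sketch, with an arbitrary kernel bandwidth eps:

        # Minimal diffusion-map sketch: a Markov random walk over the data
        # graph, with the top nontrivial eigenvectors as coordinates.
        import numpy as np

        def diffusion_map(X, eps=1.0, n_coords=2):
            # Pairwise Gaussian affinities define the walk's transition matrix.
            d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            K = np.exp(-d2 / eps)
            P = K / K.sum(axis=1, keepdims=True)       # row-stochastic walk matrix
            vals, vecs = np.linalg.eig(P)
            order = np.argsort(-vals.real)             # leading eigenvalue is 1 (trivial)
            idx = order[1:n_coords + 1]
            return vecs[:, idx].real * vals[idx].real  # diffusion coordinates

        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 10))
        print(diffusion_map(X).shape)   # (100, 2)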

  12. Hydrometeor classification from 2-dimensional video disdrometer data

    NASA Astrophysics Data System (ADS)

    Grazioli, J.; Tuia, D.; Monhart, S.; Schneebeli, M.; Raupach, T.; Berne, A.

    2014-02-01

    This paper presents a hydrometeor classification technique based on two-dimensional video disdrometer (2DVD) data. The method provides an estimate of the dominant hydrometeor type falling over time intervals of 60 s during precipitation, using as input the statistical behavior of a set of particle descriptors, calculated for each particle image. The employed supervised algorithm is a support vector machine (SVM), trained over precipitation time steps labeled by visual inspection. In this way, 8 dominant hydrometeor classes could be discriminated. The algorithm achieves accurate classification performances, with median overall accuracies (Cohen's K) of 90% (0.88), and with accuracies higher than 84% for each hydrometeor class.
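
    A sketch of the supervised step under the assumptions above: each 60 s interval is summarized by statistics of per-particle descriptors and fed to an SVM. The two descriptors and their statistics below are placeholders, not the paper's full descriptor set.

        # Per-minute interval features (medians and IQRs of hypothetical
        # per-particle size and aspect ratio) classified by an RBF SVM.
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        def interval_features(particles):
            """particles: (n, 2) array of per-particle [size, aspect_ratio]."""
            q25, q50, q75 = np.percentile(particles, [25, 50, 75], axis=0)
            return np.concatenate([q50, q75 - q25])    # medians and IQRs

        rng = np.random.default_rng(0)
        X = np.array([interval_features(rng.normal(size=(200, 2))) for _ in range(300)])
        y = rng.integers(0, 8, size=300)               # 8 dominant hydrometeor classes

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
        clf.fit(X[:200], y[:200])
        print("held-out accuracy:", clf.score(X[200:], y[200:]))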

  13. Acute pancreatitis: international classification and nomenclature.

    PubMed

    Bollen, T L

    2016-02-01

    The incidence of acute pancreatitis (AP) is increasing, and the disease represents a major healthcare concern. New insights into the pathophysiology, better imaging techniques, and novel treatment options for complicated AP prompted the update of the 1992 Atlanta Classification. Updated nomenclature for pancreatic collections based on imaging criteria is proposed. Adoption of the newly Revised Classification of Acute Pancreatitis 2012 by radiologists should help standardise reports and facilitate accurate conveyance of relevant findings to referring physicians involved in the care of patients with AP. This review clarifies the nomenclature of pancreatic collections in the setting of AP. PMID:26602933

  14. Post-Mortem Evaluation of Amyloid-Dopamine Terminal PET Dementia Classifications

    PubMed Central

    Albin, Roger L.; Fisher-Hubbard, Amanda; Shanmugasundaram, Krithika; Koeppe, Robert A.; Burke, James F.; Camelo-Piragua, Sandra; Lieberman, Andrew P.; Giordani, Bruno; Frey, Kirk A.

    2016-01-01

    Clinical classification of early dementia and mild cognitive impairment is imprecise. We reported previously that molecular imaging classification of early dementia and mild cognitive impairment with dual amyloid and dopamine terminal positron emission tomography differs significantly from expert clinical classification. We now report pathologic diagnoses in a substantial subset of our previously imaged subjects. Among 36 subjects coming to autopsy, imaging classifications and pathologic diagnoses were concordant in 33 cases (κ=0.85). This approach enhanced the specificity of Alzheimer disease diagnosis. The strong concordance of imaging-based classifications and pathologic diagnoses suggests that this imaging approach will be useful in establishing more accurate and convenient classification biomarkers for dementia research. PMID:26183692

  15. Automatic classification of blank substrate defects

    NASA Astrophysics Data System (ADS)

    Boettiger, Tom; Buck, Peter; Paninjath, Sankaranarayanan; Pereira, Mark; Ronald, Rob; Rost, Dan; Samir, Bhamidipati

    2014-10-01

    Mask preparation stages are crucial in mask manufacturing, since this mask is later to act as a template for a considerable number of dies on wafer. Defects on the initial blank substrate, and subsequent cleaned and coated substrates, can have a profound impact on the usability of the finished mask. This emphasizes the need for early and accurate identification of blank substrate defects and the risk they pose to the patterned reticle. While Automatic Defect Classification (ADC) is a well-developed technology for inspection and analysis of defects on patterned wafers and masks in the semiconductor industry, ADC for mask blanks is still in the early stages of adoption and development. Calibre ADC is a powerful analysis tool for fast, accurate, consistent and automatic classification of defects on mask blanks. Accurate, automated classification of mask blanks leads to better usability of blanks by enabling defect avoidance technologies during mask writing. Detailed information on blank defects can help to select appropriate job-decks to be written on the mask by defect avoidance tools [1][4][5]. Smart algorithms separate critical defects from the potentially large number of non-critical or false defects detected at various stages during mask blank preparation. Mechanisms used by Calibre ADC to identify and characterize defects include defect location and size, signal polarity (dark, bright) in both transmitted and reflected review images, and separation of defect signals from background noise in defect images. The Calibre ADC engine then uses a decision tree to translate this information into a defect classification code. Using this automated process improves classification accuracy, repeatability and speed, while avoiding the subjectivity of human judgment inherent in the alternative of manual defect classification by trained personnel [2]. This paper focuses on the results from the evaluation of Automatic Defect Classification (ADC) product at MP Mask
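
    The decision-tree step can be pictured as below. Thresholds, attribute names, and class codes are invented for illustration; they are not Calibre ADC's actual rules.

        # Illustrative decision tree mapping measured defect attributes to a
        # class code, in the spirit of the ADC flow described above.
        from dataclasses import dataclass

        @dataclass
        class Defect:
            size_um: float          # estimated defect size
            polarity_trans: str     # "dark" or "bright" in transmitted image
            polarity_refl: str      # "dark" or "bright" in reflected image
            snr: float              # defect signal relative to background noise

        def classify(d: Defect) -> str:
            if d.snr < 2.0:
                return "FALSE_DEFECT"              # indistinguishable from noise
            if d.polarity_trans == "dark" and d.polarity_refl == "dark":
                return "PARTICLE" if d.size_um >= 0.1 else "SMALL_PARTICLE"
            if d.polarity_trans == "bright":
                return "PIT"                       # missing-material candidate
            return "UNCLASSIFIED"

        print(classify(Defect(0.25, "dark", "dark", 8.0)))   # -> PARTICLE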

  16. Predict amine solution properties accurately

    SciTech Connect

    Cheng, S.; Meisen, A.; Chakma, A.

    1996-02-01

    Improved process design begins with using accurate physical property data. Especially in the preliminary design stage, physical property data such as density, viscosity, thermal conductivity and specific heat can affect the overall performance of absorbers, heat exchangers, reboilers and pumps. These properties can also influence temperature profiles in heat transfer equipment and thus control or affect the rate of amine breakdown. Aqueous-amine solution physical property data are available in graphical form; however, that form is not convenient for computer-based calculations. The developed equations provide improved correlations of derived physical property estimates with published data. Expressions are given which can be used to estimate physical properties of methyldiethanolamine (MDEA), monoethanolamine (MEA) and diglycolamine (DGA) solutions.
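
    A generic sketch of how such correlations are built: fit a property to temperature and amine concentration by least squares. The data and functional form below are placeholders, not the published MDEA/MEA/DGA expressions.

        # Fit density as a quadratic in temperature plus a concentration term.
        import numpy as np

        # Hypothetical data: temperature (C), amine wt fraction, density (kg/m3).
        T = np.array([25.0, 40.0, 60.0, 25.0, 40.0, 60.0])
        w = np.array([0.30, 0.30, 0.30, 0.50, 0.50, 0.50])
        rho = np.array([1010.0, 1003.0, 992.0, 1030.0, 1021.0, 1008.0])

        A = np.column_stack([np.ones_like(T), T, T**2, w, w * T])  # design matrix
        coef, *_ = np.linalg.lstsq(A, rho, rcond=None)

        def density(T, w):
            return coef @ [1.0, T, T**2, w, w * T]

        print(round(density(50.0, 0.40), 1))   # interpolated estimate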

  17. Accurate thickness measurement of graphene

    NASA Astrophysics Data System (ADS)

    Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.

    2016-03-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  18. Hierarchical image classification in the bioscience literature.

    PubMed

    Kim, Daehyun; Yu, Hong

    2009-01-01

    Our previous work has shown that images appearing in bioscience articles can be classified into five types: Gel-Image, Image-of-Thing, Graph, Model, and Mix. For this paper, we explored and analyzed features strongly associated with each image type and developed a hierarchical image classification approach for classifying an image into one of the five types. First, we applied texture features to separate images into two groups: 1) a texture group comprising Gel Image, Image-of-Thing, and Mix, and 2) a non-texture group comprising Graph and Model. We then applied entropy, skewness, and uniformity for the first group, and edge difference, uniformity, and smoothness for the second group to classify images into specific types. Our results show that hierarchical image classification accurately divided images into the two groups during the initial classification and that the overall accuracy of the image classification was higher than that of our previous approach. In particular, the recall of hierarchical image classification was greatly improved due to the high accuracy of the initial classification. PMID:20351874
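
    The two-stage scheme can be sketched as follows: one classifier makes the texture/non-texture split, and a classifier per group assigns the final type. Feature extraction is reduced to simple histogram statistics for illustration; the study's actual feature sets differ by group.

        # Hierarchical (two-stage) classification sketch.
        import numpy as np
        from scipy.stats import skew
        from sklearn.tree import DecisionTreeClassifier

        def features(img):
            hist, _ = np.histogram(img, bins=32, range=(0, 1), density=True)
            p = hist / hist.sum()
            entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
            uniformity = (p ** 2).sum()
            return [entropy, uniformity, skew(img.ravel())]

        rng = np.random.default_rng(0)
        imgs = rng.random((300, 16, 16))
        X = np.array([features(im) for im in imgs])
        group = rng.integers(0, 2, 300)          # 0: texture, 1: non-texture
        label = rng.integers(0, 3, 300)          # final type within each group

        stage1 = DecisionTreeClassifier(max_depth=3).fit(X, group)
        stage2 = {g: DecisionTreeClassifier(max_depth=3).fit(X[group == g], label[group == g])
                  for g in (0, 1)}

        def predict(x):
            g = stage1.predict([x])[0]           # coarse split first ...
            return g, stage2[g].predict([x])[0]  # ... then the group-specific model

        print(predict(X[0]))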

  19. Deriving exposure limits

    NASA Astrophysics Data System (ADS)

    Sliney, David H.

    1990-07-01

    Historically, many different agencies and standards organizations have proposed laser occupational exposure limits (ELs) or maximum permissible exposure (MPE) levels. Although some safety standards have been limited in scope to manufacturer system safety performance standards or to codes of practice, most have included occupational ELs. Initially, in the 1960s, attention was drawn to setting ELs; however, as greater experience accumulated in the use of lasers and some accident experience had been gained, safety procedures were developed. It became clear by 1971, after the first decade of laser use, that detailed hazard evaluation of each laser environment was too complex for most users, and a scheme of hazard classification evolved. Today most countries follow a scheme of four major hazard classifications as defined in Document WS 825 of the International Electrotechnical Commission (IEC). The classifications and the associated accessible emission limits (AELs) were based upon the ELs. The EL and AEL values today are in surprisingly good agreement worldwide. There exists a greater range of safety requirements for the user for each class of laser. The current MPEs (i.e., ELs) and their basis are highlighted in this presentation.

  20. Recursive heuristic classification

    NASA Technical Reports Server (NTRS)

    Wilkins, David C.

    1994-01-01

    The author will describe a new problem-solving approach called recursive heuristic classification, whereby a subproblem of heuristic classification is itself formulated and solved by heuristic classification. This allows the construction of more knowledge-intensive classification programs in a way that yields a clean organization. Further, standard knowledge acquisition and learning techniques for heuristic classification can be used to create, refine, and maintain the knowledge base associated with the recursively called classification expert system. The method of recursive heuristic classification was used in the Minerva blackboard shell for heuristic classification. Minerva recursively calls itself every problem-solving cycle to solve the important blackboard scheduler task, which involves assigning a desirability rating to alternative problem-solving actions. Knowing these ratings is critical to the use of an expert system as a component of a critiquing or apprenticeship tutoring system. One innovation of this research is a method called dynamic heuristic classification, which allows selection among dynamically generated classification categories instead of requiring them to be pre-enumerated.

  1. Security classification of information

    SciTech Connect

    Quist, A.S.

    1993-04-01

    This document is the second of a planned four-volume work that comprehensively discusses the security classification of information. The main focus of Volume 2 is on the principles for classification of information. Included herein are descriptions of the two major types of information that governments classify for national security reasons (subjective and objective information), guidance to use when determining whether information under consideration for classification is controlled by the government (a necessary requirement for classification to be effective), information disclosure risks and benefits (the benefits and costs of classification), standards to use when balancing information disclosure risks and benefits, guidance for assigning classification levels (Top Secret, Secret, or Confidential) to classified information, guidance for determining how long information should be classified (classification duration), classification of associations of information, classification of compilations of information, and principles for declassifying and downgrading information. Rules or principles of certain areas of our legal system (e.g., trade secret law) are sometimes mentioned to provide added support to some of those classification principles.

  2. Accurate ab Initio Spin Densities

    PubMed Central

    2012-01-01

    We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as a basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys.2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CASCI-type wave function provides insight into chemically interesting features of the molecule under study such as the distribution of α and β electrons in terms of Slater determinants, CI coefficients, and natural orbitals. The methodology is applied to an iron nitrosyl complex which we have identified as a challenging system for standard approaches [J. Chem. Theory Comput.2011, 7, 2740]. PMID:22707921

  3. Classification images: A review.

    PubMed

    Murray, Richard F

    2011-01-01

    Classification images have recently become a widely used tool in visual psychophysics. Here, I review the development of classification image methods over the past fifteen years. I provide some historical background, describing how classification images and related methods grew out of established statistical and mathematical frameworks and became common tools for studying biological systems. I describe key developments in classification image methods: use of optimal weighted sums based on the linear observer model, formulation of classification images in terms of the generalized linear model, development of statistical tests, use of priors to reduce dimensionality, methods for experiments with more than two response alternatives, a variant using multiplicative noise, and related methods for examining nonlinearities in visual processing, including second-order Volterra kernels and principal component analysis. I conclude with a selective review of how classification image methods have led to substantive findings in three representative areas of vision research, namely, spatial vision, perceptual organization, and visual search. PMID:21536726
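
    The basic weighted-sums classification image for a yes/no task combines the mean noise fields of the four stimulus-by-response bins, with +1 weights for "yes" responses and -1 for "no". A self-contained sketch with a simulated linear observer (all values synthetic):

        # Weighted-sums classification image from simulated trials.
        import numpy as np

        rng = np.random.default_rng(0)
        n_trials, size = 2000, 16
        noise = rng.normal(size=(n_trials, size, size))   # per-trial noise fields
        signal = rng.integers(0, 2, n_trials)             # signal present/absent
        # Simulated observer: responds "yes" when a square template matches.
        template = np.zeros((size, size)); template[6:10, 6:10] = 1.0
        resp = ((noise * template).sum((1, 2)) + 2.0 * signal +
                rng.normal(size=n_trials)) > 1.0

        ci = np.zeros((size, size))
        for s in (0, 1):
            for r in (False, True):
                bin_mean = noise[(signal == s) & (resp == r)].mean(axis=0)
                ci += bin_mean if r else -bin_mean    # +1 for "yes", -1 for "no"
        # The template region should stand out relative to the background.
        print(ci[6:10, 6:10].mean(), ci[0:4, 0:4].mean())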

  4. Radiation exposure and pregnancy.

    PubMed

    Labant, Amy; Silva, Christina

    2014-01-01

    Radiological exposure from nuclear power reactor accidents, transportation of nuclear waste accidents, industrial accidents, or terrorist activity may be a remote possibility, but it could happen. Nurses must be prepared to evaluate and treat pregnant women and infants who have been exposed to radiation, and to have an understanding of the health consequences of a nuclear or radiological incident. Pregnant women and infants are a special group of patients who need consideration when exposed to radiation. Initial care requires thorough assessment and decisions regarding immediate care needs. Ongoing care is based on type and extent of radiation exposure. With accurate, comprehensive information and education, nurses will be better prepared to help mitigate the effects of radiation exposure to pregnant women and infants following a radiological incident. Information about radiation, health effects of prenatal radiation exposure, assessment, patient care, and treatment of pregnant women and infants are presented. PMID:25333800

  5. Classification of Pulmonary Hypertension.

    PubMed

    Oudiz, Ronald J

    2016-08-01

    The classification of pulmonary hypertension (PH) is an attempt to define subtypes of PH based on clinical presentation, underlying physiology, and treatment implications. Five groups of PH have been defined, and the classification scheme has been refined over the years to guide clinicians in the diagnosis and management of PH. Understanding the classification of PH is paramount before embarking on a work-up of patients with PH or suspected PH because treatment and outcome can vary greatly. PMID:27443133

  6. Classiology and soil classification

    NASA Astrophysics Data System (ADS)

    Rozhkov, V. A.

    2012-03-01

    Classiology can be defined as a science studying the principles and rules of classification of objects of any nature. The development of the theory of classification and the particular methods for classifying objects are the main challenges of classiology; to a certain extent, they are close to the challenges of pattern recognition. The methodology of classiology integrates a wide range of methods and approaches: from expert judgment to formal logic, multivariate statistics, and informatics. Soil classification assumes generalization of available data and practical experience, formalization of our notions about soils, and their representation in the form of an information system. As an information system, soil classification is designed to predict the maximum number of a soil's properties from the position of this soil in the classification space. The existing soil classification systems do not completely satisfy the principles of classiology. The violation of logical basis, poor structuring, low integrity, and inadequate level of formalization make these systems verbal schemes rather than classification systems sensu stricto. The concept of classification as listing (enumeration) of objects makes it possible to introduce the notion of the information base of classification. For soil objects, this is the database of soil indices (properties) that might be applied for generating target-oriented soil classification system. Mathematical methods enlarge the prognostic capacity of classification systems; they can be applied to assess the quality of these systems and to recognize new soil objects to be included in the existing systems. The application of particular principles and rules of classiology for soil classification purposes is discussed in this paper.

  7. Classification of Stellar Spectra

    NASA Astrophysics Data System (ADS)

    Garrison, R.; Murdin, P.

    2000-11-01

    How does a scientist approach the problem of trying to understand countless billions of objects? One of the first steps is to organize the data and set up a classification scheme which can provide the best insights into the nature of the objects. Perception and insight are the main purposes of classification. In astronomy, where there are `billions and billions' of stars, classification is an ong...

  8. AGN Zoo and Classifications of Active Galaxies

    NASA Astrophysics Data System (ADS)

    Mickaelian, Areg M.

    2015-07-01

    We review the variety of Active Galactic Nuclei (AGN) classes (so-called "AGN zoo") and classification schemes of galaxies by activity types based on their optical emission-line spectrum, as well as other parameters and other than optical wavelength ranges. A historical overview of discoveries of various types of active galaxies is given, including Seyfert galaxies, radio galaxies, QSOs, BL Lacertae objects, Starbursts, LINERs, etc. Various kinds of AGN diagnostics are discussed. All known AGN types and subtypes are presented and described to have a homogeneous classification scheme based on the optical emission-line spectra and in many cases, also other parameters. Problems connected with accurate classifications and open questions related to AGN and their classes are discussed and summarized.

  9. Classification of Itch.

    PubMed

    Ständer, Sonja

    2016-01-01

    Chronic pruritus has diverse forms of presentation and can appear not only on normal skin [International Forum for the Study of Itch (IFSI) classification group II], but also in the company of dermatoses (IFSI classification group I). Scratching, a natural reflex, begins in response to itch. Enough damage can be done to the skin by scratching to cause changes in the primary clinical picture, often leading to a clinical picture predominated by the development of chronic scratch lesions (IFSI classification group III). An internationally recognized, standardized classification system was created by the IFSI to not only aid in clarifying terms and definitions, but also to harmonize the global nomenclature for itch. PMID:27578063

  10. Automatic lexical classification: bridging research and practice.

    PubMed

    Korhonen, Anna

    2010-08-13

    Natural language processing (NLP)--the automatic analysis, understanding and generation of human language by computers--is vitally dependent on accurate knowledge about words. Because words change their behaviour between text types, domains and sub-languages, a fully accurate static lexical resource (e.g. a dictionary, word classification) is unattainable. Researchers are now developing techniques that could be used to automatically acquire or update lexical resources from textual data. If successful, the automatic approach could considerably enhance the accuracy and portability of language technologies, such as machine translation, text mining and summarization. This paper reviews the recent and on-going research in automatic lexical acquisition. Focusing on lexical classification, it discusses the many challenges that still need to be met before the approach can benefit NLP on a large scale. PMID:20603372

  11. Linkage of the California Pesticide Use Reporting Database with Spatial Land Use Data for Exposure Assessment

    PubMed Central

    Nuckols, John R.; Gunier, Robert B.; Riggs, Philip; Miller, Ryan; Reynolds, Peggy; Ward, Mary H.

    2007-01-01

    Background The State of California maintains a comprehensive Pesticide Use Reporting Database (CPUR). The California Department of Water Resources (CDWR) maps all crops in agricultural counties in California about once every 5 years. Objective We integrated crop maps with CPUR to more accurately locate where pesticides are applied and evaluated the effects for exposure assessment. Methods We mapped 577 residences and used the CPUR and CDWR data to compute two exposure metrics based on putative pesticide use within a 500-m buffer. For the CPUR metric, we assigned pesticide exposure to the residence proportionally for all square-mile Sections that intersected the buffer. For the CDWR metric, we linked CPUR crop-specific pesticide use to crops mapped within the buffer and assigned pesticide exposure. We compared the metrics for six pesticides: simazine, trifluralin (herbicides), dicofol, propargite (insecticides), methyl bromide, and metam sodium (fumigants). Results For all six pesticides we found good agreement (88–98%) as to whether the pesticide use was predicted. When we restricted the analysis to residences with reported pesticide use in Sections within 500 m, agreement was greatly reduced (35–58%). The CPUR metric estimates of pesticide use within 500 m were significantly higher than the CDWR metric for all six pesticides. Conclusions Our findings may have important implications for exposure classification in epidemiologic studies of agricultural pesticide use using CPUR. There is a need to conduct environmental and biological measurements to ascertain which, if any, of these metrics best represent exposure. PMID:17520053
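
    The buffer metric amounts to area-proportional allocation of section-level reported use within 500 m of the residence. A sketch with shapely, under invented geometries and use values:

        # Allocate section-level pesticide use to a residence in proportion
        # to the fraction of each section inside a 500 m buffer.
        from shapely.geometry import Point, box

        residence = Point(500.0, 500.0)
        buffer_500m = residence.buffer(500.0)

        # Hypothetical square-mile sections (~1609 m) with reported use in kg.
        sections = [
            (box(0, 0, 1609, 1609), 12.0),
            (box(1609, 0, 3218, 1609), 4.0),
        ]

        exposure = 0.0
        for geom, use_kg in sections:
            overlap = geom.intersection(buffer_500m).area
            exposure += use_kg * overlap / geom.area   # area-proportional share
        print(round(exposure, 3), "kg attributed within 500 m")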

  12. Library Classification 2020

    ERIC Educational Resources Information Center

    Harris, Christopher

    2013-01-01

    In this article the author explores how a new library classification system might be designed using some aspects of the Dewey Decimal Classification (DDC) and ideas from other systems to create something that works for school libraries in the year 2020. By examining what works well with the Dewey Decimal System, what features should be carried…

  13. Linear Classification Functions.

    ERIC Educational Resources Information Center

    Huberty, Carl J.; Smith, Jerry D.

    Linear classification functions (LCFs) arise in a predictive discriminant analysis for the purpose of classifying experimental units into criterion groups. The relative contribution of the response variables to classification accuracy may be based on LCF-variable correlations for each group. It is proved that, if the raw response measures are…

  14. Improving classification of psychoses.

    PubMed

    Lawrie, Stephen M; O'Donovan, Michael C; Saks, Elyn; Burns, Tom; Lieberman, Jeffrey A

    2016-04-01

    Psychosis has been recognised as an abnormal state in need of care throughout history and by diverse cultures. Present classifications of psychotic disorder remain based on the presence of specific psychotic symptoms, relative to affective and other symptoms, and their sequence and duration. Although extant diagnostic classifications have restricted validity, they have proven reliability and most clinicians and some patients find them useful. Moreover, these classifications have yet to be replaced by anything better. We propose that an expansion of the subgrouping approach inherent to classification will provide incremental improvement to present diagnostic constructs-as has worked in the rest of medicine. We also propose that subgroups could be created both within and across present diagnostic classifications, taking into consideration the potential value of continuous measures (eg, duration of psychotic symptoms and intelligence quotient). Health-care workers also need to work with service users and carers to develop and adapt approaches to diagnosis that are seen as helpful. PMID:27063387

  15. 2-Stage Classification Modeling

    1994-11-01

    CIRCUIT2.4 is used to design optimum two-stage classification configurations and operating conditions for energy conservation. It permits simulation of five basic grinding-classification circuits, including one single-stage and four two-stage classification arrangements. Hydrocyclones, spiral classifiers, and sieve band screens can be simulated, and the user may choose the combination of devices for the flowsheet simulation. In addition, the user may select from four classification modeling methods to achieve the goals of a simulation project using the most familiar concepts. Circuit performance is modeled based on classification parameters or equipment operating conditions. A modular approach was taken in designing the program, which allows future addition of other models with relatively minor changes.

  16. Photometric Supernova Classification with Machine Learning

    NASA Astrophysics Data System (ADS)

    Lochner, Michelle; McEwen, Jason D.; Peiris, Hiranya V.; Lahav, Ofer; Winter, Max K.

    2016-08-01

    Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope, given that spectroscopic confirmation of type for all supernovae discovered will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves and classification using a machine learning algorithm. Our feature extraction methods vary from model-dependent techniques, namely SALT2 fits, to more independent techniques that fit parametric models to curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k-nearest neighbors, support vector machines, artificial neural networks, and boosted decision trees (BDTs). We test the pipeline on simulated multi-band DES light curves from the Supernova Photometric Classification Challenge. Using the commonly used area under the curve (AUC) of the Receiver Operating Characteristic as a metric, we find that the SALT2 fits and the wavelet approach, with the BDTs algorithm, each achieve an AUC of 0.98, where 1 represents perfect classification. We find that a representative training set is essential for good classification, whatever the feature set or algorithm, with implications for spectroscopic follow-up. Importantly, we find that by using either the SALT2 or the wavelet feature sets with a BDT algorithm, accurate classification is possible purely from light curve data, without the need for any redshift information.
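
    The final stage (boosted decision trees on extracted features, scored by AUC) can be sketched as below; the random feature matrix stands in for per-supernova SALT2 or wavelet coefficients, and GradientBoostingClassifier stands in for the BDT implementation.

        # BDT classification of light-curve features, scored with ROC AUC.
        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 20))          # stand-in feature vectors
        y = rng.integers(0, 2, 1000)             # e.g. 1 = Type Ia, 0 = non-Ia

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        bdt = GradientBoostingClassifier(n_estimators=200).fit(X_tr, y_tr)
        scores = bdt.predict_proba(X_te)[:, 1]
        print("AUC:", roc_auc_score(y_te, scores))   # ~0.5 on random stand-in data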

  17. Military Exposures

    MedlinePlus

    (MedlinePlus portal page linking to resources on Agent Orange and herbicide exposure, veterans' diseases, exposure locations, benefits, and provider information; no abstract text.)

  18. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  19. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  20. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  1. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  2. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  3. Relict silicate inclusions in extraterrestrial chromite and their use in the classification of fossil chondritic material

    NASA Astrophysics Data System (ADS)

    Alwmark, Carl; Schmitz, Birger

    2009-03-01

    Chromite is the only common meteoritic mineral surviving long-term exposure on Earth; however, the present study of relict chromite from numerous Ordovician (470 Ma) fossil meteorites and micrometeorites from Sweden reveals that, when encapsulated in chromite, other minerals can survive for hundreds of millions of years while maintaining their primary composition. The most common minerals identified, in the form of small (<1-10 μm) anhedral inclusions, are olivine and pyroxene. In addition, sporadic merrillite and plagioclase were found. Analyses of recent meteorites, holding both inclusions in chromite and corresponding matrix minerals, show that for olivine and pyroxene inclusions, sub-solidus re-equilibration between inclusion and host chromite during entrapment has led to an increase in chromium in the former. In the case of olivine, the re-equilibration has also affected the fayalite (Fa) content, lowering it by an average of 14% in inclusions. For Ca-poor pyroxene the ferrosilite (Fs) content is more or less identical in inclusions and matrix. These studies establish, for inclusions in chromite, an analogue to the classification system commonly applied to ordinary chondritic matrix, based on Fa in olivine and Fs in Ca-poor pyroxene. All olivine and Ca-poor pyroxene inclusions (>1.5 μm) in chromite from the Ordovician fossil chondritic material plot within the L-chondrite field, which is in accordance with previous classifications. The concordance in classification, together with the fact that inclusions are relatively common, makes them an accurate and useful tool in the classification of extraterrestrial material that lacks matrix silicates, such as fossil meteorites and sediment-dispersed chromite grains originating primarily from decomposed micrometeorites but also from larger impacts.

  4. Behavior Based Social Dimensions Extraction for Multi-Label Classification.

    PubMed

    Li, Le; Xu, Junyi; Xiao, Weidong; Ge, Bin

    2016-01-01

    Classification based on social dimensions is commonly used to handle the multi-label classification task in heterogeneous networks. However, traditional methods, which mostly rely on the community detection algorithms to extract the latent social dimensions, produce unsatisfactory performance when community detection algorithms fail. In this paper, we propose a novel behavior based social dimensions extraction method to improve the classification performance in multi-label heterogeneous networks. In our method, nodes' behavior features, instead of community memberships, are used to extract social dimensions. By introducing Latent Dirichlet Allocation (LDA) to model the network generation process, nodes' connection behaviors with different communities can be extracted accurately, which are applied as latent social dimensions for classification. Experiments on various public datasets reveal that the proposed method can obtain satisfactory classification results in comparison to other state-of-the-art methods on smaller social dimensions. PMID:27049849

  5. Behavior Based Social Dimensions Extraction for Multi-Label Classification

    PubMed Central

    Li, Le; Xu, Junyi; Xiao, Weidong; Ge, Bin

    2016-01-01

    Classification based on social dimensions is commonly used to handle the multi-label classification task in heterogeneous networks. However, traditional methods, which mostly rely on the community detection algorithms to extract the latent social dimensions, produce unsatisfactory performance when community detection algorithms fail. In this paper, we propose a novel behavior based social dimensions extraction method to improve the classification performance in multi-label heterogeneous networks. In our method, nodes’ behavior features, instead of community memberships, are used to extract social dimensions. By introducing Latent Dirichlet Allocation (LDA) to model the network generation process, nodes’ connection behaviors with different communities can be extracted accurately, which are applied as latent social dimensions for classification. Experiments on various public datasets reveal that the proposed method can obtain satisfactory classification results in comparison to other state-of-the-art methods on smaller social dimensions. PMID:27049849
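
    A compact stand-in for the idea: treat each node's neighbor counts as a "document", let LDA recover community-like topics, and use the per-node topic proportions as social dimensions for one-vs-rest multi-label classification. This simplifies the paper's generative model considerably.

        # Behavior-based social dimensions via LDA, then multi-label classification.
        import numpy as np
        from sklearn.decomposition import LatentDirichletAllocation
        from sklearn.linear_model import LogisticRegression
        from sklearn.multiclass import OneVsRestClassifier

        rng = np.random.default_rng(0)
        n_nodes = 100
        adj = (rng.random((n_nodes, n_nodes)) < 0.05).astype(float)
        adj = np.maximum(adj, adj.T)                   # undirected adjacency counts

        lda = LatentDirichletAllocation(n_components=5, random_state=0)
        dims = lda.fit_transform(adj)                  # social dimensions per node

        labels = rng.integers(0, 2, size=(n_nodes, 3)) # 3 independent binary labels
        clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(dims, labels)
        print(clf.predict(dims[:5]))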

  6. Intraregional classification of wine via ICP-MS elemental fingerprinting.

    PubMed

    Coetzee, P P; van Jaarsveld, F P; Vanhaecke, F

    2014-12-01

    The feasibility of elemental fingerprinting in the classification of wines according to their provenance vineyard soil was investigated in the relatively small geographical area of a single wine district. Results for the Stellenbosch wine district (Western Cape Wine Region, South Africa), comprising an area of less than 1,000 km(2), suggest that classification of wines from different estates (120 wines from 23 estates) is indeed possible using accurate elemental data and multivariate statistical analysis based on a combination of principal component analysis, cluster analysis, and discriminant analysis. This is the first study to demonstrate the successful classification of wines at estate level in a single wine district in South Africa. The elements B, Ba, Cs, Cu, Mg, Rb, Sr, Tl and Zn were identified as suitable indicators. White and red wines were grouped in separate data sets to allow successful classification of wines. Correlation between wine classification and soil type distributions in the area was observed. PMID:24996361
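
    The statistical chain (standardize the elemental concentrations, reduce with PCA, classify by estate with discriminant analysis) can be sketched as follows on synthetic data; the element count matches the nine indicators named above.

        # PCA + linear discriminant analysis on elemental fingerprints.
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 9))      # 120 wines x 9 element concentrations
        y = np.arange(120) % 23            # balanced estate labels for the sketch

        model = make_pipeline(StandardScaler(), PCA(n_components=5),
                              LinearDiscriminantAnalysis())
        print("CV accuracy:", cross_val_score(model, X, y, cv=3).mean())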

  7. Hyperspectral Data Classification Using Factor Graphs

    NASA Astrophysics Data System (ADS)

    Makarau, A.; Müller, R.; Palubinskas, G.; Reinartz, P.

    2012-07-01

    Accurate classification of hyperspectral data is still a challenging task, and new classification methods are being developed to meet the demands of hyperspectral data use. The objective of this paper is to develop a new method for hyperspectral data classification that ensures classification model properties such as transferability, generalization, and probabilistic interpretation. While factor graphs (undirected graphical models) are unfortunately not widely employed in remote sensing tasks, these models possess important properties, such as the representation of complex systems, for modeling estimation and decision-making tasks. In this paper we present a new method for hyperspectral data classification using factor graphs. A factor graph (a bipartite graph consisting of variable and factor vertices) allows factorization of a more complex function, leading to the definition of variables (employed to store input data), latent variables (which bridge an abstract class to the data), and factors (defining prior probabilities for spectral features and abstract classes; mapping input data to a mixture of spectral features and further bridging the mixture to an abstract class). Latent variables play an important role by defining a two-level mapping of the input spectral features to a class. Configuring (learning) the model on training data yields a parameter set that bridges the input data to a class. The classification algorithm is as follows. Spectral bands are separately pre-processed (unsupervised clustering is used) to be defined on a finite domain (alphabet), leading to a representation of the data as multinomial distributions. The represented hyperspectral data are used as input evidence (the evidence vector is selected pixelwise) in a configured factor graph, and inference is run, resulting in the posterior probability. Variational (mean-field) inference obtains plausible results with a low calculation time. Calculating the posterior probability for each class
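
    As a much-simplified stand-in for this pipeline, the sketch below keeps only the first step (per-band discretization to a finite alphabet via unsupervised clustering) and replaces the factor graph and mean-field inference with a plain multinomial class model; it is not the paper's method.

        # Per-band discretization + categorical Naive Bayes as a degenerate
        # stand-in for the factor-graph classifier described above.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.naive_bayes import CategoricalNB

        rng = np.random.default_rng(0)
        n_pix, n_bands = 500, 8
        X = rng.normal(size=(n_pix, n_bands))          # stand-in hyperspectral pixels
        y = rng.integers(0, 4, n_pix)                  # 4 land-cover classes

        # Discretize each band onto an alphabet of 6 symbols.
        X_sym = np.column_stack([
            KMeans(n_clusters=6, n_init=5, random_state=0).fit_predict(X[:, [b]])
            for b in range(n_bands)
        ])

        clf = CategoricalNB(min_categories=6).fit(X_sym[:400], y[:400])
        print("held-out accuracy:", clf.score(X_sym[400:], y[400:]))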

  8. Refining Landsat classification results using digital terrain data

    USGS Publications Warehouse

    Miller, Wayne A.; Shasby, Mark

    1982-01-01

    Scientists at the U.S. Geological Survey's Earth Resources Observation Systems (EROS) Data Center have recently completed two land-cover mapping projects in which digital terrain data were used to refine Landsat classification results. Digital terrain data were incorporated into the Landsat classification process using two different procedures that required developing decision criteria either subjectively or quantitatively. The subjective procedure was used in a vegetation mapping project in Arizona, and the quantitative procedure was used in a forest-fuels mapping project in Montana. By incorporating digital terrain data into the Landsat classification process, more spatially accurate land-cover maps were produced for both projects.

  9. Biomarker selection and classification of "-omics" data using a two-step bayes classification framework.

    PubMed

    Assawamakin, Anunchai; Prueksaaroon, Supakit; Kulawonganunchai, Supasak; Shaw, Philip James; Varavithya, Vara; Ruangrajitpakorn, Taneth; Tongsima, Sissades

    2013-01-01

    Identification of suitable biomarkers for accurate prediction of phenotypic outcomes is a goal for personalized medicine. However, current machine learning approaches are either too complex or perform poorly. Here, a novel two-step machine-learning framework is presented to address this need. First, a Naïve Bayes estimator is used to rank features, on the premise that the top-ranked features will most likely contain the most informative ones for prediction of the underlying biological classes. The top-ranked features are then used in a Hidden Naïve Bayes classifier to construct a classification prediction model from these filtered attributes. In order to obtain the minimum set of the most informative biomarkers, the bottom-ranked features are successively removed from the Naïve Bayes-filtered feature list one at a time, and the classification accuracy of the Hidden Naïve Bayes classifier is checked for each pruned feature set. The performance of the proposed two-step Bayes classification framework was tested on different types of -omics datasets, including gene expression microarray, single nucleotide polymorphism microarray (SNP array), and surface-enhanced laser desorption/ionization time-of-flight (SELDI-TOF) proteomic data. The proposed framework equaled and, in some cases, outperformed other classification methods in terms of prediction accuracy, minimum number of classification markers, and computational time. PMID:24106694
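
    A sketch of the two steps, with plain GaussianNB standing in for the Hidden Naïve Bayes classifier (which sklearn does not provide); data and the starting set size are illustrative.

        # Step 1: rank features by univariate NB performance.
        # Step 2: prune from the bottom while accuracy does not degrade.
        import numpy as np
        from sklearn.naive_bayes import GaussianNB
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 50))                 # e.g. 50 expression features
        y = rng.integers(0, 2, 100)

        scores = [cross_val_score(GaussianNB(), X[:, [j]], y, cv=5).mean()
                  for j in range(X.shape[1])]
        order = np.argsort(scores)[::-1]               # best features first

        kept = list(order[:10])                        # start from the top 10
        best = cross_val_score(GaussianNB(), X[:, kept], y, cv=5).mean()
        while len(kept) > 1:
            trial = kept[:-1]                          # drop current bottom feature
            acc = cross_val_score(GaussianNB(), X[:, trial], y, cv=5).mean()
            if acc < best:
                break
            kept, best = trial, acc
        print("selected features:", kept, "accuracy:", round(best, 3))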

  10. Methodology for hyperspectral image classification using novel neural network

    SciTech Connect

    Subramanian, S., Gat, N., Sheffield, M.,; Barhen, J.; Toomarian, N.

    1997-04-01

    A novel feed-forward neural network is used to classify hyperspectral data from the AVIRIS sensor. The network applies an alternating-direction singular value decomposition technique to achieve rapid training times (a few seconds per class). Very few samples (10-12) are required for training. 100% accurate classification is obtained using test data sets. The methodology combines this rapid-training neural network with data reduction and maximal feature separation techniques, such as principal component analysis and simultaneous diagonalization of covariance matrices, for rapid and accurate classification of large hyperspectral images. The results are compared to those of standard statistical classifiers. 21 refs., 3 figs., 5 tabs.

  11. Hierarchical Markov random-field modeling for texture classification in chest radiographs

    NASA Astrophysics Data System (ADS)

    Vargas-Voracek, Rene; Floyd, Carey E., Jr.; Nolte, Loren W.; McAdams, Page

    1996-04-01

    A hierarchical Markov random field (MRF) modeling approach is presented for the classification of textures in selected regions of interest (ROIs) of chest radiographs. The procedure integrates possible texture classes and their spatial definition with other components present in an image, such as noise and background trend. Classification is performed as a maximum a posteriori (MAP) estimation of texture class and involves an iterative Gibbs-sampling technique. Two cases are studied: classification of lung parenchyma versus bone and classification of normal lung parenchyma versus miliary tuberculosis (MTB). Accurate classification was obtained for all examined cases, showing the potential of the proposed modeling approach for texture analysis of radiographic images.
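
    The MAP-over-MRF idea can be sketched with a Potts prior. The paper uses Gibbs sampling; the sketch below substitutes ICM (iterated conditional modes), a simpler deterministic approximation of the same MAP objective.

        # MAP texture labeling under a Potts MRF prior via ICM.
        import numpy as np

        def icm_map(unary, beta=1.0, n_iter=5):
            """unary: (H, W, K) negative log-likelihood per pixel and class."""
            H, W, K = unary.shape
            labels = unary.argmin(axis=2)              # start from the ML labeling
            for _ in range(n_iter):
                for i in range(H):
                    for j in range(W):
                        cost = unary[i, j].copy()
                        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                            ni, nj = i + di, j + dj
                            if 0 <= ni < H and 0 <= nj < W:
                                # Potts penalty for disagreeing with a neighbor.
                                cost += beta * (np.arange(K) != labels[ni, nj])
                        labels[i, j] = cost.argmin()   # locally optimal class
            return labels

        rng = np.random.default_rng(0)
        unary = rng.random((32, 32, 2))                # two texture classes
        print(np.bincount(icm_map(unary).ravel()))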

  12. Improved Surgical Site Infection (SSI) rate through accurately assessed surgical wounds

    PubMed Central

    John, Honeymol; Nimeri, Abdelrahman; ELLAHHAM, SAMER

    2015-01-01

    Sheikh Khalifa Medical City's (SKMC) Surgery Institute was identified as a high outlier in Surgical Site Infections (SSI) based on the American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) Semi-Annual Report (SAR) in January 2012. The aim of this project was to improve SSI rates through accurate wound classification. We identified SSI rate reduction as a performance improvement and safety priority at SKMC, a tertiary referral center, and used the ACS NSQIP best practice guidelines as a guide. ACS NSQIP is a clinical registry that provides risk-adjusted clinical outcome reports every six months. The rates of SSI are reported as an observed/expected ratio; the expected ratio is calculated based on the risk factors of the patients, which include wound classification. We established a multidisciplinary SSI taskforce, whose members included the ACS NSQIP team, quality, surgeons, nurses, infection control, IT, pharmacy, and microbiology, chaired by a colorectal surgeon. The taskforce focused on five areas: pre-op showering and hair removal, skin antisepsis, prophylactic antibiotics, peri-operative maintenance of glycaemia, and normothermia. We planned audits to evaluate our wound classification and our SSI rates based on the SAR. Our expected SSI rates in general surgery and the whole department were 2.52% and 1.70% respectively, while our observed SSI rates were 4.68% and 3.57% respectively, giving us a high outlier status with odds ratios of 1.72 and 2.03. Wound classification was identified as an area of concern. For example, wound classifications were preoperatively selected based on the default wound classification of the booked procedure in the Electronic Medical Record (EMR), which led to under-classification of wounds on many occasions. A total of 998 cases were reviewed, our rate of incorrect wound classification

  13. Formaldehyde exposure and acute health effects study

    SciTech Connect

    Quackenboss, J.J.; Lebowitz, M.D.; Michaud, J.P.; Bronnimann, D. )

    1989-01-01

    To assess the effects of formaldehyde exposures on health, exposure groups were defined using baseline exposure and health questionnaires. Formaldehyde concentrations were poorly correlated with these exposure classifications, perhaps due to the time delay between classification and monitoring. The 151 households reported here had a mean HCHO concentration of 35 (S.E. 1.5, median 30) μg/m³. Passive samplers prepared in our lab were calibrated in a chamber to derive an estimated sampling rate of 0.311 μg/(mg·m⁻³·hr). They were also compared to commercially available samplers inside the homes, with a correlation coefficient of 0.896 and a mean difference of 2.6 μg/m³. In this report of initial findings from an ongoing study, daily symptoms and peak expiratory flow measurements were compared with an HCHO exposure classification based on the median measured concentrations. None of the symptom groups was related to HCHO exposure when controlling for age and sex. There was a significant relationship between HCHO exposure and variability in peak expiratory flows that was dependent on age group. It may be especially important to assess the variability in reactive individuals and children to determine the short-term effects of HCHO exposures and possible long-term consequences.

  14. RELIABILITY OF BIOMARKERS OF PESTICIDE EXPOSURE AMONG CHILDREN AND ADULTS IN CTEPP OHIO

    EPA Science Inventory

    Urinary biomarkers offer the potential for providing an efficient tool for exposure classification by reflecting the aggregate of all exposure routes. Substantial variability observed in urinary pesticide metabolite concentrations over short periods of time, however, has cast so...

  15. Delaware Alternative Classifications

    ERIC Educational Resources Information Center

    Miller, Jay

    1975-01-01

    This article discusses the species designation and taxonomies of Delaware and Algonkian and presents eight classifications of taxa by form, habitat, color, movement, sound, use, relationship, and appearance. Relevant research is also reviewed. (CLK)

  16. Supernova Photometric Lightcurve Classification

    NASA Astrophysics Data System (ADS)

    Zaidi, Tayeb; Narayan, Gautham

    2016-01-01

    This is a preliminary report on photometric supernova classification. We first explore the properties of supernova light curves, and attempt to restructure the unevenly sampled and sparse data from assorted datasets to allow for processing and classification. The data was primarily drawn from the Dark Energy Survey (DES) simulated data, created for the Supernova Photometric Classification Challenge. This poster shows a method for producing a non-parametric representation of the light curve data, and applying a Random Forest classifier algorithm to distinguish between supernovae types. We examine the impact of Principal Component Analysis to reduce the dimensionality of the dataset, for future classification work. The classification code will be used in a stage of the ANTARES pipeline, created for use on the Large Synoptic Survey Telescope alert data and other wide-field surveys. The final figure-of-merit for the DES data in the r band was 60% for binary classification (Type I vs II). Zaidi was supported by the NOAO/KPNO Research Experiences for Undergraduates (REU) Program which is funded by the National Science Foundation Research Experiences for Undergraduates Program (AST-1262829).

  17. Progressive Classification Using Support Vector Machines

    NASA Technical Reports Server (NTRS)

    Wagstaff, Kiri; Kocurek, Michael

    2009-01-01

    An algorithm for progressive classification of data, analogous to progressive rendering of images, makes it possible to compromise between speed and accuracy. This algorithm uses support vector machines (SVMs) to classify data. An SVM is a machine learning algorithm that builds a mathematical model of the desired classification concept by identifying the critical data points, called support vectors. Coarse approximations to the concept require only a few support vectors, while precise, highly accurate models require far more support vectors. Once the model has been constructed, the SVM can be applied to new observations. The cost of classifying a new observation is proportional to the number of support vectors in the model. When computational resources are limited, an SVM of the appropriate complexity can be produced. However, if the constraints are not known when the model is constructed, or if they can change over time, a method for adaptively responding to the current resource constraints is required. This capability is particularly relevant for spacecraft (or any other real-time systems) that perform onboard data analysis. The new algorithm enables the fast, interactive application of an SVM classifier to a new set of data. The classification process achieved by this algorithm is characterized as progressive because a coarse approximation to the true classification is generated rapidly and thereafter iteratively refined. The algorithm uses two SVMs: (1) a fast, approximate one and (2) slow, highly accurate one. New data are initially classified by the fast SVM, producing a baseline approximate classification. For each classified data point, the algorithm calculates a confidence index that indicates the likelihood that it was classified correctly in the first pass. Next, the data points are sorted by their confidence indices and progressively reclassified by the slower, more accurate SVM, starting with the items most likely to be incorrectly classified. The user
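
    The scheme lends itself to a short sketch: a fast linear SVM labels everything, the absolute decision margin serves as the confidence index, and a slower RBF SVM re-labels points in order of increasing confidence. The details below are illustrative, not the flight implementation.

        # Progressive classification: coarse pass, then refinement ordered by
        # confidence, so a usable labeling exists at every moment.
        import numpy as np
        from sklearn.svm import LinearSVC, SVC
        from sklearn.datasets import make_classification

        X_train, y_train = make_classification(n_samples=500, random_state=0)
        X_new, _ = make_classification(n_samples=200, random_state=1)

        fast = LinearSVC(dual=False).fit(X_train, y_train)
        slow = SVC(kernel="rbf").fit(X_train, y_train)

        labels = fast.predict(X_new)                   # coarse first pass
        confidence = np.abs(fast.decision_function(X_new))
        for idx in np.argsort(confidence):             # least confident first
            labels[idx] = slow.predict(X_new[[idx]])[0]
            # The loop can stop at any time; `labels` is always usable.
        print(labels[:10])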

  18. ANALYSIS OF DISCRIMINATING FACTORS IN HUMAN ACTIVITIES THAT AFFECT EXPOSURE

    EPA Science Inventory

    Accurately modeling exposure to particulate matter (PM) and other pollutants ultimately involves the utilization of human location-activity databases to assist in understanding the potential variability of microenvironmental exposures. This paper critically considers and stati...

  19. MODELING POPULATION EXPOSURES TO OUTDOOR SOURCES OF HAZARDOUS AIR POLLUTANTS

    EPA Science Inventory

    Accurate assessment of human exposures is an important part of environmental health effects research. However, most air pollution epidemiology studies rely upon imperfect surrogates of personal exposures, such as information based on available central-site outdoor concentration ...

  20. A comparison of material classification techniques for ultrasound inverse imaging.

    PubMed

    Zhang, Xiaodong; Broschat, Shira L; Flynn, Patrick J

    2002-01-01

    The conjugate gradient method with edge preserving regularization (CGEP) is applied to the ultrasound inverse scattering problem for the early detection of breast tumors. To accelerate image reconstruction, several different pattern classification schemes are introduced into the CGEP algorithm. These classification techniques are compared for a full-sized, two-dimensional breast model. One of these techniques uses two parameters, the sound speed and attenuation, simultaneously to perform classification based on a Bayesian classifier and is called bivariate material classification (BMC). The other two techniques, presented in earlier work, are univariate material classification (UMC) and neural network (NN) classification. BMC is an extension of UMC, the latter using attenuation alone to perform classification, and NN classification uses a neural network. Both noiseless and noisy cases are considered. For the noiseless case, numerical simulations show that the CGEP-BMC method requires 40% fewer iterations than the CGEP method, and the CGEP-NN method requires 55% fewer. The CGEP-BMC and CGEP-NN methods yield more accurate reconstructions than the CGEP method. A quantitative comparison of the CGEP-BMC, CGEP-NN, and GN-UMC methods shows that the CGEP-BMC and CGEP-NN methods are more robust to noise than the GN-UMC method, while all three are similar in computational complexity. PMID:11831821

  2. Iris Image Classification Based on Hierarchical Visual Codebook.

    PubMed

    Zhenan Sun; Hui Zhang; Tieniu Tan; Jianyu Wang

    2014-06-01

    Iris recognition as a reliable method for personal identification has been well studied, with the objective of assigning the class label of each iris image to a unique subject. In contrast, iris image classification aims to classify an iris image into an application-specific category, e.g., iris liveness detection (classification of genuine and fake iris images), race classification (e.g., classification of iris images of Asian and non-Asian subjects), or coarse-to-fine iris identification (classification of all iris images in the central database into multiple categories). This paper proposes a general framework for iris image classification based on texture analysis. A novel texture pattern representation method called the Hierarchical Visual Codebook (HVC) is proposed to encode the texture primitives of iris images. The proposed HVC method is an integration of two existing Bag-of-Words models, namely the Vocabulary Tree (VT) and Locality-constrained Linear Coding (LLC). The HVC adopts a coarse-to-fine visual coding strategy and takes advantage of both VT and LLC for accurate and sparse representation of iris texture. Extensive experimental results demonstrate that the proposed iris image classification method achieves state-of-the-art performance for iris liveness detection, race classification, and coarse-to-fine iris identification. A comprehensive fake iris image database simulating four types of iris spoof attacks is developed as a benchmark for research on iris liveness detection. PMID:26353275

  3. Accurate indel prediction using paired-end short reads

    PubMed Central

    2013-01-01

    Background One of the major open challenges in next generation sequencing (NGS) is the accurate identification of structural variants such as insertions and deletions (indels). Current methods for indel calling assign scores to different types of evidence or counter-evidence for the presence of an indel, such as the number of split read alignments spanning the boundaries of a deletion candidate or reads that map within a putative deletion. Candidates with a score above a manually defined threshold are then predicted to be true indels. As a consequence, structural variants detected in this manner contain many false positives. Results Here, we present a machine learning based method which is able to discover and distinguish true from false indel candidates in order to reduce the false positive rate. Our method identifies indel candidates using a discriminative classifier based on features of split read alignment profiles and trained on true and false indel candidates that were validated by Sanger sequencing. We demonstrate the usefulness of our method with paired-end Illumina reads from 80 genomes of the first phase of the 1001 Genomes Project ( http://www.1001genomes.org) in Arabidopsis thaliana. Conclusion In this work we show that indel classification is a necessary step to reduce the number of false positive candidates. We demonstrate that missing classification may lead to spurious biological interpretations. The software is available at: http://agkb.is.tuebingen.mpg.de/Forschung/SV-M/. PMID:23442375
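
    A sketch of the discriminative filtering step under stated assumptions: the three feature names are illustrative stand-ins for the split-read alignment profile features, and the tiny training set is fabricated for the example.

    ```python
    # Train on validated indel candidates, then score new candidates so that
    # low-scoring (likely false positive) calls can be filtered out.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Hypothetical per-candidate features: supporting split reads,
    # reads mapping inside the putative deletion, local read depth.
    X = np.array([[12, 0, 30], [1, 9, 28], [9, 1, 25], [0, 11, 31]], dtype=float)
    y = np.array([1, 0, 1, 0])   # 1 = Sanger-validated indel, 0 = false candidate

    clf = make_pipeline(StandardScaler(), SVC()).fit(X, y)
    print(clf.predict(np.array([[10.0, 1.0, 27.0]])))   # -> array([1])
    ```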

  4. A Novel Vehicle Classification Using Embedded Strain Gauge Sensors

    PubMed Central

    Zhang, Wenbin; Wang, Qi; Suo, Chunguang

    2008-01-01

    This paper presents a new vehicle classification method and develops a traffic monitoring detector to provide reliable vehicle classification to aid traffic management systems. The basic principle of this approach is to measure the dynamic strain caused by vehicles crossing the pavement in order to obtain the corresponding vehicle parameters – wheelbase and number of axles – and then accurately classify the vehicle. A system prototype with five embedded strain sensors was developed to validate the accuracy and effectiveness of the classification method. From the special arrangement of the sensors and the different times at which a vehicle arrives at each sensor, one can accurately estimate the vehicle's speed and, from it, the vehicle's wheelbase and number of axles. Because of measurement errors and vehicle characteristics, there is considerable overlap between vehicle wheelbase patterns, so directly setting a fixed threshold for vehicle classification often leads to low-accuracy results. Machine learning pattern recognition methods are among the most effective tools for dealing with this problem. In this study, support vector machines (SVMs) were used to integrate the classification features extracted from the strain sensors to automatically classify vehicles into five types, ranging from small vehicles to combination trucks, along the lines of the Federal Highway Administration vehicle classification guide. Test-bench and field experiments are described in this paper. Two SVM classification algorithms (one-against-all, one-against-one) are used to classify single-sensor data and combined multi-sensor data. Comparison of the two classification methods shows that classification accuracy is very similar whether single-sensor or multi-sensor data are used. Our results indicate that multiclass SVM-based fusion of multiple sensor data significantly improves on the results of a single sensor, which is trained on the
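
    A brief sketch of the two multiclass schemes named above, using scikit-learn; the feature values and the five type labels are illustrative, not the paper's data:

    ```python
    # One-against-all vs. one-against-one SVM classification on
    # wheelbase / axle-count features estimated from the strain sensors.
    import numpy as np
    from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
    from sklearn.svm import SVC

    # Hypothetical features per vehicle: [wheelbase (m), number of axles]
    X = np.array([[2.5, 2], [2.7, 2], [3.4, 2], [4.2, 3], [6.0, 4], [7.5, 5]])
    y = np.array([0, 0, 1, 2, 3, 4])   # 0 = small vehicle ... 4 = combination truck

    one_vs_all = OneVsRestClassifier(SVC(kernel="rbf")).fit(X, y)
    one_vs_one = OneVsOneClassifier(SVC(kernel="rbf")).fit(X, y)
    print(one_vs_all.predict([[3.0, 2]]), one_vs_one.predict([[3.0, 2]]))
    ```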

  5. Manual classification strategies in the ECOD database

    PubMed Central

    Cheng, Hua; Liao, Yuxing; Schaeffer, R. Dustin; Grishin, Nick V.

    2015-01-01

    ECOD (Evolutionary Classification Of protein Domains) is a comprehensive and up-to-date protein structure classification database. The majority of new structures released from the PDB (Protein Data Bank) every week already have close homologs in the ECOD hierarchy and thus can be reliably partitioned into domains and classified by software without manual intervention. However, those proteins that lack confidently detectable homologs require careful analysis by experts. Although many bioinformatics resources rely on expert curation to some degree, specific examples of how this curation occurs and in what cases it is necessary are not always described. Here, we illustrate the manual classification strategy in ECOD by example, focusing on two major issues in protein classification: domain partitioning and the relationship between homology and similarity scores. Most examples show recently released and manually classified PDB structures. We discuss multi-domain proteins, discordance between sequence and structural similarities, difficulties with assessing homology with scores, and integral membrane proteins homologous to soluble proteins. By timely assimilation of newly available structures into its hierarchy, ECOD strives to provide the most accurate and up-to-date view of the protein structure world resulting from combined computational and expert-driven analysis. PMID:25917548

  6. Brain extraction based on locally linear representation-based classification.

    PubMed

    Huang, Meiyan; Yang, Wei; Jiang, Jun; Wu, Yao; Zhang, Yu; Chen, Wufan; Feng, Qianjin

    2014-05-15

    Brain extraction is an important procedure in brain image analysis. Although numerous brain extraction methods have been presented, improving them remains challenging because brain MRI images exhibit complex characteristics, such as anatomical variability and intensity differences across different sequences and scanners. To address this problem, we present a Locally Linear Representation-based Classification (LLRC) method for brain extraction. A novel classification framework is derived by introducing the locally linear representation into the classical classification model. Under this framework, a common label fusion approach can be considered as a special case and thoroughly interpreted. Locality is important for calculating the fusion weights in LLRC; it is also the reason Local Anchor Embedding is more suitable than other linear representation approaches for solving the locally linear coefficients. Moreover, LLRC provides a way to learn the optimal classification scores of the training samples in the dictionary to obtain accurate classification. The International Consortium for Brain Mapping and the Alzheimer's Disease Neuroimaging Initiative databases were used to build a training dataset containing 70 scans. To evaluate the proposed method, we used four publicly available datasets (IBSR1, IBSR2, LPBA40, and ADNI3T, with a total of 241 scans). Experimental results demonstrate that the proposed method outperforms four common brain extraction methods (BET, BSE, GCUT, and ROBEX) and is comparable to BEaST, while being more accurate than BEaST on some datasets. PMID:24525169
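
    A minimal numpy sketch of the locally linear representation idea. It replaces the paper's Local Anchor Embedding with an unconstrained least-squares solve, so it is a simplification under stated assumptions, not the published method:

    ```python
    # Classify a test feature vector by reconstructing it from its K nearest
    # training samples and pooling the reconstruction weights per class label.
    import numpy as np

    def llr_classify(x, train_feats, train_labels, K=10):
        idx = np.argsort(np.linalg.norm(train_feats - x, axis=1))[:K]
        A = train_feats[idx].T                      # (d, K) local dictionary
        w, *_ = np.linalg.lstsq(A, x, rcond=None)   # locally linear coefficients
        classes = np.unique(train_labels[idx])
        scores = {c: w[train_labels[idx] == c].sum() for c in classes}
        return max(scores, key=scores.get)          # label with largest pooled weight
    ```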

  7. Concepts of Classification and Taxonomy Phylogenetic Classification

    NASA Astrophysics Data System (ADS)

    Fraix-Burnet, D.

    2016-05-01

    Phylogenetic approaches to classification have been heavily developed in biology by bioinformaticians. But these techniques have applications in other fields, in particular in linguistics. Their main characteristic is that they search for relationships between the objects or species under study, instead of grouping them by similarity. They are thus well suited to any kind of evolving objects. For nearly fifteen years, astrocladistics has explored the use of Maximum Parsimony (or cladistics) for astronomical objects like galaxies or globular clusters. In this lesson we will learn how it works.

  8. Personal Exposure Monitoring Wearing Protocol Compliance: An Initial Assessment of Quantitative Measurements

    EPA Science Inventory

    Personal exposure sampling provides the most accurate and representative assessment of exposure to a pollutant, but only if measures are implemented to minimize exposure misclassification and reduce confounders that may cause misinterpretation of the collected data. Poor complian...

  9. Contribution of various microenvironments to the daily personal exposure to ultrafine particles: Personal monitoring coupled with GPS tracking

    NASA Astrophysics Data System (ADS)

    Bekö, Gabriel; Kjeldsen, Birthe Uldahl; Olsen, Yulia; Schipperijn, Jasper; Wierzbicka, Aneta; Karottki, Dorina Gabriela; Toftum, Jørn; Loft, Steffen; Clausen, Geo

    2015-06-01

    Exposure to ultrafine particles (UFP) may have adverse health effects. Central monitoring stations do not represent the personal exposure to UFP accurately. Few studies have previously focused on personal exposure to UFP. Sixty non-smoking residents living in Copenhagen, Denmark, were asked to carry a backpack equipped with a portable monitor, continuously recording particle number concentrations (PN), in order to measure the real-time individual exposure over a period of ∼48 h. A GPS logger was carried along with the particle monitor and allowed us to estimate the contribution of UFP exposure occurring in various microenvironments (residence, during active and passive transport, other indoor and outdoor environments) to the total daily exposure. On average, the fractional contribution of each microenvironment to the daily integrated personal exposure roughly corresponded to the fractions of the day the subjects spent in each microenvironment. The home environment accounted for 50% of the daily personal exposure. Indoor environments other than home or vehicles contributed ∼40%. The highest median UFP concentration was obtained during passive transport (vehicles). However, being in transit or outdoors contributed 5% or less to the daily exposure. Additionally, the subjects recorded in a diary the periods when they were at home. With this approach, 66% of the total daily exposure was attributable to the home environment. The subjects spent 28% more time at home according to the diary, compared to the GPS. These results may indicate limitations of using diaries, but also possible inaccuracy and misclassification in the GPS data.
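
    The apportionment arithmetic behind these percentages can be sketched in a few lines of pandas; the column names and values below are hypothetical:

    ```python
    # Fractional contribution of each microenvironment to the daily
    # integrated exposure: integrate concentration x time, then normalize.
    import pandas as pd

    log = pd.DataFrame({
        "pn": [8e3, 2e4, 1.2e4, 6e3],          # particle number conc. (cm^-3)
        "minutes": [600, 45, 300, 75],         # time spent in each segment
        "microenv": ["home", "vehicle", "other_indoor", "outdoor"],
    })
    log["dose"] = log["pn"] * log["minutes"]   # time-integrated exposure
    shares = log.groupby("microenv")["dose"].sum()
    print(shares / shares.sum())               # fraction of daily exposure
    ```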

  10. Environmental occurrence, analysis and human exposure to the flame retardant tetrabromobisphenol-A (TBBP-A)-A review.

    PubMed

    Abou-Elwafa Abdallah, Mohamed

    2016-09-01

    TBBP-A is a high production volume chemical applied widely as a flame retardant in printed circuit boards. Recent studies have raised concern over potential harmful implications of TBBP-A exposure in human and wildlife, leading to its classification under group 2A "Probably carcinogenic to humans" by the International Agency for Research on Cancer. This article provides a comprehensive review of the available literature on TBBP-A analysis, environmental levels and human exposure. Liquid chromatography-tandem mass spectrometry (LC-MS/MS) has been identified as the method of choice for robust, accurate and sensitive analysis of TBBP-A in different matrices. TBBP-A has been detected in almost all environmental compartments all over the world, rendering it a ubiquitous contaminant. Human exposure studies revealed dust ingestion and diet as the major pathways of TBBP-A exposure in the general population. Toddlers are likely to be more exposed than adults via accidental indoor dust ingestion. Moreover, exposure to TBBP-A may occur prenatally and via breast milk. There are no current restrictions on the production of TBBP-A in the EU or worldwide. However, more research is required to characterise human exposure to TBBP-A in and around production facilities, as well as in e-waste recycling regions. PMID:27266836

  11. Proteomic applications of automated GPCR classification.

    PubMed

    Davies, Matthew N; Gloriam, David E; Secker, Andrew; Freitas, Alex A; Mendao, Miguel; Timmis, Jon; Flower, Darren R

    2007-08-01

    The G-protein coupled receptor (GPCR) superfamily fulfils various metabolic functions and interacts with a diverse range of ligands. There is a lack of sequence similarity between the six classes that comprise the GPCR superfamily. Moreover, most novel GPCRs found have low sequence similarity to other family members, which makes it difficult to infer properties from related receptors. Many different approaches have been taken towards developing efficient and accurate methods for GPCR classification, ranging from motif-based systems to machine learning as well as a variety of alignment-free techniques based on the physicochemical properties of their amino acid sequences. This review describes the inherent difficulties in developing a GPCR classification algorithm and includes techniques previously employed in this area. PMID:17639603

  12. Classification, pathogenesis, and treatment of systemic vasculitis.

    PubMed

    Griffith, M E; Gaskin, G; Pusey, C D

    1996-09-01

    Patients with systemic vasculitis (SV), especially Wegener's granulomatosis and microscopic polyangiitis, regularly present with renal involvement. Although considered a rare disease, either the incidence of SV is increasing or it is being increasingly recognized. Accurate classification systems are required to allow comparison of data from different groups investigating and treating these patients. Systemic vasculitis is known to be an autoimmune disease, but the mechanisms of pathogenesis have not been established, despite many studies on this topic in recent years. Most of this work has been done in vitro, although development of animal models is underway. Patient and renal survival have improved with aggressive immunosuppressive treatment, but morbidity is high and controversies remain in establishing the most effective regimens with minimum adverse effects. In this review we discuss the classification of SV, review the current knowledge of pathogenic mechanisms, and consider the relative merits of different treatment protocols. PMID:8903093

  13. Segmentation assisted food classification for dietary assessment

    NASA Astrophysics Data System (ADS)

    Zhu, Fengqing; Bosch, Marc; Schap, TusaRebecca; Khanna, Nitin; Ebert, David S.; Boushey, Carol J.; Delp, Edward J.

    2011-03-01

    Accurate methods and tools to assess food and nutrient intake are essential for the association between diet and health. Preliminary studies have indicated that the use of a mobile device with a built-in camera to obtain images of the food consumed may provide a less burdensome and more accurate method for dietary assessment. We are developing methods to identify food items using a single image acquired from the mobile device. Our goal is to automatically determine the regions in an image where a particular food is located (segmentation) and correctly identify the food type based on its features (classification or food labeling). Images of foods are segmented using Normalized Cuts based on intensity and color. Color and texture features are extracted from each segmented food region. Classification decisions for each segmented region are made using support vector machine methods. The segmentation of each food region is refined based on feedback from the output of classifier to provide more accurate estimation of the quantity of food consumed.
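
    A sketch of the segment-then-classify loop under stated assumptions: SLIC superpixels stand in for Normalized Cuts, per-region color mean and standard deviation stand in for the color and texture features, and `svm` is assumed to be pre-trained on labeled food regions.

    ```python
    # Label each segmented region of a food image with a pre-trained classifier.
    import numpy as np
    from skimage.segmentation import slic

    def label_food_regions(image, svm):
        segments = slic(image, n_segments=20)       # stand-in segmentation
        region_labels = {}
        for seg_id in np.unique(segments):
            pixels = image[segments == seg_id]      # (n_pixels, 3) RGB values
            feats = np.hstack([pixels.mean(axis=0),   # color feature
                               pixels.std(axis=0)])   # crude texture proxy
            region_labels[seg_id] = svm.predict(feats.reshape(1, -1))[0]
        return segments, region_labels
    ```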

  15. Implementing the North American Industry Classification System at BLS.

    ERIC Educational Resources Information Center

    Walker, James A.; Murphy, John B.

    2001-01-01

    The United States, Canada, and Mexico developed the North American Industry Classification System, which captures new and emerging industries, uses a unified concept to define industries, and is a consistent and comparable tool for measuring the nations' economies. Despite initial conversion difficulties, the new system will be a more accurate way…

  16. Preprocessing remotely-sensed data for efficient analysis and classification

    SciTech Connect

    Kelly, P.M.; White, J.M.

    1993-02-01

    Interpreting remotely-sensed data typically requires expensive, specialized computing machinery capable of storing and manipulating large amounts of data quickly. In this paper, we present a method for accurately analyzing and categorizing remotely-sensed data on much smaller, less expensive platforms. Data size is reduced in such a way that an efficient, interactive method of data classification becomes possible.

  17. Spatial Pattern Classification for More Accurate Forecasting of Variable Energy Resources

    NASA Astrophysics Data System (ADS)

    Novakovskaia, E.; Hayes, C.; Collier, C.

    2014-12-01

    The accuracy of solar and wind forecasts is becoming increasingly important as grid operators continue to integrate additional renewable generation onto the electric grid. Forecast errors affect rate payers, grid operators, wind and solar plant maintenance crews, and energy traders through price increases, project downtime, or lost revenue. While extensive and beneficial efforts have been undertaken in recent years to improve physical weather models for a broad spectrum of applications, these improvements have generally not been sufficient to meet the accuracy demands of system planners. For renewables, these models are often used in conjunction with additional statistical models utilizing both meteorological observations and power generation data. Forecast accuracy can depend on the specific weather regime at a given location. To account for these dependencies, it is important that the parameterizations used in statistical models change as the regime changes. An automated tool, based on an artificial neural network model, has been developed to identify different weather regimes as they impact power output forecast accuracy at wind or solar farms. In this study, improvements in forecast accuracy were analyzed over varying time horizons for wind farms and utility-scale PV plants located in different geographical regions.

  18. Using minimum DNA marker loci for accurate population classification in rice (Oryza sativa L.)

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Using few DNA markers to classify genetic background of a germplasm pool will help breeders make a quick decision while saving time and resources. WHICHLOCI is a computer program that selects the best combination of loci for population assignment through empiric analysis of molecular marker data. Th...

  19. Application of fuzzy classification in modern primary dental care.

    PubMed

    Veryha, Yauheni; Adamczyk, Katarzyna

    2005-01-01

    This paper describes a framework for implementing fuzzy classifications in primary dental care services. Dental practices aim to provide the highest quality services for their patients. To achieve this, it is important that dentists are able to obtain patients' opinions about their experiences in the dental practice and are able to accurately evaluate this. We propose the use of fuzzy classification to combine various assessment criteria into one general measure to assess patients' satisfaction with primary dental care services. The proposed framework can be used in conventional dental practice information systems and easily integrated with those already used. The benefits of using the proposed fuzzy classification approach include more flexible and accurate analysis of patients' feedback, combining verbal and numeric data. To confirm our theory, a prototype was developed based on the Microsoft SQL Server database management system for two criteria used in dental practices, namely making an appointment with a dentist and waiting time for dental care services. PMID:15949172
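
    A toy sketch of the aggregation idea for the two criteria named in the record; the membership functions and equal weights are invented for illustration, not taken from the paper:

    ```python
    # Combine two fuzzy "satisfied" memberships into one overall measure.
    def satisfaction(appointment_days, waiting_minutes):
        # Linear membership functions, clipped to [0, 1].
        mu_appt = max(0.0, min(1.0, (14 - appointment_days) / 14))
        mu_wait = max(0.0, min(1.0, (60 - waiting_minutes) / 60))
        return 0.5 * mu_appt + 0.5 * mu_wait   # weighted aggregation

    print(satisfaction(appointment_days=3, waiting_minutes=20))  # ~0.73
    ```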

  20. Convolutional Neural Networks for patient-specific ECG classification.

    PubMed

    Kiranyaz, Serkan; Ince, Turker; Hamila, Ridha; Gabbouj, Moncef

    2015-08-01

    We propose a fast and accurate patient-specific electrocardiogram (ECG) classification and monitoring system using an adaptive implementation of 1D Convolutional Neural Networks (CNNs) that can fuse feature extraction and classification into a unified learner. In this way, a dedicated CNN will be trained for each patient by using relatively small common and patient-specific training data and thus it can also be used to classify long ECG records such as Holter registers in a fast and accurate manner. Alternatively, such a solution can conveniently be used for real-time ECG monitoring and early alert system on a light-weight wearable device. The experimental results demonstrate that the proposed system achieves a superior classification performance for the detection of ventricular ectopic beats (VEB) and supraventricular ectopic beats (SVEB). PMID:26736826
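
    A minimal PyTorch sketch of a 1-D CNN beat classifier in this spirit; the layer sizes, beat length, and five-class output are illustrative rather than the paper's architecture:

    ```python
    # 1-D CNN that fuses feature extraction (convolutions) and
    # classification (linear layer) into a single learner.
    import torch
    import torch.nn as nn

    class ECGCNN(nn.Module):
        def __init__(self, n_classes=5, beat_len=128):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(1, 8, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
                nn.Conv1d(8, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            )
            self.classifier = nn.Linear(16 * (beat_len // 16), n_classes)

        def forward(self, x):                   # x: (batch, 1, beat_len)
            return self.classifier(self.features(x).flatten(1))

    model = ECGCNN()
    beats = torch.randn(32, 1, 128)             # a batch of single-beat segments
    print(model(beats).shape)                   # torch.Size([32, 5]) class scores
    ```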

  1. Temporal variation of traffic on highways and the development of accurate temporal allocation factors for air pollution analyses

    NASA Astrophysics Data System (ADS)

    Batterman, Stuart; Cook, Richard; Justin, Thomas

    2015-04-01

    Traffic activity encompasses the number, mix, speed and acceleration of vehicles on roadways. The temporal pattern and variation of traffic activity reflects vehicle use, congestion and safety issues, and it represents a major influence on emissions and concentrations of traffic-related air pollutants. Accurate characterization of vehicle flows is critical in analyzing and modeling urban and local-scale pollutants, especially in near-road environments and traffic corridors. This study describes methods to improve the characterization of temporal variation of traffic activity. Annual, monthly, daily and hourly temporal allocation factors (TAFs), which describe the expected temporal variation in traffic activity, were developed using four years of hourly traffic activity data recorded at 14 continuous counting stations across the Detroit, Michigan, U.S. region. Five sites also provided vehicle classification. TAF-based models provide a simple means to apportion annual average estimates of traffic volume to hourly estimates. The analysis shows the need to separate TAFs for total and commercial vehicles, and weekdays, Saturdays, Sundays and observed holidays. Using either site-specific or urban-wide TAFs, nearly all of the variation in historical traffic activity at the street scale could be explained; unexplained variation was attributed to adverse weather, traffic accidents and construction. The methods and results presented in this paper can improve air quality dispersion modeling of mobile sources, and can be used to evaluate and model temporal variation in ambient air quality monitoring data and exposure estimates.
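
    A pandas sketch of the TAF computation and the apportionment step; column names are hypothetical, and only day-type and hour-of-day factors are shown:

    ```python
    # Hour-of-day temporal allocation factors by day type: the ratio of the
    # mean volume in each (day type, hour) cell to the overall mean volume.
    import pandas as pd

    def hourly_tafs(counts: pd.DataFrame) -> pd.Series:
        # counts: DatetimeIndex plus a 'volume' column of hourly vehicle counts
        df = counts.copy()
        df["daytype"] = df.index.dayofweek.map(
            lambda d: "weekday" if d < 5 else ("saturday" if d == 5 else "sunday"))
        df["hour"] = df.index.hour
        return df.groupby(["daytype", "hour"])["volume"].mean() / df["volume"].mean()

    # Apportionment: hourly estimate = annual-average hourly volume x TAF,
    # e.g. hourly_tafs(counts).loc[("weekday", 8)] * annual_avg_hourly_volume
    ```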

  3. Vertebral fracture classification

    NASA Astrophysics Data System (ADS)

    de Bruijne, Marleen; Pettersen, Paola C.; Tankó, László B.; Nielsen, Mads

    2007-03-01

    A novel method for classification and quantification of vertebral fractures from X-ray images is presented. Using pairwise conditional shape models trained on a set of healthy spines, the most likely unfractured shape is estimated for each of the vertebrae in the image. The difference between the true shape and the reconstructed normal shape is an indicator of shape abnormality. A statistical classification scheme with the two shapes as features is applied to detect, classify, and grade various types of deformities. In contrast with current (semi-)quantitative grading strategies, this method takes the full shape into account, uses a patient-specific reference that combines population-based information on biological variation in vertebra shape and vertebra interrelations, and provides a continuous measure of deformity. Good agreement with manual classification and grading is demonstrated on 204 lateral spine radiographs with 89 fractures in total.

  4. Commission 45: Spectral Classification

    NASA Astrophysics Data System (ADS)

    Giridhar, Sunetra; Gray, Richard O.; Corbally, Christopher J.; Bailer-Jones, Coryn A. L.; Eyer, Laurent; Irwin, Michael J.; Kirkpatrick, J. Davy; Majewski, Steven; Minniti, Dante; Nordström, Birgitta

    This report gives an update of developments (since the last General Assembly at Prague) in the areas that are of relevance to the commission. In addition to numerous papers, a new monograph entitled Stellar Spectral Classification with Richard Gray and Chris Corbally as leading authors will be published by Princeton University Press as part of their Princeton Series in Astrophysics in April 2009. This book is an up-to-date and encyclopedic review of stellar spectral classification across the H-R diagram, including the traditional MK system in the blue-violet, recent extensions into the ultraviolet and infrared, the newly defined L-type and T-type spectral classes, as well as spectral classification of carbon stars, S-type stars, white dwarfs, novae, supernovae and Wolf-Rayet stars.

  5. Accurate estimation of σ⁰ using AIRSAR data

    NASA Technical Reports Server (NTRS)

    Holecz, Francesco; Rignot, Eric

    1995-01-01

    During recent years signature analysis, classification, and modeling of Synthetic Aperture Radar (SAR) data as well as estimation of geophysical parameters from SAR data have received a great deal of interest. An important requirement for the quantitative use of SAR data is the accurate estimation of the backscattering coefficient σ⁰. In terrain with relief variations, radar signals are distorted due to the projection of the scene topography into the slant range-Doppler plane. The effect of these variations is to change the physical size of the scattering area, leading to errors in the radar backscatter values and incidence angle. For this reason the local incidence angle, derived from sensor position and Digital Elevation Model (DEM) data, must always be considered. Especially in the airborne case, the antenna gain pattern can be an additional source of radiometric error, because the radar look angle is not known precisely as a result of aircraft motions and the local surface topography. Consequently, radiometric distortions due to the antenna gain pattern must also be corrected for each resolution cell, by taking into account aircraft displacements (position and attitude) and the position of the backscatter element, defined by the DEM data. In this paper, a method to derive an accurate estimate of the backscattering coefficient using NASA/JPL AIRSAR data is presented. The results are evaluated in terms of geometric accuracy, radiometric variations of σ⁰, and precision of the estimated forest biomass.

  6. Classification images with uncertainty

    PubMed Central

    Tjan, Bosco S.; Nandy, Anirvan S.

    2009-01-01

    Classification image and other similar noise-driven linear methods have found increasingly wider applications in revealing psychophysical receptive field structures or perceptual templates. These techniques are relatively easy to deploy, and the results are simple to interpret. However, being a linear technique, the utility of the classification-image method is believed to be limited. Uncertainty about the target stimuli on the part of an observer will result in a classification image that is the superposition of all possible templates for all the possible signals. In the context of a well-established uncertainty model, which pools the outputs of a large set of linear frontends with a max operator, we show analytically, in simulations, and with human experiments that the effect of intrinsic uncertainty can be limited or even eliminated by presenting a signal at a relatively high contrast in a classification-image experiment. We further argue that the subimages from different stimulus-response categories should not be combined, as is conventionally done. We show that when the signal contrast is high, the subimages from the error trials contain a clear high-contrast image that is negatively correlated with the perceptual template associated with the presented signal, relatively unaffected by uncertainty. The subimages also contain a “haze” that is of a much lower contrast and is positively correlated with the superposition of all the templates associated with the erroneous response. In the case of spatial uncertainty, we show that the spatial extent of the uncertainty can be estimated from the classification subimages. We link intrinsic uncertainty to invariance and suggest that this signal-clamped classification-image method will find general applications in uncovering the underlying representations of high-level neural and psychophysical mechanisms. PMID:16889477

  7. Land use/cover classification in the Brazilian Amazon using satellite images

    PubMed Central

    Lu, Dengsheng; Batistella, Mateus; Li, Guiying; Moran, Emilio; Hetrick, Scott; Freitas, Corina da Costa; Dutra, Luciano Vieira; Sant’Anna, Sidnei João Siqueira

    2013-01-01

    Land use/cover classification is one of the most important applications in remote sensing. However, mapping accurate land use/cover spatial distribution is a challenge, particularly in moist tropical regions, due to the complex biophysical environment and limitations of remote sensing data per se. This paper reviews experiments related to land use/cover classification in the Brazilian Amazon over a decade. Through comprehensive analysis of the classification results, it is concluded that spatial information inherent in remote sensing data plays an essential role in improving land use/cover classification. Incorporation of suitable textural images into multispectral bands and use of segmentation-based methods are valuable ways to improve land use/cover classification, especially for high spatial resolution images. Data fusion of multi-resolution images within optical sensor data is vital for visual interpretation, but may not improve classification performance. In contrast, integration of optical and radar data did improve classification performance when the proper data fusion method was used. Of the classification algorithms available, the maximum likelihood classifier is still an important method for providing reasonably good accuracy, but nonparametric algorithms, such as classification tree analysis, have the potential to provide better results. However, they often require more time for parametric optimization. Proper use of hierarchical-based methods is fundamental for developing accurate land use/cover classification, mainly from historical remotely sensed data. PMID:24353353

  8. Ensemble polarimetric SAR image classification based on contextual sparse representation

    NASA Astrophysics Data System (ADS)

    Zhang, Lamei; Wang, Xiao; Zou, Bin; Qiao, Zhijun

    2016-05-01

    Polarimetric SAR image interpretation has become one of the most interesting topics, in which the construction of reasonable and effective image classification techniques is of key importance. Sparse representation represents the data using the most succinct sparse atoms of an over-complete dictionary, and its advantages have also been confirmed in the field of PolSAR classification. However, like any single classifier, sparse representation has shortcomings in several respects. Ensemble learning is therefore introduced to address this issue: a set of different learners is trained, and their outputs are combined so that the integrated result is more accurate than that of any individual learner. Accordingly, this paper presents a polarimetric SAR image classification method based on ensemble learning over sparse representations to achieve optimal classification.
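
    A sketch of the sparse-representation base learner being ensembled (a standard SRC decision rule, not necessarily the paper's exact formulation): code the test sample over a dictionary of labeled training samples and assign the class whose atoms leave the smallest residual.

    ```python
    # Sparse-representation classification by class-wise residuals.
    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    def src_classify(x, D, atom_labels, n_nonzero=5):
        # x: (d,) test feature; D: (d, n_atoms), columns are training samples
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                        fit_intercept=False).fit(D, x)
        w = omp.coef_
        residuals = {c: np.linalg.norm(x - D @ np.where(atom_labels == c, w, 0.0))
                     for c in np.unique(atom_labels)}
        return min(residuals, key=residuals.get)   # smallest reconstruction error
    ```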

  9. Improvement of the classification system for wheelchair rugby: athlete priorities.

    PubMed

    Altmann, Viola C; Hart, Anne L; van Limbeek, Jacques; Vanlandewijck, Yves C

    2014-10-01

    A representative sample (N=302) of the wheelchair rugby population responded to a survey about the classification system, built from items prioritized by International Wheelchair Rugby Federation members. Respondents stated, "The classification system is accurate but needs adjustments" (56%), "Any athlete with tetraequivalent impairment should be allowed to compete" (72%), "Athletes with cerebral palsy and other coordination impairments should be classified with a system different than the current one" (75%), and "The maximal value for trunk should be increased from 1.0 to 1.5" (67%). A minority stated, "Wheelchair rugby should only be open to spinal cord injury and other neurological conditions" (36%) and "There should be a 4.0 class" (33%). Results strongly indicated that athletes and stakeholders want adjustments to the classification system in two areas: a focus on evaluation of athletes with impairments other than loss of muscle power caused by spinal cord injury, and changes in the classification of trunk impairment. PMID:25211483

  10. Trichromatic opponent color classification.

    PubMed

    Chichilnisky, E J; Wandell, B A

    1999-10-01

    Stimuli varying in intensity and chromaticity, presented on numerous backgrounds, were classified into red/green, blue/yellow and white/black opponent color categories. These measurements revealed the shapes of the boundaries that separate opponent colors in three-dimensional color space. Opponent color classification boundaries were generally not planar, but their shapes could be summarized by a piecewise linear model in which increment and decrement color signals are combined with different weights at two stages to produce opponent color sensations. The effect of background light on classification was largely explained by separate gain changes in increment and decrement cone signals. PMID:10615508

  11. Classification of syringomyelia.

    PubMed

    Milhorat, T H

    2000-01-01

    Syringomyelia poses special challenges for the clinician because of its complex symptomatology, uncertain pathogenesis, and multiple options of treatment. The purpose of this study was to classify intramedullary cavities according to their most salient pathological and clinical features. Pathological findings obtained in 175 individuals with tubular cavitations of the spinal cord were correlated with clinical and magnetic resonance (MR) imaging findings in a database of 927 patients. A classification system was developed in which the morbid anatomy, cause, and pathogenesis of these lesions are emphasized. The use of a disease-based classification of syringomyelia facilitates diagnosis and the interpretation of MR imaging findings and provides a guide to treatment. PMID:16676921

  12. Pattern classification and reconstruction for hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Li, Wei

    In this dissertation, novel techniques for hyperspectral classification and signal reconstruction from random projections are presented. A classification paradigm designed to exploit the rich statistical structure of hyperspectral data is proposed. The proposed framework employs the local Fisher's discriminant analysis to reduce the dimensionality of the data while preserving its multimodal structure, followed by a subsequent Gaussian-mixture-model or support-vector-machine classifier. An extension of this framework in a kernel induced space is also studied. This classification approach employs a maximum likelihood classifier and dimensionality reduction based on a kernel local Fisher's discriminant analysis. The technique imposes an additional constraint on the kernel mapping---it ensures that neighboring points in the input space stay close-by in the projected subspace. In a typical remote sensing flow, the sender needs to invoke an appropriate compression strategy for downlinking signals (e.g., imagery to a base station). Signal acquisition using random projections significantly decreases the sender-side computational cost, while preserving useful information. In this dissertation, a novel class-dependent hyperspectral image reconstruction strategy is also proposed. The proposed method employs statistics pertinent to each class as opposed to the average statistics estimated over the entire dataset, resulting in a more accurate reconstruction from random projections. An integrated spectral-spatial model for signal reconstruction from random projections is also developed. In this approach, spatially homogeneous segments are combined with spectral pixel-wise classification results in the projected subspace. An appropriate reconstruction strategy, such as compressive projection principal component analysis (CPPCA), is employed individually in each category based on this integrated map. The proposed method provides better reconstruction performance as compared to

  13. Mill profiler machines soft materials accurately

    NASA Technical Reports Server (NTRS)

    Rauschl, J. A.

    1966-01-01

    Mill profiler machines bevels, slots, and grooves in soft materials, such as styrofoam phenolic-filled cores, to any desired thickness. A single operator can accurately control cutting depths in contour or straight line work.

  14. Remote balance weighs accurately amid high radiation

    NASA Technical Reports Server (NTRS)

    Eggenberger, D. N.; Shuck, A. B.

    1969-01-01

    Commercial beam-type balance, modified and outfitted with electronic controls and digital readout, can be remotely controlled for use in high radiation environments. This allows accurate weighing of breeder-reactor fuel pieces when they are radioactively hot.

  15. Understanding the Code: keeping accurate records.

    PubMed

    Griffith, Richard

    2015-10-01

    In his continuing series looking at the legal and professional implications of the Nursing and Midwifery Council's revised Code of Conduct, Richard Griffith discusses the elements of accurate record keeping under Standard 10 of the Code. This article considers the importance of accurate record keeping for the safety of patients and protection of district nurses. The legal implications of records are explained along with how district nurses should write records to ensure these legal requirements are met. PMID:26418404

  16. Soil Classification and Treatment.

    ERIC Educational Resources Information Center

    Clemson Univ., SC. Vocational Education Media Center.

    This instructional unit was designed to enable students, primarily at the secondary level, to (1) classify soils according to current capability classifications of the Soil Conservation Service, (2) select treatments needed for a given soil class according to current recommendations provided by the Soil Conservation Service, and (3) interpret a…

  17. The Biglan Classification Revisited.

    ERIC Educational Resources Information Center

    Stoecker, Judith L.

    This study replicated previous research that tested the validity of A. Biglan's classification scheme, a theoretical framework for empirically examining the differences among academic disciplines and classifying them according to three dimensions (hard-soft, pure-applied, life-nonlife). In addition, new data were used to attempt continued…

  18. The Classification Conundrum.

    ERIC Educational Resources Information Center

    Granger, Charles R.

    1983-01-01

    Argues against the five-kingdom scheme of classification as using inconsistent criteria, ending up with divisions that are forced, not natural. Advocates an approach using cell type/complexity and modification of the metabolic machinery, recommending the five-kingdom scheme as starting point for class discussion on taxonomy and its conceptual…

  19. Homographs: Classification and Identification.

    ERIC Educational Resources Information Center

    Pacak, M.; Henisz, Bozena

    1968-01-01

    Homographs are defined in this study as sets of word forms which are spelled alike but which have entirely or partially different meanings and which may have different syntactic functions (that is, they belong to more than one form class or to more than one subclass of a form class). This report deals with the classification and identification of…

  20. Equivalent Diagnostic Classification Models

    ERIC Educational Resources Information Center

    Maris, Gunter; Bechger, Timo

    2009-01-01

    Rupp and Templin (2008) do a good job of describing the ever-expanding landscape of Diagnostic Classification Models (DCM). In many ways, their review article clearly points to some of the questions that need to be answered before DCMs can become part of the psychometric practitioner's toolkit. Apart from the issues mentioned in this article that…

  1. Improving Student Question Classification

    ERIC Educational Resources Information Center

    Heiner, Cecily; Zachary, Joseph L.

    2009-01-01

    Students in introductory programming classes often articulate their questions and information needs incompletely. Consequently, the automatic classification of student questions to provide automated tutorial responses is a challenging problem. This paper analyzes 411 questions from an introductory Java programming course by reducing the natural…

  2. Revisiting Classification and Identification

    ERIC Educational Resources Information Center

    Keogh, Barbara K.

    2005-01-01

    This article discusses issues of classification and identification, while evaluating some of the continuing controversies about learning disabilities (LD) related to those topics. The author suggests that many problems have to do with confusion between the two. Despite years of effort and an extraordinary increase in the number of individuals…

  3. Shark Teeth Classification

    ERIC Educational Resources Information Center

    Brown, Tom; Creel, Sally; Lee, Velda

    2009-01-01

    On a recent autumn afternoon at Harmony Leland Elementary in Mableton, Georgia, students in a fifth-grade science class investigated the essential process of classification--the act of putting things into groups according to some common characteristics or attributes. While they may have honed these skills earlier in the week by grouping their own…

  4. Refining Time-Activity Classification of Human Subjects Using the Global Positioning System

    PubMed Central

    Hu, Maogui; Li, Wei; Li, Lianfa; Houston, Douglas; Wu, Jun

    2016-01-01

    Background Detailed spatial location information is important in accurately estimating personal exposure to air pollution. The Global Positioning System (GPS) has been widely used to track personal paths and activities. Previous researchers have developed time-activity classification models based on GPS data, most of which were developed for specific regions. An adaptive model for time-location classification can be widely applied to air pollution studies that use GPS to track individual-level time-activity patterns. Methods Time-activity data were collected for seven days using GPS loggers and accelerometers from thirteen adult participants from Southern California under free-living conditions. We developed an automated model based on random forests to classify major time-activity patterns (i.e. indoor, outdoor-static, outdoor-walking, and in-vehicle travel). Sensitivity analysis was conducted to examine the contribution of the accelerometer data and the supplemental spatial data (i.e. roadway and tax parcel data) to the accuracy of time-activity classification. Our model was evaluated using both leave-one-fold-out and leave-one-subject-out methods. Results Maximum speeds in averaging time intervals of 7 and 5 minutes, and distance to primary highways with limited access, were found to be the three most important variables in the classification model. Leave-one-fold-out cross-validation showed an overall accuracy of 99.71%. Sensitivities varied from 84.62% (outdoor walking) to 99.90% (indoor). Specificities varied from 96.33% (indoor) to 99.98% (outdoor static). The exclusion of accelerometer and ambient light sensor variables caused a slight loss in sensitivity for outdoor walking, but little loss in overall accuracy. However, leave-one-subject-out cross-validation showed considerable loss in sensitivity for outdoor static and outdoor walking conditions. Conclusions The random forests classification model can achieve high accuracy for the four major time
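
    A compact sketch of such a classifier with scikit-learn; the three features mirror the variables the study reports as most important, but the values and labels below are fabricated for illustration:

    ```python
    # Random-forest time-activity classification from GPS-derived features.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical features per epoch: [max speed in 7-min window (m/s),
    # max speed in 5-min window (m/s), distance to limited-access highway (m)]
    X = np.array([[0.3, 0.2, 900], [1.6, 1.4, 850], [25.0, 24.0, 40], [0.4, 0.3, 880]])
    y = np.array(["indoor", "outdoor_walking", "in_vehicle", "indoor"])

    rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    print(rf.predict([[22.0, 21.5, 35.0]]))   # -> ['in_vehicle']
    print(rf.feature_importances_)            # relative variable importance
    ```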

  5. [Classification of primary bone tumors].

    PubMed

    Dominok, G W; Frege, J

    1986-01-01

    An expanded classification for bone tumors is presented based on the well known international classification as well as earlier systems. The current status and future trends in this area are discussed. PMID:3461626

  6. ECOLOGICAL CLASSIFICATION ISSUES AND RESEARCH

    EPA Science Inventory

    Purposes and need for ecological classification will be discussed. Some brief examples will highlight regional and watershed approaches, general issues of extrapolation and classification helpful to monitoring/assessment design for larger aquatic systems, and some specific concep...

  7. Efficient Fingercode Classification

    NASA Astrophysics Data System (ADS)

    Sun, Hong-Wei; Law, Kwok-Yan; Gollmann, Dieter; Chung, Siu-Leung; Li, Jian-Bin; Sun, Jia-Guang

    In this paper, we present an efficient fingerprint classification algorithm, an essential component in many critical security application systems, e.g., systems in the e-government and e-finance domains. Fingerprint identification is one of the most important security requirements in homeland security systems such as personnel screening and anti-money laundering. The problem of fingerprint identification involves searching (matching) the fingerprint of a person against each of the fingerprints of all registered persons. To enhance performance and reliability, a common approach is to reduce the search space by first classifying the fingerprints and then performing the search in the respective class. Jain et al. proposed a fingerprint classification algorithm based on a two-stage classifier, which uses a K-nearest neighbor classifier in its first stage. The fingerprint classification algorithm is based on the fingercode representation, an encoding of fingerprints that has been demonstrated to be an effective fingerprint biometric scheme because of its ability to capture both local and global details in a fingerprint image. We enhance this approach by improving the efficiency of the K-nearest neighbor classifier for fingercode-based fingerprint classification. Our research first investigates various fast search algorithms in vector quantization (VQ) and their potential application to fingerprint classification, and then proposes two efficient algorithms based on the pyramid-based search algorithms in VQ. Experimental results on DB1 of FVC 2004 demonstrate that our algorithms can outperform the full search algorithm and the original pyramid-based search algorithms in terms of computational efficiency without sacrificing accuracy.
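
    A sketch of the first-stage K-nearest-neighbor step over fingercode vectors (brute-force search here; the paper's contribution is replacing this search with faster pyramid-based VQ algorithms). The dimensionality, class count, and data are illustrative.

    ```python
    # Stage-one fingercode classification with a K-NN classifier.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    fingercodes = rng.random((500, 192))       # 192-D fingercode vectors
    classes = rng.integers(0, 5, 500)          # e.g. five fingerprint classes

    knn = KNeighborsClassifier(n_neighbors=5).fit(fingercodes, classes)
    query = rng.random((1, 192))
    print(knn.predict(query))                  # predicted class for the query
    ```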

  8. Free classification of American English dialects by native and non-native listeners

    PubMed Central

    Clopper, Cynthia G.; Bradlow, Ann R.

    2009-01-01

    Most second language acquisition research focuses on linguistic structures, and less research has examined the acquisition of sociolinguistic patterns. The current study explored the perceptual classification of regional dialects of American English by native and non-native listeners using a free classification task. Results revealed similar classification strategies for the native and non-native listeners. However, the native listeners were more accurate overall than the non-native listeners. In addition, the non-native listeners were less able to make use of constellations of cues to accurately classify the talkers by dialect. However, the non-native listeners were able to attend to cues that were either phonologically or sociolinguistically relevant in their native language. These results suggest that non-native listeners can use information in the speech signal to classify talkers by regional dialect, but that their lack of signal-independent cultural knowledge about variation in the second language leads to less accurate classification performance. PMID:20161400

  9. Comparison of Cramer classification between Toxtree, the OECD QSAR Toolbox and expert judgment.

    PubMed

    Bhatia, Sneha; Schultz, Terry; Roberts, David; Shen, Jie; Kromidas, Lambros; Marie Api, Anne

    2015-02-01

    The Threshold of Toxicological Concern (TTC) is a pragmatic approach in risk assessment. In the absence of data, it establishes levels of human exposure that are considered to pose no appreciable risk to human health. The Cramer decision tree is used extensively to determine these exposure thresholds by categorizing non-carcinogenic chemicals into three structural classes. Assigning an accurate Cramer class to a material is therefore a crucial step in preserving the integrity of the risk assessment. In this study the Cramer class of over 1000 fragrance materials across diverse chemical classes was determined using Toxtree (TT), the OECD QSAR Toolbox (TB), and expert judgment. Discordance was observed between TT and the TB: a total of 165 materials (16%) showed different results from the two programs. The overall concordance for Cramer classification between TT and expert judgment is 83%, while the concordance between the TB and expert judgment is 77%. Amines, lactones and heterocycles have the lowest percent agreement with expert judgment for TT and the TB. For amines, the expert judgment agreement is 45% for TT and 55% for the TB. For heterocycles, the expert judgment agreement is 55% for TT and the TB. For lactones, the expert judgment agreement is 56% for TT and 50% for the TB. Additional analyses were conducted to determine the concordance within various chemical classes. Critical checkpoints in the decision tree are identified. Strategies and guidance on determining the Cramer class for various chemical classes are discussed. PMID:25460032

  10. Anatomical Brain Images Alone Can Accurately Diagnose Chronic Neuropsychiatric Illnesses

    PubMed Central

    Bansal, Ravi; Staib, Lawrence H.; Laine, Andrew F.; Hao, Xuejun; Xu, Dongrong; Liu, Jun; Weissman, Myrna; Peterson, Bradley S.

    2012-01-01

    Objective Diagnoses using imaging-based measures alone offer the hope of improving the accuracy of clinical diagnosis, thereby reducing the costs associated with incorrect treatments. Previous attempts to use brain imaging for diagnosis, however, have had only limited success in diagnosing patients who are independent of the samples used to derive the diagnostic algorithms. We aimed to develop a classification algorithm that can accurately diagnose chronic, well-characterized neuropsychiatric illness in single individuals, given the availability of sufficiently precise delineations of brain regions across several neural systems in anatomical MR images of the brain. Methods We have developed an automated method to diagnose individuals as having one of various neuropsychiatric illnesses using only anatomical MRI scans. The method employs a semi-supervised learning algorithm that discovers natural groupings of brains based on the spatial patterns of variation in the morphology of the cerebral cortex and other brain regions. We used split-half and leave-one-out cross-validation analyses in large MRI datasets to assess the reproducibility and diagnostic accuracy of those groupings. Results In MRI datasets from persons with Attention-Deficit/Hyperactivity Disorder, Schizophrenia, Tourette Syndrome, Bipolar Disorder, or persons at high or low familial risk for Major Depressive Disorder, our method discriminated with high specificity and nearly perfect sensitivity the brains of persons who had one specific neuropsychiatric disorder from the brains of healthy participants and the brains of persons who had a different neuropsychiatric disorder. Conclusions Although the classification algorithm presupposes the availability of precisely delineated brain regions, our findings suggest that patterns of morphological variation across brain surfaces, extracted from MRI scans alone, can successfully diagnose the presence of chronic neuropsychiatric disorders. Extensions of these
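
    The groupings above are validated with leave-one-out cross-validation. A minimal sketch of that protocol, using a generic scikit-learn classifier and stand-in features (the study's semi-supervised, morphology-based method is not reproduced here):

      import numpy as np
      from sklearn.model_selection import LeaveOneOut
      from sklearn.svm import SVC

      # Hypothetical stand-in data: one row of morphological measures per
      # subject; 0 = healthy control, 1 = patient.
      rng = np.random.default_rng(42)
      X = rng.normal(size=(60, 20))
      y = rng.integers(0, 2, size=60)

      # Leave-one-out: train on all subjects but one, test on the held-out
      # subject, and repeat for every subject.
      correct = 0
      for train_idx, test_idx in LeaveOneOut().split(X):
          clf = SVC(kernel="linear").fit(X[train_idx], y[train_idx])
          correct += int(clf.predict(X[test_idx])[0] == y[test_idx][0])
      print(f"LOOCV accuracy: {correct / len(y):.2f}")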

  11. Scalable metagenomic taxonomy classification using a reference genome database

    PubMed Central

    Ames, Sasha K.; Hysom, David A.; Gardner, Shea N.; Lloyd, G. Scott; Gokhale, Maya B.; Allen, Jonathan E.

    2013-01-01

    Motivation: Deep metagenomic sequencing of biological samples has the potential to recover otherwise difficult-to-detect microorganisms and accurately characterize biological samples with limited prior knowledge of sample contents. Existing metagenomic taxonomic classification algorithms, however, do not scale well to analyze large metagenomic datasets, and balancing classification accuracy with computational efficiency presents a fundamental challenge. Results: A method is presented to shift computational costs to an off-line computation by creating a taxonomy/genome index that supports scalable metagenomic classification. Scalable performance is demonstrated on real and simulated data to show accurate classification in the presence of novel organisms on samples that include viruses, prokaryotes, fungi and protists. Taxonomic classification of the previously published 150 giga-base Tyrolean Iceman dataset was found to take <20 h on a single node 40 core large memory machine and provide new insights on the metagenomic contents of the sample. Availability: Software was implemented in C++ and is freely available at http://sourceforge.net/projects/lmat Contact: allen99@llnl.gov Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23828782
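
    The off-line index idea described above can be shown with a toy k-mer index: the k-mer-to-taxon map is built once from reference genomes, after which each read is classified by lookup and voting. This sketches the general approach only, not the LMAT implementation; the sequences and the k value are hypothetical.

      from collections import Counter

      K = 8  # toy k-mer length; real classifiers use larger k and genomes

      def kmers(seq, k=K):
          return (seq[i:i + k] for i in range(len(seq) - k + 1))

      def build_index(genomes):
          """Off-line step: map every k-mer to the taxa containing it."""
          index = {}
          for taxon, seq in genomes.items():
              for km in kmers(seq):
                  index.setdefault(km, set()).add(taxon)
          return index

      def classify_read(read, index):
          """On-line step: vote with the taxa of every indexed k-mer."""
          votes = Counter()
          for km in kmers(read):
              for taxon in index.get(km, ()):
                  votes[taxon] += 1
          return votes.most_common(1)[0][0] if votes else "unclassified"

      genomes = {"taxonA": "ACGTACGTGGCCTTAA", "taxonB": "TTGGCCAATTCCGGAA"}
      index = build_index(genomes)
      print(classify_read("ACGTACGTGGCC", index))  # -> taxonA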

  12. Rockfall exposures in Montserrat mountain

    NASA Astrophysics Data System (ADS)

    Fontquerni Gorchs, Sara; Vilaplana Fernández, Joan Manuel; Guinau Sellés, Marta; Jesús Royán Cordero, Manuel

    2015-04-01

    This study presents a methodology for analyzing the rockfall exposure level at a 1:25,000 scale, and the results obtained by applying it to a substantial part of the Montaña de Montserrat Natural Park for vehicles, both with and without their occupants. The proposal is part of an ongoing study that examines rockfall risk exposure in greater depth, at different scales and in different natural and social contexts. The methodology evaluates the rockfall exposure level as the product of the frequency of occurrence of the event and an exposure function of the vulnerable element; it is designed for the 1:25,000 scale, although the working scale of this study was 1:10,000. The calculation of the exposure level proceeds in six phases: 1- identification, classification and inventory of every element potentially at risk; 2- zoning of the frequency of occurrence of the event in the study area; 3- design of an exposure function for each studied element; 4- computation of the exposure index, defined as the product of the frequency of occurrence and the exposure function of the vulnerable element, through GIS analysis with ArcGIS software (ESRI); 5- derivation of the exposure level by grouping the numerical values of the exposure index into categories; 6- production of the exposure zoning map. The types of vulnerable elements considered across the full study are: vehicles in motion, people in vehicles in motion, people on paths, permanent elements, and people in buildings. Each type groups all elements with the same characteristics, and an exposure function has been designed for each. For the exposure calculation, two groups of elements were considered: first the elements with no people involved, and then the same elements with people involved. This is a first comprehensive and synthetic work about rockfall exposure on the Montserrat
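
    Phases 4 and 5 of the method reduce to a product and a binning step. A minimal sketch under assumed inputs (the vehicle exposure function and the category thresholds below are hypothetical, not those designed in the study):

      def exposure_index(frequency, exposure_fn, element):
          """Phase 4: frequency of occurrence x exposure function value."""
          return frequency * exposure_fn(element)

      def exposure_level(index, thresholds=(0.1, 0.5, 1.0)):
          """Phase 5: group the numerical index into categories."""
          for level, threshold in enumerate(thresholds, start=1):
              if index <= threshold:
                  return level
          return len(thresholds) + 1

      # Hypothetical exposure function for vehicles in motion, proportional
      # to daily traffic and the fraction of road in the rockfall-prone zone.
      vehicle_fn = lambda e: e["daily_traffic"] / 1000 * e["exposed_fraction"]

      element = {"daily_traffic": 400, "exposed_fraction": 0.3}
      idx = exposure_index(2.5, vehicle_fn, element)
      print(idx, exposure_level(idx))  # 0.3 -> level 2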

  13. Fast Image Texture Classification Using Decision Trees

    NASA Technical Reports Server (NTRS)

    Thompson, David R.

    2011-01-01

    Texture analysis would permit improved autonomous, onboard science data interpretation for adaptive navigation, sampling, and downlink decisions. These analyses would assist with terrain analysis and instrument placement in both macroscopic and microscopic image data products. Unfortunately, most state-of-the-art texture analysis demands computationally expensive convolutions of filters involving many floating-point operations. This makes them infeasible for radiation-hardened computers and spaceflight hardware. A new method approximates traditional texture classification of each image pixel with a fast decision-tree classifier. The classifier uses image features derived from simple filtering operations involving integer arithmetic. The texture analysis method is therefore amenable to implementation on FPGA (field-programmable gate array) hardware. Image features based on the "integral image" transform produce descriptive and efficient texture descriptors. Training the decision tree on a set of training data yields a classification scheme that produces reasonable approximations of optimal "texton" analysis at a fraction of the computational cost. A decision-tree learning algorithm employing the traditional k-means criterion of inter-cluster variance is used to learn tree structure from training data. The result is an efficient and accurate summary of surface morphology in images. This work is an evolutionary advance that unites several previous algorithms (k-means clustering, integral images, decision trees) and applies them to a new problem domain (morphology analysis for autonomous science during remote exploration). Advantages include order-of-magnitude improvements in runtime, feasibility for FPGA hardware, and significant improvements in texture classification accuracy.
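
    A sketch of the integral-image feature idea underlying the method: after one cumulative-sum pass, the sum over any box costs four lookups, and such box means can feed a decision-tree classifier. Floating-point NumPy and scikit-learn are used here for brevity, unlike the integer-arithmetic flight implementation; the window sizes and labels are toy choices.

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier

      def integral_image(img):
          """Cumulative sums; any box sum then costs four lookups."""
          return img.cumsum(axis=0).cumsum(axis=1)

      def box_sum(ii, r0, c0, r1, c1):
          """Sum of img[r0:r1, c0:c1] in O(1) via the integral image."""
          total = ii[r1 - 1, c1 - 1]
          if r0 > 0:
              total -= ii[r0 - 1, c1 - 1]
          if c0 > 0:
              total -= ii[r1 - 1, c0 - 1]
          if r0 > 0 and c0 > 0:
              total += ii[r0 - 1, c0 - 1]
          return total

      # Toy training set: mean intensity of two window sizes around each
      # pixel as features, random labels standing in for texture classes.
      rng = np.random.default_rng(1)
      img = rng.random((64, 64))
      ii = integral_image(img)
      feats = []
      labels = rng.integers(0, 3, size=60 * 60)
      for r in range(2, 62):
          for c in range(2, 62):
              f1 = box_sum(ii, r - 1, c - 1, r + 2, c + 2) / 9.0
              f2 = box_sum(ii, r - 2, c - 2, r + 3, c + 3) / 25.0
              feats.append([f1, f2])
      tree = DecisionTreeClassifier(max_depth=5).fit(feats, labels)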

  14. Automated Defect Classification (ADC)

    1998-01-01

    The ADC Software System is designed to provide semiconductor defect feature analysis and defect classification capabilities. Defect classification is an important software method used by semiconductor wafer manufacturers to automate the analysis of defect data collected by a wide range of microscopy techniques in semiconductor wafer manufacturing today. These microscopies (e.g., optical bright and dark field, scanning electron microscopy, atomic force microscopy, etc.) generate images of anomalies that are induced or otherwise appear on wafer surfaces as a result of errant manufacturing processes or simple atmospheric contamination (e.g., airborne particles). This software provides methods for analyzing these images, extracting statistical features from the anomalous regions, and applying supervised classifiers to label the anomalies into user-defined categories.

  15. Tree Classification Software

    NASA Technical Reports Server (NTRS)

    Buntine, Wray

    1993-01-01

    This paper introduces the IND Tree Package to prospective users. IND does supervised learning using classification trees. This learning task is a basic tool used in the development of diagnosis, monitoring, and expert systems. The IND Tree Package was developed as part of a NASA project to semi-automate the development of data analysis and modelling algorithms using artificial intelligence techniques. The IND Tree Package integrates features from CART and C4 with newer Bayesian and minimum-encoding methods for growing classification trees and graphs, and provides an experimental control suite on top of these methods. The newer features give the improved probability estimates often required in diagnostic and screening tasks. The package comes with a manual, Unix 'man' entries, and a guide to tree methods and research. The IND Tree Package is implemented in C under Unix and was beta-tested at university and commercial research laboratories in the United States.

  16. A highly accurate interatomic potential for argon

    NASA Astrophysics Data System (ADS)

    Aziz, Ronald A.

    1993-09-01

    A modified potential based on the individually damped model of Douketis, Scoles, Marchetti, Zen, and Thakkar [J. Chem. Phys. 76, 3057 (1982)] is presented which fits, within experimental error, the accurate ultraviolet (UV) vibration-rotation spectrum of argon determined by UV laser absorption spectroscopy by Herman, LaRocque, and Stoicheff [J. Chem. Phys. 89, 4535 (1988)]. Other literature potentials fail to do so. The potential also is shown to predict a large number of other properties and is probably the most accurate characterization of the argon interaction constructed to date.

  17. Granular loess classification based

    SciTech Connect

    Browzin, B.S.

    1985-05-01

    This paper discusses how loess might be identified by two index properties: the granulometric composition and the dry unit weight. These two indices are necessary but not always sufficient for identification of loess. On the basis of analyses of samples from three continents, it was concluded that the 0.01-0.5-mm fraction deserves the name loessial fraction. Based on the loessial fraction concept, a granulometric classification of loess is proposed. A triangular chart is used to classify loess.

  18. Spatial Mutual Information Based Hyperspectral Band Selection for Classification

    PubMed Central

    2015-01-01

    The amount of information involved in hyperspectral imaging is large. Hyperspectral band selection is a popular method for reducing dimensionality. Several information based measures such as mutual information have been proposed to reduce information redundancy among spectral bands. Unfortunately, mutual information does not take into account the spatial dependency between adjacent pixels in images thus reducing its robustness as a similarity measure. In this paper, we propose a new band selection method based on spatial mutual information. As validation criteria, a supervised classification method using support vector machine (SVM) is used. Experimental results of the classification of hyperspectral datasets show that the proposed method can achieve more accurate results. PMID:25918742
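
    A rough sketch of information-based band ranking: score each band by its mutual information with the class map and keep the top scorers. This greedy, purely spectral version omits the paper's spatial term and any redundancy handling; the data cube and binning are hypothetical.

      import numpy as np
      from sklearn.metrics import mutual_info_score

      def select_bands(cube, labels, n_bands=5):
          """Rank bands by MI between discretized band values and labels."""
          y = labels.ravel()
          scores = []
          for i in range(cube.shape[2]):
              band = cube[:, :, i].ravel()
              # discretize reflectance into quantile bins before the MI
              edges = np.quantile(band, np.linspace(0, 1, 16))
              scores.append((mutual_info_score(y, np.digitize(band, edges)), i))
          return [i for _, i in sorted(scores, reverse=True)[:n_bands]]

      rng = np.random.default_rng(7)
      cube = rng.random((32, 32, 40))             # hypothetical 40-band image
      labels = rng.integers(0, 4, size=(32, 32))  # hypothetical class map
      print(select_bands(cube, labels))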

  19. Histologic classification of gliomas.

    PubMed

    Perry, Arie; Wesseling, Pieter

    2016-01-01

    Gliomas form a heterogeneous group of tumors of the central nervous system (CNS) and are traditionally classified based on histologic type and malignancy grade. Most gliomas, the diffuse gliomas, show extensive infiltration in the CNS parenchyma. Diffuse gliomas can be further typed as astrocytic, oligodendroglial, or rare mixed oligodendroglial-astrocytic of World Health Organization (WHO) grade II (low grade), III (anaplastic), or IV (glioblastoma). Other gliomas generally have a more circumscribed growth pattern, with pilocytic astrocytomas (WHO grade I) and ependymal tumors (WHO grade I, II, or III) as the most frequent representatives. This chapter provides an overview of the histology of all glial neoplasms listed in the WHO 2016 classification, including the less frequent "nondiffuse" gliomas and mixed neuronal-glial tumors. For multiple decades the histologic diagnosis of these tumors formed a useful basis for assessment of prognosis and therapeutic management. However, it is now fully clear that information on the molecular underpinnings often allows for a more robust classification of (glial) neoplasms. Indeed, in the WHO 2016 classification, histologic and molecular findings are integrated in the definition of several gliomas. As such, this chapter and Chapter 6 are highly interrelated and neither should be considered in isolation. PMID:26948349

  20. Classification of mental disorders*

    PubMed Central

    Stengel, E.

    1959-01-01

    One of the fundamental difficulties in devising a classification of mental disorders is the lack of agreement among psychiatrists regarding the concepts upon which it should be based: diagnoses can rarely be verified objectively and the same or similar conditions are described under a confusing variety of names. This situation militates against the ready exchange of ideas and experiences and hampers progress. As a first step towards remedying this state of affairs, the author of the article below has undertaken a critical survey of existing classifications. He shows how some of the difficulties created by lack of knowledge regarding pathology and etiology may be overcome by the use of “operational definitions” and outlines the basic principles on which he believes a generally acceptable international classification might be constructed. If this can be done it should lead to a greater measure of agreement regarding the value of specific treatments for mental disorders and greatly facilitate a broad epidemiological approach to psychiatric research. PMID:13834299

  1. Seismic event classification system

    DOEpatents

    Dowla, F.U.; Jarpe, S.P.; Maurer, W.

    1994-12-13

    In the computer interpretation of seismic data, the critical first step is to identify the general class of an unknown event. For example, the classification might be: teleseismic, regional, local, vehicular, or noise. Self-organizing neural networks (SONNs) can be used for classifying such events. Both Kohonen and Adaptive Resonance Theory (ART) SONNs are useful for this purpose. Given the detection of a seismic event and the corresponding signal, computation is made of: the time-frequency distribution, its binary representation, and finally a shift-invariant representation, which is the magnitude of the two-dimensional Fourier transform (2-D FFT) of the binary time-frequency distribution. This pre-processed input is fed into the SONNs. These neural networks are able to group events that look similar. The ART SONN has an advantage in classifying the event because the types of cluster groups do not need to be pre-defined. The results from the SONNs together with an expert seismologist's classification are then used to derive event classification probabilities. 21 figures.
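
    The pre-processing chain described above (time-frequency distribution, binary representation, magnitude of the 2-D FFT) can be sketched as follows; the threshold choice and the toy chirp signal are assumptions, and the SONN stage is not shown.

      import numpy as np
      from scipy.signal import spectrogram

      def shift_invariant_rep(signal, fs=100.0, threshold=None):
          """Time-frequency distribution -> binary image -> |2-D FFT|,
          which is invariant to shifts of the binary pattern."""
          _, _, tfd = spectrogram(signal, fs=fs, nperseg=64)
          if threshold is None:
              threshold = np.median(tfd)   # assumed binarization rule
          binary = (tfd > threshold).astype(float)
          return np.abs(np.fft.fft2(binary))

      # Toy "event": a chirp buried in noise.
      t = np.linspace(0, 10, 1000)
      sig = np.sin(2 * np.pi * (1 + 2 * t) * t) + 0.5 * np.random.randn(t.size)
      features = shift_invariant_rep(sig)
      print(features.shape)   # input for the self-organizing network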

  2. Seismic event classification system

    DOEpatents

    Dowla, Farid U.; Jarpe, Stephen P.; Maurer, William

    1994-01-01

    In the computer interpretation of seismic data, the critical first step is to identify the general class of an unknown event. For example, the classification might be: teleseismic, regional, local, vehicular, or noise. Self-organizing neural networks (SONNs) can be used for classifying such events. Both Kohonen and Adaptive Resonance Theory (ART) SONNs are useful for this purpose. Given the detection of a seismic event and the corresponding signal, computation is made of: the time-frequency distribution, its binary representation, and finally a shift-invariant representation, which is the magnitude of the two-dimensional Fourier transform (2-D FFT) of the binary time-frequency distribution. This pre-processed input is fed into the SONNs. These neural networks are able to group events that look similar. The ART SONN has an advantage in classifying the event because the types of cluster groups do not need to be pre-defined. The results from the SONNs together with an expert seismologist's classification are then used to derive event classification probabilities.

  3. ASSESSING EXPOSURE CLASSIFICATION IN THE AGRICULTURAL HEALTH STUDY

    EPA Science Inventory

    The Agricultural Health Study (AHS) is a prospective epidemiologic study examining cancer and non-cancer health outcomes for over 55,000 pesticide applicators and 34,000 spouses in Iowa and North Carolina. Questionnaires were used to collect information about the use of specific ...

  4. Criminal exposure.

    PubMed

    1999-08-01

    A 39-year-old man who had sex with a 16-year-old boy, was sentenced to five years in prison. The defendant pleaded guilty to statutory rape and criminal exposure to HIV. The boy discovered that the man was taking HIV medications, and the man subsequently disclosed his treatment after being arrested. PMID:11367003

  5. EXPOSURE ANALYSIS

    EPA Science Inventory

    This proceedings chapter will discuss the state-of-the-science regarding the evaluation of exposure as it relates to water quality criteria (WQC), sediment quality guidelines (SQG), and wildlife criteria (WC). Throughout this discussion, attempts are made to identify the methods ...

  6. Hypotonic exposures.

    PubMed

    Flynn, W J; Hill, R M

    1984-03-01

    Even without a contact lens, the cornea can suffer adverse physiological changes from hypotonic exposure, as well as the associated subjective phenomena (e.g., halo and rainbows). The contact lens adds a dimension to this problem that should be viewed against a background of normal (non-wearing) susceptibilities. PMID:6715776

  7. Comparisons of neural networks to standard techniques for image classification and correlation

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1994-01-01

    Neural network techniques for multispectral image classification and spatial pattern detection are compared to the standard techniques of maximum-likelihood classification and spatial correlation. The neural network produced a more accurate classification than maximum-likelihood of a Landsat scene of Tucson, Arizona. Some of the errors in the maximum-likelihood classification are illustrated using decision region and class probability density plots. As expected, the main drawback to the neural network method is the long time required for the training stage. The network was trained using several different hidden layer sizes to optimize both the classification accuracy and training speed, and it was found that one node per class was optimal. The performance improved when 3x3 local windows of image data were entered into the net. This modification introduces texture into the classification without explicit calculation of a texture measure. Larger windows were successfully used for the detection of spatial features in Landsat and Magellan synthetic aperture radar imagery.
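
    For reference, the maximum-likelihood baseline in such comparisons assigns each pixel vector to the class whose Gaussian model gives the highest likelihood. A minimal sketch with equal priors and synthetic data:

      import numpy as np

      def fit_ml(X, y):
          """Per-class mean and covariance for Gaussian ML classification."""
          return {c: (X[y == c].mean(axis=0), np.cov(X[y == c], rowvar=False))
                  for c in np.unique(y)}

      def predict_ml(X, params):
          """Pick the class with the highest Gaussian log-likelihood."""
          classes = sorted(params)
          scores = []
          for c in classes:
              mu, cov = params[c]
              inv = np.linalg.inv(cov)
              _, logdet = np.linalg.slogdet(cov)
              d = X - mu
              scores.append(-0.5 * (np.einsum("ij,jk,ik->i", d, inv, d)
                                    + logdet))
          return np.array(classes)[np.argmax(scores, axis=0)]

      rng = np.random.default_rng(3)
      X = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(3, 1, (100, 4))])
      y = np.repeat([0, 1], 100)
      print((predict_ml(X, fit_ml(X, y)) == y).mean())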

  8. Improved Hierarchical Optimization-Based Classification of Hyperspectral Images Using Shape Analysis

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.

    2012-01-01

    A new spectral-spatial method for classification of hyperspectral images is proposed. The HSegClas method is based on the integration of probabilistic classification and shape analysis within the hierarchical step-wise optimization algorithm. First, probabilistic support vector machines classification is applied. Then, at each iteration two neighboring regions with the smallest Dissimilarity Criterion (DC) are merged, and classification probabilities are recomputed. The important contribution of this work consists in estimating a DC between regions as a function of statistical, classification and geometrical (area and rectangularity) features. Experimental results are presented on a 102-band ROSIS image of the Center of Pavia, Italy. The developed approach yields more accurate classification results when compared to previously proposed methods.

  9. Post-Mortem evaluation of amyloid-dopamine terminal positron emission tomography dementia classifications.

    PubMed

    Albin, Roger L; Fisher-Hubbard, Amanda; Shanmugasundaram, Krithika; Koeppe, Robert A; Burke, James F; Camelo-Piragua, Sandra; Lieberman, Andrew P; Giordani, Bruno; Frey, Kirk A

    2015-11-01

    Clinical classification of early dementia and mild cognitive impairment (MCI) is imprecise. We reported previously that molecular imaging classification of early dementia and MCI with dual amyloid and dopamine terminal positron emission tomography differs significantly from expert clinical classification. We now report pathological diagnoses in a substantial subset of our previously imaged subjects. Among 36 subjects coming to autopsy, imaging classifications and pathological diagnosis were concordant in 33 cases (κ = 0.85). This approach enhanced specificity of Alzheimer's disease diagnosis. The strong concordance of imaging-based classifications and pathological diagnoses suggests that this imaging approach will be useful in establishing more accurate and convenient classification biomarkers for dementia research. PMID:26183692
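
    The concordance statistic reported above (kappa = 0.85 over 36 cases) is Cohen's kappa, which corrects raw agreement for chance. A small sketch of the computation on toy labels with 33 of 36 concordant cases (the actual case labels are not given here, so the value differs slightly):

      import numpy as np

      def cohens_kappa(a, b):
          """Chance-corrected agreement between two classifications."""
          a, b = np.asarray(a), np.asarray(b)
          classes = np.unique(np.concatenate([a, b]))
          po = (a == b).mean()                        # observed agreement
          pe = sum((a == c).mean() * (b == c).mean()  # chance agreement
                   for c in classes)
          return (po - pe) / (1 - pe)

      imaging   = [0] * 20 + [1] * 16
      pathology = [0] * 18 + [1] * 2 + [0] + [1] * 15  # 3 discordant cases
      print(round(cohens_kappa(imaging, pathology), 2))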

  10. Accurate pointing of tungsten welding electrodes

    NASA Technical Reports Server (NTRS)

    Ziegelmeier, P.

    1971-01-01

    Thoriated-tungsten is pointed accurately and quickly by using sodium nitrite. Point produced is smooth and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces time and cost of preparing tungsten electrodes.

  11. Assessing exposure in epidemiologic studies to disinfection by-products in drinking water: report from an international workshop.

    PubMed Central

    Arbuckle, Tye E; Hrudey, Steve E; Krasner, Stuart W; Nuckols, Jay R; Richardson, Susan D; Singer, Philip; Mendola, Pauline; Dodds, Linda; Weisel, Clifford; Ashley, David L; Froese, Kenneth L; Pegram, Rex A; Schultz, Irvin R; Reif, John; Bachand, Annette M; Benoit, Frank M; Lynberg, Michele; Poole, Charles; Waller, Kirsten

    2002-01-01

    The inability to accurately assess exposure has been one of the major shortcomings of epidemiologic studies of disinfection by-products (DBPs) in drinking water. A number of contributing factors include a) limited information on the identity, occurrence, toxicity, and pharmacokinetics of the many DBPs that can be formed from chlorine, chloramine, ozone, and chlorine dioxide disinfection; b) the complex chemical interrelationships between DBPs and other parameters within a municipal water distribution system; and c) difficulties obtaining accurate and reliable information on personal activity and water consumption patterns. In May 2000, an international workshop was held to bring together various disciplines to develop better approaches for measuring DBP exposure for epidemiologic studies. The workshop reached consensus about the clear need to involve relevant disciplines (e.g., chemists, engineers, toxicologists, biostatisticians and epidemiologists) as partners in developing epidemiologic studies of DBPs in drinking water. The workshop concluded that greater collaboration of epidemiologists with water utilities and regulators should be encouraged in order to make regulatory monitoring data more useful for epidemiologic studies. Similarly, exposure classification categories in epidemiologic studies should be chosen to make results useful for regulatory or policy decision making. PMID:11834463

  12. [Guidelines for hygienic classification of learning technologies].

    PubMed

    Kuchma, V R; Teksheva, L M; Milushkina, O Iu

    2008-01-01

    Optimization of the educational environment under present-day conditions invariably involves the use of learning techwares (LTW). To organize and regulate the academic process with respect to the safety of the LTW applied, a classification of LTW is needed. Existing attempts to structure LTW disregard hygienically significant aspects. The task of the present study was to substantiate an LTW safety criterion that would ensure a universal approach to developing regulations. This criterion may be the exposure intensity, determined by the form and pattern of the organization of education, by the procedure of information presentation, and by the age-related characteristics of the pupil, i.e., by the actual load, which is the product of the exposure intensity and its duration. The hygienic classification of LTW may be used to evaluate their negative effect on the health of children and adolescents in the educational process, to regulate hazardous factors and training modes, and to design and introduce new learning complexes. The structuring of an LTW system makes it possible to define possible deleterious actions and the means of preventing them on the basis of strictly established regulations. PMID:18592639

  13. Multispectral rock-type separation and classification.

    SciTech Connect

    Moya, Mary M.; Fogler, Robert Joseph; Paskaleva, Biliana; Hayat, Majeed M.

    2004-06-01

    This paper explores the possibility of separating and classifying remotely sensed multispectral data from rocks and minerals into seven geological rock-type groups. These groups are extracted from the general categories of metamorphic, igneous and sedimentary rocks. The study is performed under ideal conditions, for which the data are generated from laboratory hyperspectral data for the members, passed through the Multispectral Thermal Imager (MTI) filters to yield 15 bands. The main challenge in separability is the small size of the training data sets, which initially did not permit direct application of Bayesian decision theory. To enable Bayesian classification, the original training data are linearly perturbed with the addition of minerals, vegetation, soil, water and other valid impurities. As a result, the size of the training data is significantly increased and accurate estimates of the covariance matrices are achieved. In addition, a set of reduced (five) linearly extracted canonical features that are optimal in providing the most important information about the data is determined. An alternative nonlinear feature-selection method is also employed, based on spectral indices comprising a small subset of all possible ratios between bands. By applying three optimization strategies, combinations of two and three ratios are found that provide reliable separability and classification between all seven groups according to the Bhattacharyya distance. To set a benchmark against which the MTI capability in rock classification can be compared, an optimization strategy is performed for the selection of optimal multispectral filters other than the MTI filters, and an improvement in classification is predicted.
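
    The Bhattacharyya distance used as the separability criterion has a closed form for Gaussian class models. A sketch with hypothetical 15-band class statistics:

      import numpy as np

      def bhattacharyya(mu1, cov1, mu2, cov2):
          """Bhattacharyya distance between two Gaussian class models;
          larger values indicate better separability."""
          cov = 0.5 * (cov1 + cov2)
          diff = mu2 - mu1
          term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
          term2 = 0.5 * np.log(np.linalg.det(cov) /
                               np.sqrt(np.linalg.det(cov1) *
                                       np.linalg.det(cov2)))
          return term1 + term2

      # Hypothetical 15-band samples for two rock-type groups.
      rng = np.random.default_rng(5)
      A = rng.normal(0.3, 0.05, (200, 15))
      B = rng.normal(0.5, 0.05, (200, 15))
      print(bhattacharyya(A.mean(0), np.cov(A, rowvar=False),
                          B.mean(0), np.cov(B, rowvar=False)))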

  14. Poisoning by Herbs and Plants: Rapid Toxidromic Classification and Diagnosis.

    PubMed

    Diaz, James H

    2016-03-01

    The American Association of Poison Control Centers has continued to report approximately 50,000 telephone calls or 8% of incoming calls annually related to plant exposures, mostly in children. Although the frequency of plant ingestions in children is related to the presence of popular species in households, adolescents may experiment with hallucinogenic plants; and trekkers and foragers may misidentify poisonous plants as edible. Since plant exposures have continued at a constant rate, the objectives of this review were (1) to review the epidemiology of plant poisonings; and (2) to propose a rapid toxidromic classification system for highly toxic plant ingestions for field use by first responders in comparison to current classification systems. Internet search engines were queried to identify and select peer-reviewed articles on plant poisonings using the key words in order to classify plant poisonings into four specific toxidromes: cardiotoxic, neurotoxic, cytotoxic, and gastrointestinal-hepatotoxic. A simple toxidromic classification system of plant poisonings may permit rapid diagnoses of highly toxic versus less toxic and nontoxic plant ingestions both in households and outdoors; direct earlier management of potentially serious poisonings; and reduce costly inpatient evaluations for inconsequential plant ingestions. The current textbook classification schemes for plant poisonings were complex in comparison to the rapid classification system; and were based on chemical nomenclatures and pharmacological effects, and not on clearly presenting toxidromes. Validation of the rapid toxidromic classification system as compared to existing chemical classification systems for plant poisonings will require future adoption and implementation of the toxidromic system by its intended users. PMID:26948561

  15. New classification of natural breeding habitats for Neotropical anophelines in the Yanomami Indian Reserve, Amazon Region, Brazil and a new larval sampling methodology.

    PubMed

    Sánchez-Ribas, Jordi; Oliveira-Ferreira, Joseli; Rosa-Freitas, Maria Goreti; Trilla, Lluís; Silva-do-Nascimento, Teresa Fernandes

    2015-09-01

    Here we present the first in a series of articles about the ecology of immature stages of anophelines in the Brazilian Yanomami area. We propose a new larval habitat classification and a new larval sampling methodology. We also report some preliminary results illustrating the applicability of the methodology based on data collected in the Brazilian Amazon rainforest in a longitudinal study of two remote Yanomami communities, Parafuri and Toototobi. In these areas, we mapped and classified 112 natural breeding habitats located in low-order river systems based on their association with river flood pulses, seasonality and exposure to sun. Our classification rendered seven types of larval habitats: lakes associated with the river, which are subdivided into oxbow lakes and nonoxbow lakes, flooded areas associated with the river, flooded areas not associated with the river, rainfall pools, small forest streams, medium forest streams and rivers. The methodology for larval sampling was based on the accurate quantification of the effective breeding area, taking into account the area of the perimeter and subtypes of microenvironments present per larval habitat type using a laser range finder and a small portable inflatable boat. The new classification and new sampling methodology proposed herein may be useful in vector control programs. PMID:26517655

  16. New classification of natural breeding habitats for Neotropical anophelines in the Yanomami Indian Reserve, Amazon Region, Brazil and a new larval sampling methodology

    PubMed Central

    Sánchez-Ribas, Jordi; Oliveira-Ferreira, Joseli; Rosa-Freitas, Maria Goreti; Trilla, Lluís; Silva-do-Nascimento, Teresa Fernandes

    2015-01-01

    Here we present the first in a series of articles about the ecology of immature stages of anophelines in the Brazilian Yanomami area. We propose a new larval habitat classification and a new larval sampling methodology. We also report some preliminary results illustrating the applicability of the methodology based on data collected in the Brazilian Amazon rainforest in a longitudinal study of two remote Yanomami communities, Parafuri and Toototobi. In these areas, we mapped and classified 112 natural breeding habitats located in low-order river systems based on their association with river flood pulses, seasonality and exposure to sun. Our classification rendered seven types of larval habitats: lakes associated with the river, which are subdivided into oxbow lakes and nonoxbow lakes, flooded areas associated with the river, flooded areas not associated with the river, rainfall pools, small forest streams, medium forest streams and rivers. The methodology for larval sampling was based on the accurate quantification of the effective breeding area, taking into account the area of the perimeter and subtypes of microenvironments present per larval habitat type using a laser range finder and a small portable inflatable boat. The new classification and new sampling methodology proposed herein may be useful in vector control programs. PMID:26517655

  17. An accurate method of extracting fat droplets in liver images for quantitative evaluation

    NASA Astrophysics Data System (ADS)

    Ishikawa, Masahiro; Kobayashi, Naoki; Komagata, Hideki; Shinoda, Kazuma; Yamaguchi, Masahiro; Abe, Tokiya; Hashiguchi, Akinori; Sakamoto, Michiie

    2015-03-01

    The steatosis in liver pathological tissue images is a promising indicator of nonalcoholic fatty liver disease (NAFLD) and the possible risk of hepatocellular carcinoma (HCC). The resulting values are also important for ensuring the automatic and accurate classification of HCC images, because the existence of many fat droplets is likely to create errors in quantifying the morphological features used in the process. In this study we propose a method that can automatically detect and exclude regions with many fat droplets by using feature values of color, shape and the arrangement of cell nuclei. We implement the method and confirm that it can accurately detect fat droplets and quantify the fat droplet ratio of actual images. This investigation also clarifies the effective characteristics that contribute to accurate detection.

  18. Classification of LiDAR Data with Point Based Classification Methods

    NASA Astrophysics Data System (ADS)

    Yastikli, N.; Cetin, Z.

    2016-06-01

    LiDAR is one of the most effective systems for three-dimensional (3D) data collection over wide areas. Airborne LiDAR data are now used frequently in various applications, such as object extraction, 3D modelling, change detection and map revision, with increasing point density and accuracy. Classification of the LiDAR points is the first step of the LiDAR data processing chain and should be handled properly, since applications such as 3D city modelling, building extraction and DEM generation directly use the classified point clouds. Different classification methods appear in recent research, and most of them work with a gridded LiDAR point cloud. In grid-based processing of LiDAR data, the loss of characteristic points in the point cloud (especially on vegetation and buildings) and the loss of height accuracy during the interpolation stage are inevitable. A possible solution is to classify the raw point cloud directly, avoiding the data and accuracy losses of the gridding process. In this study, point-based classification of the LiDAR point cloud is investigated to obtain more accurate classes. Automatic point-based approaches based on hierarchical rules are proposed to derive ground, building and vegetation classes from the raw LiDAR point cloud. In the proposed approaches, every single LiDAR point is analyzed according to features such as height and multi-return, and is then automatically assigned to the class to which it belongs. The use of the un-gridded point cloud in the proposed point-based classification helped in determining more realistic rule sets. Detailed parameter analyses were performed to obtain the most appropriate parameters in the rule sets for accurate classes. Hierarchical rule sets were created for the proposed Approach 1 (using selected spatial-based and echo-based features) and Approach 2 (using only selected spatial-based features
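
    A toy version of the hierarchical point-based rules described above, assigning each point a class from per-point features such as height above ground and return counts; the thresholds are illustrative assumptions, not the parameters derived in the study.

      import numpy as np

      def classify_points(height_above_ground, num_returns, return_no):
          """Simple hierarchical rules over raw (un-gridded) LiDAR points."""
          labels = np.full(height_above_ground.shape, "unassigned",
                           dtype=object)
          labels[height_above_ground < 0.3] = "ground"
          veg = (height_above_ground >= 0.3) & (num_returns > 1)
          labels[veg] = "vegetation"     # multi-return points: canopy
          bld = ((height_above_ground >= 2.0) & (num_returns == 1)
                 & (return_no == 1))
          labels[bld] = "building"       # tall single-return points: roofs
          return labels

      hag = np.array([0.1, 5.0, 4.2, 0.2, 8.0])
      nret = np.array([1, 3, 1, 1, 2])
      rno = np.array([1, 1, 1, 1, 2])
      print(classify_points(hag, nret, rno))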

  19. Novel Cortical Thickness Pattern for Accurate Detection of Alzheimer's Disease.

    PubMed

    Zheng, Weihao; Yao, Zhijun; Hu, Bin; Gao, Xiang; Cai, Hanshu; Moore, Philip

    2015-01-01

    Brain networks occupy an important position in representing abnormalities in Alzheimer's disease (AD) and mild cognitive impairment (MCI). Most studies to date have focused only on morphological features of regions of interest without exploring interregional alterations. In order to investigate the potential discriminative power of a morphological network in AD diagnosis, and to provide supportive evidence on the feasibility of an individual structural network study, we propose a novel approach for extracting correlative features from magnetic resonance imaging, consisting of a two-step procedure for constructing an individual thickness network with low computational complexity. First, multi-distance combination is utilized for accurate evaluation of between-region dissimilarity; the dissimilarity is then transformed to connectivity via a correlation function. An evaluation of the proposed approach has been conducted with 189 normal controls, 198 MCI subjects, and 163 AD patients using machine learning techniques. Results show that the proposed correlative feature yields a significant improvement in classification performance compared with cortical thickness, with an accuracy of 89.88% and an area under the receiver operating characteristic curve of 0.9588. We further improved the performance by integrating both thickness and apolipoprotein E ɛ4 allele information with the correlative features, achieving accuracies of 92.11% and 79.37% in separating AD from normal controls and AD converters from non-converters, respectively. Differences between using diverse distance measurements and various correlation transformation functions are also discussed to explore an optimal way for network establishment. PMID:26444768

  20. Feedback about More Accurate versus Less Accurate Trials: Differential Effects on Self-Confidence and Activation

    ERIC Educational Resources Information Center

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-01-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…

  1. PROCEDURES FOR ACCURATE PRODUCTION OF COLOR IMAGES FROM SATELLITE OR AIRCRAFT MULTISPECTRAL DIGITAL DATA.

    USGS Publications Warehouse

    Duval, Joseph S.

    1985-01-01

    Because the display and interpretation of satellite and aircraft remote-sensing data make extensive use of color film products, accurate reproduction of the color images is important. To achieve accurate color reproduction, the exposure and chemical processing of the film must be monitored and controlled. By using a combination of sensitometry, densitometry, and transfer functions that control film response curves, all of the different steps in the making of film images can be monitored and controlled. Because a sensitometer produces a calibrated exposure, the resulting step wedge can be used to monitor the chemical processing of the film. Step wedges put on film by image recording machines provide a means of monitoring the film exposure and color balance of the machines.

  2. On the Classification of Psychology in General Library Classification Schemes.

    ERIC Educational Resources Information Center

    Soudek, Miluse

    1980-01-01

    Holds that traditional library classification systems are inadequate to handle psychological literature, and advocates the establishment of new theoretical approaches to bibliographic organization. (FM)

  3. Remote Sensing Classification Uncertainty: Validating Probabilistic Pixel Level Classification

    NASA Astrophysics Data System (ADS)

    Vrettas, Michail; Cornford, Dan; Bastin, Lucy; Pons, Xavier; Sevillano, Eva; Moré, Gerard; Serra, Pere; Ninyerola, Miquel

    2013-04-01

    There already exists an extensive literature on the classification of remotely sensed imagery, and indeed on classification more widely, covering a wide range of probabilistic and non-probabilistic methodologies. Although many probabilistic methodologies produce posterior class probabilities per pixel (observation), these are often not communicated at the pixel level, and typically not validated at the pixel level. Most often the probabilistic classification is converted into a hard classification (of the most probable class) and the accuracy of the resulting classification is reported in terms of a global confusion matrix, or some score derived from it. For applications where classification accuracy is spatially variable, and where pixel-level estimates of uncertainty can be meaningfully exploited in workflows that propagate uncertainty, validating and communicating the pixel-level uncertainty opens opportunities for more refined and accountable modelling. In this work we describe our recent application and validation of a range of probabilistic classifiers. Using a multi-temporal Landsat data set of the Ebro Delta in Catalonia, which has been carefully radiometrically and geometrically corrected, we present a range of Bayesian classifiers, from simple Bayesian linear discriminant analysis to a complex variational Gaussian-process-based classifier. Field-study-derived labelled data, classified into 8 classes that primarily reflect land use and the degree of flooding in what is a rice-growing region, are used to train the pixel-level classifiers. Our focus is not so much on classification accuracy, but rather on validating the probabilistic classifications made by all methods. We present a range of validation plots and scores, many of which are used for probabilistic weather forecast verification but are new to remote sensing classification, including of course the standard measures of misclassification, but also
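
    Two of the standard probabilistic-forecast scores alluded to above, applied per pixel, might look as follows; the class probabilities here are synthetic stand-ins for classifier output.

      import numpy as np

      def brier_score(probs, truth):
          """Mean squared error of class probabilities (one-hot truth)."""
          onehot = np.eye(probs.shape[1])[truth]
          return np.mean(np.sum((probs - onehot) ** 2, axis=1))

      def reliability_curve(prob, hits, bins=10):
          """Per-bin mean predicted probability vs. observed frequency;
          a well-calibrated classifier lies on the diagonal."""
          edges = np.linspace(0, 1, bins + 1)
          idx = np.clip(np.digitize(prob, edges) - 1, 0, bins - 1)
          keep = [b for b in range(bins) if np.any(idx == b)]
          mean_p = np.array([prob[idx == b].mean() for b in keep])
          freq = np.array([hits[idx == b].mean() for b in keep])
          return mean_p, freq

      rng = np.random.default_rng(11)
      probs = rng.dirichlet(np.ones(8), size=5000)  # 8 classes per pixel
      truth = rng.integers(0, 8, size=5000)
      print(brier_score(probs, truth))
      print(reliability_curve(probs[:, 0], (truth == 0).astype(float)))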

  4. Feedback about more accurate versus less accurate trials: differential effects on self-confidence and activation.

    PubMed

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-06-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On day 1, participants performed a golf putting task under one of two conditions: one group received feedback on the most accurate trials, whereas another group received feedback on the least accurate trials. On day 2, participants completed an anxiety questionnaire and performed a retention test. Skin conductance level, as a measure of arousal, was determined. The results indicated that feedback about more accurate trials resulted in more effective learning as well as increased self-confidence. Also, activation was a predictor of performance. PMID:22808705

  5. Exposure chamber

    DOEpatents

    Moss, Owen R.; Briant, James K.

    1983-01-01

    An exposure chamber includes an imperforate casing having a fluid inlet at the top and an outlet at the bottom. A single vertical series of imperforate trays is provided. Each tray is spaced on all sides from the chamber walls. Baffles adjacent some of the trays restrict and direct the flow to give partial flow back and forth across the chambers and downward flow past the lowermost pan adjacent a central plane of the chamber.

  6. New model accurately predicts reformate composition

    SciTech Connect

    Ancheyta-Juarez, J.; Aguilar-Rodriguez, E. )

    1994-01-31

    Although naphtha reforming is a well-known process, changes in catalyst formulation, as well as new trends in gasoline specifications, have led to rapid evolution of the process, including reactor design, regeneration mode, and operating conditions. Mathematical modeling of the reforming process is an increasingly important tool. It is fundamental to the proper design of new reactors and the revamp of existing ones. Modeling can be used to optimize operating conditions, analyze the effects of process variables, and enhance unit performance. Instituto Mexicano del Petroleo has developed a model of the catalytic reforming process that accurately predicts reformate composition at the higher-severity conditions at which new reformers are being designed. The new AA model is more accurate than previous proposals because it takes into account the effects of temperature and pressure on the rate constants of each chemical reaction.

  7. Accurate colorimetric feedback for RGB LED clusters

    NASA Astrophysics Data System (ADS)

    Man, Kwong; Ashdown, Ian

    2006-08-01

    We present an empirical model of LED emission spectra that is applicable to both InGaN and AlInGaP high-flux LEDs, and which accurately predicts their relative spectral power distributions over a wide range of LED junction temperatures. We further demonstrate with laboratory measurements that changes in LED spectral power distribution with temperature can be accurately predicted with first- or second-order equations. This provides the basis for a real-time colorimetric feedback system for RGB LED clusters that can maintain the chromaticity of white light at constant intensity to within ±0.003 Δuv over a range of 45 degrees Celsius, and to within 0.01 Δuv when dimmed over an intensity range of 10:1.
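
    The first- or second-order temperature relations mentioned above can be fitted directly from calibration data. A sketch with wholly hypothetical peak-wavelength measurements for a red AlInGaP LED:

      import numpy as np

      # Hypothetical calibration data: peak wavelength vs. junction
      # temperature; the real feedback loop would evaluate such a fitted
      # relation in real time.
      T = np.array([25.0, 35.0, 45.0, 55.0, 65.0, 70.0])           # deg C
      peak = np.array([632.0, 633.1, 634.3, 635.6, 637.0, 637.8])  # nm

      model = np.poly1d(np.polyfit(T, peak, deg=2))  # second-order model
      print(model(50.0))                             # predicted peak, nm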

  8. Accurate mask model for advanced nodes

    NASA Astrophysics Data System (ADS)

    Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Ndiaye, El Hadji Omar; Mishra, Kushlendra; Paninjath, Sankaranarayanan; Bork, Ingo; Buck, Peter; Toublan, Olivier; Schanen, Isabelle

    2014-07-01

    Standard OPC models consist of a physical optical model and an empirical resist model. The resist model compensates the optical model imprecision on top of modeling resist development. The optical model imprecision may result from mask topography effects and real mask information including mask ebeam writing and mask process contributions. For advanced technology nodes, significant progress has been made to model mask topography to improve optical model accuracy. However, mask information is difficult to decorrelate from standard OPC model. Our goal is to establish an accurate mask model through a dedicated calibration exercise. In this paper, we present a flow to calibrate an accurate mask enabling its implementation. The study covers the different effects that should be embedded in the mask model as well as the experiment required to model them.

  9. Accurate guitar tuning by cochlear implant musicians.

    PubMed

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081

  10. Two highly accurate methods for pitch calibration

    NASA Astrophysics Data System (ADS)

    Kniel, K.; Härtig, F.; Osawa, S.; Sato, O.

    2009-11-01

    Among profile, helix and tooth thickness measurements, pitch is one of the most important parameters in involute gear measurement evaluation. In principle, coordinate measuring machines (CMMs) and CNC-controlled gear measuring machines, as a variant of a CMM, are suited for these kinds of gear measurements. The Japan National Institute of Advanced Industrial Science and Technology (NMIJ/AIST) and the German national metrology institute, the Physikalisch-Technische Bundesanstalt (PTB), have now each independently developed highly accurate pitch calibration methods applicable to CMMs or gear measuring machines. Both calibration methods are based on the so-called closure technique, which allows the separation of the systematic errors of the measurement device from the errors of the gear. For the verification of both calibration methods, NMIJ/AIST and PTB performed measurements on a specially designed pitch artifact. The comparison of the results shows that both methods can be used for highly accurate calibrations of pitch standards.
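
    The closure technique can be demonstrated in miniature: a measurement at position i with the gear rotated by s teeth mixes the device error e[i] with the gear's pitch deviation p[(i+s) mod N], and averaging over all rotations isolates the device error (up to the mean of p). The simulation below is a deliberately simplified model of the idea, not either institute's actual procedure.

      import numpy as np

      N = 12                                 # teeth / measurement positions
      rng = np.random.default_rng(9)
      e = rng.normal(0, 0.5, N)              # device systematic error
      p = rng.normal(0, 0.5, N)              # gear pitch deviations

      # One full closure: measure in every rotated position s of the gear.
      m = np.array([[e[i] + p[(i + s) % N] for i in range(N)]
                    for s in range(N)])
      e_est = m.mean(axis=0)                 # -> e + mean(p)
      p_est = m[0] - e_est                   # -> p - mean(p), using s = 0
      print(np.allclose(e_est - p.mean(), e))      # True
      print(np.allclose(p_est, p - p.mean()))      # True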

  11. Accurate modeling of parallel scientific computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Townsend, James C.

    1988-01-01

    Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.

  12. Accurate Guitar Tuning by Cochlear Implant Musicians

    PubMed Central

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081

  13. Interactive Classification Technology

    NASA Technical Reports Server (NTRS)

    deBessonet, Cary

    2000-01-01

    The investigators upgraded a knowledge representation language called SL (Symbolic Language) and an automated reasoning system called SMS (Symbolic Manipulation System) to enable the more effective use of the technologies in automated reasoning and interactive classification systems. The overall goals of the project were: 1) the enhancement of the representation language SL to accommodate a wider range of meaning; 2) the development of a default inference scheme to operate over SL notation as it is encoded; and 3) the development of an interpreter for SL that would handle representations of some basic cognitive acts and perspectives.

  14. Accurate maser positions for MALT-45

    NASA Astrophysics Data System (ADS)

    Jordan, Christopher; Bains, Indra; Voronkov, Maxim; Lo, Nadia; Jones, Paul; Muller, Erik; Cunningham, Maria; Burton, Michael; Brooks, Kate; Green, James; Fuller, Gary; Barnes, Peter; Ellingsen, Simon; Urquhart, James; Morgan, Larry; Rowell, Gavin; Walsh, Andrew; Loenen, Edo; Baan, Willem; Hill, Tracey; Purcell, Cormac; Breen, Shari; Peretto, Nicolas; Jackson, James; Lowe, Vicki; Longmore, Steven

    2013-10-01

    MALT-45 is an untargeted survey, mapping the Galactic plane in CS (1-0), Class I methanol masers, SiO masers and thermal emission, and high frequency continuum emission. After obtaining images from the survey, a number of masers were detected, but without accurate positions. This project seeks to resolve each maser and its environment, with the ultimate goal of placing the Class I methanol maser into a timeline of high mass star formation.

  15. Accurate phase-shift velocimetry in rock.

    PubMed

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R; Holmes, William M

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models. PMID:27111139

  16. Accurate Molecular Polarizabilities Based on Continuum Electrostatics

    PubMed Central

    Truchon, Jean-François; Nicholls, Anthony; Iftimie, Radu I.; Roux, Benoît; Bayly, Christopher I.

    2013-01-01

    A novel approach for representing the intramolecular polarizability as a continuum dielectric is introduced to account for molecular electronic polarization. It is shown, using a finite-difference solution to the Poisson equation, that the Electronic Polarization from Internal Continuum (EPIC) model yields accurate gas-phase molecular polarizability tensors for a test set of 98 challenging molecules composed of heteroaromatics, alkanes and diatomics. The electronic polarization originates from a high intramolecular dielectric that produces polarizabilities consistent with B3LYP/aug-cc-pVTZ and experimental values when surrounded by vacuum dielectric. In contrast to other approaches to model electronic polarization, this simple model avoids the polarizability catastrophe and accurately calculates molecular anisotropy with the use of very few fitted parameters and without resorting to auxiliary sites or anisotropic atomic centers. On average, the unsigned error in the average polarizability and anisotropy compared to B3LYP are 2% and 5%, respectively. The correlation between the polarizability components from B3LYP and this approach lead to a R2 of 0.990 and a slope of 0.999. Even the F2 anisotropy, shown to be a difficult case for existing polarizability models, can be reproduced within 2% error. In addition to providing new parameters for a rapid method directly applicable to the calculation of polarizabilities, this work extends the widely used Poisson equation to areas where accurate molecular polarizabilities matter. PMID:23646034

  17. Accurate phase-shift velocimetry in rock

    NASA Astrophysics Data System (ADS)

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R.; Holmes, William M.

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models.

  18. Evaluating LANDSAT wildland classification accuracies

    NASA Technical Reports Server (NTRS)

    Toll, D. L.

    1980-01-01

    Procedures to evaluate the accuracy of LANDSAT derived wildland cover classifications are described. The evaluation procedures include: (1) implementing a stratified random sample for obtaining unbiased verification data; (2) performing area by area comparisons between verification and LANDSAT data for both heterogeneous and homogeneous fields; (3) providing overall and individual classification accuracies with confidence limits; (4) displaying results within contingency tables for analysis of confusion between classes; and (5) quantifying the amount of information (bits/square kilometer) conveyed in the LANDSAT classification.

  19. Simplified classification of spontaneous abortions.

    PubMed Central

    Rushton, D I

    1978-01-01

    A simple classification of products of conception aborted in early pregnancy is described. This classification bears a closer relation to the aetiology of the abortions and the timing of the teratological insult in those conceptuses with morphological abnormalities than have previous classifications. It is hoped it may be of value in counselling patients who abort recurrently and also in the assessment of some environmental hazards purported to cause early pregnancy wastage and congenital malformations. PMID:564967

  20. Classification of brain tumors using MRI and MRS data

    NASA Astrophysics Data System (ADS)

    Wang, Qiang; Liacouras, Eirini Karamani; Miranda, Erickson; Kanamalla, Uday S.; Megalooikonomou, Vasileios

    2007-03-01

    We study the problem of classifying brain tumors as benign or malignant using information from magnetic resonance (MR) imaging and magnetic resonance spectroscopy (MRS) to assist in clinical diagnosis. The proposed approach consists of several steps including segmentation, feature extraction, feature selection, and classification model construction. Using an automated segmentation technique based on fuzzy connectedness we accurately outline the tumor mass boundaries in the MR images so that further analysis concentrates on these regions of interest (ROIs). We then apply a concentric circle technique on the ROIs to extract features that are utilized by the classification algorithms. To remove redundant features, we perform feature selection where only those features with discriminatory information (among classes) are used in the model building process. The involvement of MRS features further improves the classification accuracy of the model. Experimental results demonstrate the effectiveness of the proposed approach in classifying brain tumors in MR images.
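
    The following is a minimal, hypothetical sketch (not the authors' code) of the feature-selection-plus-classification stage described above, using scikit-learn in Python; the synthetic feature matrix stands in for the concentric-circle MRI/MRS features, and all names and parameter values are illustrative assumptions.

        # Hypothetical feature-selection + SVM stage on synthetic stand-in data.
        import numpy as np
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 40))      # 60 tumor ROIs x 40 MRI/MRS features (synthetic)
        y = rng.integers(0, 2, size=60)    # 0 = benign, 1 = malignant (synthetic labels)

        clf = make_pipeline(
            StandardScaler(),
            SelectKBest(f_classif, k=10),  # keep only features with discriminatory information
            SVC(kernel="rbf", C=1.0),
        )
        print(cross_val_score(clf, X, y, cv=5).mean())   # cross-validated accuracy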

  1. Robust tissue classification for reproducible wound assessment in telemedicine environments

    NASA Astrophysics Data System (ADS)

    Wannous, Hazem; Treuillet, Sylvie; Lucas, Yves

    2010-04-01

    In telemedicine environments, a standardized and reproducible assessment of wounds, using a simple free-handled digital camera, is an essential requirement. However, to ensure robust tissue classification, particular attention must be paid to the complete design of the color processing chain. We introduce the key steps including color correction, merging of expert labeling, and segmentation-driven classification based on support vector machines. The tool thus developed ensures stability under changes in lighting, viewpoint, and camera, to achieve accurate and robust classification of skin tissues. Clinical tests demonstrate that such an advanced tool, which forms part of a complete 3-D and color wound assessment system, significantly improves the monitoring of the healing process. It achieves an overlap score of 79.3% against 69.1% for a single expert, after mapping on the medical reference developed from the image labeling by a college of experts.

  2. Overview of Classification Systems in Peripheral Artery Disease

    PubMed Central

    Hardman, Rulon L.; Jazaeri, Omid; Yi, J.; Smith, M.; Gupta, Rajan

    2014-01-01

    Peripheral artery disease (PAD), secondary to atherosclerotic disease, is currently the leading cause of morbidity and mortality in the western world. While PAD is common, it is estimated that the majority of patients with PAD are undiagnosed and undertreated. The challenge to the treatment of PAD is to accurately diagnose the symptoms and determine treatment for each patient. The varied presentations of peripheral vascular disease have led to numerous classification schemes throughout the literature. Consistent grading of patients leads to both objective criteria for treating patients and a baseline for clinical follow-up. Reproducible classification systems are also important in clinical trials and when comparing medical, surgical, and endovascular treatment paradigms. This article reviews the various classification systems for PAD and advantages to each system. PMID:25435665

  3. The KFM, A Homemade Yet Accurate and Dependable Fallout Meter

    SciTech Connect

    Kearny, C.H.

    2001-11-20

    The KFM is a homemade fallout meter that can be made using only materials, tools, and skills found in millions of American homes. It is an accurate and dependable electroscope-capacitor. The KFM, in conjunction with its attached table and a watch, is designed for use as a rate meter. Its attached table relates observed differences in the separations of its two leaves (before and after exposures at the listed time intervals) to the dose rates during exposures of these time intervals. In this manner dose rates from 30 mR/hr up to 43 R/hr can be determined with an accuracy of ±25%. A KFM can be charged with any one of the three expedient electrostatic charging devices described. Due to the use of anhydrite (made by heating gypsum from wallboard) inside a KFM and the expedient "dry-bucket" in which it can be charged when the air is very humid, this instrument always can be charged and used to obtain accurate measurements of gamma radiation no matter how high the relative humidity. The heart of this report is the step-by-step illustrated instructions for making and using a KFM. These instructions have been improved after each successive field test. The majority of the untrained test families, adequately motivated by cash bonuses offered for success and guided only by these written instructions, have succeeded in making and using a KFM. NOTE: "The KFM, A Homemade Yet Accurate and Dependable Fallout Meter" was published as an Oak Ridge National Laboratory report in 1979. Some of the materials originally suggested for suspending the leaves of the Kearny Fallout Meter (KFM) are no longer available. Because of changes in the manufacturing process, other materials (e.g., sewing thread, unwaxed dental floss) may not have the insulating capability to work properly. Oak Ridge National Laboratory has not tested any of the suggestions provided in the preface of the report, but they have been used by other groups. When using these instructions, the builder can verify the

  4. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 ± 6.1%, mean ± SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operator curve (ROC) of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at the optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P < 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 ± 11.5 vs. 41.5 ± 13.6 mV, respectively, P < 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of ≥40 points and ≥445 ms, respectively. In conclusion 12-lead HF QRS ECG employing

  5. Fast and accurate deflectometry with crossed fringes

    NASA Astrophysics Data System (ADS)

    Liu, Yuankun; Olesch, Evelyn; Yang, Zheng; Häusler, Gerd

    2014-08-01

    Phase measuring deflectometry (PMD) acquires the two components of the local surface gradient via a sequence of two orthogonal sinusoidal fringe patterns that have to be displayed and captured separately. We will demonstrate that the sequential process (different fringe directions, phase shifting) can be completely avoided by using a cross fringe pattern. With an optimized Fourier evaluation, high quality data of smooth optical surfaces can be acquired within one single shot. The cross fringe pattern allows for one more improvement of PMD: we will demonstrate a novel phase-shift technique, where a one-dimensional N-phase shift allows for the acquisition of the two orthogonal phases, with only N exposures instead of 2N exposures. Therefore, PMD can be implemented by a one-dimensional translation of the fringe pattern, instead of the common two-dimensional translation, which is quite useful for certain applications.
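
    As a rough illustration of the single-shot idea, the sketch below synthesizes a crossed fringe pattern and recovers the two wrapped phase maps by cutting out the two carrier lobes in the 2-D Fourier spectrum; the grid size, carrier frequencies and window half-width are assumptions, and this is not the authors' optimized Fourier evaluation.

        import numpy as np

        N, fx, fy, half = 256, 16, 16, 8               # grid, carriers, window (assumed)
        x, y = np.meshgrid(np.arange(N), np.arange(N))
        phi_x = 0.3 * np.sin(2 * np.pi * x / N)        # synthetic slope-induced phases
        phi_y = 0.2 * np.cos(2 * np.pi * y / N)
        I = (2 + np.cos(2 * np.pi * fx * x / N + phi_x)
               + np.cos(2 * np.pi * fy * y / N + phi_y))   # one crossed-fringe exposure

        F = np.fft.fftshift(np.fft.fft2(I))
        c = N // 2

        def demodulate(du, dv):
            # Cut one carrier lobe out of the spectrum, shift it to DC,
            # and return the wrapped phase of the resulting analytic signal.
            G = np.zeros_like(F)
            G[c + dv - half:c + dv + half, c + du - half:c + du + half] = \
                F[c + dv - half:c + dv + half, c + du - half:c + du + half]
            g = np.fft.ifft2(np.fft.ifftshift(np.roll(G, (-dv, -du), axis=(0, 1))))
            return np.angle(g)

        wrapped_x = demodulate(fx, 0)   # phase map of the horizontal carrier
        wrapped_y = demodulate(0, fy)   # phase map of the vertical carrier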

  6. Classification of Liver Trauma

    PubMed Central

    Rizoli, Sandro B.; Brenneman, Frederick D.; Hanna, Sherif S.; Kahnamoui, Kamyar

    1996-01-01

    The classification of liver injuries is important for clinical practice, clinical research and quality assurance activities. The Organ Injury Scaling (OIS) Committee of the American Association for the Surgery of Trauma proposed the OIS for liver trauma in 1989. The purpose of the present study was to apply this scale to a cohort of liver trauma patients managed at a single Canadian trauma centre from January 1987 to June 1992. 170 study patients were identified and reviewed. The mean age was 30, with 69% male and a mean ISS of 33. 90% had a blunt mechanism of injury. The 170 patients were categorized into the 6 OIS grades of liver injury. The number of units of blood transfused, the magnitude of the operative treatment required, the liver-related complications and the liver-related mortality correlated well with the OIS grade. The OIS grade was unable to predict the need for laparotomy or the length of stay in hospital. We conclude that the OIS is a useful, practical and important tool for the categorization of liver injuries, and it may prove to be the universally accepted classification scheme in liver trauma. PMID:8809585

  7. "Interactive Classification Technology"

    NASA Technical Reports Server (NTRS)

    deBessonet, Cary

    1999-01-01

    The investigators are upgrading a knowledge representation language called SL (Symbolic Language) and an automated reasoning system called SMS (Symbolic Manipulation System) to enable the technologies to be used in automated reasoning and interactive classification systems. The overall goals of the project are: a) the enhancement of the representation language SL to accommodate multiple perspectives and a wider range of meaning; b) the development of a sufficient set of operators to enable the interpreter of SL to handle representations of basic cognitive acts; and c) the development of a default inference scheme to operate over SL notation as it is encoded. As to particular goals, the first-year work plan focused on inferencing and representation issues, including: 1) the development of higher-level cognitive/classification functions and conceptual models for use in inferencing and decision making; 2) the specification of a more detailed scheme of defaults and the enrichment of SL notation to accommodate the scheme; and 3) the adoption of additional perspectives for inferencing.

  8. Holistic facial expression classification

    NASA Astrophysics Data System (ADS)

    Ghent, John; McDonald, J.

    2005-06-01

    This paper details a procedure for classifying facial expressions. This is a growing and relatively new type of problem within computer vision. One of the fundamental problems when classifying facial expressions in previous approaches is the lack of a consistent method of measuring expression. This paper solves this problem by the computation of the Facial Expression Shape Model (FESM). This statistical model of facial expression is based on an anatomical analysis of facial expression called the Facial Action Coding System (FACS). We use the term Action Unit (AU) to describe a movement of one or more muscles of the face, and all expressions can be described using the AUs defined by FACS. The shape model is calculated by marking the face with 122 landmark points. We use Principal Component Analysis (PCA) to analyse how the landmark points move with respect to each other and to lower the dimensionality of the problem. Using the FESM in conjunction with Support Vector Machines (SVM) we classify facial expressions. SVMs are a powerful machine learning technique based on optimisation theory. This project is largely concerned with statistical models, machine learning techniques and psychological tools used in the classification of facial expression. This holistic approach to expression classification provides a means for a level of interaction with a computer that is a significant step forward in human-computer interaction.
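
    A hedged sketch of the landmark-PCA-SVM chain is given below; the synthetic arrays stand in for the 122 landmark points and expression labels, and the component count and kernel are illustrative assumptions rather than the FESM implementation.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 244))    # 200 faces x (122 landmarks * 2 coordinates), synthetic
        y = rng.integers(0, 6, size=200)   # six expression labels (synthetic)

        model = make_pipeline(
            PCA(n_components=20),          # lower the dimensionality of the shape data
            SVC(kernel="rbf"),             # classify expressions in the reduced space
        )
        model.fit(X[:150], y[:150])
        print(model.score(X[150:], y[150:]))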

  9. Waste classification sampling plan

    SciTech Connect

    Landsman, S.D.

    1998-05-27

    The purpose of this sampling plan is to explain the method used to collect and analyze data necessary to verify and/or determine the radionuclide content of the B-Cell decontamination and decommissioning waste stream so that the correct waste classification for the waste stream can be made, and to collect samples for studies of decontamination methods that could be used to remove fixed contamination present on the waste. The scope of this plan is to establish the technical basis for collecting samples and compiling quantitative data on the radioactive constituents present in waste generated during deactivation activities in B-Cell. Sampling and radioisotopic analysis will be performed on the fixed layers of contamination present on structural material and internal surfaces of process piping and tanks. In addition, dose rate measurements on existing waste material will be performed to determine the fraction of dose rate attributable to both removable and fixed contamination. Samples will also be collected to support studies of decontamination methods that are effective in removing the fixed contamination present on the waste. Sampling performed under this plan will meet criteria established in BNF-2596, Data Quality Objectives for the B-Cell Waste Stream Classification Sampling, J. M. Barnett, May 1998.

  10. PSC: protein surface classification

    PubMed Central

    Tseng, Yan Yuan; Li, Wen-Hsiung

    2012-01-01

    We recently proposed to classify proteins by their functional surfaces. Using the structural attributes of functional surfaces, we inferred the pairwise relationships of proteins and constructed an expandable database of protein surface classification (PSC). As the functional surface(s) of a protein is the local region where the protein performs its function, our classification may reflect the functional relationships among proteins. Currently, PSC contains a library of 1974 surface types that include 25 857 functional surfaces identified from 24 170 bound structures. The search tool in PSC empowers users to explore related surfaces that share similar local structures and core functions. Each functional surface is characterized by structural attributes, which are geometric, physicochemical or evolutionary features. The attributes have been normalized as descriptors and integrated to produce a profile for each functional surface in PSC. In addition, binding ligands are recorded for comparisons among homologs. PSC allows users to exploit related binding surfaces to reveal the changes in functionally important residues on homologs that have led to functional divergence during evolution. The substitutions at the key residues of a spatial pattern may determine the functional evolution of a protein. In PSC (http://pocket.uchicago.edu/psc/), a pool of changes in residues on similar functional surfaces is provided. PMID:22669905

  11. The Transporter Classification Database

    PubMed Central

    Saier, Milton H.; Reddy, Vamsee S.; Tamang, Dorjee G.; Västermark, Åke

    2014-01-01

    The Transporter Classification Database (TCDB; http://www.tcdb.org) serves as a common reference point for transport protein research. The database contains more than 10 000 non-redundant proteins that represent all currently recognized families of transmembrane molecular transport systems. Proteins in TCDB are organized in a five level hierarchical system, where the first two levels are the class and subclass, the second two are the family and subfamily, and the last one is the transport system. Superfamilies that contain multiple families are included as hyperlinks to the five tier TC hierarchy. TCDB includes proteins from all types of living organisms and is the only transporter classification system that is both universal and recognized by the International Union of Biochemistry and Molecular Biology. It has been expanded by manual curation, contains extensive text descriptions providing structural, functional, mechanistic and evolutionary information, is supported by unique software and is interconnected to many other relevant databases. TCDB is of increasing usefulness to the international scientific community and can serve as a model for the expansion of database technologies. This manuscript describes an update of the database descriptions previously featured in NAR database issues. PMID:24225317

  12. Molecular classification of gliomas.

    PubMed

    Masui, Kenta; Mischel, Paul S; Reifenberger, Guido

    2016-01-01

    The identification of distinct genetic and epigenetic profiles in different types of gliomas has revealed novel diagnostic, prognostic, and predictive molecular biomarkers for refinement of glioma classification and improved prediction of therapy response and outcome. Therefore, the new (2016) World Health Organization (WHO) classification of tumors of the central nervous system breaks with the traditional principle of diagnosis based on histologic criteria only and incorporates molecular markers. This will involve a multilayered approach combining histologic features and molecular information in an "integrated diagnosis". We review the current state of diagnostic molecular markers for gliomas, focusing on isocitrate dehydrogenase 1 or 2 (IDH1/IDH2) gene mutation, α-thalassemia/mental retardation syndrome X-linked (ATRX) gene mutation, 1p/19q co-deletion and telomerase reverse transcriptase (TERT) promoter mutation in adult tumors, as well as v-raf murine sarcoma viral oncogene homolog B1 (BRAF) and H3 histone family 3A (H3F3A) aberrations in pediatric gliomas. We also outline prognostic and predictive molecular markers, including O6-methylguanine-DNA methyltransferase (MGMT) promoter methylation, and discuss the potential clinical relevance of biologic glioblastoma subtypes defined by integration of multiomics data. Commonly used methods for individual marker detection as well as novel large-scale DNA methylation profiling and next-generation sequencing approaches are discussed. Finally, we illustrate how advances in molecular diagnostics affect novel strategies of targeted therapy, thereby raising new challenges and identifying new leads for personalized treatment of glioma patients. PMID:26948350

  13. Use of manual densitometry in land cover classification

    NASA Technical Reports Server (NTRS)

    Jordan, D. C.; Graves, D. H.; Hammetter, M. C.

    1978-01-01

    Through use of manual spot densitometry values derived from multitemporal 1:24,000 color infrared aircraft photography, areas as small as one hectare in the Cumberland Plateau in Kentucky were accurately classified into one of eight ground cover groups. If distinguishing between undisturbed and disturbed forest areas is the sole criterion of interest, classification results are highly accurate if based on imagery taken during foliated ground cover conditions. Multiseasonal imagery analysis was superior to single-date analysis, and transparencies from prefoliated conditions gave better separation of conifers and hardwoods than did those from foliated conditions.

  14. Correction of Alar Retraction Based on Frontal Classification.

    PubMed

    Kim, Jae Hoon; Song, Jin Woo; Park, Sung Wan; Bartlett, Erica; Nguyen, Anh H

    2015-11-01

    Among the various types of alar deformations in Asians, alar retraction not only has the highest occurrence rate, but is also very complicated to treat because the ala is supported only by cartilage and its soft tissue envelope cannot be easily stretched. As patients' knowledge of aesthetic procedures is becoming more extensive due to increased information dissemination through various media, doctors must give more accurate, logical explanations of the procedures to be performed and their anticipated results, with an emphasis on relevant anatomical features, accurate diagnoses, detailed classifications, and various appropriate methods of surgery. PMID:26648808

  15. Hydrologic Landscape Regionalisation Using Deductive Classification and Random Forests

    PubMed Central

    Brown, Stuart C.; Lester, Rebecca E.; Versace, Vincent L.; Fawcett, Jonathon; Laurenson, Laurie

    2014-01-01

    Landscape classification and hydrological regionalisation studies are being increasingly used in ecohydrology to aid in the management and research of aquatic resources. We present a methodology for classifying hydrologic landscapes based on spatial environmental variables by employing non-parametric statistics and hybrid image classification. Our approach differed from previous classifications which have required the use of an a priori spatial unit (e.g. a catchment) which necessarily results in the loss of variability that is known to exist within those units. The use of a simple statistical approach to identify an appropriate number of classes eliminated the need for large amounts of post-hoc testing with different numbers of groups, or the selection and justification of an arbitrary number. Using statistical clustering, we identified 23 distinct groups within our training dataset. The use of a hybrid classification employing random forests extended this statistical clustering to an area of approximately 228,000 km2 of south-eastern Australia without the need to rely on catchments, landscape units or stream sections. This extension resulted in a highly accurate regionalisation at both 30-m and 2.5-km resolution, and a less-accurate 10-km classification that would be more appropriate for use at a continental scale. A smaller case study, of an area covering 27,000 km2, demonstrated that the method preserved the intra- and inter-catchment variability that is known to exist in local hydrology, based on previous research. Preliminary analysis linking the regionalisation to streamflow indices is promising suggesting that the method could be used to predict streamflow behaviour in ungauged catchments. Our work therefore simplifies current classification frameworks that are becoming more popular in ecohydrology, while better retaining small-scale variability in hydrology, thus enabling future attempts to explain and visualise broad-scale hydrologic trends at the scale of
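
    The sketch below illustrates the hybrid idea in scikit-learn: statistical clustering on a training sample defines the classes, and a random forest then extends those labels to every grid cell. The data, the number of clusters and all parameter values are stand-ins, not the study's actual configuration.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(2)
        train = rng.normal(size=(5000, 8))        # sampled cells x environmental variables
        full_grid = rng.normal(size=(100000, 8))  # every cell of the study area

        # Unsupervised step: statistical clustering defines the class labels.
        labels = KMeans(n_clusters=23, n_init=10, random_state=0).fit_predict(train)
        # Supervised step: a random forest extends those labels to the full grid.
        rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(train, labels)
        regions = rf.predict(full_grid)           # hydrologic landscape class per cell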

  16. Branch classification: A new mechanism for improving branch predictor performance

    SciTech Connect

    Chang, P.Y.; Hao, E.; Patt, Y.; Yeh, T.Y.

    1996-04-01

    There is wide agreement that one of the most significant impediments to the performance of current and future pipelined superscalar processors is the presence of conditional branches in the instruction stream. Speculative execution is one solution to the branch problem, but speculative work is discarded if a branch is mispredicted. For it to be effective, speculative execution requires a very accurate branch predictor; 95% accuracy is not good enough. This paper proposes branch classification, a methodology for building more accurate branch predictors. Branch classification allows an individual branch instruction to be associated with the branch predictor best suited to predict its direction. Using this approach, a hybrid branch predictor can be constructed such that each component branch predictor predicts those branches for which it is best suited. To demonstrate the usefulness of branch classification, an example classification scheme is given and a new hybrid predictor is built based on this scheme which achieves a higher prediction accuracy than any branch predictor previously reported in the literature.
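
    As a toy illustration of branch classification (not the paper's scheme), the sketch below profiles each branch's taken rate, routes strongly biased branches to a cheap static predictor, and leaves the rest to a two-bit saturating counter; the thresholds are arbitrary assumptions.

        from collections import defaultdict

        def hybrid_predict(trace, bias_threshold=0.95, warmup=16):
            profile = defaultdict(lambda: [0, 0])   # branch PC -> [times taken, executions]
            counters = defaultdict(lambda: 2)       # two-bit counters, start weakly taken
            correct = 0
            for pc, taken in trace:
                t, n = profile[pc]
                if n >= warmup and (t / n >= bias_threshold or t / n <= 1 - bias_threshold):
                    guess = t / n >= 0.5            # static predictor for biased branches
                else:
                    guess = counters[pc] >= 2       # two-bit dynamic predictor otherwise
                correct += (guess == taken)
                counters[pc] = min(3, counters[pc] + 1) if taken else max(0, counters[pc] - 1)
                profile[pc] = [t + taken, n + 1]
            return correct / len(trace)

        # usage: hybrid_predict([(0x400, True), (0x400, True), (0x408, False)])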

  17. Reconstruction-classification method for quantitative photoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Malone, Emma; Powell, Samuel; Cox, Ben T.; Arridge, Simon

    2015-12-01

    We propose a combined reconstruction-classification method for simultaneously recovering absorption and scattering in turbid media from images of absorbed optical energy. This method exploits knowledge that optical parameters are determined by a limited number of classes to iteratively improve their estimate. Numerical experiments show that the proposed approach allows for accurate recovery of absorption and scattering in two and three dimensions, and delivers superior image quality with respect to traditional reconstruction-only approaches.

  18. SPIDERz: SuPport vector classification for IDEntifying Redshifts

    NASA Astrophysics Data System (ADS)

    Jones, Evan; Singal, J.

    2016-08-01

    SPIDERz (SuPport vector classification for IDEntifying Redshifts) applies powerful support vector machine (SVM) optimization and statistical learning techniques to custom data sets to obtain accurate photometric redshift (photo-z) estimations. It is written for the IDL environment and can be applied to traditional data sets consisting of photometric band magnitudes, or alternatively to data sets with additional galaxy parameters (such as shape information) to investigate potential correlations between the extra galaxy parameters and redshift.
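
    SPIDERz itself is written for the IDL environment; the following is an analogous Python sketch, assuming SVM classification over narrow redshift bins with synthetic band magnitudes standing in for real photometry.

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(3)
        mags = rng.uniform(18, 25, size=(1000, 5))    # synthetic photometric band magnitudes
        z = rng.uniform(0, 2, size=1000)              # synthetic "true" redshifts
        z_bins = (z / 0.1).astype(int)                # treat photo-z as classification over bins

        svm = SVC(kernel="rbf", C=10.0).fit(mags[:800], z_bins[:800])
        z_est = svm.predict(mags[800:]) * 0.1 + 0.05  # bin centre as the photo-z estimate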

  19. Criminal exposure.

    PubMed

    1999-09-01

    In August, an HIV-positive man pleaded guilty to sexually assaulting a 14-year-old boy. The sleeping boy awoke to find [name removed] sexually assaulting him while watching a pornographic video. [Name removed] pleaded guilty to assault with intent to rape a child. In addition, [name removed] received three counts of indecent assault and battery on a child, and exposure of pornographic material to a minor. [Name removed] will remain on probation for five years, although the prosecution had recommended sentencing [name removed] to four or five years in prison. The boy continues to be tested for HIV. PMID:11366904

  20. 7 CFR 27.34 - Classification procedure.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Classification procedure. 27.34 Section 27.34... REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Classification and Micronaire Determinations § 27.34 Classification procedure. Classification shall proceed as rapidly as possible, but...

  1. 5 CFR 9901.221 - Classification requirements.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... SECURITY PERSONNEL SYSTEM (NSPS) Classification Classification Process § 9901.221 Classification... 5 Administrative Personnel 3 2010-01-01 2010-01-01 false Classification requirements. 9901.221... jobs as he or she considers necessary for the purpose of this section. (d) A classification action...

  2. Identification of Microorganisms by High Resolution Tandem Mass Spectrometry with Accurate Statistical Significance.

    PubMed

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y; Drake, Steven K; Gucek, Marjan; Suffredini, Anthony F; Sacks, David B; Yu, Yi-Kuo

    2016-02-01

    Correct and rapid identification of microorganisms is the key to the success of many important applications in health and safety, including, but not limited to, infection treatment, food safety, and biodefense. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is challenging correct microbial identification because of the large number of choices present. To properly disentangle candidate microbes, one needs to go beyond apparent morphology or simple 'fingerprinting'; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptidome profiles of microbes to better separate them and by designing an analysis method that yields accurate statistical significance. Here, we present an analysis pipeline that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using MS/MS data of 81 samples, each composed of a single known microorganism, that the proposed pipeline can correctly identify microorganisms at least at the genus and species levels. We have also shown that the proposed pipeline computes accurate statistical significances, i.e., E-values for identified peptides and unified E-values for identified microorganisms. The proposed analysis pipeline has been implemented in MiCId, a freely available software for Microorganism Classification and Identification. MiCId is available for download at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html. PMID:26510657

  3. Identification of Microorganisms by High Resolution Tandem Mass Spectrometry with Accurate Statistical Significance

    NASA Astrophysics Data System (ADS)

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y.; Drake, Steven K.; Gucek, Marjan; Suffredini, Anthony F.; Sacks, David B.; Yu, Yi-Kuo

    2016-02-01

    Correct and rapid identification of microorganisms is the key to the success of many important applications in health and safety, including, but not limited to, infection treatment, food safety, and biodefense. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is challenging correct microbial identification because of the large number of choices present. To properly disentangle candidate microbes, one needs to go beyond apparent morphology or simple `fingerprinting'; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptidome profiles of microbes to better separate them and by designing an analysis method that yields accurate statistical significance. Here, we present an analysis pipeline that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using MS/MS data of 81 samples, each composed of a single known microorganism, that the proposed pipeline can correctly identify microorganisms at least at the genus and species levels. We have also shown that the proposed pipeline computes accurate statistical significances, i.e., E-values for identified peptides and unified E-values for identified microorganisms. The proposed analysis pipeline has been implemented in MiCId, a freely available software for Microorganism Classification and Identification. MiCId is available for download at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.

  4. [Systematic classification and community research techniques of arbuscular mycorrhizal fungi: a review].

    PubMed

    Liu, Yong-Jun; Feng, Hu-Yuan

    2010-06-01

    Arbuscular mycorrhizal fungi (AMF) are an important component of natural ecosystems, being able to form symbioses with plant roots. The traditional AMF classification is mainly based on the morphological identification of soil asexual spores, which has some limitations in the taxonomy of AMF. Advanced molecular techniques make the classification of AMF more accurate and scientific, and can improve the taxonomy of AMF established on the basis of morphological identification. The community research of AMF is mainly based on species classification, and has two kinds of investigation methods, i.e., spore morphological identification and molecular analysis. This paper reviewed the research progress in the systematic classification and community research techniques of AMF, with a focus on the molecular techniques used in community analysis of AMF. It was considered that using morphological and molecular methods together would contribute to accurate investigation of AMF communities and would also facilitate the improvement of AMF taxonomy. PMID:20873637

  5. Accurately Mapping M31's Microlensing Population

    NASA Astrophysics Data System (ADS)

    Crotts, Arlin

    2004-07-01

    We propose to augment an existing microlensing survey of M31 with source identifications provided by a modest amount of ACS (and WFPC2 parallel) observations to yield an accurate measurement of the masses responsible for microlensing in M31, and presumably much of its dark matter. The main benefit of these data is the determination of the physical (or "Einstein") timescale of each microlensing event, rather than an effective ("FWHM") timescale, allowing masses to be determined more than twice as accurately as without HST data. The Einstein timescale is the ratio of the lensing cross-sectional radius and relative velocities. Velocities are known from kinematics, and the cross-section is directly proportional to the (unknown) lensing mass. We cannot easily measure these quantities without knowing the amplification, hence the baseline magnitude, which requires the resolution of HST to find the source star. This makes a crucial difference because M31 lens mass determinations can be more accurate than those towards the Magellanic Clouds through our Galaxy's halo (for the same number of microlensing events) due to the better constrained geometry in the M31 microlensing situation. Furthermore, our larger survey, just completed, should yield at least 100 M31 microlensing events, more than any Magellanic survey. A small amount of ACS+WFPC2 imaging will deliver the potential of this large database (about 350 nights). For the whole survey (and a delta-function mass distribution) the mass error should approach only about 15%, or about 6% error in slope for a power-law distribution. These results will better allow us to pinpoint the lens halo fraction, and the shape of the halo lens spatial distribution, and allow generalization/comparison of the nature of halo dark matter in spiral galaxies. In addition, we will be able to establish the baseline magnitude for about 50,000 variable stars, as well as measure an unprecedentedly detailed color-magnitude diagram and luminosity

  6. Accurate measurement of unsteady state fluid temperature

    NASA Astrophysics Data System (ADS)

    Jaremkiewicz, Magdalena

    2016-07-01

    In this paper, two accurate methods for determining the transient fluid temperature were presented. Measurements were conducted for boiling water since its temperature is known. At the beginning, the thermometers are at ambient temperature; they are then immediately immersed into saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter equal to 15 mm. One of them is a K-type industrial thermometer widely available commercially. The temperature indicated by the thermometer was corrected by treating the thermometer as a first- or second-order inertia device. A new thermometer design was proposed and also used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with the sheath thermocouple located in its center. The temperature of the fluid was determined based on measurements taken in the axis of the solid cylindrical element (housing) using the inverse space marching method. Measurements of the transient temperature of the air flowing through the wind tunnel using the same thermometers were also carried out. The proposed measurement technique provides more accurate results compared with measurements using industrial thermometers in conjunction with simple temperature correction using a first- or second-order inertial thermometer model. By comparing the results, it was demonstrated that the new thermometer allows obtaining the fluid temperature much faster and with higher accuracy in comparison to the industrial thermometer. Accurate measurements of the fast-changing fluid temperature are possible due to the low-inertia thermometer and the fast space marching method applied for solving the inverse heat conduction problem.
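
    A minimal sketch of the simple first-order correction mentioned above: for a thermometer with known time constant tau, the fluid temperature can be reconstructed as T_fluid = T_ind + tau * dT_ind/dt. The time constant and signal below are assumed, idealized values, not the paper's measured data.

        import numpy as np

        tau = 8.0                          # thermometer time constant, s (assumed)
        t = np.linspace(0.0, 60.0, 601)    # time grid, s
        T_true, T_amb = 100.0, 20.0        # boiling water and ambient, deg C
        T_ind = T_amb + (T_true - T_amb) * (1 - np.exp(-t / tau))  # ideal first-order response

        dTdt = np.gradient(T_ind, t)       # numerical derivative of the indication
        T_corrected = T_ind + tau * dTdt   # reconstructed fluid temperature

        print(T_ind[50], T_corrected[50])  # t = 5 s: ~57 deg C indicated vs ~100 deg C corrected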

  7. Accurate upwind methods for the Euler equations

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1993-01-01

    A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one intermediate state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
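
    The sketch below illustrates a median-based monotonicity constraint of the general kind described (a sketch under stated assumptions, not the paper's exact constraint), using the identity median(a, b, c) = a + minmod(b - a, c - a).

        import numpy as np

        def minmod(x, y):
            # Zero if the arguments disagree in sign; else the smaller in magnitude.
            return 0.5 * (np.sign(x) + np.sign(y)) * np.minimum(np.abs(x), np.abs(y))

        def median3(a, b, c):
            # Median of three values expressed via minmod.
            return a + minmod(b - a, c - a)

        def limited_slope(um, u, up):
            # Slope for the cell with value u and neighbours um, up: the central
            # difference is clipped so the piecewise-linear reconstruction stays monotone.
            central = 0.5 * (up - um)
            return median3(0.0, central, 2.0 * minmod(u - um, up - u))

        print(limited_slope(0.0, 0.0, 1.0))  # 0.0: no slope at the foot of a jump
        print(limited_slope(0.0, 1.0, 2.0))  # 1.0: central difference kept in smooth data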

  8. The first accurate description of an aurora

    NASA Astrophysics Data System (ADS)

    Schröder, Wilfried

    2006-12-01

    As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers an interesting look into how medieval scholars viewed the subjects that we study.Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature). The book was written between 1349 and 1350.

  9. Are Kohn-Sham conductances accurate?

    PubMed

    Mera, H; Niquet, Y M

    2010-11-19

    We use Fermi-liquid relations to address the accuracy of conductances calculated from the single-particle states of exact Kohn-Sham (KS) density functional theory. We demonstrate a systematic failure of this procedure for the calculation of the conductance, and show how it originates from the lack of renormalization in the KS spectral function. In certain limits this failure can lead to a large overestimation of the true conductance. We also show, however, that the KS conductances can be accurate for single-channel molecular junctions and systems where direct Coulomb interactions are strongly dominant. PMID:21231333

  10. Accurate density functional thermochemistry for larger molecules.

    SciTech Connect

    Raghavachari, K.; Stefanov, B. B.; Curtiss, L. A.; Lucent Tech.

    1997-06-20

    Density functional methods are combined with isodesmic bond separation reaction energies to yield accurate thermochemistry for larger molecules. Seven different density functionals are assessed for the evaluation of heats of formation, ΔH0(298 K), for a test set of 40 molecules composed of H, C, O and N. The use of bond separation energies results in a dramatic improvement in the accuracy of all the density functionals. The B3-LYP functional has the smallest mean absolute deviation from experiment (1.5 kcal/mol).

  11. New law requires 'medically accurate' lesson plans.

    PubMed

    1999-09-17

    The California Legislature has passed a bill requiring all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material. PMID:11366835

  12. 14 CFR 1203.501 - Applying derivative classification markings.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... INFORMATION SECURITY PROGRAM Derivative Classification § 1203.501 Applying derivative classification markings... classification decisions: (b) Verify the information's current level of classification so far as...

  13. ALHAMBRA survey: morphological classification

    NASA Astrophysics Data System (ADS)

    Pović, M.; Huertas-Company, M.; Márquez, I.; Masegosa, J.; Aguerri, J. A. López; Husillos, C.; Molino, A.; Cristóbal-Hornillos, D.

    2015-03-01

    The Advanced Large Homogeneous Area Medium Band Redshift Astronomical (ALHAMBRA) survey is a photometric survey designed to study systematically cosmic evolution and cosmic variance (Moles et al. 2008). It employs 20 continuous medium-band filters (3500 - 9700 Å), plus JHK near-infrared (NIR) bands, which enable measurements of photometric redshifts with good accuracy. ALHAMBRA covers > 4 deg2 in eight discontinuous regions (~ 0.5 deg2 per region); of these, seven fields overlap with other extragalactic, multiwavelength surveys (DEEP2, SDSS, COSMOS, HDF-N, Groth, ELAIS-N1). We detect > 600,000 sources, reaching the depth of R(AB) ~ 25.0, and photometric accuracy of 2-4% (Husillos et al., in prep.). Photometric redshifts are measured using the Bayesian Photometric Redshift (BPZ) code (Benítez et al. 2000), reaching one of the best accuracies to date of δz/z <= 1.2% (Molino et al., in prep.). To deal with the morphological classification of galaxies in the ALHAMBRA survey (Pović et al., in prep.), we used the galaxy Support Vector Machine code (galSVM; Huertas-Company 2008, 2009), one of the new non-parametric methods for morphological classification, especially useful when dealing with low-resolution and high-redshift data. To test the accuracy of our morphological classification we used a sample of 3000 local, visually classified galaxies (Nair & Abraham 2010), moving them to conditions typical of our ALHAMBRA data (taking into account the background, redshift and magnitude distributions, etc.), and measuring their morphology using galSVM. Finally, we measured the morphology of ALHAMBRA galaxies, obtaining for each source seven morphological parameters (two concentration indexes, asymmetry, Gini, M20 moment of light, smoothness, and elongation), the probability that the source belongs to early- or late-type, and its error. Comparing ALHAMBRA morphology with COSMOS/ACS morphology (obtained with the same method) we expect to have qualitative separation in two main morphological

  14. Nocturnal lagophthalmos: an overview and classification.

    PubMed

    Latkany, Robert L; Lock, Barbara; Speaker, Mark

    2006-01-01

    Nocturnal lagophthalmos is the inability to close the eyelids during sleep. Lagophthalmos is associated with exposure keratopathy, poor sleep, and persistent exposure-related symptoms. There are a variety of causes of lagophthalmos, grouped as proptosis/eye exposure etiologies and palpebral insufficiency etiologies. Although obvious lagophthalmos is usually detected, it is sometimes difficult to recognize obscure lagophthalmos, due either to eyelash obstruction or overhang of the upper lid anterior and inferior to the most superior portion of the lower lid in a closed position. We present a novel classification system and illustrations of obvious and obscure lagophthalmos. A diagnosis can usually be made with a focused history and slit lamp examination. Treatment is multipronged and may include minor procedures or ocular surgery to correct the lid malposition; natural, topical or oral agents; and punctal plugs to manage ocular surface effects. Correct and timely diagnosis allows greater opportunity for relief of patient suffering and prevention of severe ocular surface pathology, as well as educated planning for future ocular surgical procedures. PMID:16671223

  15. A Functional Classification of Occupations.

    ERIC Educational Resources Information Center

    McKinlay, Donald Bruce

    Efforts to meet the need for more and better manpower information are hampered by the lack of adequate occupational data classification systems. The diversity of interests in occupations probably accounts for the absence of consensus regarding either the general outlines or the specific details of a standardized occupational classification system which would…

  16. A New Classification of Sandstone.

    ERIC Educational Resources Information Center

    Brewer, Roger Clay; And Others

    1990-01-01

    Introduced is a sandstone classification scheme intended for use with thin-sections and hand specimens. Detailed is a step-by-step classification scheme. A graphic presentation of the scheme is presented. This method is compared with other existing schemes. (CW)

  17. Applications of feature selection. [development of classification algorithms for LANDSAT data

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.

    1976-01-01

    The use of satellite-acquired (LANDSAT) multispectral scanner (MSS) data to conduct an inventory of some crop of economic interest such as wheat over a large geographical area is considered in relation to the development of accurate and efficient algorithms for data classification. The dimension of the measurement space and the computational load for a classification algorithm is increased by the use of multitemporal measurements. Feature selection/combination techniques used to reduce the dimensionality of the problem are described.

  18. Predictive Classification Trees

    NASA Astrophysics Data System (ADS)

    Dlugosz, Stephan; Müller-Funk, Ulrich

    CART (Breiman et al., Classification and Regression Trees, Chapman and Hall, New York, 1984) and (exhaustive) CHAID (Kass, Appl Stat 29:119-127, 1980) figure prominently among the procedures actually used in data-based management, etc. CART is a well-established procedure that produces binary trees. CHAID, in contrast, admits multiple splittings, a feature that allows the splitting variable to be exploited more extensively. On the other hand, that procedure depends on premises that are questionable in practical applications. This can be put down to the fact that CHAID relies on simultaneous chi-square and F-tests, respectively. The null distribution of the second test statistic, for instance, relies on the normality assumption, which is not plausible in a data mining context. Moreover, none of these procedures - as implemented in SPSS, for instance - take ordinal dependent variables into account. In the paper we suggest an alternative tree algorithm that: Requires explanatory categorical variables
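
    For a concrete contrast, the sketch below fits a CART-style binary tree with scikit-learn (which does not implement CHAID's multi-way splits) on a toy data set; the depth limit is an arbitrary assumption.

        from sklearn.datasets import load_iris
        from sklearn.tree import DecisionTreeClassifier, export_text

        X, y = load_iris(return_X_y=True)
        tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)  # binary splits only
        print(export_text(tree))   # each internal node tests a single splitting variable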

  19. Classification-based reasoning

    NASA Technical Reports Server (NTRS)

    Gomez, Fernando; Segami, Carlos

    1991-01-01

    A representation formalism for N-ary relations, quantification, and definition of concepts is described. Three types of conditions are associated with the concepts: (1) necessary and sufficient properties, (2) contingent properties, and (3) necessary properties. Also explained is how complex chains of inferences can be accomplished by representing existentially quantified sentences, and concepts denoted by restrictive relative clauses as classification hierarchies. The representation structures that make possible the inferences are explained first, followed by the reasoning algorithms that draw the inferences from the knowledge structures. All the ideas explained have been implemented and are part of the information retrieval component of a program called Snowy. An appendix contains a brief session with the program.

  20. Bayesian classification theory

    NASA Technical Reports Server (NTRS)

    Hanson, Robin; Stutz, John; Cheeseman, Peter

    1991-01-01

    The task of inferring a set of classes and class descriptions most likely to explain a given data set can be placed on a firm theoretical foundation using Bayesian statistics. Within this framework and using various mathematical and algorithmic approximations, the AutoClass system searches for the most probable classifications, automatically choosing the number of classes and complexity of class descriptions. A simpler version of AutoClass has been applied to many large real data sets, has discovered new independently-verified phenomena, and has been released as a robust software package. Recent extensions allow attributes to be selectively correlated within particular classes, and allow classes to inherit or share model parameters through a class hierarchy. We summarize the mathematical foundations of AutoClass.
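
    AutoClass itself is not reproduced here; as a rough stand-in for its Bayesian class search, the sketch below fits Gaussian mixtures of increasing size and lets an information criterion choose the number of classes, with purely synthetic data.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(4)
        X = np.vstack([rng.normal(0, 1, (100, 2)),   # two synthetic clusters
                       rng.normal(4, 1, (120, 2))])

        fits = [GaussianMixture(n_components=k, random_state=0).fit(X) for k in range(1, 6)]
        best = min(fits, key=lambda m: m.bic(X))     # criterion chooses the number of classes
        print(best.n_components, best.predict(X)[:10])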

  1. Classification of personality disorder.

    PubMed

    Tyrer, P; Alexander, J

    1979-08-01

    An interview schedule was used to record the personality traits of 130 psychiatric patients, 65 with a primary clinical diagnosis of personality disorder and 65 with other diagnoses. The results were analysed by factor analysis and three types of cluster analysis. Factor analysis showed a similar structure of personality variables in both groups of patients, supporting the notion that personality disorders differ only in degree from the personalities of other psychiatric patients. Cluster analysis revealed five discrete categories; sociopathic, passive-dependent, anankastic, schizoid and a non-personality-disordered group. Of all the personality-disordered patients 63 per cent fell into the passive-dependent or sociopathic category. The results suggest that the current classification of personality disorder could be simplified. PMID:497619

  2. Robust Vertex Classification.

    PubMed

    Chen, Li; Shen, Cencheng; Vogelstein, Joshua T; Priebe, Carey E

    2016-03-01

    For random graphs distributed according to stochastic blockmodels, a special case of latent position graphs, adjacency spectral embedding followed by appropriate vertex classification is asymptotically Bayes optimal; but this approach requires knowledge of and critically depends on the model dimension. In this paper, we propose a sparse representation vertex classifier which does not require information about the model dimension. This classifier represents a test vertex as a sparse combination of the vertices in the training set and uses the recovered coefficients to classify the test vertex. We prove consistency of our proposed classifier for stochastic blockmodels, and demonstrate that the sparse representation classifier can predict vertex labels with higher accuracy than adjacency spectral embedding approaches via both simulation studies and real data experiments. Our results demonstrate the robustness and effectiveness of our proposed vertex classifier when the model dimension is unknown. PMID:26340770
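
    A hedged sketch of the sparse-representation idea follows: a test point (here an arbitrary feature vector) is expressed as a sparse combination of training points via Lasso, and the class whose coefficients best reconstruct it wins. This illustrates the principle only, not the paper's adjacency spectral embedding pipeline.

        import numpy as np
        from sklearn.linear_model import Lasso

        def src_classify(X_train, y_train, x_test, alpha=0.01):
            # Express x_test as a sparse combination of the training points.
            lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
            lasso.fit(X_train.T, x_test)             # columns of X_train.T are training points
            coef = lasso.coef_
            # Assign the class whose coefficients reconstruct x_test best.
            residuals = {}
            for c in np.unique(y_train):
                mask = (y_train == c)
                residuals[c] = np.linalg.norm(x_test - X_train[mask].T @ coef[mask])
            return min(residuals, key=residuals.get)

        rng = np.random.default_rng(5)
        X = np.vstack([rng.normal(0, 1, (30, 10)), rng.normal(3, 1, (30, 10))])
        y = np.array([0] * 30 + [1] * 30)
        print(src_classify(X, y, rng.normal(3, 1, 10)))  # expected output: 1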

  3. Unsupervised hyperspectral image classification

    NASA Astrophysics Data System (ADS)

    Jiao, Xiaoli; Chang, Chein-I.

    2007-09-01

    Two major issues encountered in unsupervised hyperspectral image classification are (1) how to determine the number of spectral classes in the image and (2) how to find training samples that well represent each of the spectral classes without prior knowledge. A recently developed concept, virtual dimensionality (VD), is used to estimate the number of spectral classes of interest in the image data. This paper proposes an effective algorithm to generate an appropriate training set via a recently developed Prioritized Independent Component Analysis (PICA). Two sets of hyperspectral data, Airborne Visible Infrared Imaging Spectrometer (AVIRIS) Cuprite data and HYperspectral Digital Image Collection Experiment (HYDICE) data, are used for experiments and performance analysis of the proposed method.

  4. Accurate basis set truncation for wavefunction embedding

    NASA Astrophysics Data System (ADS)

    Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.

    2013-07-01

    Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012)], 10.1021/ct300544e to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.

  5. Accurate radiative transfer calculations for layered media.

    PubMed

    Selden, Adrian C

    2016-07-01

    Simple yet accurate results for radiative transfer in layered media with discontinuous refractive index are obtained by the method of K-integrals. These are certain weighted integrals applied to the angular intensity distribution at the refracting boundaries. The radiative intensity is expressed as the sum of the asymptotic angular intensity distribution valid in the depth of the scattering medium and a transient term valid near the boundary. Integrated boundary equations are obtained, yielding simple linear equations for the intensity coefficients, enabling the angular emission intensity and the diffuse reflectance (albedo) and transmittance of the scattering layer to be calculated without solving the radiative transfer equation directly. Examples are given of half-space, slab, interface, and double-layer calculations, and extensions to multilayer systems are indicated. The K-integral method is orders of magnitude more accurate than diffusion theory and can be applied to layered scattering media with a wide range of scattering albedos, with potential applications to biomedical and ocean optics. PMID:27409700

  6. Fast and accurate propagation of coherent light

    PubMed Central

    Lewis, R. D.; Beylkin, G.; Monzón, L.

    2013-01-01

    We describe a fast algorithm to propagate, for any user-specified accuracy, a time-harmonic electromagnetic field between two parallel planes separated by a linear, isotropic and homogeneous medium. The analytical formulation of this problem (ca 1897) requires the evaluation of the so-called Rayleigh–Sommerfeld integral. If the distance between the planes is small, this integral can be accurately evaluated in the Fourier domain; if the distance is very large, it can be accurately approximated by asymptotic methods. In the large intermediate region of practical interest, where the oscillatory Rayleigh–Sommerfeld kernel must be applied directly, current numerical methods can be highly inaccurate without indicating this fact to the user. In our approach, for any user-specified accuracy ϵ>0, we approximate the kernel by a short sum of Gaussians with complex-valued exponents, and then efficiently apply the result to the input data using the unequally spaced fast Fourier transform. The resulting algorithm has computational complexity , where we evaluate the solution on an N×N grid of output points given an M×M grid of input samples. Our algorithm maintains its accuracy throughout the computational domain. PMID:24204184

  7. How Accurately can we Calculate Thermal Systems?

    SciTech Connect

    Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A

    2004-04-20

    I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as K_eff, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully, eventually this will lead to improvements in both our codes and the thermal scattering models that they use in the future. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium fueled thermal system, i.e., our typical thermal reactors.

  8. Accurate pose estimation for forensic identification

    NASA Astrophysics Data System (ADS)

    Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk

    2010-04-01

    In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we will therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We will illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.

  9. Accurate determination of characteristic relative permeability curves

    NASA Astrophysics Data System (ADS)

    Krause, Michael H.; Benson, Sally M.

    2015-09-01

    A recently developed technique to accurately characterize sub-core scale heterogeneity is applied to investigate the factors responsible for flowrate-dependent effective relative permeability curves measured on core samples in the laboratory. The dependency of laboratory measured relative permeability on flowrate has long been both supported and challenged by a number of investigators. Studies have shown that this apparent flowrate dependency is a result of both sub-core scale heterogeneity and outlet boundary effects. However this has only been demonstrated numerically for highly simplified models of porous media. In this paper, flowrate dependency of effective relative permeability is demonstrated using two rock cores, a Berea Sandstone and a heterogeneous sandstone from the Otway Basin Pilot Project in Australia. Numerical simulations of steady-state coreflooding experiments are conducted at a number of injection rates using a single set of input characteristic relative permeability curves. Effective relative permeability is then calculated from the simulation data using standard interpretation methods for calculating relative permeability from steady-state tests. Results show that simplified approaches may be used to determine flowrate-independent characteristic relative permeability provided flow rate is sufficiently high, and the core heterogeneity is relatively low. It is also shown that characteristic relative permeability can be determined at any typical flowrate, and even for geologically complex models, when using accurate three-dimensional models.

  10. Applications of neural networks to radar image classification

    SciTech Connect

    Hara, Yoshihisa; Atkins, R.G.; Yueh, S.H.; Shin, R.T.; Kong, J.A.

    1994-01-01

    Classification of terrain cover using polarimetric radar is an area of considerable current interest and research. A number of methods have been developed to classify ground terrain types from fully polarimetric synthetic aperture radar (SAR) images, and these techniques are often grouped into supervised and unsupervised approaches. Supervised methods have yielded higher accuracy than unsupervised techniques, but suffer from the need for human interaction to determine classes and training regions. In contrast, unsupervised methods determine classes automatically, but generally show limited ability to accurately divide terrain into natural classes. In this paper, a new terrain classification technique is introduced to determine terrain classes in polarimetric SAR images, utilizing unsupervised neural networks to provide automatic classification, and employing an iterative algorithm to improve the performance. Several types of unsupervised neural networks are first applied to the classification of SAR images, and the results are compared to those of more conventional unsupervised methods. Results show that one neural network method--Learning Vector Quantization (LVQ)--outperforms the conventional unsupervised classifiers, but is still inferior to supervised methods. To overcome this poor accuracy, an iterative algorithm is proposed where the SAR image is reclassified using a Maximum Likelihood (ML) classifier. It is shown that this algorithm converges, and significantly improves classification accuracy. Performance after convergence is seen to be comparable to that obtained with a supervised ML classifier, while maintaining the advantages of an unsupervised technique.
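
    For concreteness, a toy LVQ1 sketch in the spirit of the method named above; the data, prototype counts, and learning rate are invented, and the iterative ML reclassification stage is not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def lvq1(X, y, n_protos=2, lr=0.05, epochs=20):
        """Basic LVQ1: move the nearest prototype toward correctly labeled
        samples and away from mislabeled ones."""
        classes = np.unique(y)
        protos = np.vstack([X[y == c][rng.choice((y == c).sum(), n_protos, replace=False)]
                            for c in classes])
        plabels = np.repeat(classes, n_protos)
        for _ in range(epochs):
            for i in rng.permutation(len(X)):
                j = np.argmin(np.linalg.norm(protos - X[i], axis=1))  # nearest prototype
                sign = 1.0 if plabels[j] == y[i] else -1.0            # attract or repel
                protos[j] += sign * lr * (X[i] - protos[j])
        return protos, plabels

    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
    y = np.repeat([0, 1], 50)
    protos, plabels = lvq1(X, y)
    pred = plabels[np.argmin(np.linalg.norm(X[:, None] - protos, axis=2), axis=1)]
    print("training accuracy:", (pred == y).mean())
    ```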

  11. An Efficient Ensemble Learning Method for Gene Microarray Classification

    PubMed Central

    Shadgar, Bita

    2013-01-01

    Gene microarray analysis and classification have demonstrated an effective way to diagnose diseases and cancers. However, it has also been revealed that basic classification techniques have intrinsic drawbacks in achieving accurate gene classification and cancer diagnosis. On the other hand, classifier ensembles have received increasing attention in various applications. Here, we address the gene classification issue using the RotBoost ensemble methodology. This method is a combination of the Rotation Forest and AdaBoost techniques, which preserves both desirable features of an ensemble architecture, that is, accuracy and diversity. To select a concise subset of informative genes, 5 different feature selection algorithms are considered. To assess the efficiency of RotBoost, other nonensemble/ensemble techniques including Decision Trees, Support Vector Machines, Rotation Forest, AdaBoost, and Bagging are also deployed. Experimental results have revealed that the combination of the fast correlation-based feature selection method with the ICA-based RotBoost ensemble is highly effective for gene classification. In fact, the proposed method can create ensemble classifiers which outperform not only the classifiers produced by conventional machine learning but also the classifiers generated by two widely used conventional ensemble learning methods, that is, Bagging and AdaBoost. PMID:24024194
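
    A simplified sketch of the RotBoost idea, under the assumption that a whole-space PCA rotation fitted on a bootstrap sample can stand in for Rotation Forest's per-subset rotations; data are synthetic and the vote rule is a plain majority.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.decomposition import PCA
    from sklearn.ensemble import AdaBoostClassifier

    X, y = make_classification(n_samples=200, n_features=30, n_informative=10, random_state=0)
    rng = np.random.default_rng(0)

    members = []
    for _ in range(5):
        boot = rng.choice(len(X), len(X), replace=True)
        rot = PCA().fit(X[boot])                      # rotation fitted on a bootstrap sample
        clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(rot.transform(X), y)
        members.append((rot, clf))

    votes = np.array([clf.predict(rot.transform(X)) for rot, clf in members])
    pred = (votes.mean(axis=0) > 0.5).astype(int)     # majority vote over binary labels
    print("ensemble training accuracy:", (pred == y).mean())
    ```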

  12. Automatic classification of time-variable X-ray sources

    SciTech Connect

    Lo, Kitty K.; Farrell, Sean; Murphy, Tara; Gaensler, B. M.

    2014-05-01

    To maximize the discovery potential of future synoptic surveys, especially in the field of transient science, it will be necessary to use automatic classification to identify some of the astronomical sources. The data mining technique of supervised classification is suitable for this problem. Here, we present a supervised learning method to automatically classify variable X-ray sources in the Second XMM-Newton Serendipitous Source Catalog (2XMMi-DR2). Random Forest is our classifier of choice since it is one of the most accurate learning algorithms available. Our training set consists of 873 variable sources and their features are derived from time series, spectra, and other multi-wavelength contextual information. The 10-fold cross-validation accuracy of the training data is ∼97% on a 7-class data set. We applied the trained classification model to 411 unknown variable 2XMM sources to produce a probabilistically classified catalog. Using the classification margin and the Random Forest derived outlier measure, we identified 12 anomalous sources, of which 2XMM J180658.7–500250 appears to be the most unusual source in the sample. Its X-ray spectrum is suggestive of an ultraluminous X-ray source but its variability makes it highly unusual. Machine-learned classification and anomaly detection will facilitate scientific discoveries in the era of all-sky surveys.
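
    A hedged sketch of such a pipeline: cross-validate a Random Forest on the labeled sources, then attach class probabilities and a classification margin to unknown sources. The feature set and labels below are placeholders, not the 2XMMi-DR2 features.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # 873 labeled sources, 7 classes, 20 placeholder features.
    X_train, y_train = make_classification(n_samples=873, n_features=20, n_informative=12,
                                           n_classes=7, random_state=0)
    rf = RandomForestClassifier(n_estimators=500, random_state=0)
    print("10-fold CV accuracy:", cross_val_score(rf, X_train, y_train, cv=10).mean())

    rf.fit(X_train, y_train)
    X_unknown = np.random.default_rng(1).normal(size=(411, 20))  # "unknown" sources
    proba = rf.predict_proba(X_unknown)               # probabilistic catalog entries
    top2 = np.sort(proba, axis=1)[:, -2:]
    margin = top2[:, 1] - top2[:, 0]                  # small margin = ambiguous source
    print("most ambiguous source index:", int(np.argmin(margin)))
    ```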

  13. Performance-scalable volumetric data classification for online industrial inspection

    NASA Astrophysics Data System (ADS)

    Abraham, Aby J.; Sadki, Mustapha; Lea, R. M.

    2002-03-01

    Non-intrusive inspection and non-destructive testing of manufactured objects with complex internal structures typically requires the enhancement, analysis and visualization of high-resolution volumetric data. Given the increasing availability of fast 3D scanning technology (e.g. cone-beam CT), enabling on-line detection and accurate discrimination of components or sub-structures, the inherent complexity of classification algorithms inevitably leads to throughput bottlenecks. Indeed, whereas typical inspection throughput requirements range from 1 to 1000 volumes per hour, depending on density and resolution, current computational capability is one to two orders-of-magnitude less. Accordingly, speeding up classification algorithms requires both reduction of algorithm complexity and acceleration of computer performance. A shape-based classification algorithm, offering algorithm complexity reduction, by using ellipses as generic descriptors of solids-of-revolution, and supporting performance-scalability, by exploiting the inherent parallelism of volumetric data, is presented. A two-stage variant of the classical Hough transform is used for ellipse detection and correlation of the detected ellipses facilitates position-, scale- and orientation-invariant component classification. Performance-scalability is achieved cost-effectively by accelerating a PC host with one or more COTS (Commercial-Off-The-Shelf) PCI multiprocessor cards. Experimental results are reported to demonstrate the feasibility and cost-effectiveness of the data-parallel classification algorithm for on-line industrial inspection applications.
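
    A single-stage stand-in for the two-stage Hough approach, using scikit-image's ellipse Hough transform on a synthetic edge image; parameter values are illustrative.

    ```python
    import numpy as np
    from skimage.draw import ellipse_perimeter
    from skimage.transform import hough_ellipse

    edges = np.zeros((100, 100), dtype=np.uint8)
    rr, cc = ellipse_perimeter(50, 50, 20, 35)        # ground-truth elliptical contour
    edges[rr, cc] = 1

    found = hough_ellipse(edges, accuracy=20, threshold=4, min_size=30)
    if len(found):
        found.sort(order="accumulator")               # strongest candidate last
        best = found[-1]
        print("centre/axes estimate:", [float(best[k]) for k in ("yc", "xc", "a", "b")])
    ```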

  14. Cardiac arrhythmia classification using multi-modal signal analysis.

    PubMed

    Kalidas, V; Tamil, L S

    2016-08-01

    In this paper, as a contribution to the Physionet/Computing in Cardiology 2015 Challenge, we present individual algorithms to accurately classify five different life threatening arrhythmias with the goal of suppressing false alarm generation in intensive care units. Information obtained by analysing electrocardiogram, photoplethysmogram and arterial blood pressure signals was utilized to develop the classification models. Prior to classification, the signals were subject to a signal pre-processing stage for quality analysis. Classification was performed using a combination of support vector machine based machine learning approach and logical analysis techniques. The predicted result for a certain arrhythmia classification model was verified by logical analysis to aid in reduction of false alarms. Separate feature vectors were formed for predicting the presence or absence of each arrhythmia, using both spectral and time-domain information. The training and test data were obtained from the Physionet/CinC Challenge 2015 database. Classification algorithms were written for two different categories of data, namely real-time and retrospective, whose data lengths were 10 s and an additional 30 s, respectively. For the real-time test dataset, sensitivity of 94% and specificity of 82% were obtained. Similarly, for the retrospective test dataset, sensitivity of 94% and specificity of 86% were obtained. PMID:27454417

  15. Robotic Rock Classification

    NASA Technical Reports Server (NTRS)

    Hebert, Martial

    1999-01-01

    This report describes a three-month research program undertaken jointly by the Robotics Institute at Carnegie Mellon University and Ames Research Center as part of Ames' Joint Research Initiative (JRI). The work was conducted at the Ames Research Center by Mr. Liam Pedersen, a graduate student in the CMU Ph.D. program in Robotics, under the supervision of Dr. Ted Roush at the Space Science Division of the Ames Research Center from May 15, 1999 to August 15, 1999. Dr. Martial Hebert is Mr. Pedersen's research adviser at CMU and is Principal Investigator of this Grant. The goal of this project is to investigate and implement methods suitable for a robotic rover to autonomously identify rocks and minerals in its vicinity, and to statistically characterize the local geological environment. Although the primary sensors for these tasks are a reflection spectrometer and color camera, the goal is to create a framework under which data from multiple sensors, and multiple readings on the same object, can be combined in a principled manner. Furthermore, it is envisioned that knowledge of the local area, either a priori or gathered by the robot, will be used to improve classification accuracy. The key results obtained during this project are: the continuation of the development of a rock classifier; development of theoretical statistical methods; development of methods for evaluating and selecting sensors; and experimentation with data mining techniques on the Ames spectral library. The results of this work are being applied at CMU, in particular in the context of the Winter 99 Antarctica expedition, in which the classification techniques will be used on the Nomad robot. Conversely, the software developed based on those techniques will continue to be made available to NASA Ames, and the data collected from the Nomad experiments will also be made available.

  16. Classification of Rainbows

    NASA Astrophysics Data System (ADS)

    Ricard, J. L.; Peter, A. L.; Barckicke, J.

    2015-12-01

    CLASSIFICATION OF RAINBOWS. Jean Louis Ricard (1,2,*), Peter Adams (2) and Jean Barckicke (2,3); (1) CNRM, Météo-France, 42 Avenue Gaspard Coriolis, 31057 Toulouse, France; (2) CEPAL, 148 Himley Road, Dudley, West Midlands DY1 2QH, United Kingdom; (3) DP/Compas, Météo-France, 42 Avenue Gaspard Coriolis, 31057 Toulouse, France; *Corresponding author: Dr_Jean_Ricard@yahoo.co.uk. Rainbows are the most beautiful and most spectacular optical atmospheric phenomena. Humphreys (1964) pointedly noted that "the 'explanations' generally given of the rainbow [in textbooks] may well be said to explain beautifully that which does not occur, and to leave unexplained that which does" . . . "The records of close observations of rainbows soon show that not even the colors are always the same". Textbooks stress that the main factor affecting the aspect of the rainbow is the radius of the water droplets. In his well-known textbook entitled "The Nature of Light & Colour in the Open Air", Minnaert (1954) gives the chief features of the rainbow depending on the diameter of the drops producing it. For this study, we have gathered hundreds of pictures of primary bows. We sort the pictures into classes, defined in such a way that rainbows belonging to the same class look similar. Our results are surprising and do not confirm Minnaert's classification. In practice, the size of the water droplets is only a minor factor controlling the overall aspect of the rainbow. The main factor appears to be the height of the sun above the horizon. At sunset, the width of the red band increases, while the width of the other bands of colours decreases. The orange, the violet, the blue and the green bands disappear completely, in this order. At the end, the primary bow is mainly red and slightly yellow. [Figure: contrast-enhanced photograph of a primary bow, prepared by Andrew Dunn.]

  17. Vietnamese Document Representation and Classification

    NASA Astrophysics Data System (ADS)

    Nguyen, Giang-Son; Gao, Xiaoying; Andreae, Peter

    Vietnamese is very different from English and little research has been done on Vietnamese document classification, or indeed, on any kind of Vietnamese language processing, and only a few small corpora are available for research. We created a large Vietnamese text corpus with about 18000 documents, and manually classified them based on different criteria such as topics and styles, giving several classification tasks of different difficulty levels. This paper introduces a new syllable-based document representation at the morphological level of the language for efficient classification. We tested the representation on our corpus with different classification tasks using six classification algorithms and two feature selection techniques. Our experiments show that the new representation is effective for Vietnamese categorization, and suggest that best performance can be achieved using syllable-pair document representation, an SVM with a polynomial kernel as the learning algorithm, and using Information gain and an external dictionary for feature selection.
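
    An illustrative pipeline for the syllable-based representation: whitespace-separated syllables become unigram and syllable-pair counts, a mutual-information filter stands in for information gain, and a polynomial-kernel SVM classifies. The toy documents and labels are invented, not drawn from the corpus described above.

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    docs = ["hoc sinh di hoc", "bao chi dua tin", "sinh vien di thi", "tin tuc bao chi"]
    labels = [0, 1, 0, 1]   # hypothetical topic labels, e.g. education vs. news

    model = make_pipeline(
        CountVectorizer(ngram_range=(1, 2), token_pattern=r"\S+"),  # syllables + pairs
        SelectKBest(mutual_info_classif, k=10),                     # keep informative features
        SVC(kernel="poly", degree=2),
    )
    model.fit(docs, labels)
    print(model.predict(["sinh vien di hoc"]))
    ```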

  18. [Silica classification and labelling under REACH and CLP regulation].

    PubMed

    Coggiola, M; Baracco, A; Perrelli, F; Sorasio, D

    2011-01-01

    Following the literature data, evaluation against the CLP classification and labelling criteria does not support the decision to classify respirable crystalline silica (RCS) as a carcinogen, and suggests considering RCS as: STOT (Specific Target Organ Toxicant) RE (Repeated Exposure) cat. 1 by inhalation at concentrations ≥ 10%; STOT RE cat. 2 by inhalation at concentrations of 1-10%; and not classifiable as dangerous for humans at concentrations < 1%. PMID:23393794

  19. Lead exposure in US worksites: A literature review and development of an occupational lead exposure database from the published literature

    PubMed Central

    Koh, Dong-Hee; Locke, Sarah J.; Chen, Yu-Cheng; Purdue, Mark P.; Friesen, Melissa C.

    2016-01-01

    Background Retrospective exposure assessment of occupational lead exposure in population-based studies requires historical exposure information from many occupations and industries. Methods We reviewed published US exposure monitoring studies to identify lead exposure measurement data. We developed an occupational lead exposure database from the 175 identified papers containing 1,111 sets of lead concentration summary statistics (21% area air, 47% personal air, 32% blood). We also extracted ancillary exposure-related information, including job, industry, task/location, year collected, sampling strategy, control measures in place, and sampling and analytical methods. Results Measurements were published between 1940 and 2010 and represented 27 2-digit standardized industry classification codes. The majority of the measurements were related to lead-based paint work, joining or cutting metal using heat, primary and secondary metal manufacturing, and lead acid battery manufacturing. Conclusions This database can be used in future statistical analyses to characterize differences in lead exposure across time, jobs, and industries. PMID:25968240

  20. Multisite functional connectivity MRI classification of autism: ABIDE results

    PubMed Central

    Nielsen, Jared A.; Zielinski, Brandon A.; Fletcher, P. Thomas; Alexander, Andrew L.; Lange, Nicholas; Bigler, Erin D.; Lainhart, Janet E.; Anderson, Jeffrey S.

    2013-01-01

    Background: Systematic differences in functional connectivity MRI metrics have been consistently observed in autism, with predominantly decreased cortico-cortical connectivity. Previous attempts at single subject classification in high-functioning autism using whole brain point-to-point functional connectivity have yielded about 80% accurate classification of autism vs. control subjects across a wide age range. We attempted to replicate the method and results using the Autism Brain Imaging Data Exchange (ABIDE), including resting state fMRI data obtained from 964 subjects and 16 separate international sites. Methods: For each of 964 subjects, we obtained pairwise functional connectivity measurements from a lattice of 7266 regions of interest covering the gray matter (26.4 million “connections”) after preprocessing that included motion and slice timing correction, coregistration to an anatomic image, normalization to standard space, and voxelwise removal by regression of motion parameters, soft tissue, CSF, and white matter signals. Connections were grouped into multiple bins, and a leave-one-out classifier was evaluated on connections comprising each set of bins. Age, age-squared, gender, handedness, and site were included as covariates for the classifier. Results: Classification accuracy significantly outperformed chance but was much lower for multisite prediction than for previous single site results. As high as 60% accuracy was obtained for whole brain classification, with the best accuracy from connections involving regions of the default mode network, parahippocampal and fusiform gyri, insula, Wernicke Area, and intraparietal sulcus. The classifier score was related to symptom severity, social function, daily living skills, and verbal IQ. Classification accuracy was significantly higher for sites with longer BOLD imaging times. Conclusions: Multisite functional connectivity classification of autism outperformed chance using a simple leave-one-out classifier.
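
    A minimal sketch of a leave-one-out classifier over binned connectivity features, in the spirit of the pipeline above; the subject count, features, and classifier are placeholder assumptions, not the ABIDE processing chain.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    rng = np.random.default_rng(0)
    n_subjects, n_bins = 60, 100                 # binned connection features
    X = rng.normal(size=(n_subjects, n_bins))
    y = rng.integers(0, 2, n_subjects)           # autism vs. control (synthetic)
    X[y == 1, :10] += 0.8                        # inject a weak group difference

    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=LeaveOneOut())
    print("leave-one-out accuracy:", acc.mean())
    ```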

  1. Retinal connectomics: towards complete, accurate networks.

    PubMed

    Marc, Robert E; Jones, Bryan W; Watt, Carl B; Anderson, James R; Sigulinsky, Crystal; Lauritzen, Scott

    2013-11-01

    Connectomics is a strategy for mapping complex neural networks based on high-speed automated electron optical imaging, computational assembly of neural data volumes, web-based navigational tools to explore 10(12)-10(15) byte (terabyte to petabyte) image volumes, and annotation and markup tools to convert images into rich networks with cellular metadata. These collections of network data and associated metadata, analyzed using tools from graph theory and classification theory, can be merged with classical systems theory, giving a more completely parameterized view of how biologic information processing systems are implemented in retina and brain. Networks have two separable features: topology and connection attributes. The first findings from connectomics strongly validate the idea that the topologies of complete retinal networks are far more complex than the simple schematics that emerged from classical anatomy. In particular, connectomics has permitted an aggressive refactoring of the retinal inner plexiform layer, demonstrating that network function cannot be simply inferred from stratification; exposing the complex geometric rules for inserting different cells into a shared network; revealing unexpected bidirectional signaling pathways between mammalian rod and cone systems; documenting selective feedforward systems, novel candidate signaling architectures, new coupling motifs, and the highly complex architecture of the mammalian AII amacrine cell. This is but the beginning, as the underlying principles of connectomics are readily transferrable to non-neural cell complexes and provide new contexts for assessing intercellular communication. PMID:24016532

  2. Retinal Connectomics: Towards Complete, Accurate Networks

    PubMed Central

    Marc, Robert E.; Jones, Bryan W.; Watt, Carl B.; Anderson, James R.; Sigulinsky, Crystal; Lauritzen, Scott

    2013-01-01

    Connectomics is a strategy for mapping complex neural networks based on high-speed automated electron optical imaging, computational assembly of neural data volumes, web-based navigational tools to explore 10(12)-10(15) byte (terabyte to petabyte) image volumes, and annotation and markup tools to convert images into rich networks with cellular metadata. These collections of network data and associated metadata, analyzed using tools from graph theory and classification theory, can be merged with classical systems theory, giving a more completely parameterized view of how biologic information processing systems are implemented in retina and brain. Networks have two separable features: topology and connection attributes. The first findings from connectomics strongly validate the idea that the topologies of complete retinal networks are far more complex than the simple schematics that emerged from classical anatomy. In particular, connectomics has permitted an aggressive refactoring of the retinal inner plexiform layer, demonstrating that network function cannot be simply inferred from stratification; exposing the complex geometric rules for inserting different cells into a shared network; revealing unexpected bidirectional signaling pathways between mammalian rod and cone systems; documenting selective feedforward systems, novel candidate signaling architectures, new coupling motifs, and the highly complex architecture of the mammalian AII amacrine cell. This is but the beginning, as the underlying principles of connectomics are readily transferrable to non-neural cell complexes and provide new contexts for assessing intercellular communication. PMID:24016532

  3. Land salinization classification method using Landsat TM in western Jilin Province of China

    NASA Astrophysics Data System (ADS)

    Fu, Haoyang; Gu, Lingjia; Ren, Ruizhi; Sun, Jian

    2014-10-01

    In the study of land salinization classification, researchers are most concerned about the distribution, area and degree of salinization. Traditional classification methods of land salinization only manually extract sample data from the study area, which cannot produce classification results for a large area. With the development of remote sensing technology, remote sensing data are often used to extract and analyze information on saline-alkali soil. At present, most classification methods of land salinization utilize the spectral information of remote sensing data through supervised or unsupervised classification, which still leaves some errors in the classification results. Combining the sample data with Landsat TM images, this study selected the western Jilin Province of China as the study area. By analyzing the relationship between the spectral characteristics and the soil salinity content of sample data extracted from different types of saline-alkali land, a land salinization classification method using a decision tree is proposed. The experimental results demonstrate that the proposed method can supply more accurate classification information on land salinization and can effectively monitor soil salinization changes in the study area.
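
    A hedged sketch of the decision-tree step, assuming per-pixel band values paired with salinity labels derived from field samples; the band values and the labeling rule below are synthetic stand-ins for the Landsat TM data.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    bands = rng.uniform(0, 255, size=(500, 6))          # six TM reflective bands per sample
    salinity = (bands[:, 2] + bands[:, 4] > 260).astype(int)  # toy label: 1 = saline

    tree = DecisionTreeClassifier(max_depth=4).fit(bands, salinity)
    scene = rng.uniform(0, 255, size=(1000, 6))         # "pixels" of a scene to classify
    print("salinized fraction of scene:", tree.predict(scene).mean())
    ```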

  4. Perspectives on next steps in classification of oro-facial pain - part 1: role of ontology.

    PubMed

    Ceusters, W; Michelotti, A; Raphael, K G; Durham, J; Ohrbach, R

    2015-12-01

    The purpose of this study was to review existing principles of oro-facial pain classifications and to specify design recommendations for a new system that would reflect recent insights in biomedical classification systems, terminologies and ontologies. The study was initiated by a symposium organised by the International RDC/TMD Consortium Network in March 2013, to which the present authors contributed. The following areas are addressed: problems with current classification approaches, status of the ontological basis of pain disorders, insufficient diagnostic aids and biomarkers for pain disorders, exploratory nature of current pain terminology and classification systems, and problems with prevailing classification methods from an ontological perspective. Four recommendations for addressing these problems are as follows: (i) develop a hypothesis-driven classification structure built on principles that ensure to our best understanding an accurate description of the relations among all entities involved in oro-facial pain disorders; (ii) take into account the physiology and phenomenology of oro-facial pain disorders to adequately represent both domains including psychosocial entities in a classification system; (iii) plan at the beginning for field-testing at strategic development stages; and (iv) consider how the classification system will be implemented. Implications in relation to the specific domains of psychosocial factors and biomarkers for inclusion into an oro-facial pain classification system are described in two separate papers. PMID:26212927

  5. 5 CFR 1312.7 - Derivative classification.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., DOWNGRADING, DECLASSIFICATION AND SAFEGUARDING OF NATIONAL SECURITY INFORMATION Classification and Declassification of National Security Information § 1312.7 Derivative classification. A derivative classification means that the information is in substance the same information that is currently classified, usually...

  6. 5 CFR 1312.7 - Derivative classification.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ..., DOWNGRADING, DECLASSIFICATION AND SAFEGUARDING OF NATIONAL SECURITY INFORMATION Classification and Declassification of National Security Information § 1312.7 Derivative classification. A derivative classification means that the information is in substance the same information that is currently classified, usually...

  7. 5 CFR 1312.7 - Derivative classification.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ..., DOWNGRADING, DECLASSIFICATION AND SAFEGUARDING OF NATIONAL SECURITY INFORMATION Classification and Declassification of National Security Information § 1312.7 Derivative classification. A derivative classification means that the information is in substance the same information that is currently classified, usually...

  8. 5 CFR 1312.7 - Derivative classification.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., DOWNGRADING, DECLASSIFICATION AND SAFEGUARDING OF NATIONAL SECURITY INFORMATION Classification and Declassification of National Security Information § 1312.7 Derivative classification. A derivative classification means that the information is in substance the same information that is currently classified, usually...

  9. 5 CFR 1312.7 - Derivative classification.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., DOWNGRADING, DECLASSIFICATION AND SAFEGUARDING OF NATIONAL SECURITY INFORMATION Classification and Declassification of National Security Information § 1312.7 Derivative classification. A derivative classification means that the information is in substance the same information that is currently classified, usually...

  10. [The study of M dwarf spectral classification].

    PubMed

    Yi, Zhen-Ping; Pan, Jing-Chang; Luo, A-Li

    2013-08-01

    As the most common stars in the galaxy, M dwarfs can be used to trace the structure and evolution of the Milky Way. Besides, investigating M dwarfs is important for assessing the habitability of extrasolar planets orbiting M dwarfs. Spectral classification of M dwarfs is fundamental work. The authors used the DR7 M dwarf sample of the Sloan Digital Sky Survey (SDSS) to extract important features from the range of 600-900 nm using a random forest method. Compared to the features used in the Hammer code, the authors added three new indices. Our test showed that the improved Hammer with the new indices is more accurate. Our method has been applied to classify M dwarf spectra from LAMOST. PMID:24159887

  11. Hierarchical cluster analysis applied to workers' exposures in fiberglass insulation manufacturing.

    PubMed

    Wu, J D; Milton, D K; Hammond, S K; Spear, R C

    1999-01-01

    The objectives of this study were to explore the application of cluster analysis to the characterization of multiple exposures in industrial hygiene practice and to compare exposure groupings based on the result from cluster analysis with that based on non-measurement-based approaches commonly used in epidemiology. Cluster analysis was performed for 37 workers simultaneously exposed to three agents (endotoxin, phenolic compounds and formaldehyde) in fiberglass insulation manufacturing. Different clustering algorithms, including complete-linkage (or farthest-neighbor), single-linkage (or nearest-neighbor), group-average and model-based clustering approaches, were used to construct the tree structures from which clusters can be formed. Differences were observed between the exposure clusters constructed by these different clustering algorithms. When contrasting the exposure classification based on tree structures with that based on non-measurement-based information, the results indicate that the exposure clusters identified from the tree structures had little in common with the classification results from either the traditional exposure zone or the work group classification approach. In terms of defining homogeneous exposure groups or from the standpoint of health risk, some toxicological normalization in the components of the exposure vector appears to be required in order to form meaningful exposure groupings from cluster analysis. Finally, it remains important to see if the lack of correspondence between exposure groups based on epidemiological classification and measurement data is a peculiarity of the data or a more general problem in multivariate exposure analysis. PMID:10028893
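
    A small sketch comparing linkage choices on a worker-by-agent exposure matrix, assuming rows are workers and columns are log-transformed exposures to the three agents; the numbers are illustrative, not the study's measurements.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage

    rng = np.random.default_rng(0)
    # 36 synthetic workers drawn from three exposure regimes, three agents each.
    exposures = np.vstack([rng.normal(m, 0.3, size=(12, 3)) for m in (0.0, 1.5, 3.0)])

    for method in ("complete", "single", "average"):
        tree = linkage(exposures, method=method)
        groups = fcluster(tree, t=3, criterion="maxclust")   # cut the tree into 3 groups
        print(method, "cluster sizes:", np.bincount(groups)[1:])
    ```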

  12. A proposed classification of veterinary epidemiosurveillance networks.

    PubMed

    Dufour, B; Audigé, L

    1997-12-01

    With the signing of the General Agreement on Tariffs and Trade and the establishment of the World Trade Organisation, the trading environment for animal products has changed. Disease control measures can no longer be applied as trade barriers unless supported by scientific epidemiological data. In this context, it has become necessary, if not obligatory, to gain accurate and up-to-date knowledge about the epidemiological status of important infectious animal diseases. The role of veterinary epidemiosurveillance networks is therefore gaining importance. Furthermore, epidemiosurveillance contributes to the protection of animal populations from exotic or emerging diseases, as well as to the development and evaluation of disease control programmes. Despite the large diversity of surveillance networks, the authors propose a method of network classification. The criteria for classification are as follows: the type of disease being monitored (i.e., surveillance of exotic versus endemic diseases). The number of diseases concerned (i.e., focused networks versus broad-based networks). The area being covered (i.e., local, national or international networks). The population being monitored (i.e., whether the network is targeted at suspect or susceptible animals). The sampling strategy of the network (i.e., sample-based networks versus exhaustive networks). The method of collecting data (i.e., passive data collection versus active collection). The type of network management (i.e., autonomous management versus that which is integrated with other programmes). This classification is discussed and illustrated by examples published in the literature. It may aid in the future development of a grid for the evaluation of veterinary epidemiosurveillance networks. PMID:9567300

  13. A Simple Algorithm for Population Classification

    PubMed Central

    Hu, Peng; Hsieh, Ming-Hua; Lei, Ming-Jie; Cui, Bin; Chiu, Sung-Kay; Tzeng, Chi-Meng

    2016-01-01

    A single-nucleotide polymorphism (SNP) is a variation in the DNA sequence that occurs when a single nucleotide in the genome differs across members of the same species. Variations in the DNA sequences of humans are associated with human diseases. This makes SNPs a key to opening the door to personalized medicine. SNPs can also be used for human identification and forensic applications. Compared to short tandem repeat (STR) loci, SNPs have much lower statistical testing power for individual recognition, because there are only 3 possible genotypes for each SNP marker, but they may provide sufficient information to identify the population to which a given sample belongs. In this report, using eight SNP markers for 641 samples, we performed a standard statistical classification procedure and found that 86% of the samples could be classified accurately under a two-population model. This study suggests the potential use of SNPs in population classification with a small number (n ≤ 8) of genetic markers for forensic screening, biodiversity and disaster victim control. PMID:27030001
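
    A minimal sketch of two-population classification from a handful of SNPs, assuming genotypes coded 0/1/2 (copies of the minor allele) and a naive Bayes classifier standing in for the paper's procedure; the allele frequencies are invented.

    ```python
    import numpy as np
    from sklearn.naive_bayes import CategoricalNB

    rng = np.random.default_rng(0)
    freqs = np.array([[0.1, 0.3, 0.5, 0.7, 0.2, 0.6, 0.4, 0.8],   # population A
                      [0.6, 0.7, 0.2, 0.3, 0.5, 0.1, 0.9, 0.4]])  # population B

    def sample(pop, n):
        # Draw 0/1/2 genotypes for eight SNPs under Hardy-Weinberg assumptions.
        return rng.binomial(2, freqs[pop], size=(n, 8))

    X = np.vstack([sample(0, 300), sample(1, 300)])
    y = np.repeat([0, 1], 300)
    clf = CategoricalNB().fit(X, y)
    print("training accuracy:", clf.score(X, y))       # cf. the ~86% reported above
    ```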

  14. Explosives Classifications Tracking System User Manual

    SciTech Connect

    Genoni, R.P.

    1993-10-01

    The Explosives Classification Tracking System (ECTS) presents information and data on U.S. Department of Energy (DOE) explosives classifications of interest to EM-561, Transportation Management Division, other DOE facilities, and contractors. It is intended to be useful to the scientist, engineer, and transportation professional who needs to classify or transport explosives. This release of the ECTS reflects an upgrade of the software, which provides the user with an environment that makes comprehensive retrieval of explosives-related information quick and easy. Quarterly updates will be provided to the ECTS throughout its development in FY 1993 and thereafter. The ECTS is a stand-alone, single-user system that contains unclassified, publicly available information and administrative information (contractor names, product descriptions, transmittal dates, EX-Numbers, etc.) from many sources for non-decisional engineering and shipping activities. The data are the most up-to-date and accurate available to the knowledge of the system developer. The system is designed to permit easy revision and updating as new information and data become available. These additions and corrections are welcomed by the developer. This user manual is intended to help the user install, understand, and operate the system so that the desired information may be readily obtained, reviewed, and reported.

  15. EVALUATING EXCESS DIETARY EXPOSURE OF YOUNG CHILDREN EATING IN CONTAMINATED ENVIRONMENTS

    EPA Science Inventory

    The United States' Food Quality Protection Act of 1996 requires more accurate assessment of children's aggregate exposures to environmental contaminants. Since children have unstructured eating behaviors, their excess exposures, caused by eating activities, becomes an importan...

  16. FPGA Coprocessor for Accelerated Classification of Images

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Scharenbroich, Lucas J.; Werne, Thomas A.

    2008-01-01

    An effort related to that described in the preceding article focuses on developing a spaceborne processing platform for fast and accurate onboard classification of image data, a critical part of modern satellite image processing. The approach again has been to exploit the versatility of the recently developed hybrid Virtex-4FX field-programmable gate array (FPGA) to run diverse science applications on embedded processors while taking advantage of the reconfigurable hardware resources of the FPGAs. In this case, the FPGA serves as a coprocessor that implements legacy C-language support-vector-machine (SVM) image-classification algorithms to detect and identify natural phenomena such as flooding, volcanic eruptions, and sea-ice break-up. The FPGA provides hardware acceleration for greater onboard processing capability than previously demonstrated in software. The original C-language program, demonstrated on an imaging instrument aboard the Earth Observing-1 (EO-1) satellite, implements a linear-kernel SVM algorithm for classifying parts of the images as snow, water, ice, land, cloud, or unclassified. Current onboard processors, such as on EO-1, have limited computing power and extremely limited active storage capability, and are no longer considered state-of-the-art. Using commercially available software that translates C-language programs into hardware description language (HDL) files, the legacy C-language program, and two newly formulated programs for a more capable expanded-linear-kernel and a more accurate polynomial-kernel SVM algorithm, have been implemented in the Virtex-4FX FPGA. In tests, the FPGA implementations have exhibited significant speedups over conventional software implementations running on general-purpose hardware.
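
    A software-side sketch contrasting the linear- and polynomial-kernel SVMs mentioned above on synthetic pixel features; the five land-cover classes and feature count are placeholders, not the EO-1 instrument bands, and nothing here models the FPGA port.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=1000, n_features=8, n_informative=6,
                               n_classes=5, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    for kernel in ("linear", "poly"):
        svm = SVC(kernel=kernel, degree=3).fit(X_tr, y_tr)   # degree only affects "poly"
        print(kernel, "test accuracy:", round(svm.score(X_te, y_te), 3))
    ```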

  17. Automatic Classification of Marine Mammals with Speaker Classification Methods.

    PubMed

    Kreimeyer, Roman; Ludwig, Stefan

    2016-01-01

    We present an automatic acoustic classifier for marine mammals based on human speaker classification methods as an element of a passive acoustic monitoring (PAM) tool. This work is part of the Protection of Marine Mammals (PoMM) project under the framework of the European Defense Agency (EDA) and joined by the Research Department for Underwater Acoustics and Geophysics (FWG), Bundeswehr Technical Centre (WTD 71) and Kiel University. The automatic classification should support sonar operators in the risk mitigation process before and during sonar exercises with a reliable automatic classification result. PMID:26611006

  18. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2003-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  19. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2002-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  20. Highly accurate articulated coordinate measuring machine

    DOEpatents

    Bieg, Lothar F.; Jokiel, Jr., Bernhard; Ensz, Mark T.; Watson, Robert D.

    2003-12-30

    Disclosed is a highly accurate articulated coordinate measuring machine, comprising a revolute joint, comprising a circular encoder wheel, having an axis of rotation; a plurality of marks disposed around at least a portion of the circumference of the encoder wheel; bearing means for supporting the encoder wheel, while permitting free rotation of the encoder wheel about the wheel's axis of rotation; and a sensor, rigidly attached to the bearing means, for detecting the motion of at least some of the marks as the encoder wheel rotates; a probe arm, having a proximal end rigidly attached to the encoder wheel, and having a distal end with a probe tip attached thereto; and coordinate processing means, operatively connected to the sensor, for converting the output of the sensor into a set of cylindrical coordinates representing the position of the probe tip relative to a reference cylindrical coordinate system.

  1. Practical aspects of spatially high accurate methods

    NASA Technical Reports Server (NTRS)

    Godfrey, Andrew G.; Mitchell, Curtis R.; Walters, Robert W.

    1992-01-01

    The computational qualities of high order spatially accurate methods for the finite volume solution of the Euler equations are presented. Two dimensional essentially non-oscillatory (ENO), k-exact, and 'dimension by dimension' ENO reconstruction operators are discussed and compared in terms of reconstruction and solution accuracy, computational cost and oscillatory behavior in supersonic flows with shocks. Inherent steady state convergence difficulties are demonstrated for adaptive stencil algorithms. An exact solution to the heat equation is used to determine reconstruction error, and the computational intensity is reflected in operation counts. Standard MUSCL differencing is included for comparison. Numerical experiments presented include the Ringleb flow for numerical accuracy and a shock reflection problem. A vortex-shock interaction demonstrates the ability of the ENO scheme to excel in simulating unsteady high-frequency flow physics.

  2. The thermodynamic cost of accurate sensory adaptation

    NASA Astrophysics Data System (ADS)

    Tu, Yuhai

    2015-03-01

    Living organisms need to obtain and process environment information accurately in order to make decisions critical for their survival. Much progress has been made in identifying key components responsible for various biological functions; however, major challenges remain to understand system-level behaviors from the molecular-level knowledge of biology and to unravel possible physical principles for the underlying biochemical circuits. In this talk, we will present some recent works in understanding the chemical sensory system of E. coli by combining theoretical approaches with quantitative experiments. We focus on addressing the questions of how cells process chemical information and adapt to varying environments, and what the thermodynamic limits of key regulatory functions, such as adaptation, are.

  3. Accurate numerical solutions of conservative nonlinear oscillators

    NASA Astrophysics Data System (ADS)

    Khan, Najeeb Alam; Nasir Uddin, Khan; Nadeem Alam, Khan

    2014-12-01

    The objective of this paper is to present an investigation of the vibration of a conservative nonlinear oscillator of the form u'' + λu + u^(2n-1) + (1 + ε²u^(4m))^(1/2) = 0 for any arbitrary power of n and m. The method converts the differential equation to sets of algebraic equations and solves them numerically. Results are presented for three different cases: a higher-order Duffing equation, an equation with an irrational restoring force, and a plasma physics equation. It is also found that the method is valid for any arbitrary order of n and m. Comparisons have been made with results found in the literature, and the method gives accurate results.
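
    As a cross-check on such solutions, the oscillator can also be integrated directly with a standard high-order Runge-Kutta method; the parameter values below are arbitrary, chosen only to make the sketch concrete.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    lam, eps, n, m = 1.0, 0.1, 2, 1

    def rhs(t, state):
        # First-order form of u'' + lam*u + u^(2n-1) + (1 + eps^2 u^(4m))^(1/2) = 0.
        u, v = state                                   # v = u'
        return [v, -(lam * u + u ** (2 * n - 1) + np.sqrt(1 + eps**2 * u ** (4 * m)))]

    sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0], rtol=1e-10, atol=1e-12, dense_output=True)
    print("u(10) ≈", sol.sol(10.0)[0])
    ```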

  4. Accurate Telescope Mount Positioning with MEMS Accelerometers

    NASA Astrophysics Data System (ADS)

    Mészáros, L.; Jaskó, A.; Pál, A.; Csépány, G.

    2014-08-01

    This paper describes the advantages and challenges of applying microelectromechanical accelerometer systems (MEMS accelerometers) in order to attain precise, accurate, and stateless positioning of telescope mounts. This provides a method completely independent of other forms of electronic, optical, mechanical or magnetic feedback or real-time astrometry. Our goal is to reach the subarcminute range, which is considerably smaller than the field-of-view of conventional imaging telescope systems. Here we present how this subarcminute accuracy can be achieved with very cheap MEMS sensors, and we also detail how our procedures can be extended in order to attain even finer measurements. In addition, our paper discusses how a complete system design can be implemented as part of a telescope control system.

  5. Accurate metacognition for visual sensory memory representations.

    PubMed

    Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F

    2014-04-01

    The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception. PMID:24549293

  6. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, Douglas D.

    1985-01-01

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  7. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, D.D.

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  8. Toward Accurate and Quantitative Comparative Metagenomics.

    PubMed

    Nayfach, Stephen; Pollard, Katherine S

    2016-08-25

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  9. The importance of accurate atmospheric modeling

    NASA Astrophysics Data System (ADS)

    Payne, Dylan; Schroeder, John; Liang, Pang

    2014-11-01

    This paper focuses on the effect of atmospheric conditions on EO sensor performance using computer models. We have shown the importance of accurately modeling atmospheric effects for predicting the performance of an EO sensor. A simple example demonstrates how real conditions at several sites in China significantly impact image correction, hyperspectral imaging, and remote sensing. The current state-of-the-art model for computing atmospheric transmission and radiance is MODTRAN® 5, developed by the US Air Force Research Laboratory and Spectral Sciences, Inc. Research by the US Air Force, Navy and Army resulted in the public release of LOWTRAN 2 in the early 1970s, and subsequent releases of LOWTRAN and MODTRAN® have continued to the present. The paper demonstrates the importance of using validated models and locally measured meteorological, atmospheric and aerosol conditions to accurately simulate atmospheric transmission and radiance. Default conditions are frequently used instead, which can produce errors of as much as 75% in these values and can significantly affect remote sensing applications.

  10. The high cost of accurate knowledge.

    PubMed

    Sutcliffe, Kathleen M; Weber, Klaus

    2003-05-01

    Many business thinkers believe it's the role of senior managers to scan the external environment to monitor contingencies and constraints, and to use that precise knowledge to modify the company's strategy and design. As these thinkers see it, managers need accurate and abundant information to carry out that role. According to that logic, it makes sense to invest heavily in systems for collecting and organizing competitive information. Another school of pundits contends that, since today's complex information often isn't precise anyway, it's not worth going overboard with such investments. In other words, it's not the accuracy and abundance of information that should matter most to top executives--rather, it's how that information is interpreted. After all, the role of senior managers isn't just to make decisions; it's to set direction and motivate others in the face of ambiguities and conflicting demands. Top executives must interpret information and communicate those interpretations--they must manage meaning more than they must manage information. So which of these competing views is the right one? Research conducted by academics Sutcliffe and Weber found that how accurate senior executives are about their competitive environments is indeed less important for strategy and corresponding organizational changes than the way in which they interpret information about their environments. Investments in shaping those interpretations, therefore, may create a more durable competitive advantage than investments in obtaining and organizing more information. And what kinds of interpretations are most closely linked with high performance? Their research suggests that high performers respond positively to opportunities, yet they aren't overconfident in their abilities to take advantage of those opportunities. PMID:12747164

  11. Accurate Weather Forecasting for Radio Astronomy

    NASA Astrophysics Data System (ADS)

    Maddalena, Ronald J.

    2010-01-01

    The NRAO Green Bank Telescope routinely observes at wavelengths from 3 mm to 1 m. As with all mm-wave telescopes, observing conditions depend upon the variable atmospheric water content. The site provides over 100 days/yr when opacities are low enough for good observing at 3 mm, but winds on the open-air structure reduce the time suitable for 3-mm observing, where pointing is critical. Thus, to maximize productivity, the observing wavelength needs to match weather conditions. For 6 years the telescope has used a dynamic scheduling system (recently upgraded; www.gb.nrao.edu/DSS) that requires accurate multi-day forecasts for winds and opacities. Since opacity forecasts are not provided by the National Weather Service (NWS), I have developed an automated system that takes available forecasts, derives forecasted opacities, and deploys the results on the web in user-friendly graphical overviews (www.gb.nrao.edu/~rmaddale/Weather). The system relies on the "North American Mesoscale" models, which are updated by the NWS every 6 hrs, have a 12 km horizontal resolution, 1 hr temporal resolution, run to 84 hrs, and have 60 vertical layers that extend to 20 km. Each forecast consists of a time series of ground conditions, cloud coverage, etc., and, most importantly, temperature, pressure, and humidity as a function of height. I use Liebe's MPM model (Radio Science, 20, 1069, 1985) to determine the absorption in each layer for each hour for 30 observing wavelengths. Radiative transfer provides, for each hour and wavelength, the total opacity and the radio brightness of the atmosphere, which contributes substantially at some wavelengths to Tsys and the observational noise. Comparisons of measured and forecasted Tsys at 22.2 and 44 GHz imply that the forecasted opacities are good to about 0.01 nepers, which is sufficient for forecasting and accurate calibration. Reliability is high out to 2 days and degrades slowly for longer-range forecasts.
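
    The aggregation step described above, turning per-layer absorption into a total opacity and an atmospheric contribution to Tsys, follows standard plane-parallel radiative transfer. A minimal zenith-only sketch (assumed inputs; not the production forecasting code):

        import numpy as np

        def zenith_opacity_and_tsky(alpha, temp, dz):
            # alpha: per-layer absorption (nepers/km), temp: layer temperatures (K),
            # dz: layer thicknesses (km); layers ordered from the ground upward.
            alpha, temp, dz = map(np.asarray, (alpha, temp, dz))
            tau_layer = alpha * dz
            # Opacity of the air between the observer and the bottom of each layer.
            tau_below = np.concatenate(([0.0], np.cumsum(tau_layer)[:-1]))
            # Each layer emits temp*(1 - exp(-tau)) and is attenuated by the air below it.
            t_sky = np.sum(temp * (1.0 - np.exp(-tau_layer)) * np.exp(-tau_below))
            return tau_layer.sum(), t_sky

        # Three illustrative layers:
        tau, t_sky = zenith_opacity_and_tsky([0.02, 0.01, 0.005], [283.0, 260.0, 230.0], [1.0, 2.0, 4.0])
        print(tau, t_sky)   # total zenith opacity (nepers), sky brightness (K)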

  12. Environmental endocrine disruptors: A proposed classification scheme

    SciTech Connect

    Fur, P.L. de; Roberts, J.

    1995-12-31

    A number of chemicals known to act on animal systems through the endocrine system have been termed environmental endocrine disruptors. This group includes some of the PCBs and TCDDs, as well as lead, mercury and a large number of pesticides. The common feature is that the chemicals interact with endogenous endocrine systems at the cellular and/or molecular level to alter normal processes that are controlled or regulated by hormones. Although the existence of artificial or environmental estrogens (e.g. chlordecone and DES) has been known for some time, recent data indicate that this phenomenon is widespread. Indeed, anti-androgens have been held responsible for reproductive dysfunction in alligator populations in Florida. But the significance of endocrine disruption was recognized by pesticide manufacturers when insect growth regulators were developed to interfere with hormonal control of growth. Controlling, regulating or managing these chemicals depends in no small part on the ability to identify, screen or otherwise know that a chemical is an endocrine disruptor. Two possible classification schemes are: using the effects caused in an animal or animals as an exposure indicator; and using a known screen for the point of contact with the animal. The former would require extensive knowledge of cause and effect relationships in dozens of animal groups; the latter would require a screening tool comparable to an estrogen binding assay. The authors present a possible classification based on chemicals known to disrupt estrogenic, androgenic and ecdysone regulated hormonal systems.

  13. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    NASA Astrophysics Data System (ADS)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information can bring negative effects, especially when it is delayed: travelers flock to the route reported as best, yet delayed information reflects past rather than current traffic conditions. Travelers then make wrong routing decisions, decreasing the capacity, increasing oscillations, and driving the system away from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between the two routes is less than BR, the routes have equal probability of being chosen. Bounded rationality helps improve efficiency in terms of capacity, oscillations, and the gap from system equilibrium.
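
    The threshold rule itself is simple; a hedged sketch of the route-choice step (the feedback semantics here are assumed, not the paper's full simulation):

        import random

        def choose_route(cost_a, cost_b, br):
            # Boundedly rational choice: if the (possibly delayed) reported costs
            # differ by less than the threshold BR, the two routes are treated as
            # equivalent and chosen with equal probability; otherwise take the
            # route reported as better.
            if abs(cost_a - cost_b) < br:
                return random.choice(("A", "B"))
            return "A" if cost_a < cost_b else "B"

        print(choose_route(10.2, 10.5, br=0.5))  # within BR: random pick
        print(choose_route(10.2, 12.0, br=0.5))  # outside BR: "A"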

  14. A Unified Methodology for Computing Accurate Quaternion Color Moments and Moment Invariants.

    PubMed

    Karakasis, Evangelos G; Papakostas, George A; Koulouriotis, Dimitrios E; Tourassis, Vassilios D

    2014-02-01

    In this paper, a general framework for computing accurate quaternion color moments and their corresponding invariants is proposed. The proposed unified scheme arose from studying the characteristics of different orthogonal polynomials. These polynomials are used as kernels to form moments, the invariants of which can easily be derived. The resulting scheme permits the usage of any polynomial-like kernel in a unified and consistent way. The resulting moments and moment invariants demonstrate robustness to noisy conditions and high discriminative power. Additionally, in the case of continuous moments, accurate computations take place to avoid approximation errors. Based on this general methodology, the quaternion Tchebichef, Krawtchouk, Dual Hahn, Legendre, orthogonal Fourier-Mellin, pseudo-Zernike and Zernike color moments, and their corresponding invariants, are introduced. A selected paradigm presents the reconstruction capability of each moment family, whereas proper classification scenarios evaluate the performance of the color moment invariants. PMID:24216719
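
    For orientation, a hedged sketch of the general form such quaternion moments take (the paper's exact kernels and normalizations may differ): the color image is encoded as a pure quaternion and an orthogonal polynomial kernel is applied along each axis,

        f(x,y) = R(x,y)\,i + G(x,y)\,j + B(x,y)\,k,
        \qquad
        M_{nm} = \sum_{x}\sum_{y} P_n(x)\,P_m(y)\,f(x,y)\,\mu,
        \qquad
        \mu = \tfrac{i+j+k}{\sqrt{3}},

    where μ is a conventional choice of unit pure quaternion; because quaternion multiplication is non-commutative, left-sided and right-sided moments are distinguished, and the invariants are then derived from these moments.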

  15. Higher order accurate partial implicitization: An unconditionally stable fourth-order-accurate explicit numerical technique

    NASA Technical Reports Server (NTRS)

    Graves, R. A., Jr.

    1975-01-01

    The previously obtained second-order-accurate partial implicitization numerical technique used in the solution of fluid dynamic problems was modified with little complication to achieve fourth-order accuracy. The Von Neumann stability analysis demonstrated the unconditional linear stability of the technique. The order of the truncation error was deduced from the Taylor series expansions of the linearized difference equations and was verified by numerical solutions to Burgers' equation. For comparison, results were also obtained for Burgers' equation using a second-order-accurate partial-implicitization scheme, as well as the fourth-order scheme of Kreiss.
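
    For context, Burgers' equation u_t + u u_x = ν u_xx is the standard verification problem here. Below is a plain first-order explicit reference solver (not the partial-implicitization scheme itself) of the kind a higher-order method's truncation error could be checked against:

        import numpy as np

        def burgers_step(u, dx, dt, nu):
            # One explicit step: upwind advection (valid for u >= 0 on a
            # periodic grid) plus centered diffusion.
            ux = (u - np.roll(u, 1)) / dx
            uxx = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
            return u - dt * u * ux + dt * nu * uxx

        x = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
        u = 1.5 + np.sin(x)          # keep u >= 0 so the upwind direction holds
        for _ in range(500):
            u = burgers_step(u, x[1] - x[0], 1e-3, nu=0.05)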

  16. Target Decomposition Techniques & Role of Classification Methods for Landcover Classification

    NASA Astrophysics Data System (ADS)

    Singh, Dharmendra; Mittal, Gunjan

    Target decomposition techniques aim at analyzing the received scattering matrix from polarimetric data to extract information about the scattering processes. Incoherent techniques have been developed in recent years to provide a more general approach to the decomposition of natural targets. There is therefore a need to study and critically analyze these evolving models for their suitability in land cover classification. Moreover, the classification methods used to segment the decomposed land covers need to be examined, because the choice of method affects the performance of the decomposition techniques for land cover classification. In the present paper we therefore assess the performance of several model-based and one eigenvector-based decomposition technique on polarimetric PALSAR (Phased Array type L-band SAR) data. A few generic supervised classifiers were used to classify the decomposed images into three broad classes: water, urban and agricultural land. For this purpose, the algorithms were applied twice to pre-processed PALSAR raw data: once to spatially averaged data (mean filtering on a 3×3 window), and once to data multilooked in the azimuth direction by six looks and then filtered using Wishart Gamma MAP on a 5×5 window. The decomposed images from each method were classified using four supervised classifiers (parallelepiped, minimum distance, Mahalanobis and maximum likelihood). Ground truth data generated from ground survey points, topographic sheets and Google Earth were used to compute classification accuracy. The parallelepiped classifier gave better accuracy for the water class for all models except H/A/Alpha. The minimum distance classifier gave better results for the urban class, and the maximum likelihood classifier performed best for the vegetation class.
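
    Two of the four classifiers named above, minimum distance and Mahalanobis, differ only in their metric. A minimal sketch (per-class statistics assumed to be estimated from training pixels):

        import numpy as np

        def minimum_distance(x, class_means):
            # Assign x to the class with the nearest mean (Euclidean metric).
            return int(np.argmin([np.linalg.norm(x - m) for m in class_means]))

        def mahalanobis(x, class_means, class_covs):
            # Like minimum distance, but the metric is scaled by each class's
            # feature covariance, so elongated clusters are handled properly.
            d = [np.sqrt((x - m) @ np.linalg.inv(c) @ (x - m))
                 for m, c in zip(class_means, class_covs)]
            return int(np.argmin(d))

        means = [np.zeros(2), np.array([3.0, 3.0])]
        covs = [np.eye(2), np.array([[2.0, 0.5], [0.5, 1.0]])]
        x = np.array([2.0, 2.5])
        print(minimum_distance(x, means), mahalanobis(x, means, covs))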

  17. Towards Automatic Classification of Neurons

    PubMed Central

    Armañanzas, Rubén; Ascoli, Giorgio A.

    2015-01-01

    The classification of neurons into types has been much debated since the inception of modern neuroscience. Recent experimental advances are accelerating the pace of data collection. The resulting information growth of morphological, physiological, and molecular properties encourages efforts to automate neuronal classification by powerful machine learning techniques. We review state-of-the-art analysis approaches and availability of suitable data and resources, highlighting prominent challenges and opportunities. The effective solution of the neuronal classification problem will require continuous development of computational methods, high-throughput data production, and systematic metadata organization to enable cross-lab integration. PMID:25765323

  18. Changing Patient Classification System for Hospital Reimbursement in Romania

    PubMed Central

    Radu, Ciprian-Paul; Chiriac, Delia Nona; Vladescu, Cristian

    2010-01-01

    Aim To evaluate the effects of the change in the diagnosis-related group (DRG) system on patient morbidity and hospital financial performance in the Romanian public health care system. Methods Three variables were assessed before and after the classification switch in July 2007: clinical outcomes, the case mix index, and hospital budgets, using the database of the National School of Public Health and Health Services Management, which contains data regularly received from hospitals reimbursed through the Romanian DRG scheme (291 in 2009). Results The lack of a Romanian system for the calculation of cost-weights imposed the necessity to use an imported system, which was criticized by some clinicians for not accurately reflecting resource consumption in Romanian hospitals. The new DRG classification system allowed a more accurate clinical classification. However, it also exposed a lack of physicians’ knowledge on diagnosing and coding procedures, which led to incorrect coding. Consequently, the reported hospital morbidity changed after the DRG switch, reflecting an increase in the national case mix index of 25% in 2009 (compared with 2007). Since hospitals received the same reimbursement over the first two years after the classification switch, the new DRG system led them sometimes to change patients' diagnoses in order to receive more funding. Conclusion Lack of oversight of hospital coding and reporting to the national reimbursement scheme allowed the increase in the case mix index. The complexity of the new classification system requires more resources (human and financial), better monitoring and evaluation, and improved legislation in order to achieve better hospital resource allocation and more efficient patient care. PMID:20564769
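
    The case mix index at the center of this result is simply the average DRG cost-weight per discharged case, so coding shifts move it directly. A toy illustration (hypothetical weights):

        def case_mix_index(cost_weights):
            # Average relative cost-weight across all discharged cases.
            return sum(cost_weights) / len(cost_weights)

        # Hypothetical hospital: coding the same patients into heavier DRGs.
        before = [0.8, 1.0, 1.0, 1.2]
        after = [0.9, 1.2, 1.3, 1.6]
        print(case_mix_index(after) / case_mix_index(before) - 1.0)  # 0.25, i.e. +25%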

  19. Metabolism of pesticides after dermal exposure to amphibians

    EPA Science Inventory

    Understanding how pesticide exposure to non-target species influences toxicity is necessary to accurately assess the ecological risks these compounds pose. Aquatic, terrestrial, and arboreal amphibians are often exposed to pesticides during their agricultural application resultin...

  20. Exposure chamber

    DOEpatents

    Moss, Owen R.

    1980-01-01

    A chamber for exposing animals, plants, or materials to air containing gases or aerosols is so constructed that catch pans for animal excrement, for example, serve to aid the uniform distribution of air throughout the chamber instead of constituting obstacles as has been the case in prior animal exposure chambers. The chamber comprises the usual imperforate top, bottom and side walls. Within the chamber, cages and their associated pans are arranged in two columns. The pans are spaced horizontally from the walls of the chamber in all directions. Corresponding pans of the two columns are also spaced horizontally from each other. Preferably the pans of one column are also spaced vertically from corresponding pans of the other column. Air is introduced into the top of the chamber and withdrawn from the bottom. The general flow of air is therefore vertical. The effect of the horizontal pans is based on the fact that a gas flowing past the edge of a flat plate that is perpendicular to the flow forms a wave on the upstream side of the plate. Air flows downwardly between the chamber walls and the outer edges of the pan. It also flows downwardly between the inner edges of the pans of the two columns. It has been found that when the air carries aerosol particles, these particles are substantially uniformly distributed throughout the chamber.

  1. CASP9 Target Classification

    PubMed Central

    Kinch, Lisa N.; Shi, Shuoyong; Cheng, Hua; Cong, Qian; Pei, Jimin; Mariani, Valerio; Schwede, Torsten; Grishin, Nick V.

    2011-01-01

    The Critical Assessment of Protein Structure Prediction round 9 (CASP9) aimed to evaluate predictions for 129 experimentally determined protein structures. To assess tertiary structure predictions, these target structures were divided into domain-based evaluation units that were then classified into two assessment categories: template based modeling (TBM) and template free modeling (FM). CASP9 targets were split into domains of structurally compact evolutionary modules. For the targets with more than one defined domain, the decision to split structures into domains for evaluation was based on server performance. Target domains were categorized based on their evolutionary relatedness to existing templates as well as their difficulty levels indicated by server performance. Those target domains with sequence-related templates and high server prediction performance were classified as TBM, while those targets without identifiable templates and low server performance were classified as FM. However, using these generalizations for classification resulted in a blurred boundary between CASP9 assessment categories. Thus, the FM category included those domains without sequence-detectable templates (25 target domains) as well as some domains with difficult-to-detect templates whose predictions were as poor as those without templates (5 target domains). Several interesting examples are discussed, including targets with sequence-related templates that exhibit unusual structural differences, targets with homologous or analogous structure templates that are not detectable by sequence, and targets with new folds. PMID:21997778

  2. Classification of Rainbows

    NASA Astrophysics Data System (ADS)

    Adams, Peter; Ricard, Jean; Barckicke, Jean

    2016-04-01

    Rainbows are the most beautiful and most spectacular optical atmospheric phenomenon. Humphreys (1964) pointedly noted that "the 'explanations' generally given of the rainbow [in textbooks] may well be said to explain beautifully that which does not occur, and to leave unexplained that which does" ... "The records of close observations of rainbows soon show that not even the colors are always the same". Textbooks stress that the main factor affecting the aspect of the rainbow is the radius of the water droplets. In his well-known textbook entitled "The Nature of Light & Colour in the Open Air", Minnaert (1954) gives the chief features of the rainbow depending on the diameter of the drops producing it. For this study, we gathered hundreds of pictures of primary bows and sorted them into classes, defined such that rainbows belonging to the same class look similar. Our results are surprising and do not confirm Minnaert's classification. In practice, the size of the water droplets is only a minor factor controlling the overall aspect of the rainbow; the main factor appears to be the height of the sun above the horizon. At sunset, the width of the red band increases, while the width of the other colour bands decreases. The orange, violet, blue and green bands disappear completely, in this order. At the end, the primary bow is mainly red and slightly yellow. (Picture taken from the CNRM in Toulouse after a summer storm; Jean Ricard.)

  3. SAR Ice Classification Using Fuzzy Screening Method

    NASA Astrophysics Data System (ADS)

    Gill, R. S.

    2003-04-01

    A semi-automatic SAR sea ice classification algorithm is described. It is based on combining the information in the original SAR data with that in three 'image' products derived from it, namely the Power-to-Mean Ratio (PMR), the Gamma distribution and the second-order texture parameter entropy. The latter products contain information which is often useful during manual interpretation of the images. The technique used to fuse the information in these products is based on a method called Multi Experts Multi Criteria Decision Making fuzzy screening. The Multiple Experts in this case are the above four 'image' products. The two criteria currently used for making decisions are Kolmogorov-Smirnov distribution matching and the statistical mean of the different surface classes. The algorithm classifies an image into any number of predefined classes of sea ice and open water. The representative classes of these surface types are manually identified by the user. Further, as SAR signals from sea-ice-covered regions and open water are ambiguous, it was found that a minimum of 4 pre-identified surface classes (calm and turbulent water, and sea ice with low and high backscatter values) are required to accurately classify an image. Best results are obtained when a total of 8 surface classes (2 each of sea ice and open water in the near range, and a similar number in the far range of the SAR image) are used. The main advantage of this image classification scheme is that, like neural networks, no prior knowledge of the statistical distribution of the different surface types is required. Furthermore, unlike methods based on neural networks, no prior data sets are required to train the algorithm. All the information needed for image classification is contained in the individual SAR images and associated products. Initial results illustrating the potential of this ice classification algorithm using RADARSAT ScanSAR Wide data are presented.
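
    The Kolmogorov-Smirnov criterion mentioned above reduces to comparing each pixel window's empirical distribution against user-identified class samples. A hedged sketch (illustrative backscatter values, not the operational algorithm):

        import numpy as np
        from scipy import stats

        def ks_class_match(window_pixels, class_samples):
            # Smaller KS statistic D = better match between the window's
            # distribution and a pre-identified surface class.
            scores = {name: stats.ks_2samp(window_pixels, samples).statistic
                      for name, samples in class_samples.items()}
            return min(scores, key=scores.get), scores

        rng = np.random.default_rng(1)
        classes = {"calm water": rng.normal(-18.0, 1.5, 500),
                   "low-backscatter ice": rng.normal(-12.0, 2.0, 500)}
        window = rng.normal(-12.3, 2.1, 100)
        print(ks_class_match(window, classes)[0])   # "low-backscatter ice"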

  4. Classification of Fuel Types Using Envisat Data

    NASA Astrophysics Data System (ADS)

    Wozniak, Edyta; Nasilowska, Sylwia

    2010-12-01

    Forest fires have an important impact on landscape structure and ecosystem biodiversity, and wildland fires strongly influence forest planning and management. Forest fires affect not only the woodworking industry but also arable fields and the lives of inhabitants. Precise knowledge of the spatial distribution of fuels is necessary to predict, analyse and model fire behaviour. Modelling fire spread is difficult because it depends on many factors, above all the moisture and thickness of undergrowth and brushwood, and the tree species. Many fuel type classifications have been developed for regional environmental conditions, but their main drawback is that they apply only to the particular region of interest. This creates a need for continuous and more accurate research in specific habitats, not only at the continental scale. In this paper a new system is proposed. It organizes fuels into three major groups (coniferous wood, deciduous wood and open) and four subcategories that describe fuel structure (trees lower than 4 m; trees higher than 4 m: without bushes, with low bushes below 2 m, or with high bushes above 2 m). The classification is adapted to the environmental conditions of the Polish lowlands. It was carried out on the basis of 120 training plots determined during a field experiment in north-eastern Poland; the plots delineate homogeneous parts of the forest that correspond to fuel classes. In the study we used an ENVISAT Alternating Polarization (HH/HV) image. The most popular classifiers were tried, and the maximum likelihood method proved the most efficient. Remote sensing offers low-cost and time-saving fuel mapping and updating, and SAR systems permit mapping independently of weather conditions. Microwave data have the potential to estimate fuel loads and map fuel types.

  5. Accurate Fission Data for Nuclear Safety

    NASA Astrophysics Data System (ADS)

    Solders, A.; Gorelov, D.; Jokinen, A.; Kolhinen, V. S.; Lantz, M.; Mattera, A.; Penttilä, H.; Pomp, S.; Rakopoulos, V.; Rinta-Antila, S.

    2014-05-01

    The Accurate fission data for nuclear safety (AlFONS) project aims at high precision measurements of fission yields, using the renewed IGISOL mass separator facility in combination with a new high current light ion cyclotron at the University of Jyväskylä. The 30 MeV proton beam will be used to create fast and thermal neutron spectra for the study of neutron induced fission yields. Thanks to a series of mass separating elements, culminating with the JYFLTRAP Penning trap, it is possible to achieve a mass resolving power on the order of a few hundred thousand. In this paper we present the experimental setup and the design of a neutron converter target for IGISOL. The goal is to have a flexible design. For studies of exotic nuclei far from stability, a high neutron flux (10^12 neutrons/s) at energies of 1 - 30 MeV is desired, while for reactor applications neutron spectra that resemble those of thermal and fast nuclear reactors are preferred. It is also desirable to be able to produce (semi-)monoenergetic neutrons for benchmarking and to study the energy dependence of fission yields. The scientific program is extensive and is planned to start in 2013 with a measurement of isomeric yield ratios of proton induced fission in uranium. This will be followed by studies of independent yields of thermal and fast neutron induced fission of various actinides.

  6. Fast and Provably Accurate Bilateral Filtering

    NASA Astrophysics Data System (ADS)

    Chaudhury, Kunal N.; Dabhade, Swapnil D.

    2016-06-01

    The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires $O(S)$ operations per pixel, where $S$ is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to $O(1)$ per pixel for any arbitrary $S$. The algorithm has a simple implementation involving $N+1$ spatial filterings, where $N$ is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order $N$ required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with state-of-the-art methods in terms of speed and accuracy.
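
    For reference, the direct computation the fast algorithm replaces looks like this (a brute-force sketch, O(S) work per pixel):

        import numpy as np

        def bilateral_direct(img, sigma_s, sigma_r, radius):
            img = np.asarray(img, dtype=float)
            ax = np.arange(-radius, radius + 1)
            xx, yy = np.meshgrid(ax, ax)
            # Gaussian spatial kernel, fixed for the whole image.
            spatial = np.exp(-(xx**2 + yy**2) / (2.0 * sigma_s**2))
            pad = np.pad(img, radius, mode='edge')
            out = np.empty_like(img)
            for i in range(img.shape[0]):
                for j in range(img.shape[1]):
                    patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                    # Gaussian range kernel, recomputed per pixel -- the O(S)
                    # cost that the proposed approximation reduces to O(1).
                    w = spatial * np.exp(-(patch - img[i, j])**2 / (2.0 * sigma_r**2))
                    out[i, j] = (w * patch).sum() / w.sum()
            return out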

  7. Accurate Prediction of Docked Protein Structure Similarity.

    PubMed

    Akbal-Delibas, Bahar; Pomplun, Marc; Haspel, Nurit

    2015-09-01

    One of the major challenges for protein-protein docking methods is to accurately discriminate native-like structures. The protein docking community agrees on the existence of a relationship between various favorable intermolecular interactions (e.g. Van der Waals, electrostatic, desolvation forces, etc.) and the similarity of a conformation to its native structure. Different docking algorithms often formulate this relationship as a weighted sum of selected terms and calibrate their weights against specific training data to evaluate and rank candidate structures. However, the exact form of this relationship is unknown and the accuracy of such methods is impaired by the pervasiveness of false positives. Unlike conventional scoring functions, we propose a novel machine learning approach that not only ranks the candidate structures relative to each other but also indicates how similar each candidate is to the native conformation. We trained the AccuRMSD neural network with an extensive dataset using the back-propagation learning algorithm. Our method predicted the RMSDs of unbound docked complexes within a 0.4 Å error margin. PMID:26335807

  8. Accurate lineshape spectroscopy and the Boltzmann constant

    PubMed Central

    Truong, G.-W.; Anstie, J. D.; May, E. F.; Stace, T. M.; Luiten, A. N.

    2015-01-01

    Spectroscopy has an illustrious history delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, including applications from trace materials detection, to understanding the atmospheres of stars and planets, and even constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate measurement of the excited-state (6P1/2) hyperfine splitting in Cs, and reveal a breakdown in the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m., and an uncertainty of 71 p.p.m. PMID:26465085

  9. Fast and Provably Accurate Bilateral Filtering.

    PubMed

    Chaudhury, Kunal N; Dabhade, Swapnil D

    2016-06-01

    The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for any arbitrary S. The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with the state-of-the-art methods in terms of speed and accuracy. PMID:27093722

  10. How Accurate are SuperCOSMOS Positions?

    NASA Astrophysics Data System (ADS)

    Schaefer, Adam; Hunstead, Richard; Johnston, Helen

    2014-02-01

    Optical positions from the SuperCOSMOS Sky Survey have been compared in detail with accurate radio positions that define the second realisation of the International Celestial Reference Frame (ICRF2). The comparison was limited to the IIIaJ plates from the UK/AAO and Oschin (Palomar) Schmidt telescopes. A total of 1373 ICRF2 sources was used, with the sample restricted to stellar objects brighter than BJ = 20 and Galactic latitudes |b| > 10°. Position differences showed an rms scatter of 0.16 arcsec in right ascension and declination. While overall systematic offsets were < 0.1 arcsec in each hemisphere, both the systematics and scatter were greater in the north.

  11. Accurate adiabatic correction in the hydrogen molecule

    SciTech Connect

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-14

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10^-12 at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H2, HD, HT, D2, DT, and T2 has been determined. For the ground state of H2 the estimated precision is 3 × 10^-7 cm^-1, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.

  12. Accurate adiabatic correction in the hydrogen molecule

    NASA Astrophysics Data System (ADS)

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-01

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10^-12 at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H2, HD, HT, D2, DT, and T2 has been determined. For the ground state of H2 the estimated precision is 3 × 10^-7 cm^-1, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.

  13. MEMS accelerometers in accurate mount positioning systems

    NASA Astrophysics Data System (ADS)

    Mészáros, László; Pál, András.; Jaskó, Attila

    2014-07-01

    In order to attain precise, accurate and stateless positioning of telescope mounts we apply microelectromechanical accelerometer systems (also known as MEMS accelerometers). In common practice, feedback on the mount position is provided by electronic, optical or magneto-mechanical systems, or via a real-time astrometric solution based on the acquired images. MEMS-based systems are completely independent of these mechanisms. Our goal is to investigate the advantages and challenges of applying such devices and to reach the sub-arcminute range, which is well below the field of view of conventional imaging telescope systems. We present how this sub-arcminute accuracy can be achieved with very cheap MEMS sensors. Basically, these sensors yield raw output within an accuracy of a few degrees. We show what kind of calibration procedures can exploit spherical and cylindrical constraints between accelerometer output channels in order to achieve the previously mentioned accuracy level. We also demonstrate how our implementation can be inserted in a telescope control system. Although this attainable precision is less than both the resolution of telescope mount drive mechanics and the accuracy of astrometric solutions, the independent nature of attitude determination could significantly increase the reliability of autonomous or remotely operated astronomical observations.
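
    A minimal sketch of the spherical-constraint idea (synthetic data; not the authors' calibration pipeline): in any static pose the calibrated acceleration magnitude must equal 1 g, so per-axis scales and biases can be fit from raw readings taken in many orientations, without any external reference.

        import numpy as np
        from scipy.optimize import least_squares

        def calibrate(raw, g=9.81):
            # Fit per-axis scale s and bias b so that ||s*(raw - b)|| = g for
            # every static reading (raw is an (N, 3) array, N >= 9 poses).
            def residual(p):
                s, b = p[:3], p[3:]
                return np.linalg.norm(s * (raw - b), axis=1) - g
            fit = least_squares(residual, np.r_[np.ones(3), np.zeros(3)])
            return fit.x[:3], fit.x[3:]

        # Synthetic sensor with known scale errors and offsets:
        rng = np.random.default_rng(0)
        dirs = rng.normal(size=(20, 3))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        raw = 9.81 * dirs / np.array([1.02, 0.98, 1.01]) + np.array([0.05, -0.03, 0.10])
        scale, bias = calibrate(raw)
        print(scale, bias)   # recovers ~[1.02, 0.98, 1.01] and ~[0.05, -0.03, 0.10]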

  14. Accurate, reliable prototype earth horizon sensor head

    NASA Technical Reports Server (NTRS)

    Schwarz, F.; Cohen, H.

    1973-01-01

    The design and performance of an accurate and reliable prototype earth sensor head (ARPESH) are described. The ARPESH employs a detection-logic 'locator' concept and horizon sensor mechanization which should lead to high-accuracy horizon sensing that is minimally degraded by spatial or temporal variations in sensing attitude from a satellite in orbit around the earth at altitudes around 500 km. An accuracy of horizon location to within 0.7 km has been predicted, independent of meteorological conditions; this corresponds to an error of 0.015 deg at 500 km altitude. Laboratory evaluation of the sensor indicates that this accuracy is achieved. First, the basic operating principles of ARPESH are described; next, detailed design and construction data are presented; then the performance of the sensor is reported under laboratory conditions, with the sensor installed in a simulator that permits it to scan over a blackbody source against a background representing the earth-space interface for various equivalent planet temperatures.
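
    The quoted figures are mutually consistent under simple spherical geometry; a quick hedged check (earth radius assumed):

        import math

        R, h = 6371.0, 500.0                 # earth radius and orbit altitude, km
        d = math.sqrt((R + h)**2 - R**2)     # slant range to the hard horizon, km
        print(math.degrees(0.7 / d))         # a 0.7 km error subtends ~0.016 deg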

  15. Accurate lineshape spectroscopy and the Boltzmann constant.

    PubMed

    Truong, G-W; Anstie, J D; May, E F; Stace, T M; Luiten, A N

    2015-01-01

    Spectroscopy has an illustrious history delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, including applications from trace materials detection, to understanding the atmospheres of stars and planets, and even constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate measurement of the excited-state (6P1/2) hyperfine splitting in Cs, and reveal a breakdown in the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m., and an uncertainty of 71 p.p.m. PMID:26465085

  16. Fast and Accurate Exhaled Breath Ammonia Measurement

    PubMed Central

    Solga, Steven F.; Mudalel, Matthew L.; Spacek, Lisa A.; Risby, Terence H.

    2014-01-01

    This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic method known as quartz enhanced photoacoustic spectroscopy (QEPAS) that uses a quantum cascade based laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep lung systemic levels. Because the system is easy to use and produces real time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled. Temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced, and clinical studies become more meaningful. This system is very reliable and individual measurements are inexpensive. The sampler is relatively inexpensive and quite portable, but the monitor is neither. This limits options for some clinical studies and provides rationale for future innovations. PMID:24962141

  17. Comparing prediction models for radiographic exposures

    NASA Astrophysics Data System (ADS)

    Ching, W.; Robinson, J.; McEntee, M. F.

    2015-03-01

    During radiographic exposures, the milliampere-seconds (mAs), kilovoltage peak (kVp) and source-to-image distance (SID) can be adjusted for variations in patient thickness. Several exposure adjustment systems have been developed to assist with this selection. This study compares the accuracy of four systems in predicting the required mAs for pelvic radiographs taken on a direct digital radiography (DDR) system. Sixty radiographs were obtained by adjusting mAs to compensate for varying combinations of SID, kVp and patient thickness. The 25% rule, the DuPont Bit System and the DigiBit system were compared to determine which of the three most accurately predicted the mAs required for an increase in patient thickness. Similarly, the 15% rule, the DuPont Bit System and the DigiBit system were compared for an increase in kVp. The exposure index (EI) was used as an indication of exposure to the DDR. For each exposure combination, the mAs was adjusted until an EI of 1500 ± 2% was achieved. The 25% rule was the most accurate at predicting the mAs required for an increase in patient thickness, with 53% of the mAs predictions correct. The DigiBit system was the most accurate at predicting the mAs needed for changes in kVp, with 33% of predictions correct. This study demonstrated that the 25% rule and the DigiBit system were the most accurate predictors of mAs required for an increase in patient thickness and kVp, respectively. The DigiBit system worked well in both scenarios as it is a single exposure adjustment system that considers a variety of exposure factors.
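
    The two rules being compared are simple multiplicative heuristics; a hedged sketch (one common reading of each rule; the study's exact formulations may differ):

        import math

        def mas_for_thickness(mas, extra_cm):
            # 25% rule: increase mAs by 25% for each additional centimetre
            # of patient thickness.
            return mas * 1.25 ** extra_cm

        def mas_for_kvp(mas, kvp_old, kvp_new):
            # 15% rule: a 15% rise in kVp roughly doubles receptor exposure,
            # so the mAs is halved to compensate (and vice versa).
            doublings = math.log(kvp_new / kvp_old) / math.log(1.15)
            return mas / 2.0 ** doublings

        print(mas_for_thickness(10.0, 2))     # ~15.6 mAs for +2 cm
        print(mas_for_kvp(10.0, 70.0, 80.5))  # ~5 mAs after a 15% kVp increase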

  18. Spectroscopic classification of supernova candidates

    NASA Astrophysics Data System (ADS)

    Hodgkin, S. T.; Hall, A.; Fraser, M.; Campbell, H.; Wyrzykowski, L.; Kostrzewa-Rutkowska, Z.; Pietro, N.

    2014-09-01

    We report the spectroscopic classification of four supernovae at the 2.5m Isaac Newton Telescope on La Palma, using the Intermediate Dispersion Spectrograph and the R300V grating (3500-8000 Ang; ~6 Ang resolution).

  19. CLASSIFICATION FRAMEWORK FOR COASTAL SYSTEMS

    EPA Science Inventory

    U.S. Environmental Protection Agency. Classification Framework for Coastal Systems. EPA/600/R-04/061. U.S. Environmental Protection Agency, National Health and Environmental Effects Research Laboratory, Atlantic Ecology Division, Narragansett, RI, Gulf Ecology Division, Gulf Bree...

  20. Classification Schemes: Developments and Survival.

    ERIC Educational Resources Information Center

    Pocock, Helen

    1997-01-01

    Discusses the growth, survival and future of library classification schemes. Concludes that to survive, a scheme must constantly update its policies, and readily adapt itself to accommodate growing disciplines and changing terminology. (AEF)

  1. Ocean acoustic hurricane classification.

    PubMed

    Wilson, Joshua D; Makris, Nicholas C

    2006-01-01

    Theoretical and empirical evidence are combined to show that underwater acoustic sensing techniques may be valuable for measuring the wind speed and determining the destructive power of a hurricane. This is done by first developing a model for the acoustic intensity and mutual intensity in an ocean waveguide due to a hurricane and then determining the relationship between local wind speed and underwater acoustic intensity. From this it is shown that it should be feasible to accurately measure the local wind speed and classify the destructive power of a hurricane if its eye wall passes directly over a single underwater acoustic sensor. The potential advantages and disadvantages of the proposed acoustic method are weighed against those of currently employed techniques. PMID:16454274

  2. Evaluation of the contribution of LiDAR data and postclassification procedures to object-based classification accuracy

    NASA Astrophysics Data System (ADS)

    Styers, Diane M.; Moskal, L. Monika; Richardson, Jeffrey J.; Halabisky, Meghan A.

    2014-01-01

    Object-based image analysis (OBIA) is becoming an increasingly common method for producing land use/land cover (LULC) classifications in urban areas. In order to produce the most accurate LULC map, LiDAR data and postclassification procedures are often employed, but their relative contributions to accuracy are unclear. We examined the contribution of LiDAR data and postclassification procedures to increasing classification accuracy over using imagery alone, and assessed sources of error along an ecologically complex urban-to-rural gradient in Olympia, Washington. Overall classification accuracy and user's and producer's accuracies for individual classes were evaluated. The addition of LiDAR data to the OBIA classification resulted in an 8.34% increase in overall accuracy, while manual postclassification of the imagery+LiDAR classification improved accuracy by only an additional 1%. Sources of error in this classification were largely due to edge effects, from which multiple different types of errors result.
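
    The three accuracy figures reported here all come from the classification confusion matrix; a minimal sketch (rows taken as predicted classes and columns as reference labels — conventions vary):

        import numpy as np

        def accuracies(cm):
            cm = np.asarray(cm, dtype=float)
            overall = np.trace(cm) / cm.sum()         # all correct / all samples
            users = np.diag(cm) / cm.sum(axis=1)      # 1 - commission error
            producers = np.diag(cm) / cm.sum(axis=0)  # 1 - omission error
            return overall, users, producers

        # Two-class toy example:
        print(accuracies([[80, 20],
                          [10, 90]]))  # overall 0.85, user's [0.8, 0.9], producer's [0.889, 0.818]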

  3. A new classification of necrophilia.

    PubMed

    Aggrawal, Anil

    2009-08-01

    Necrophilia is a paraphilia whereby the perpetrator gets sexual pleasure in having sex with the dead. Most jurisdictions and nations have laws against this practice. Necrophilia exists in many variations, and some authors have attempted to classify necrophilia. However many related terms such as pseudonecrophilia continue being used differently by different authors, necessitating the introduction of a new classification system. The classification system suggested by the author attempts to put all different shades of necrophilia under 10 classes. PMID:19573840

  4. Successional stage of biological soil crusts: an accurate indicator of ecohydrological condition

    USGS Publications Warehouse

    Belnap, Jayne; Wilcox, Bradford P.; Van Scoyoc, Matthew V.; Phillips, Susan L.

    2013-01-01

    Biological soil crusts are a key component of many dryland ecosystems. Following disturbance, biological soil crusts will recover in stages. Recently, a simple classification of these stages has been developed, largely on the basis of external features of the crusts, which reflects their level of development (LOD). The classification system has six LOD classes, from low (1) to high (6). To determine whether the LOD of a crust is related to its ecohydrological function, we used rainfall simulation to evaluate differences in infiltration, runoff, and erosion among crusts in the various LODs, across a range of soil depths and with different wetting pre-treatments. We found large differences between the lowest and highest LODs, with runoff and erosion being greatest from the lowest LOD. Under dry antecedent conditions, about 50% of the water applied ran off the lowest LOD plots, whereas less than 10% ran off the plots of the two highest LODs. Similarly, sediment loss was 400 g m-2 from the lowest LOD and almost zero from the higher LODs. We scaled up the results from these simulations using the Rangeland Hydrology and Erosion Model. Modelling results indicate that erosion increases dramatically as slope length and gradient increase, especially beyond the threshold values of 10 m for slope length and 10% for slope gradient. Our findings confirm that the LOD classification is a quick, easy, nondestructive, and accurate index of hydrological condition and should be incorporated in field and modelling assessments of ecosystem health.

  5. Characterizing the toxicity of pulsed selenium exposure to Daphnia magna.

    PubMed

    Hoang, Tham C; Klaine, Stephen J

    2008-03-01

    The acute toxicity of selenium (Se) to aquatic biota has been studied extensively for decades. However, most studies have used a constant-concentration aqueous exposure of Se to an invertebrate species. Since constant-concentration exposure to toxicants is unusual in the environment, episodic or pulsed exposures may represent the true risk to aquatic biota more accurately. This research was designed to characterize the toxic effects of pulsed Se exposure to Daphnia magna. Selenium exposure was varied during a 21-d chronic toxicity test to examine the effects of exposure concentration, duration, and recovery on survival, growth, and reproduction of D. magna. While D. magna did not die during exposures, latent mortality was observed, and it increased with exposure concentration and duration. Hence, standard toxicity tests using continuous exposures would underestimate Se toxicity, and risk assessments based on their results would underestimate the risk of Se to biota. For double-pulse exposures, cumulative mortality on day 21 was higher when the time interval between pulses was shorter. With the same total exposure time, continuous exposure caused higher toxicity than pulsed exposures did, owing to recovery and tolerance development in D. magna after earlier pulses. Growth and reproduction of surviving D. magna were not affected by pulsed Se exposure, owing to recovery of D. magna after removal of the pulses. Based on these results, risk assessment for Se should take latent effects and recovery into account. PMID:18190947

  6. Classification of the acanthocephala.

    PubMed

    Amin, Omar M

    2013-09-01

    In 1985, Amin presented a new system for the classification of the Acanthocephala in Crompton and Nickol's (1985) book 'Biology of the Acanthocephala' and recognized the concepts of Meyer (1931, 1932, 1933) and Van Cleave (1936, 1941, 1947, 1948, 1949, 1951, 1952). This system became the standard for the taxonomy of this group and remains so to date. Many changes have taken place and many new genera and species, as well as higher taxa, have been described since. An updated version of the 1985 scheme incorporating new concepts in molecular taxonomy, gene sequencing and phylogenetic studies is presented. The hierarchy has undergone a total face lift with Amin's (1987) addition of a new class, Polyacanthocephala (and a new order and family) to remove inconsistencies in the class Palaeacanthocephala. Amin and Ha (2008) added a third order (and a new family) to the Palaeacanthocephala, Heteramorphida, which combines features from the palaeacanthocephalan families Polymorphidae and Heteracanthocephalidae. Other families and subfamilies have been added but some have been eliminated, e.g. the three subfamilies of Arythmacanthidae: Arhythmacanthinae Yamaguti, 1935; Neoacanthocephaloidinae Golvan, 1960; and Paracanthocephaloidinae Golvan, 1969. Amin (1985) listed 22 families, 122 genera and 903 species (4, 4 and 14 families; 13, 28 and 81 genera; 167, 167 and 569 species in Archiacanthocephala, Eoacanthocephala and Palaeacanthocephala, respectively). The number of taxa listed in the present treatment is 26 families (18% increase), 157 genera (29%), and 1298 species (44%) (4, 4 and 16; 18, 29 and 106; 189, 255 and 845, in the same order), which also includes 1 family, 1 genus and 4 species in the class Polyacanthocephala Amin, 1987, and 3 genera and 5 species in the fossil family Zhijinitidae. PMID:24261131

  7. Classification of US hydropower dams by their modes of operation

    DOE PAGESBeta

    McManamay, Ryan A.; Oigbokie, II, Clement O.; Kao, Shih -Chieh; Bevelhimer, Mark S.

    2016-02-19

    A key challenge to understanding ecohydrologic responses to dam regulation is the absence of a universally transferable classification framework for how dams operate. In the present paper, we develop a classification system to organize the modes of operation (MOPs) for U.S. hydropower dams and powerplants. To determine the full diversity of MOPs, we mined federal documents, open-access data repositories, and internet sources. We then used CART classification trees to predict MOPs based on physical characteristics, regulation, and project generation. Finally, we evaluated how much variation MOPs explained in sub-daily discharge patterns for stream gages downstream of hydropower dams. After reviewing information for 721 dams and 597 power plants, we developed a 2-tier hierarchical classification based on 1) the storage and control of flows to powerplants, and 2) the presence of a diversion around the natural stream bed. This resulted in nine tier-1 MOPs representing a continuum of operations from strictly peaking, to reregulating, to run-of-river, and two tier-2 MOPs, representing diversion and integral dam-powerhouse configurations. Although MOPs differed in physical characteristics and energy production, classification trees had low accuracies (<62%), which suggests accurate evaluations of MOPs may require individual attention. MOPs and dam storage explained 20% of the variation in downstream subdaily flow characteristics and showed consistent alterations in subdaily flow patterns relative to reference streams. Lastly, this standardized classification scheme is important for future research, including estimating reservoir operations for large-scale hydrologic models and evaluating project economics, environmental impacts, and mitigation.
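
    The CART step is a conventional supervised tree fit; a hedged sketch of that kind of evaluation (placeholder features and labels, not the study's data — scikit-learn's DecisionTreeClassifier implements CART):

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.tree import DecisionTreeClassifier

        # X: per-dam physical/regulatory/generation features (placeholder columns,
        # e.g. storage volume, head, nameplate capacity); y: tier-1 MOP labels.
        rng = np.random.default_rng(0)
        X = rng.random((200, 3))
        y = rng.integers(0, 9, 200)              # nine tier-1 MOP classes
        tree = DecisionTreeClassifier(max_depth=5, random_state=0)
        print(cross_val_score(tree, X, y, cv=5).mean())  # near chance on random data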

  8. CREST--classification resources for environmental sequence tags.

    PubMed

    Lanzén, Anders; Jørgensen, Steffen L; Huson, Daniel H; Gorfer, Markus; Grindhaug, Svenn Helge; Jonassen, Inge; Øvreås, Lise; Urich, Tim

    2012-01-01

    Sequencing of taxonomic or phylogenetic markers is becoming a fast and efficient method for studying environmental microbial communities. This has resulted in a steadily growing collection of marker sequences, most notably of the small-subunit (SSU) ribosomal RNA gene, and an increased understanding of microbial phylogeny, diversity and community composition patterns. However, to utilize these large datasets together with new sequencing technologies, a reliable and flexible system for taxonomic classification is critical. We developed CREST (Classification Resources for Environmental Sequence Tags), a set of resources and tools for generating and utilizing custom taxonomies and reference datasets for classification of environmental sequences. CREST uses an alignment-based classification method with the lowest common ancestor algorithm. It also uses explicit rank similarity criteria to reduce false positives and identify novel taxa. We implemented this method in a web server, a command line tool and the graphical user interfaced program MEGAN. Further, we provide the SSU rRNA reference database and taxonomy SilvaMod, derived from the publicly available SILVA SSURef, for classification of sequences from bacteria, archaea and eukaryotes. Using cross-validation and environmental datasets, we compared the performance of CREST and SilvaMod to the RDP Classifier. We also utilized Greengenes as a reference database, both with CREST and the RDP Classifier. These analyses indicate that CREST performs better than alignment-free methods with higher recall rate (sensitivity) as well as precision, and with the ability to accurately identify most sequences from novel taxa. Classification using SilvaMod performed better than with Greengenes, particularly when applied to environmental sequences. CREST is freely available under a GNU General Public License (v3) from http://apps.cbu.uib.no/crest and http://lcaclassifier.googlecode.com. PMID:23145153
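
    The lowest common ancestor step is the heart of such a classifier: a read is assigned to the deepest taxon shared by all of its accepted reference hits. A minimal sketch (root-to-leaf taxonomy paths assumed as input; not CREST's exact implementation):

        def lowest_common_ancestor(paths):
            # Walk the aligned rank levels and stop at the first disagreement.
            lca = []
            for ranks in zip(*paths):
                if len(set(ranks)) != 1:
                    break
                lca.append(ranks[0])
            return lca

        hits = [["Bacteria", "Proteobacteria", "Gammaproteobacteria", "Vibrio"],
                ["Bacteria", "Proteobacteria", "Gammaproteobacteria", "Escherichia"]]
        print(lowest_common_ancestor(hits))
        # ['Bacteria', 'Proteobacteria', 'Gammaproteobacteria']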

  9. MEASURING DIETARY EXPOSURE OF YOUNG CHILDREN

    EPA Science Inventory

    Young children do not consume foods in a structured manner. Their foods contact surfaces (hands, floors, eating surfaces, etc.) that may be contaminated while they are eating them. Thus, dietary exposures of young children are difficult to accurately assess or measure. A recen...

  10. 37 CFR 2.85 - Classification schedules.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2011-07-01 2011-07-01 false Classification schedules. 2.85 Section 2.85 Patents, Trademarks, and Copyrights UNITED STATES PATENT AND TRADEMARK OFFICE, DEPARTMENT OF COMMERCE RULES OF PRACTICE IN TRADEMARK CASES Classification § 2.85 Classification schedules. (a) International classification system....

  11. 12 CFR 1777.20 - Capital classifications.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 7 2011-01-01 2011-01-01 false Capital classifications. 1777.20 Section 1777... DEVELOPMENT SAFETY AND SOUNDNESS PROMPT CORRECTIVE ACTION Capital Classifications and Orders Under Section 1366 of the 1992 Act § 1777.20 Capital classifications. (a) Capital classifications after the...

  12. 10 CFR 1045.17 - Classification levels.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Classification levels. 1045.17 Section 1045.17 Energy... Restricted Data and Formerly Restricted Data Information § 1045.17 Classification levels. (a) Restricted Data. The Director of Classification shall assign one of the following classification levels to...

  13. 5 CFR 9901.221 - Classification requirements.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... of time following the effective date of the position classification action. For classification... of the reason for the reclassification, the right to appeal the classification decision, and the time... 5 Administrative Personnel 3 2011-01-01 2011-01-01 false Classification requirements....

  14. 28 CFR 345.20 - Position classification.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Position classification. 345.20 Section... INDUSTRIES (FPI) INMATE WORK PROGRAMS Position Classification § 345.20 Position classification. (a) Inmate... the objectives and principles of pay classification as a part of the routine orientation of new...

  15. 12 CFR 1777.20 - Capital classifications.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Capital classifications. 1777.20 Section 1777... DEVELOPMENT SAFETY AND SOUNDNESS PROMPT CORRECTIVE ACTION Capital Classifications and Orders Under Section 1366 of the 1992 Act § 1777.20 Capital classifications. (a) Capital classifications after the...

  16. 32 CFR 2700.22 - Classification guides.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Classification guides. 2700.22 Section 2700.22... SECURITY INFORMATION REGULATIONS Derivative Classification § 2700.22 Classification guides. OMSN shall issue classification guides pursuant to section 2-2 of E.O. 12065. These guides, which shall be used...

  17. 7 CFR 51.2284 - Size classification.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Size classification. 51.2284 Section 51.2284... Size classification. The following classifications are provided to describe the size of any lot... shall conform to the requirements of the specified classification as defined below: (a) Halves....

  18. 7 CFR 28.911 - Review classification.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Review classification. 28.911 Section 28.911... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Cotton Classification and Market News Service for Producers Classification § 28.911 Review classification. (a) A producer may request one...

  19. 32 CFR 2001.10 - Classification standards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Classification standards. 2001.10 Section 2001... Classification § 2001.10 Classification standards. Identifying or describing damage to the national security. Section 1.1(a) of the Order specifies the conditions that must be met when making classification...

  20. 7 CFR 51.2836 - Size classifications.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Size classifications. 51.2836 Section 51.2836...) Size Classifications § 51.2836 Size classifications. The size of onions may be specified in accordance with one of the following classifications. Size designation Minimum diameter Inches Millimeters...

  1. 49 CFR 8.17 - Classification challenges.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 1 2010-10-01 2010-10-01 false Classification challenges. 8.17 Section 8.17 Transportation Office of the Secretary of Transportation CLASSIFIED INFORMATION: CLASSIFICATION/DECLASSIFICATION/ACCESS Classification/Declassification of Information § 8.17 Classification challenges. (a)...

  2. 32 CFR 2400.9 - Classification requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Classification requirements. 2400.9 Section 2400... PROGRAM Original Classification § 2400.9 Classification requirements. (a) Information may be classified... below, and an official having original classification authority determines that its...

  3. 37 CFR 2.85 - Classification schedules.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... (a) International classification system. Section 6.1 of this chapter sets forth the international system of classification for goods and services, which applies for all statutory purposes to: (1... international classification pursuant to § 2.85(e)(3). (b) Prior United States classification system. Section...

  4. Classification of wheat: Badhwar profile similarity technique

    NASA Technical Reports Server (NTRS)

    Austin, W. W.

    1980-01-01

    The Badhwar profile similarity classification technique, used successfully for the classification of corn, was applied to spring wheat classifications. The software programs and the procedures used to generate full-scene classifications are presented, and numerical results of the acreage estimations are given.

  5. 43 CFR 2461.4 - Changing classifications.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 43 Public Lands: Interior 2 2012-10-01 2012-10-01 false Changing classifications. 2461.4 Section 2461.4 Public Lands: Interior Regulations Relating to Public Lands (Continued) BUREAU OF LAND... Multiple-Use Classification Procedures § 2461.4 Changing classifications. Classifications may be...

  6. 43 CFR 2461.4 - Changing classifications.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 43 Public Lands: Interior 2 2014-10-01 2014-10-01 false Changing classifications. 2461.4 Section 2461.4 Public Lands: Interior Regulations Relating to Public Lands (Continued) BUREAU OF LAND... Multiple-Use Classification Procedures § 2461.4 Changing classifications. Classifications may be...

  7. 43 CFR 2461.4 - Changing classifications.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 43 Public Lands: Interior 2 2013-10-01 2013-10-01 false Changing classifications. 2461.4 Section 2461.4 Public Lands: Interior Regulations Relating to Public Lands (Continued) BUREAU OF LAND... Multiple-Use Classification Procedures § 2461.4 Changing classifications. Classifications may be...

  8. 43 CFR 2461.4 - Changing classifications.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 43 Public Lands: Interior 2 2011-10-01 2011-10-01 false Changing classifications. 2461.4 Section 2461.4 Public Lands: Interior Regulations Relating to Public Lands (Continued) BUREAU OF LAND... Multiple-Use Classification Procedures § 2461.4 Changing classifications. Classifications may be...

  9. 32 CFR 2700.22 - Classification guides.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... SECURITY INFORMATION REGULATIONS Derivative Classification § 2700.22 Classification guides. OMSN shall... direct derivative classification, shall identify the information to be protected in specific and uniform terms so that the information involved can be readily identified. The classification guides shall...

  10. 32 CFR 2700.22 - Classification guides.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... SECURITY INFORMATION REGULATIONS Derivative Classification § 2700.22 Classification guides. OMSN shall... direct derivative classification, shall identify the information to be protected in specific and uniform terms so that the information involved can be readily identified. The classification guides shall...

  11. A fast SCOP fold classification system using content-based E-Predict algorithm

    PubMed Central

    Chi, Pin-Hao; Shyu, Chi-Ren; Xu, Dong

    2006-01-01

    Background Domain experts manually construct the Structural Classification of Protein (SCOP) database to categorize and compare protein structures. Even though using the SCOP database is believed to be more reliable than classification results from other methods, it is labor intensive. To mimic human classification processes, we develop an automatic SCOP fold classification system to assign possible known SCOP folds and recognize novel folds for newly-discovered proteins. Results With a sufficient amount of ground truth data, our system is able to assign the known folds for newly-discovered proteins in the latest SCOP v1.69 release with 92.17% accuracy. Our system also recognizes the novel folds with 89.27% accuracy using 10-fold cross-validation. The average response time for proteins with 500 and 1409 amino acids to complete the classification process is 4.1 and 17.4 seconds, respectively. By comparison with several structural alignment algorithms, our approach outperforms previous methods in both classification accuracy and efficiency. Conclusion In this paper, we build an advanced, non-parametric classifier to accelerate the manual classification processes of SCOP. With satisfactory ground truth data from the SCOP database, our approach identifies relevant domain knowledge and yields reasonably accurate classifications. Our system is publicly accessible at . PMID:16872501

  12. Accurate orbit propagation with planetary close encounters

    NASA Astrophysics Data System (ADS)

    Baù, Giulio; Milani Comparetti, Andrea; Guerra, Francesca

    2015-08-01

    We tackle the problem of accurately propagating the motion of those small bodies that undergo close approaches with a planet. The literature is lacking on this topic and the reliability of the numerical results is not sufficiently discussed. The high-frequency components of the perturbation generated by a close encounter make the propagation particularly challenging, both from the point of view of the dynamical stability of the formulation and of the numerical stability of the integrator. In our approach a fixed step-size and order multistep integrator is combined with a regularized formulation of the perturbed two-body problem. When the propagated object enters the region of influence of a celestial body, the latter becomes the new primary body of attraction. Moreover, the formulation and the step-size will also be changed if necessary. We present: 1) the restarter procedure applied to the multistep integrator whenever the primary body is changed; 2) new analytical formulae for setting the step-size (given the order of the multistep, formulation and initial osculating orbit) in order to control the accumulation of the local truncation error and guarantee the numerical stability during the propagation; 3) a new definition of the region of influence in the phase space. We test the propagator with some real asteroids subject to the gravitational attraction of the planets, the Yarkovsky and relativistic perturbations. Our goal is to show that the proposed approach improves the performance both of the propagator implemented in the OrbFit software package (which is currently used by the NEODyS service) and of the propagator represented by a variable step-size and order multistep method combined with Cowell's formulation (i.e. direct integration of position and velocity in either the physical or a fictitious time).
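
    The fixed step-size, fixed-order multistep scheme can be illustrated with a fourth-order Adams-Bashforth step. This is a generic sketch, not the OrbFit or NEODyS integrator; it mainly shows why a restarter is needed when the primary body changes: the multistep history of derivative evaluations becomes invalid and must be rebuilt.

```python
# Hedged sketch of one fixed-step multistep (Adams-Bashforth 4) update.
def ab4_step(f, t, y, h, history):
    """One 4th-order Adams-Bashforth step.

    history holds f evaluated at the three previous steps,
    oldest first: [f(t-3h), f(t-2h), f(t-h)].
    """
    fn = f(t, y)
    y_next = y + (h / 24.0) * (55 * fn - 59 * history[-1]
                               + 37 * history[-2] - 9 * history[-3])
    return y_next, history[1:] + [fn]   # shift the derivative history

# Whenever the primary body of attraction changes, `history` is discarded
# and rebuilt with a self-starting method (e.g. Runge-Kutta) before the
# multistep scheme resumes -- the "restarter" procedure described above.
```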

  13. How flatbed scanners upset accurate film dosimetry.

    PubMed

    van Battum, L J; Huizenga, H; Verdaasdonk, R M; Heukelom, S

    2016-01-21

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. Hereto, LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL), and Gafchromic film (EBT, EBT2, EBT3) was investigated, focused on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in the absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range from 0.25 to 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy increasing up to 14% at the maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% effect for pixels at the extreme lateral position. Light polarization due to film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green and blue channel. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE and, therefore, determination of the LSE per color channel and per dose delivered to the film. PMID:26689962
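
    A correction along the lines the authors call for might look like the following sketch. It is an assumption-laden illustration rather than the paper's method: estimate a per-channel lateral response curve from a scan of a uniformly exposed film, fit it with a low-order polynomial, and divide it out. Because the LSE is dose dependent, a full correction would repeat this per dose level and interpolate.

```python
# Hedged sketch: flatten the lateral scan effect for one color channel.
import numpy as np

def lse_curve(flat_scan, degree=4):
    """flat_scan: 2D array (rows x lateral columns) from a scan of a
    uniformly exposed film, one color channel."""
    profile = flat_scan.mean(axis=0)              # mean response per column
    x = np.arange(profile.size)
    rel = profile / profile[profile.size // 2]    # relative to scanner center
    return np.polyval(np.polyfit(x, rel, degree), x)

def correct_channel(image_channel, curve):
    return image_channel / curve                  # remove the lateral trend
```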

  14. Accurate paleointensities - the multi-method approach

    NASA Astrophysics Data System (ADS)

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade, methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements, and various alternative techniques have been proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results was emphasized. Recently, a non-heating, relative paleointensity technique was proposed - the pseudo-Thellier protocol - which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, so the actual field strength at the time of cooling is reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  15. Important Nearby Galaxies without Accurate Distances

    NASA Astrophysics Data System (ADS)

    McQuinn, Kristen

    2014-10-01

    The Spitzer Infrared Nearby Galaxies Survey (SINGS) and its offspring programs (e.g., THINGS, HERACLES, KINGFISH) have resulted in a fundamental change in our view of star formation and the ISM in galaxies, and together they represent the most complete multi-wavelength data set yet assembled for a large sample of nearby galaxies. These great investments of observing time have been dedicated to the goal of understanding the interstellar medium, the star formation process, and, more generally, galactic evolution at the present epoch. Nearby galaxies provide the basis upon which we interpret the distant universe, and the SINGS sample represents the best studied nearby galaxies. Accurate distances are fundamental to interpreting observations of galaxies. Surprisingly, many of the SINGS spiral galaxies have numerous distance estimates resulting in confusion. We can rectify this situation for 8 of the SINGS spiral galaxies within 10 Mpc at a very low cost through measurements of the tip of the red giant branch. The proposed observations will provide an accuracy of better than 0.1 in distance modulus. Our sample includes such well known galaxies as M51 (the Whirlpool), M63 (the Sunflower), M104 (the Sombrero), and M74 (the archetypal grand design spiral). We are also proposing coordinated parallel WFC3 UV observations of the central regions of the galaxies, rich with high-mass UV-bright stars. As a secondary science goal we will compare the resolved UV stellar populations with integrated UV emission measurements used in calibrating star formation rates. Our observations will complement the growing HST UV atlas of high resolution images of nearby galaxies.

  16. Towards Accurate Application Characterization for Exascale (APEX)

    SciTech Connect

    Hammond, Simon David

    2015-09-01

    Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines and many large capability resources including ASCI Red and RedStorm were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia’s production users/developers.

  17. How flatbed scanners upset accurate film dosimetry

    NASA Astrophysics Data System (ADS)

    van Battum, L. J.; Huizenga, H.; Verdaasdonk, R. M.; Heukelom, S.

    2016-01-01

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. Hereto, LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL), and Gafchromic film (EBT, EBT2, EBT3) was investigated, focused on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in the absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range from 0.25 to 1.1. Measurements were performed in the scanner’s transmission mode, with red-green-blue channels. LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy increasing up to 14% at the maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% effect for pixels at the extreme lateral position. Light polarization due to film and the scanner’s optical mirror system is the main contributor, different in magnitude for the red, green and blue channel. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE and, therefore, determination of the LSE per color channel and per dose delivered to the film.

  18. Development of advanced global cloud classification schemes

    NASA Astrophysics Data System (ADS)

    Konvalin, Chris; Logar, Antonette M.; Lloyd, David; Corwin, Edward; Penaloza, Manuel; Feind, Rand E.; Welch, Ronald M.

    1997-01-01

    The problem of producing polar cloud masks for satellite imagery is an important facet of the research on global warming. For the past three years, our research on this topic has produced a series of classifiers. The first classifier used traditional statistical techniques, and, although the performance was reasonably good, better accuracy and faster classification speeds were desired. Neural network classifiers provided an improvement in both classification speed and accuracy, but a single monolithic network proved difficult to train and was computationally expensive. A decomposition of the neural network into a hierarchical structure provided significant reductions in training time and some increase in accuracy. While this technique produced excellent results, to optimize its performance a minimal feature set and a highly accurate and easily computed switching mechanism must be identified. This paper presents recent developments in these two areas. Landsat Thematic Mapper (TM) data from the arctic and antarctic was used to test the network. A minimal feature set, which defines the elements of the network input vector, is desirable for both improving accuracy and reducing computation. A smaller input vector will reduce the number of weights which must be updated during training and concomitantly reduce training and testing times. Small input vectors are also desirable because of the oft-cited 'curse of dimensionality', which states that the higher the dimension of the problem to be solved, the more difficult it will be for the network to find an acceptable solution. However, it is also known that if a network has insufficient information, it will not be possible to form an appropriate decision surface. In that case, additional features, and additional dimensions, are required. Finding the proper balance can be difficult. Previously, trial and error was used to find a 'good' selection of features for classification. Features were added individually and those which had no

  19. Automatic image classification for the urinoculture screening.

    PubMed

    Andreini, Paolo; Bonechi, Simone; Bianchini, Monica; Garzelli, Andrea; Mecocci, Alessandro

    2016-03-01

    Urinary tract infections (UTIs) are considered to be the most common bacterial infection and, indeed, it is estimated that about 150 million UTIs occur worldwide each year, giving rise to roughly $6 billion in healthcare expenditures and resulting in 100,000 hospitalizations. Nevertheless, it is difficult to carefully assess the incidence of UTIs, since an accurate diagnosis depends both on the presence of symptoms and on a positive urinoculture, whereas in most outpatient settings this diagnosis is made without an ad hoc analysis protocol. On the other hand, in the traditional urinoculture test, a sample of midstream urine is put onto a Petri dish, where a growth medium favors the proliferation of germ colonies. Then, the infection severity is evaluated by visual inspection by a human expert, an error-prone and lengthy process. In this paper, we propose a fully automated system for the urinoculture screening that can provide quick and easily traceable results for UTIs. Based on advanced image processing and machine learning tools, the infection type recognition, together with the estimation of the bacterial load, can be automatically carried out, yielding accurate diagnoses. The proposed AID (Automatic Infection Detector) system provides support during the whole analysis process: first, digital color images of Petri dishes are automatically captured, then specific preprocessing and spatial clustering algorithms are applied to isolate the colonies from the culture ground and, finally, an accurate classification of the infections and their severity evaluation are performed. The AID system speeds up the analysis, contributes to the standardization of the process, allows result repeatability, and reduces the costs. Moreover, the continuous transition between sterile and external environments (typical of the standard analysis procedure) is completely avoided. PMID:26780249

  20. Crop Type Classification Using Vegetation Indices of RapidEye Imagery

    NASA Astrophysics Data System (ADS)

    Ustuner, M.; Sanli, F. B.; Abdikan, S.; Esetlili, M. T.; Kurucu, Y.

    2014-09-01

    Cutting-edge remote sensing technology has a significant role in managing natural resources, as well as in many other Earth-observation applications. Crop monitoring is one of these applications, since remote sensing provides accurate, up-to-date and cost-effective information about crop types at different temporal and spatial resolutions. In this study, the potential use of three different vegetation indices of RapidEye imagery for crop type classification, as well as the effect of each index on classification accuracy, was investigated. The Normalized Difference Vegetation Index (NDVI), the Green Normalized Difference Vegetation Index (GNDVI), and the Normalized Difference Red Edge Index (NDRE) are the three vegetation indices used in this study, since all of them incorporate the near-infrared (NIR) band. RapidEye imagery is highly demanded and preferred for agricultural and forestry applications since it has red-edge and NIR bands. The study area is located in the Aegean region of Turkey. A Radial Basis Function (RBF) kernel was used here for the Support Vector Machines (SVMs) classification. The original bands of the RapidEye imagery were excluded and classification was performed with only the three vegetation indices. The contribution of each index to classification accuracy was also tested with single-band classification. The highest classification accuracy, 87.46%, was obtained using all three vegetation indices; this is higher than the classification accuracy of any dual combination of these vegetation indices. Results demonstrate that NDRE makes the highest contribution to classification accuracy compared to the other vegetation indices, and that RapidEye imagery can achieve satisfactory classification accuracy without its original bands.
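
    The index computation and SVM step can be sketched as follows. The band arrays, SVM hyperparameters and label source are hypothetical; only the three index definitions come from the abstract above.

```python
# Hedged sketch: vegetation indices from RapidEye bands + RBF-kernel SVM.
import numpy as np
from sklearn.svm import SVC

def nd_index(a, b):
    return (a - b) / (a + b + 1e-9)       # generic normalized difference

def index_stack(green, red, red_edge, nir):
    """Each argument: 2D reflectance array of one RapidEye band."""
    ndvi = nd_index(nir, red)             # (NIR - Red) / (NIR + Red)
    gndvi = nd_index(nir, green)          # (NIR - Green) / (NIR + Green)
    ndre = nd_index(nir, red_edge)        # (NIR - RedEdge) / (NIR + RedEdge)
    return np.stack([ndvi, gndvi, ndre], axis=-1).reshape(-1, 3)

# X: index vectors of labeled training pixels, y: crop-type labels (assumed)
# clf = SVC(kernel="rbf", C=10, gamma="scale").fit(X, y)
# crop_map = clf.predict(index_stack(g, r, re, n)).reshape(g.shape)
```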

  1. Environmental exposure measurement in cancer epidemiology

    PubMed Central

    2009-01-01

    Environmental exposures, used in the broadest sense of lifestyle, infections, radiation, natural and man-made chemicals and occupation, are a major cause of human cancer. However, the precise contribution of specific risk factors and their interaction, both with each other and with genotype, continues to be difficult to elucidate. This is partially due to limitations in accurately measuring exposure with the subsequent risk of misclassification. One of the primary challenges of molecular cancer epidemiology therefore is to improve exposure assessment. Progress has been made with biomarkers such as carcinogens and their metabolites, DNA and protein adducts and mutations measured in various tissues and body fluids. Nevertheless, much remains to be accomplished in order to establish aetiology and provide the evidence base for public health decisions. This review considers some of the principles behind the application of exposure biomarkers in cancer epidemiology. It also demonstrates how the same biomarkers can contribute both to establishing the biological plausibility of associations between exposure and disease and be valuable endpoints in intervention studies. The potential of new technologies such as transcriptomics, proteomics and metabonomics to provide a step change in environmental exposure assessment is discussed. An increasing recognition of the role of epigenetic changes in carcinogenesis presents a fresh challenge as alterations in DNA methylation, histone modification and microRNA in response to environmental exposures demand a new generation of exposure biomarker. The overall importance of this area of research is brought into sharp relief by the large prospective cohort studies (e.g. UK Biobank) which need accurate exposure measurement in order to shed light on the complex gene:environment interactions underlying common chronic disorders including cancer. It is suggested that a concerted effort is now required, with appropriate funding, to develop and

  2. HIV classification using coalescent theory

    SciTech Connect

    Zhang, Ming; Letiner, Thomas K; Korber, Bette T

    2008-01-01

    Algorithms for subtype classification and breakpoint detection of HIV-1 sequences are based on a classification system of HIV-1. Hence, their quality depends heavily on this system. Because of how the current HIV-1 nomenclature was created, it contains inconsistencies, for example: the phylogenetic distance between subtypes B and D is remarkably small compared with other pairs of subtypes; in fact, it is more like the distance between a pair of sub-subtypes Robertson et al. (2000); subtypes E and I no longer exist, since they were discovered to be composed of recombinants Robertson et al. (2000); it is currently discussed whether -- instead of CRF02 being a recombinant of subtypes A and G -- subtype G should be designated as a circulating recombinant form (CRF) and CRF02 as a subtype Abecasis et al. (2007); and there are 8 complete and over 400 partial HIV genomes in the LANL database which belong neither to a subtype nor to a CRF (denoted by U). Moreover, the current classification system is somewhat arbitrary, like all complex classification systems that were created manually. To this end, it is desirable to deduce the classification system of HIV systematically by an algorithm. Of course, this problem is not restricted to HIV, but applies to all fast-mutating and recombining viruses. Our work addresses the simpler subproblem of scoring classifications of given input sequences of some virus species (a classification denotes a partition of the input sequences into several subtypes and CRFs). To this end, we reconstruct ancestral recombination graphs (ARGs) of the input sequences under restrictions determined by the given classification. These restrictions are imposed in order to ensure that the reconstructed ARGs do not contradict the classification under consideration. Then, we find the ARG with maximal probability by means of Markov Chain Monte Carlo methods. The probability of the most probable ARG is interpreted as a score for the classification. To our

  3. 7 CFR 27.80 - Fees; review classification, futures classification and supervision.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Fees; review classification, futures classification... COMMODITY STANDARDS AND STANDARD CONTAINER REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Costs of Classification and Micronaire § 27.80 Fees; review classification,...

  4. 7 CFR 27.80 - Fees; review classification, futures classification and supervision.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Fees; review classification, futures classification... COMMODITY STANDARDS AND STANDARD CONTAINER REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Costs of Classification and Micronaire § 27.80 Fees; review classification,...

  5. Satellite image classification using convolutional learning

    NASA Astrophysics Data System (ADS)

    Nguyen, Thao; Han, Jiho; Park, Dong-Chul

    2013-10-01

    A satellite image classification method using Convolutional Neural Network (CNN) architecture is proposed in this paper. As a special case of deep learning, a CNN classifies images without any separate feature extraction step, while other existing classification methods rely on rather complex feature extraction processes. Experiments on a set of satellite image data and the preliminary results show that the proposed classification method can be a promising alternative to existing feature extraction-based schemes in terms of both classification accuracy and classification speed.
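
    A toy CNN of this kind might look like the sketch below. The patch size, channel counts and class count are assumptions, not the authors' architecture; the point is that raw pixels go in and class scores come out, with no hand-crafted feature stage.

```python
# Hedged sketch: a small CNN that classifies satellite image patches.
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    def __init__(self, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):                   # x: (batch, 3, 64, 64)
        x = self.features(x)
        return self.classifier(x.flatten(1))

logits = PatchCNN()(torch.randn(4, 3, 64, 64))  # raw pixels in, class scores out
```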

  6. Advances in Spectral-Spatial Classification of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Fauvel, Mathieu; Tarabalka, Yuliya; Benediktsson, Jon Atli; Chanussot, Jocelyn; Tilton, James C.

    2012-01-01

    Recent advances in spectral-spatial classification of hyperspectral images are presented in this paper. Several techniques are investigated for combining both spatial and spectral information. Spatial information is extracted at the object (set of pixels) level rather than at the conventional pixel level. Mathematical morphology is first used to derive the morphological profile of the image, which includes characteristics about the size, orientation and contrast of the spatial structures present in the image. Then the morphological neighborhood is defined and used to derive additional features for classification. Classification is performed with support vector machines using the available spectral information and the extracted spatial information. Spatial post-processing is next investigated to build more homogeneous and spatially consistent thematic maps. To that end, three presegmentation techniques are applied to define regions that are used to regularize the preliminary pixel-wise thematic map. Finally, a multiple classifier system is defined to produce relevant markers that are exploited to segment the hyperspectral image with the minimum spanning forest algorithm. Experimental results conducted on three real hyperspectral images with different spatial and spectral resolutions and corresponding to various contexts are presented. They highlight the importance of spectral-spatial strategies for the accurate classification of hyperspectral images and validate the proposed methods.
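
    A minimal sketch of the morphological profile, the spatial feature the paper builds on, is given below: openings and closings with structuring elements of increasing size, stacked as per-pixel features. The sizes and the use of a single band are illustrative choices.

```python
# Hedged sketch of a morphological profile on one band/component.
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

def morphological_profile(band, sizes=(3, 5, 7)):
    """band: 2D array, e.g. a principal component of the hyperspectral cube."""
    layers = [band]
    for s in sizes:
        layers.append(grey_opening(band, size=(s, s)))  # suppress bright detail
        layers.append(grey_closing(band, size=(s, s)))  # suppress dark detail
    return np.stack(layers, axis=-1)      # (rows, cols, 1 + 2 * len(sizes))

# The stacked layers are concatenated with the spectral features and fed to
# an SVM, mirroring the spectral-spatial strategy described above.
```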

  7. Airborne lidar intensity calibration and application for land use classification

    NASA Astrophysics Data System (ADS)

    Li, Dong; Wang, Cheng; Luo, She-Zhou; Zuo, Zheng-Li

    2014-11-01

    Airborne Light Detection and Ranging (LiDAR) is an active remote sensing technology that can acquire topographic information efficiently. It records the accurate 3D coordinates of targets and also the signal intensity (the amplitude of the backscattered echoes), which represents the reflectance characteristics of the targets. Intensity data have been used in land use classification, vegetation fractional cover and leaf area index (LAI) estimation. Apart from the reflectance characteristics of the targets, intensity data can also be influenced by many other factors, such as flying height, incident angle, atmospheric attenuation, laser pulse power and laser beam width. It is therefore necessary to calibrate intensity values before further use. In this study, we first analyze the factors affecting LiDAR intensity based on the radar range equation, and then apply an intensity calibration method, which accounts for the sensor-to-target distance and the incident angle, to the laser intensity data over the study area. Finally, the raw LiDAR intensity and the normalized intensity data are each used, along with LiDAR elevation data, for land use classification. The results show that the classification accuracy from the normalized intensity data is higher than that from the raw LiDAR intensity data, and indicate that calibration of LiDAR intensity data is necessary in land use classification applications.
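
    A minimal sketch of the distance and incident-angle normalization implied by the radar range equation is shown below. The reference range and the cosine incidence model are common conventions, assumed here rather than taken from the paper: received intensity falls off with the square of the sensor-to-target distance and with the cosine of the incidence angle.

```python
# Hedged sketch: range- and angle-normalized LiDAR return intensity.
import numpy as np

def normalize_intensity(intensity, sensor_range, incidence_angle,
                        ref_range=1000.0):
    """intensity: raw return amplitude; range in meters; angle in radians."""
    return intensity * (sensor_range / ref_range) ** 2 / np.cos(incidence_angle)
```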

  8. A Hybrid Classification Scheme for Mining Multisource Geospatial Data

    SciTech Connect

    Vatsavai, Raju; Bhaduri, Budhendra L

    2007-01-01

    Supervised learning methods such as Maximum Likelihood (ML) are often used in land cover (thematic) classification of remote sensing imagery. The ML classifier relies exclusively on the spectral characteristics of thematic classes, whose statistical distributions are often overlapping. The spectral response distributions of thematic classes depend on many factors including elevation, soil types, and the atmospheric conditions present at the time of data acquisition. A second problem with statistical classifiers is the requirement of a large number of accurate training samples, which are often costly and time consuming to acquire over large geographic regions. With the increasing availability of geospatial databases, it is possible to exploit the knowledge derived from these ancillary datasets to improve classification accuracies even when the class distributions are highly overlapping. Likewise, newer semi-supervised techniques can be adopted to improve the parameter estimates of the statistical model by utilizing the large number of easily available unlabeled training samples. Unfortunately, there is no convenient multivariate statistical model that can be employed for multisource geospatial databases. In this paper we present a hybrid semi-supervised learning algorithm that effectively exploits freely available unlabeled training samples from multispectral remote sensing images and also incorporates ancillary geospatial databases. We have conducted several experiments on real datasets, and our new hybrid approach shows over 15% improvement in classification accuracy over conventional classification schemes.
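
    The general semi-supervised idea, though not the authors' specific algorithm, can be sketched with scikit-learn's self-training wrapper, in which a probabilistic classifier absorbs unlabeled samples it predicts confidently. The feature and label arrays here are synthetic placeholders.

```python
# Hedged sketch: a few labeled pixels plus many unlabeled ones (-1 marks
# "unlabeled", per scikit-learn's convention) refine a simple classifier.
import numpy as np
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = rng.random((1000, 6))                  # spectral + ancillary features
y = np.full(1000, -1)                      # mostly unlabeled...
y[:50] = rng.integers(0, 4, 50)            # ...with a few labeled samples

model = SelfTrainingClassifier(GaussianNB(), threshold=0.9).fit(X, y)
pred = model.predict(X)                    # labels for every pixel
```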

  9. Automated retinal vessel type classification in color fundus images

    NASA Astrophysics Data System (ADS)

    Yu, H.; Barriga, S.; Agurto, C.; Nemeth, S.; Bauman, W.; Soliz, P.

    2013-02-01

    Automated retinal vessel type classification is an essential first step toward machine-based quantitative measurement of various vessel topological parameters and identifying vessel abnormalities and alterations in cardiovascular disease risk analysis. This paper presents a new and accurate automatic artery and vein classification method developed for arteriolar-to-venular width ratio (AVR) and artery and vein tortuosity measurements in regions of interest (ROI) of 1.5 and 2.5 optic disc diameters from the disc center, respectively. This method includes illumination normalization, automatic optic disc detection and retinal vessel segmentation, feature extraction, and a partial least squares (PLS) classification. Normalized multi-color information, color variation, and multi-scale morphological features are extracted on each vessel segment. We trained the algorithm on a set of 51 color fundus images using manually marked arteries and veins. We tested the proposed method on a previously unseen test set consisting of 42 images. We obtained an area under the ROC curve (AUC) of 93.7% in the ROI of the AVR measurement and an AUC of 91.5% in the ROI of the tortuosity measurement. The proposed AV classification method has the potential to assist automatic cardiovascular disease early detection and risk analysis.
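
    Using PLS as a classifier can be sketched by regressing one-hot artery/vein labels on the per-segment features and taking the larger predicted response. The feature matrix below is a hypothetical stand-in for the color and morphological features described above.

```python
# Hedged sketch: PLS regression used for two-class artery/vein labeling.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X_train = rng.random((200, 12))            # per-segment feature vectors
y_train = rng.integers(0, 2, 200)          # 0 = vein, 1 = artery (assumed)
Y = np.eye(2)[y_train]                     # one-hot encode the two classes

pls = PLSRegression(n_components=4).fit(X_train, Y)
scores = pls.predict(rng.random((10, 12))) # continuous responses per class
labels = scores.argmax(axis=1)             # pick the larger response
```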

  10. Central Texas coastal classification maps - Aransas Pass to Mansfield Channel

    USGS Publications Warehouse

    Morton, Robert A.; Peterson, Russell L.

    2006-01-01

    The primary purpose of the USGS National Assessment of Coastal Change Project is to provide accurate representations of pre-storm ground conditions for areas that are designated high priority because they have dense populations or valuable resources that are at risk from storm waves. A secondary purpose of the project is to develop a geomorphic (land feature) coastal classification that, with only minor modification, can be applied to most coastal regions in the United States. A Coastal Classification Map describing local geomorphic features is the first step toward determining the hazard vulnerability of an area. The Coastal Classification Maps of the National Assessment of Coastal Change Project present ground conditions such as beach width, dune elevations, overwash potential, and density of development. In order to complete a hazard-vulnerability assessment, that information must be integrated with other information, such as prior storm impacts and beach stability. The Coastal Classification Maps provide much of the basic information for such an assessment and represent a critical component of a storm-impact forecasting capability.

  11. Impact of Information based Classification on Network Epidemics.

    PubMed

    Mishra, Bimal Kumar; Haldar, Kaushik; Sinha, Durgesh Nandini

    2016-01-01

    Formulating mathematical models for accurate approximation of malicious propagation in a network is a difficult process because of our inherent lack of understanding of several underlying physical processes that intrinsically characterize the broader picture. The aim of this paper is to understand the impact of available information in the control of malicious network epidemics. A 1-n-n-1 type differential epidemic model is proposed, where the differentiality allows a symptom based classification. This is the first such attempt to add such a classification into the existing epidemic framework. The model is incorporated into a five class system called the DifEpGoss architecture. Analysis reveals an epidemic threshold, based on which the long-term behavior of the system is analyzed. In this work three real network datasets with 22002, 22469 and 22607 undirected edges respectively, are used. The datasets show that classification based prevention given in the model can have a good role in containing network epidemics. Further simulation based experiments are used with a three category classification of attack and defense strengths, which allows us to consider 27 different possibilities. These experiments further corroborate the utility of the proposed model. The paper concludes with several interesting results. PMID:27329348
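
    The threshold behavior that the analysis turns on can be illustrated with a plain SIR model; this is a deliberately simplified sketch, not the paper's 1-n-n-1 differential model with symptom-based classes. An outbreak grows only while the basic reproduction number beta/gamma exceeds 1.

```python
# Hedged sketch: epidemic threshold in a compartmental (SIR) model.
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma):
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

# beta/gamma = 4 > 1, so the infected fraction I grows before burning out.
sol = solve_ivp(sir, (0, 100), [0.99, 0.01, 0.0], args=(0.4, 0.1))
print(f"peak infected fraction: {sol.y[1].max():.2f}")
```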

  12. Impact of Information based Classification on Network Epidemics

    PubMed Central

    Mishra, Bimal Kumar; Haldar, Kaushik; Sinha, Durgesh Nandini

    2016-01-01

    Formulating mathematical models for accurate approximation of malicious propagation in a network is a difficult process because of our inherent lack of understanding of several underlying physical processes that intrinsically characterize the broader picture. The aim of this paper is to understand the impact of available information in the control of malicious network epidemics. A 1-n-n-1 type differential epidemic model is proposed, where the differentiality allows a symptom based classification. This is the first such attempt to add such a classification into the existing epidemic framework. The model is incorporated into a five class system called the DifEpGoss architecture. Analysis reveals an epidemic threshold, based on which the long-term behavior of the system is analyzed. In this work three real network datasets with 22002, 22469 and 22607 undirected edges respectively, are used. The datasets show that classification based prevention given in the model can have a good role in containing network epidemics. Further simulation based experiments are used with a three category classification of attack and defense strengths, which allows us to consider 27 different possibilities. These experiments further corroborate the utility of the proposed model. The paper concludes with several interesting results. PMID:27329348

  13. Impact of Information based Classification on Network Epidemics

    NASA Astrophysics Data System (ADS)

    Mishra, Bimal Kumar; Haldar, Kaushik; Sinha, Durgesh Nandini

    2016-06-01

    Formulating mathematical models for accurate approximation of malicious propagation in a network is a difficult process because of our inherent lack of understanding of several underlying physical processes that intrinsically characterize the broader picture. The aim of this paper is to understand the impact of available information in the control of malicious network epidemics. A 1-n-n-1 type differential epidemic model is proposed, where the differentiality allows a symptom based classification. This is the first such attempt to add such a classification into the existing epidemic framework. The model is incorporated into a five class system called the DifEpGoss architecture. Analysis reveals an epidemic threshold, based on which the long-term behavior of the system is analyzed. In this work three real network datasets with 22002, 22469 and 22607 undirected edges respectively, are used. The datasets show that classification based prevention given in the model can have a good role in containing network epidemics. Further simulation based experiments are used with a three category classification of attack and defense strengths, which allows us to consider 27 different possibilities. These experiments further corroborate the utility of the proposed model. The paper concludes with several interesting results.

  14. Accurate theoretical chemistry with coupled pair models.

    PubMed

    Neese, Frank; Hansen, Andreas; Wennmohs, Frank; Grimme, Stefan

    2009-05-19

    Quantum chemistry has found its way into the everyday work of many experimental chemists. Calculations can predict the outcome of chemical reactions, afford insight into reaction mechanisms, and be used to interpret structure and bonding in molecules. Thus, contemporary theory offers tremendous opportunities in experimental chemical research. However, even with present-day computers and algorithms, we cannot solve the many particle Schrodinger equation exactly; inevitably some error is introduced in approximating the solutions of this equation. Thus, the accuracy of quantum chemical calculations is of critical importance. The affordable accuracy depends on molecular size and particularly on the total number of atoms: for orientation, ethanol has 9 atoms, aspirin 21 atoms, morphine 40 atoms, sildenafil 63 atoms, paclitaxel 113 atoms, insulin nearly 800 atoms, and quaternary hemoglobin almost 12,000 atoms. Currently, molecules with up to approximately 10 atoms can be very accurately studied by coupled cluster (CC) theory, approximately 100 atoms with second-order Møller-Plesset perturbation theory (MP2), approximately 1000 atoms with density functional theory (DFT), and beyond that number with semiempirical quantum chemistry and force-field methods. The overwhelming majority of present-day calculations in the 100-atom range use DFT. Although these methods have been very successful in quantum chemistry, they do not offer a well-defined hierarchy of calculations that allows one to systematically converge to the correct answer. Recently a number of rather spectacular failures of DFT methods have been found-even for seemingly simple systems such as hydrocarbons, fueling renewed interest in wave function-based methods that incorporate the relevant physics of electron correlation in a more systematic way. Thus, it would be highly desirable to fill the gap between 10 and 100 atoms with highly correlated ab initio methods. We have found that one of the earliest (and now

  15. Wait, are you sad or angry? Large exposure time differences required for the categorization of facial expressions of emotion

    PubMed Central

    Du, Shichuan; Martinez, Aleix M.

    2013-01-01

    Facial expressions of emotion are essential components of human behavior, yet little is known about the hierarchical organization of their cognitive analysis. We study the minimum exposure time needed to successfully classify the six classical facial expressions of emotion (joy, surprise, sadness, anger, disgust, fear) plus neutral as seen at different image resolutions (240 × 160 to 15 × 10 pixels). Our results suggest a consistent hierarchical analysis of these facial expressions regardless of the resolution of the stimuli. Happiness and surprise can be recognized after very short exposure times (10–20 ms), even at low resolutions. Fear and anger are recognized the slowest (100–250 ms), even in high-resolution images, suggesting a later computation. Sadness and disgust are recognized in between (70–200 ms). The minimum exposure time required for successful classification of each facial expression correlates with the ability of a human subject to identify it correctly at low resolutions. These results suggest a fast, early computation of expressions represented mostly by low spatial frequencies or global configural cues and a later, slower process for those categories requiring a more fine-grained analysis of the image. We also demonstrate that those expressions that are mostly visible in higher-resolution images are not recognized as accurately. We summarize implications for current computational models. PMID:23509409

  16. Thermal Spore Exposure Vessels

    NASA Technical Reports Server (NTRS)

    Beaudet, Robert A.; Kempf, Michael; Kirschner, Larry

    2006-01-01

    Thermal spore exposure vessels (TSEVs) are laboratory containers designed for use in measuring rates of death or survival of microbial spores at elevated temperatures. A major consideration in the design of a TSEV is minimizing thermal mass in order to minimize heating and cooling times. This is necessary in order to minimize the number of microbes killed before and after exposure at the test temperature, so that the results of the test accurately reflect the effect of the test temperature. A typical prototype TSEV (see figure) includes a flat-bottomed stainless-steel cylinder 4 in. (10.16 cm) long, 0.5 in. (1.27 cm) in diameter, having a wall thickness of 0.010 ± 0.002 in. (0.254 ± 0.051 mm). Microbial spores are deposited in the bottom of the cylinder, then the top of the cylinder is closed with a sterile rubber stopper. Hypodermic needles are used to puncture the rubber stopper to evacuate the inside of the cylinder or to purge the inside of the cylinder with a gas. In a typical application, the inside of the cylinder is purged with dry nitrogen prior to a test. During a test, the lower portion of the cylinder is immersed in a silicone-oil bath that has been preheated to and maintained at the test temperature. Test temperatures up to 220 °C have been used. Because the spores are in direct contact with the thin cylinder wall, they quickly become heated to the test temperature.

  17. Classification of cell death

    PubMed Central

    Kroemer, G; Galluzzi, L; Vandenabeele, P; Abrams, J; Alnemri, ES; Baehrecke, EH; Blagosklonny, MV; El-Deiry, WS; Golstein, P; Green, DR; Hengartner, M; Knight, RA; Kumar, S; Lipton, SA; Malorni, W; Nuñez, G; Peter, ME; Tschopp, J; Yuan, J; Piacentini, M; Zhivotovsky, B; Melino, G

    2009-01-01

    Different types of cell death are often defined by morphological criteria, without a clear reference to precise biochemical mechanisms. The Nomenclature Committee on Cell Death (NCCD) proposes unified criteria for the definition of cell death and of its different morphologies, while formulating several caveats against the misuse of words and concepts that slow down progress in the area of cell death research. Authors, reviewers and editors of scientific periodicals are invited to abandon expressions like ‘percentage apoptosis’ and to replace them with more accurate descriptions of the biochemical and cellular parameters that are actually measured. Moreover, at the present stage, it should be accepted that caspase-independent mechanisms can cooperate with (or substitute for) caspases in the execution of lethal signaling pathways and that ‘autophagic cell death’ is a type of cell death occurring together with (but not necessarily by) autophagic vacuolization. This study details the 2009 recommendations of the NCCD on the use of cell death-related terminology including ‘entosis’, ‘mitotic catastrophe’, ‘necrosis’, ‘necroptosis’ and ‘pyroptosis’. PMID:18846107

  18. LONGITUDINAL COHORT METHODS STUDIES

    EPA Science Inventory

    Accurate exposure classification tools are required to link exposure with health effects in epidemiological studies. Exposure classification for occupational studies is relatively easy compared to predicting residential childhood exposures. Recent NHEXAS (Maryland) study articl...

  19. Featureless classification of light curves

    NASA Astrophysics Data System (ADS)

    Kügler, S. D.; Gianniotis, N.; Polsterer, K. L.

    2015-08-01

    In the era of rapidly increasing amounts of time series data, classification of variable objects has become the main objective of time-domain astronomy. Classification of irregularly sampled time series is particularly difficult because the data cannot be represented naturally as a vector which can be directly fed into a classifier. In the literature, various statistical features serve as vector representations. In this work, we represent time series by a density model. The density model captures all the information available, including measurement errors. Hence, we view this model as a generalization to the static features which directly can be derived, e.g. as moments from the density. Similarity between each pair of time series is quantified by the distance between their respective models. Classification is performed on the obtained distance matrix. In the numerical experiments, we use data from the OGLE (Optical Gravitational Lensing Experiment) and ASAS (All Sky Automated Survey) surveys and demonstrate that the proposed representation performs up to par with the best currently used feature-based approaches. The density representation preserves all static information present in the observational data, in contrast to a less-complete description by features. The density representation is an upper boundary in terms of information made available to the classifier. Consequently, the predictive power of the proposed classification depends on the choice of similarity measure and classifier, only. Due to its principled nature, we advocate that this new approach of representing time series has potential in tasks beyond classification, e.g. unsupervised learning.
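
    The density-model representation can be sketched as follows. The choice of a Gaussian KDE over magnitudes, an L2 distance between densities, and nearest-neighbor voting are illustrative assumptions, since the abstract leaves the similarity measure and classifier open.

```python
# Hedged sketch: represent each light curve by a density, classify on
# pairwise distances between the densities.
import numpy as np
from scipy.stats import gaussian_kde

def density_on_grid(magnitudes, grid):
    return gaussian_kde(magnitudes)(grid)

def distance_matrix(curves, grid):
    """curves: list of 1D magnitude arrays; grid: common evaluation points."""
    dens = [density_on_grid(c, grid) for c in curves]
    n = len(dens)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = np.sqrt(np.trapz((dens[i] - dens[j])**2, grid))
    return D

# 1-NN classification: each unlabeled curve copies the label of the labeled
# curve whose density model is nearest in D.
```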

  20. Similarity-Based Classification in Partially Labeled Networks

    NASA Astrophysics Data System (ADS)

    Zhang, Qian-Ming; Shang, Ming-Sheng; Lü, Linyuan

    Two main difficulties in the problem of classification in partially labeled networks are the sparsity of the known labeled nodes and the inconsistency of label information. To address these two difficulties, we propose a similarity-based method, where the basic assumption is that two nodes are more likely to be categorized into the same class if they are more similar. In this paper, we introduce ten similarity indices based on the network structure. Empirical results on the co-purchase network of political books show that the similarity-based method can, to some extent, overcome these two difficulties and yield more accurate classification than the relational neighbors method, especially when the labeled nodes are sparse. Furthermore, we find that when the information from known labeled nodes is sufficient, the indices considering only local information can perform as well as the global indices while having much lower computational complexity.
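
    A minimal sketch of the basic idea, using the common-neighbours index (one of the simplest local similarity indices) to assign each unlabeled node the class with the largest total similarity to the labeled nodes; the toy graph and labels are illustrative.

      # Similarity-based classification in a partially labeled network
      # using the common-neighbours index (illustrative toy graph).
      import numpy as np

      # 7-node graph: a 4-clique {0,1,2,3} (node 3 unlabeled), a labeled
      # triangle {4,5,6}, and a bridge edge 3-4.
      edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3),
               (4, 5), (4, 6), (5, 6), (3, 4)]
      n = 7
      A = np.zeros((n, n), dtype=int)
      for i, j in edges:
          A[i, j] = A[j, i] = 1

      labels = {0: "A", 1: "A", 2: "A", 4: "B", 5: "B", 6: "B"}

      S = A @ A                     # S[i, j] = number of common neighbours
      np.fill_diagonal(S, 0)

      for node in range(n):
          if node in labels:
              continue
          scores = {}
          for j, lab in labels.items():
              scores[lab] = scores.get(lab, 0) + S[node, j]
          print(f"node {node} -> {max(scores, key=scores.get)}")  # node 3 -> A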

  1. Non-Destructive Classification Approaches for Equilibrated Ordinary Chondrites

    NASA Technical Reports Server (NTRS)

    Righter, K.; Harrington, R.; Schroeder, C.; Morris, R. V.

    2013-01-01

    Classification of meteorites is most effectively carried out by petrographic and mineralogic studies of thin sections, but a rapid and accurate classification technique for the many samples collected in dense collection areas (hot and cold deserts) is of great interest. Oil immersion techniques have been used to classify a large proportion of the US Antarctic meteorite collections since the mid-1980s [1]. This approach has allowed rapid characterization of thousands of samples over time, but nonetheless utilizes a piece of the sample that has been ground to grains or a powder. In order to compare a few non-destructive techniques with the standard approaches, we have characterized a group of chondrites from the Larkman Nunatak region using magnetic susceptibility and Moessbauer spectroscopy.

  2. New decision support tool for acute lymphoblastic leukemia classification

    NASA Astrophysics Data System (ADS)

    Madhukar, Monica; Agaian, Sos; Chronopoulos, Anthony T.

    2012-03-01

    In this paper, we develop a new decision support tool to improve treatment intensity choice in childhood ALL. The developed system includes different methods to accurately measure cell properties in microscope blood film images. The blood images are subjected to a series of pre-processing steps, which include color correlation and contrast enhancement. By performing K-means clustering on the resultant images, the nuclei of the cells under consideration are obtained. Shape features and texture features are then extracted for classification. The system is further tested on the classification of spectra measured from the cell nuclei in blood samples in order to distinguish normal cells from those affected by Acute Lymphoblastic Leukemia. The results show that the proposed system robustly segments and classifies acute lymphoblastic leukemia based on complete microscopic blood images.
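
    A minimal sketch of the K-means segmentation step described above, assuming an RGB blood-film image held in a NumPy array; the cluster count and the darkest-cluster heuristic are illustrative assumptions, not the paper's exact procedure.

      # K-means colour clustering to isolate cell nuclei (illustrative).
      import numpy as np
      from sklearn.cluster import KMeans

      def segment_nuclei(image, n_clusters=3):
          """Cluster pixels by colour; take the darkest cluster as nuclei."""
          h, w, _ = image.shape
          pixels = image.reshape(-1, 3).astype(float)
          km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
          # Stained nuclei are typically the darkest cluster in blood film images.
          darkest = np.argmin(km.cluster_centers_.sum(axis=1))
          return (km.labels_ == darkest).reshape(h, w)

      rng = np.random.default_rng(0)
      toy = rng.integers(0, 256, size=(64, 64, 3))   # stand-in for a blood image
      mask = segment_nuclei(toy)
      print("nucleus pixels:", int(mask.sum()))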

  3. Imaging lexicon for acute pancreatitis: 2012 Atlanta Classification revisited.

    PubMed

    Sureka, Binit; Bansal, Kalpana; Patidar, Yashwant; Arora, Ankur

    2016-02-01

    The original 1992 Atlanta Classification System for acute pancreatitis was revised in 2012 by the Atlanta Working Group, assisted by various national and international societies, through web-based consensus. This revised classification identifies two phases of acute pancreatitis: early and late. Acute pancreatitis can be either oedematous interstitial pancreatitis or necrotizing pancreatitis. Severity of the disease is categorized into three levels: mild, moderately severe and severe, depending upon organ failure and local/systemic complications. According to the type of pancreatitis, collections are further divided into acute peripancreatic fluid collection, pseudocyst, acute necrotic collection, and walled-off necrosis. Insight into the revised terminology is essential for accurate communication of imaging findings. In this review article, we will summarize the updated nomenclature and illustrate corresponding imaging findings using examples. PMID:26224684

  4. ASSESSING CHILDREN'S EXPOSURES TO PESTICIDES: AN IMPORTANT APPLICATION OF THE STOCHASTIC HUMAN EXPOSURE AND DOSE SIMULATION MODEL (SHEDS)

    EPA Science Inventory

    Accurately quantifying human exposures and doses of various populations to environmental pollutants is critical for the Agency to assess and manage human health risks. For example, the Food Quality Protection Act of 1996 (FQPA) requires EPA to consider aggregate human exposure ...

  5. Factors Affecting the Item Parameter Estimation and Classification Accuracy of the DINA Model

    ERIC Educational Resources Information Center

    de la Torre, Jimmy; Hong, Yuan; Deng, Weiling

    2010-01-01

    To better understand the statistical properties of the deterministic inputs, noisy "and" gate cognitive diagnosis (DINA) model, the impact of several factors on the quality of the item parameter estimates and classification accuracy was investigated. Results of the simulation study indicate that the fully Bayes approach is most accurate when the…

  6. Automated Star/Galaxy Classification for Digitized Poss-II

    NASA Astrophysics Data System (ADS)

    Weir, Nicholas; Fayyad, Usama M.; Djorgovski, S.

    1995-06-01

    We describe the automated object classification method implemented in the Sky Image Cataloging and Analysis Tool (SKICAT) and applied to the Digitized Second Palomar Observatory Sky Survey (DPOSS). This classification technique was designed with two purposes in mind: first, to classify objects in DPOSS to the faintest limits of the data; second, to fully generalize to future classification efforts, including classification of galaxies by morphology and improving the existing DPOSS star/galaxy classifiers once a larger volume of data are in hand. To optimize the identification of stars and galaxies in J and F band DPOSS scans, we determined a set of eight highly informative object attributes. In the eight-dimensional space defined by these attributes, we found like objects to be distributed relatively uniformly within and between plates. To infer the rules for distinguishing objects in this, but possibly any other, high-dimensional parameter space, we utilize a machine learning technique known as decision tree induction. Such induction algorithms are able to determine statistical classification rules simply by training on a set of example objects. We used high quality CCD images to determine accurate classifications for those examples in the training set too faint for reliable classification by visual inspection of the plate. Our initial results obtained from a set of four DPOSS fields indicate that we achieve 90% completeness and 10% contamination in our galaxy catalogs down to a magnitude limit of ~19.6^m^ in r and 20.5^m^ in g, within F and J plates, respectively, or an equivalent B_J_ of nearly 21.0^m^. This represents a 0.5^m^-1.0^m^ improvement over results from previous digitized Schmidt plate surveys using comparable plate material. We have also begun applying methods of unsupervised classification to the DPOSS catalogs, allowing the data, rather than the scientist, to suggest the relevant and distinct classes within the sample. Our initial results from
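
    A minimal sketch of decision-tree induction on an eight-attribute object catalog, in the spirit of the approach described above; the synthetic star/galaxy data are purely illustrative.

      # Decision-tree induction for star/galaxy separation (illustrative data).
      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(0)
      n = 500
      X = rng.normal(size=(n, 8))            # eight object attributes per detection
      y = rng.integers(0, 2, n)              # 0 = star, 1 = galaxy
      X[y == 1, :2] += 1.5                   # pretend galaxies are more extended

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
      print("held-out accuracy:", round(tree.score(X_te, y_te), 3))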

  7. Using checklists and algorithms to improve qualitative exposure judgment accuracy.

    PubMed

    Arnold, Susan F; Stenzel, Mark; Drolet, Daniel; Ramachandran, Gurumurthy

    2016-01-01

    Most exposure assessments are conducted without the aid of robust personal exposure data and are based instead on qualitative inputs such as education and experience, training, documentation on the process chemicals, tasks and equipment, and other information. Qualitative assessments determine whether there is any follow-up, and influence the type that occurs, such as quantitative sampling, worker training, and implementing exposure and risk management measures. Accurate qualitative exposure judgments ensure appropriate follow-up that in turn ensures appropriate exposure management. Studies suggest that qualitative judgment accuracy is low. A qualitative exposure assessment Checklist tool was developed to guide the application of a set of heuristics to aid decision making. Practicing hygienists (n = 39) and novice industrial hygienists (n = 8) were recruited for a study evaluating the influence of the Checklist on exposure judgment accuracy. Participants generated 85 pre-training judgments and 195 Checklist-guided judgments. Pre-training judgment accuracy was low (33%) and not statistically significantly different from random chance. A tendency for IHs to underestimate the true exposure was observed. Exposure judgment accuracy improved significantly (p < 0.001) to 63% when aided by the Checklist. Qualitative judgments guided by the Checklist tool were categorically accurate or over-estimated the true exposure by one category 70% of the time. The overall magnitude of exposure judgment precision also improved following training. Fleiss' κ, evaluating inter-rater agreement between novice assessors, was fair to moderate (κ = 0.39). Cohen's weighted and unweighted κ were good to excellent for novice (0.77 and 0.80) and practicing IHs (0.73 and 0.89), respectively. Checklist judgment accuracy was similar to quantitative exposure judgment accuracy observed in studies of similar design using personal exposure measurements, suggesting that the tool could be useful in
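
    The agreement statistics quoted above can be reproduced with standard tooling. A minimal sketch using scikit-learn's cohen_kappa_score, with illustrative judgment categories.

      # Unweighted and linearly weighted Cohen's kappa for two assessors
      # rating the same tasks into ordered exposure categories (illustrative).
      from sklearn.metrics import cohen_kappa_score

      rater_1 = [1, 2, 2, 3, 4, 1, 2, 3, 3, 4]
      rater_2 = [1, 2, 3, 3, 4, 1, 2, 2, 3, 4]

      print("unweighted kappa:", round(cohen_kappa_score(rater_1, rater_2), 2))
      print("weighted kappa:  ",
            round(cohen_kappa_score(rater_1, rater_2, weights="linear"), 2))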

  8. Confined Aerosol Jet in Fiber Classification and Dustiness Measurement

    NASA Astrophysics Data System (ADS)

    Dubey, Prahit

    The focus of this dissertation is the numerical analysis of confined aerosol jets used in fiber classification and dustiness measurement. Of relevance to the present work are two devices, namely, the Baron Fiber Classifier (BFC) and the Venturi Dustiness Tester (VDT). The BFC is a device used to length-separate fibers, important for toxicological research. The Flow Combination Section (FCS) of this device consists of an upstream region, where an aerosol of uncharged fibers is introduced in the form of an annular jet, in-between two sheath flows. Length-separation occurs by dielectrophoresis, downstream of the FCS in the Fiber Classification Section (FClS). In its standard operation, the BFC processes only small quantities of fibers. In order to increase its throughput, higher aerosol flow rates must be considered. The goal of the present investigation is to understand the interaction of sheath and aerosol flows inside the FCS, and to identify possible limits to increasing aerosol flow rates, using Computational Fluid Dynamics (CFD). Simulations involve solution of the Navier-Stokes equations for axisymmetric and 3D models of the FCS for six different flow rates, and a purely aerodynamic treatment of the aerosol jet. The results show that the geometry of the FCS, and the two sheath flows, are successful in preventing the emergence of vortices in the FCS for aerosol-to-sheath flow inlet velocity ratios below ≈ 50. For larger aerosol-to-sheath flow inlet velocity ratios, two vortices are formed, one near the inner cylinder and one near the outer cylinder. The VDT is a novel device for measuring the dustiness of powders, relevant for dust management and controlling hazardous exposure. It uses just 10 mg of the test powder for its operation, during which the powder is aerosolized and turbulently dispersed (Re = 19,900) for 1.5 s into a 5.7-liter chamber; the aerosol is then gently sampled (Re = 2050) for 240 s through two filters located at the chamber top. Pump-driven suction at

  9. On the accurate estimation of gap fraction during daytime with digital cover photography

    NASA Astrophysics Data System (ADS)

    Hwang, Y. R.; Ryu, Y.; Kimm, H.; Macfarlane, C.; Lang, M.; Sonnentag, O.

    2015-12-01

    Digital cover photography (DCP) has emerged as an indirect method to obtain gap fraction accurately. Thus far, however, the intervention of subjectivity, such as determining the camera relative exposure value (REV) and the threshold in the histogram, has hindered accurate computation of gap fraction. Here we propose a novel method that enables us to measure gap fraction accurately during daytime under various sky conditions with DCP. The novel method computes gap fraction using a single unsaturated DCP raw image, which is corrected for scattering effects by canopies, and a sky image reconstructed from the raw-format image. To test the sensitivity of the gap fraction derived with the novel method to diverse REVs, solar zenith angles and canopy structures, we took photos at one-hour intervals between sunrise and midday under dense and sparse canopies with REV 0 to -5. The novel method showed little variation in gap fraction across different REVs in both dense and sparse canopies over a diverse range of solar zenith angles. The perforated-panel experiment, which was used to test the accuracy of the estimated gap fraction, confirmed that the novel method yielded accurate and consistent gap fractions across different hole sizes, gap fractions and solar zenith angles. These findings highlight that the novel method opens new opportunities to estimate gap fraction accurately during daytime from sparse to dense canopies, which will be useful in monitoring LAI precisely and validating satellite remote sensing LAI products efficiently.
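
    Once a photograph has been classified into sky and canopy pixels, the gap fraction itself is a simple ratio. A minimal sketch follows, assuming a threshold on the blue channel of the raw image; this threshold rule stands in for, and is much cruder than, the authors' scattering-correction procedure.

      # Gap fraction as the sky-pixel fraction of a canopy photo (illustrative).
      import numpy as np

      def gap_fraction(blue_channel, threshold):
          """Fraction of pixels classified as sky in an upward-looking photo."""
          sky = blue_channel > threshold
          return sky.mean()

      rng = np.random.default_rng(0)
      # Stand-in for the blue channel of a raw-format canopy image (0..255).
      img = rng.integers(0, 256, size=(480, 640))
      print("gap fraction:", round(gap_fraction(img, threshold=200), 3))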

  10. A Dynamic Classification Approach for Nursing

    PubMed Central

    Hardiker, Nicholas R.; Kim, Tae Youn; Coenen, Amy M.; Jansen, Kay R.

    2011-01-01

    Nursing has a long tradition of classification, stretching back at least 150 years. The introduction of computers into health care towards the end of the 20th Century helped to focus efforts, culminating in the development of a range of standardized classifications. Many of these classifications are still in use today and, while content is periodically updated, the underlying classification structures remain relatively static. In this paper an approach to classification that is relatively new to nursing is presented; an approach that uses formal Web Ontology Language definitions for classes, and computer-based reasoning on those classes, to automatically determine classification structures that more flexibly meet the needs of users. A new proposed classification structure for the International Classification for Nursing Practice is derived under the new approach, to provide a new view on the next release of the classification and to contribute to broader quality improvement processes. PMID:22195109

  11. Learning semantic histopathological representation for basal cell carcinoma classification

    NASA Astrophysics Data System (ADS)

    Gutiérrez, Ricardo; Rueda, Andrea; Romero, Eduardo

    2013-03-01

    Diagnosis of a histopathology glass slide is a complex process that involves accurate recognition of several structures, their function in the tissue and their relation with other structures. The way in which the pathologist represents the image content and the relations between those objects yields better and more accurate diagnoses. Therefore, an appropriate semantic representation of the image content will be useful in several analysis tasks such as cancer classification, tissue retrieval and histopathological image analysis, among others. Nevertheless, automatically recognizing those structures and extracting their inner semantic meaning are still very challenging tasks. In this paper we introduce a new semantic representation that allows histopathological concepts suitable for classification to be described. The approach herein identifies local concepts using a dictionary learning approach, i.e., the algorithm learns the most representative atoms from a set of randomly sampled patches, and then models the spatial relations among them by counting the co-occurrence between atoms, while penalizing the spatial distance. The proposed approach was compared with a bag-of-features representation in a tissue classification task. For this purpose, 240 histological microscopical fields of view, 24 per tissue class, were collected. Those images fed a Support Vector Machine classifier per class, using 120 images as the training set and the remaining ones for testing, maintaining the same proportion of each concept in the training and test sets. The classification results, averaged over 100 random partitions of training and test sets, show that our approach is on average more sensitive than the bag-of-features representation by almost 6%.

  12. The revised classification of eukaryotes

    PubMed Central

    Adl, Sina M.; Simpson, Alastair. G.; Lane, Christopher E.; Lukeš, Julius; Bass, David; Bowser, Samuel S.; Brown, Matt; Burki, Fabien; Dunthorn, Micah; Hampl, Vladimir; Heiss, Aaron; Hoppenrath, Mona; Lara, Enrique; leGall, Line; Lynn, Denis H.; McManus, Hilary; Mitchell, Edward A. D.; Mozley-Stanridge, Sharon E.; Parfrey, Laura Wegener; Pawlowski, Jan; Rueckert, Sonja; Shadwick, Lora; Schoch, Conrad; Smirnov, Alexey; Spiegel, Frederick W.

    2012-01-01

    This revision of the classification of eukaryotes, which updates that of Adl et al. (2005), retains an emphasis on the protists and incorporates changes since 2005 that have resolved nodes and branches in phylogenetic trees. Whereas the previous revision was successful in re-introducing name stability to the classification, this revision provides a classification for lineages that were then still unresolved. The supergroups have withstood phylogenetic hypothesis testing with some modifications, but despite some progress, problematic nodes at the base of the eukaryotic tree still remain to be statistically resolved. Looking forward, subsequent transformations to our understanding of the diversity of life will be from the discovery of novel lineages in previously under-sampled areas and from environmental genomic information. PMID:23020233

  13. Classification of road surface profiles

    SciTech Connect

    Rouillard, V.; Bruscella, B.; Sek, M.

    2000-02-01

    This paper introduces a universal classification methodology for discretely sampled sealed bituminous road profile data for the study of shock and vibrations related to the road transportation process. Data representative of a wide variety of Victorian (Australia) road profiles were used to develop a universal classification methodology, with special attention to their non-Gaussian and nonstationary properties. This resulted in the design of computer software to automatically detect and extract transient events from the road spatial acceleration data, as well as to identify segments of constant RMS level, enabling transients to be analyzed separately from the underlying road process. Nine universal classification parameters are introduced to describe road profile spatial acceleration, based on the statistical characteristics of the transient amplitudes and stationary RMS segments. Results from this study are aimed at the areas of road transport simulation as well as road surface characterization.
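
    A minimal sketch of the core signal-processing idea, separating transient events from segments of roughly constant RMS level; the window length and detection threshold are illustrative, not the paper's parameters.

      # Flag transients as samples far above the local RMS level (illustrative).
      import numpy as np

      def moving_rms(x, window=256):
          """Root-mean-square level of x in a sliding window."""
          kernel = np.ones(window) / window
          return np.sqrt(np.convolve(x ** 2, kernel, mode="same"))

      rng = np.random.default_rng(0)
      accel = rng.normal(0, 1.0, 10_000)      # spatial acceleration of the profile
      accel[4_000:4_010] += 12.0              # an embedded transient event

      rms = moving_rms(accel)
      transients = np.abs(accel) > 4 * rms    # samples far above local RMS
      print("transient samples:", int(transients.sum()))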

  14. Galaxy Classification without Feature Extraction

    NASA Astrophysics Data System (ADS)

    Polsterer, K. L.; Gieseke, F.; Kramer, O.

    2012-09-01

    The automatic classification of galaxies according to the different Hubble types is a widely studied problem in the field of astronomy. The complexity of this task led to projects like Galaxy Zoo, which try to obtain labeled data based on visual inspection by humans. Many automatic classification frameworks are based on artificial neural networks (ANN) in combination with a feature extraction step in the pre-processing phase. These approaches rely on labeled catalogs for training the models. The small size of the typically used training sets, however, limits the generalization performance of the resulting models. In this work, we present a straightforward application of support vector machines (SVM) to this type of classification task. The conducted experiments indicate that using a sufficient number of labeled objects provided by the EFIGI catalog leads to high-quality models. In contrast to standard approaches, no additional feature extraction is required.

  15. Classification systems for stalking behavior.

    PubMed

    Racine, Christopher; Billick, Stephen

    2014-01-01

    Stalking is a complex behavioral phenomenon that is unique in that it necessarily involves a prolonged dyadic relationship between a perpetrator and a victim. Since the criminalization of stalking behavior in the 1990s, different conceptual typologies have attempted to classify this behavior to assess risk and aid in management decisions. The authors reviewed the current literature regarding the most recent and accepted stalking classification systems. The three predominant stalker typologies currently in use are Zona's stalker-victim types, Mullen's stalker typology, and the RECON stalker typology. Of these, the RECON classification system alone was developed in an attempt to separate stalkers into groups based on previously known risk factors for behaviorally based phenomena, such as propensity for violence. Understanding and simplifying these classification systems may enhance the potential that new research will lead to evidence-based management and treatment strategies in the stalking situation. PMID:23980606

  16. Machine learning in soil classification.

    PubMed

    Bhattacharya, B; Solomatine, D P

    2006-03-01

    In a number of engineering problems, e.g. in geotechnics and petroleum engineering, intervals of measured series data (signals) are to be attributed a class while maintaining the constraint of contiguity, and standard classification methods can be inadequate. Classification in this case needs the involvement of an expert who observes the magnitude and trends of the signals in addition to any a priori information that might be available. In this paper, an approach for automating this classification procedure is presented. Firstly, a segmentation algorithm is developed and applied to segment the measured signals. Secondly, the salient features of these segments are extracted using the boundary energy method. Classifiers employing Decision Trees, ANN and Support Vector Machines are then built to assign classes to the segments, based on the measured data and extracted features. The methodology was tested in classifying sub-surface soil using measured data from Cone Penetration Testing, and satisfactory results were obtained. PMID:16530382

  17. Classification of hyperlipidaemias and hyperlipoproteinaemias*

    PubMed Central

    1970-01-01

    Many studies of atherosclerosis have indicated hyperlipidaemia as a predisposing factor to vascular disease. The relationship holds even for mild degrees of hyperlipidaemia, a fact that underlines the importance of this category of disorders. Both primary and secondary hyperlipidaemias represent such a variety of abnormalities that an internationally acceptable provisional classification is highly desirable in order to facilitate communication between scientists with different backgrounds. The present memorandum presents such a classification; it briefly describes the criteria for diagnosis of the main types of hyperlipidaemia as well as the methods of their determination. Because lipoproteins offer more information than analysis of plasma lipids (most of the plasma lipids being bound to various proteins), the classification is based on lipoprotein analyses by electrophoresis and ultracentrifugation. Simpler methods, however, such as the observation of plasma and measurements of cholesterol and triglycerides, are used to the fullest possible extent in determining the lipoprotein patterns. PMID:4930042

  18. Methods of training set construction: Towards improving performance for automated mesozooplankton image classification systems

    NASA Astrophysics Data System (ADS)

    Chang, Chun-Yi; Ho, Pei-Chi; Sastri, Akash R.; Lee, Yu-Ching; Gong, Gwo-Ching; Hsieh, Chih-hao

    2012-03-01

    The correspondence between variation in the physico-chemical properties of the water column and the taxonomic composition of zooplankton communities represents an important indicator of long-term and broad-scale change in marine systems. Evaluating and relating compositional change to various forms of perturbation demand routine taxonomic identification methods that can be applied rapidly and accurately. Traditional identification by human experts is accurate but very time-consuming. The application of automated image classification systems for plankton communities has emerged as a potential resolution to this limitation. The objective of this study is to evaluate how specific aspects of training set construction for the ZooScan system influenced our ability to relate variation in zooplankton taxonomic composition to variation in hydrographic properties in the East China Sea. Specifically, we compared the relative utility of zooplankton classifiers trained with the following: (i) water mass-specific and global training sets; (ii) balanced versus imbalanced training sets. The classification performance (accuracy and precision) of water mass-specific classifiers tended to decline with environmental dissimilarity, suggesting water-mass specificity. However, similar classification performance was also achieved by training our system with samples representing all hydrographic sub-regions (i.e. a global classifier). After examining category-specific accuracy, we found that equal performance arises because the accuracy was mainly determined by the dominant taxa. This apparently high classification accuracy was at the expense of accurate classification of rare taxa. To explore the basis for such biased classification, we trained our global classifier with an equal amount of training data for each category (balanced training). We found that balanced training had higher accuracy in recognizing rare taxa but lower accuracy for abundant taxa. The errors introduced in recognition still
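
    A minimal sketch of balanced training-set construction, resampling each taxon to the same size before training; the taxon names, counts and per-class target are illustrative.

      # Balance an imbalanced training set by per-class resampling (illustrative).
      import numpy as np
      from sklearn.utils import resample

      rng = np.random.default_rng(0)
      X = rng.normal(size=(1_000, 10))
      y = np.array(["copepod"] * 800 + ["chaetognath"] * 150 + ["larvacean"] * 50)

      per_class = 100
      X_bal, y_bal = [], []
      for taxon in np.unique(y):
          Xc, yc = resample(X[y == taxon], y[y == taxon], replace=True,
                            n_samples=per_class, random_state=0)
          X_bal.append(Xc)
          y_bal.append(yc)
      X_bal = np.vstack(X_bal)
      y_bal = np.concatenate(y_bal)
      print(dict(zip(*np.unique(y_bal, return_counts=True))))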

  19. [Classification of periprosthetic shoulder fractures].

    PubMed

    Kirchhoff, C; Kirchhoff, S; Biberthaler, P

    2016-04-01

    The key targets in the treatment of periprosthetic humeral fractures (PHF) are the preservation of bone, successful bony consolidation and provision of stable anchoring of the prosthesis, with the major goal of restoring shoulder-arm function. A substantial problem of periprosthetic shoulder fractures is the fact that treatment is determined not only by the fracture itself but also by the implanted prosthesis and its function. Consequently, the exact preoperative shoulder function and, in the case of an implanted anatomical prosthesis, the status and function of the rotator cuff need to be assessed in order to clarify the possibility of a secondarily occurring malfunction. Of equal importance in this context is the type of implanted prosthesis. The existing classification systems of Wright and Cofield, Campbell et al., Groh et al. and Worland et al. have several drawbacks from a shoulder surgeon's point of view, such as a missing reference to the great variability of the available prostheses and the lack of an evaluation of rotator cuff function. The presented 6-stage classification for the evaluation of periprosthetic fractures of the shoulder can be considered just as simple, or as complex, to understand as the classification of the AO (Arbeitsgemeinschaft für Osteosynthesefragen), depending on the viewpoint. From our point of view the classification presented here encompasses the essential points of the existing classification systems and also covers the otherwise missing points which should be considered in the assessment of such periprosthetic fractures. It should provide helpful assistance in daily routine for finding the most suitable form of therapy. PMID:26992712

  20. A comparative study on manifold learning of hyperspectral data for land cover classification

    NASA Astrophysics Data System (ADS)

    Ozturk, Ceyda Nur; Bilgin, Gokhan

    2015-03-01

    This paper focuses on the land cover classification problem by employing a number of manifold learning algorithms in the feature extraction phase and then running single and ensemble classifiers in the modeling phase. Manifolds are learned on training samples selected randomly within the available data, while the transformation of the remaining test samples is realized, for linear and nonlinear methods respectively, via the learnt mappings and a radial-basis-function neural network based interpolation method. The classification accuracies of the original data and the embedded manifolds are investigated with several classifiers. Experimental results on a 200-band hyperspectral image indicated that the support vector machine was the best classifier for most of the methods, being nearly as accurate as the best classification rate of the original data. Furthermore, our modified version of the random subspace classifier could even outperform the classification accuracy of the original data for the local Fisher's discriminant analysis method, despite a considerable decrease in the extrinsic dimension.
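
    A minimal sketch of the overall experiment shape: classify on the original bands, then on a learned manifold embedding. Isomap's built-in out-of-sample transform is used here in place of the paper's radial-basis-function interpolation, and the synthetic 200-band data are illustrative.

      # Compare SVM accuracy on raw bands vs. a manifold embedding (illustrative).
      import numpy as np
      from sklearn.manifold import Isomap
      from sklearn.model_selection import train_test_split
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X = rng.normal(size=(600, 200))          # 200-band pixels
      y = rng.integers(0, 4, 600)              # four land-cover classes
      X[y == 1, :20] += 2.0                    # give one class a spectral signature

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      svm = SVC().fit(X_tr, y_tr)
      print("original bands:", round(svm.score(X_te, y_te), 3))

      iso = Isomap(n_components=10).fit(X_tr)  # learn manifold on training pixels
      svm_m = SVC().fit(iso.transform(X_tr), y_tr)
      print("manifold:      ", round(svm_m.score(iso.transform(X_te), y_te), 3))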

  1. Marker-Based Hierarchical Segmentation and Classification Approach for Hyperspectral Imagery

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.; Benediktsson, Jon Atli; Chanussot, Jocelyn

    2011-01-01

    The Hierarchical SEGmentation (HSEG) algorithm, which is a combination of hierarchical step-wise optimization and spectral clustering, has given good performances for hyperspectral image analysis. This technique produces at its output a hierarchical set of image segmentations. The automated selection of a single segmentation level is often necessary. We propose and investigate the use of automatically selected markers for this purpose. In this paper, a novel Marker-based HSEG (M-HSEG) method for spectral-spatial classification of hyperspectral images is proposed. First, pixelwise classification is performed and the most reliably classified pixels are selected as markers, with the corresponding class labels. Then, a novel constrained marker-based HSEG algorithm is applied, resulting in a spectral-spatial classification map. The experimental results show that the proposed approach yields accurate segmentation and classification maps, and thus is attractive for hyperspectral image analysis.

  2. Classification of red wines using suitable markers coupled with multivariate statistic analysis.

    PubMed

    Geana, Elisabeta Irina; Popescu, Raluca; Costinel, Diana; Dinca, Oana Romina; Ionete, Roxana Elena; Stefanescu, Ioan; Artem, Victoria; Bala, Camelia

    2016-02-01

    Methodologies are reported for the chemometric classification of five authentic red wine varieties, young and aged, from the Murfatlar wine center, Romania. The discriminant analysis based on several anthocyanins, organic acids, (13)C/(12)C, (18)O/(16)O and D/H isotopic ratios, and (1)H and (13)C NMR fingerprints revealed a very satisfactory categorization of the wines, both in terms of variety and vintage, thus illustrating the validity of the selected variables for wine authentication purposes. LDA applied to the combined data shows 85.7% classification of wines according to grape variety and 71.1% classification according to vintage year, including a control wine set for each categorization, thus allowing an accurate interpretation of the data. Overall, anthocyanins, certain anthocyanin ratios, oxalic, shikimic, lactic, citric and succinic acids, sugars like glucose, amino acids like histidine, leucine, isoleucine and alanine, and also 2,3-butanediol, methanol, glycerol and isotopic variables were significant for the classification of wines. PMID:26304442

  3. Rapid automated classification of anesthetic depth levels using GPU based parallelization of neural networks.

    PubMed

    Peker, Musa; Şen, Baha; Gürüler, Hüseyin

    2015-02-01

    The effect of anesthesia on the patient is referred to as depth of anesthesia. Rapid classification of the appropriate depth level of anesthesia is a matter of great importance in surgical operations. Similarly, accelerating classification algorithms is important for the rapid solution of problems in the field of biomedical signal processing. However, numerous time-consuming mathematical operations are required during the training and testing stages of classification algorithms, especially in neural networks. In this study, to accelerate the process, the parallel programming and computing platform Nvidia CUDA, which facilitates dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU), was utilized. The system was employed to detect anesthetic depth level on a related electroencephalogram (EEG) dataset. This dataset is rather complex and large. Moreover, resolving more anesthetic levels with a rapid response is critical in anesthesia. The proposed parallelization method yielded highly accurate classification results in less time. PMID:25650073

  4. Semantic classification of business images

    NASA Astrophysics Data System (ADS)

    Erol, Berna; Hull, Jonathan J.

    2006-01-01

    Digital cameras are becoming increasingly common for capturing information in business settings. In this paper, we describe a novel method for classifying images into the following semantic classes: document, whiteboard, business card, slide, and regular images. Our method is based on combining low-level image features, such as text color, layout, and handwriting features with high-level OCR output analysis. Several Support Vector Machine Classifiers are combined for multi-class classification of input images. The system yields 95% accuracy in classification.

  5. Deep learning for image classification

    NASA Astrophysics Data System (ADS)

    McCoppin, Ryan; Rizki, Mateen

    2014-06-01

    This paper provides an overview of deep learning and introduces several of its subfields, including a tutorial on convolutional neural networks. Traditional methods for learning image features are compared to deep learning techniques. In addition, we present our preliminary classification results and our basic implementation of a convolutional restricted Boltzmann machine on the Modified National Institute of Standards and Technology database (MNIST), and we explain how to use deep learning networks to assist in our development of a robust gender classification system.

  6. Automated Classification of Pathology Reports.

    PubMed

    Oleynik, Michel; Finger, Marcelo; Patrão, Diogo F C

    2015-01-01

    This work develops an automated classifier of pathology reports which infers the topography and morphology classes of a tumor using codes from the International Classification of Diseases for Oncology (ICD-O). Data from 94,980 patients of the A.C. Camargo Cancer Center were used for training and validation of Naive Bayes classifiers, evaluated by the F1-score. Measures greater than 74% in the topographic group and 61% in the morphologic group are reported. Our work provides a successful baseline for future research on the classification of medical documents written in Portuguese and in other domains. PMID:26262339
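
    A minimal sketch of a Naive Bayes report classifier evaluated with the F1-score; the tiny English two-class corpus merely stands in for the Portuguese ICD-O-coded reports used in the study.

      # Bag-of-words Naive Bayes for report topography (illustrative corpus).
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.metrics import f1_score
      from sklearn.naive_bayes import MultinomialNB

      reports = ["infiltrating ductal carcinoma of the breast",
                 "adenocarcinoma of the colon with invasion",
                 "ductal carcinoma in situ, breast biopsy",
                 "colon polyp with adenocarcinoma focus"]
      topography = ["breast", "colon", "breast", "colon"]

      vec = CountVectorizer()
      X = vec.fit_transform(reports)
      clf = MultinomialNB().fit(X, topography)

      pred = clf.predict(X)
      print("F1:", f1_score(topography, pred, average="macro"))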

  7. A classification of ecological boundaries

    USGS Publications Warehouse

    Strayer, D.L.; Power, M.E.; Fagan, W.F.; Pickett, S.T.A.; Belnap, J.

    2003-01-01

    Ecologists use the term boundary to refer to a wide range of real and conceptual structures. Because imprecise terminology may impede the search for general patterns and theories about ecological boundaries, we present a classification of the attributes of ecological boundaries to aid in communication and theory development. Ecological boundaries may differ in their origin and maintenance, their spatial structure, their function, and their temporal dynamics. A classification system based on these attributes should help ecologists determine whether boundaries are truly comparable. This system can be applied when comparing empirical studies, comparing theories, and testing theoretical predictions against empirical results.

  8. A knowledge-based approach of satellite image classification for urban wetland detection

    NASA Astrophysics Data System (ADS)

    Xu, Xiaofan

    It has been a technical challenge to accurately detect urban wetlands with remotely sensed data by means of pixel-based image classification. This is mainly caused by inadequate spatial resolutions of satellite imagery, spectral similarities between urban wetlands and adjacent land covers, and the spatial complexity of wetlands in human-transformed, heterogeneous urban landscapes. Knowledge-based classification, with great potential to overcome or reduce these technical impediments, has been applied to various image classifications focusing on urban land use/land cover and forest wetlands, but rarely to mapping the wetlands in urban landscapes. This study aims to improve the mapping accuracy of urban wetlands by integrating the pixel-based classification with the knowledge-based approach. The study area is the metropolitan area of Kansas City, USA. SPOT satellite images of 1992, 2008, and 2010 were classified into four classes - wetland, farmland, built-up land, and forestland - using the pixel-based supervised maximum likelihood classification method. The products of supervised classification are used as the comparative base maps. For our new classification approach, a knowledge base is developed to improve urban wetland detection, which includes a set of decision rules of identifying wetland cover in relation to its elevation, spatial adjacencies, habitat conditions, hydro-geomorphological characteristics, and relevant geostatistics. Using ERDAS Imagine software's knowledge classifier tool, the decision rules are applied to the base maps in order to identify wetlands that are not able to be detected based on the pixel-based classification. The results suggest that the knowledge-based image classification approach can enhance the urban wetland detection capabilities and classification accuracies with remotely sensed satellite imagery.
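
    A minimal sketch of layering knowledge-base decision rules on top of a pixel-based classification; the class codes, the elevation rule and the adjacency rule are invented for illustration and do not reproduce the study's rule set.

      # Rule-based refinement of a pixel-based classification (illustrative).
      import numpy as np

      WETLAND, FARMLAND, BUILTUP, FOREST = 1, 2, 3, 4

      rng = np.random.default_rng(0)
      classes = rng.integers(1, 5, size=(100, 100))     # pixel-based result
      elevation = rng.normal(250, 30, size=(100, 100))  # DEM, metres

      # Rule 1: candidate wetlands must sit on low ground.
      low_ground = elevation < 230

      # Rule 2: farmland pixels 4-adjacent to an existing wetland pixel on low
      # ground are reclassified as wetland.
      near_wetland = np.zeros_like(classes, dtype=bool)
      w = classes == WETLAND
      near_wetland[1:, :] |= w[:-1, :]
      near_wetland[:-1, :] |= w[1:, :]
      near_wetland[:, 1:] |= w[:, :-1]
      near_wetland[:, :-1] |= w[:, 1:]

      refined = classes.copy()
      refined[(classes == FARMLAND) & low_ground & near_wetland] = WETLAND
      print("reclassified pixels:", int((refined != classes).sum()))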

  9. Classification of forest land attributes using multi-source remotely sensed data

    NASA Astrophysics Data System (ADS)

    Pippuri, Inka; Suvanto, Aki; Maltamo, Matti; Korhonen, Kari T.; Pitkänen, Juho; Packalen, Petteri

    2016-02-01

    The aim of the study was to (1) examine the classification of forest land using airborne laser scanning (ALS) data, satellite images and sample plots of the Finnish National Forest Inventory (NFI) as training data and (2) identify the best-performing metrics for classifying forest land attributes. Six different schemes of forest land classification were studied: land use/land cover (LU/LC) classification using both national classes and FAO (Food and Agriculture Organization of the United Nations) classes, main type, site type, peat land type and drainage status. Of special interest was testing different ALS-based surface metrics in the classification of forest land attributes. Field data consisted of 828 NFI plots collected in 2008-2012 in southern Finland, and the remotely sensed data were from summer 2010. Multinomial logistic regression was used as the classification method. Classification of LU/LC classes was highly accurate (kappa values 0.90 and 0.91), and the classification of site type, peat land type and drainage status also succeeded moderately well (kappa values 0.51, 0.69 and 0.52). ALS-based surface metrics were found to be the most important predictor variables in the classification of LU/LC class, main type and drainage status. In the best classification models of forest site type, both spectral metrics from satellite data and point cloud metrics from ALS were used. In turn, in the classification of peat land types, ALS point cloud metrics played the most important role. Results indicated that the prediction of site type and forest land category could be incorporated into the stand-level forest management inventory system in Finland.
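
    A minimal sketch of the classification method named above, multinomial logistic regression with cross-validated accuracy; the three predictor columns and four classes are illustrative stand-ins for the ALS and spectral metrics.

      # Multinomial logistic regression over plot-level metrics (illustrative).
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      # Columns: ALS surface metric, ALS point-cloud metric, satellite band ratio.
      X = rng.normal(size=(828, 3))
      y = rng.integers(0, 4, 828)          # e.g. four site-type classes
      X[y == 0, 0] += 1.0                  # make one class partly separable

      clf = LogisticRegression(max_iter=1_000)  # multinomial handled for >2 classes
      print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))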

  10. Using Copula Distributions to Support More Accurate Imaging-Based Diagnostic Classifiers for Neuropsychiatric Disorders

    PubMed Central

    Bansal, Ravi; Hao, Xuejun; Liu, Jun; Peterson, Bradley S.

    2014-01-01

    Many investigators have tried to apply machine learning techniques to magnetic resonance images (MRIs) of the brain in order to diagnose neuropsychiatric disorders. Usually the number of brain imaging measures (such as measures of cortical thickness and measures of local surface morphology) derived from the MRIs (i.e., their dimensionality) has been large (e.g. >10) relative to the number of participants who provide the MRI data (<100). Sparse data in a high dimensional space increases the variability of the classification rules that machine learning algorithms generate, thereby limiting the validity, reproducibility, and generalizability of those classifiers. The accuracy and stability of the classifiers can improve significantly if the multivariate distributions of the imaging measures can be estimated accurately. To accurately estimate the multivariate distributions using sparse data, we propose to estimate first the univariate distributions of imaging data and then combine them using a Copula to generate more accurate estimates of their multivariate distributions. We then sample the estimated Copula distributions to generate dense sets of imaging measures and use those measures to train classifiers. We hypothesize that the dense sets of brain imaging measures will generate classifiers that are stable to variations in brain imaging measures, thereby improving the reproducibility, validity, and generalizability of diagnostic classification algorithms in imaging datasets from clinical populations. In our experiments, we used both computer-generated and real-world brain imaging datasets to assess the accuracy of multivariate Copula distributions in estimating the corresponding multivariate distributions of real-world imaging data. Our experiments showed that diagnostic classifiers generated using imaging measures sampled from the Copula were significantly more accurate and more reproducible than were the classifiers generated using either the real-world imaging
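
    A minimal sketch of the Gaussian-copula idea: estimate each marginal empirically, estimate a correlation matrix on the normal scores, then sample densely and map back through the empirical quantiles. This is an illustrative construction, not the authors' estimator.

      # Gaussian copula: dense synthetic samples from sparse data (illustrative).
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      # Sparse "imaging measures": 60 participants, 5 correlated features.
      true_cov = 0.6 * np.ones((5, 5)) + 0.4 * np.eye(5)
      data = rng.multivariate_normal(np.zeros(5), true_cov, size=60) ** 3

      # 1. Transform each margin to normal scores via its empirical CDF.
      n, d = data.shape
      u = (stats.rankdata(data, axis=0) - 0.5) / n
      z = stats.norm.ppf(u)

      # 2. The correlation of the normal scores parameterises the copula.
      corr = np.corrcoef(z, rowvar=False)

      # 3. Sample the copula, then map back through each empirical quantile.
      z_new = rng.multivariate_normal(np.zeros(d), corr, size=5_000)
      u_new = stats.norm.cdf(z_new)
      dense = np.column_stack([np.quantile(data[:, j], u_new[:, j])
                               for j in range(d)])
      print("dense synthetic set:", dense.shape)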

  11. Comparison of two Classification methods (MLC and SVM) to extract land use and land cover in Johor Malaysia

    NASA Astrophysics Data System (ADS)

    Rokni Deilmai, B.; Ahmad, B. Bin; Zabihi, H.

    2014-06-01

    Mapping is essential for the analysis of land use and land cover, which influence many environmental processes and properties. For the purpose of creating land cover maps, it is important to minimize error, since errors will propagate into later analyses based on these maps. The reliability of land cover maps derived from remotely sensed data depends on an accurate classification. In this study, we have analyzed multispectral data using two different classifiers, the Maximum Likelihood Classifier (MLC) and the Support Vector Machine (SVM). To pursue this aim, Landsat Thematic Mapper data and identical field-based training sample datasets in Johor, Malaysia were used for each classification method, resulting in five land cover classes: forest, oil palm, urban area, water and rubber. Classification results indicate that SVM was more accurate than MLC. With a demonstrated capability to produce reliable cover results, the SVM method should be especially useful for land cover classification.

  12. Supervised ensemble classification of Kepler variable stars

    NASA Astrophysics Data System (ADS)

    Bass, G.; Borne, K.

    2016-07-01

    Variable star analysis and classification is an important task in the understanding of stellar features and processes. While historically classifications have been done manually by highly skilled experts, the recent and rapid expansion in the quantity and quality of data has demanded new techniques, most notably automatic classification through supervised machine learning. We present an expansion of existing work on the field by analysing variable stars in the Kepler field using an ensemble approach, combining multiple characterization and classification techniques to produce improved classification rates. Classifications for each of the roughly 150 000 stars observed by Kepler are produced separating the stars into one of 14 variable star classes.

  13. Supervised Ensemble Classification of Kepler Variable Stars

    NASA Astrophysics Data System (ADS)

    Bass, G.; Borne, K.

    2016-04-01

    Variable star analysis and classification is an important task in the understanding of stellar features and processes. While historically classifications have been done manually by highly skilled experts, the recent and rapid expansion in the quantity and quality of data has demanded new techniques, most notably automatic classification through supervised machine learning. We present an expansion of existing work on the field by analyzing variable stars in the Kepler field using an ensemble approach, combining multiple characterization and classification techniques to produce improved classification rates. Classifications for each of the roughly 150,000 stars observed by Kepler are produced separating the stars into one of 14 variable star classes.

  14. Fusion of Hyperspectral and Vhr Multispectral Image Classifications in Urban Areas

    NASA Astrophysics Data System (ADS)

    Hervieu, Alexandre; Le Bris, Arnaud; Mallet, Clément

    2016-06-01

    An energetical approach is proposed for classification decision fusion in urban areas using multispectral and hyperspectral imagery at distinct spatial resolutions. Hyperspectral data provide a great ability to discriminate land-cover classes, while multispectral data, usually at higher spatial resolution, make possible a more accurate spatial delineation of the classes. Hence, the aim here is to achieve the most accurate classification maps by taking advantage of both data sources at the decision level: the spectral properties of the hyperspectral data and the geometrical resolution of the multispectral images. More specifically, the proposed method takes into account probability class membership maps in order to improve the classification fusion process. Such probability maps are available using standard classification techniques such as Random Forests or Support Vector Machines. Classification probability maps are integrated into an energy framework where minimization of a given energy leads to better classification maps. The energy is minimized using a graph-cut method called quadratic pseudo-boolean optimization (QPBO) with α-expansion. A first model is proposed that gives satisfactory results in terms of classification results and visual interpretation. This model is compared to a standard Potts model adapted to the considered problem. Finally, the model is enhanced by integrating the spatial contrast observed in the data source of higher spatial resolution (i.e., the multispectral image). Results obtained using the proposed energetical decision fusion process are shown on two urban multispectral/hyperspectral datasets. A 2-3% improvement is observed with respect to the Potts formulation and 3-8% compared to a single hyperspectral-based classification.

  15. 32 CFR 2001.16 - Fundamental classification guidance review.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... a minimum, the fundamental classification guidance review shall focus on: (1) Evaluation of content... recent classification decisions that focuses on ensuring that classification decisions reflect the...

  16. Evaluation of AMOEBA: a spectral-spatial classification method

    USGS Publications Warehouse

    Jenson, Susan K.; Loveland, Thomas R.; Bryant, J.

    1982-01-01

    Multispectral remotely sensed images have been treated as arbitrary multivariate spectral data for purposes of clustering and classifying. However, the spatial properties of image data can also be exploited. AMOEBA is a clustering and classification method that is based on a spatially derived model for image data. In an evaluation test, Landsat data were classified with both AMOEBA and a widely used spectral classifier. The test showed that irrigated crop types can be classified as accurately with the AMOEBA method as with the generally used spectral method ISOCLS; the AMOEBA method, however, requires less computer time.

  17. Traffic congestion classification using motion vector statistical features

    NASA Astrophysics Data System (ADS)

    Riaz, Amina; Khan, Shoab A.

    2013-12-01

    Due to the rapid increase in population, one of the major problems faced by urban areas is traffic congestion. In this paper we propose a method for classifying highway traffic congestion using the statistical properties of motion vectors. Motion vectors are estimated using the pyramidal Kanade-Lucas-Tomasi (KLT) tracker algorithm. Motion vector features are then extracted and used to classify the traffic patterns into three categories: light, medium and heavy. Classification using a neural network on a publicly available dataset shows an accuracy of 95.28%, with robustness to environmental conditions such as variable luminance. Our system provides a more accurate solution to the problem than previously proposed systems.
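
    A minimal sketch of the motion-vector step with OpenCV's pyramidal KLT tracker; two synthetic frames with a shifted bright block stand in for real highway video, and the summary statistics are examples of features a congestion classifier might use.

      # Pyramidal KLT tracking and motion-vector statistics (illustrative).
      import cv2
      import numpy as np

      # Two synthetic gray frames with a bright "vehicle" block shifted by 5 px.
      frame1 = np.zeros((240, 320), dtype=np.uint8)
      frame2 = np.zeros((240, 320), dtype=np.uint8)
      frame1[100:140, 50:110] = 255
      frame2[100:140, 55:115] = 255

      p0 = cv2.goodFeaturesToTrack(frame1, maxCorners=50,
                                   qualityLevel=0.01, minDistance=5)
      p1, status, _ = cv2.calcOpticalFlowPyrLK(frame1, frame2, p0, None)

      v = (p1 - p0)[status.ravel() == 1].reshape(-1, 2)
      speeds = np.linalg.norm(v, axis=1)
      # Motion-vector statistics used as features for the congestion classifier.
      features = [speeds.mean(), speeds.std(), np.median(speeds)]
      print("feature vector:", [round(float(f), 2) for f in features])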

  18. Classification of Birds and Bats Using Flight Tracks

    SciTech Connect

    Cullinan, Valerie I.; Matzner, Shari; Duberstein, Corey A.

    2015-05-01

    Classification of birds and bats that use areas targeted for offshore wind farm development and the inference of their behavior is essential to evaluating the potential effects of development. The current approach to assessing the number and distribution of birds at sea involves transect surveys using trained individuals in boats or airplanes or using high-resolution imagery. These approaches are costly and have safety concerns. Based on a limited annotated library extracted from a single-camera thermal video, we provide a framework for building models that classify birds and bats and their associated behaviors. As an example, we developed a discriminant model for theoretical flight paths and applied it to data (N = 64 tracks) extracted from 5-min video clips. The agreement between model- and observer-classified path types was initially only 41%, but it increased to 73% when small-scale jitter was censored and path types were combined. Classification of 46 tracks of bats, swallows, gulls, and terns on average was 82% accurate, based on a jackknife cross-validation. Model classification of bats and terns (N = 4 and 2, respectively) was 94% and 91% correct, respectively; however, the variance associated with the tracks from these targets is poorly estimated. Model classification of gulls and swallows (N ≥ 18) was on average 73% and 85% correct, respectively. The models developed here should be considered preliminary because they are based on a small data set both in terms of the numbers of species and the identified flight tracks. Future classification models would be greatly improved by including a measure of distance between the camera and the target.

  19. Influence of nuclei segmentation on breast cancer malignancy classification

    NASA Astrophysics Data System (ADS)

    Jelen, Lukasz; Fevens, Thomas; Krzyzak, Adam

    2009-02-01

    Breast cancer is one of the most deadly cancers affecting middle-aged women. Accurate diagnosis and prognosis are crucial to reduce the high death rate. Nowadays there are numerous diagnostic tools for breast cancer diagnosis. In this paper we discuss the role of nuclear segmentation from fine needle aspiration biopsy (FNA) slides and its influence on malignancy classification. Classification of malignancy plays a very important role during the diagnosis process of breast cancer. Out of all cancer diagnostic tools, FNA slides provide the most valuable information about the cancer malignancy grade, which helps to choose an appropriate treatment. This process involves assessing numerous nuclear features, and therefore precise segmentation of nuclei is very important. In this work we compare three powerful segmentation approaches and test their impact on the classification of breast cancer malignancy. The studied approaches involve level set segmentation, fuzzy c-means segmentation and textural segmentation based on the co-occurrence matrix. Segmented nuclei were used to extract nuclear features for malignancy classification. For classification purposes four different classifiers were trained and tested with the previously extracted features. The compared classifiers are the Multilayer Perceptron (MLP), Self-Organizing Maps (SOM), Principal Component-based Neural Network (PCA) and Support Vector Machines (SVM). The presented results show that level set segmentation yields the best results of the three compared approaches and leads to good feature extraction, with the lowest average error rate of 6.51% over the four different classifiers. The best individual performance was recorded for the multilayer perceptron, with an error rate of 3.07%, using fuzzy c-means segmentation.

  20. A bootstrap method for assessing classification accuracy and confidence for agricultural land use mapping in Canada

    NASA Astrophysics Data System (ADS)

    Champagne, Catherine; McNairn, Heather; Daneshfar, Bahram; Shang, Jiali

    2014-06-01

    Land cover and land use classifications from remote sensing are increasingly becoming institutionalized framework data sets for monitoring environmental change. As such, the need for robust statements of classification accuracy is critical. This paper describes a method to estimate confidence in classification model accuracy using a bootstrap approach. Using this method, it was found that classification accuracy and confidence, while closely related, can be used in complementary ways to provide additional information on map accuracy, define groups of classes, and inform future reference sampling strategies. Overall classification accuracy increases with the number of fields surveyed, while the width of the classification confidence bounds decreases. Individual class accuracies and confidence were non-linearly related to the number of fields surveyed. Results indicate that some classes can be estimated accurately and confidently with fewer samples, whereas others require larger reference data sets to achieve satisfactory results. This approach is an improvement over other approaches for estimating class accuracy and confidence, as it uses repetitive sampling to produce a more realistic estimate of the range in classification accuracy and confidence that can be obtained with different reference data inputs.
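
    A minimal sketch of bootstrap confidence bounds on overall classification accuracy; the simulated reference and map labels are illustrative.

      # Bootstrap 95% bounds on overall map accuracy (illustrative labels).
      import numpy as np

      rng = np.random.default_rng(0)
      n = 400
      reference = rng.integers(0, 5, n)                 # surveyed field labels
      mapped = reference.copy()
      flip = rng.random(n) < 0.15                       # 15% map error
      mapped[flip] = rng.integers(0, 5, flip.sum())

      boot = np.empty(2_000)
      for b in range(boot.size):
          idx = rng.integers(0, n, n)                   # resample fields
          boot[b] = (mapped[idx] == reference[idx]).mean()

      lo, hi = np.percentile(boot, [2.5, 97.5])
      print(f"accuracy = {(mapped == reference).mean():.3f}, "
            f"95% bounds = [{lo:.3f}, {hi:.3f}]")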

  1. Learning ECOC Code Matrix for Multiclass Classification with Application to Glaucoma Diagnosis.

    PubMed

    Bai, Xiaolong; Niwas, Swamidoss Issac; Lin, Weisi; Ju, Bing-Feng; Kwoh, Chee Keong; Wang, Lipo; Sng, Chelvin C; Aquino, Maria C; Chew, Paul T K

    2016-04-01

    Classification of the different mechanisms of angle closure glaucoma (ACG) is important for medical diagnosis. Error-correcting output code (ECOC) is an effective approach for multiclass classification. In this study, we propose a new ensemble learning method based on ECOC with application to the classification of four ACG mechanisms. The dichotomizers in the ECOC are first optimized individually to increase their accuracy and diversity (or interdependence), which is beneficial to the ECOC framework. Specifically, the best feature set is determined for each possible dichotomizer, and a wrapper approach is applied to evaluate the classification accuracy of each dichotomizer on the training dataset using cross-validation. The separability of the ECOC codes is maximized by selecting a set of competitive dichotomizers according to a new criterion, in which a regularization term is introduced in consideration of the binary classification performance of each selected dichotomizer. The proposed method is experimentally applied to classifying the four ACG mechanisms. The eye images of 152 glaucoma patients were collected using anterior segment optical coherence tomography (AS-OCT) and then segmented, from which 84 features were extracted. The weighted average classification accuracy of the proposed method is 87.65% based on the results of leave-one-out cross-validation (LOOCV), which is much better than that of the other existing ECOC methods. The proposed method achieves accurate classification of the four ACG mechanisms, which is promising for application in the diagnosis of glaucoma. PMID:26798075
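
    A minimal sketch of ECOC multiclass classification using scikit-learn's OutputCodeClassifier; the synthetic four-class data stand in for the four ACG mechanisms, and the paper's dichotomizer selection step is not reproduced.

      # ECOC multiclass classification with SVM dichotomizers (illustrative).
      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.multiclass import OutputCodeClassifier
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X = rng.normal(size=(152, 84))           # 152 patients, 84 AS-OCT features
      y = rng.integers(0, 4, 152)              # four mechanism classes
      X += y[:, None] * 0.3                    # weak class signal

      ecoc = OutputCodeClassifier(SVC(), code_size=2.0, random_state=0)
      print("CV accuracy:", cross_val_score(ecoc, X, y, cv=5).mean().round(3))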

  2. Medium wave exposure characterisation using exposure quotients.

    PubMed

    Paniagua, Jesús M; Rufo, Montaña; Jiménez, Antonio; Antolín, Alicia; Pinar, Iván

    2010-06-01

    One of the aspects considered in the International Commission on Non-Ionizing Radiation Protection guidelines is that, in situations of simultaneous exposure to fields of different frequencies, exposure quotients for thermal and electrical stimulation effects should be examined. The aim of the present work was to analyse the electromagnetic radiation levels and exposure quotients for exposure to multiple-frequency sources in the vicinity of medium wave radio broadcasting antennas. The measurements were made with a spectrum analyser and a monopole antenna. Kriging interpolation was used to prepare contour maps and to estimate the levels in the towns and villages of the zone. The results showed the exposure quotient criterion based on electrical stimulation effects to be more stringent than those based on thermal effects or power density levels. Improving dosimetry evaluations requires the spectral components of the radiation to be quantified, followed by application of the criteria for exposure to multiple-frequency sources. PMID:20159912
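
    The two multiple-frequency criteria being compared reduce to simple sums, sketched below; the field strengths and reference levels are invented, and real reference levels vary with frequency.

    ```python
    # Schematic sketch of the two multiple-frequency exposure criteria the
    # abstract refers to: for electrical-stimulation effects the field ratios
    # add linearly, for thermal effects they add in quadrature. All numbers
    # below are illustrative placeholders.
    def stimulation_quotient(e_fields, ref_levels):
        """Sum of E_i / E_L,i over spectral components; must stay <= 1."""
        return sum(e / el for e, el in zip(e_fields, ref_levels))

    def thermal_quotient(e_fields, ref_levels):
        """Sum of (E_i / E_L,i)^2 over spectral components; must stay <= 1."""
        return sum((e / el) ** 2 for e, el in zip(e_fields, ref_levels))

    e = [2.1, 3.4, 0.8]      # measured E-field per MW carrier, V/m (made up)
    el = [87.0, 87.0, 87.0]  # reference levels at those frequencies (made up)
    print(stimulation_quotient(e, el), thermal_quotient(e, el))
    ```

    For component ratios below 1, the linear (stimulation) sum always exceeds the quadrature (thermal) sum, which is consistent with the finding that the stimulation criterion is the more stringent.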

  3. Classification of tea category using a portable electronic nose based on an odor imaging sensor array.

    PubMed

    Chen, Quansheng; Liu, Aiping; Zhao, Jiewen; Ouyang, Qin

    2013-10-01

    A portable electronic nose (E-nose) based on an odor imaging sensor array was successfully used to classify three different fermentation degrees of tea (i.e., green tea, black tea, and Oolong tea). The odor imaging sensor array was fabricated by printing nine dyes, including porphyrin and metalloporphyrins, on a hydrophobic porous membrane. A color-change profile for each sample was obtained by differencing the image of the sensor array before and after exposure to the tea's volatile organic compounds (VOCs). Multivariate analysis was used for the classification of tea categories, and linear discriminant analysis (LDA) achieved a 100% classification rate by leave-one-out cross-validation (LOOCV). This study demonstrates that an E-nose based on an odor imaging sensor array has high potential for classifying tea categories according to fermentation degree. PMID:23810847
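
    The reported evaluation (LDA assessed by LOOCV) is a standard pattern; a minimal scikit-learn sketch is shown below with synthetic stand-ins for the nine-dye color-change features.

    ```python
    # Minimal LDA-with-LOOCV sketch. Each row of X stands in for the nine-dye
    # color-change features of one tea sample; the data are synthetic.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(c, 0.5, size=(15, 9)) for c in (0.0, 1.5, 3.0)])
    y = np.repeat(["green", "black", "oolong"], 15)  # three fermentation degrees

    scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
    print(f"LOOCV classification rate: {scores.mean():.2%}")
    ```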

  4. Classification of luminaire color using CCDs

    NASA Astrophysics Data System (ADS)

    McMenemy, Karen; Niblock, James

    2006-02-01

    The International Civil Aviation Organization (ICAO) is the regulatory body for airports. ICAO standards dictate that luminaires used within an airport landing lighting pattern must have a color as defined in the 1931 color chart of the Commission Internationale de l'Eclairage (CIE). Currently, visual checks are used to ensure luminaires are operating at the right color within the pattern: during an approach to an airport, the pilot must visually check each luminaire within the landing pattern. These visual tests are combined with on-the-spot meter readings. This method is not accurate, and it is impossible to assess the color of every luminaire. This paper presents a novel, automated method for assessing the color of luminaires using low-cost single-chip CCD video camera technology. Images taken by a camera placed in an aircraft cockpit during a normal approach are post-processed to determine the color of each luminaire in the pattern. In essence, the pixel coverage and total RGB level for each luminaire must be extracted and tracked throughout the complete image sequence, and an average RGB value is used to predict the luminaire color. This prediction is based on a novel pixel model derived to determine the minimum pixel coverage required to accurately predict the color of an imaged luminaire. An analysis of how many pixels are required for color recognition and for position within a CIE color chart is given and proved empirically. The analysis finds that a minimum diameter of four pixels is required for color recognition of the major luminaire types within the airport landing pattern. The number of pixels required for classification of the color is then derived. This matters because the luminaires are far away when imaged and may cover only a few pixels, since the camera must view a large area. The classification is then verified by laboratory-based experiments with different
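
    A hedged sketch of the final color-prediction step implied above: average the RGB values over the pixels covering a luminaire and map the mean to CIE 1931 xy chromaticity. The conversion matrix is the standard linear-sRGB-to-XYZ one; a real camera would need its own calibration, and the pixel values are invented.

    ```python
    # Mean-RGB to CIE 1931 xy chromaticity, assuming linear sRGB in [0, 1].
    # Illustrative only; not the paper's calibrated pixel model.
    import numpy as np

    RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                           [0.2126, 0.7152, 0.0722],
                           [0.0193, 0.1192, 0.9505]])

    def mean_chromaticity(pixels):
        """pixels: (N, 3) linear RGB; returns CIE 1931 (x, y)."""
        X, Y, Z = RGB_TO_XYZ @ pixels.mean(axis=0)
        s = X + Y + Z
        return X / s, Y / s

    luminaire_pixels = np.array([[0.90, 0.80, 0.30],   # a few "imaged" pixels
                                 [0.80, 0.70, 0.20],
                                 [0.95, 0.85, 0.35]])
    print(mean_chromaticity(luminaire_pixels))
    ```

    The resulting (x, y) point can then be tested against the CIE color-region boundaries that ICAO specifies for each luminaire type.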

  5. New methodology for hazardous waste classification using fuzzy set theory Part I. Knowledge acquisition.

    PubMed

    Musee, N; Lorenzen, L; Aldrich, C

    2008-06-15

    In the literature on hazardous waste classification, the criteria used are mostly based on physical properties, such as quantity (weight), form (solid, liquid, aqueous, or gaseous), the type of process generating the waste, or a set of predefined lists. Such classification criteria are inherently inadequate to account for the toxic and hazard characteristics of the constituent chemicals in the wastes, as well as their exposure potency in multimedia environments, terrestrial mammals, and other biota. Moreover, none of the algorithms in the literature has explicitly presented waste classification by examining the contribution of the individual constituent components of composite wastes. In this two-part paper, we propose a new automated algorithm for waste classification that takes into account the physicochemical and toxicity effects of the constituent chemicals on humans and ecosystems, in addition to the exposure potency and waste quantity. In part I, available data on the physicochemical and toxicity properties of individual chemicals in humans and ecosystems, their exposure potency in environmental systems, and the effect of waste quantity are described, because they fundamentally contribute to the final waste ranking. Knowledge acquisition in this study was accomplished through an extensive review of published and specialized literature to establish the facts necessary for the development of fuzzy rule bases. Owing to the uncertainty and imprecision of the various forms of data (both quantitative and qualitative) essential for waste classification, and the complexity resulting from knowledge incompleteness, the use of fuzzy set theory for the aggregation and computation of a waste classification ranking index is proposed. A computer-aided intelligent decision tool is described in part II of this paper, where the functionality of the fuzzy waste classification algorithm is illustrated through nine worked examples. PMID:18082951
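
    As a toy illustration of the fuzzy machinery such an algorithm builds on (not the authors' actual rule bases), the sketch below turns crisp toxicity and exposure scores into membership grades and combines them with a min-style AND rule; all breakpoints are invented.

    ```python
    # Toy fuzzy-set sketch: triangular membership functions plus one
    # min-AND rule. Breakpoints and the rule are illustrative assumptions.
    def tri(x, a, b, c):
        """Triangular membership with feet at a, c and peak at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def hazard_grade(toxicity, exposure):
        mu_tox_high = tri(toxicity, 0.4, 1.0, 1.6)   # "high toxicity"
        mu_exp_high = tri(exposure, 0.4, 1.0, 1.6)   # "high exposure potency"
        # Rule: IF toxicity high AND exposure high THEN hazard high.
        return min(mu_tox_high, mu_exp_high)

    print(hazard_grade(toxicity=0.9, exposure=0.7))  # degree of "high hazard"
    ```

    A full rule base would aggregate many such rules (typically via max over rules) and defuzzify the result into the final ranking index.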

  6. Solar exposure of LDEF experiment trays

    NASA Technical Reports Server (NTRS)

    Bourassa, R. J.; Gillis, J. R.

    1992-01-01

    Exposure to solar radiation is one of the primary causes of degradation of materials on spacecraft. Accurate knowledge of solar exposure is needed to evaluate the performance of materials carried on the Long Duration Exposure Facility (LDEF) during its nearly 6-year orbital flight. Presented here are tables and figures of calculated solar exposure for the experiment rows, longerons, and end bays of the spacecraft as functions of time in orbit. The data cover both direct solar and Earth-reflected radiation. Results are expressed in cumulative equivalent sun hours (CESH), the hours of direct, zero-incidence solar radiation that would cause the same irradiance of a surface. Space end bays received the most solar radiation, 14,000 CESH; earth end bays received the least, 4,500 CESH. Row locations received between 6,400 CESH and 11,200 CESH, with rows facing eastward or westward receiving the most radiation and rows facing northward or southward receiving the least.

  7. An accurate and practical method for inference of weak gravitational lensing from galaxy images

    NASA Astrophysics Data System (ADS)

    Bernstein, Gary M.; Armstrong, Robert; Krawiec, Christina; March, Marisa C.

    2016-07-01

    We demonstrate highly accurate recovery of weak gravitational lensing shear using an implementation of the Bayesian Fourier Domain (BFD) method proposed by Bernstein & Armstrong, extended to correct for selection biases. The BFD formalism is rigorously correct for Nyquist-sampled, background-limited, uncrowded images of background galaxies. BFD does not assign shapes to galaxies, instead compressing the pixel data D into a vector of moments M, such that we have an analytic expression for the probability P(M|g) of obtaining the observations with gravitational lensing distortion g along the line of sight. We implement an algorithm for conducting BFD's integrations over the population of unlensed source galaxies which measures ≈10 galaxies s⁻¹ core⁻¹ with good scaling properties. Initial tests of this code on ≈10⁹ simulated lensed galaxy images recover the simulated shear to a fractional accuracy of m = (2.1 ± 0.4) × 10⁻³, substantially more accurate than has been demonstrated previously for any generally applicable method. Deep sky exposures generate a sufficiently accurate approximation to the noiseless, unlensed galaxy population distribution assumed as input to BFD. Potential extensions of the method include simultaneous measurement of magnification and shear; multiple-exposure, multiband observations; and joint inference of photometric redshifts and lensing tomography.

  9. Santa Ana Forecasting and Classification

    NASA Astrophysics Data System (ADS)

    Rolinski, T.; Eichhorn, D.; D'Agostino, B. J.; Vanderburg, S.; Means, J. D.

    2011-12-01

    Southern California experiences wildfires every year, but under certain circumstances these fires grow into extremely large and destructive events, such as the Cedar Fire of 2003 and the Witch Fire of 2007. The Cedar Fire burned over 1100 km², destroyed more than 2200 homes, and killed 15 people; the Witch Fire burned more than 800 km², destroyed more than 1000 homes, and killed 2 people. Fires can quickly become too large and dangerous to fight if they are accompanied by a very strong "Santa Ana" condition, a foehn-like wind that may bring strong winds and very low humidities. However, there is an entire range of specific weather conditions that fall into the broad category of Santa Anas, from cold and blustery to hot with very little wind; all types are characterized by clear skies and low humidity. Since the potential for destructive fire depends on the characteristics of the Santa Ana as well as the level of fuel moisture, there is a need for further classification, as is done with tropical cyclones and, after the fact, with tornadoes. We use surface data and fuel moisture combined with reanalysis to diagnose the conditions that result in Santa Anas with the greatest potential for destructive fires, and use these data to produce a new classification system for Santa Anas. This classification system should be useful for informing the relevant agencies for mitigation and response planning. In the future, the same classification may be made available to the general public.

  10. CONNECTICUT GROUND WATER QUALITY CLASSIFICATIONS

    EPA Science Inventory

    This is a 1:24,000-scale datalayer of Ground Water Quality Classifications in Connecticut. It is a polygon Shapefile that includes polygons for GA, GAA, GAAs, GB, GC and other related ground water quality classes. Each polygon is assigned a ground water quality class, which is s...

  11. CONNECTICUT SURFACE WATER QUALITY CLASSIFICATIONS

    EPA Science Inventory

    This is a 1:24,000-scale datalayer of Surface Water Quality Classifications for Connecticut. It comprises two Shapefiles with line and polygon features. Both Shapefiles must be used together with the Hydrography datalayer. The polygon Shapefile includes surface water qual...

  12. A new classification of zoophilia.

    PubMed

    Aggrawal, Anil

    2011-02-01

    Zoophilia is a paraphilia whereby the perpetrator derives sexual pleasure from having sex with animals. Most jurisdictions and nations have laws against this practice. Zoophilia exists in many variations, and some authors have attempted to classify it previously; however, unanimity does not exist among the various classifications. In addition, sexual contact between humans and animals has been given several names, such as zoophilia, zoophilism, bestiality, zooerasty and zoorasty, and these terms continue to be used in different senses by different authors, creating some confusion. A mathematical classification of zoophilia, which could group all shades of zoophilia under various numerical classes, could be a way to end this confusion. Recently, a ten-tier classification of necrophilia was proposed to end a similar confusion among the various terms referring to necrophilia. It is our proposition that the various shades of zoophilia exist on a similar continuum. Thus, each proposed class of zoophilia can be "mapped" to a similar class of necrophilia already proposed. This classification has an intuitive appeal, as it grades all shades of zoophilia from the least innocuous behavior to the most criminal. It is hoped that it will also end the existing confusion among the several zoophilia-related terms. In addition, since each proposed class of zoophilia can be exactly "mapped" to classes of another paraphilia (necrophilia), it may point to an "equivalence" among all paraphilias not yet fully explored. This area needs further exploration. PMID:21315301

  13. Nomenclature and classification, principles of

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Nomenclature is critical for precise communications. Scientific names must be unique and attached to defined concepts. Classifications provide a means of encoding information into scientific nomenclature, so the use of scientific names provides the effective and efficient means for communicating abo...

  14. Gaming Simulation: A General Classification.

    ERIC Educational Resources Information Center

    Cecchini, Arnaldo; Frisenna, Adriana

    1987-01-01

    Reviews the problems of classifying gaming techniques and suggests a heuristic approach as one solution. Definitions of simulation, models, role, and game and play are discussed to help develop a classification based on a technique called gaming simulation. (Author/LRW)

  15. AEDS Property Classification Code Manual.

    ERIC Educational Resources Information Center

    Association for Educational Data Systems, Washington, DC.

    The control and inventory of property items using data processing machines requires a form of numerical description or code which will allow a maximum of description in a minimum of space on the data card. An adaptation of a standard industrial classification system is given to cover any expendable warehouse item or non-expendable piece of…

  16. THE CLASSIFICATION OF HAZARDOUS OCCUPATIONS

    PubMed Central

    Hayhurst, E. R.

    1916-01-01

    In this paper Doctor Hayhurst describes the six classifications for hazardous occupations which were employed in a survey made in Ohio. He also explains, by text and diagram, the process of analyzing, upon a standard formula, the hazard of an individual case. PMID:18009453

  17. Functions in Biological Kind Classification

    ERIC Educational Resources Information Center

    Lombrozo, Tania; Rehder, Bob

    2012-01-01

    Biological traits that serve functions, such as a zebra's coloration (for camouflage) or a kangaroo's tail (for balance), seem to have a special role in conceptual representations for biological kinds. In five experiments, we investigate whether and why functional features are privileged in biological kind classification. Experiment 1…

  18. Computers vs. Humans in Galaxy Classification

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2016-04-01

    In this age of large astronomical surveys, one major scientific bottleneck is the analysis of enormous data sets. Traditionally, this task requires human input, but could computers eventually take over? A pair of scientists explore this question by testing whether computers can classify galaxies as well as humans. [Figure caption: Examples of disagreement: galaxies that Galaxy Zoo humans classified as spirals with 95% agreement, but the computer algorithm classified as ellipticals with 70% certainty. Most are cases where the computer got it wrong, but not all of them. Adapted from Kuminski et al. 2016.] Limits of citizen science: Galaxy Zoo is an internet-based citizen-science project that uses non-astronomer volunteers to classify galaxy images. This is an innovative way to provide more manpower, but it's still only practical for limited catalog sizes. How do we handle the data from upcoming surveys like the Large Synoptic Survey Telescope (LSST), which will produce billions of galaxy images when it comes online? In a recent study by Evan Kuminski and Lior Shamir, two computer scientists at Lawrence Technological University in Michigan, a machine-learning algorithm known as Wndchrm was used to classify a dataset of Sloan Digital Sky Survey (SDSS) galaxies into ellipticals and spirals. The authors' goal is to determine whether their algorithm can classify galaxies as accurately as the human volunteers of Galaxy Zoo. Automatic classification: After training their classifier on a small set of spiral and elliptical galaxies, Kuminski and Shamir set it loose on a catalog of ~3 million SDSS galaxies. The classifier first computes a set of 2,885 numerical descriptors (such as textures, edges, and shapes) for each galaxy image, and then uses these descriptors to categorize the galaxy as spiral or elliptical. [Figure caption: Rate of agreement of the computer classification with human classification (for the Galaxy Zoo superclean subset) for different ranges of computed classification certainties. For certainties above
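
    A much-reduced sketch of this descriptor-then-classify pipeline is shown below; Wndchrm's 2,885 descriptors are far richer, so the three toy features, the synthetic images, and the random-forest learner are purely illustrative.

    ```python
    # Toy descriptor-then-classify pipeline: compute a few numerical image
    # descriptors per "galaxy", then fit a classifier on labelled examples.
    # Images, features, and labels are synthetic stand-ins.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def describe(img):
        """Toy descriptors: mean brightness, edge strength, core concentration."""
        gy, gx = np.gradient(img.astype(float))
        edges = np.hypot(gx, gy).mean()
        h, w = img.shape
        core = img[h // 4: 3 * h // 4, w // 4: 3 * w // 4].mean()
        return [img.mean(), edges, core / (img.mean() + 1e-9)]

    rng = np.random.default_rng(0)
    imgs = [rng.random((32, 32)) * (1 + k) for k in rng.integers(0, 2, 60)]
    labels = ["elliptical" if i.mean() > 0.75 else "spiral" for i in imgs]
    X = np.array([describe(i) for i in imgs])
    clf = RandomForestClassifier(random_state=0).fit(X, labels)
    print(clf.score(X, labels))
    ```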

  19. Optimal Information-based Classification

    NASA Astrophysics Data System (ADS)

    Hyun, Baro

    Classification is the allocation of an object to an existing category among several based on uncertain measurements. Since information is used to quantify uncertainty, it is natural to consider classification and information as complementary subjects. This dissertation touches upon several topics that relate to the problem of classification, such as information, classification, and team classification. Motivated by the U.S. Air Force Intelligence, Surveillance, and Reconnaissance missions, we investigate the aforementioned topics for classifiers that follow two models: classifiers with workload-independent and workload-dependent performance. We adopt workload-independence and dependence as "first-order" models to capture the features of machines and humans, respectively. We first investigate the relationship between information in the sense of Shannon and classification performance, which is defined as the probability of misclassification. We show that while there is a predominant congruence between them, there are cases when such congruence is violated. We show the phenomenon for both workload-independent and workload-dependent classifiers and investigate the cause of such phenomena analytically. One way of making classification decisions is by setting a threshold on a measured quantity. For instance, if a measurement falls on one side of the threshold, the object that provided the measurement is classified as one type, otherwise, it is of another type. Exploiting thresholding, we formalize a classifier with dichotomous decisions (i.e., with two options, such as true or false) given a single variable measurement. We further extend the formalization to classifiers with trichotomy (i.e., with three options, such as true, false or unknown) and with multivariate measurements. When a team of classifiers is considered, issues on how to exploit redundant numbers of classifiers arise. We analyze these classifiers under different architectures, such as parallel or nested
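
    The threshold idea can be made concrete with a small worked example: for two Gaussian measurement models, the probability of misclassification at threshold t is a prior-weighted sum of the two tail areas. All parameters below are illustrative assumptions.

    ```python
    # Misclassification probability of a dichotomous threshold classifier
    # under two Gaussian measurement models. Means, sigma, and priors are
    # invented for illustration.
    from math import erf, sqrt

    def phi(z):
        """Standard normal CDF."""
        return 0.5 * (1 + erf(z / sqrt(2)))

    def p_error(t, mu0=0.0, mu1=2.0, sigma=1.0, prior0=0.5):
        miss = 1 - phi((t - mu0) / sigma)     # class 0 measured above t
        false_alarm = phi((t - mu1) / sigma)  # class 1 measured below t
        return prior0 * miss + (1 - prior0) * false_alarm

    # Sweep thresholds from -1.0 to 3.0 in steps of 0.1.
    best = min((p_error(t / 10), t / 10) for t in range(-10, 31))
    print(best)   # about (0.159, 1.0)
    ```

    With equal priors and variances, the sweep bottoms out midway between the means, as expected.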

  20. 10 CFR 1045.37 - Classification guides.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... classification guides with RD or FRD information topics. (d) Originators of classification guides with RD or FRD topics shall review such guides at least every five years and make revisions as necessary. (e)...

  1. 10 CFR 1045.37 - Classification guides.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... classification guides with RD or FRD information topics. (d) Originators of classification guides with RD or FRD topics shall review such guides at least every five years and make revisions as necessary. (e)...

  2. 5 CFR 2500.3 - Original classification.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 2500.3 Administrative Personnel OFFICE OF ADMINISTRATION, EXECUTIVE OFFICE OF THE PRESIDENT INFORMATION SECURITY REGULATION § 2500.3 Original classification. No one in the Office of Administration has been granted authority for original classification of information....

  3. 37 CFR 2.85 - Classification schedules.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Intellectual Property Organization, unless the International Bureau corrects the classification. Classes cannot... Intellectual Property Organization. (1) If international classification changes pursuant to the Nice Agreement... registered extensions of protection. (d) Section 66(a) applications and registered extensions of...

  4. 37 CFR 2.85 - Classification schedules.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Intellectual Property Organization, unless the International Bureau corrects the classification. Classes cannot... Intellectual Property Organization. (1) If international classification changes pursuant to the Nice Agreement... registered extensions of protection. (d) Section 66(a) applications and registered extensions of...

  5. 37 CFR 2.85 - Classification schedules.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Intellectual Property Organization, unless the International Bureau corrects the classification. Classes cannot... Intellectual Property Organization. (1) If international classification changes pursuant to the Nice Agreement... registered extensions of protection. (d) Section 66(a) applications and registered extensions of...

  6. An Expanded Classification of the Plant Kingdom.

    ERIC Educational Resources Information Center

    Rushton, B. S.

    1981-01-01

    Presents an expanded classification of the plant kingdom, emphasizing major evolutionary steps and differences in levels of complexity. Describes subdivisions and suggests that this classification, reflecting unity and diversity, may be logical, understandable, and useful to students. (JN)

  7. 22 CFR 9.6 - Derivative classification.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... CFR 2001.22. (c) Department of State Classification Guide. The Department of State Classification... Foreign Relations DEPARTMENT OF STATE GENERAL SECURITY INFORMATION REGULATIONS § 9.6 Derivative...'s classified Web site....

  8. 22 CFR 9.6 - Derivative classification.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... CFR 2001.22. (c) Department of State Classification Guide. The Department of State Classification... Foreign Relations DEPARTMENT OF STATE GENERAL SECURITY INFORMATION REGULATIONS § 9.6 Derivative...'s classified Web site....

  9. Screening and classification of ceramic powders

    NASA Technical Reports Server (NTRS)

    Miwa, S.

    1983-01-01

    A summary is given of the classification technology of ceramic powders. Advantages and disadvantages of the wet and dry screening and classification methods are discussed. Improvements of wind force screening devices are described.

  10. Asiago spectroscopic classification of AT2016bry

    NASA Astrophysics Data System (ADS)

    Ochner, P.; Tinella, V.; Righetti, G. L.; Belligoli, R.; Castellani, F.; Pastorello, A.; Cappellaro, E.; Benetti, S.; Tomasella, L.; Elias-Rosa, N.; Tartaglia, L.; Terreran, G.; Turatto, M.

    2016-05-01

    The Asiago Transient Classification Program (Tomasella et al. 2014, AN, 335, 841) reports the spectroscopic classification of AT2016bry, discovered by V. Tinella in UGC 11635, and preliminary photometric follow-up.

  11. Asiago spectroscopic classification of AT2016ajo

    NASA Astrophysics Data System (ADS)

    Terreran, G.; Pastorello, A.; Benetti, S.; Cappellaro, E.; Elias-Rosa, N.; Ochner, P.; Tartaglia, L.; Tomasella, L.; Turatto, M.

    2016-03-01

    The Asiago Transient Classification Program (Tomasella et al. 2014, AN, 335, 841) reports the spectroscopic classification of AT2016ajo, discovered by Y. Ding, W. Gao and X. Gao in an anonymous galaxy near UGC 11344.

  12. 5 CFR 1312.3 - Classification requirements.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., DOWNGRADING, DECLASSIFICATION AND SAFEGUARDING OF NATIONAL SECURITY INFORMATION Classification and Declassification of National Security Information § 1312.3 Classification requirements. United States citizens must... security, certain official information must be subject to constraints on its dissemination or release....

  13. 5 CFR 1312.3 - Classification requirements.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ..., DOWNGRADING, DECLASSIFICATION AND SAFEGUARDING OF NATIONAL SECURITY INFORMATION Classification and Declassification of National Security Information § 1312.3 Classification requirements. United States citizens must... security, certain official information must be subject to constraints on its dissemination or release....

  14. 5 CFR 2500.3 - Original classification.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 2500.3 Administrative Personnel OFFICE OF ADMINISTRATION, EXECUTIVE OFFICE OF THE PRESIDENT INFORMATION SECURITY REGULATION § 2500.3 Original classification. No one in the Office of Administration has been granted authority for original classification of information....

  15. Multiple signal classification for self-mixing flowmetry.

    PubMed

    Nikolić, Milan; Lim, Yah Leng; Bertling, Karl; Taimre, Thomas; Rakić, Aleksandar D

    2015-03-20

    For the first time to our knowledge, we apply the multiple signal classification (MUSIC) algorithm to signals obtained from a self-mixing flow sensor. We find that MUSIC accurately extracts the fluid velocity and exhibits a markedly better signal-to-noise ratio (SNR) than the commonly used fast Fourier transform (FFT) method. We compare the performance of the MUSIC and FFT methods for three decades of scatterer concentration and fluid velocities from 0.5 to 50 mm/s. MUSIC provided better linearity than the FFT and was able to accurately function over a wider range of algorithm parameters. MUSIC exhibited excellent linearity and SNR even at low scatterer concentration, at which the FFT's SNR decreased to impractical levels. This makes MUSIC a particularly attractive method for flow measurement systems with a low density of scatterers such as microfluidic and nanofluidic systems and blood flow in capillaries. PMID:25968500
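
    The core of MUSIC is small enough to sketch: eigendecompose a sample covariance of the signal, keep the noise subspace, and scan steering vectors for frequencies nearly orthogonal to it. The synthetic tone below stands in for a real self-mixing Doppler signal; all parameters are invented.

    ```python
    # Minimal MUSIC frequency-estimation sketch on a noisy synthetic tone.
    import numpy as np

    fs, f0, n = 1000.0, 123.0, 512
    t = np.arange(n) / fs
    x = np.cos(2 * np.pi * f0 * t) + 0.5 * np.random.default_rng(0).normal(size=n)

    m, p = 32, 2                   # covariance order; one real tone spans 2 dims
    snaps = np.lib.stride_tricks.sliding_window_view(x, m)  # (n-m+1, m)
    R = snaps.T @ snaps / snaps.shape[0]                    # sample covariance
    w, v = np.linalg.eigh(R)                                # ascending eigenvalues
    En = v[:, : m - p]                                      # noise subspace

    freqs = np.linspace(0, fs / 2, 2000)
    a = np.exp(-2j * np.pi * np.outer(freqs / fs, np.arange(m)))  # steering vectors
    pseudo = 1.0 / np.linalg.norm(a.conj() @ En, axis=1) ** 2     # MUSIC spectrum
    print(freqs[pseudo.argmax()])  # sharp peak near 123 Hz
    ```

    The pseudospectrum diverges where the steering vector is orthogonal to the noise subspace, which is why MUSIC yields much sharper peaks than the FFT at modest SNR.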

  16. Functional Basis of Microorganism Classification

    PubMed Central

    Zhu, Chengsheng; Delmont, Tom O.; Vogel, Timothy M.; Bromberg, Yana

    2015-01-01

    Correctly identifying nearest “neighbors” of a given microorganism is important in industrial and clinical applications where close relationships imply similar treatment. Microbial classification based on similarity of physiological and genetic organism traits (polyphasic similarity) is experimentally difficult and, arguably, subjective. Evolutionary relatedness, inferred from phylogenetic markers, facilitates classification but does not guarantee functional identity between members of the same taxon or lack of similarity between different taxa. Using over thirteen hundred sequenced bacterial genomes, we built a novel function-based microorganism classification scheme, functional-repertoire similarity-based organism network (FuSiON; flattened to fusion). Our scheme is phenetic, based on a network of quantitatively defined organism relationships across the known prokaryotic space. It correlates significantly with the current taxonomy, but the observed discrepancies reveal both (1) the inconsistency of functional diversity levels among different taxa and (2) an (unsurprising) bias towards prioritizing, for classification purposes, relatively minor traits of particular interest to humans. Our dynamic network-based organism classification is independent of the arbitrary pairwise organism similarity cut-offs traditionally applied to establish taxonomic identity. Instead, it reveals natural, functionally defined organism groupings and is thus robust in handling organism diversity. Additionally, fusion can use organism meta-data to highlight the specific environmental factors that drive microbial diversification. Our approach provides a complementary view to cladistic assignments and holds important clues for further exploration of microbial lifestyles. Fusion is a more practical fit for biomedical, industrial, and ecological applications, as many of these rely on understanding the functional capabilities of the microbes in their environment and are less concerned

  17. Discovery and Classification in Astronomy

    NASA Astrophysics Data System (ADS)

    Dick, Steven J.

    2012-01-01

    Three decades after Martin Harwit's pioneering Cosmic Discovery (1981), and following on the recent IAU Symposium “Accelerating the Rate of Astronomical Discovery,” we have revisited the problem of discovery in astronomy, emphasizing new classes of objects. 82 such classes have been identified and analyzed, including 22 in the realm of the planets, 36 in the realm of the stars, and 24 in the realm of the galaxies. We find an extended structure of discovery, consisting of detection, interpretation and understanding, each with its own nuances and a microstructure including conceptual, technological and social roles. This is true with a remarkable degree of consistency over the last 400 years of telescopic astronomy, ranging from Galileo's discovery of satellites, planetary rings and star clusters, to the discovery of quasars and pulsars. Telescopes have served as “engines of discovery” in several ways, ranging from telescope size and sensitivity (planetary nebulae and spiral galaxies), to specialized detectors (TNOs) and the opening of the electromagnetic spectrum for astronomy (pulsars, pulsar planets, and most active galaxies). A few classes (radiation belts, the solar wind and cosmic rays) were initially discovered without the telescope. Classification also plays an important role in discovery. While it might seem that classification marks the end of discovery, or a post-discovery phase, in fact it often marks the beginning, even a pre-discovery phase. Nowhere is this more clearly seen than in the classification of stellar spectra, long before dwarfs, giants and supergiants were known, or their evolutionary sequence recognized. Classification may also be part of a post-discovery phase, as in the MK system of stellar classification, constructed after the discovery of stellar luminosity classes. Some classes are declared rather than discovered, as in the case of gas and ice giant planets, and, infamously, Pluto as a dwarf planet.

  18. River Sinuosity Classification - The method

    NASA Astrophysics Data System (ADS)

    Petrovszki, J.; Székely, B.; Timár, G.

    2012-04-01

    We introduce a new evaluation method: the classification of a multiple-window-size sinuosity spectrum. If the river is long enough for the analysis, the classification can be as useful as the sinuosity spectrum itself, and it is sometimes more straightforward. Furthermore, the classification does not require the main parameters of the river, e.g. the bankfull discharge. Each sinuosity calculation performed for a given window size is treated as one band (one channel) of a multichannel "image"; the sinuosity spectra thus become multichannel images of size 1 × N, where N is the length of the river in pixels. Using this multichannel input, unsupervised ISOCLASS classification was carried out on these data using ER Mapper software, with the requested number of classes set to 5. The results of the sinuosity calculations are scalars. Previously, dividing the sinuosity values into categories (low, medium-low, medium, medium-high, and high) was a subjective decision, whereas the new method itself provides integer class numbers (1 to 5); these numbers are calculated from the sinuosity values but are not equal to them. Analysing the results of the classification, it is important to note that the method typically splits the river course into contiguous sections belonging to the same class. The boundaries of these classes can be considered points of considerable change in the river course, because the method uses a statistically relevant amount of river-course data in a robust way to detect changes. Some classes, or their boundaries, appear to correlate with tectonically active zones. This research was carried out in the frame of project OTKA-NK83400 (SourceSink Hungary). The European Union and the European Social Fund also provided financial support under grant agreement no. TÁMOP 4.2.1./B-09/1/KMR-2010-0003.
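
    A minimal sketch of the underlying window-based sinuosity computation is given below (along-channel length divided by chord length, per vertex and per window size); the coordinates and window sizes are illustrative.

    ```python
    # Window-based sinuosity "spectrum": for each centreline vertex and each
    # window size, sinuosity = along-channel length / straight-line chord.
    # The toy river and window sizes are invented for illustration.
    import numpy as np

    def sinuosity_spectrum(xy, window_sizes):
        """xy: (N, 2) river centreline; returns (len(window_sizes), N) bands."""
        seg = np.linalg.norm(np.diff(xy, axis=0), axis=1)   # segment lengths
        cum = np.concatenate([[0.0], np.cumsum(seg)])       # arc length
        bands = np.full((len(window_sizes), len(xy)), np.nan)
        for b, w in enumerate(window_sizes):
            half = w // 2
            for i in range(half, len(xy) - half):
                path = cum[i + half] - cum[i - half]        # along-channel length
                chord = np.linalg.norm(xy[i + half] - xy[i - half])
                bands[b, i] = path / chord if chord > 0 else np.nan
        return bands

    theta = np.linspace(0, 12 * np.pi, 400)
    river = np.column_stack([theta, np.sin(theta)])         # a meandering toy river
    print(sinuosity_spectrum(river, [20, 60, 120]).shape)   # (3, 400) multichannel
    ```

    Stacking the rows gives the 1 × N multichannel "image"; an unsupervised clusterer — a stand-in for ER Mapper's ISOCLASS — would then assign each vertex to one of the five classes.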

  19. Indoor transformer stations and ELF magnetic field exposure: use of transformer structural characteristics to improve exposure assessment.

    PubMed

    Okokon, Enembe Oku; Roivainen, Päivi; Kheifets, Leeka; Mezei, Gabor; Juutilainen, Jukka

    2014-01-01

    Previous studies have shown that populations of multiapartment buildings with indoor transformer stations may serve as a basis for improved epidemiological studies on the relationship between childhood leukaemia and extremely-low-frequency (ELF) magnetic fields (MFs). This study investigated whether classification based on structural characteristics of the transformer stations would improve ELF MF exposure assessment. The data included MF measurements in apartments directly above transformer stations ("exposed" apartments) in 30 buildings in Finland, and reference apartments in the same buildings. Transformer structural characteristics (type and location of low-voltage conductors) were used to classify exposed apartments into high-exposure (HE) and intermediate-exposure (IE) categories. An exposure gradient was observed: both the time-average MF and time above a threshold (0.4 μT) were highest in the HE apartments and lowest in the reference apartments, showing a statistically significant trend. The differences between HE and IE apartments, however, were not statistically significant. A simulation exercise showed that the three-category classification did not perform better than a two-category classification (exposed and reference apartments) in detecting the existence of an increased risk. However, data on the structural characteristics of transformers is potentially useful for evaluating exposure-response relationship. PMID:24022671

  20. Accurate classification of 29 objects detected in the 39 month Palermo Swift/BAT hard X-ray catalogue

    NASA Astrophysics Data System (ADS)

    Parisi, P.; Masetti, N.; Jiménez-Bailón, E.; Chavushyan, V.; Palazzi, E.; Landi, R.; Malizia, A.; Bassani, L.; Bazzano, A.; Bird, A. J.; Charles, P. A.; Galaz, G.; Mason, E.; McBride, V. A.; Minniti, D.; Morelli, L.; Schiavone, F.; Ubertini, P.

    2012-09-01

    Through an optical campaign performed at four telescopes located in the northern and southern hemispheres, plus archival data from two on-line sky surveys, we obtained optical spectroscopy for 29 counterparts of unclassified or poorly studied hard X-ray emitting objects detected with the Swift/Burst Alert Telescope (BAT) and listed in the 39 month Palermo catalogue. All these objects also have observations taken with the Swift/X-ray Telescope (XRT) or XMM-European Photon Imaging Camera (EPIC), which not only allow us to pinpoint their optical counterparts, but also to study their X-ray spectral properties (column density, power-law photon index, and 2-10 keV flux). We find that 28 sources in our sample are active galactic nuclei (AGNs); 7 are classified as type 1, while 21 are of type 2; the remaining object is a Galactic cataclysmic variable. Among our type 1 AGNs, we find 5 objects of intermediate Seyfert type (1.2-1.9) and one narrow-line Seyfert 1 galaxy; for 4 out of 7 sources, we are able to estimate the central black hole mass. Three of the type 2 AGNs of our sample display optical features typical of low-ionization nuclear emission-line regions (LINERs), and one is a likely Compton-thick AGN. All galaxies classified in this work are relatively nearby objects, since their redshifts lie in the range 0.008-0.075; the only Galactic object found lies at an estimated distance of 90 pc. We also investigate the optical versus X-ray emission ratio of the galaxies of our sample to test the AGN unified model. For these galaxies, we also compare the X-ray absorption (caused by gas) with the optical reddening (caused by dust): we find that for most of our sources, specifically those of type 1.9-2.0, the former is higher than the latter, confirming early results of Maiolino and collaborators; this is possibly due to the properties of dust in the circumnuclear obscuring torus of the AGN. Based on observations obtained at the following observatories: the Astronomical Observatory of Bologna in Loiano (Italy), ESO-La Silla Observatory (Chile) under programme 083.D-0110, Observatorio Astronómico Nacional (San Pedro Mártir, Mexico), and the South African Astronomical Observatory (South Africa). The spectra are only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/545/A101