Science.gov

Sample records for accurate exposure classification

  1. TIME-INTEGRATED EXPOSURE MEASURES TO IMPROVE THE PREDICTIVE POWER OF EXPOSURE CLASSIFICATION FOR EPIDEMIOLOGIC STUDIES

    EPA Science Inventory

    Accurate exposure classification tools are required to link exposure with health effects in epidemiological studies. Although long-term integrated exposure measurements are a critical component of exposure assessment, the ability to include these measurements into epidemiologic...

  2. Accurate statistical tests for smooth classification images.

    PubMed

    Chauvin, Alan; Worsley, Keith J; Schyns, Philippe G; Arguin, Martin; Gosselin, Frédéric

    2005-10-05

    Despite an obvious demand for a variety of statistical tests adapted to classification images, few have been proposed. We argue that two statistical tests based on random field theory (RFT) satisfy this need for smooth classification images. We illustrate these tests on classification images representative of the literature from F. Gosselin and P. G. Schyns (2001) and from A. B. Sekuler, C. M. Gaspar, J. M. Gold, and P. J. Bennett (2004). The necessary computations are performed using the Stat4Ci Matlab toolbox.
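
    The "Pixel test" flavour of such RFT tests can be illustrated in a few lines: smooth a classification image, standardize it to Z-scores, and solve for the threshold at which the expected Euler characteristic of a 2-D Gaussian random field equals the desired alpha. This is a rough sketch of the idea, not the Stat4Ci implementation; the image size, FWHM, alpha, and the quoted EC density are assumptions from standard RFT references.

      # RFT-style "Pixel test" threshold for a smooth 2-D classification image (sketch).
      import numpy as np
      from scipy.ndimage import gaussian_filter
      from scipy.optimize import brentq

      fwhm, shape, alpha = 8.0, (256, 256), 0.05
      ci = np.random.default_rng(0).normal(size=shape)   # stand-in classification image
      z = gaussian_filter(ci, sigma=fwhm / 2.3548)       # FWHM = 2.3548 * sigma
      z /= z.std()                                       # restandardize to Z-scores

      resels = shape[0] * shape[1] / fwhm**2             # 2-D resel count of search region
      ec2 = lambda t: (4 * np.log(2)) / (2 * np.pi)**1.5 * t * np.exp(-t**2 / 2)
      t_crit = brentq(lambda t: resels * ec2(t) - alpha, 2.0, 10.0)
      print(f"corrected Z threshold: {t_crit:.2f}; pixels above it: {(z > t_crit).sum()}")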

  3. Automatic classification and accurate size measurement of blank mask defects

    NASA Astrophysics Data System (ADS)

    Bhamidipati, Samir; Paninjath, Sankaranarayanan; Pereira, Mark; Buck, Peter

    2015-07-01

    ...complexity of defects encountered. The variety arises due to factors such as defect nature, size, shape and composition, and the optical phenomena occurring around the defect. This paper focuses on preliminary characterization results, in terms of classification and size estimation, obtained by the Calibre MDPAutoClassify tool on a variety of mask blank defects. It primarily highlights the challenges faced in achieving these results, with reference to the variety of defects observed on blank mask substrates and the underlying complexities which make accurate defect size measurement an important and challenging task.

  4. Accurate mobile malware detection and classification in the cloud.

    PubMed

    Wang, Xiaolei; Yang, Yuexiang; Zeng, Yingzhi

    2015-01-01

    As the dominant smartphone operating system, Android has attracted the attention of malware authors and researchers alike. The number of types of Android malware is increasing rapidly despite the considerable number of proposed malware analysis systems. In this paper, by taking advantage of the low false-positive rate of misuse detection and the ability of anomaly detection to detect zero-day malware, we propose a novel hybrid detection system based on a new open-source framework, CuckooDroid, which enables the use of Cuckoo Sandbox's features to analyze Android malware through dynamic and static analysis. Our proposed system mainly consists of two parts: an anomaly detection engine that detects abnormal apps through dynamic analysis, and a signature detection engine that performs known-malware detection and classification by combining static and dynamic analysis. We evaluate our system using 5,560 malware samples and 6,000 benign samples. Experiments show that our anomaly detection engine with dynamic analysis is capable of detecting zero-day malware with a low false-negative rate (1.16%) and an acceptable false-positive rate (1.30%); it is worth noting that our signature detection engine with hybrid analysis can accurately classify malware samples with an average true-positive rate of 98.94%. Considering the intensive computing resources required by static and dynamic analysis, our proposed detection system should be deployed off-device, such as in the cloud. App store markets and ordinary users can access our detection system for malware detection through a cloud service. PMID:26543718
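
    A minimal sketch of the two-engine design described above, assuming behaviour feature vectors have already been extracted by the dynamic/static analysis stage (the CuckooDroid feature extraction itself is not shown; the data, thresholds, and model choices are illustrative assumptions):

      # Hybrid detector sketch: one-class anomaly engine + supervised signature engine.
      import numpy as np
      from sklearn.svm import OneClassSVM
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(0)
      X_benign = rng.normal(0, 1, (600, 40))     # benign behaviour vectors (synthetic)
      X_malware = rng.normal(2, 1, (500, 40))    # known-malware vectors (synthetic)
      y_family = rng.integers(0, 5, 500)         # malware family labels

      anomaly = OneClassSVM(nu=0.05, gamma="scale").fit(X_benign)
      signature = RandomForestClassifier(n_estimators=200, random_state=0)
      signature.fit(X_malware, y_family)

      def scan(x):
          # Apps deviating from benign behaviour are flagged as possible zero-days;
          # otherwise the signature engine assigns a known-malware family. A real
          # pipeline would also carry a benign class in the signature engine.
          if anomaly.predict(x.reshape(1, -1))[0] == -1:
              return "suspicious (possible zero-day)"
          return f"classified as family {signature.predict(x.reshape(1, -1))[0]}"

      print(scan(rng.normal(2, 1, 40)))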

  6. Accurate phylogenetic classification of DNA fragments based on sequence composition

    SciTech Connect

    McHardy, Alice C.; Garcia Martin, Hector; Tsirigos, Aristotelis; Hugenholtz, Philip; Rigoutsos, Isidore

    2006-05-01

    Metagenome studies have retrieved vast amounts of sequence out of a variety of environments, leading to novel discoveries and great insights into the uncultured microbial world. Except for very simple communities, diversity makes sequence assembly and analysis a very challenging problem. To understand the structure and function of microbial communities, a taxonomic characterization of the obtained sequence fragments is highly desirable, yet currently limited mostly to those sequences that contain phylogenetic marker genes. We show that for clades at the rank of domain down to genus, sequence composition allows the very accurate phylogenetic characterization of genomic sequence. We developed a composition-based classifier, PhyloPythia, for de novo phylogenetic sequence characterization and have trained it on a data set of 340 genomes. By extensive evaluation experiments we show that the method is accurate across all taxonomic ranks considered, even for sequences that originate from novel organisms and are as short as 1 kb. Application to two metagenome datasets obtained from samples of phosphorus-removing sludge showed that the method allows the accurate classification at genus level of most sequence fragments from the dominant populations, while at the same time correctly characterizing even larger parts of the samples at higher taxonomic levels.
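
    The composition signal that PhyloPythia exploits can be reproduced in miniature: represent each fragment by a k-mer frequency vector and train a multi-class classifier on fragments of known origin. A toy sketch under that assumption (synthetic GC-biased sequences stand in for real genome fragments, and a logistic-regression classifier stands in for the paper's SVM-based approach):

      # Toy composition-based fragment classifier: k-mer frequencies -> linear model.
      from itertools import product
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      K = 4
      KMERS = {"".join(p): i for i, p in enumerate(product("ACGT", repeat=K))}

      def kmer_freqs(seq):
          v = np.zeros(len(KMERS))
          for i in range(len(seq) - K + 1):
              v[KMERS[seq[i:i + K]]] += 1
          return v / max(v.sum(), 1)

      rng = np.random.default_rng(1)
      def fragment(gc):   # 1 kb fragment with a clade-specific GC content
          p = [(1 - gc) / 2, gc / 2, gc / 2, (1 - gc) / 2]   # A, C, G, T
          return "".join(rng.choice(list("ACGT"), size=1000, p=p))

      X = [kmer_freqs(fragment(gc)) for gc in (0.35, 0.55, 0.65) for _ in range(50)]
      y = [clade for clade in range(3) for _ in range(50)]
      clf = LogisticRegression(max_iter=1000).fit(X, y)
      print("training accuracy:", clf.score(X, y))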

  7. Accurate Classification of RNA Structures Using Topological Fingerprints

    PubMed Central

    Li, Kejie; Gribskov, Michael

    2016-01-01

    While RNAs are well known to possess complex structures, functionally similar RNAs often have little sequence similarity. While the exact size and spacing of base-paired regions vary, functionally similar RNAs have pronounced similarity in the arrangement, or topology, of base-paired stems. Furthermore, predicted RNA structures often lack pseudoknots (a crucial aspect of biological activity), and are only partially correct, or incomplete. A topological approach addresses all of these difficulties. In this work we describe each RNA structure as a graph that can be converted to a topological spectrum (RNA fingerprint). The set of subgraphs in an RNA structure, its RNA fingerprint, can be compared with the fingerprints of other RNA structures to identify and correctly classify functionally related RNAs. Topologically similar RNAs can be identified even when a large fraction, up to 30%, of the stems are omitted, indicating that highly accurate structures are not necessary. We investigate the performance of the RNA fingerprint approach on a set of eight highly curated RNA families, with diverse sizes and functions, containing pseudoknots, and with little sequence similarity, an especially difficult test set. In spite of the difficult test set, the RNA fingerprint approach is very successful (ROC AUC > 0.95). Because it includes pseudoknots, the RNA fingerprint approach covers a wider range of possible structures than methods based only on secondary structure, and its tolerance for incomplete structures suggests that it can be applied even to predicted structures. Source code is freely available at https://github.rcac.purdue.edu/mgribsko/XIOS_RNA_fingerprint. PMID:27755571

  8. Accurate cortical tissue classification on MRI by modeling cortical folding patterns.

    PubMed

    Kim, Hosung; Caldairou, Benoit; Hwang, Ji-Wook; Mansi, Tommaso; Hong, Seok-Jun; Bernasconi, Neda; Bernasconi, Andrea

    2015-09-01

    Accurate tissue classification is a crucial prerequisite to MRI morphometry. Automated methods based on intensity histograms constructed from the entire volume are challenged by regional intensity variations due to local radiofrequency artifacts as well as disparities in tissue composition, laminar architecture and folding patterns. The current work proposes a novel anatomy-driven method in which parcels conforming to cortical folding are regionally extracted from the brain. Each parcel is subsequently classified using nonparametric mean shift clustering. Evaluation was carried out on manually labeled images from two datasets acquired at 3.0 Tesla (n = 15) and 1.5 Tesla (n = 20). In both datasets, we observed high tissue classification accuracy of the proposed method (Dice index >97.6% at 3.0 Tesla, and >89.2% at 1.5 Tesla). Moreover, our method consistently outperformed state-of-the-art classification routines available in SPM8 and FSL-FAST, as well as a recently proposed local classifier that partitions the brain into cubes. Contour-based analyses showed that the proposed framework classified the white matter-gray matter (GM) interface more accurately than the other algorithms, particularly in central and occipital cortices that generally display bright GM due to their high degree of myelination. Excellent accuracy was maintained, even in the absence of correction for intensity inhomogeneity. The presented anatomy-driven local classification algorithm may significantly improve cortical boundary definition, with possible benefits for morphometric inference and biomarker discovery. PMID:26037453
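
    A minimal sketch of the per-parcel step: nonparametric mean shift clustering of the voxel intensities within a single parcel, with the resulting clusters ordered by mean intensity as stand-ins for CSF/GM/WM. The synthetic intensities, bandwidth, and label mapping are illustrative assumptions, not the authors' pipeline.

      # Classify one cortical parcel by mean-shift clustering of its intensities (sketch).
      import numpy as np
      from sklearn.cluster import MeanShift

      rng = np.random.default_rng(2)
      # Synthetic T1 intensities for one parcel: three tissue modes (CSF < GM < WM).
      parcel = np.concatenate([rng.normal(m, 8, n) for m, n in
                               ((60, 300), (110, 900), (160, 700))])

      ms = MeanShift(bandwidth=15).fit(parcel.reshape(-1, 1))
      order = np.argsort(ms.cluster_centers_.ravel())        # darkest -> brightest cluster
      tissue = {c: t for c, t in zip(order, ("CSF", "GM", "WM"))}
      labels = [tissue.get(l, "other") for l in ms.labels_]
      print({t: labels.count(t) for t in ("CSF", "GM", "WM", "other")})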

  10. Does more accurate exposure prediction necessarily improve health effect estimates?

    PubMed

    Szpiro, Adam A; Paciorek, Christopher J; Sheppard, Lianne

    2011-09-01

    A unique challenge in air pollution cohort studies and similar applications in environmental epidemiology is that exposure is not measured directly at subjects' locations. Instead, pollution data from monitoring stations at some distance from the study subjects are used to predict exposures, and these predicted exposures are used to estimate the health effect parameter of interest. It is usually assumed that minimizing the error in predicting the true exposure will improve health effect estimation. We show in a simulation study that this is not always the case. We interpret our results in light of recently developed statistical theory for measurement error, and we discuss implications for the design and analysis of epidemiologic research.
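
    The paper's counterintuitive point can be illustrated with a toy simulation contrasting classical-type prediction error (noise added on top of the true exposure, which attenuates the health-effect estimate) with a Berkson-like smoothed prediction that is a worse predictor in one sense yet yields a nearly unbiased estimate. All data-generating values below are arbitrary assumptions.

      # Toy contrast: the type of exposure-prediction error matters, not just its size.
      import numpy as np

      rng = np.random.default_rng(3)
      n, beta = 100_000, 0.5
      x = rng.normal(0, 1, n)                      # true exposure
      y = beta * x + rng.normal(0, 1, n)           # health outcome

      w_classical = x + rng.normal(0, 0.7, n)      # classical-type prediction error
      w_berkson = 0.67 * w_classical               # shrunken (Berkson-like) prediction

      slope = lambda w: np.polyfit(w, y, 1)[0]
      print(f"true beta={beta}; classical estimate={slope(w_classical):.3f}; "
            f"Berkson-like estimate={slope(w_berkson):.3f}")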

  11. Accurate crop classification using hierarchical genetic fuzzy rule-based systems

    NASA Astrophysics Data System (ADS)

    Topaloglou, Charalampos A.; Mylonas, Stelios K.; Stavrakoudis, Dimitris G.; Mastorocostas, Paris A.; Theocharis, John B.

    2014-10-01

    This paper investigates the effectiveness of an advanced classification system for accurate crop classification using very high resolution (VHR) satellite imagery. Specifically, a recently proposed genetic fuzzy rule-based classification system (GFRBCS) is employed, namely, the Hierarchical Rule-based Linguistic Classifier (HiRLiC). HiRLiC's model comprises a small set of simple IF-THEN fuzzy rules, easily interpretable by humans. One of its most important attributes is that its learning algorithm requires minimum user interaction, since the most important learning parameters affecting the classification accuracy are determined by the learning algorithm automatically. HiRLiC is applied in a challenging crop classification task, using a SPOT5 satellite image over an intensively cultivated area in a lake-wetland ecosystem in northern Greece. A rich set of higher-order spectral and textural features is derived from the initial bands of the (pan-sharpened) image, resulting in an input space comprising 119 features. The experimental analysis proves that HiRLiC compares favorably to other interpretable classifiers of the literature, both in terms of structural complexity and classification accuracy. Its testing accuracy was very close to that obtained by complex state-of-the-art classification systems, such as the support vector machine (SVM) and random forest (RF) classifiers. Nevertheless, visual inspection of the derived classification maps shows that HiRLiC is characterized by higher generalization properties, providing more homogeneous classifications than the competitors. Moreover, the runtime requirements for producing the thematic map were orders of magnitude lower than those of the competitors.

  12. Change in BMI accurately predicted by social exposure to acquaintances.

    PubMed

    Oloritun, Rahman O; Ouarda, Taha B M J; Moturu, Sai; Madan, Anmol; Pentland, Alex Sandy; Khayal, Inas

    2013-01-01

    Research has mostly focused on obesity and not on processes of BMI change more generally, although these may be key factors that lead to obesity. Studies have suggested that obesity is affected by social ties. However, these studies used survey-based data collection techniques that may be biased toward selecting only close friends and relatives. In this study, mobile phone sensing techniques were used to routinely capture social interaction data in an undergraduate dorm. By automating the capture of social interaction data, the limitations of self-reported social exposure data are avoided. This study attempts to understand and develop a model that best describes the change in BMI using social interaction data. We evaluated a cohort of 42 college students in a co-located university dorm, combining social interaction data automatically captured via mobile phones with survey-based health-related information. We determined the most predictive variables for change in BMI using the least absolute shrinkage and selection operator (LASSO) method. The selected variables, together with gender, healthy diet category, and ability to manage stress, were used to build multiple linear regression models that estimate the effect of exposure and individual factors on change in BMI. We identified the best model using the Akaike Information Criterion (AIC) and R². This study found a model that explains 68% (p<0.0001) of the variation in change in BMI. The model combined social interaction data, especially from acquaintances, with personal health-related information to explain change in BMI. This is the first study taking into account both interactions at different levels of social exposure and personal health-related information. Social interactions with acquaintances accounted for more than half the variation in change in BMI. This suggests the importance of not only individual health information but also the significance of social interactions with the people we are exposed to, even people we may not consider close friends.
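
    The selection-then-regression workflow is straightforward to sketch: LASSO to pick predictors of the change in BMI, then an ordinary least-squares fit whose AIC and R² can be compared across candidate models. Variable names and data below are placeholders, not the study's measurements.

      # LASSO variable selection followed by an OLS model scored by AIC (sketch).
      import numpy as np
      from sklearn.linear_model import LassoCV, LinearRegression

      rng = np.random.default_rng(4)
      n, p = 42, 12                                  # 42 students, 12 candidate predictors
      X = rng.normal(size=(n, p))                    # social-exposure + health covariates
      dbmi = 0.8 * X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.5, n)   # change in BMI

      sel = np.flatnonzero(LassoCV(cv=5).fit(X, dbmi).coef_)         # selected predictors
      ols = LinearRegression().fit(X[:, sel], dbmi)
      rss = ((dbmi - ols.predict(X[:, sel]))**2).sum()
      aic = n * np.log(rss / n) + 2 * (len(sel) + 1)  # Gaussian AIC, up to a constant
      print(f"selected={sel}, R^2={ols.score(X[:, sel], dbmi):.2f}, AIC={aic:.1f}")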

  13. Accurate measurement of RF exposure from emerging wireless communication systems

    NASA Astrophysics Data System (ADS)

    Letertre, Thierry; Monebhurrun, Vikass; Toffano, Zeno

    2013-04-01

    Isotropic broadband probes or spectrum analyzers (SAs) may be used for the measurement of rapidly varying electromagnetic fields generated by emerging wireless communication systems. In this paper, this problem is investigated by comparing the responses measured by two different isotropic broadband probes typically used to perform electric field (E-field) evaluations. The broadband probes are subjected to signals with variable duty cycles (DC) and crest factors (CF), either with or without Orthogonal Frequency Division Multiplexing (OFDM) modulation but with the same root-mean-square (RMS) power. The two probes do not provide sufficiently accurate results for deterministic signals such as Worldwide Interoperability for Microwave Access (WiMAX) or Long Term Evolution (LTE), or for non-deterministic signals such as Wireless Fidelity (WiFi). The legacy measurement protocols should be adapted to cope with the emerging wireless communication technologies based on the OFDM modulation scheme. This is not easily achieved except when the statistics of the RF emission are well known. In this case the measurement errors are shown to be systematic, and a correction factor or calibration can be applied to obtain a good approximation of the total RMS power.
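
    The systematic error described above arises when a probe effectively reports the RMS of the "on" bursts rather than the true RMS of the whole emission; for a signal whose duty cycle is known, the correction factor is simply the square root of the duty cycle. A numeric sketch with an assumed waveform:

      # Duty-cycle correction for the RMS of a pulsed emission (sketch).
      import numpy as np

      fs, dc = 1_000_000, 0.25                    # sample rate (Hz) and 25% duty cycle
      t = np.arange(fs) / fs                      # one second of signal
      on = (t % 0.01) < (0.01 * dc)               # 10 ms frames, active for dc fraction
      e_field = np.where(on, np.sin(2 * np.pi * 5000 * t), 0.0)

      rms_true = np.sqrt(np.mean(e_field**2))
      rms_on = np.sqrt(np.mean(e_field[on]**2))   # what a burst-tracking probe reports
      print(f"on-period RMS={rms_on:.3f}, true RMS={rms_true:.3f}, "
            f"corrected={rms_on * np.sqrt(dc):.3f}")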

  14. Development of Classification and Story Building Data for Accurate Earthquake Damage Estimation

    NASA Astrophysics Data System (ADS)

    Sakai, Yuki; Fukukawa, Noriko; Arai, Kensuke

    We investigated a method of developing classification and story building data from a census population database in order to estimate earthquake damage more accurately, especially in urban areas, presuming that there is a correlation between the numbers of non-wooden or high-rise buildings and the population. We formulated equations for estimating the numbers of wooden houses, low-to-mid-rise (1-9 story) and high-rise (over 10 story) non-wooden buildings in each 1 km mesh from night and daytime population databases, based on the building data we investigated and collected in 20 selected meshes in the Kanto area. We could accurately estimate the numbers of the three building classes by the formulated equations, but in some special cases, such as meshes dominated by apartment blocks, the estimated values differ considerably from the actual values.
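
    The estimating equations can be sketched as simple least-squares fits from night/daytime population to per-mesh counts of each building class, calibrated on the surveyed meshes. The coefficients and data below are invented for illustration.

      # Estimate building-class counts per 1 km mesh from population data (sketch).
      import numpy as np

      rng = np.random.default_rng(5)
      night, day = rng.uniform(0, 20_000, (2, 20))          # 20 surveyed meshes
      A = np.column_stack([night, day, np.ones(20)])
      # Surveyed counts for one class (e.g. high-rise non-wooden), with noise:
      highrise = 0.002 * night + 0.004 * day + 3 + rng.normal(0, 5, 20)

      coef, *_ = np.linalg.lstsq(A, highrise, rcond=None)   # fit the estimating equation
      est = lambda n, d: coef[0] * n + coef[1] * d + coef[2]
      print("estimated high-rise count for a new mesh:", round(est(8_000, 15_000)))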

  15. Determining suitable image resolutions for accurate supervised crop classification using remote sensing data

    NASA Astrophysics Data System (ADS)

    Löw, Fabian; Duveiller, Grégory

    2013-10-01

    Mapping the spatial distribution of crops has become a fundamental input for agricultural production monitoring using remote sensing. However, the multi-temporality that is often necessary to accurately identify crops and to monitor crop growth generally comes at the expense of coarser observation supports, and can lead to increasingly erroneous class allocations caused by mixed pixels. For a given application like crop classification, the spatial resolution requirement (e.g. in terms of a maximum tolerable pixel size) differs considerably over different landscapes. To analyse the spatial resolution requirements for accurate crop identification via image classification, this study builds upon and extends a conceptual framework established in a previous work. This framework allows defining quantitatively the spatial resolution requirements for crop monitoring based on simulating how agricultural landscapes, and more specifically the fields covered by a crop of interest, are seen by instruments with increasingly coarser resolving power. The concept of crop-specific pixel purity, defined as the degree of homogeneity of the signal encoded in a pixel with respect to the target crop type, is used to analyse how mixed the pixels can be (as they become coarser) without undermining their capacity to describe the desired surface properties. In this case, this framework has been steered towards answering the question: "What is the spatial resolution requirement for crop identification via supervised image classification, in particular minimum and coarsest acceptable pixel sizes, and how do these requirements change over different landscapes?" The framework is applied over four contrasting agro-ecological landscapes in Middle Asia. Inputs to the experiment were eight multi-temporal images from the RapidEye sensor; the simulated pixel sizes range from 6.5 m to 396.5 m. Constraining parameters for crop identification were defined by setting thresholds for classification...
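
    The pixel-purity concept is easy to emulate: rasterize a binary crop mask at fine resolution, aggregate to coarser pixels by block averaging, and track the purity of the pixels that still contain the target crop. The field pattern, sizes, and scale factors below are assumptions for illustration.

      # Crop-specific pixel purity as a function of simulated pixel size (sketch).
      import numpy as np

      rng = np.random.default_rng(6)
      fine = np.zeros((512, 512))                   # fine grid; 1 cell ~ 6.5 m
      for _ in range(25):                           # drop rectangular crop fields
          r, c = rng.integers(0, 448, 2)
          fine[r:r + rng.integers(16, 64), c:c + rng.integers(16, 64)] = 1.0

      def mean_purity(mask, block):
          # Block-average to coarse pixels; average purity over pixels containing crop.
          h, w = (s // block * block for s in mask.shape)
          coarse = mask[:h, :w].reshape(h // block, block, w // block, block).mean((1, 3))
          return coarse[coarse > 0].mean()

      for block in (1, 4, 16, 61):                  # ~6.5 m up to ~400 m pixels
          print(f"pixel ~{6.5 * block:5.1f} m: mean purity {mean_purity(fine, block):.2f}")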

  16. Accurate, rapid taxonomic classification of fungal large-subunit rRNA genes.

    PubMed

    Liu, Kuan-Liang; Porras-Alfaro, Andrea; Kuske, Cheryl R; Eichorst, Stephanie A; Xie, Gary

    2012-03-01

    Taxonomic and phylogenetic fingerprinting based on sequence analysis of gene fragments from the large-subunit rRNA (LSU) gene or the internal transcribed spacer (ITS) region is becoming an integral part of fungal classification. The lack of an accurate and robust classification tool trained by a validated sequence database for taxonomic placement of fungal LSU genes is a severe limitation in taxonomic analysis of fungal isolates or large data sets obtained from environmental surveys. Using a hand-curated set of 8,506 fungal LSU gene fragments, we determined the performance characteristics of a naïve Bayesian classifier across multiple taxonomic levels and compared the classifier performance to that of a sequence similarity-based (BLASTN) approach. The naïve Bayesian classifier was computationally more rapid (>460-fold with our system) than the BLASTN approach, and it provided equal or superior classification accuracy. Classifier accuracies were compared using sequence fragments of 100 bp and 400 bp and two different PCR primer anchor points to mimic sequence read lengths commonly obtained using current high-throughput sequencing technologies. Accuracy was higher with 400-bp sequence reads than with 100-bp reads. It was also significantly affected by sequence location across the 1,400-bp test region. The highest accuracy was obtained across either the D1 or D2 variable region. The naïve Bayesian classifier provides an effective and rapid means to classify fungal LSU sequences from large environmental surveys. The training set and tool are publicly available through the Ribosomal Database Project.
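
    The classifier evaluated here follows the RDP-style naïve Bayesian design: per-taxon word (8-mer) probabilities, classification by summed log probabilities, and bootstrap resampling of the query's words to attach a confidence. A compressed sketch of that logic with toy training sequences (the flat word probabilities and subsampling fraction are simplified assumptions):

      # RDP-style naive Bayesian word classifier with bootstrap confidence (sketch).
      import numpy as np

      def words(seq, k=8):
          return sorted({seq[i:i + k] for i in range(len(seq) - k + 1)})

      def train(seqs_by_taxon):
          # P(word | taxon), with a flat placeholder probability for observed words.
          return {t: {w: 0.9 for s in seqs for w in words(s)}
                  for t, seqs in seqs_by_taxon.items()}

      def classify(model, seq, n_boot=100):
          rng, qw, votes = np.random.default_rng(7), words(seq), {}
          for _ in range(n_boot):
              sample = rng.choice(qw, size=max(len(qw) // 8, 1))   # subsample words
              scores = {t: sum(np.log(m.get(w, 1e-4)) for w in sample)
                        for t, m in model.items()}
              best = max(scores, key=scores.get)
              votes[best] = votes.get(best, 0) + 1
          best = max(votes, key=votes.get)
          return best, votes[best] / n_boot          # (taxon, bootstrap confidence)

      model = train({"GenusA": ["ACGTACGTGGCCAATT" * 4], "GenusB": ["TTGGCCAACGCGCGAT" * 4]})
      print(classify(model, "ACGTACGTGGCCAATT" * 2))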

  17. GPD: a graph pattern diffusion kernel for accurate graph classification with applications in cheminformatics.

    PubMed

    Smalter, Aaron; Huan, Jun Luke; Jia, Yi; Lushington, Gerald

    2010-01-01

    Graph data mining is an active research area. Graphs are general modeling tools to organize information from heterogeneous sources and have been applied in many scientific, engineering, and business fields. With the fast accumulation of graph data, building highly accurate predictive models for graph data emerges as a new challenge that has not been fully explored in the data mining community. In this paper, we demonstrate a novel technique called the graph pattern diffusion (GPD) kernel. Our idea is to leverage existing frequent pattern discovery methods and to explore the application of kernel classifiers (e.g., support vector machines) in building highly accurate graph classification models. In our method, we first identify all frequent patterns from a graph database. We then map subgraphs to graphs in the graph database and use a process we call "pattern diffusion" to label nodes in the graphs. Finally, we design a graph alignment algorithm to compute the inner product of two graphs. We have tested our algorithm on a number of chemical structure datasets. The experimental results demonstrate that our method is significantly better than competing methods such as kernel functions based on paths, cycles, and subgraphs.

  18. Novel serologic biomarkers provide accurate estimates of recent Plasmodium falciparum exposure for individuals and communities

    PubMed Central

    Helb, Danica A.; Tetteh, Kevin K. A.; Felgner, Philip L.; Skinner, Jeff; Hubbard, Alan; Arinaitwe, Emmanuel; Mayanja-Kizza, Harriet; Ssewanyana, Isaac; Kamya, Moses R.; Beeson, James G.; Tappero, Jordan; Smith, David L.; Crompton, Peter D.; Rosenthal, Philip J.; Dorsey, Grant; Drakeley, Christopher J.; Greenhouse, Bryan

    2015-01-01

    Tools to reliably measure Plasmodium falciparum (Pf) exposure in individuals and communities are needed to guide and evaluate malaria control interventions. Serologic assays can potentially produce precise exposure estimates at low cost; however, current approaches based on responses to a few characterized antigens are not designed to estimate exposure in individuals. Pf-specific antibody responses differ by antigen, suggesting that selection of antigens with defined kinetic profiles will improve estimates of Pf exposure. To identify novel serologic biomarkers of malaria exposure, we evaluated responses to 856 Pf antigens by protein microarray in 186 Ugandan children, for whom detailed Pf exposure data were available. Using data-adaptive statistical methods, we identified combinations of antibody responses that maximized information on an individual’s recent exposure. Responses to three novel Pf antigens accurately classified whether an individual had been infected within the last 30, 90, or 365 d (cross-validated area under the curve = 0.86–0.93), whereas responses to six antigens accurately estimated an individual’s malaria incidence in the prior year. Cross-validated incidence predictions for individuals in different communities provided accurate stratification of exposure between populations and suggest that precise estimates of community exposure can be obtained from sampling a small subset of that community. In addition, serologic incidence predictions from cross-sectional samples characterized heterogeneity within a community similarly to 1 y of continuous passive surveillance. Development of simple ELISA-based assays derived from the successful selection strategy outlined here offers the potential to generate rich epidemiologic surveillance data that will be widely accessible to malaria control programs. PMID:26216993
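
    The core evaluation, cross-validated discrimination of recent-exposure status from a handful of antibody responses, can be sketched with standard tooling. The simulated antigen responses and the plain logistic model below are stand-ins; the study used data-adaptive selection over 856 antigens, which is not reproduced here.

      # Cross-validated AUC for classifying recent exposure from antibody responses.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import cross_val_predict

      rng = np.random.default_rng(8)
      n = 186                                        # children, as in the study
      recent = rng.integers(0, 2, n)                 # infected within last 90 d (synthetic)
      # Three antibody responses that rise with recent exposure, plus noise:
      X = rng.normal(0, 1, (n, 3)) + recent[:, None] * np.array([1.2, 0.9, 0.7])

      prob = cross_val_predict(LogisticRegression(), X, recent, cv=10,
                               method="predict_proba")[:, 1]
      print(f"cross-validated AUC: {roc_auc_score(recent, prob):.2f}")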

  19. A practical approach to accurate classification and staging of mycosis fungoides and Sézary syndrome.

    PubMed

    Thomas, Bjorn Rhys; Whittaker, Sean

    2012-12-01

    Cutaneous T-cell lymphomas are rare, distinct forms of non-Hodgkin's lymphoma, of which mycosis fungoides (MF) and Sézary syndrome (SS) are two of the most common. Careful, clear classification and staging of these lymphomas allow dermatologists to commence appropriate therapy and allow correct prognostic stratification for affected patients. Of note, patients with more advanced disease will require multi-disciplinary input in determining specialist therapy. The literature has been summarized into an outline for the classification and staging of MF and SS, with the aim of providing clinical dermatologists with a concise review.

  20. Accurate multi-source forest species mapping using the multiple spectral-spatial classification approach

    NASA Astrophysics Data System (ADS)

    Stavrakoudis, Dimitris; Gitas, Ioannis; Karydas, Christos; Kolokoussis, Polychronis; Karathanassi, Vassilia

    2015-10-01

    This paper proposes an efficient methodology for combining multiple remotely sensed images, in order to increase the classification accuracy in complex forest species mapping tasks. The proposed scheme follows a decision fusion approach, whereby each image is first classified separately by means of a pixel-wise Fuzzy-Output Support Vector Machine (FO-SVM) classifier. Subsequently, the multiple results are fused according to the so-called multiple spectral-spatial classifier using the minimum spanning forest (MSSC-MSF) approach, which constitutes an effective post-regularization procedure for enhancing the result of a single pixel-based classification. For this purpose, the original MSSC-MSF has been extended in order to handle multiple classifications. In particular, the fuzzy outputs of the pixel-based classifiers are stacked and used to grow the MSF, whereas the markers are also determined considering both classifications. The proposed methodology has been tested on a challenging forest species mapping task in northern Greece, considering a multispectral (GeoEye) and a hyperspectral (CASI) image. The pixel-wise classifications resulted in overall accuracies (OA) of 68.71% for the GeoEye and 77.95% for the CASI image, respectively; both are characterized by high levels of speckle noise. Applying the proposed multi-source MSSC-MSF fusion, the OA climbs to 90.86%, which is attributed both to the ability of MSSC-MSF to tackle the salt-and-pepper effect and to the fact that the fusion approach exploits the relative advantages of both information sources.

  1. Classification algorithms with multi-modal data fusion could accurately distinguish neuromyelitis optica from multiple sclerosis

    PubMed Central

    Eshaghi, Arman; Riyahi-Alam, Sadjad; Saeedi, Roghayyeh; Roostaei, Tina; Nazeri, Arash; Aghsaei, Aida; Doosti, Rozita; Ganjgahi, Habib; Bodini, Benedetta; Shakourirad, Ali; Pakravan, Manijeh; Ghana'ati, Hossein; Firouznia, Kavous; Zarei, Mojtaba; Azimi, Amir Reza; Sahraian, Mohammad Ali

    2015-01-01

    Neuromyelitis optica (NMO) exhibits substantial similarities to multiple sclerosis (MS) in clinical manifestations and imaging results and has long been considered a variant of MS. With the advent of a specific biomarker in NMO, known as anti-aquaporin 4, this assumption has changed; however, the differential diagnosis remains challenging and it is still not clear whether a combination of neuroimaging and clinical data could be used to aid clinical decision-making. Computer-aided diagnosis is a rapidly evolving process that holds great promise to facilitate objective differential diagnoses of disorders that show similar presentations. In this study, we aimed to use a powerful method for multi-modal data fusion, known as multi-kernel learning, to perform automatic diagnosis of subjects. We included 30 patients with NMO, 25 patients with MS and 35 healthy volunteers and performed multi-modal imaging with T1-weighted high-resolution scans, diffusion tensor imaging (DTI) and resting-state functional MRI (fMRI). In addition, subjects underwent clinical examinations and cognitive assessments. We included 18 a priori predictors from neuroimaging, clinical and cognitive measures in the initial model. We used 10-fold cross-validation to learn the importance of each modality, train and finally test the model performance. The mean accuracy in differentiating between MS and NMO was 88%, where visible white matter lesion load, normal-appearing white matter (DTI) and functional connectivity had the most important contributions to the final classification. In a multi-class classification problem we distinguished between all 3 groups (MS, NMO and healthy controls) with an average accuracy of 84%. In this classification, visible white matter lesion load, functional connectivity, and cognitive scores were the 3 most important modalities. Our work provides preliminary evidence that computational tools can be used to help make an objective differential diagnosis of NMO and MS.
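
    In its simplest form, multi-kernel learning assigns a weight to one Gram matrix per modality and feeds the combined kernel to an SVM; the sketch below tunes a single convex weight by cross-validation, a deliberate simplification of full MKL optimization. The two synthetic "modalities" and group sizes are assumptions.

      # Two-modality multi-kernel SVM via a convex combination of Gram matrices (sketch).
      import numpy as np
      from sklearn.metrics.pairwise import rbf_kernel
      from sklearn.model_selection import StratifiedKFold
      from sklearn.svm import SVC

      rng = np.random.default_rng(9)
      y = np.repeat([0, 1, 2], 30)                    # e.g. NMO / MS / healthy controls
      X_mri = rng.normal(y[:, None], 1.0, (90, 10))   # imaging-derived features
      X_cln = rng.normal(y[:, None], 2.0, (90, 5))    # clinical/cognitive features
      K1, K2 = rbf_kernel(X_mri), rbf_kernel(X_cln)

      def cv_acc(w):
          K, acc = w * K1 + (1 - w) * K2, []          # combined kernel
          for tr, te in StratifiedKFold(5, shuffle=True, random_state=0).split(K, y):
              m = SVC(kernel="precomputed").fit(K[np.ix_(tr, tr)], y[tr])
              acc.append(m.score(K[np.ix_(te, tr)], y[te]))
          return np.mean(acc)

      best_w = max(np.linspace(0, 1, 11), key=cv_acc)
      print(f"kernel weight={best_w:.1f}, CV accuracy={cv_acc(best_w):.2f}")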

  2. Two fast and accurate heuristic RBF learning rules for data classification.

    PubMed

    Rouhani, Modjtaba; Javan, Dawood S

    2016-03-01

    This paper presents new Radial Basis Function (RBF) learning methods for classification problems. The proposed methods use heuristics to determine the spreads, the centers and the number of hidden neurons of the network in such a way that higher efficiency is achieved with fewer neurons, while the learning algorithm remains fast and simple. To keep the network size limited, neurons are added to the network recursively until a termination condition is met. Each neuron covers some of the training data. The termination condition is to cover all training data or to reach the maximum number of neurons. In each step, the center and spread of the new neuron are selected based on maximization of its coverage. Maximizing the coverage of the neurons leads to a network with fewer neurons and hence lower VC dimension and better generalization properties. Using the power exponential distribution function as the activation function of hidden neurons, and in the light of the new learning approaches, it is proved that all data become linearly separable in the space of hidden layer outputs, which implies that there exist linear output-layer weights with zero training error. The proposed methods are applied to some well-known datasets and the simulation results, compared with SVM and some other leading RBF learning methods, show their satisfactory and comparable performance. PMID:26797472
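
    The recursive neuron-adding heuristic can be sketched as follows: repeatedly place a hidden unit at the training point whose neighbourhood covers the most still-uncovered same-class points, stop when everything is covered or a size cap is hit, then solve the linear output layer by least squares. The fixed radius and Gaussian basis below are simplifying assumptions about the paper's heuristics.

      # Greedy-coverage RBF classifier: add neurons until all points are covered (sketch).
      import numpy as np

      rng = np.random.default_rng(10)
      X = np.vstack([rng.normal(0, 1, (60, 2)), rng.normal(3, 1, (60, 2))])
      y = np.repeat([0, 1], 60)
      d = np.linalg.norm(X[:, None] - X[None], axis=2)   # pairwise distances
      radius, centers = 1.5, []
      uncovered = np.ones(len(X), bool)

      while uncovered.any() and len(centers) < 30:       # size cap = termination condition
          # For each candidate center, count uncovered same-class points in its radius.
          cover = ((d < radius) & (y[None] == y[:, None]) & uncovered[None]).sum(1)
          c = int(cover.argmax())
          centers.append(c)
          uncovered &= ~((d[c] < radius) & (y == y[c]))

      H = np.exp(-(d[:, centers] / radius)**2)           # Gaussian hidden-layer outputs
      W, *_ = np.linalg.lstsq(H, np.eye(2)[y], rcond=None)  # linear output weights
      print(f"neurons={len(centers)}, accuracy={((H @ W).argmax(1) == y).mean():.2f}")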

  4. Can parents of children with cancer accurately report their child's passive smoking exposure?

    PubMed Central

    Lensing, Shelly; Vukadinovich, Christopher M.; Hovell, Melbourne F.

    2009-01-01

    Introduction: This study examined whether children with cancer are exposed to measurable levels of passive smoke as assessed by parent report and laboratory measures of urine cotinine, an established biomarker of passive smoke exposure (PSE). It also determined whether parents/caretakers of young cancer patients can provide valid reports of their child's PSE during the child's treatment, by examining their association with urine cotinine measures. Methods: Participants included 124 parents of a child with cancer who lived with at least one adult smoker in the home and was exposed to tobacco smoke in the home and/or car. Eligible patients were younger than 18 years of age, were receiving active treatment for cancer at a large pediatric oncology institution, were at least 30 days postdiagnosis, and did not smoke. Parents provided information about smoking and their child's PSE by responding to a series of questionnaires. Patients provided urine samples for cotinine analyses. Results: Findings showed that parents provided valid short-term accounts of their child's PSE in the context of their child's cancer treatment. Parent reports of PSE showed moderately strong positive relationships with urine cotinine levels which were stronger for reports provided by parents who smoked compared with nonsmoking parents. Discussion: Parent reports of PSE were validated by positive and significant associations with urine cotinine. Reports provided in the context of possible verification by biomarker assays can provide sufficiently accurate estimates of PSE to serve as outcome measures for clinical research and clinical care in a pediatric cancer setting. PMID:19696308

  5. Exposure Classification and Temporal Variability in Urinary Bisphenol A Concentrations among Couples in Utah—The HOPE Study

    PubMed Central

    Cox, Kyley J.; Porucznik, Christina A.; Anderson, David J.; Brozek, Eric M.; Szczotka, Kathryn M.; Bailey, Nicole M.; Wilkins, Diana G.; Stanford, Joseph B.

    2015-01-01

    Background: Bisphenol A (BPA) is an endocrine disruptor and potential reproductive toxicant, but results of epidemiologic studies have been mixed and have been criticized for inadequate exposure assessment that often relies on a single measurement. Objective: Our goal was to describe the distribution of BPA concentrations in serial urinary specimens, assess temporal variability, and provide estimates of exposure classification when randomly selected samples are used to predict average exposure. Methods: We collected and analyzed 2,614 urine specimens from 83 Utah couples beginning in 2012. Female participants collected daily first-morning urine specimens during one to two menstrual cycles and male partners collected specimens during the woman’s fertile window for each cycle. We measured urinary BPA concentrations and calculated geometric means (GM) for each cycle, characterized the distribution of observed values and temporal variability using intraclass correlation coefficients, and performed surrogate category analyses to determine how well repeat samples could classify exposure. Results: The GM urine BPA concentration was 2.78 ng/mL among males and 2.44 ng/mL among females. BPA had a high degree of variability among both males (ICC = 0.18; 95% CI: 0.11, 0.26) and females (ICC = 0.11; 95% CI: 0.08, 0.16). Based on our more stringent surrogate category analysis, to reach proportions ≥ 0.80 for sensitivity, specificity, and positive predictive value (PPV) among females, 6 and 10 repeat samples for the high and low tertiles, respectively, were required. For the medium tertile, specificity reached 0.87 with 10 repeat samples, but even with 11 samples, sensitivity and PPV did not exceed 0.36. Five repeat samples, among males, yielded sensitivity and PPV values ≥ 0.75 for the high and low tertiles, but, similar to females, classification for the medium tertile was less accurate. Conclusion: Repeated urinary specimens are required to characterize typical BPA

  6. Response to “Accurate Risk-Based Chemical Screening Relies on Robust Exposure Estimates”

    EPA Science Inventory

    This is a correspondence (letter to the editor) with reference to comments by Rudel and Perovich on the article "Integration of Dosimetry, Exposure, and High-Throughput Screening Data in Chemical Toxicity Assessment". Article Reference: SI # 238882

  7. Application of ILO classification to a population without industrial exposure: findings to be differentiated from pneumoconiosis

    SciTech Connect

    Epstein, D.M.; Miller, W.T.; Bresnitz, E.A.; Levine, M.S.; Gefter, W.B.

    1984-01-01

    The International Labour Office (ILO) classification for radiographs of pneumoconiosis is a standard means of assessing the presence or absence of pneumoconiosis in workers exposed to mineral dusts. Using this classification, 200 admission chest radiographs of hospitalized patients in an urban university medical center were reviewed to determine the prevalence and possible significance of "small opacities" in a population without known industrial exposure. Seventy-one men and 129 women were screened, with a mean age of 44.2 years (range, 15-84). Thirty-six (18%) of the 200 patients had small opacities at profusion level 1/0 or greater, and these constituted the "positive radiographs" group. Twenty-two patients (11%) with positive radiographs had no documentable dust exposure or other specific medical etiology that would explain the presence of their lung opacities. The high prevalence of small opacities in "normal" older individuals has important implications in the assessment of patients with suspected pneumoconiosis.

  8. Molecular-genetic analysis is essential for accurate classification of renal carcinoma resembling Xp11.2 translocation carcinoma.

    PubMed

    Hayes, Malcolm; Peckova, Kvetoslava; Martinek, Petr; Hora, Milan; Kalusova, Kristyna; Straka, Lubomir; Daum, Ondrej; Kokoskova, Bohuslava; Rotterova, Pavla; Pivovarčikova, Kristyna; Branzovsky, Jindrich; Dubova, Magdalena; Vesela, Pavla; Michal, Michal; Hes, Ondrej

    2015-03-01

    Xp11.2-translocation renal carcinoma (TRCC) is suspected when a renal carcinoma occurs in young patients, in patients with a prior history of exposure to chemotherapy, and when the neoplasm has morphological features suggestive of that entity. We retrieved 20 renal tumours (from 17,500 archival cases) whose morphology aroused suspicion of TRCC. In nine cases, TFE3 translocation was confirmed by fluorescence in situ hybridisation analysis. In 9 of the remaining 11 TRCC-like cases (7 male, 4 female, aged 22-84 years), material was available for further study. The morphological spectrum was diverse. Six tumours showed a mixture of cells with eosinophilic or clear cytoplasm in tubular, acinar and papillary architecture. One case was high grade with epithelioid, spindle cell and sarcomatoid areas. Another showed tubular, solid, and papillary areas and foci containing spindle cells reminiscent of mucinous tubular and spindle cell carcinoma. The third showed dyscohesive nests of large epithelioid and histiocytoid cells in a background of dense lymphoplasmacytic infiltrate. By immunohistochemistry, keratin AE1/AE3 was diffusely positive in three tumours, while CK7 strongly stained one tumour and another focally and weakly. CD10 and Pax8 were expressed by eight, AMACR and vimentin by seven, CA-IX by four and TFE3 and cathepsin K by two tumours. Of the two TFE3-positive tumours, one showed polysomy of chromosome 7 and the other of 17; they were VHL normal and diagnosed as unclassifiable RCC. Of the seven TFE3-negative tumours, three showed polysomy of 7/17 and VHL abnormality and were diagnosed as combined clear cell RCC/papillary RCC. One TFE3-negative tumour with normal 7/17 but LOH 3p (VHL abnormality) was diagnosed as clear cell RCC. One TFE3-negative tumour with polysomy 7/17 but normal VHL was diagnosed as papillary RCC, and two with normal chromosomes 7/17 and a normal VHL gene were considered unclassifiable. As morphological features and IHC are heterogeneous, TRCC-like renal...

  9. Classification

    ERIC Educational Resources Information Center

    Clary, Renee; Wandersee, James

    2013-01-01

    In this article, Renee Clary and James Wandersee describe the beginnings of "Classification," which lies at the very heart of science and depends upon pattern recognition. Clary and Wandersee approach patterns by first telling the story of the "Linnaean classification system," introduced by Carl Linnaeus (1707-1778), who is…

  10. Topology representing network enables highly accurate classification of protein images taken by cryo electron-microscope without masking.

    PubMed

    Ogura, Toshihiko; Iwasaki, Kenji; Sato, Chikara

    2003-09-01

    In single-particle analysis, a three-dimensional (3-D) structure of a protein is constructed using electron microscopy (EM). As these images are in general very noisy, the primary step of this 3-D reconstruction is the classification of images according to their Euler angles, the images in each classified group then being averaged to reduce the noise level. In our newly developed classification strategy, we introduce a topology representing network (TRN) method. It is a modification of the growing neural gas (GNG) network. In this system, a network structure is automatically determined in response to the images input through a growing process. After learning without a masking procedure, the GNG creates clear averages of the inputs as unit coordinates in multi-dimensional space, which are then utilized for classification. In the process, connections are automatically created between highly related units, and their positions are shifted to where the inputs are distributed in multi-dimensional space. Consequently, several separated groups of connected units are formed. Although the interrelationship of units in this space is not easily understood, we succeeded in solving this problem by converting the unit positions into two-dimensional (2-D) space, and by further optimizing the unit positions with the simulated annealing (SA) method. In the optimized 2-D map, visualization of the connections of units provided rich information about clustering. As demonstrated here, this method is clearly superior to both multi-variate statistical analysis (MSA) and the self-organizing map (SOM) as a classification method, and provides the first reliable classification method that can be used without masking for very noisy images. PMID:14572474

  11. Analysis of continuous oxygen saturation data for accurate representation of retinal exposure to oxygen in the preterm infant.

    PubMed

    Cirelli, Josie; McGregor, Carolyn; Graydon, Brenda; James, Andrew

    2013-01-01

    Maintaining blood oxygen saturation within the intended target range for preterm infants receiving neonatal intensive care is challenging. Supplemental oxygen is believed to lead to an increased risk of retinopathy of prematurity, and hence managing the level of oxygen within this population is an important part of their care. Current quality improvement activities use coarse hourly spot readings to measure supplemental oxygen levels against targeted ranges that vary based on gestational age. In this research we use Artemis, a real-time online healthcare analytics platform, to ascertain whether the collection of second-by-second data provides a better representation of retinal exposure to oxygen than an infrequent, intermittent spot reading. We show that Artemis is capable of producing more accurate information from the higher-frequency data, as it includes all the episodic events in the activity of the hour, which provides a better understanding of the oxygen fluctuation ranges that affect the physiological status of the infant.
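
    The contrast the study draws, hourly spot values versus full second-by-second data, comes down to how time-in-target-range is computed. A small sketch with a synthetic saturation trace (the target range and signal model are assumed examples, not clinical values from the study):

      # Time-in-range from 1 Hz SpO2 data vs hourly spot readings (sketch).
      import numpy as np

      rng = np.random.default_rng(11)
      sec = 24 * 3600                                  # one day at 1 Hz
      spo2 = np.clip(92 + np.cumsum(rng.normal(0, 0.05, sec)) % 6 - 3, 80, 100)

      lo, hi = 88, 95                                  # assumed target range
      full = ((spo2 >= lo) & (spo2 <= hi)).mean()      # uses every sample
      spot = spo2[::3600]                              # one reading per hour
      hourly = ((spot >= lo) & (spot <= hi)).mean()
      print(f"1 Hz estimate: {full:.1%}; hourly spot estimate: {hourly:.1%}")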

  12. A non-contact method based on the multiple signal classification algorithm to reduce the measurement time for accurate heart rate detection

    NASA Astrophysics Data System (ADS)

    Bechet, P.; Mitran, R.; Munteanu, M.

    2013-08-01

    Non-contact methods for the assessment of vital signs are of great interest to specialists due to the benefits obtained in both medical and special applications, such as those for surveillance, monitoring, and search and rescue. This paper investigates the possibility of implementing a digital processing algorithm based on the MUSIC (Multiple Signal Classification) parametric spectral estimation method in order to reduce the observation time needed to accurately measure the heart rate. It demonstrates that, by properly dimensioning the signal subspace, the MUSIC algorithm can be optimized to accurately assess the heart rate during an 8-28 s time interval. The validation of the processing algorithm's performance was achieved by minimizing the mean error of the heart rate after performing simultaneous comparative measurements on several subjects. In order to calculate the error, the reference heart rate was measured using a classic measurement system through direct contact.
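
    A condensed sketch of the approach on a synthetic cardiac signal: build a covariance matrix from time-lagged snapshots, split off the noise eigen-subspace, and take the MUSIC pseudospectrum peak as the heart-rate estimate. The embedding dimension, subspace size, and signal model are illustrative assumptions.

      # MUSIC pseudospectrum peak as a heart-rate estimate (sketch, synthetic data).
      import numpy as np

      fs, f_hr = 50.0, 72 / 60                        # sample rate (Hz); 72 bpm target
      t = np.arange(0, 20, 1 / fs)
      x = np.sin(2 * np.pi * f_hr * t) + 0.8 * np.random.default_rng(12).normal(size=t.size)

      m = 40                                          # snapshot (embedding) dimension
      snaps = np.lib.stride_tricks.sliding_window_view(x, m)
      R = snaps.T @ snaps / len(snaps)                # sample covariance matrix
      _, vecs = np.linalg.eigh(R)                     # eigenvectors, ascending eigenvalues
      En = vecs[:, :-2]                               # noise subspace (a real sinusoid spans 2 dims)

      freqs = np.linspace(0.5, 3.0, 500)              # 30-180 bpm search band
      a = np.exp(-2j * np.pi * np.outer(freqs, np.arange(m)) / fs)   # steering vectors
      pseudo = 1.0 / np.linalg.norm(a @ En, axis=1)**2
      print(f"estimated heart rate: {60 * freqs[pseudo.argmax()]:.1f} bpm")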

  13. Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the 'Extreme Learning Machine' Algorithm.

    PubMed

    McDonnell, Mark D; Tissera, Migel D; Vladusich, Tony; van Schaik, André; Tapson, Jonathan

    2015-01-01

    Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the 'Extreme Learning Machine' (ELM) approach, which also enables a very rapid training time (∼ 10 minutes). Adding distortions, as is common practice for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden unit operates only on a randomly sized and positioned patch of each image. This form of random 'receptive field' sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden units required to achieve a particular performance. Our close to state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration, either as a standalone method for simpler problems or as the final classification stage in deep neural networks applied to more difficult problems.
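
    The ELM recipe is compact enough to sketch end to end: a fixed random hidden layer whose input weights are sparsified to mimic the paper's random receptive-field patches, and output weights solved as a regularized least-squares problem. Random placeholder data stand in for MNIST below.

      # ELM classifier sketch: random sparse hidden layer + least-squares readout.
      import numpy as np

      rng = np.random.default_rng(13)
      n, d, h, classes = 2000, 784, 1000, 10           # e.g. 28x28 images, 10 digits
      X, y = rng.normal(0, 1, (n, d)), rng.integers(0, classes, n)

      W = rng.normal(0, 1, (d, h))
      W *= rng.random((d, h)) < 0.1                    # ~90% of input weights set to zero
      b = rng.normal(0, 1, h)
      H = np.maximum(X @ W + b, 0)                     # fixed random hidden layer (ReLU)

      T = np.eye(classes)[y]                           # one-hot targets
      beta = np.linalg.solve(H.T @ H + 1e-2 * np.eye(h), H.T @ T)   # ridge readout
      print(f"training accuracy: {((H @ beta).argmax(1) == y).mean():.2f}")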

  14. Classification

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.

    2011-01-01

    A supervised learning task involves constructing a mapping from input data (normally described by several features) to the appropriate outputs. Within supervised learning, one type of task is a classification learning task, in which each output is one or more classes to which the input belongs. In supervised learning, a set of training examples---examples with known output values---is used by a learning algorithm to generate a model. This model is intended to approximate the mapping between the inputs and outputs. This model can be used to generate predicted outputs for inputs that have not been seen before. For example, we may have data consisting of observations of sunspots. In a classification learning task, our goal may be to learn to classify sunspots into one of several types. Each example may correspond to one candidate sunspot with various measurements or just an image. A learning algorithm would use the supplied examples to generate a model that approximates the mapping between each supplied set of measurements and the type of sunspot. This model can then be used to classify previously unseen sunspots based on the candidate's measurements. This chapter discusses methods to perform machine learning, with examples involving astronomy.

  15. Solar ultraviolet and the occupational radiant exposure of Queensland school teachers: A comparative study between teaching classifications and behavior patterns.

    PubMed

    Downs, Nathan J; Harrison, Simone L; Chavez, Daniel R Garzon; Parisi, Alfio V

    2016-05-01

    Classroom teachers located in Queensland, Australia are exposed to high levels of ambient solar ultraviolet as part of the occupational requirement to provide supervision of children during lunch and break times. We investigated the relationship between periods of outdoor occupational radiant exposure and available ambient solar radiation across different teaching classifications and schools relative to the daily occupational solar ultraviolet radiation (HICNIRP) protection standard of 30 J/m². Self-reported daily sun exposure habits (n=480) and personal radiant exposures were monitored using calibrated polysulphone dosimeters (n=474) in 57 teaching staff from 6 different schools located in tropical north and southern Queensland. Daily radiant exposure patterns among teaching groups were compared to the ambient UV-Index. Personal sun exposures were stratified among teaching classifications, school location, school ownership (government vs non-government), and type (primary vs secondary). Median daily radiant exposures were 15 J/m² and 5 J/m² HICNIRP for schools located in northern and southern Queensland respectively. Of the 474 analyzed dosimeter-days, 23.0% were found to exceed the solar radiation protection standard, with the highest prevalence found among physical education teachers (57.4% dosimeter-days), followed by teacher aides (22.6% dosimeter-days) and classroom teachers (18.1% dosimeter-days). In Queensland, peak outdoor exposure times of teaching staff correspond with periods of extreme UV-Index. The daily occupational HICNIRP radiant exposure standard was exceeded in all schools and in all teaching classifications. PMID:26963432

  16. Effect of Item Selection on Item Exposure Rates within a Computerized Classification Test.

    ERIC Educational Resources Information Center

    Kalohn, John C.; Spray, Judith A.

    The purpose of many certification or licensure tests is to identify candidates who possess some level of minimum competence to practice their profession. In general, this type of test is referred to as classification testing. When this type of test is administered with a computer, the test is a computerized classification test (CCT). This paper…

  17. An Exploration of Hyperion Hyperspectral Imagery Combined with Different Supervised Classification Approaches Towards Obtaining More Accurate Land Use/Cover Cartography

    NASA Astrophysics Data System (ADS)

    Igityan, Nune

    2014-05-01

    Land use and land cover (LULC) constitutes a key variable of the Earth's system that has in general shown a close correlation with human activities and the physical environment. Describing the pattern and the spatial distribution of LULC is traditionally based on remote sensing data analysis and, evidently, one of the most commonly applied techniques has been image classification. The main objective of the present study has been to evaluate the combined use of Hyperion hyperspectral imagery with a range of supervised classification algorithms widely available today for discriminating LULC classes in a typical Mediterranean setting. Accuracy assessment of the derived thematic maps was based on the analysis of the classification confusion matrix statistics computed for each classification map, using for consistency the same set of validation points. Those were selected on the basis of photo-interpretation of high-resolution aerial imagery and of panchromatic imagery available for the studied region at the time of the Hyperion overpass. Results indicated close classification accuracy among the different classifiers, with the SVMs outperforming the other classification approaches. The higher classification accuracy of SVMs was attributed principally to the ability of this classifier to identify an optimal separating hyperplane for class separation, which allows a low generalisation error and thus produces the best possible class separation. Although all classifiers produced close results, SVMs generally appeared most useful in describing the spatial distribution and the cover density of each land cover category. All in all, this study demonstrated that Hyperion hyperspectral imagery, provided it can be made available at regular time intervals over a given region and is combined with SVM classifiers, can potentially enable a wider approach to land use/cover mapping. This can be of particular importance, especially for regions like the Mediterranean basin...

  18. Encoding negative events under stress: high subjective arousal is related to accurate emotional memory despite misinformation exposure.

    PubMed

    Hoscheidt, Siobhan M; LaBar, Kevin S; Ryan, Lee; Jacobs, W Jake; Nadel, Lynn

    2014-07-01

    Stress at encoding affects memory processes, typically enhancing, or preserving, memory for emotional information. These effects have interesting implications for eyewitness accounts, which in real-world contexts typically involve encoding an aversive event under stressful conditions followed by potential exposure to misinformation. The present study investigated memory for a negative event encoded under stress and subsequent misinformation endorsement. Healthy young adults participated in a between-groups design with three experimental sessions conducted 48 h apart. Session one consisted of a psychosocial stress induction (or control task) followed by incidental encoding of a negative slideshow. During session two, participants were asked questions about the slideshow, during which a random subgroup was exposed to misinformation. Memory for the slideshow was tested during the third session. Assessment of memory accuracy across stress and no-stress groups revealed that stress induced just prior to encoding led to significantly better memory for the slideshow overall. The classic misinformation effect was also observed - participants exposed to misinformation were significantly more likely to endorse false information during memory testing. In the stress group, however, memory accuracy and misinformation effects were moderated by arousal experienced during encoding of the negative event. Misinformed-stress group participants who reported that the negative slideshow elicited high arousal during encoding were less likely to endorse misinformation for the most aversive phase of the story. Furthermore, these individuals showed better memory for components of the aversive slideshow phase that had been directly misinformed. Results from the current study provide evidence that stress and high subjective arousal elicited by a negative event act concomitantly during encoding to enhance emotional memory such that the most aversive aspects of the event are well remembered and…

  19. Genomic Models of Short-Term Exposure Accurately Predict Long-Term Chemical Carcinogenicity and Identify Putative Mechanisms of Action

    PubMed Central

    Gusenleitner, Daniel; Auerbach, Scott S.; Melia, Tisha; Gómez, Harold F.; Sherr, David H.; Monti, Stefano

    2014-01-01

    Background Despite an overall decrease in incidence of and mortality from cancer, about 40% of Americans will be diagnosed with the disease in their lifetime, and around 20% will die of it. Current approaches to test carcinogenic chemicals adopt the 2-year rodent bioassay, which is costly and time-consuming. As a result, fewer than 2% of the chemicals on the market have actually been tested. However, evidence accumulated to date suggests that gene expression profiles from model organisms exposed to chemical compounds reflect underlying mechanisms of action, and that these toxicogenomic models could be used in the prediction of chemical carcinogenicity. Results In this study, we used a rat-based microarray dataset from the NTP DrugMatrix Database to test the ability of toxicogenomics to model carcinogenicity. We analyzed 1,221 gene-expression profiles obtained from rats treated with 127 well-characterized compounds, including genotoxic and non-genotoxic carcinogens. We built a classifier that predicts a chemical's carcinogenic potential with an AUC of 0.78, and validated it on an independent dataset from the Japanese Toxicogenomics Project consisting of 2,065 profiles from 72 compounds. Finally, we identified differentially expressed genes associated with chemical carcinogenesis, and developed novel data-driven approaches for the molecular characterization of the response to chemical stressors. Conclusion Here, we validate a toxicogenomic approach to predict carcinogenicity and provide strong evidence that, with a larger set of compounds, we should be able to improve the sensitivity and specificity of the predictions. We found that the prediction of carcinogenicity is tissue-dependent and that the results also confirm and expand upon previous studies implicating DNA damage, the peroxisome proliferator-activated receptor, the aryl hydrocarbon receptor, and regenerative pathology in the response to carcinogen exposure. PMID:25058030

  20. How accurate and precise are limited sampling strategies in estimating exposure to mycophenolic acid in people with autoimmune disease?

    PubMed

    Abd Rahman, Azrin N; Tett, Susan E; Staatz, Christine E

    2014-03-01

    …maximum a posteriori (MAP) Bayesian analysis. Although mean bias was less when data were analysed using multiple linear regression, MAP Bayesian analysis is preferable because of its flexibility with respect to sample timing. Estimation of MPA AUC12 following EC-MPS administration using a limited sampling strategy with samples drawn within 3 h post-dose resulted in biased and imprecise results, likely due to a longer time to reach a peak MPA concentration (t max) with this formulation and more variable pharmacokinetic profiles. Inclusion of later sampling time points that capture enterohepatic recirculation and t max improved the predictive performance of strategies to predict EC-MPS exposure. Given the considerable pharmacokinetic variability associated with mycophenolate therapy, limited sampling strategies may potentially help in individualizing patient dosing. However, a compromise needs to be made between the predictive performance of the strategy and its clinical feasibility. An opportunity exists to combine research efforts globally to create an open-source database for MPA (AUC, concentrations and outcomes) that can be used and prospectively evaluated for AUC target-controlled dosing of MPA in autoimmune diseases.

  1. A genetic classification of sinkholes illustrated from evaporite paleokarst exposures in Spain

    NASA Astrophysics Data System (ADS)

    Gutiérrez, Francisco; Guerrero, Jesús; Lucha, Pedro

    2008-01-01

    This contribution analyses the processes involved in the generation of sinkholes from the study of paleokarst features exposed in four Spanish Tertiary basins. Bedrock strata are subhorizontal evaporites, and in three of the basins they include halite and glauberite in the subsurface. Our studies suggest that formation of dolines in these areas results from a wider range of subsidence processes than those included in the most recently published sinkhole classifications; a new genetic classification of sinkholes applicable to both carbonate and evaporite karst areas is thus proposed. With the exception of solution dolines, it defines the main sinkhole types by use of two terms that refer to the material affected by downward gravitational movements (cover, bedrock or caprock) and the main type of process involved (collapse, suffosion or sagging). Sinkholes that result from the combination of several subsidence processes and affect more than one type of material are described by combinations of the different terms with the dominant material or process followed by the secondary one (e.g. bedrock sagging and collapse sinkhole). The mechanism of collapse includes any brittle gravitational deformation of cover and bedrock material, such as upward stoping of cavities by roof failure, development of well-defined failure planes and rock brecciation. Suffosion is the downward migration of cover deposits through dissolutional conduits accompanied by ductile settling. Sagging is the ductile flexure of sediments caused by differential corrosional lowering of the rockhead or interstratal karstification of the soluble bedrock. The paleokarsts we analysed suggest that the sagging mechanism (not included in previous genetic classifications) plays an important role in the generation of sinkholes in evaporites. Moreover, collapse processes are more significant in extent and rate in areas underlain by evaporites than in carbonate karst, primarily due to the greater solubility of the…

  2. Evidence that bisphenol A (BPA) can be accurately measured without contamination in human serum and urine, and that BPA causes numerous hazards from multiple routes of exposure.

    PubMed

    vom Saal, Frederick S; Welshons, Wade V

    2014-12-01

    There is extensive evidence that bisphenol A (BPA) is related to a wide range of adverse health effects based on both human and experimental animal studies. However, a number of regulatory agencies have ignored all hazard findings. Reports of high levels of unconjugated (bioactive) serum BPA in dozens of human biomonitoring studies have also been rejected based on the prediction that the findings are due to assay contamination and that virtually all ingested BPA is rapidly converted to inactive metabolites. NIH and industry-sponsored round robin studies have demonstrated that serum BPA can be accurately assayed without contamination, while the FDA lab has acknowledged uncontrolled assay contamination. In reviewing the published BPA biomonitoring data, we find that assay contamination is, in fact, well controlled in most labs, and cannot be used as the basis for discounting evidence that significant and virtually continuous exposure to BPA must be occurring from multiple sources.

  3. Evidence that bisphenol A (BPA) can be accurately measured without contamination in human serum and urine, and that BPA causes numerous hazards from multiple routes of exposure

    PubMed Central

    vom Saal, Frederick S.; Welshons, Wade V.

    2016-01-01

    There is extensive evidence that bisphenol A (BPA) is related to a wide range of adverse health effects based on both human and experimental animal studies. However, a number of regulatory agencies have ignored all hazard findings. Reports of high levels of unconjugated (bioactive) serum BPA in dozens of human biomonitoring studies have also been rejected based on the prediction that the findings are due to assay contamination and that virtually all ingested BPA is rapidly converted to inactive metabolites. NIH and industry-sponsored round robin studies have demonstrated that serum BPA can be accurately assayed without contamination, while the FDA lab has acknowledged uncontrolled assay contamination. In reviewing the published BPA biomonitoring data, we find that assay contamination is, in fact, well controlled in most labs, and cannot be used as the basis for discounting evidence that significant and virtually continuous exposure to BPA must be occurring from multiple sources. PMID:25304273

  4. EXPOSURE ASSESSMENT METHODS DEVELOPMENT PILOTS FOR THE NATIONAL CHILDREN'S STUDY

    EPA Science Inventory

    Accurate exposure classification tools are needed to link exposure with health effects. EPA began methods development pilot studies in 2000 to address general questions about exposures and outcome measures. Selected pilot studies are highlighted in this poster. The “Literature Re...

  5. Violent crime exposure classification and adverse birth outcomes: a geographically-defined cohort study

    PubMed Central

    Messer, Lynne C; Kaufman, Jay S; Dole, Nancy; Herring, Amy; Laraia, Barbara A

    2006-01-01

    Background Area-level socioeconomic disparities have long been associated with adverse pregnancy outcomes. Crime is an important element of the neighborhood environment inadequately investigated in the reproductive and public health literature. When crime has been used in research, it has been variably defined, resulting in non-comparable associations across studies. Methods Using geocoded linked birth record, crime and census data in multilevel models, this paper explored the relevance of four spatial violent crime exposures: two proximal violent crime categorizations (count of violent crime within a one-half mile radius of maternal residence and distance from maternal residence to nearest violent crime) and two area-level crime categorizations (count of violent crimes within a block group and block group rate of violent crimes) for adverse birth events among women living in the city of Raleigh, NC crime report area in 1999–2001. Models were adjusted for maternal age and education and area-level deprivation. Results In black and white non-Hispanic race-stratified models, crime characterized as a proximal exposure was not able to distinguish between women experiencing adverse and women experiencing normal birth outcomes. Violent crime characterized as a neighborhood attribute was positively associated with preterm birth and low birth weight among non-Hispanic white and black women. No statistically significant interaction between area-deprivation and violent crime category was observed. Conclusion Crime is variably categorized in the literature, with little rationale provided for crime type or categorization employed. This research represents the first time multiple crime categorizations have been directly compared in association with health outcomes. Finding an effect of area-level violent crime suggests crime may best be characterized as a neighborhood attribute with important implications for adverse birth outcomes. PMID:16707017

  6. Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the ‘Extreme Learning Machine’ Algorithm

    PubMed Central

    McDonnell, Mark D.; Tissera, Migel D.; Vladusich, Tony; van Schaik, André; Tapson, Jonathan

    2015-01-01

    Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the ‘Extreme Learning Machine’ (ELM) approach, which also enables a very rapid training time (∼ 10 minutes). Adding distortions, as is common practice for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden-unit operates only on a randomly sized and positioned patch of each image. This form of random ‘receptive field’ sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden-units required to achieve a particular performance. Our close to state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration either as a standalone method for simpler problems, or as the final classification stage in deep neural networks applied to more difficult problems. PMID:26262687
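
    The core ELM recipe described above is simple enough to sketch: random, mostly-zero input weights (the random 'receptive field' sampling), a fixed nonlinear hidden layer, and output weights solved in closed form. This is an illustrative reconstruction under those assumptions, not the authors' implementation; the sizes and ridge constant are arbitrary.

```python
# Illustrative ELM sketch: sparse random input weights, tanh hidden
# layer, closed-form output weights via regularised least squares.
import numpy as np

def elm_train(X, Y_onehot, n_hidden=256, sparsity=0.9, ridge=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    W *= rng.random(W.shape) > sparsity        # zero out ~90% of weights
    H = np.tanh(X @ W)                         # hidden activations
    # Output weights: solve (H'H + ridge*I) beta = H'Y in closed form.
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ Y_onehot)
    return W, beta

def elm_predict(X, W, beta):
    return np.argmax(np.tanh(X @ W) @ beta, axis=1)

# Usage on random data (illustrative only).
X = np.random.default_rng(1).normal(size=(200, 64))
y = (X[:, 0] > 0).astype(int)
W, beta = elm_train(X, np.eye(2)[y])
print("training accuracy:", (elm_predict(X, W, beta) == y).mean())
```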

  7. Photometric brown-dwarf classification. II. A homogeneous sample of 1361 L and T dwarfs brighter than J = 17.5 with accurate spectral types

    NASA Astrophysics Data System (ADS)

    Skrzypek, N.; Warren, S. J.; Faherty, J. K.

    2016-05-01

    We present a homogeneous sample of 1361 L and T dwarfs brighter than J = 17.5 (of which 998 are new), from an effective area of 3070 deg2, classified by the photo-type method to an accuracy of one spectral sub-type using izYJHKW1W2 photometry from SDSS+UKIDSS+WISE. Other than a small bias in the early L types, the sample is shown to be effectively complete to the magnitude limit, for all spectral types L0 to T8. The nature of the bias is an incompleteness estimated at 3% because peculiar blue L dwarfs of type L4 and earlier are classified late M. There is a corresponding overcompleteness because peculiar red (likely young) late M dwarfs are classified early L. Contamination of the sample is confirmed to be small: so far spectroscopy has been obtained for 19 sources in the catalogue and all are confirmed to be ultracool dwarfs. We provide coordinates and izYJHKW1W2 photometry of all sources. We identify an apparent discontinuity, Δm ~ 0.4 mag, in the Y - K colour between spectral types L7 and L8. We present near-infrared spectra of nine sources identified by photo-type as peculiar, including a new low-gravity source ULAS J005505.68+013436.0, with spectroscopic classification L2γ. We provide revised izYJHKW1W2 template colours for late M dwarfs, types M7 to M9. The catalogue is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/589/A49

  8. Rapid and accurate taxonomic classification of insect (class Insecta) cytochrome c oxidase subunit 1 (COI) DNA barcode sequences using a naïve Bayesian classifier

    PubMed Central

    Porter, Teresita M; Gibson, Joel F; Shokralla, Shadi; Baird, Donald J; Golding, G Brian; Hajibabaei, Mehrdad

    2014-01-01

    Current methods to identify unknown insect (class Insecta) cytochrome c oxidase (COI barcode) sequences often rely on thresholds of distances that can be difficult to define, sequence similarity cut-offs, or monophyly. Some of the most commonly used metagenomic classification methods do not provide a measure of confidence for the taxonomic assignments they provide. The aim of this study was to use a naïve Bayesian classifier (Wang et al., Applied and Environmental Microbiology, 2007; 73: 5261) to automate taxonomic assignments for large batches of insect COI sequences such as data obtained from high-throughput environmental sequencing. This method provides rank-flexible taxonomic assignments with an associated bootstrap support value, and it is faster than the BLAST-based methods commonly used in environmental sequence surveys. We have developed and rigorously tested the performance of three different training sets using leave-one-out cross-validation, two field data sets, and targeted testing of Lepidoptera, Diptera and Mantodea sequences obtained from the Barcode of Life Data system. We found that type I error rates, incorrect taxonomic assignments with a high bootstrap support, were already relatively low but could be lowered further by ensuring that all query taxa are actually present in the reference database. Choosing bootstrap support cut-offs according to query length and summarizing taxonomic assignments to more inclusive ranks can also help to reduce error while retaining the maximum number of assignments. Additionally, we highlight gaps in the taxonomic and geographic representation of insects in public sequence databases that will require further work by taxonomists to improve the quality of assignments generated using any method.
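
    A toy reconstruction of a Wang et al.-style classifier may help: k-mer presence features, smoothed per-taxon word probabilities, and bootstrap support from resampled word subsets. The smoothing constants are illustrative; the 1/8 resample fraction follows the common RDP convention and should be treated as an assumption here, not a detail from this abstract.

```python
# Toy naive Bayesian barcode classifier with bootstrap support
# (simplified reconstruction, not the authors' code).
import numpy as np
from collections import defaultdict

K = 8  # k-mer word size

def kmers(seq):
    return [seq[i:i + K] for i in range(len(seq) - K + 1)]

def train(ref_seqs, ref_taxa):
    word_counts = defaultdict(lambda: defaultdict(int))
    seq_counts = defaultdict(int)
    for seq, tax in zip(ref_seqs, ref_taxa):
        seq_counts[tax] += 1
        for w in set(kmers(seq)):
            word_counts[tax][w] += 1
    return word_counts, seq_counts

def classify(seq, word_counts, seq_counts, n_boot=100, frac=0.125, seed=0):
    rng = np.random.default_rng(seed)
    words = list(set(kmers(seq)))

    def best_taxon(ws):
        # Smoothed log-likelihood of the word subset under each taxon.
        scores = {t: sum(np.log((word_counts[t][w] + 0.5) / (n + 1.0))
                         for w in ws)
                  for t, n in seq_counts.items()}
        return max(scores, key=scores.get)

    size = max(1, int(frac * len(words)))
    hits = [best_taxon(rng.choice(words, size=size)) for _ in range(n_boot)]
    call = max(set(hits), key=hits.count)
    return call, hits.count(call) / n_boot   # assignment + bootstrap support
```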

  9. Improvement of the Cramer classification for oral exposure using the database TTC RepDose - A strategy description

    EPA Science Inventory

    The present report describes a strategy to refine the current Cramer classification of the TTC concept using a broad database (DB) termed TTC RepDose. Cramer classes 1-3 overlap to some extent, indicating a need for a better separation of structural classes likely to be toxic, mo...

  10. Malingering in Toxic Exposure. Classification Accuracy of Reliable Digit Span and WAIS-III Digit Span Scaled Scores

    ERIC Educational Resources Information Center

    Greve, Kevin W.; Springer, Steven; Bianchini, Kevin J.; Black, F. William; Heinly, Matthew T.; Love, Jeffrey M.; Swift, Douglas A.; Ciota, Megan A.

    2007-01-01

    This study examined the sensitivity and false-positive error rate of reliable digit span (RDS) and the WAIS-III Digit Span (DS) scaled score in persons alleging toxic exposure and determined whether error rates differed from published rates in traumatic brain injury (TBI) and chronic pain (CP). Data were obtained from the files of 123 persons…

  11. Quasi-induced exposure: methodology and insight.

    PubMed

    Stamatiadis, N; Deacon, J A

    1997-01-01

    Even though the numerator in accident rates can be accurately determined nowadays, the denominator of these rates is an item of discussion and debate within the highway safety community. A critical examination of an induced exposure technique, based on the non-responsible driver/vehicle of a two-vehicle accident (quasi-induced exposure), is presented here. Differences in exposure for a series of accident location and time combinations are investigated, the assumption of similarities between drivers of single-vehicle accidents and the responsible driver of multiple-vehicle accidents is refuted, and the use of the non-responsible driver as a measure of exposure is tested using vehicle classification data. The results of the analyses reveal the following: (1) accident exposure is different for different location and time combinations; (2) induced exposure estimates provide an accurate reflection of exposure to multiple-vehicle accidents; (3) induced exposure estimates are acceptable surrogates for vehicle miles of travel when estimates are made for conditions during which the mix of road users is fairly constant; and (4) the propensity for involvement in single-vehicle accidents is generally different from that in multiple-vehicle accidents for a given class of road users. We concluded that quasi-induced exposure is a powerful technique for measuring relative exposure of drivers or vehicles when real exposure data are missing. PMID:9110039
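
    The quasi-induced exposure computation itself is brief: not-at-fault drivers in two-vehicle accidents proxy for exposure, and each group's at-fault share is compared with its not-at-fault share. A sketch with invented counts:

```python
# Quasi-induced exposure sketch: not-at-fault drivers in two-vehicle
# accidents estimate exposure; relative involvement compares each
# group's at-fault share to that estimate. Counts are invented.
crashes = {            # group: (at-fault count, not-at-fault count)
    "under_25": (300, 200),
    "25_to_64": (500, 650),
    "over_64":  (200, 150),
}

total_af = sum(af for af, _ in crashes.values())
total_naf = sum(naf for _, naf in crashes.values())

for group, (af, naf) in crashes.items():
    exposure_share = naf / total_naf                 # induced exposure
    involvement = (af / total_af) / exposure_share   # relative risk proxy
    print(f"{group}: exposure {exposure_share:.2f}, "
          f"relative involvement {involvement:.2f}")
```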

  12. Update on diabetes classification.

    PubMed

    Thomas, Celeste C; Philipson, Louis H

    2015-01-01

    This article highlights the difficulties in creating a definitive classification of diabetes mellitus in the absence of a complete understanding of the pathogenesis of the major forms. This brief review shows the evolving nature of the classification of diabetes mellitus. No classification scheme is ideal, and all have some overlap and inconsistencies. The only form of diabetes that can be accurately diagnosed by DNA sequencing, monogenic diabetes, remains undiagnosed in more than 90% of individuals whose diabetes is caused by one of the known gene mutations. The point of classification, or taxonomy, of disease should be to give insight into both pathogenesis and treatment. It remains a source of frustration that all schemes of diabetes mellitus continue to fall short of this goal.

  13. The Short Time Exposure (STE) test for predicting eye irritation potential: intra-laboratory reproducibility and correspondence to globally harmonized system (GHS) and EU eye irritation classification for 109 chemicals.

    PubMed

    Takahashi, Yutaka; Hayashi, Kazuhiko; Abo, Takayuki; Koike, Mirei; Sakaguchi, Hitoshi; Nishiyama, Naohiro

    2011-10-01

    The Short Time Exposure (STE) test is a simple in vitro eye irritation test that assesses cytotoxicity in SIRC cells (rabbit corneal cell line) following a 5 min dose treatment. To assess intra-laboratory reproducibility, medium control, three vehicles (saline, saline containing 5% (w/w) dimethyl sulfoxide, and mineral oil) and three standard chemicals (sodium lauryl sulfate, calcium thioglycolate, and Tween 80) were evaluated. Assessments were repeated 30 times for vehicles and 18 times for standard chemicals, resulting in consistent cell viability and low coefficients of variation. In addition, the STE eye irritation rankings of the three standard chemicals, as calculated from the cell viabilities in 5% and 0.05% solutions, were in agreement across all tests. Based on these results, high intra-laboratory reproducibility was confirmed. In addition, the irritation category (irritant and non-irritant) was evaluated for 109 chemicals with the STE test, globally harmonized system (GHS) classification, and European Union (EU) classification. The evaluation found the STE classification to have an accuracy of 87% against GHS classification and 83% against EU classification, confirming excellent correspondence. The correspondence of STE rankings (1, 2, and 3), based on the STE prediction model, with the eye irritation rankings by GHS (non-irritant, categories 2 and 1) and EU (non-irritant, R36, and R41) was 76% and 71%, respectively. Based on the above results, the STE test was considered a promising alternative method for assessing eye irritation that has high intra-laboratory reproducibility as well as excellent predictability of eye irritation.
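
    A hedged sketch of an STE-style ranking rule follows. The abstract states that rankings are calculated from cell viabilities in 5% and 0.05% solutions; the 70% viability cut-off used below is the commonly cited STE threshold and is an assumption here, not a figure from this abstract.

```python
# Hedged STE-style ranking rule; the 70% viability cut-off is the
# commonly cited STE threshold, assumed here rather than quoted.
def ste_rank(viability_5pct, viability_005pct, cutoff=70.0):
    if viability_5pct > cutoff:
        return 1    # non-irritant at both test concentrations
    if viability_005pct > cutoff:
        return 2    # cytotoxic only at the 5% concentration
    return 3        # cytotoxic even at the 0.05% concentration

print(ste_rank(85.0, 95.0))  # -> 1
print(ste_rank(40.0, 80.0))  # -> 2
print(ste_rank(20.0, 30.0))  # -> 3
```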

  14. Temporal context in floristic classification

    NASA Astrophysics Data System (ADS)

    Fitzgerald, R. W.; Lees, B. G.

    1996-11-01

    Multi-temporal remote sensing data present a number of significant problems for the statistical and spatial competence of a classifier. Ideally, a classifier of multi-temporal data should be temporally invariant. It must have the capacity to account for the variations in season, growth cycle, radiometric, and atmospheric conditions at any point in time when classifying the land cover. This paper tests two methods of creating a temporally invariant classifier based on the pattern recognition capabilities of a neural network. A suite of twelve multi-temporal datasets spread over 5 yr, along with a comprehensive mix of environmental variables, is fused into floristic classification images by the neural network. Uncertainties in the classifications are addressed explicitly with a confidence mask generated from the fuzzy membership values output by the neural network. These confidence masks are used to produce constrained classification images. The overall accuracy percentage achieved from a study site containing highly disturbed undulating terrain averages 60%. The first method of training, sequential learning of temporal context, is tested by an examination of the step-by-step evolution of the sequential training process. This reveals that the sequential classifier may not have learned about time, because time was constant during each network training session. It also suggests that there are optimal times during the annual cycle to train the classifier for particular floristic classes. The second method of training the classifier is randomised exposure to the entire temporal training suite. Time was now a fluctuating input variable during the network training process. This method produced the most spatially accurate results. The performance of this classifier as a temporally invariant classifier is tested amongst four multi-temporal datasets with encouraging results. The classifier consistently achieved an overall accuracy percentage of 60%. The pairwise predicted…
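
    The confidence-masking step described above can be sketched directly: keep a pixel's class only where the network's top fuzzy membership value clears a threshold. The array shapes and the 0.5 threshold below are illustrative assumptions.

```python
# Confidence-masking sketch: keep the argmax class only where the top
# fuzzy membership exceeds a threshold; otherwise mark unclassified.
import numpy as np

rng = np.random.default_rng(0)
memberships = rng.random((100, 100, 5))            # H x W x n_classes
memberships /= memberships.sum(axis=-1, keepdims=True)

labels = memberships.argmax(axis=-1)
confidence = memberships.max(axis=-1)
constrained = np.where(confidence > 0.5, labels, -1)  # -1 = unclassified
print("masked fraction:", (constrained == -1).mean())
```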

  15. Classification Options

    ERIC Educational Resources Information Center

    Exceptional Children, 1978

    1978-01-01

    The interview presents opinions of Nicholas Hobbs on the classification of exceptional children, including topics such as ecologically oriented classification systems, the role of parents, and need for revision of teacher preparation programs. (IM)

  16. Contextual classification of multispectral image data: Approximate algorithm

    NASA Technical Reports Server (NTRS)

    Tilton, J. C. (Principal Investigator)

    1980-01-01

    A computationally less intensive approximation to a classification algorithm that incorporates spatial context information in a general, statistical manner is presented. It produces classifications that are nearly as accurate.

  17. Hubble Classification

    NASA Astrophysics Data System (ADS)

    Murdin, P.

    2000-11-01

    A classification scheme for galaxies, devised in its original form in 1925 by Edwin P Hubble (1889-1953), and still widely used today. The Hubble classification recognizes four principal types of galaxy—elliptical, spiral, barred spiral and irregular—and arranges these in a sequence that is called the tuning-fork diagram....

  18. Accurate monotone cubic interpolation

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1991-01-01

    Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema where accuracy degenerates to second-order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants, which preserve monotonicity as well as uniform third and fourth-order accuracy are presented. The gain of accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
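
    For context, a sketch of the standard Fritsch-Carlson slope limiting that underlies monotone piecewise cubic interpolation is shown below; the paper's contribution is a higher-order relaxation of this constraint, which is not reproduced here.

```python
# Standard Fritsch-Carlson-style slope limiting for monotone cubic
# interpolation (illustrative sketch, not the paper's algorithm).
import numpy as np

def monotone_slopes(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    h = np.diff(x)
    delta = np.diff(y) / h                        # interval secants
    m = np.zeros(len(y))
    m[1:-1] = (delta[:-1] + delta[1:]) / 2.0      # initial interior slopes
    m[0], m[-1] = delta[0], delta[-1]
    for i in range(len(delta)):
        if delta[i] == 0.0:                       # flat data: flat interpolant
            m[i] = m[i + 1] = 0.0
            continue
        a, b = m[i] / delta[i], m[i + 1] / delta[i]
        a, b = max(a, 0.0), max(b, 0.0)           # forbid slope opposing secant
        r = np.hypot(a, b)
        if r > 3.0:                               # project onto radius-3 disc
            a, b = 3.0 * a / r, 3.0 * b / r
        m[i], m[i + 1] = a * delta[i], b * delta[i]
    return m                                      # node slopes for the cubics
```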

  19. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10⁶) periods of propagation with eight grid points per wavelength.

  20. Short Time Exposure (STE) test in conjunction with Bovine Corneal Opacity and Permeability (BCOP) assay including histopathology to evaluate correspondence with the Globally Harmonized System (GHS) eye irritation classification of textile dyes.

    PubMed

    Oliveira, Gisele Augusto Rodrigues; Ducas, Rafael do Nascimento; Teixeira, Gabriel Campos; Batista, Aline Carvalho; Oliveira, Danielle Palma; Valadares, Marize Campos

    2015-09-01

    Eye irritation evaluation is mandatory for predicting health risks in consumers exposed to textile dyes. The two dyes, Reactive Orange 16 (RO16) and Reactive Green 19 (RG19), are classified as Category 2A (irritating to eyes) based on the UN Globally Harmonized System for classification (UN GHS), according to the Draize test. On the other hand, animal welfare considerations and the enforcement of a new regulation in the EU are drawing much attention to reducing or replacing animal experiments with alternative methods. This study evaluated the eye irritation potential of the two dyes RO16 and RG19 by combining the Short Time Exposure (STE) and the Bovine Corneal Opacity and Permeability (BCOP) assays and then comparing the results with in vivo data from the GHS classification. The STE test (first level screening) categorized both dyes as GHS Category 1 (severe irritant). In the BCOP, dye RG19 was also classified as GHS Category 1, while for dye RO16 no GHS prediction could be made. Both dyes caused damage to the corneal tissue as confirmed by histopathological analysis. Our findings demonstrated that the STE test did not contribute to arriving at a better conclusion about the eye irritation potential of the dyes when used in conjunction with the BCOP test. Adding histopathology to the BCOP test could be an appropriate tool for a more meaningful prediction of the eye irritation potential of dyes.

  1. [The assessment of exposure to and the activity of the manual lifting of patients in wards: methods, procedures, the exposure index (MAPO) and classification criteria. Movimentazione e Assistenza Pazienti Ospedalizzati (Lifting and Assistance to Hospitalized Patients)].

    PubMed

    Menoni, O; Ricci, M G; Panciera, D; Occhipinti, E

    1999-01-01

    Since a method for quantifying exposure to patient handling in hospital wards is lacking, the authors describe and propose a model for identifying the main risk factors in this type of occupational exposure: presence of disabled patients, staff engaged in manual handling of patients, structure of the working environment, equipment and aids for moving patients, and training of workers according to the specific risk. For each factor a procedure for identification and assessment is proposed that is easily applicable in practice. The authors also propose a formula for the calculation of a condensed exposure index (MAPO Index), which brings together the various factors. The exposure index, which requires further, detailed study and validation, makes it possible, in practice, to plan the preventive and health measures according to a specific order of priority, thus complying with the requirements of Chapter V of Law 626/94. From a practical point of view, in the present state of knowledge, it can be stated that for MAPO Index values between 0 and 1.5, risk is deemed negligible, average for values between 1.51 and 5, and high for values exceeding 5.
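
    The abstract gives the MAPO risk bands but not the underlying formula, so only the published thresholds can be encoded; computing the index itself requires the factor weights from the paper. A minimal sketch:

```python
# MAPO risk bands as stated in the abstract; the index computation
# itself (factor weights) is not given there and is omitted.
def mapo_risk_band(mapo_index):
    if mapo_index <= 1.5:
        return "negligible"
    if mapo_index <= 5.0:
        return "average"
    return "high"

print(mapo_risk_band(1.2))  # negligible
print(mapo_risk_band(3.0))  # average
print(mapo_risk_band(6.7))  # high
```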

  2. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  3. Remote sensing (normalized difference vegetation index) classification of risk versus minimal risk habitats for human exposure to Ixodes pacificus (Acari: Ixodidae) nymphs in Mendocino County, California.

    PubMed

    Eisen, Rebecca J; Eisen, Lars; Lane, Robert S

    2005-01-01

    In California, Ixodes pacificus Cooley & Kohls nymphs have been implicated as the primary bridging vectors to humans of the spirochetal bacterium causing Lyme disease (Borrelia burgdorferi). Because the nymphs typically do not ascend emergent vegetation, risk of human exposure is minimal in grasslands, chaparral, and woodland-grass. Instead, woodlands with a ground cover dominated by leaf litter (hereinafter referred to as woodland-leaf) have emerged as a primary risk habitat for exposure to B. burgdorferi-infected nymphs. As a means of differentiating woodland-leaf habitats from others with minimal risk (e.g., chaparral, grassland, and woodland-grass), we constructed a maximum likelihood model of these habitat types within a 7,711-ha area in southeastern Mendocino County based on the normalized difference vegetation index derived from Landsat 5 Thematic Mapper imagery (based on a 30 by 30-m pixel size) over four seasons. The overall accuracy of the model to discriminate woodland-leaf, woodland-grass, open grassland, and chaparral was 83.85% (Kappa coefficient of 0.78). Validation of the accuracy of the model to classify woodland-leaf yielded high values both for producer accuracy (93.33% of validated woodland-leaf pixels correctly classified by the model) and user accuracy (96.55% of model-classified validation pixels correctly categorized as woodland-leaf). Woodland-leaf habitats were found to be highly aggregated within the examined area. In conclusion, our model successfully used remotely sensed data as a predictor of habitats where humans are at risk for Lyme disease in the far-western United States. PMID:15691012

  4. Remote sensing (normalized difference vegetation index) classification of risk versus minimal risk habitats for human exposure to Ixodes pacificus (Acari: Ixodidae) nymphs in Mendocino County, California.

    PubMed

    Eisen, Rebecca J; Eisen, Lars; Lane, Robert S

    2005-01-01

    In California, Ixodes pacificus Cooley & Kohls nymphs have been implicated as the primary bridging vectors to humans of the spirochetal bacterium causing Lyme disease (Borrelia burgdorferi). Because the nymphs typically do not ascend emergent vegetation, risk of human exposure is minimal in grasslands, chaparral, and woodland-grass. Instead, woodlands with a ground cover dominated by leaf litter (hereinafter referred to as woodland-leaf) have emerged as a primary risk habitat for exposure to B. burgdorferi-infected nymphs. As a means of differentiating woodland-leaf habitats from others with minimal risk (e.g., chaparral, grassland, and woodland-grass), we constructed a maximum likelihood model of these habitat types within a 7,711-ha area in southeastern Mendocino County based on the normalized difference vegetation index derived from Landsat 5 Thematic Mapper imagery (based on a 30 by 30-m pixel size) over four seasons. The overall accuracy of the model to discriminate woodland-leaf, woodland-grass, open grassland, and chaparral was 83.85% (Kappa coefficient of 0.78). Validation of the accuracy of the model to classify woodland-leaf yielded high values both for producer accuracy (93.33% of validated woodland-leaf pixels correctly classified by the model) and user accuracy (96.55% of model-classified validation pixels correctly categorized as woodland-leaf). Woodland-leaf habitats were found to be highly aggregated within the examined area. In conclusion, our model successfully used remotely sensed data as a predictor of habitats where humans are at risk for Lyme disease in the far-western United States.
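
    The accuracy measures quoted in the two records above can be computed from a confusion matrix (rows as reference classes, columns as model classes). The matrix values in this sketch are invented for illustration:

```python
# Overall accuracy, producer/user accuracy, and Cohen's kappa from a
# confusion matrix. Rows = reference classes, columns = model classes;
# the counts below are invented.
import numpy as np

cm = np.array([[90, 4, 3, 3],      # woodland-leaf
               [5, 80, 10, 5],     # woodland-grass
               [2, 8, 85, 5],      # grassland
               [3, 6, 6, 85]])     # chaparral

n = cm.sum()
overall = np.trace(cm) / n
producer = np.diag(cm) / cm.sum(axis=1)   # correct / reference totals
user = np.diag(cm) / cm.sum(axis=0)       # correct / classified totals
p_e = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n**2  # chance agreement
kappa = (overall - p_e) / (1 - p_e)

print(f"overall {overall:.4f}, kappa {kappa:.2f}")
print("producer accuracy:", np.round(producer, 3))
print("user accuracy:", np.round(user, 3))
```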

  5. Classification of spatially unresolved objects

    NASA Technical Reports Server (NTRS)

    Nalepka, R. F.; Horwitz, H. M.; Hyde, P. D.; Morgenstern, J. P.

    1972-01-01

    A proportion estimation technique for classifying multispectral scanner images is reported that uses data point averaging to compute estimated class proportions for a single average data point, allowing spatially unresolved areas to be classified. Example extraction calculations of spectral signatures for bare soil, weeds, alfalfa, and barley prove quite accurate.
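
    The idea lends itself to a short sketch: an unresolved (mixed) average data point is modelled as a linear mixture of known class signatures, and proportions are recovered by constrained least squares. Signature values here are invented; this is a generic unmixing illustration, not the report's exact algorithm.

```python
# Proportion estimation sketch: a mixed pixel as a linear mixture of
# class signatures, solved by nonnegative least squares.
import numpy as np
from scipy.optimize import nnls

signatures = np.array([[0.10, 0.40, 0.30],   # band 1: soil, weeds, barley
                       [0.20, 0.60, 0.35],   # band 2
                       [0.50, 0.20, 0.45]])  # band 3 (invented values)

true_p = np.array([0.5, 0.2, 0.3])
mixed_pixel = signatures @ true_p            # the averaged data point

p, _ = nnls(signatures, mixed_pixel)         # nonnegative proportions
p /= p.sum()                                 # normalise to sum to one
print(np.round(p, 3))                        # ~ [0.5, 0.2, 0.3]
```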

  6. The Influence of Second-Hand Cigarette Smoke Exposure during Childhood and Active Cigarette Smoking on Crohn’s Disease Phenotype Defined by the Montreal Classification Scheme in a Western Cape Population, South Africa

    PubMed Central

    Chivese, Tawanda; Esterhuizen, Tonya M.; Basson, Abigail Raffner

    2015-01-01

    Background Smoking may worsen the disease outcomes in patients with Crohn’s disease (CD), however the effect of exposure to second-hand cigarette smoke during childhood is unclear. In South Africa, no such literature exists. The aim of this study was to investigate whether disease phenotype, at time of diagnosis of CD, was associated with exposure to second-hand cigarette smoke during childhood and active cigarette smoking habits. Methods A cross-sectional examination of all consecutive CD patients seen during the period September 2011-January 2013 at 2 large inflammatory bowel disease centers in the Western Cape, South Africa was performed. Data were collected via review of patient case notes, interviewer-administered questionnaire and clinical examination by the attending gastroenterologist. Disease phenotype (behavior and location) was evaluated at time of diagnosis, according to the Montreal Classification scheme. In addition, disease behavior was stratified as ‘complicated’ or ‘uncomplicated’, using predefined definitions. Passive cigarette smoke exposure was evaluated during 3 age intervals: 0–5, 6–10, and 11–18 years. Results One hundred and ninety four CD patients were identified. Cigarette smoking during the 6 months prior to, or at time of diagnosis was significantly associated with ileo-colonic (L3) disease (RRR = 3.63; 95%CI, 1.32–9.98, p = 0.012) and ileal (L1) disease (RRR = 3.54; 95%CI, 1.06–11.83, p = 0.040) compared with colonic disease. In smokers, childhood passive cigarette smoke exposure during the 0–5 years age interval was significantly associated with ileo-colonic CD location (RRR = 21.3; 95%CI, 1.16–391.55, p = 0.040). No significant association between smoking habits and disease behavior at diagnosis, whether defined by the Montreal scheme, or stratified as ‘complicated’ vs ‘uncomplicated’, was observed. Conclusion Smoking habits were associated with ileo-colonic (L3) and ileal (L1) disease at time of diagnosis in…

  7. Accurate Optical Reference Catalogs

    NASA Astrophysics Data System (ADS)

    Zacharias, N.

    2006-08-01

    Current and near future all-sky astrometric catalogs on the ICRF are reviewed with the emphasis on reference star data at optical wavelengths for user applications. The standard error of a Hipparcos Catalogue star position is now about 15 mas per coordinate. For the Tycho-2 data it is typically 20 to 100 mas, depending on magnitude. The USNO CCD Astrograph Catalog (UCAC) observing program was completed in 2004 and reductions toward the final UCAC3 release are in progress. This all-sky reference catalogue will have positional errors of 15 to 70 mas for stars in the 10 to 16 mag range, with a high degree of completeness. Proper motions for the about 60 million UCAC stars will be derived by combining UCAC astrometry with available early epoch data, including yet unpublished scans of the complete set of AGK2, Hamburg Zone astrograph and USNO Black Birch programs. Accurate positional and proper motion data are combined in the Naval Observatory Merged Astrometric Dataset (NOMAD) which includes Hipparcos, Tycho-2, UCAC2, USNO-B1, NPM+SPM plate scan data for astrometry, and is supplemented by multi-band optical photometry as well as 2MASS near infrared photometry. The Milli-Arcsecond Pathfinder Survey (MAPS) mission is currently being planned at USNO. This is a micro-satellite to obtain 1 mas positions, parallaxes, and 1 mas/yr proper motions for all bright stars down to about 15th magnitude. This program will be supplemented by a ground-based program to reach 18th magnitude on the 5 mas level.

  8. Learning classification trees

    NASA Technical Reports Server (NTRS)

    Buntine, Wray

    1991-01-01

    Algorithms for learning classification trees have had successes in artificial intelligence and statistics over many years. How a tree learning algorithm can be derived from Bayesian decision theory is outlined. This introduces Bayesian techniques for splitting, smoothing, and tree averaging. The splitting rule turns out to be similar to Quinlan's information gain splitting rule, while smoothing and averaging replace pruning. Comparative experiments with reimplementations of a minimum encoding approach, Quinlan's C4, and Breiman et al.'s CART show that the full Bayesian algorithm is consistently as good as, or more accurate than, these other approaches, though at a computational price.

  9. Subject Classification.

    ERIC Educational Resources Information Center

    Thompson, Gayle; And Others

    Three newspaper librarians described how they manage the files of newspaper clippings which are a necessary part of their collections. The development of a new subject classification system for the clippings files was outlined. The new subject headings were based on standard subject heading lists and on local need. It was decided to use a computer…

  10. Classifying Classification

    ERIC Educational Resources Information Center

    Novakowski, Janice

    2009-01-01

    This article describes the experience of a group of first-grade teachers as they tackled the science process of classification, a targeted learning objective for the first grade. While the two-year process was not easy and required teachers to teach in a new, more investigation-oriented way, the benefits were great. The project helped teachers and…

  11. Neuromuscular disease classification system

    NASA Astrophysics Data System (ADS)

    Sáez, Aurora; Acha, Begoña; Montero-Sánchez, Adoración; Rivas, Eloy; Escudero, Luis M.; Serrano, Carmen

    2013-06-01

    Diagnosis of neuromuscular diseases is based on subjective visual assessment of biopsies from patients by a specialist pathologist. A system for objective analysis and classification of muscular dystrophies and neurogenic atrophies through muscle biopsy images of fluorescence microscopy is presented. The procedure starts with an accurate segmentation of the muscle fibers using mathematical morphology and a watershed transform. A feature extraction step is carried out in two parts: 24 features that pathologists take into account to diagnose the diseases, and 58 structural features that the human eye cannot see, derived by modelling the biopsy as a graph, where the nodes are represented by each fiber and two nodes are connected if two fibers are adjacent. A feature selection using sequential forward selection and sequential backward selection methods, a classification using a Fuzzy ARTMAP neural network, and a study of severity grading are performed on these two sets of features. A database consisting of 91 images was used: 71 images for training and 20 for testing. A classification error of 0% was obtained. It is concluded that the addition of features undetectable by human visual inspection improves the categorization of atrophic patterns.

  12. Neuromuscular disease classification system.

    PubMed

    Sáez, Aurora; Acha, Begoña; Montero-Sánchez, Adoración; Rivas, Eloy; Escudero, Luis M; Serrano, Carmen

    2013-06-01

    Diagnosis of neuromuscular diseases is based on subjective visual assessment of biopsies from patients by a specialist pathologist. A system for objective analysis and classification of muscular dystrophies and neurogenic atrophies through muscle biopsy images of fluorescence microscopy is presented. The procedure starts with an accurate segmentation of the muscle fibers using mathematical morphology and a watershed transform. A feature extraction step is carried out in two parts: 24 features that pathologists take into account to diagnose the diseases, and 58 structural features that the human eye cannot see, derived by modelling the biopsy as a graph, where the nodes are represented by each fiber and two nodes are connected if two fibers are adjacent. A feature selection using sequential forward selection and sequential backward selection methods, a classification using a Fuzzy ARTMAP neural network, and a study of severity grading are performed on these two sets of features. A database consisting of 91 images was used: 71 images for training and 20 for testing. A classification error of 0% was obtained. It is concluded that the addition of features undetectable by human visual inspection improves the categorization of atrophic patterns. PMID:23804164

  13. Multisensor classification of sedimentary rocks

    NASA Technical Reports Server (NTRS)

    Evans, Diane

    1988-01-01

    A comparison is made between linear discriminant analysis and supervised classification results based on signatures from the Landsat TM, the Thermal Infrared Multispectral Scanner (TIMS), and airborne SAR, alone and combined into extended spectral signatures for seven sedimentary rock units exposed on the margin of the Wind River Basin, Wyoming. Results from a linear discriminant analysis showed that training-area classification accuracies based on the multisensor data were improved an average of 15 percent over TM alone, 24 percent over TIMS alone, and 46 percent over SAR alone, with similar improvement resulting when supervised multisensor classification maps were compared to supervised, individual sensor classification maps. When training area signatures were used to map spectrally similar materials in an adjacent area, the average classification accuracy improved 19 percent using the multisensor data over TM alone, 2 percent over TIMS alone, and 11 percent over SAR alone. It is concluded that certain sedimentary lithologies may be accurately mapped using a single sensor, but classification of a variety of rock types can be improved using multisensor data sets that are sensitive to different characteristics such as mineralogy and surface roughness.

  14. Familial filicide and filicide classification.

    PubMed

    Guileyardo, J M; Prahlow, J A; Barnard, J J

    1999-09-01

    Filicide is the killing of a child by his or her parent. Despite the disturbing nature of these crimes, a study of filicide classification can provide insight into their causes. Furthermore, a study of filicide classification provides information essential to accurate death certification. We report a rare case of familial filicide in which twin sisters both attempted to kill their respective children. We then suggest a detailed classification of filicide subtypes that provides a framework of motives and precipitating factors leading to filicide. We identify 16 subtypes of filicide, each of which is sufficiently characteristic to warrant a separate category. We describe in some detail the characteristic features of these subtypes. A knowledge of filicide subtypes contributes to interpretation of difficult cases. Furthermore, to protect potential child homicide victims, it is necessary to know how and why they are killed. Epidemiologic studies using filicide subtypes as their basis could provide information leading to strategies for prevention. PMID:10507800

  15. Historical limitations of determinant based exposure groupings in the rubber manufacturing industry

    PubMed Central

    Vermeulen, R; Kromhout, H

    2005-01-01

    Aims: To study the validity of using a cross-sectional industry-wide exposure survey to develop exposure groupings for epidemiological purposes that extend beyond the time period in which the exposure data were collected. Methods: Exposure determinants were used to group workers into high, medium, and low exposure groups. The contrast of this grouping and other commonly used grouping schemes based on plant and department within this exposure survey and a previously conducted survey within the same industry (and factories) were estimated and compared. Results: Grouping of inhalable and dermal exposure based on exposure determinants resulted in the highest, but still modest, contrast (ε ∼ 0.3). Classifying subjects based on a combination of plant and department resulted in a slightly lower contrast (ε ∼ 0.2). If the determinant-based grouping derived from the 1997 exposure survey was used to classify workers in the 1988 survey, the average contrast decreased significantly for both exposures (ε ∼ 0.1). On the contrary, the exposure classification based on plant and department increased in contrast (from ε ∼ 0.2 to ε ∼ 0.3) and retained its relative ranking over time. Conclusions: Although determinant-based groupings seem to result in more efficient groupings within a cross-sectional survey, they have to be used with caution as they might result in significantly less contrast beyond the studied population or time period. It is concluded that a classification based on plant and department might be more desirable for retrospective studies in the rubber manufacturing industry, as it seems to have more historical relevance and is most likely more accurately recorded historically than information on exposure determinants in a particular industry. PMID:16234406
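
    One simple formulation of the contrast ε used above is the share of total (log-)exposure variance lying between groups rather than within them; the original studies estimate this from variance components of mixed-effects models, so the sketch below is an approximation with invented data.

```python
# Approximate grouping contrast: between-group share of total variance
# of log exposure. Invented data; the cited studies use variance
# components from mixed-effects models instead.
import numpy as np

def contrast(log_exposure, groups):
    x = np.asarray(log_exposure, float)
    groups = np.asarray(groups)
    overall = x.mean()
    between = sum(
        (groups == g).sum() * (x[groups == g].mean() - overall) ** 2
        for g in np.unique(groups)) / len(x)
    return between / x.var()

rng = np.random.default_rng(0)
groups = np.array(["low"] * 50 + ["high"] * 50)
x = np.concatenate([rng.normal(0.0, 1.0, 50), rng.normal(1.0, 1.0, 50)])
print(f"epsilon ~ {contrast(x, groups):.2f}")
```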

  16. NNLOPS accurate associated HW production

    NASA Astrophysics Data System (ADS)

    Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia

    2016-06-01

    We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross section Working Group.

  17. Remote Sensing Information Classification

    NASA Technical Reports Server (NTRS)

    Rickman, Douglas L.

    2008-01-01

    This viewgraph presentation reviews the classification of Remote Sensing data in relation to epidemiology. Classification is a way to reduce the dimensionality and precision to something a human can understand. Classification changes SCALAR data into NOMINAL data.

  18. Classification and knowledge

    NASA Technical Reports Server (NTRS)

    Kurtz, Michael J.

    1989-01-01

    Automated procedures to classify objects are discussed. The classification problem is reviewed, and the relation of epistemology and classification is considered. The classification of stellar spectra and of resolved images of galaxies is addressed.

  19. Accurate shear measurement with faint sources

    SciTech Connect

    Zhang, Jun; Foucaud, Sebastien; Luo, Wentao E-mail: walt@shao.ac.cn

    2015-01-01

    For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work of this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finiteness of source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys.

  20. Evaluation of rock mass classification schemes: a case study from the Bowen Basin, Australia

    NASA Astrophysics Data System (ADS)

    Brook, Martin; Hebblewhite, Bruce; Mitra, Rudrajit

    2016-04-01

    The development of an accurate engineering geological model and adequate knowledge of spatial variation in rock mass conditions are important prerequisites for slope stability analyses, tunnel design, mine planning and risk management. Rock mass classification schemes such as Rock Mass Rating (RMR), Coal Mine Roof Rating (CMRR), Q-system and Roof Strength Index (RSI) have been used for a range of engineering geological applications, including transport tunnels, "hard rock" mining and underground and open-cut coal mines. Often, rock mass classification schemes have been evaluated on subaerial exposures, where weathering has affected joint characteristics and intact strength. In contrast, the focus of this evaluation of the above classification schemes is an underground coal mine in the Bowen Basin, central Queensland, Australia, 15 km east of the town of Moranbah. Rock mass classification was undertaken at 68 sites across the mine. Both the target coal seam and overlying rock show marked spatial variability in terms of RMR, CMRR and Q, but RSI showed limited sensitivity to changes in rock mass condition. Relationships were developed between different parameters with varying degrees of success. A mine-wide analysis of faulting was undertaken, and compared with in situ stress field and local-scale measurements of joint and cleat. While there are no unequivocal relationships between rock mass classification parameters and faulting, a central graben zone shows heterogeneous rock mass properties. The corollary is that if geological features can be accurately defined by remote sensing technologies, then this can assist in predicting rock mass conditions and risk management ahead of development and construction.

  1. Learning accurate very fast decision trees from uncertain data streams

    NASA Astrophysics Data System (ADS)

    Liang, Chunquan; Zhang, Yang; Shi, Peng; Hu, Zhengguo

    2015-12-01

    Most existing works on data stream classification assume that the streaming data are precise and definite. Such an assumption, however, does not always hold in practice, since data uncertainty is ubiquitous in data stream applications due to imprecise measurement, missing values, privacy protection, etc. The goal of this paper is to learn accurate decision tree models from uncertain data streams for classification analysis. On the basis of very fast decision tree (VFDT) algorithms, we propose an algorithm for constructing an uncertain VFDT tree with classifiers at tree leaves (uVFDTc). The uVFDTc algorithm can exploit uncertain information effectively and efficiently in both the learning and the classification phases. In the learning phase, it uses Hoeffding bound theory to learn from uncertain data streams and yields fast and reasonable decision trees. In the classification phase, at tree leaves it uses uncertain naive Bayes (UNB) classifiers to improve the classification performance. Experimental results on both synthetic and real-life datasets demonstrate the strong ability of uVFDTc to classify uncertain data streams. The use of UNB at tree leaves has improved the performance of uVFDTc, especially the any-time property, the benefit of exploiting uncertain information, and the robustness against uncertainty.
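
    The Hoeffding bound at the heart of VFDT-style learners is easy to state concretely. The sketch below shows the standard VFDT split rule (not the uVFDTc specifics): compute the bound and grow a leaf into a decision node only when the observed gain gap between the two best attributes exceeds epsilon.

```python
import math

def hoeffding_bound(value_range, delta, n):
    """With probability 1 - delta, the observed mean of n i.i.d. samples
    of a variable with range `value_range` is within epsilon of the true
    mean: epsilon = sqrt(R^2 * ln(1/delta) / (2n))."""
    return math.sqrt(value_range**2 * math.log(1.0 / delta) / (2.0 * n))

# Split test at a leaf after n = 500 examples (2-class information gain,
# so the range R of the metric is log2(2) = 1).
eps = hoeffding_bound(value_range=1.0, delta=1e-6, n=500)
best_gain, second_best_gain = 0.42, 0.31
if best_gain - second_best_gain > eps:
    print(f"split: gap {best_gain - second_best_gain:.3f} exceeds eps {eps:.3f}")
```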

  2. Hyperspectral image classification for mapping agricultural tillage practices

    Technology Transfer Automated Retrieval System (TEKTRAN)

    An efficient classification framework for mapping agricultural tillage practice using hyperspectral remote sensing imagery is proposed, which has the potential to be implemented practically to provide rapid, accurate, and objective surveying data for precision agricultural management and appraisal f...

  3. Supervised Machine Learning for Classification of the Electrophysiological Effects of Chronotropic Drugs on Human Induced Pluripotent Stem Cell-Derived Cardiomyocytes

    PubMed Central

    Heylman, Christopher; Datta, Rupsa; Sobrino, Agua

    2015-01-01

    Supervised machine learning can be used to predict which drugs human cardiomyocytes have been exposed to. Using electrophysiological data collected from human cardiomyocytes with known exposure to different drugs, a supervised machine learning algorithm can be trained to recognize and classify cells that have been exposed to an unknown drug. Furthermore, the learning algorithm provides information on the relative contribution of each data parameter to the overall classification. Probabilities and confidence in the accuracy of each classification may also be determined by the algorithm. In this study, the electrophysiological effects of β–adrenergic drugs, propranolol and isoproterenol, on cardiomyocytes derived from human induced pluripotent stem cells (hiPS-CM) were assessed. The electrophysiological data were collected using high temporal resolution 2-photon microscopy of voltage sensitive dyes as a reporter of membrane voltage. The results demonstrate the ability of our algorithm to accurately assess, classify, and predict hiPS-CM membrane depolarization following exposure to chronotropic drugs. PMID:26695765
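
    The abstract does not name the specific algorithm, so the sketch below uses a random forest as a stand-in to illustrate the two capabilities described: per-class probabilities (predict_proba) and the relative contribution of each input parameter (feature_importances_). The data are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical feature matrix: one row per cell, columns are summary
# electrophysiological parameters (e.g., beat rate, depolarization slope).
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 6))
y = rng.integers(0, 3, size=300)  # 0=control, 1=propranolol, 2=isoproterenol

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)     # confidence of each classification
ranking = clf.feature_importances_  # relative contribution of each parameter
```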

  4. Profitable capitation requires accurate costing.

    PubMed

    West, D A; Hicks, L L; Balas, E A; West, T D

    1996-01-01

    In the name of costing accuracy, nurses are asked to track inventory use on a per-treatment basis, while more significant costs, such as general overhead and nursing salaries, are usually allocated to patients or treatments on an average-cost basis. Accurate treatment costing and financial viability require analysis of all resources actually consumed in treatment delivery, including nursing services and inventory. More precise costing information enables more profitable decisions, as demonstrated by comparing the ratio-of-cost-to-treatment method (aggregate costing) with alternative activity-based costing (ABC) methods. Nurses must participate in this costing process to ensure that capitation bids are based upon accurate costs rather than simple averages. PMID:8788799
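
    A toy comparison makes the aggregate-versus-ABC point concrete. The figures below are purely illustrative: averaging total cost over all treatments prices every treatment identically, while tracing actual nursing time, inventory and overhead reveals that simple and complex treatments differ widely.

```python
# Aggregate (ratio-of-cost-to-treatment) vs. activity-based costing (ABC).
total_cost = 120_000.0           # overhead + nursing salaries + inventory
n_treatments = 1_000
aggregate_cost = total_cost / n_treatments   # $120 for every treatment

nursing_rate = 60.0              # $/hour of nursing time
treatments = {
    "simple dressing": {"nurse_hours": 0.5, "inventory": 12.0, "overhead": 30.0},
    "complex wound":   {"nurse_hours": 2.0, "inventory": 55.0, "overhead": 30.0},
}
for name, t in treatments.items():
    abc = t["nurse_hours"] * nursing_rate + t["inventory"] + t["overhead"]
    print(f"{name}: aggregate ${aggregate_cost:.0f} vs ABC ${abc:.0f}")
```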

  5. Accurate photometric redshift probability density estimation - method comparison and application

    NASA Astrophysics Data System (ADS)

    Rau, Markus Michael; Seitz, Stella; Brimioulle, Fabrice; Frank, Eibe; Friedrich, Oliver; Gruen, Daniel; Hoyle, Ben

    2015-10-01

    We introduce an ordinal classification algorithm for photometric redshift estimation, which significantly improves the reconstruction of photometric redshift probability density functions (PDFs) for individual galaxies and galaxy samples. As a use case we apply our method to CFHTLS galaxies. The ordinal classification algorithm treats distinct redshift bins as ordered values, which improves the quality of photometric redshift PDFs, compared with non-ordinal classification architectures. We also propose a new single value point estimate of the galaxy redshift, which can be used to estimate the full redshift PDF of a galaxy sample. This method is competitive in terms of accuracy with contemporary algorithms, which stack the full redshift PDFs of all galaxies in the sample, but requires orders of magnitude less storage space. The methods described in this paper greatly improve the log-likelihood of individual object redshift PDFs, when compared with a popular neural network code (ANNZ). In our use case, this improvement reaches 50 per cent for high-redshift objects (z ≥ 0.75). We show that using these more accurate photometric redshift PDFs will lead to a reduction in the systematic biases by up to a factor of 4, when compared with less accurate PDFs obtained from commonly used methods. The cosmological analyses we examine and find improvement upon are the following: gravitational lensing cluster mass estimates, modelling of angular correlation functions and modelling of cosmic shear correlation functions.
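
    One standard way to realize ordinal classification over redshift bins is the cumulative-binary construction in the spirit of Frank and Hall: train one binary classifier per threshold "z above bin k" and difference the cumulative probabilities into a per-object PDF. This is a sketch only; the paper's exact architecture may differ, and all data below are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_ordinal(X, bin_idx, n_bins):
    """One binary model per threshold: does the object lie above bin k?"""
    return [LogisticRegression(max_iter=1000).fit(X, (bin_idx > k).astype(int))
            for k in range(n_bins - 1)]

def predict_pdf(models, X, n_bins):
    """Turn cumulative probabilities P(z > bin k) into a PDF over bins:
    P(bin k) = P(z > bin k-1) - P(z > bin k)."""
    gt = np.column_stack([m.predict_proba(X)[:, 1] for m in models])
    cum = np.hstack([np.ones((len(X), 1)), gt, np.zeros((len(X), 1))])
    return np.clip(cum[:, :-1] - cum[:, 1:], 0.0, None)

# Toy usage: random "colours" and 5 redshift bins.
rng = np.random.default_rng(2)
X, z_bin = rng.normal(size=(400, 4)), rng.integers(0, 5, size=400)
pdfs = predict_pdf(fit_ordinal(X, z_bin, 5), X, 5)
```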

  6. CHILDREN'S DIETARY EXPOSURES TO CHEMICAL CONTAMINANTS

    EPA Science Inventory

    The Food Quality Protection Act of 1996 requires EPA to more accurately assess children's aggregate exposures to environmental contaminants. Children have unstructured eating behaviors which cause excess exposures as a result of their activities. Determining total dietary intak...

  7. Contribution of various microenvironments to the daily personal exposure to ultrafine particles: Personal monitoring coupled with GPS tracking

    NASA Astrophysics Data System (ADS)

    Bekö, Gabriel; Kjeldsen, Birthe Uldahl; Olsen, Yulia; Schipperijn, Jasper; Wierzbicka, Aneta; Karottki, Dorina Gabriela; Toftum, Jørn; Loft, Steffen; Clausen, Geo

    2015-06-01

    Exposure to ultrafine particles (UFP) may have adverse health effects. Central monitoring stations do not represent the personal exposure to UFP accurately. Few studies have previously focused on personal exposure to UFP. Sixty non-smoking residents living in Copenhagen, Denmark were asked to carry a backpack equipped with a portable monitor, continuously recording particle number concentrations (PN), in order to measure the real-time individual exposure over a period of ~48 h. A GPS logger was carried along with the particle monitor and allowed us to estimate the contribution of UFP exposure occurring in various microenvironments (residence, during active and passive transport, other indoor and outdoor environments) to the total daily exposure. On average, the fractional contribution of each microenvironment to the daily integrated personal exposure roughly corresponded to the fractions of the day the subjects spent in each microenvironment. The home environment accounted for 50% of the daily personal exposure. Indoor environments other than home or vehicles contributed ~40%. The highest median UFP concentration was obtained during passive transport (vehicles). However, being in transit or outdoors contributed 5% or less to the daily exposure. Additionally, the subjects recorded in a diary the periods when they were at home. With this approach, 66% of the total daily exposure was attributable to the home environment. The subjects spent 28% more time at home according to the diary, compared to the GPS. These results may indicate limitations of using diaries, but also possible inaccuracy and misclassification in the GPS data.
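
    The integrated-exposure bookkeeping described here reduces to summing concentration over time within each GPS-labelled microenvironment. A minimal sketch with made-up numbers:

```python
import pandas as pd

# One row per logged minute: GPS-derived microenvironment label and the
# particle number concentration (PN, particles/cm^3) from the monitor.
log = pd.DataFrame({
    "microenv": ["home"]*10 + ["vehicle"]*2 + ["other_indoor"]*7 + ["outdoor"]*1,
    "pn":       [8e3]*10   + [35e3]*2      + [12e3]*7           + [6e3]*1,
})

# Integrated exposure per microenvironment (concentration x time) and its
# fractional contribution to the daily total.
exposure = log.groupby("microenv")["pn"].sum()
print((exposure / exposure.sum()).round(2))
```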

  8. Nanotechnology and Exposure Science

    PubMed Central

    LIOY, PAUL J.; NAZARENKO, YEVGEN; HAN, TAE WON; LIOY, MARY JEAN; MAINELIS, GEDIMINAS

    2014-01-01

    This article discusses the gaps in our understanding of human exposures to nanoparticles stemming from the use of nanotechnology-based consumer products by the general public. It also describes a series of steps that could be taken to characterize such exposures. The suggested steps include classification of the nanotechnology-based products, simulation of realistic exposure patterns, characterization of emissions, analysis of the duration of activities resulting in exposures, and consideration of the bioaccessibility of nanoparticles. In addition, we present a preliminary study with nanotechnology-based cosmetic powders where particle release was studied under realistic powder application conditions. The data demonstrated that when nanotechnology-based cosmetic powders were used, there was a potential for inhaling airborne particles ranging in size from tens of nanometers to tens of micrometers. PMID:21222382

  9. Classification Shell Game.

    ERIC Educational Resources Information Center

    Etzold, Carol

    1983-01-01

    Discusses shell classification exercises. Through keying, students advance from the "I know what a shell looks like" stage to becoming involved in the classification process: observing, labeling, making decisions about categories, and identifying marine animals. (Author/JN)

  10. Accurate documentation and wound measurement.

    PubMed

    Hampton, Sylvie

    This article, part 4 in a series on wound management, addresses the sometimes routine yet crucial task of documentation. Clear and accurate records of a wound enable its progress to be determined so the appropriate treatment can be applied. Thorough records mean any practitioner picking up a patient's notes will know when the wound was last checked, how it looked and what dressing and/or treatment was applied, ensuring continuity of care. Documenting every assessment also has legal implications, demonstrating due consideration and care of the patient and the rationale for any treatment carried out. Part 5 in the series discusses wound dressing characteristics and selection.

  11. Spectral-spatial hyperspectral classification based on multi-center SAM and MRF

    NASA Astrophysics Data System (ADS)

    Tang, Bo; Liu, Zhi; Xiao, Xiaoyan; Nie, Mingyu; Chang, Jun; Jiang, Wei; Li, Xiaomei; Zheng, Chengyun

    2015-12-01

    In this paper, a novel framework for accurate spectral-spatial classification of hyperspectral images is proposed to address nonlinear classification problems. The algorithm is based on the spectral angle mapper (SAM) and introduces a multi-center model and Markov random fields (MRF) into a probabilistic decision framework to obtain an accurate classification. Experimental comparisons between several traditional classification methods and the proposed algorithm demonstrate that MSAM-MRF outperforms the traditional classification algorithms.
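
    The spectral angle at the core of SAM is simply the angle between a pixel spectrum and a reference spectrum. Below is a minimal sketch with a multi-center twist (each class keeps several reference spectra and the smallest angle wins); the MRF spatial-smoothing step is omitted, and all names and data are illustrative:

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Spectral angle (radians) between two spectra; smaller = better match."""
    cos = pixel @ reference / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def multi_center_sam(pixel, centers):
    """Assign the class whose closest center spectrum subtends the smallest
    angle with the pixel; `centers` maps class name -> list of spectra."""
    return min(centers,
               key=lambda c: min(spectral_angle(pixel, r) for r in centers[c]))

# Toy usage with two classes, two centers each, 50 spectral bands.
rng = np.random.default_rng(3)
centers = {"vegetation": [rng.random(50), rng.random(50)],
           "soil":       [rng.random(50), rng.random(50)]}
print(multi_center_sam(rng.random(50), centers))
```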

  12. Aircraft Operations Classification System

    NASA Technical Reports Server (NTRS)

    Harlow, Charles; Zhu, Weihong

    2001-01-01

    Accurate data is important in the aviation planning process. In this project we consider systems for measuring aircraft activity at airports. This would include determining the type of aircraft such as jet, helicopter, single engine, and multiengine propeller. Some of the issues involved in deploying technologies for monitoring aircraft operations are cost, reliability, and accuracy. In addition, the system must be field portable and acceptable at airports. A comparison of technologies was conducted and it was decided that an aircraft monitoring system should be based upon acoustic technology. A multimedia relational database was established for the study. The information contained in the database consists of airport information, runway information, acoustic records, photographic records, a description of the event (takeoff, landing), aircraft type, and environmental information. We extracted features from the time signal and the frequency content of the signal. A multi-layer feed-forward neural network was chosen as the classifier. Training and testing results were obtained. We were able to obtain classification results of over 90 percent for training and testing for takeoff events.

  13. Cirrhosis Classification Based on Texture Classification of Random Features

    PubMed Central

    Shao, Ying; Guo, Dongmei; Zheng, Yuanjie; Zhao, Zuowei; Qiu, Tianshuang

    2014-01-01

    Accurate staging of hepatic cirrhosis is important in investigating the cause and slowing down the effects of cirrhosis. Computer-aided diagnosis (CAD) can provide doctors with an alternative second opinion and assist them in choosing a specific treatment based on an accurate cirrhosis stage. MRI has many advantages, including high resolution for soft tissue, no radiation, and multiparameter imaging modalities. In this paper, multisequence MRIs, including T1-weighted, T2-weighted, arterial, portal venous, and equilibrium phase, are therefore applied. However, CAD does not yet meet the clinical needs of cirrhosis assessment, and few researchers are concerned with it at present. Cirrhosis is characterized by the presence of widespread fibrosis and regenerative nodules in the liver, leading to different texture patterns at different stages, so extracting texture features is the primary task. Compared with typical gray-level co-occurrence matrix (GLCM) features, texture classification from random features provides an effective alternative, and we adopt it and propose CCTCRF for triple classification (normal, early, and middle-and-advanced stage). CCTCRF does not need strong assumptions except the sparse character of the image, contains sufficient texture information, involves a concise and effective process, and makes case decisions with high accuracy. Experimental results also illustrate its satisfying performance, which is compared with a typical neural network using GLCM features. PMID:24707317

  14. SPLASH: Accurate OH maser positions

    NASA Astrophysics Data System (ADS)

    Walsh, Andrew; Gomez, Jose F.; Jones, Paul; Cunningham, Maria; Green, James; Dawson, Joanne; Ellingsen, Simon; Breen, Shari; Imai, Hiroshi; Lowe, Vicki; Jones, Courtney

    2013-10-01

    The hydroxyl (OH) 18 cm lines are powerful and versatile probes of diffuse molecular gas that may trace a largely unstudied component of the Galactic ISM. SPLASH (the Southern Parkes Large Area Survey in Hydroxyl) is a large, unbiased and fully-sampled survey of OH emission, absorption and masers in the Galactic Plane that will achieve sensitivities an order of magnitude better than previous work. In this proposal, we request ATCA time to follow up OH maser candidates. This will give us accurate (~10") positions of the masers, which can be compared to other maser positions from HOPS, MMB and MALT-45, and will provide full polarisation measurements towards a sample of OH masers that have not been observed in MAGMO.

  15. Accurate thickness measurement of graphene

    NASA Astrophysics Data System (ADS)

    Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.

    2016-03-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  16. Accurate thickness measurement of graphene.

    PubMed

    Shearer, Cameron J; Slattery, Ashley D; Stapleton, Andrew J; Shapter, Joseph G; Gibson, Christopher T

    2016-03-29

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  17. Classification, disease, and diagnosis.

    PubMed

    Jutel, Annemarie

    2011-01-01

    Classification shapes medicine and guides its practice. Understanding classification must be part of the quest to better understand the social context and implications of diagnosis. Classifications are part of the human work that provides a foundation for the recognition and study of illness: deciding how the vast expanse of nature can be partitioned into meaningful chunks, stabilizing and structuring what is otherwise disordered. This article explores the aims of classification, their embodiment in medical diagnosis, and the historical traditions of medical classification. It provides a brief overview of the aims and principles of classification and their relevance to contemporary medicine. It also demonstrates how classifications operate as social framing devices that enable and disable communication, assert and refute authority, and are important items for sociological study. PMID:21532133

  19. Hydrometeor classification from two-dimensional video disdrometer data

    NASA Astrophysics Data System (ADS)

    Grazioli, J.; Tuia, D.; Monhart, S.; Schneebeli, M.; Raupach, T.; Berne, A.

    2014-02-01

    This paper presents a hydrometeor classification technique based on two-dimensional video disdrometer (2DVD) data. The method provides an estimate of the dominant hydrometeor type falling over time intervals of 60 s during precipitation, using as input the statistical behavior of a set of particle descriptors, calculated for each particle image. The employed supervised algorithm is a support vector machine (SVM), trained over precipitation time steps labeled by visual inspection. In this way, 8 dominant hydrometeor classes could be discriminated. The algorithm achieves accurate classification performances, with median overall accuracies (Cohen's K) of 90% (0.88), and with accuracies higher than 84% for each hydrometeor class.
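
    A minimal sketch of this kind of supervised pipeline, with synthetic stand-in data (the real inputs would be per-60-s summary statistics of the particle descriptors, labelled by visual inspection):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical input: one row per 60-s interval; columns are summary
# statistics of per-particle descriptors (size, axis ratio, fall
# velocity, ...); labels are the visually assigned dominant class.
rng = np.random.default_rng(4)
X = rng.normal(size=(500, 12))
y = rng.integers(0, 8, size=500)   # 8 dominant hydrometeor classes

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X[:400], y[:400])
print("held-out accuracy:", (clf.predict(X[400:]) == y[400:]).mean())
```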

  20. Acute pancreatitis: international classification and nomenclature.

    PubMed

    Bollen, T L

    2016-02-01

    The incidence of acute pancreatitis (AP) is increasing, and it represents a major healthcare concern. New insights into the pathophysiology, better imaging techniques, and novel treatment options for complicated AP prompted the update of the 1992 Atlanta Classification. Updated nomenclature for pancreatic collections based on imaging criteria is proposed. Adoption of the newly revised classification of acute pancreatitis (2012) by radiologists should help standardise reports and facilitate accurate conveyance of relevant findings to referring physicians involved in the care of patients with AP. This review will clarify the nomenclature of pancreatic collections in the setting of AP. PMID:26602933

  1. Automatic classification of blank substrate defects

    NASA Astrophysics Data System (ADS)

    Boettiger, Tom; Buck, Peter; Paninjath, Sankaranarayanan; Pereira, Mark; Ronald, Rob; Rost, Dan; Samir, Bhamidipati

    2014-10-01

    Mask preparation stages are crucial in mask manufacturing, since the mask will later act as a template for a considerable number of dies on the wafer. Defects on the initial blank substrate, and on subsequent cleaned and coated substrates, can have a profound impact on the usability of the finished mask. This emphasizes the need for early and accurate identification of blank substrate defects and the risk they pose to the patterned reticle. While Automatic Defect Classification (ADC) is a well-developed technology for inspection and analysis of defects on patterned wafers and masks in the semiconductor industry, ADC for mask blanks is still in the early stages of adoption and development. Calibre ADC is a powerful analysis tool for fast, accurate, consistent and automatic classification of defects on mask blanks. Accurate, automated classification of mask blanks leads to better usability of blanks by enabling defect avoidance technologies during mask writing. Detailed information on blank defects can help to select appropriate job-decks to be written on the mask by defect avoidance tools [1][4][5]. Smart algorithms separate critical defects from the potentially large number of non-critical or false defects detected at various stages during mask blank preparation. Mechanisms used by Calibre ADC to identify and characterize defects include analysis of defect location and size, of signal polarity (dark, bright) in both transmitted and reflected review images, and of the separation of defect signals from background noise in defect images. The Calibre ADC engine then uses a decision tree to translate this information into a defect classification code. Using this automated process improves classification accuracy, repeatability and speed, while avoiding the subjectivity of human judgment inherent in the alternative of manual defect classification by trained personnel [2]. This paper focuses on the results from the evaluation of the Automatic Defect Classification (ADC) product at MP Mask

  2. Security classification of information

    SciTech Connect

    Quist, A.S.

    1989-09-01

    Certain governmental information must be classified for national security reasons. However, the national security benefits from classifying information are usually accompanied by significant costs -- those due to a citizenry not fully informed on governmental activities, the extra costs of operating classified programs and procuring classified materials (e.g., weapons), the losses to our nation when advances made in classified programs cannot be utilized in unclassified programs. The goal of a classification system should be to clearly identify that information which must be protected for national security reasons and to ensure that information not needing such protection is not classified. This document was prepared to help attain that goal. This document is the first of a planned four-volume work that comprehensively discusses the security classification of information. Volume 1 broadly describes the need for classification, the basis for classification, and the history of classification in the United States from colonial times until World War 2. Classification of information since World War 2, under Executive Orders and the Atomic Energy Acts of 1946 and 1954, is discussed in more detail, with particular emphasis on the classification of atomic energy information. Adverse impacts of classification are also described. Subsequent volumes will discuss classification principles, classification management, and the control of certain unclassified scientific and technical information. 340 refs., 6 tabs.

  3. Security classification of information

    SciTech Connect

    Quist, A.S.

    1993-04-01

    This document is the second of a planned four-volume work that comprehensively discusses the security classification of information. The main focus of Volume 2 is on the principles for classification of information. Included herein are descriptions of the two major types of information that governments classify for national security reasons (subjective and objective information), guidance to use when determining whether information under consideration for classification is controlled by the government (a necessary requirement for classification to be effective), information disclosure risks and benefits (the benefits and costs of classification), standards to use when balancing information disclosure risks and benefits, guidance for assigning classification levels (Top Secret, Secret, or Confidential) to classified information, guidance for determining how long information should be classified (classification duration), classification of associations of information, classification of compilations of information, and principles for declassifying and downgrading information. Rules or principles of certain areas of our legal system (e.g., trade secret law) are sometimes mentioned to provide added support to some of those classification principles.

  4. Recursive heuristic classification

    NASA Technical Reports Server (NTRS)

    Wilkins, David C.

    1994-01-01

    The author will describe a new problem-solving approach called recursive heuristic classification, whereby a subproblem of heuristic classification is itself formulated and solved by heuristic classification. This allows the construction of more knowledge-intensive classification programs in a way that yields a clean organization. Further, standard knowledge acquisition and learning techniques for heuristic classification can be used to create, refine, and maintain the knowledge base associated with the recursively called classification expert system. The method of recursive heuristic classification was used in the Minerva blackboard shell for heuristic classification. Minerva recursively calls itself every problem-solving cycle to solve the important blackboard scheduler task, which involves assigning a desirability rating to alternative problem-solving actions. Knowing these ratings is critical to the use of an expert system as a component of a critiquing or apprenticeship tutoring system. One innovation of this research is a method called dynamic heuristic classification, which allows selection among dynamically generated classification categories instead of requiring them to be pre-enumerated.

  5. AGN Zoo and Classifications of Active Galaxies

    NASA Astrophysics Data System (ADS)

    Mickaelian, Areg M.

    2015-07-01

    We review the variety of Active Galactic Nuclei (AGN) classes (so-called "AGN zoo") and classification schemes of galaxies by activity types based on their optical emission-line spectrum, as well as other parameters and other than optical wavelength ranges. A historical overview of discoveries of various types of active galaxies is given, including Seyfert galaxies, radio galaxies, QSOs, BL Lacertae objects, Starbursts, LINERs, etc. Various kinds of AGN diagnostics are discussed. All known AGN types and subtypes are presented and described to have a homogeneous classification scheme based on the optical emission-line spectra and in many cases, also other parameters. Problems connected with accurate classifications and open questions related to AGN and their classes are discussed and summarized.

  6. Deriving exposure limits

    NASA Astrophysics Data System (ADS)

    Sliney, David H.

    1990-07-01

    Historically, many different agencies and standards organizations have proposed laser occupational exposure limits (ELs) or maximum permissible exposure (MPE) levels. Although some safety standards have been limited in scope to manufacturer system safety performance standards or to codes of practice, most have included occupational ELs. Initially, in the 1960s, attention was drawn to setting ELs; however, as greater experience accumulated in the use of lasers and some accident experience had been gained, safety procedures were developed. It became clear by 1971, after the first decade of laser use, that detailed hazard evaluation of each laser environment was too complex for most users, and a scheme of hazard classification evolved. Today most countries follow a scheme of four major hazard classifications as defined in Document WS 825 of the International Electrotechnical Commission (IEC). The classifications and the associated accessible emission limits (AELs) were based upon the ELs. The EL and AEL values today are in surprisingly good agreement worldwide. There exists a greater range of safety requirements for the user for each class of laser. The current MPEs (i.e., ELs) and their basis are highlighted in this presentation.

  7. Classification of Itch.

    PubMed

    Ständer, Sonja

    2016-01-01

    Chronic pruritus has diverse forms of presentation and can appear not only on normal skin [International Forum for the Study of Itch (IFSI) classification group II], but also in the company of dermatoses (IFSI classification group I). Scratching, a natural reflex, begins in response to itch. Enough damage can be done to the skin by scratching to cause changes in the primary clinical picture, often leading to a clinical picture predominated by the development of chronic scratch lesions (IFSI classification group III). An internationally recognized, standardized classification system was created by the IFSI to not only aid in clarifying terms and definitions, but also to harmonize the global nomenclature for itch. PMID:27578063

  8. Automated Classification of Clinical Incident Types.

    PubMed

    Gupta, Jaiprakash; Koprinska, Irena; Patrick, Jon

    2015-01-01

    We consider the task of automatic classification of clinical incident reports using machine learning methods. Our data consists of 5448 clinical incident reports collected from the Incident Information Management System used by 7 hospitals in the state of New South Wales in Australia. We evaluate the performance of four classification algorithms: decision tree, naïve Bayes, multinomial naïve Bayes and support vector machine. We initially consider 13 classes (incident types) that were then reduced to 12, and show that it is possible to build accurate classifiers. The most accurate classifier was the multinomial naïve Bayes achieving accuracy of 80.44% and AUC of 0.91. We also investigate the effect of class labelling by an ordinary clinician and an expert, and show that when the data is labelled by an expert the classification performance of all classifiers improves. We found that again the best classifier was multinomial naïve Bayes achieving accuracy of 81.32% and AUC of 0.97. Our results show that some classes in the Incident Information Management System such as Primary Care are not distinct and their removal can improve performance; some other classes such as Aggression Victim are easier to classify than others such as Behavior and Human Performance. In summary, we show that the classification performance can be improved by expert class labelling of the training data, removing classes that are not well defined and selecting appropriate machine learning classifiers. PMID:26210423
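
    As a concrete toy illustration of the winning approach, a multinomial naive Bayes text classifier over incident narratives can be built in a few lines. The reports and labels below are invented placeholders, not data from the study:

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny stand-in for expert-labelled incident narratives.
reports = ["patient fell while transferring from bed",
           "wrong dose of medication administered",
           "staff member struck by aggressive visitor"]
labels  = ["Falls", "Medication", "Aggression Victim"]

# Bag-of-words (unigrams + bigrams) feeding a multinomial naive Bayes model.
clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
clf.fit(reports, labels)
print(clf.predict(["medication given at the wrong time"]))  # -> ['Medication']
```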

  9. 22 CFR 9.6 - Derivative classification.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... CFR 2001.22. (c) Department of State Classification Guide. The Department of State Classification... classification. (a) Definition. Derivative classification is the incorporating, paraphrasing, restating...

  10. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  11. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  12. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  13. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  14. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  15. Library Classification 2020

    ERIC Educational Resources Information Center

    Harris, Christopher

    2013-01-01

    In this article the author explores how a new library classification system might be designed using some aspects of the Dewey Decimal Classification (DDC) and ideas from other systems to create something that works for school libraries in the year 2020. By examining what works well with the Dewey Decimal System, what features should be carried…

  16. Improving classification of psychoses.

    PubMed

    Lawrie, Stephen M; O'Donovan, Michael C; Saks, Elyn; Burns, Tom; Lieberman, Jeffrey A

    2016-04-01

    Psychosis has been recognised as an abnormal state in need of care throughout history and by diverse cultures. Present classifications of psychotic disorder remain based on the presence of specific psychotic symptoms, relative to affective and other symptoms, and their sequence and duration. Although extant diagnostic classifications have restricted validity, they have proven reliability and most clinicians and some patients find them useful. Moreover, these classifications have yet to be replaced by anything better. We propose that an expansion of the subgrouping approach inherent to classification will provide incremental improvement to present diagnostic constructs-as has worked in the rest of medicine. We also propose that subgroups could be created both within and across present diagnostic classifications, taking into consideration the potential value of continuous measures (eg, duration of psychotic symptoms and intelligence quotient). Health-care workers also need to work with service users and carers to develop and adapt approaches to diagnosis that are seen as helpful.

  19. 2-Stage Classification Modeling

    SciTech Connect

    Baltich, L. K.

    1994-11-01

    CIRCUIT2.4 is used to design optimum two-stage classification configurations and operating conditions for energy conservation. It permits simulation of five basic grinding-classification circuits, including one single-stage and four two-stage classification arrangements. Hydrocyclones, spiral classifiers, and sieve band screens can be simulated, and the user may choose the combination of devices for the flowsheet simulation. In addition, the user may select from four classification modeling methods to achieve the goals of a simulation project using the most familiar concepts. Circuit performance is modeled based on classification parameters or equipment operating conditions. A modular approach was taken in designing the program, which allows future addition of other models with relatively minor changes.

  20. Linkage of the California Pesticide Use Reporting Database with Spatial Land Use Data for Exposure Assessment

    PubMed Central

    Nuckols, John R.; Gunier, Robert B.; Riggs, Philip; Miller, Ryan; Reynolds, Peggy; Ward, Mary H.

    2007-01-01

    Background: The State of California maintains a comprehensive Pesticide Use Reporting Database (CPUR). The California Department of Water Resources (CDWR) maps all crops in agricultural counties in California about once every 5 years. Objective: We integrated crop maps with CPUR to more accurately locate where pesticides are applied and evaluated the effects for exposure assessment. Methods: We mapped 577 residences and used the CPUR and CDWR data to compute two exposure metrics based on putative pesticide use within a 500-m buffer. For the CPUR metric, we assigned pesticide exposure to the residence proportionally for all square-mile Sections that intersected the buffer. For the CDWR metric, we linked CPUR crop-specific pesticide use to crops mapped within the buffer and assigned pesticide exposure. We compared the metrics for six pesticides: simazine, trifluralin (herbicides), dicofol, propargite (insecticides), methyl bromide, and metam sodium (fumigants). Results: For all six pesticides we found good agreement (88–98%) as to whether the pesticide use was predicted. When we restricted the analysis to residences with reported pesticide use in Sections within 500 m, agreement was greatly reduced (35–58%). The CPUR metric estimates of pesticide use within 500 m were significantly higher than the CDWR metric for all six pesticides. Conclusions: Our findings may have important implications for exposure classification in epidemiologic studies of agricultural pesticide use using CPUR. There is a need to conduct environmental and biological measurements to ascertain which, if any, of these metrics best represent exposure. PMID:17520053
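
    The two buffer metrics can be sketched with simple geometry overlays. Everything below is hypothetical (coordinates, field shapes, amounts), with shapely standing in for a full GIS:

```python
from shapely.geometry import Point, box

# Hypothetical geometries: a residence, one square-mile Section, and a
# mapped crop field inside it (coordinates in metres).
residence = Point(800, 800)
buffer_500m = residence.buffer(500)
section = box(0, 0, 1609, 1609)          # ~1 mile square Section
crop_field = box(100, 100, 700, 700)     # CDWR-mapped field
section_use_kg = 40.0                    # CPUR: pesticide applied in Section

# CPUR-style metric: apportion Section-level use by the buffer/Section overlap.
cpur = section_use_kg * buffer_500m.intersection(section).area / section.area

# CDWR-style metric: assign use only where the mapped crop intersects the buffer.
cdwr = section_use_kg * buffer_500m.intersection(crop_field).area / crop_field.area
print(round(cpur, 1), round(cdwr, 1))
```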

  1. Photometric Supernova Classification with Machine Learning

    NASA Astrophysics Data System (ADS)

    Lochner, Michelle; McEwen, Jason D.; Peiris, Hiranya V.; Lahav, Ofer; Winter, Max K.

    2016-08-01

    Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope, given that spectroscopic confirmation of type for all supernovae discovered will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves and classification using a machine learning algorithm. Our feature extraction methods vary from model-dependent techniques, namely SALT2 fits, to more independent techniques that fit parametric models to curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k-nearest neighbors, support vector machines, artificial neural networks, and boosted decision trees (BDTs). We test the pipeline on simulated multi-band DES light curves from the Supernova Photometric Classification Challenge. Using the commonly used area under the curve (AUC) of the Receiver Operating Characteristic as a metric, we find that the SALT2 fits and the wavelet approach, with the BDTs algorithm, each achieve an AUC of 0.98, where 1 represents perfect classification. We find that a representative training set is essential for good classification, whatever the feature set or algorithm, with implications for spectroscopic follow-up. Importantly, we find that by using either the SALT2 or the wavelet feature sets with a BDT algorithm, accurate classification is possible purely from light curve data, without the need for any redshift information.
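
    A compact stand-in for the second stage of such a pipeline, boosted decision trees scored with the AUC metric on extracted features (synthetic placeholders here, not DES light curves):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical feature matrix: e.g. wavelet (or SALT2) coefficients per object.
rng = np.random.default_rng(5)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000)) > 0  # Ia vs non-Ia

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
bdt = GradientBoostingClassifier(n_estimators=300).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, bdt.predict_proba(X_te)[:, 1]))
```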

  2. [Conformal radiotherapy: principles and classification].

    PubMed

    Rosenwald, J C; Gaboriaud, G; Pontvert, D

    1999-01-01

    'Conformal radiotherapy' is the name, fixed by usage, given to a new form of radiotherapy resulting from the technological improvements observed during the last ten years. While this terminology is now widely used, no precise definition can be found in the literature. Conformal radiotherapy refers to an approach in which the dose distribution is more closely 'conformed' or adapted to the actual shape of the target volume. However, the achievement of a consensus on a more specific definition is hampered by various difficulties, namely in characterizing the degree of 'conformality'. We have therefore suggested a classification scheme be established on the basis of the tools and the procedures actually used for all steps of the process, i.e., from prescription to treatment completion. Our classification consists of four levels: schematically, at level 0, there is no conformation (rectangular fields); at level 1, a simple conformation takes place, on the basis of conventional 2D imaging; at level 2, a 3D reconstruction of the structures is used for a more accurate conformation; and level 3 includes research and advanced dynamic techniques. We have used our personal experience, contacts with colleagues and data from the literature to analyze all the steps of the planning process, and to define the tools and procedures relevant to a given level. The corresponding tables have been discussed and approved at the European level within the Dynarad concerted action. It is proposed that the term 'conformal radiotherapy' be restricted to procedures where all steps are at least at level 2.

  3. Intraregional classification of wine via ICP-MS elemental fingerprinting.

    PubMed

    Coetzee, P P; van Jaarsveld, F P; Vanhaecke, F

    2014-12-01

    The feasibility of elemental fingerprinting in the classification of wines according to their provenance vineyard soil was investigated in the relatively small geographical area of a single wine district. Results for the Stellenbosch wine district (Western Cape Wine Region, South Africa), comprising an area of less than 1,000 km(2), suggest that classification of wines from different estates (120 wines from 23 estates) is indeed possible using accurate elemental data and multivariate statistical analysis based on a combination of principal component analysis, cluster analysis, and discriminant analysis. This is the first study to demonstrate the successful classification of wines at estate level in a single wine district in South Africa. The elements B, Ba, Cs, Cu, Mg, Rb, Sr, Tl and Zn were identified as suitable indicators. White and red wines were grouped in separate data sets to allow successful classification of wines. Correlation between wine classification and soil type distributions in the area was observed. PMID:24996361
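
    A sketch of the multivariate chain described above: standardized element concentrations, PCA for decorrelation, then discriminant analysis. The cluster-analysis step is omitted and all data below are synthetic:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical table: rows = wines, columns = concentrations of the nine
# indicator elements (B, Ba, Cs, Cu, Mg, Rb, Sr, Tl, Zn); labels = estate.
rng = np.random.default_rng(6)
X = rng.lognormal(size=(120, 9))
estates = rng.integers(0, 23, size=120)

model = make_pipeline(StandardScaler(), PCA(n_components=5),
                      LinearDiscriminantAnalysis())
model.fit(X, estates)
print("training accuracy:", model.score(X, estates))
```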

  4. Behavior Based Social Dimensions Extraction for Multi-Label Classification

    PubMed Central

    Li, Le; Xu, Junyi; Xiao, Weidong; Ge, Bin

    2016-01-01

    Classification based on social dimensions is commonly used to handle the multi-label classification task in heterogeneous networks. However, traditional methods, which mostly rely on the community detection algorithms to extract the latent social dimensions, produce unsatisfactory performance when community detection algorithms fail. In this paper, we propose a novel behavior based social dimensions extraction method to improve the classification performance in multi-label heterogeneous networks. In our method, nodes’ behavior features, instead of community memberships, are used to extract social dimensions. By introducing Latent Dirichlet Allocation (LDA) to model the network generation process, nodes’ connection behaviors with different communities can be extracted accurately, which are applied as latent social dimensions for classification. Experiments on various public datasets reveal that the proposed method can obtain satisfactory classification results in comparison to other state-of-the-art methods on smaller social dimensions. PMID:27049849
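
    A loose sketch of the idea: behavior counts in place of hard community memberships, LDA to extract latent social dimensions, then a downstream classifier. Binary labels stand in for the multi-label setting, and all data are synthetic:

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

# Hypothetical node-by-community interaction counts (how often each node
# connects into each detected group), used instead of hard memberships.
rng = np.random.default_rng(7)
behavior_counts = rng.poisson(2.0, size=(200, 30))
labels = rng.integers(0, 2, size=200)

lda = LatentDirichletAllocation(n_components=8, random_state=0)
social_dims = lda.fit_transform(behavior_counts)  # latent social dimensions
clf = LogisticRegression(max_iter=1000).fit(social_dims, labels)
```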

  6. Classification of earth terrain using polarimetric synthetic aperture radar images

    NASA Technical Reports Server (NTRS)

    Lim, H. H.; Swartz, A. A.; Yueh, H. A.; Kong, J. A.; Shin, R. T.; Van Zyl, J. J.

    1989-01-01

    Supervised and unsupervised classification techniques are developed and used to classify the earth terrain components from SAR polarimetric images of San Francisco Bay and Traverse City, Michigan. The supervised techniques include the Bayes classifiers, normalized polarimetric classification, and simple feature classification using discriminants such as the absolute and normalized magnitude response of individual receiver channel returns and the phase difference between receiver channels. An algorithm is developed as an unsupervised technique which classifies terrain elements based on the relationship between the orientation angle and the handedness of the transmitting and receiving polarization states. It is found that supervised classification produces the best results when accurate classifier training data are used, while unsupervised classification may be applied when training data are not available.

  8. Hyperspectral Data Classification Using Factor Graphs

    NASA Astrophysics Data System (ADS)

    Makarau, A.; Müller, R.; Palubinskas, G.; Reinartz, P.

    2012-07-01

    Accurate classification of hyperspectral data is still a challenging task, and new classification methods are being developed to meet the demands of hyperspectral data use. The objective of this paper is to develop a new method for hyperspectral data classification that ensures desirable classification-model properties such as transferability, generalization and probabilistic interpretation. Although factor graphs (undirected graphical models) are unfortunately not widely employed in remote sensing tasks, these models possess important properties, such as the ability to represent complex systems for estimation and decision-making tasks. In this paper we present a new method for hyperspectral data classification using factor graphs. A factor graph (a bipartite graph consisting of variable and factor vertices) allows the factorization of a complex function, leading to the definition of variables (employed to store the input data), latent variables (which bridge an abstract class to the data), and factors (defining prior probabilities for spectral features and abstract classes, mapping the input data to a mixture of spectral features, and further bridging the mixture to an abstract class). The latent variables play an important role by defining a two-level mapping of the input spectral features to a class. Configuring (learning) the model on training data yields a parameter set that bridges the input data to a class. The classification algorithm is as follows. Spectral bands are separately pre-processed (unsupervised clustering is used) to be defined on a finite domain (alphabet), leading to a representation of the data by multinomial distributions. The represented hyperspectral data are used as input evidence (the evidence vector is selected pixelwise) in the configured factor graph, and inference is run, resulting in the posterior probability. Variational (mean-field) inference allows plausible results to be obtained with a low calculation time. Calculating the posterior probability for each class

  9. Cancer classification based on gene expression using neural networks.

    PubMed

    Hu, H P; Niu, Z J; Bai, Y P; Tan, X H

    2015-12-21

    Based on gene expression, we have classified 53 colon cancer patients with UICC stage II into two groups: relapse and no relapse. Samples were taken from each patient, and gene information was extracted. From the 53 samples examined, 500 genes were selected as suitable inputs through analyses with S-Kohonen, BP, and SVM neural networks. The classification accuracy obtained by the S-Kohonen neural network reaches 91%, which is more accurate than classification by the BP and SVM neural networks. The results show that the S-Kohonen neural network is better suited for classification and has a certain feasibility and validity compared with BP and SVM neural networks.

  10. Methodology for hyperspectral image classification using novel neural network

    SciTech Connect

    Subramanian, S.; Gat, N.; Sheffield, M.; Barhen, J.; Toomarian, N.

    1997-04-01

    A novel feed-forward neural network is used to classify hyperspectral data from the AVIRIS sensor. The network applies an alternating-direction singular value decomposition technique to achieve rapid training times (a few seconds per class). Very few samples (10-12) are required for training, and 100% accurate classification is obtained using test data sets. The methodology combines this rapid-training neural network with data reduction and maximal feature separation techniques, such as principal component analysis and simultaneous diagonalization of covariance matrices, for rapid and accurate classification of large hyperspectral images. The results are compared to those of standard statistical classifiers. 21 refs., 3 figs., 5 tabs.

  11. Military Exposures

    MedlinePlus

  12. Classification accuracy improvement

    NASA Technical Reports Server (NTRS)

    Kistler, R.; Kriegler, F. J.

    1977-01-01

    Improvements made in the processing system designed for MIDAS (prototype multivariate interactive digital analysis system) effect higher accuracy in the classification of pixels, resulting in significantly reduced processing time. The improved system realizes a cost reduction factor of 20 or more.

  13. An Easy Classification System

    ERIC Educational Resources Information Center

    Bryan, R. C.

    1973-01-01

    File folders can be used effectively to develop and retrieve information about animal classification systems. Animal characters are drawn separately on the front side of a file cover and holes are punched next to each character. (PS)

  14. Accurate heart rate estimation from camera recording via MUSIC algorithm.

    PubMed

    Fouladi, Seyyed Hamed; Balasingham, Ilangko; Ramstad, Tor Audun; Kansanen, Kimmo

    2015-01-01

    In this paper, we propose an algorithm to extract the heart rate frequency from video camera recordings using the Multiple SIgnal Classification (MUSIC) algorithm. This improves the accuracy of the estimated heart rate frequency in cases where performance is limited by the number of samples and the frame rate. Monitoring vital signs remotely can be exploited for both non-contact physiological and psychological diagnosis. The color variation recorded by ordinary cameras is used for heart rate monitoring. The orthogonality between the signal subspace and the noise subspace is used to find the heart rate frequency more accurately than traditional methods do. Experimental results show that the limitations of previous methods can be overcome by using subspace methods. PMID:26738015
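
    The core subspace idea in this record can be sketched in a few lines: form a sample covariance from windowed segments of the colour trace, split off the noise subspace, and scan a heart-rate band for the frequency most orthogonal to it. This is a minimal sketch under stated assumptions (window length M, model order p, a synthetic 72-bpm trace), not the authors' implementation.

    ```python
    # Sketch: MUSIC pseudospectrum for pulse-frequency estimation from a
    # camera colour trace. M, p, the frequency grid, and the synthetic signal
    # are assumptions chosen only to illustrate the method.
    import numpy as np

    def music_frequency(x, fs, f_grid, M=40, p=2):
        x = x - x.mean()
        K = len(x) - M + 1
        X = np.column_stack([x[k:k + M] for k in range(K)])  # snapshots
        R = X @ X.T / K                                      # sample covariance
        w, V = np.linalg.eigh(R)                             # ascending eigenvalues
        En = V[:, :M - p]                                    # noise subspace
        n = np.arange(M)
        spectrum = []
        for f in f_grid:
            a = np.exp(2j * np.pi * f * n / fs)              # steering vector
            spectrum.append(1.0 / np.real(a.conj() @ En @ En.T @ a))
        return f_grid[int(np.argmax(spectrum))]

    fs = 30.0                                  # typical camera frame rate
    t = np.arange(0, 20, 1 / fs)
    trace = np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.random.randn(len(t))  # 72 bpm
    f_grid = np.linspace(0.7, 3.0, 300)        # plausible heart-rate band
    print(music_frequency(trace, fs, f_grid) * 60, "bpm")
    ```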

  15. Supernova Photometric Lightcurve Classification

    NASA Astrophysics Data System (ADS)

    Zaidi, Tayeb; Narayan, Gautham

    2016-01-01

    This is a preliminary report on photometric supernova classification. We first explore the properties of supernova light curves, and attempt to restructure the unevenly sampled and sparse data from assorted datasets to allow for processing and classification. The data were primarily drawn from the Dark Energy Survey (DES) simulated data, created for the Supernova Photometric Classification Challenge. This poster shows a method for producing a non-parametric representation of the light curve data and applying a Random Forest classifier algorithm to distinguish between supernova types. We examine the impact of Principal Component Analysis in reducing the dimensionality of the dataset for future classification work. The classification code will be used in a stage of the ANTARES pipeline, created for use on the Large Synoptic Survey Telescope alert data and other wide-field surveys. The final figure-of-merit for the DES data in the r band was 60% for binary classification (Type I vs II). Zaidi was supported by the NOAO/KPNO Research Experiences for Undergraduates (REU) Program, which is funded by the National Science Foundation Research Experiences for Undergraduates Program (AST-1262829).
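
    A hedged sketch of the pipeline this poster describes: dimensionality reduction with PCA followed by a Random Forest for binary classification. The synthetic feature matrix stands in for light curves already resampled onto a common grid, which is an assumption; the poster's non-parametric light-curve representation is not reproduced here.

    ```python
    # Sketch: PCA-compressed light-curve features fed to a Random Forest.
    # The random data are a placeholder for resampled light curves.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 100))     # 500 light curves on a common grid
    y = rng.integers(0, 2, size=500)    # binary labels: Type I vs Type II

    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    pca = PCA(n_components=20).fit(Xtr)            # reduce dimensionality first
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(pca.transform(Xtr), ytr)
    print("holdout accuracy:", clf.score(pca.transform(Xte), yte))
    ```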

  16. Progressive Classification Using Support Vector Machines

    NASA Technical Reports Server (NTRS)

    Wagstaff, Kiri; Kocurek, Michael

    2009-01-01

    An algorithm for progressive classification of data, analogous to progressive rendering of images, makes it possible to compromise between speed and accuracy. This algorithm uses support vector machines (SVMs) to classify data. An SVM is a machine learning algorithm that builds a mathematical model of the desired classification concept by identifying the critical data points, called support vectors. Coarse approximations to the concept require only a few support vectors, while precise, highly accurate models require far more. Once the model has been constructed, the SVM can be applied to new observations. The cost of classifying a new observation is proportional to the number of support vectors in the model. When computational resources are limited, an SVM of the appropriate complexity can be produced. However, if the constraints are not known when the model is constructed, or if they can change over time, a method for adaptively responding to the current resource constraints is required. This capability is particularly relevant for spacecraft (or any other real-time systems) that perform onboard data analysis. The new algorithm enables the fast, interactive application of an SVM classifier to a new set of data. The classification process achieved by this algorithm is characterized as progressive because a coarse approximation to the true classification is generated rapidly and thereafter iteratively refined. The algorithm uses two SVMs: (1) a fast, approximate one and (2) a slow, highly accurate one. New data are initially classified by the fast SVM, producing a baseline approximate classification. For each classified data point, the algorithm calculates a confidence index that indicates the likelihood that it was classified correctly in the first pass. Next, the data points are sorted by their confidence indices and progressively reclassified by the slower, more accurate SVM, starting with the items most likely to be incorrectly classified. The user…
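
    The progressive scheme lends itself to a compact sketch: a fast SVM produces the baseline labels, the absolute decision-function margin serves as the confidence index, and a slower, more accurate SVM reclassifies the least confident points first. The model choices (LinearSVC and an RBF SVC) and the refinement budget are assumptions for illustration, not NASA's implementation.

    ```python
    # Sketch: progressive classification with a fast and a slow SVM.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.svm import LinearSVC, SVC

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    Xtr, ytr, Xnew = X[:1000], y[:1000], X[1000:]

    fast = LinearSVC(dual=False).fit(Xtr, ytr)       # coarse, cheap model
    slow = SVC(kernel="rbf", C=10).fit(Xtr, ytr)     # precise, expensive model

    labels = fast.predict(Xnew)                      # baseline approximation
    confidence = np.abs(fast.decision_function(Xnew))  # margin as confidence index
    order = np.argsort(confidence)                   # least confident first

    budget = 200                                     # refine as resources allow
    labels[order[:budget]] = slow.predict(Xnew[order[:budget]])
    print("refined", budget, "least-confident labels")
    ```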

  17. Formaldehyde exposure and acute health effects study

    SciTech Connect

    Quackenboss, J.J.; Lebowitz, M.D.; Michaud, J.P.; Bronnimann, D. )

    1989-01-01

    To assess the effects of formaldehyde exposures on health, exposure groups were defined using baseline exposure and health questionnaires. Formaldehyde concentrations were poorly correlated with these exposure classifications, perhaps due to the time delay between classification and monitoring. The 151 households reported here had a mean HCHO concentration of 35 µg/m³ (S.E. 1.5, median 30). Passive samplers prepared in our lab were calibrated in a chamber to derive an estimated sampling rate of 0.311 µg/(mg·m⁻³·hr). They were also compared to commercially available samplers inside the homes, with a correlation coefficient of 0.896 and a mean difference of 2.6 µg/m³. In this report of initial findings from an ongoing study, daily symptoms and peak expiratory flow measurements were compared with an HCHO exposure classification based on the median measured concentrations. None of the symptom groups were related to HCHO exposure when controlling for age and sex. There was a significant relationship between HCHO exposure and variability in peak expiratory flows that was dependent on age group. It may be especially important to assess the variability in reactive individuals and children to determine the short-term effects of HCHO exposures and possible long-term consequences.

  18. Iris Image Classification Based on Hierarchical Visual Codebook.

    PubMed

    Zhenan Sun; Hui Zhang; Tieniu Tan; Jianyu Wang

    2014-06-01

    Iris recognition as a reliable method for personal identification has been well studied, with the objective of assigning the class label of each iris image to a unique subject. In contrast, iris image classification aims to classify an iris image into an application-specific category, e.g., iris liveness detection (classification of genuine and fake iris images), race classification (e.g., classification of iris images of Asian and non-Asian subjects), or coarse-to-fine iris identification (classification of all iris images in the central database into multiple categories). This paper proposes a general framework for iris image classification based on texture analysis. A novel texture pattern representation method called the Hierarchical Visual Codebook (HVC) is proposed to encode the texture primitives of iris images. The proposed HVC method is an integration of two existing Bag-of-Words models, namely the Vocabulary Tree (VT) and Locality-constrained Linear Coding (LLC). The HVC adopts a coarse-to-fine visual coding strategy and takes advantage of both VT and LLC for accurate and sparse representation of iris texture. Extensive experimental results demonstrate that the proposed iris image classification method achieves state-of-the-art performance for iris liveness detection, race classification, and coarse-to-fine iris identification. A comprehensive fake iris image database simulating four types of iris spoof attacks is developed as a benchmark for research on iris liveness detection. PMID:26353275

  20. INVENTORY AND CLASSIFICATION OF GREAT LAKES COASTAL WETLANDS FOR MONITORING AND ASSESSMENT AT LARGE SPATIAL SCALES

    EPA Science Inventory

    Monitoring aquatic resources for regional assessments requires an accurate and comprehensive inventory of the resource and a useful classification of ecosystem similarities. Our research effort to create an electronic database and work with various ways to classify coastal wetlands...

  1. RELIABILITY OF BIOMARKERS OF PESTICIDE EXPOSURE AMONG CHILDREN AND ADULTS IN CTEPP OHIO

    EPA Science Inventory

    Urinary biomarkers offer the potential for providing an efficient tool for exposure classification by reflecting the aggregate of all exposure routes. Substantial variability observed in urinary pesticide metabolite concentrations over short periods of time, however, has cast so...

  2. MODELING POPULATION EXPOSURES TO OUTDOOR SOURCES OF HAZARDOUS AIR POLLUTANTS

    EPA Science Inventory

    Accurate assessment of human exposures is an important part of environmental health effects research. However, most air pollution epidemiology studies rely upon imperfect surrogates of personal exposures, such as information based on available central-site outdoor concentration ...

  3. ANALYSIS OF DISCRIMINATING FACTORS IN HUMAN ACTIVITIES THAT AFFECT EXPOSURE

    EPA Science Inventory

    Accurately modeling exposure to particulate matter (PM) and other pollutants ultimately involves the utilization of human location-activity databases to assist in understanding the potential variability of microenvironmental exposures. This paper critically considers and stati...

  4. Manual classification strategies in the ECOD database

    PubMed Central

    Cheng, Hua; Liao, Yuxing; Schaeffer, R. Dustin; Grishin, Nick V.

    2015-01-01

    ECOD (Evolutionary Classification Of protein Domains) is a comprehensive and up-to-date protein structure classification database. The majority of new structures released from the PDB (Protein Data Bank) every week already have close homologs in the ECOD hierarchy and thus can be reliably partitioned into domains and classified by software without manual intervention. However, those proteins that lack confidently detectable homologs require careful analysis by experts. Although many bioinformatics resources rely on expert curation to some degree, specific examples of how this curation occurs and in what cases it is necessary are not always described. Here, we illustrate the manual classification strategy in ECOD by example, focusing on two major issues in protein classification: domain partitioning and the relationship between homology and similarity scores. Most examples show recently released and manually classified PDB structures. We discuss multi-domain proteins, discordance between sequence and structural similarities, difficulties with assessing homology with scores, and integral membrane proteins homologous to soluble proteins. By timely assimilation of newly available structures into its hierarchy, ECOD strives to provide a most accurate and updated view of the protein structure world as a result of combined computational and expert-driven analysis. PMID:25917548

  5. Spatial Pattern Classification for More Accurate Forecasting of Variable Energy Resources

    NASA Astrophysics Data System (ADS)

    Novakovskaia, E.; Hayes, C.; Collier, C.

    2014-12-01

    The accuracy of solar and wind forecasts is becoming increasingly essential as grid operators continue to integrate additional renewable generation onto the electric grid. Forecast errors affect rate payers, grid operators, wind and solar plant maintenance crews, and energy traders through increases in prices, project downtime, or lost revenue. While extensive and beneficial efforts have been undertaken in recent years to improve physical weather models for a broad spectrum of applications, these improvements have generally not been sufficient to meet the accuracy demands of system planners. For renewables, these models are often used in conjunction with additional statistical models utilizing both meteorological observations and power generation data. Forecast accuracy can depend on the specific weather regime at a given location. To account for these dependencies, it is important that the parameterizations used in statistical models change as the regime changes. An automated tool, based on an artificial neural network model, has been developed to identify different weather regimes as they impact power output forecast accuracy at wind or solar farms. In this study, improvements in forecast accuracy were analyzed over varying time horizons for wind farms and utility-scale PV plants located in different geographical regions.

  6. Using minimum DNA marker loci for accurate population classification in rice (Oryza sativa L.)

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Using few DNA markers to classify genetic background of a germplasm pool will help breeders make a quick decision while saving time and resources. WHICHLOCI is a computer program that selects the best combination of loci for population assignment through empiric analysis of molecular marker data. Th...

  7. Concepts of Classification and Taxonomy Phylogenetic Classification

    NASA Astrophysics Data System (ADS)

    Fraix-Burnet, D.

    2016-05-01

    Phylogenetic approaches to classification have been heavily developed in biology by bioinformaticians. But these techniques have applications in other fields, in particular in linguistics. Their main characteristic is to search for relationships between the objects or species under study, instead of grouping them by similarity. They are thus rather well suited to any kind of evolutionary objects. For nearly fifteen years, astrocladistics has explored the use of Maximum Parsimony (or cladistics) for astronomical objects like galaxies or globular clusters. In this lesson we will learn how it works.

  8. Brain extraction based on locally linear representation-based classification.

    PubMed

    Huang, Meiyan; Yang, Wei; Jiang, Jun; Wu, Yao; Zhang, Yu; Chen, Wufan; Feng, Qianjin

    2014-05-15

    Brain extraction is an important procedure in brain image analysis. Although numerous brain extraction methods have been presented, enhancing brain extraction methods remains challenging because brain MRI images exhibit complex characteristics, such as anatomical variability and intensity differences across different sequences and scanners. To address this problem, we present a Locally Linear Representation-based Classification (LLRC) method for brain extraction. A novel classification framework is derived by introducing the locally linear representation to the classical classification model. Under this classification framework, a common label fusion approach can be considered as a special case and thoroughly interpreted. Locality is important to calculate fusion weights for LLRC; this factor is also considered to determine that Local Anchor Embedding is more applicable in solving locally linear coefficients compared with other linear representation approaches. Moreover, LLRC supplies a way to learn the optimal classification scores of the training samples in the dictionary to obtain accurate classification. The International Consortium for Brain Mapping and the Alzheimer's Disease Neuroimaging Initiative databases were used to build a training dataset containing 70 scans. To evaluate the proposed method, we used four publicly available datasets (IBSR1, IBSR2, LPBA40, and ADNI3T, with a total of 241 scans). Experimental results demonstrate that the proposed method outperforms the four common brain extraction methods (BET, BSE, GCUT, and ROBEX), and is comparable to the performance of BEaST, while being more accurate on some datasets compared with BEaST. PMID:24525169
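
    A much-simplified sketch of the locally linear representation idea: reconstruct a test sample from its k nearest training samples by least squares and pool the resulting weights per class. The real LLRC method operates on image patches and solves the coefficients with Local Anchor Embedding; this toy version and all of its parameters are assumptions.

    ```python
    # Sketch: classification by locally linear representation over the k
    # nearest training samples (toy stand-in for LLRC).
    import numpy as np

    def llr_classify(x, D, labels, k=10):
        # k nearest atoms in the training dictionary D (columns = samples)
        idx = np.argsort(np.linalg.norm(D.T - x, axis=1))[:k]
        Dk = D[:, idx]
        w, *_ = np.linalg.lstsq(Dk, x, rcond=None)   # local linear weights
        scores = {c: w[labels[idx] == c].sum() for c in np.unique(labels[idx])}
        return max(scores, key=scores.get)           # class with largest pooled weight

    rng = np.random.default_rng(1)
    D = rng.normal(size=(50, 200))                   # 200 training samples
    labels = rng.integers(0, 2, size=200)            # e.g. brain / non-brain
    print(llr_classify(rng.normal(size=50), D, labels))
    ```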

  10. Classification, change-detection and accuracy assessment: Toward fuller automation

    NASA Astrophysics Data System (ADS)

    Podger, Nancy E.

    This research aims to automate methods for conducting change detection studies using remotely sensed images. Five major objectives were tested on two study sites, one encompassing Madison, Wisconsin, and the other Fort Hood, Texas. (Objective 1) Enhance accuracy assessments by estimating standard errors using bootstrap analysis. Bootstrap estimates of the standard errors were found to be comparable to parametric statistical estimates. Also, results show that bootstrapping can be used to evaluate the consistency of a classification process. (Objective 2) Automate the guided clustering classifier. This research shows that the guided clustering classification process can be automated while maintaining highly accurate results. Three different evaluation methods were used. (Evaluation 1) Appraised the consistency of 25 classifications produced from the automated system. The classifications differed from one another by only two to four percent. (Evaluation 2) Compared accuracies produced by the automated system to classification accuracies generated following a manual guided clustering protocol. Results: The automated system produced higher overall accuracies in 50 percent of the tests and was comparable for all but one of the remaining tests. (Evaluation 3) Assessed the time and effort required to produce accurate classifications. Results: The automated system produced classifications in less time and with less effort than the manual 'protocol' method. (Objective 3) Built a flexible, interactive software tool to aid in producing binary change masks. (Objective 4) Reduced by automation the amount of training data needed to classify the second image of a two-time-period change detection project. Locations of the training sites in 'unchanged' areas employed to classify the first image were used to identify sites where spectral information was automatically extracted from the second image. Results: The automatically generated training data produces classification accuracies…
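
    Objective 1 (bootstrap standard errors for accuracy assessment) reduces to a short computation: resample the per-pixel agreement indicators with replacement and take the spread of the resampled accuracies. The sketch below uses a synthetic agreement vector; the sample size and replicate count are assumptions.

    ```python
    # Sketch: bootstrap standard error of overall classification accuracy.
    import numpy as np

    rng = np.random.default_rng(0)
    agree = rng.random(1000) < 0.88    # 1 where map label matched reference label
    n_boot = 2000

    boot_acc = np.array([
        rng.choice(agree, size=agree.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    print(f"accuracy = {agree.mean():.3f} +/- {boot_acc.std(ddof=1):.3f} (bootstrap SE)")
    ```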

  11. Segmentation assisted food classification for dietary assessment

    NASA Astrophysics Data System (ADS)

    Zhu, Fengqing; Bosch, Marc; Schap, TusaRebecca; Khanna, Nitin; Ebert, David S.; Boushey, Carol J.; Delp, Edward J.

    2011-03-01

    Accurate methods and tools to assess food and nutrient intake are essential for establishing the association between diet and health. Preliminary studies have indicated that the use of a mobile device with a built-in camera to obtain images of the food consumed may provide a less burdensome and more accurate method for dietary assessment. We are developing methods to identify food items using a single image acquired from the mobile device. Our goal is to automatically determine the regions in an image where a particular food is located (segmentation) and correctly identify the food type based on its features (classification or food labeling). Images of foods are segmented using Normalized Cuts based on intensity and color. Color and texture features are extracted from each segmented food region. Classification decisions for each segmented region are made using support vector machine methods. The segmentation of each food region is refined based on feedback from the output of the classifier to provide a more accurate estimation of the quantity of food consumed.

  12. Proteomic applications of automated GPCR classification.

    PubMed

    Davies, Matthew N; Gloriam, David E; Secker, Andrew; Freitas, Alex A; Mendao, Miguel; Timmis, Jon; Flower, Darren R

    2007-08-01

    The G-protein coupled receptor (GPCR) superfamily fulfils various metabolic functions and interacts with a diverse range of ligands. There is a lack of sequence similarity between the six classes that comprise the GPCR superfamily. Moreover, most novel GPCRs found have low sequence similarity to other family members, which makes it difficult to infer properties from related receptors. Many different approaches have been taken towards developing efficient and accurate methods for GPCR classification, ranging from motif-based systems to machine learning, as well as a variety of alignment-free techniques based on the physicochemical properties of their amino acid sequences. This review describes the inherent difficulties in developing a GPCR classification algorithm and includes techniques previously employed in this area. PMID:17639603

  13. Evaluation of the Unified Compensation and Classification Plan.

    ERIC Educational Resources Information Center

    Dade County Public Schools, Miami, FL. Office of Educational Accountability.

    The Unified Classification and Compensation Plan of the Dade County (Florida) Public Schools consists of four interdependent activities that include: (1) developing and maintaining accurate job descriptions, (2) conducting evaluations that recommend job worth and grade, (3) developing and maintaining rates of compensation for job values, and (4)…

  14. Preprocessing remotely-sensed data for efficient analysis and classification

    SciTech Connect

    Kelly, P.M.; White, J.M.

    1993-02-01

    Interpreting remotely-sensed data typically requires expensive, specialized computing machinery capable of storing and manipulating large amounts of data quickly. In this paper, we present a method for accurately analyzing and categorizing remotely-sensed data on much smaller, less expensive platforms. Data size is reduced in such a way as to enable an efficient, interactive method of data classification.

  15. Accurate estimation of σ⁰ using AIRSAR data

    NASA Technical Reports Server (NTRS)

    Holecz, Francesco; Rignot, Eric

    1995-01-01

    During recent years signature analysis, classification, and modeling of Synthetic Aperture Radar (SAR) data, as well as estimation of geophysical parameters from SAR data, have received a great deal of interest. An important requirement for the quantitative use of SAR data is the accurate estimation of the backscattering coefficient σ⁰. In terrain with relief variations, radar signals are distorted due to the projection of the scene topography into the slant range-Doppler plane. The effect of these variations is to change the physical size of the scattering area, leading to errors in the radar backscatter values and incidence angle. For this reason the local incidence angle, derived from the sensor position and Digital Elevation Model (DEM) data, must always be considered. Especially in the airborne case, the antenna gain pattern can be an additional source of radiometric error, because the radar look angle is not known precisely as a result of the aircraft motions and the local surface topography. Consequently, radiometric distortions due to the antenna gain pattern must also be corrected for each resolution cell, by taking into account aircraft displacements (position and attitude) and the position of the backscatter element, defined by the DEM data. In this paper, a method to derive an accurate estimation of the backscattering coefficient using NASA/JPL AIRSAR data is presented. The results are evaluated in terms of geometric accuracy, radiometric variations of σ⁰, and precision of the estimated forest biomass.

  16. Fuzzy C-means classification for corrosion evolution of steel images

    NASA Astrophysics Data System (ADS)

    Trujillo, Maite; Sadki, Mustapha

    2004-05-01

    An unavoidable problem of metal structures is their exposure to rust degradation during their operational life. Thus, the surfaces need to be assessed in order to avoid potential catastrophes. There is considerable interest in the use of patch repair strategies which minimize the project costs. However, to operate such strategies with confidence in the long useful life of the repair, it is essential that the condition of the existing coatings and the steel substrate can be accurately quantified and classified. This paper describes the application of fuzzy set theory to steel surface classification according to rust time. We propose a semi-automatic technique to obtain image clustering using the Fuzzy C-means (FCM) algorithm, and we analyze two kinds of data to study the classification performance. First, we investigate the use of raw image pixels, without any pre-processing, and of neighborhood pixels. Second, we apply Gaussian noise with different standard deviations to the images to study the FCM method's tolerance to Gaussian noise. The noisy images simulate possible perturbations of the images due to the weather or rust deposits on the steel surfaces during typical on-site acquisition procedures.
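
    A minimal NumPy implementation of the Fuzzy C-means loop used in this study is sketched below, as one might run it on pixel feature vectors from the corrosion images. The cluster count, fuzzifier m, and the toy data are assumptions; the authors' semi-automatic pipeline adds pre-processing and noise-tolerance analysis around this core.

    ```python
    # Sketch: Fuzzy C-means clustering of pixel feature vectors.
    import numpy as np

    def fcm(X, c=3, m=2.0, n_iter=100, tol=1e-5, seed=0):
        rng = np.random.default_rng(seed)
        U = rng.random((len(X), c))
        U /= U.sum(axis=1, keepdims=True)        # memberships sum to 1 per pixel
        for _ in range(n_iter):
            Um = U ** m
            centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
            # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
            U_new = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1))).sum(axis=2)
            if np.abs(U_new - U).max() < tol:
                U = U_new
                break
            U = U_new
        return U, centers

    X = np.random.default_rng(1).random((500, 3))   # e.g. RGB values of pixels
    U, centers = fcm(X, c=3)
    hard_labels = U.argmax(axis=1)                  # defuzzified class per pixel
    ```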

  17. Estimating classification images with generalized linear and additive models.

    PubMed

    Knoblauch, Kenneth; Maloney, Laurence T

    2008-12-22

    Conventional approaches to modeling classification image data can be described in terms of a standard linear model (LM). We show how the problem can be characterized as a Generalized Linear Model (GLM) with a Bernoulli distribution. We demonstrate via simulation that this approach is more accurate in estimating the underlying template in the absence of internal noise. With increasing internal noise, however, the advantage of the GLM over the LM decreases and GLM is no more accurate than LM. We then introduce the Generalized Additive Model (GAM), an extension of GLM that can be used to estimate smooth classification images adaptively. We show that this approach is more robust to the presence of internal noise, and finally, we demonstrate that GAM is readily adapted to estimation of higher order (nonlinear) classification images and to testing their significance.
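
    The GLM formulation can be sketched directly: simulate noise-only stimuli, generate Bernoulli responses from a linear-template observer, and fit a logistic regression whose coefficients estimate the classification image. The template, internal-noise level, and trial count below are assumptions for the simulation; the GAM extension is not shown.

    ```python
    # Sketch: estimating a classification image with a Bernoulli GLM
    # (logistic regression) on simulated noise stimuli.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    side, n_trials = 8, 5000
    template = np.zeros((side, side))
    template[:, side // 2] = 1.0                       # toy vertical-bar template
    noise = rng.normal(size=(n_trials, side * side))   # noise stimuli, one per trial
    # Observer: linear template match plus internal noise, thresholded at zero
    resp = (noise @ template.ravel() + 0.5 * rng.normal(size=n_trials)) > 0

    glm = LogisticRegression(C=1.0).fit(noise, resp)   # Bernoulli GLM, logit link
    cimg = glm.coef_.reshape(side, side)               # estimated classification image
    print(np.corrcoef(cimg.ravel(), template.ravel())[0, 1])
    ```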

  18. Temporal variation of traffic on highways and the development of accurate temporal allocation factors for air pollution analyses

    NASA Astrophysics Data System (ADS)

    Batterman, Stuart; Cook, Richard; Justin, Thomas

    2015-04-01

    Traffic activity encompasses the number, mix, speed and acceleration of vehicles on roadways. The temporal pattern and variation of traffic activity reflects vehicle use, congestion and safety issues, and it represents a major influence on emissions and concentrations of traffic-related air pollutants. Accurate characterization of vehicle flows is critical in analyzing and modeling urban and local-scale pollutants, especially in near-road environments and traffic corridors. This study describes methods to improve the characterization of temporal variation of traffic activity. Annual, monthly, daily and hourly temporal allocation factors (TAFs), which describe the expected temporal variation in traffic activity, were developed using four years of hourly traffic activity data recorded at 14 continuous counting stations across the Detroit, Michigan, U.S. region. Five sites also provided vehicle classification. TAF-based models provide a simple means to apportion annual average estimates of traffic volume to hourly estimates. The analysis shows the need to separate TAFs for total and commercial vehicles, and weekdays, Saturdays, Sundays and observed holidays. Using either site-specific or urban-wide TAFs, nearly all of the variation in historical traffic activity at the street scale could be explained; unexplained variation was attributed to adverse weather, traffic accidents and construction. The methods and results presented in this paper can improve air quality dispersion modeling of mobile sources, and can be used to evaluate and model temporal variation in ambient air quality monitoring data and exposure estimates.
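
    The TAF idea reduces to a simple ratio: the expected count for a given day type and hour divided by the overall mean, which can then scale an annual-average volume to an hourly estimate. The sketch below uses synthetic hourly counts; the weekday/Saturday/Sunday split follows the paper, while the values and names are assumptions.

    ```python
    # Sketch: hourly temporal allocation factors (TAFs) from hourly counts.
    import numpy as np
    import pandas as pd

    idx = pd.date_range("2015-01-01", periods=24 * 365, freq="h")
    counts = pd.Series(1000 + 400 * np.sin(2 * np.pi * idx.hour / 24), index=idx)

    day_type = np.where(idx.dayofweek < 5, "weekday",
                        np.where(idx.dayofweek == 5, "saturday", "sunday"))
    # TAF = mean count for (day type, hour) / overall mean count
    taf = counts.groupby([day_type, idx.hour]).mean() / counts.mean()

    # Apply: hourly volume estimate = annual-average hourly volume * TAF
    aadt_hourly = 1500.0
    print(aadt_hourly * taf.loc[("weekday", 8)])   # e.g. weekday 08:00 estimate
    ```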

  20. Vertebral fracture classification

    NASA Astrophysics Data System (ADS)

    de Bruijne, Marleen; Pettersen, Paola C.; Tankó, László B.; Nielsen, Mads

    2007-03-01

    A novel method for classification and quantification of vertebral fractures from X-ray images is presented. Using pairwise conditional shape models trained on a set of healthy spines, the most likely unfractured shape is estimated for each of the vertebrae in the image. The difference between the true shape and the reconstructed normal shape is an indicator of the shape abnormality. A statistical classification scheme with the two shapes as features is applied to detect, classify, and grade various types of deformities. In contrast with the current (semi-)quantitative grading strategies, this method takes the full shape into account, it uses a patient-specific reference by combining population-based information on biological variation in vertebra shape and vertebra interrelations, and it provides a continuous measure of deformity. Good agreement with manual classification and grading is demonstrated on 204 lateral spine radiographs with in total 89 fractures.

  1. Classification images with uncertainty

    PubMed Central

    Tjan, Bosco S.; Nandy, Anirvan S.

    2009-01-01

    Classification image and other similar noise-driven linear methods have found increasingly wider applications in revealing psychophysical receptive field structures or perceptual templates. These techniques are relatively easy to deploy, and the results are simple to interpret. However, being a linear technique, the utility of the classification-image method is believed to be limited. Uncertainty about the target stimuli on the part of an observer will result in a classification image that is the superposition of all possible templates for all the possible signals. In the context of a well-established uncertainty model, which pools the outputs of a large set of linear frontends with a max operator, we show analytically, in simulations, and with human experiments that the effect of intrinsic uncertainty can be limited or even eliminated by presenting a signal at a relatively high contrast in a classification-image experiment. We further argue that the subimages from different stimulus-response categories should not be combined, as is conventionally done. We show that when the signal contrast is high, the subimages from the error trials contain a clear high-contrast image that is negatively correlated with the perceptual template associated with the presented signal, relatively unaffected by uncertainty. The subimages also contain a “haze” that is of a much lower contrast and is positively correlated with the superposition of all the templates associated with the erroneous response. In the case of spatial uncertainty, we show that the spatial extent of the uncertainty can be estimated from the classification subimages. We link intrinsic uncertainty to invariance and suggest that this signal-clamped classification-image method will find general applications in uncovering the underlying representations of high-level neural and psychophysical mechanisms. PMID:16889477

  3. Personal Exposure Monitoring Wearing Protocol Compliance: An Initial Assessment of Quantitative Measurements

    EPA Science Inventory

    Personal exposure sampling provides the most accurate and representative assessment of exposure to a pollutant, but only if measures are implemented to minimize exposure misclassification and reduce confounders that may cause misinterpretation of the collected data. Poor complian...

  4. Sparse representation-based classification scheme for motor imagery-based brain-computer interface systems

    NASA Astrophysics Data System (ADS)

    Shin, Younghak; Lee, Seungchan; Lee, Junho; Lee, Heung-No

    2012-10-01

    Motor imagery (MI)-based brain-computer interface systems (BCIs) normally use a powerful spatial filtering and classification method to maximize their performance. The common spatial pattern (CSP) algorithm is a widely used spatial filtering method for MI-based BCIs. In this work, we propose a new sparse representation-based classification (SRC) scheme for MI-based BCI applications. Sensorimotor rhythms are extracted from electroencephalograms and used for classification. The proposed SRC method utilizes the frequency band power and CSP algorithm to extract features for classification. We analyzed the performance of the new method using experimental datasets. The results showed that the SRC scheme provides highly accurate classification results, which were better than those obtained using the well-known linear discriminant analysis classification method. The enhancement of the proposed method in terms of the classification accuracy was verified using cross-validation and a statistical paired t-test (p < 0.001).
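
    A hedged sketch of the SRC decision rule described here: sparse-code the test feature vector over the stacked training dictionary, then assign the class whose atoms yield the smallest reconstruction residual. Orthogonal Matching Pursuit is used as the sparse solver purely for illustration; the paper's exact optimization and CSP feature extraction are not reproduced.

    ```python
    # Sketch: sparse representation-based classification (SRC) by class-wise
    # reconstruction residuals.
    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    def src_classify(x, D, labels, n_nonzero=10):
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                        fit_intercept=False).fit(D, x)
        w = omp.coef_                                # sparse code over all atoms
        residuals = {}
        for c in np.unique(labels):
            wc = np.where(labels == c, w, 0.0)       # keep coefficients of class c
            residuals[c] = np.linalg.norm(x - D @ wc)
        return min(residuals, key=residuals.get)     # smallest residual wins

    rng = np.random.default_rng(0)
    D = rng.normal(size=(30, 120))                   # columns: training feature vectors
    labels = np.repeat([0, 1], 60)                   # left- vs right-hand imagery
    x = D[:, 5] + 0.1 * rng.normal(size=30)          # noisy copy of a class-0 atom
    print(src_classify(x, D, labels))
    ```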

  5. Contribution of various microenvironments to the daily personal exposure to ultrafine particles: Personal monitoring coupled with GPS tracking

    NASA Astrophysics Data System (ADS)

    Bekö, Gabriel; Kjeldsen, Birthe Uldahl; Olsen, Yulia; Schipperijn, Jasper; Wierzbicka, Aneta; Karottki, Dorina Gabriela; Toftum, Jørn; Loft, Steffen; Clausen, Geo

    2015-06-01

    Exposure to ultrafine particles (UFP) may have adverse health effects. Central monitoring stations do not represent the personal exposure to UFP accurately. Few studies have previously focused on personal exposure to UFP. Sixty non-smoking residents living in Copenhagen, Denmark were asked to carry a backpack equipped with a portable monitor, continuously recording particle number concentrations (PN), in order to measure the real-time individual exposure over a period of ∼48 h. A GPS logger was carried along with the particle monitor and allowed us to estimate the contribution of UFP exposure occurring in various microenvironments (residence, during active and passive transport, other indoor and outdoor environments) to the total daily exposure. On average, the fractional contribution of each microenvironment to the daily integrated personal exposure roughly corresponded to the fractions of the day the subjects spent in each microenvironment. The home environment accounted for 50% of the daily personal exposure. Indoor environments other than home or vehicles contributed ∼40%. The highest median UFP concentration was obtained during passive transport (vehicles). However, being in transit or outdoors contributed 5% or less to the daily exposure. Additionally, the subjects recorded in a diary the periods when they were at home. With this approach, 66% of the total daily exposure was attributable to the home environment. The subjects spent 28% more time at home according to the diary, compared to the GPS. These results may indicate limitations of using diaries, but also possible inaccuracy and misclassification in the GPS data.

  6. Mill profiler machines soft materials accurately

    NASA Technical Reports Server (NTRS)

    Rauschl, J. A.

    1966-01-01

    Mill profiler machines bevels, slots, and grooves in soft materials, such as styrofoam phenolic-filled cores, to any desired thickness. A single operator can accurately control cutting depths in contour or straight line work.

  7. Environmental occurrence, analysis and human exposure to the flame retardant tetrabromobisphenol-A (TBBP-A)-A review.

    PubMed

    Abou-Elwafa Abdallah, Mohamed

    2016-09-01

    TBBP-A is a high production volume chemical applied widely as a flame retardant in printed circuit boards. Recent studies have raised concern over potential harmful implications of TBBP-A exposure in humans and wildlife, leading to its classification under group 2A "Probably carcinogenic to humans" by the International Agency for Research on Cancer. This article provides a comprehensive review of the available literature on TBBP-A analysis, environmental levels and human exposure. Liquid chromatography-tandem mass spectrometry (LC-MS/MS) has been identified as the method of choice for robust, accurate and sensitive analysis of TBBP-A in different matrices. TBBP-A has been detected in almost all environmental compartments all over the world, rendering it a ubiquitous contaminant. Human exposure studies revealed dust ingestion and diet as the major pathways of TBBP-A exposure in the general population. Toddlers are likely to be more exposed than adults via accidental indoor dust ingestion. Moreover, exposure to TBBP-A may occur prenatally and via breast milk. There are no current restrictions on the production of TBBP-A in the EU or worldwide. However, more research is required to characterise human exposure to TBBP-A in and around production facilities, as well as in e-waste recycling regions. PMID:27266836

  8. Flying Insect Detection and Classification with Inexpensive Sensors

    PubMed Central

    Chen, Yanping; Why, Adena; Batista, Gustavo; Mafra-Neto, Agenor; Keogh, Eamonn

    2014-01-01

    An inexpensive, noninvasive system that could accurately classify flying insects would have important implications for entomological research, and would allow the development of many useful applications in vector and pest control for both medical and agricultural entomology. Given this, the last sixty years have seen many research efforts devoted to this task. To date, however, none of this research has had a lasting impact. In this work, we show that pseudo-acoustic optical sensors can produce superior data; that additional features, both intrinsic and extrinsic to the insect’s flight behavior, can be exploited to improve insect classification; that a Bayesian classification approach allows classification models to be learned efficiently and to be very robust to over-fitting; and that a general classification framework allows an arbitrary number of features to be incorporated easily. We demonstrate the findings with large-scale experiments that dwarf all previous works combined, as measured by the number of insects and the number of species considered. PMID:25350921
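
    The Bayesian approach the authors advocate can be illustrated with a Gaussian naive Bayes model over one feature intrinsic to flight (wingbeat frequency) and one extrinsic feature (time of day). All feature values below are synthetic assumptions, chosen only to show how additional features enter the class posterior.

    ```python
    # Sketch: Bayesian (Gaussian naive Bayes) insect classification combining
    # an intrinsic and an extrinsic flight feature.
    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(0)
    # Two species: columns are (wingbeat Hz, typical hour of activity)
    a = np.column_stack([rng.normal(400, 30, 300), rng.normal(21, 2, 300)])
    b = np.column_stack([rng.normal(550, 40, 300), rng.normal(10, 2, 300)])
    X = np.vstack([a, b])
    y = np.repeat([0, 1], 300)

    clf = GaussianNB().fit(X, y)
    print(clf.predict_proba([[480.0, 20.0]]))   # posterior over the two species
    ```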

  9. Classification of Variable Stars Using Thick-Pen Transform Method

    NASA Astrophysics Data System (ADS)

    Park, M.; Oh, H.-S.; Kim, D.

    2013-04-01

    A suitable classification of variable stars is an important task for understanding galaxy structure and evaluating stellar evolution. Most traditional approaches for classification have used various features of variable stars such as period, amplitude, color index, and Fourier coefficients. Recently, by focusing only on the light curve shape, Deb and Singh proposed a classification method based on multivariate principal component analysis (PCA). They applied the PCA method to light curves and compared its results with those obtained by Fourier coefficients. In this article, we propose a new procedure based on the thick-pen transform for obtaining accurate information on the light curve shape as well as for improving the accuracy of classification. The proposed method is applied to the data sets of variable stars from the Stellar Astrophysics and Research on Exoplanets (STARE) project and a small number of stars from Massive Compact Halo Objects (MACHO).

  11. Ensemble polarimetric SAR image classification based on contextual sparse representation

    NASA Astrophysics Data System (ADS)

    Zhang, Lamei; Wang, Xiao; Zou, Bin; Qiao, Zhijun

    2016-05-01

    Polarimetric SAR image interpretation has become one of the most interesting topics, in which the construction of reasonable and effective image classification techniques is of key importance. Sparse representation represents the data using the most succinct sparse atoms of an over-complete dictionary, and its advantages have also been confirmed in the field of PolSAR classification. However, like any ordinary classifier, it is not perfect in every respect. Ensemble learning is therefore introduced to address this issue: a number of different learners are trained, and their individual outputs are combined to obtain more accurate and stable results. This paper presents a polarimetric SAR image classification method based on the ensemble learning of contextual sparse representations to achieve optimal classification.

  12. Classifications for carcinogenesis of antitumoral drugs.

    PubMed

    Binetti, R; Costamagna, F M; Marcello, I

    2003-12-01

    The aim of this review is to support medical staff engaged in tumor therapy by providing the carcinogenicity, mutagenicity, and developmental toxicity classifications assigned to a large number of chemotherapeutic drugs by national and international agencies; it also gives their rationale and notes the few cases in which the classification varies between, for example, the European Union and the United States of America. A large list of such drugs, with producers, commercial names, CAS numbers, and chemical names, is reported. This list is subject to change given the rapid development of the field: many drugs are withdrawn and many more are introduced into clinical practice. The list is updated to the summer of 2003 and retains many drugs that have more than one use or limited use. The protection of medical personnel using antitumor chemotherapeutics may require retrospective epidemiological investigations, and obsolete drugs are important for some past exposures.

  13. Land use/cover classification in the Brazilian Amazon using satellite images

    PubMed Central

    Lu, Dengsheng; Batistella, Mateus; Li, Guiying; Moran, Emilio; Hetrick, Scott; Freitas, Corina da Costa; Dutra, Luciano Vieira; Sant’Anna, Sidnei João Siqueira

    2013-01-01

    Land use/cover classification is one of the most important applications in remote sensing. However, mapping accurate land use/cover spatial distribution is a challenge, particularly in moist tropical regions, due to the complex biophysical environment and limitations of remote sensing data per se. This paper reviews experiments related to land use/cover classification in the Brazilian Amazon for a decade. Through comprehensive analysis of the classification results, it is concluded that spatial information inherent in remote sensing data plays an essential role in improving land use/cover classification. Incorporation of suitable textural images into multispectral bands and use of segmentation-based methods are valuable ways to improve land use/cover classification, especially for high spatial resolution images. Data fusion of multi-resolution images within optical sensor data is vital for visual interpretation, but may not improve classification performance. In contrast, integration of optical and radar data did improve classification performance when the proper data fusion method was used. Of the classification algorithms available, the maximum likelihood classifier is still an important method for providing reasonably good accuracy, but nonparametric algorithms, such as classification tree analysis, have the potential to provide better results. However, they often require more time to achieve parametric optimization. Proper use of hierarchical-based methods is fundamental for developing accurate land use/cover classification, mainly from historical remotely sensed data. PMID:24353353

  15. [Malignant lymphoma: REAL classification to new WHO classification].

    PubMed

    Kikuchi, M

    2000-03-01

    The Revised European-American Classification of Lymphoid Neoplasms (REAL), proposed in 1994, represented a new paradigm for the classification of lymphomas. This classification emphasized that each disease is a distinct entity, defined by a constellation of clinical and laboratory features, i.e., morphology and genetic features, immunophenotype, clinical presentation, and course. It also noted that the site(s) of presentation are a signpost for important underlying biologic distinctions. A new WHO classification has been proposed, re-categorizing the entities of the REAL classification. WHO members planned to publish the new classification as a WHO blue book, first in 1998 and now in 2000. This paper reports mainly the points on which the new WHO classification of malignant lymphoma differs from the REAL classification.

  16. Improving Student Question Classification

    ERIC Educational Resources Information Center

    Heiner, Cecily; Zachary, Joseph L.

    2009-01-01

    Students in introductory programming classes often articulate their questions and information needs incompletely. Consequently, the automatic classification of student questions to provide automated tutorial responses is a challenging problem. This paper analyzes 411 questions from an introductory Java programming course by reducing the natural…

  17. Homographs: Classification and Identification.

    ERIC Educational Resources Information Center

    Pacak, M.; Henisz, Bozena

    1968-01-01

    Homographs are defined in this study as sets of word forms which are spelled alike but which have entirely or partially different meanings and which may have different syntactic functions (that is, they belong to more than one form class or to more than one subclass of a form class). This report deals with the classification and identification of…

  18. Shark Teeth Classification

    ERIC Educational Resources Information Center

    Brown, Tom; Creel, Sally; Lee, Velda

    2009-01-01

    On a recent autumn afternoon at Harmony Leland Elementary in Mableton, Georgia, students in a fifth-grade science class investigated the essential process of classification--the act of putting things into groups according to some common characteristics or attributes. While they may have honed these skills earlier in the week by grouping their own…

  19. Soil Classification and Treatment.

    ERIC Educational Resources Information Center

    Clemson Univ., SC. Vocational Education Media Center.

    This instructional unit was designed to enable students, primarily at the secondary level, to (1) classify soils according to current capability classifications of the Soil Conservation Service, (2) select treatments needed for a given soil class according to current recommendations provided by the Soil Conservation Service, and (3) interpret a…

  20. Equivalent Diagnostic Classification Models

    ERIC Educational Resources Information Center

    Maris, Gunter; Bechger, Timo

    2009-01-01

    Rupp and Templin (2008) do a good job at describing the ever expanding landscape of Diagnostic Classification Models (DCM). In many ways, their review article clearly points to some of the questions that need to be answered before DCMs can become part of the psychometric practitioners' toolkit. Apart from the issues mentioned in this article that…

  1. Teach Classification with Slides.

    ERIC Educational Resources Information Center

    Franks, Deborah

    1980-01-01

    Described is a creative approach to the use of contact slides as a means of student participation in a learning unit on animal classification. The finished product is a slide presentation in which students themselves have made the slides and taped the narration. (CS)

  2. Efficient Fingercode Classification

    NASA Astrophysics Data System (ADS)

    Sun, Hong-Wei; Law, Kwok-Yan; Gollmann, Dieter; Chung, Siu-Leung; Li, Jian-Bin; Sun, Jia-Guang

    In this paper, we present an efficient fingerprint classification algorithm, an essential component in many critical security application systems, e.g., systems in the e-government and e-finance domains. Fingerprint identification is one of the most important security requirements in homeland security systems such as personnel screening and anti-money laundering. The problem of fingerprint identification involves searching (matching) the fingerprint of a person against each of the fingerprints of all registered persons. To enhance performance and reliability, a common approach is to reduce the search space by first classifying the fingerprints and then performing the search in the respective class. Jain et al. proposed a fingerprint classification algorithm based on a two-stage classifier, which uses a K-nearest neighbor classifier in its first stage. The fingerprint classification algorithm is based on the fingercode representation, an encoding of fingerprints that has been demonstrated to be an effective fingerprint biometric scheme because of its ability to capture both local and global details in a fingerprint image. We enhance this approach by improving the efficiency of the K-nearest neighbor classifier for fingercode-based fingerprint classification. Our research first investigates fast search algorithms in vector quantization (VQ) and their potential application in fingerprint classification, and then proposes two efficient algorithms based on the pyramid-based search algorithms in VQ. Experimental results on DB1 of FVC 2004 demonstrate that our algorithms can outperform the full search algorithm and the original pyramid-based search algorithms in terms of computational efficiency without sacrificing accuracy.
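
    The first-stage K-nearest-neighbor step can be sketched as follows: fingercodes are fixed-length feature vectors, and the classes found among a query's K nearest training codes form the shortlist searched in the second stage. The dimensions, class count, and data here are hypothetical, and the paper's pyramid-based VQ speedups are not reproduced.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
train_codes = rng.random((500, 192))     # 500 enrolled fingercodes (192-dim, assumed)
train_labels = rng.integers(0, 5, 500)   # 5 fingerprint classes (placeholder labels)

knn = KNeighborsClassifier(n_neighbors=10).fit(train_codes, train_labels)

query = rng.random((1, 192))
# Shortlist: only the classes present among the K nearest neighbors
_, idx = knn.kneighbors(query)
candidates = np.unique(train_labels[idx[0]])
print("second-stage search restricted to classes:", candidates)
```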

  3. Refining Time-Activity Classification of Human Subjects Using the Global Positioning System

    PubMed Central

    Hu, Maogui; Li, Wei; Li, Lianfa; Houston, Douglas; Wu, Jun

    2016-01-01

    Background Detailed spatial location information is important in accurately estimating personal exposure to air pollution. The Global Positioning System (GPS) has been widely used in tracking personal paths and activities. Previous researchers have developed time-activity classification models based on GPS data, most of which were developed for specific regions. An adaptive model for time-location classification can be widely applied to air pollution studies that use GPS to track individual-level time-activity patterns. Methods Time-activity data were collected for seven days using GPS loggers and accelerometers from thirteen adult participants from Southern California under free-living conditions. We developed an automated model based on random forests to classify major time-activity patterns (i.e., indoor, outdoor-static, outdoor-walking, and in-vehicle travel). Sensitivity analysis was conducted to examine the contribution of the accelerometer data and the supplemental spatial data (i.e., roadway and tax parcel data) to the accuracy of time-activity classification. Our model was evaluated using both leave-one-fold-out and leave-one-subject-out methods. Results Maximum speeds in averaging time intervals of 7 and 5 minutes, and distance to primary highways with limited access, were found to be the three most important variables in the classification model. Leave-one-fold-out cross-validation showed an overall accuracy of 99.71%. Sensitivities varied from 84.62% (outdoor walking) to 99.90% (indoor). Specificities varied from 96.33% (indoor) to 99.98% (outdoor static). The exclusion of accelerometer and ambient light sensor variables caused a slight loss in sensitivity for outdoor walking, but little loss in overall accuracy. However, leave-one-subject-out cross-validation showed considerable loss in sensitivity for the outdoor-static and outdoor-walking conditions. Conclusions The random forests classification model can achieve high accuracy for the four major time-activity patterns.
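
    As a minimal sketch of the modeling step, assuming a feature table with the variables the abstract names (maximum speeds over 7- and 5-minute averaging windows, distance to limited-access primary highways); the synthetic data, units, and column order are illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = np.column_stack([
    rng.random(1000) * 30,    # max speed, 7-min averaging window (m/s, assumed)
    rng.random(1000) * 30,    # max speed, 5-min averaging window (m/s, assumed)
    rng.random(1000) * 5000,  # distance to limited-access primary highway (m)
])
y = rng.integers(0, 4, 1000)  # 0=indoor, 1=outdoor-static, 2=outdoor-walking, 3=in-vehicle

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # fold-based cross-validation analogue
clf.fit(X, y)
print(clf.feature_importances_)                 # variable importance ranking
```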

  4. Free classification of American English dialects by native and non-native listeners

    PubMed Central

    Clopper, Cynthia G.; Bradlow, Ann R.

    2009-01-01

    Most second language acquisition research focuses on linguistic structures, and less research has examined the acquisition of sociolinguistic patterns. The current study explored the perceptual classification of regional dialects of American English by native and non-native listeners using a free classification task. Results revealed similar classification strategies for the native and non-native listeners. However, the native listeners were more accurate overall than the non-native listeners. In addition, the non-native listeners were less able to make use of constellations of cues to accurately classify the talkers by dialect. However, the non-native listeners were able to attend to cues that were either phonologically or sociolinguistically relevant in their native language. These results suggest that non-native listeners can use information in the speech signal to classify talkers by regional dialect, but that their lack of signal-independent cultural knowledge about variation in the second language leads to less accurate classification performance. PMID:20161400

  5. Modified chemiluminescent NO analyzer accurately measures NOX

    NASA Technical Reports Server (NTRS)

    Summers, R. L.

    1978-01-01

    Installation of a molybdenum nitric oxide (NO)-to-higher oxides of nitrogen (NOx) converter in a chemiluminescent gas analyzer and use of an air purge allow accurate measurements of NOx in exhaust gases containing as much as thirty percent carbon monoxide (CO). Measurements using a conventional analyzer are highly inaccurate for NOx if as little as five percent CO is present. In the modified analyzer, molybdenum has high tolerance to CO, and the air purge substantially quenches NOx destruction. In tests, the modified chemiluminescent analyzer accurately measured NO and NOx concentrations for over 4 months with no degradation in performance.

  6. Comparison of Cramer classification between Toxtree, the OECD QSAR Toolbox and expert judgment.

    PubMed

    Bhatia, Sneha; Schultz, Terry; Roberts, David; Shen, Jie; Kromidas, Lambros; Marie Api, Anne

    2015-02-01

    The Threshold of Toxicological Concern (TTC) is a pragmatic approach in risk assessment. In the absence of data, it sets up levels of human exposure that are considered to have no appreciable risk to human health. The Cramer decision tree is used extensively to determine these exposure thresholds by categorizing non-carcinogenic chemicals into three different structural classes. Therefore, assigning an accurate Cramer class to a material is a crucial step to preserve the integrity of the risk assessment. In this study the Cramer classes of over 1000 fragrance materials across diverse chemical classes were determined using Toxtree (TT), the OECD QSAR Toolbox (TB), and expert judgment. Discordance was observed between TT and the TB. A total of 165 materials (16%) showed different results from the two programs. The overall concordance for Cramer classification between TT and expert judgment is 83%, while the concordance between the TB and expert judgment is 77%. Amines, lactones and heterocycles have the lowest percent agreement with expert judgment for TT and the TB. For amines, the expert judgment agreement is 45% for TT and 55% for the TB. For heterocycles, the expert judgment agreement is 55% for TT and the TB. For lactones, the expert judgment agreement is 56% for TT and 50% for the TB. Additional analyses were conducted to determine the concordance within various chemical classes. Critical checkpoints in the decision tree are identified. Strategies and guidance on determining the Cramer class for various chemical classes are discussed.

  7. Fast Image Texture Classification Using Decision Trees

    NASA Technical Reports Server (NTRS)

    Thompson, David R.

    2011-01-01

    Texture analysis would permit improved autonomous, onboard science data interpretation for adaptive navigation, sampling, and downlink decisions. These analyses would assist with terrain analysis and instrument placement in both macroscopic and microscopic image data products. Unfortunately, most state-of-the-art texture analysis demands computationally expensive convolutions of filters involving many floating-point operations. This makes them infeasible for radiation- hardened computers and spaceflight hardware. A new method approximates traditional texture classification of each image pixel with a fast decision-tree classifier. The classifier uses image features derived from simple filtering operations involving integer arithmetic. The texture analysis method is therefore amenable to implementation on FPGA (field-programmable gate array) hardware. Image features based on the "integral image" transform produce descriptive and efficient texture descriptors. Training the decision tree on a set of training data yields a classification scheme that produces reasonable approximations of optimal "texton" analysis at a fraction of the computational cost. A decision-tree learning algorithm employing the traditional k-means criterion of inter-cluster variance is used to learn tree structure from training data. The result is an efficient and accurate summary of surface morphology in images. This work is an evolutionary advance that unites several previous algorithms (k-means clustering, integral images, decision trees) and applies them to a new problem domain (morphology analysis for autonomous science during remote exploration). Advantages include order-of-magnitude improvements in runtime, feasibility for FPGA hardware, and significant improvements in texture classification accuracy.
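
    The core trick is the integral image: after one cumulative-sum pass, any box sum costs four lookups, so per-pixel texture features reduce to a handful of integer additions. The box-mean feature below is an illustrative stand-in for the actual feature set, which the abstract does not spell out.

```python
import numpy as np

def integral_image(img):
    """One pass of cumulative sums along both axes."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] from the integral image in O(1)."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0: total -= ii[r0 - 1, c1 - 1]
    if c0 > 0: total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0: total += ii[r0 - 1, c0 - 1]
    return total

img = np.arange(25).reshape(5, 5)
ii = integral_image(img)
assert box_sum(ii, 1, 1, 4, 4) == img[1:4, 1:4].sum()  # four lookups vs. a full sum
```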

  8. Nominated Texture Based Cervical Cancer Classification

    PubMed Central

    Mariarputham, Edwin Jayasingh; Stephen, Allwin

    2015-01-01

    Accurate classification of Pap smear images is a challenging task in medical image processing. It can be improved in two ways: one is selecting suitable, well-defined specific features, and the other is selecting the best classifier. This paper presents a nominated texture based cervical cancer (NTCC) classification system which classifies Pap smear images into any one of seven classes. This is achieved by extracting well-defined texture features and selecting the best classifier. Seven sets of texture features (24 features) are extracted, which include relative size of nucleus and cytoplasm, dynamic range and first four moments of intensities of nucleus and cytoplasm, relative displacement of nucleus within the cytoplasm, gray level cooccurrence matrix, local binary pattern histogram, Tamura features, and edge orientation histogram. Several types of support vector machine (SVM) and neural network (NN) classifiers are used for the classification. The performance of the NTCC algorithm is tested and compared to other algorithms on the public image database of Herlev University Hospital, Denmark, with 917 Pap smear images. The SVM is found to perform best for most of the classes and to give good results for the remaining ones. PMID:25649913
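
    A minimal sketch of the final classification stage, assuming the 24 texture features per image have already been extracted; the RBF kernel, the scaling step, and the synthetic data are illustrative choices, not necessarily those of the NTCC system.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.random((917, 24))    # one 24-feature texture vector per Pap smear image
y = rng.integers(0, 7, 917)  # seven classes, as in the Herlev dataset

# Scale features, then fit an RBF-kernel SVM (kernel choice is an assumption)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
```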

  9. Automated Defect Classification (ADC)

    1998-01-01

    The ADC Software System is designed to provide semiconductor defect feature analysis and defect classification capabilities. Defect classification is an important software method used by semiconductor wafer manufacturers to automate the analysis of defect data collected by a wide range of microscopy techniques in semiconductor wafer manufacturing today. These microscopies (e.g., optical bright and dark field, scanning electron microscopy, atomic force microscopy, etc.) generate images of anomalies that are induced or otherwise appear on wafer surfaces as a result of errant manufacturing processes or simple atmospheric contamination (e.g., airborne particles). This software provides methods for analyzing these images, extracting statistical features from the anomalous regions, and applying supervised classifiers to label the anomalies into user-defined categories.

  10. The classification of asteroids

    NASA Astrophysics Data System (ADS)

    Davies, J. K.; Eaton, N.; Green, S. F.; McCheyne, R. S.; Meadows, A. J.

    A numerical taxonomy of asteroids is proposed and illustrated for a sample of 82 well-characterized asteroids in the TRIAD file described by Zellner (1979). The growth of different classification schemes, reflecting the rapid increase in knowledge of the physical and orbital properties of asteroids, is traced since about 1970. The proposed system is adapted from a microbiological taxonomy program and uses only physical parameters: albedo, red/blue ratio, visible-spectrum curvature, Fe(2+) absorption near 0.9 microns, U-B, and B-V. A dendrogram is presented and interpreted, and the scheme is found to agree reasonably well with conventional classifications, to allow the incorporation of new kinds of data, and to facilitate the identification of objects with particular characteristics to plan future observations.

  11. Tree Classification Software

    NASA Technical Reports Server (NTRS)

    Buntine, Wray

    1993-01-01

    This paper introduces the IND Tree Package to prospective users. IND does supervised learning using classification trees. This learning task is a basic tool used in the development of diagnosis, monitoring and expert systems. The IND Tree Package was developed as part of a NASA project to semi-automate the development of data analysis and modelling algorithms using artificial intelligence techniques. The IND Tree Package integrates features from CART and C4 with newer Bayesian and minimum encoding methods for growing classification trees and graphs. The IND Tree Package also provides an experimental control suite on top. The newer features give improved probability estimates often required in diagnostic and screening tasks. The package comes with a manual, Unix 'man' entries, and a guide to tree methods and research. The IND Tree Package is implemented in C under Unix and was beta-tested at university and commercial research laboratories in the United States.

  12. Automated Defect Classification (ADC)

    SciTech Connect

    1998-01-01

    The ADC Software System is designed to provide semiconductor defect feature analysis and defect classification capabilities. Defect classification is an important software method used by semiconductor wafer manufacturers to automate the analysis of defect data collected by a wide range of microscopy techniques in semiconductor wafer manufacturing today. These microscopies (e.g., optical bright and dark field, scanning electron microscopy, atomic force microscopy, etc.) generate images of anomalies that are induced or otherwise appear on wafer surfaces as a result of errant manufacturing processes or simple atmospheric contamination (e.g., airborne particles). This software provides methods for analyzing these images, extracting statistical features from the anomalous regions, and applying supervised classifiers to label the anomalies into user-defined categories.

  13. SLC classification: an update.

    PubMed

    Schlessinger, A; Yee, S W; Sali, A; Giacomini, K M

    2013-07-01

    The 386 human SLC superfamily members are diverse in sequence, structure, and function. Using sequence similarity, we previously classified the SLC superfamily members and identified relationships among families. With the recent determination of new SLC structures and identification of previously unknown human SLC families, an update of our previous classification is timely. Here, we comprehensively compare the SLC sequences and structures and discuss the applicability of structure-based ligand discovery to key SLC members.

  14. Granular loess classification based

    SciTech Connect

    Browzin, B.S.

    1985-05-01

    This paper discusses how loess might be identified by two index properties: the granulometric composition and the dry unit weight. These two indices are necessary but not always sufficient for identification of loess. On the basis of analyses of samples from three continents, it was concluded that the 0.01-0.5-mm fraction deserves the name loessial fraction. Based on the loessial fraction concept, a granulometric classification of loess is proposed. A triangular chart is used to classify loess.

  15. Can Appraisers Rate Work Performance Accurately?

    ERIC Educational Resources Information Center

    Hedge, Jerry W.; Laue, Frances J.

    The ability of individuals to make accurate judgments about others is examined and literature on this subject is reviewed. A wide variety of situational factors affects the appraisal of performance. It is generally accepted that the purpose of the appraisal influences the accuracy of the appraiser. The instrumentation, or tools, available to the…

  16. Accurate pointing of tungsten welding electrodes

    NASA Technical Reports Server (NTRS)

    Ziegelmeier, P.

    1971-01-01

    Thoriated-tungsten is pointed accurately and quickly by using sodium nitrite. Point produced is smooth and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces time and cost of preparing tungsten electrodes.

  17. Classification of mental disorders*

    PubMed Central

    Stengel, E.

    1959-01-01

    One of the fundamental difficulties in devising a classification of mental disorders is the lack of agreement among psychiatrists regarding the concepts upon which it should be based: diagnoses can rarely be verified objectively and the same or similar conditions are described under a confusing variety of names. This situation militates against the ready exchange of ideas and experiences and hampers progress. As a first step towards remedying this state of affairs, the author of the article below has undertaken a critical survey of existing classifications. He shows how some of the difficulties created by lack of knowledge regarding pathology and etiology may be overcome by the use of “operational definitions” and outlines the basic principles on which he believes a generally acceptable international classification might be constructed. If this can be done it should lead to a greater measure of agreement regarding the value of specific treatments for mental disorders and greatly facilitate a broad epidemiological approach to psychiatric research. PMID:13834299

  18. Seismic event classification system

    DOEpatents

    Dowla, Farid U.; Jarpe, Stephen P.; Maurer, William

    1994-01-01

    In the computer interpretation of seismic data, the critical first step is to identify the general class of an unknown event. For example, the classification might be: teleseismic, regional, local, vehicular, or noise. Self-organizing neural networks (SONNs) can be used for classifying such events. Both Kohonen and Adaptive Resonance Theory (ART) SONNs are useful for this purpose. Given the detection of a seismic event and the corresponding signal, computation is made of: the time-frequency distribution, its binary representation, and finally a shift-invariant representation, which is the magnitude of the two-dimensional Fourier transform (2-D FFT) of the binary time-frequency distribution. This pre-processed input is fed into the SONNs. These neural networks are able to group events that look similar. The ART SONN has an advantage in classifying the event because the types of cluster groups do not need to be pre-defined. The results from the SONNs together with an expert seismologist's classification are then used to derive event classification probabilities.
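
    The preprocessing chain described above lends itself to a compact sketch: compute a time-frequency distribution, binarize it, and take the magnitude of its 2-D FFT, which is invariant to shifts of the pattern in time and frequency. The spectrogram parameters and the median threshold below are illustrative assumptions, not taken from the patent.

```python
import numpy as np
from scipy.signal import spectrogram

def shift_invariant_rep(signal, fs):
    """Time-frequency distribution -> binary map -> |2-D FFT| (shift invariant)."""
    _, _, tfd = spectrogram(signal, fs=fs, nperseg=128)
    binary = (tfd > np.median(tfd)).astype(float)  # binary time-frequency map
    return np.abs(np.fft.fft2(binary))             # magnitude discards shift phase

rng = np.random.default_rng(0)
rep = shift_invariant_rep(rng.standard_normal(4096), fs=100.0)  # placeholder trace
```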

  19. Histologic classification of gliomas.

    PubMed

    Perry, Arie; Wesseling, Pieter

    2016-01-01

    Gliomas form a heterogeneous group of tumors of the central nervous system (CNS) and are traditionally classified based on histologic type and malignancy grade. Most gliomas, the diffuse gliomas, show extensive infiltration in the CNS parenchyma. Diffuse gliomas can be further typed as astrocytic, oligodendroglial, or rare mixed oligodendroglial-astrocytic of World Health Organization (WHO) grade II (low grade), III (anaplastic), or IV (glioblastoma). Other gliomas generally have a more circumscribed growth pattern, with pilocytic astrocytomas (WHO grade I) and ependymal tumors (WHO grade I, II, or III) as the most frequent representatives. This chapter provides an overview of the histology of all glial neoplasms listed in the WHO 2016 classification, including the less frequent "nondiffuse" gliomas and mixed neuronal-glial tumors. For multiple decades the histologic diagnosis of these tumors formed a useful basis for assessment of prognosis and therapeutic management. However, it is now fully clear that information on the molecular underpinnings often allows for a more robust classification of (glial) neoplasms. Indeed, in the WHO 2016 classification, histologic and molecular findings are integrated in the definition of several gliomas. As such, this chapter and Chapter 6 are highly interrelated and neither should be considered in isolation. PMID:26948349

  20. Seismic event classification system

    DOEpatents

    Dowla, F.U.; Jarpe, S.P.; Maurer, W.

    1994-12-13

    In the computer interpretation of seismic data, the critical first step is to identify the general class of an unknown event. For example, the classification might be: teleseismic, regional, local, vehicular, or noise. Self-organizing neural networks (SONNs) can be used for classifying such events. Both Kohonen and Adaptive Resonance Theory (ART) SONNs are useful for this purpose. Given the detection of a seismic event and the corresponding signal, computation is made of: the time-frequency distribution, its binary representation, and finally a shift-invariant representation, which is the magnitude of the two-dimensional Fourier transform (2-D FFT) of the binary time-frequency distribution. This pre-processed input is fed into the SONNs. These neural networks are able to group events that look similar. The ART SONN has an advantage in classifying the event because the types of cluster groups do not need to be pre-defined. The results from the SONNs together with an expert seismologist's classification are then used to derive event classification probabilities. 21 figures.

  1. Exposure to captan in fruit growing.

    PubMed

    de Cock, J; Heederik, D; Kromhout, H; Boleij, J S; Hoek, F; Wegh, H; Tjoe Ny, E

    1998-03-01

    This study characterized occupational exposure to pesticides in fruit growing in The Netherlands in order to assess determinants of exposure. Large-scale exposure surveys were carried out during application of pesticides and during reentry activities. Data on contamination inside the fruit growers' homes were obtained, and total potential exposure for the fruit grower and his family during the growing and harvesting season was estimated. Repeated measurements on the same subject were collected to study components of exposure variability. The relative contributions of the respiratory route and of different skin sites to total exposure were assessed. Captan was used as a marker for exposure. Inhalable dust exposure was measured with a personal monitor, and potential dermal exposure with skin pads and hand rinsing. Dislodgeable foliar residue was measured by taking leaf punches. For respiratory exposure and potential dermal exposure, differences were observed between several tasks. Workers were categorized according to tasks performed, depending on the exposure measure(s) (e.g., hands, forehead, inhalable dust) considered relevant for a specific study purpose. In general, within-worker variability of all exposure measurements was larger than between-worker variability. Variability in dermal exposure at the same body location was small relative to variability between different body locations. Differences in total exposure, including exposure inside the home, between the fruit grower and the son were small. Exposure of the wife was two to three times lower than for the fruit grower and the son. As exposure per unit of time was of the same order of magnitude for different tasks, individual time spent on these tasks is crucial for estimating total potential exposure. Repeated measurements are necessary to estimate individual exposure accurately because of the large within-worker variability.

  2. Classification images predict absolute efficiency.

    PubMed

    Murray, Richard F; Bennett, Patrick J; Sekuler, Allison B

    2005-02-24

    How well do classification images characterize human observers' strategies in perceptual tasks? We show mathematically that from the classification image of a noisy linear observer, it is possible to recover the observer's absolute efficiency. If we could similarly predict human observers' performance from their classification images, this would suggest that the linear model that underlies use of the classification image method is adequate over the small range of stimuli typically encountered in a classification image experiment, and that a classification image captures most important aspects of human observers' performance over this range. In a contrast discrimination task and in a shape discrimination task, we found that observers' absolute efficiencies were generally well predicted by their classification images, although consistently slightly (approximately 13%) higher than predicted. We consider whether a number of plausible nonlinearities can account for the slight underprediction, and of these we find that only a form of phase uncertainty can account for the discrepancy.
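
    For reference, absolute efficiency is conventionally defined by comparing the observer's sensitivity with that of the ideal observer for the same task; the formulation below is supplied for context as a standard definition, not quoted from the paper.

```latex
% Absolute efficiency: squared ratio of observer to ideal sensitivity,
% where d'_{obs} and d'_{ideal} are the discriminabilities of the human
% (or model) observer and the ideal observer on the same stimuli and noise.
F = \left( \frac{d'_{\mathrm{obs}}}{d'_{\mathrm{ideal}}} \right)^{2}
```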

  3. Spatial Mutual Information Based Hyperspectral Band Selection for Classification

    PubMed Central

    2015-01-01

    The amount of information involved in hyperspectral imaging is large. Hyperspectral band selection is a popular method for reducing dimensionality. Several information-based measures, such as mutual information, have been proposed to reduce information redundancy among spectral bands. Unfortunately, mutual information does not take into account the spatial dependency between adjacent pixels in images, thus reducing its robustness as a similarity measure. In this paper, we propose a new band selection method based on spatial mutual information. As the validation criterion, a supervised classification method using a support vector machine (SVM) is used. Experimental results on the classification of hyperspectral datasets show that the proposed method can achieve more accurate results. PMID:25918742
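
    For reference, the plain (non-spatial) mutual information between two bands that the proposed method extends can be estimated from a joint histogram, as sketched below; the bin count is an assumption, and the spatial weighting of the proposed measure is not reproduced.

```python
import numpy as np

def mutual_information(band_a, band_b, bins=64):
    """Histogram estimate of I(A;B) in nats between two image bands."""
    joint, _, _ = np.histogram2d(band_a.ravel(), band_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginal of band A
    py = pxy.sum(axis=0, keepdims=True)       # marginal of band B
    nz = pxy > 0                              # avoid log(0)
    return (pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum()

rng = np.random.default_rng(0)
a = rng.random((100, 100))
print(mutual_information(a, a))                      # high: band vs. itself
print(mutual_information(a, rng.random((100, 100)))) # near zero: independent bands
```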

  4. A new classification of ophthalmic disorders with standardized ophthalmic abbreviations.

    PubMed

    Spencer, L M; Spencer, G R

    1990-03-01

    A classification of ocular disorders has been developed that is both comprehensive and easy to use. Each disorder was assigned a unique abbreviation and cross referenced to the International Classification of Diseases, 9th edition (ICD-9). Ophthalmic procedures, medications, and other terms were similarly standardized and abbreviated. The result is a system of ophthalmic terminology that improves the quality of the medical record, facilitates ICD-9 coding, and makes computer data entry faster and more accurate. The system is published as a standard text with companion handbook. A computer program that uses the system also has been developed. PMID:2336279

  5. Rockfall exposures in Montserrat mountain

    NASA Astrophysics Data System (ADS)

    Fontquerni Gorchs, Sara; Vilaplana Fernández, Joan Manuel; Guinau Sellés, Marta; Jesús Royán Cordero, Manuel

    2015-04-01

    This study presents a methodology developed to analyze the rockfall exposure level at a 1:25,000 scale, and the results obtained by applying it to a large part of the Montaña de Montserrat Natural Park for vehicles, both with and without considering their occupants. This proposal is part of an ongoing study that analyzes rockfall risk exposure in more depth at different scales and in different natural and social contexts. The methodology evaluates the rockfall exposure level as the product of the frequency of occurrence of the event and an exposure function of the vulnerable element, published at a 1:25,000 scale although the working scale of the study was 1:10,000. The methodology for calculating the exposure level comprises six phases: 1- identification, classification, and inventory of every element potentially at risk; 2- zoning of the frequency of occurrence of the event in the study area; 3- design of the exposure function for each studied element; 4- computation of the exposure index, defined as the product of the frequency of occurrence and the exposure function of the vulnerable element, through GIS analysis with ArcGIS software (ESRI); 5- derivation of the exposure level by grouping the numerical values of the exposure index into categories; 6- production of the exposure zoning map. The types of vulnerable elements considered in the study are vehicles in motion, people in vehicles in motion, people on paths, permanent elements, and people in buildings. Each typology contains all elements with the same characteristics, and an exposure function has been designed for each of them. For the exposure calculation, two groups of elements were considered: first the elements with no people involved, and then the same elements with people involved. This is a first comprehensive and synthetic work on rockfall exposure in the Montserrat

  6. ASSESSING EXPOSURE CLASSIFICATION IN THE AGRICULTURAL HEALTH STUDY

    EPA Science Inventory

    The Agricultural Health Study (AHS) is a prospective epidemiologic study examining cancer and non-cancer health outcomes for over 55,000 pesticide applicators and 34,000 spouses in Iowa and North Carolina. Questionnaires were used to collect information about the use of specific ...

  7. An accurate method of extracting fat droplets in liver images for quantitative evaluation

    NASA Astrophysics Data System (ADS)

    Ishikawa, Masahiro; Kobayashi, Naoki; Komagata, Hideki; Shinoda, Kazuma; Yamaguchi, Masahiro; Abe, Tokiya; Hashiguchi, Akinori; Sakamoto, Michiie

    2015-03-01

    Steatosis in liver pathological tissue images is a promising indicator of nonalcoholic fatty liver disease (NAFLD) and the possible risk of hepatocellular carcinoma (HCC). The resulting values are also important for ensuring the automatic and accurate classification of HCC images, because the existence of many fat droplets is likely to create errors in quantifying the morphological features used in the process. In this study we propose a method that can automatically detect and exclude regions with many fat droplets by using the feature values of colors, shapes, and the arrangement of cell nuclei. We implement the method and confirm that it can accurately detect fat droplets and quantify the fat droplet ratio of actual images. This investigation also clarifies the effective characteristics that contribute to accurate detection.

  8. Comparisons of neural networks to standard techniques for image classification and correlation

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1994-01-01

    Neural network techniques for multispectral image classification and spatial pattern detection are compared to the standard techniques of maximum-likelihood classification and spatial correlation. The neural network produced a more accurate classification of a Landsat scene of Tucson, Arizona, than maximum-likelihood. Some of the errors in the maximum-likelihood classification are illustrated using decision-region and class probability density plots. As expected, the main drawback to the neural network method is the long time required for the training stage. The network was trained using several different hidden layer sizes to optimize both the classification accuracy and training speed, and it was found that one node per class was optimal. The performance improved when 3x3 local windows of image data were entered into the net. This modification introduces texture into the classification without explicit calculation of a texture measure. Larger windows were successfully used for the detection of spatial features in Landsat and Magellan synthetic aperture radar imagery.
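
    The 3x3-window trick above can be sketched directly: each pixel's feature vector is its full 3x3 neighborhood across all bands, so local texture enters the network implicitly. The helper below, the band count, and the one-hidden-node-per-class layer size (which the abstract reports as optimal) frame a hypothetical example; the data are synthetic placeholders.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view
from sklearn.neural_network import MLPClassifier

def window_features(image):
    """(rows, cols, bands) -> (rows-2, cols-2, 9*bands): 3x3 neighborhood per pixel."""
    wins = sliding_window_view(image, (3, 3), axis=(0, 1))
    return wins.reshape(wins.shape[0], wins.shape[1], -1)

rng = np.random.default_rng(0)
img = rng.random((64, 64, 6))                    # 6-band placeholder scene
feats = window_features(img)
X = feats.reshape(-1, feats.shape[-1])
y = rng.integers(0, 4, X.shape[0])               # placeholder labels, 4 classes

clf = MLPClassifier(hidden_layer_sizes=(4,), max_iter=50)  # one hidden node per class
clf.fit(X, y)
```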

  9. Improved Hierarchical Optimization-Based Classification of Hyperspectral Images Using Shape Analysis

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.

    2012-01-01

    A new spectral-spatial method for classification of hyperspectral images is proposed. The HSegClas method is based on the integration of probabilistic classification and shape analysis within the hierarchical step-wise optimization algorithm. First, probabilistic support vector machines classification is applied. Then, at each iteration the two neighboring regions with the smallest Dissimilarity Criterion (DC) are merged, and classification probabilities are recomputed. An important contribution of this work is the estimation of the DC between regions as a function of statistical, classification, and geometrical (area and rectangularity) features. Experimental results are presented on a 102-band ROSIS image of the Center of Pavia, Italy. The developed approach yields more accurate classification results when compared to previously proposed methods.

  10. Multispectral rock-type separation and classification.

    SciTech Connect

    Moya, Mary M.; Fogler, Robert Joseph; Paskaleva, Biliana; Hayat, Majeed M.

    2004-06-01

    This paper explores the possibility of separating and classifying remotely-sensed multispectral data from rocks and minerals into seven geological rock-type groups. These groups are extracted from the general categories of metamorphic, igneous and sedimentary rocks. The study is performed under ideal conditions for which the data is generated according to laboratory hyperspectral data for the members, which are, in turn, passed through the Multi-spectral Thermal Imager (MTI) filters, yielding 15 bands. The main challenge in separability is the small size of the training data sets, which initially did not permit direct application of Bayesian decision theory. To enable Bayesian classification, the original training data is linearly perturbed with the addition of minerals, vegetation, soil, water and other valid impurities. As a result, the size of the training data is significantly increased and accurate estimates of the covariance matrices are achieved. In addition, a set of reduced (five) linearly-extracted canonical features that are optimal in providing the most important information about the data is determined. An alternative nonlinear feature-selection method is also employed based on spectral indices comprising a small subset of all possible ratios between bands. By applying three optimization strategies, combinations of two and three ratios are found that provide reliable separability and classification between all seven groups according to the Bhattacharyya distance. To set a benchmark to which the MTI capability in rock classification can be compared, an optimization strategy is performed for the selection of optimal multispectral filters, other than the MTI filters, and an improvement in classification is predicted.
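
    The Bhattacharyya distance used above as the separability score has a closed form for two Gaussian classes with means m1, m2 and covariances S1, S2; a small sketch follows, with 15-band toy inputs as illustrative placeholders.

```python
import numpy as np

def bhattacharyya(m1, S1, m2, S2):
    """Bhattacharyya distance between two Gaussian class models."""
    S = (S1 + S2) / 2.0                      # pooled covariance
    diff = m1 - m2
    term1 = diff @ np.linalg.solve(S, diff) / 8.0
    term2 = 0.5 * np.log(np.linalg.det(S) /
                         np.sqrt(np.linalg.det(S1) * np.linalg.det(S2)))
    return term1 + term2

m1, m2 = np.zeros(15), np.full(15, 0.1)      # two classes over 15 MTI-like bands
S1, S2 = np.eye(15) * 0.01, np.eye(15) * 0.02
print(bhattacharyya(m1, S1, m2, S2))         # larger distance = better separability
```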

  11. IRIS COLOUR CLASSIFICATION SCALES--THEN AND NOW.

    PubMed

    Grigore, Mariana; Avram, Alina

    2015-01-01

    Eye colour is one of the most obvious phenotypic traits of an individual. Since the first documented classification scale, developed in 1843, there have been numerous attempts to classify iris colour. In past centuries, iris colour classification scales have had various colour categories and mostly relied on comparison of an individual's eye with painted glass eyes. Once photography techniques were refined, standard iris photographs replaced painted eyes, but this did not solve the problem of painted/printed colour variability over time. Early clinical scales were easy to use, but lacked objectivity and were not standardised or statistically tested for reproducibility. The era of automated iris colour classification systems came with technological development. Spectrophotometry, digital analysis of high-resolution iris images, hyperspectral analysis of the real human iris, and dedicated iris colour analysis software have all accomplished objective, accurate iris colour classification, but are quite expensive and limited in use to research environments. Iris colour classification systems have evolved continuously due to their use in a wide range of studies, especially in the fields of anthropology, epidemiology and genetics. Despite the wide range of existing scales, up until the present there has been no generally accepted iris colour classification scale.

  12. Feedback about More Accurate versus Less Accurate Trials: Differential Effects on Self-Confidence and Activation

    ERIC Educational Resources Information Center

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-01-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…

  13. [Guidelines for hygienic classification of learning technologies].

    PubMed

    Kuchma, V R; Teksheva, L M; Milushkina, O Iu

    2008-01-01

    Optimization of the educational environment under present-day conditions invariably involves the use of learning technologies (LTW). To organize and regulate the academic process with regard to the safety of applied LTW, a classification of them is needed. Existing attempts to structure LTW disregard hygienically significant aspects. The task of the present study was to substantiate an LTW safety criterion ensuring a universal approach to developing regulations. This criterion may be the exposure intensity, determined by the form of organization of education and its pattern, by the procedure of information presentation, and by the age-related peculiarities of the pupil, i.e., by the actual load, defined as the product of exposure intensity and exposure time. The hygienic classification of LTW may be used to evaluate the negative effect of their use in an educational process on the health status of children and adolescents, to regulate hazardous factors and training modes, and to design and introduce new learning complexes. The structuring of the LTW system makes it possible to define possible deleterious actions and ways of preventing them on the basis of strictly established regulations.

  14. Feedback about more accurate versus less accurate trials: differential effects on self-confidence and activation.

    PubMed

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-06-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On day 1, participants performed a golf putting task under one of two conditions: one group received feedback on the most accurate trials, whereas another group received feedback on the least accurate trials. On day 2, participants completed an anxiety questionnaire and performed a retention test. Skin conductance level, as a measure of arousal, was determined. The results indicated that feedback about more accurate trials resulted in more effective learning as well as increased self-confidence. Also, activation was a predictor of performance. PMID:22808705

  15. Novel Cortical Thickness Pattern for Accurate Detection of Alzheimer's Disease.

    PubMed

    Zheng, Weihao; Yao, Zhijun; Hu, Bin; Gao, Xiang; Cai, Hanshu; Moore, Philip

    2015-01-01

    Brain networks occupy an important position in representing abnormalities in Alzheimer's disease (AD) and mild cognitive impairment (MCI). Currently, most studies have focused only on morphological features of regions of interest without exploring the interregional alterations. In order to investigate the potential discriminative power of a morphological network in AD diagnosis and to provide supportive evidence on the feasibility of an individual structural network study, we propose a novel approach of extracting correlative features from magnetic resonance imaging, which consists of a two-step approach for constructing an individual thickness network with low computational complexity. First, a multi-distance combination is utilized for accurate evaluation of between-region dissimilarity; the dissimilarity is then transformed to connectivity via a correlation function. An evaluation of the proposed approach has been conducted with 189 normal controls, 198 MCI subjects, and 163 AD patients using machine learning techniques. Results show that the observed correlative feature yields a significant improvement in classification performance compared with cortical thickness, with an accuracy of 89.88% and an area under the receiver operating characteristic curve of 0.9588. We further improved the performance by integrating both thickness and apolipoprotein E ɛ4 allele information with the correlative features. The accuracies achieved are 92.11% and 79.37% in separating AD from normal controls and AD converters from non-converters, respectively. Differences between using diverse distance measurements and various correlation transformation functions are also discussed to explore an optimal way for network establishment. PMID:26444768

  16. Gabor feature-based registration: accurate alignment without fiducial markers

    NASA Astrophysics Data System (ADS)

    Parra, Nestor A.; Parra, Carlos A.

    2007-03-01

    Accurate registration of diagnosis and treatment images is a critical factor for the success of radiotherapy. This study presents a feature-based image registration algorithm that uses a branch-and-bound method to search the space of possible transformations, as well as a Hausdorff distance metric to evaluate their quality. This distance is computed in the space of responses to a circular Gabor filter: for each point of interest in both reference and subject images, a vector of complex responses to different Gabor kernels is computed. Each kernel is generated using different frequencies and variances of the Gabor function, which determines corresponding regions in the images to be registered by virtue of its rotation-invariance characteristics. Responses to circular Gabor filters have also been reported in the literature as a successful tool for image classification; in this particular application we utilize them for patient positioning in cranial radiotherapy. For test purposes, we use 2D portal images acquired with an electronic portal imaging device (EPID). Our method achieves EPID-EPID registration errors under 0.2 mm for translations and 0.05 deg for rotations (subpixel accuracy). Fiducial-marker registration is used as the ground truth for comparison. Registration times average 2.70 seconds based on 1400 feature points using a 1.4 GHz processor.
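
    For context, a circular Gabor kernel of the kind referred to above is commonly built as a Gaussian envelope modulated by a complex sinusoid of the radial distance, which is what makes the response rotation invariant. The sketch below is an assumption-laden illustration: the kernel size, frequencies, and variances are placeholders for the paper's kernel bank, not values taken from it.

```python
import numpy as np

def circular_gabor(size=31, freq=0.1, sigma=6.0):
    """Gaussian envelope times a complex sinusoid of radial distance (assumed form)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r = np.hypot(x, y)
    return np.exp(-r**2 / (2 * sigma**2)) * np.exp(2j * np.pi * freq * r)

# One response vector per point of interest: convolve the image with several
# (freq, sigma) kernels and record the complex responses at that point.
kernels = [circular_gabor(freq=f, sigma=s) for f in (0.05, 0.1, 0.2)
                                           for s in (4.0, 6.0)]
```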

  17. Two highly accurate methods for pitch calibration

    NASA Astrophysics Data System (ADS)

    Kniel, K.; Härtig, F.; Osawa, S.; Sato, O.

    2009-11-01

    Among profile, helix, and tooth thickness, pitch is one of the most important parameters in the evaluation of involute gear measurements. In principle, coordinate measuring machines (CMMs) and CNC-controlled gear measuring machines as a variant of the CMM are suited for these kinds of gear measurements. Now the Japan National Institute of Advanced Industrial Science and Technology (NMIJ/AIST) and the German national metrology institute, the Physikalisch-Technische Bundesanstalt (PTB), have each independently developed highly accurate pitch calibration methods applicable to CMMs or gear measuring machines. Both calibration methods are based on the so-called closure technique, which allows the separation of the systematic errors of the measurement device from the errors of the gear. For the verification of both calibration methods, NMIJ/AIST and PTB performed measurements on a specially designed pitch artifact. The comparison of the results shows that both methods can be used for highly accurate calibrations of pitch standards.

  18. Accurate guitar tuning by cochlear implant musicians.

    PubMed

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081

  19. Accurate Guitar Tuning by Cochlear Implant Musicians

    PubMed Central

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081

  20. Preparation and accurate measurement of pure ozone.

    PubMed

    Janssen, Christof; Simone, Daniela; Guinet, Mickaël

    2011-03-01

    Preparation of high purity ozone as well as precise and accurate measurement of its pressure are metrological requirements that are difficult to meet due to ozone decomposition occurring in pressure sensors. The most stable and precise transducer heads are heated and, therefore, prone to accelerated ozone decomposition, limiting measurement accuracy and compromising purity. Here, we describe a vacuum system and a method for ozone production, suitable to accurately determine the pressure of pure ozone by avoiding the problem of decomposition. We use an inert gas in a particularly designed buffer volume and can thus achieve high measurement accuracy and negligible degradation of ozone with purities of 99.8% or better. The high degree of purity is ensured by comprehensive compositional analyses of ozone samples. The method may also be applied to other reactive gases. PMID:21456766

  1. Accurate guitar tuning by cochlear implant musicians.

    PubMed

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.

  2. Accurate modeling of parallel scientific computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Townsend, James C.

    1988-01-01

    Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.

  3. Line gas sampling system ensures accurate analysis

    SciTech Connect

    Not Available

    1992-06-01

    Tremendous changes in the natural gas business have resulted in new approaches to the way natural gas is measured. Electronic flow measurement has altered the business forever, with developments in instrumentation and a new sensitivity to the importance of proper natural gas sampling techniques. This paper reports that YZ Industries Inc., Snyder, Texas, combined its 40 years of sampling experience with the latest in microprocessor-based technology to develop the KynaPak 2000 series, the first on-line natural gas sampling system that is both compact and extremely accurate. For accurate accounting, the composition of the sampled gas must be representative of the whole and related to flow. When this is the case, measurement and sampling techniques are effectively married: gas volumes are accurately accounted for and adjustments to composition can be made.

  4. Accurate mask model for advanced nodes

    NASA Astrophysics Data System (ADS)

    Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Ndiaye, El Hadji Omar; Mishra, Kushlendra; Paninjath, Sankaranarayanan; Bork, Ingo; Buck, Peter; Toublan, Olivier; Schanen, Isabelle

    2014-07-01

    Standard OPC models consist of a physical optical model and an empirical resist model. The resist model compensates for the optical model's imprecision on top of modeling resist development. The optical model imprecision may result from mask topography effects and real mask information, including mask e-beam writing and mask process contributions. For advanced technology nodes, significant progress has been made in modeling mask topography to improve optical model accuracy. However, mask information is difficult to decorrelate from the standard OPC model. Our goal is to establish an accurate mask model through a dedicated calibration exercise. In this paper, we present a flow to calibrate an accurate mask model and enable its implementation. The study covers the different effects that should be embedded in the mask model as well as the experiments required to model them.

  5. New classification of natural breeding habitats for Neotropical anophelines in the Yanomami Indian Reserve, Amazon Region, Brazil and a new larval sampling methodology.

    PubMed

    Sánchez-Ribas, Jordi; Oliveira-Ferreira, Joseli; Rosa-Freitas, Maria Goreti; Trilla, Lluís; Silva-do-Nascimento, Teresa Fernandes

    2015-09-01

    Here we present the first in a series of articles about the ecology of immature stages of anophelines in the Brazilian Yanomami area. We propose a new larval habitat classification and a new larval sampling methodology. We also report some preliminary results illustrating the applicability of the methodology based on data collected in the Brazilian Amazon rainforest in a longitudinal study of two remote Yanomami communities, Parafuri and Toototobi. In these areas, we mapped and classified 112 natural breeding habitats located in low-order river systems based on their association with river flood pulses, seasonality and exposure to sun. Our classification rendered seven types of larval habitats: lakes associated with the river, which are subdivided into oxbow lakes and nonoxbow lakes, flooded areas associated with the river, flooded areas not associated with the river, rainfall pools, small forest streams, medium forest streams and rivers. The methodology for larval sampling was based on the accurate quantification of the effective breeding area, taking into account the area of the perimeter and subtypes of microenvironments present per larval habitat type using a laser range finder and a small portable inflatable boat. The new classification and new sampling methodology proposed herein may be useful in vector control programs. PMID:26517655

  6. New classification of natural breeding habitats for Neotropical anophelines in the Yanomami Indian Reserve, Amazon Region, Brazil and a new larval sampling methodology

    PubMed Central

    Sánchez-Ribas, Jordi; Oliveira-Ferreira, Joseli; Rosa-Freitas, Maria Goreti; Trilla, Lluís; Silva-do-Nascimento, Teresa Fernandes

    2015-01-01

    Here we present the first in a series of articles about the ecology of immature stages of anophelines in the Brazilian Yanomami area. We propose a new larval habitat classification and a new larval sampling methodology. We also report some preliminary results illustrating the applicability of the methodology based on data collected in the Brazilian Amazon rainforest in a longitudinal study of two remote Yanomami communities, Parafuri and Toototobi. In these areas, we mapped and classified 112 natural breeding habitats located in low-order river systems based on their association with river flood pulses, seasonality and exposure to sun. Our classification rendered seven types of larval habitats: lakes associated with the river, which are subdivided into oxbow lakes and nonoxbow lakes, flooded areas associated with the river, flooded areas not associated with the river, rainfall pools, small forest streams, medium forest streams and rivers. The methodology for larval sampling was based on the accurate quantification of the effective breeding area, taking into account the area of the perimeter and subtypes of microenvironments present per larval habitat type using a laser range finder and a small portable inflatable boat. The new classification and new sampling methodology proposed herein may be useful in vector control programs. PMID:26517655

  8. Proposed ICDRG Classification of the Clinical Presentation of Contact Allergy.

    PubMed

    Pongpairoj, Korbkarn; Ale, Iris; Andersen, Klaus Ejner; Bruze, Magnus; Diepgen, Thomas L; Elsner, Peter U; Goh, Chee Leok; Goossens, An; Jerajani, Hemangi; Lachapelle, Jean Marie; Lee, Jun Young; Maibach, Howard I; Matsunaga, Kayoko; Nixon, Rosemary; Puangpet, Pailin; Sasseville, Denis; Thaiwat, Supitchaya; McFadden, John P

    2016-01-01

    The International Contact Dermatitis Research Group proposes a classification for the clinical presentation of contact allergy. The classification is based primarily on the mode of clinical presentation. The categories are direct exposure/contact dermatitis, mimicking or exacerbation of preexisting eczema, multifactorial dermatitis including allergic contact dermatitis, by proxy, mimicking angioedema, airborne contact dermatitis, photo-induced contact dermatitis, systemic contact dermatitis, noneczematous contact dermatitis, contact urticaria, protein contact dermatitis, respiratory/mucosal symptoms, oral contact dermatitis, erythroderma/exfoliative dermatitis, minor forms of presentation, and extracutaneous manifestations. PMID:27608064

  9. Accurate maser positions for MALT-45

    NASA Astrophysics Data System (ADS)

    Jordan, Christopher; Bains, Indra; Voronkov, Maxim; Lo, Nadia; Jones, Paul; Muller, Erik; Cunningham, Maria; Burton, Michael; Brooks, Kate; Green, James; Fuller, Gary; Barnes, Peter; Ellingsen, Simon; Urquhart, James; Morgan, Larry; Rowell, Gavin; Walsh, Andrew; Loenen, Edo; Baan, Willem; Hill, Tracey; Purcell, Cormac; Breen, Shari; Peretto, Nicolas; Jackson, James; Lowe, Vicki; Longmore, Steven

    2013-10-01

    MALT-45 is an untargeted survey, mapping the Galactic plane in CS (1-0), Class I methanol masers, SiO masers and thermal emission, and high frequency continuum emission. After obtaining images from the survey, a number of masers were detected, but without accurate positions. This project seeks to resolve each maser and its environment, with the ultimate goal of placing the Class I methanol maser into a timeline of high mass star formation.

  10. Accurate maser positions for MALT-45

    NASA Astrophysics Data System (ADS)

    Jordan, Christopher; Bains, Indra; Voronkov, Maxim; Lo, Nadia; Jones, Paul; Muller, Erik; Cunningham, Maria; Burton, Michael; Brooks, Kate; Green, James; Fuller, Gary; Barnes, Peter; Ellingsen, Simon; Urquhart, James; Morgan, Larry; Rowell, Gavin; Walsh, Andrew; Loenen, Edo; Baan, Willem; Hill, Tracey; Purcell, Cormac; Breen, Shari; Peretto, Nicolas; Jackson, James; Lowe, Vicki; Longmore, Steven

    2013-04-01

    MALT-45 is an untargeted survey, mapping the Galactic plane in CS (1-0), Class I methanol masers, SiO masers and thermal emission, and high frequency continuum emission. After obtaining images from the survey, a number of masers were detected, but without accurate positions. This project seeks to resolve each maser and its environment, with the ultimate goal of placing the Class I methanol maser into a timeline of high mass star formation.

  11. PROCEDURES FOR ACCURATE PRODUCTION OF COLOR IMAGES FROM SATELLITE OR AIRCRAFT MULTISPECTRAL DIGITAL DATA.

    USGS Publications Warehouse

    Duval, Joseph S.

    1985-01-01

    Because the display and interpretation of satellite and aircraft remote-sensing data make extensive use of color film products, accurate reproduction of the color images is important. To achieve accurate color reproduction, the exposure and chemical processing of the film must be monitored and controlled. By using a combination of sensitometry, densitometry, and transfer functions that control film response curves, all of the different steps in the making of film images can be monitored and controlled. Because a sensitometer produces a calibrated exposure, the resulting step wedge can be used to monitor the chemical processing of the film. Step wedges put on film by image recording machines provide a means of monitoring the film exposure and color balance of the machines.
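
    The step-wedge workflow lends itself to a simple transfer-function correction: invert the measured film response to find the exposure that yields a desired density. A minimal Python sketch follows; the density values are placeholders, not calibration data.

      import numpy as np

      # Calibrated step wedge: known log-exposure steps and the densities
      # measured after processing (placeholder values).
      log_exposure = np.linspace(0.0, 3.0, 11)
      measured_density = np.array([0.10, 0.15, 0.28, 0.52, 0.85, 1.20,
                                   1.55, 1.88, 2.15, 2.32, 2.40])

      def corrected_exposure(desired_density):
          """Transfer function: the log-exposure the film actually needs to
          reach a desired density, obtained by inverting the response curve."""
          return np.interp(desired_density, measured_density, log_exposure)

      print(corrected_exposure(1.0))  # log-exposure for a mid-scale density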

  12. Accurate Molecular Polarizabilities Based on Continuum Electrostatics

    PubMed Central

    Truchon, Jean-François; Nicholls, Anthony; Iftimie, Radu I.; Roux, Benoît; Bayly, Christopher I.

    2013-01-01

    A novel approach for representing the intramolecular polarizability as a continuum dielectric is introduced to account for molecular electronic polarization. It is shown, using a finite-difference solution to the Poisson equation, that the Electronic Polarization from Internal Continuum (EPIC) model yields accurate gas-phase molecular polarizability tensors for a test set of 98 challenging molecules composed of heteroaromatics, alkanes and diatomics. The electronic polarization originates from a high intramolecular dielectric that produces polarizabilities consistent with B3LYP/aug-cc-pVTZ and experimental values when surrounded by vacuum dielectric. In contrast to other approaches to model electronic polarization, this simple model avoids the polarizability catastrophe and accurately calculates molecular anisotropy with the use of very few fitted parameters and without resorting to auxiliary sites or anisotropic atomic centers. On average, the unsigned errors in the average polarizability and anisotropy compared to B3LYP are 2% and 5%, respectively. The correlation between the polarizability components from B3LYP and this approach leads to an R2 of 0.990 and a slope of 0.999. Even the F2 anisotropy, shown to be a difficult case for existing polarizability models, can be reproduced within 2% error. In addition to providing new parameters for a rapid method directly applicable to the calculation of polarizabilities, this work extends the widely used Poisson equation to areas where accurate molecular polarizabilities matter. PMID:23646034
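
    The quantities being compared here follow from the finite-field definition of the polarizability tensor, alpha[i][j] = d(mu_i)/d(E_j). The sketch below applies central differences to a stand-in linear dipole model (the true EPIC calculation solves the Poisson equation instead) and evaluates the average polarizability and the anisotropy.

      import numpy as np

      def dipole_moment(field):
          # Stand-in for an EPIC or B3LYP dipole calculation (toy linear model)
          alpha_true = np.array([[10.0, 0.5, 0.0],
                                 [0.5,  8.0, 0.0],
                                 [0.0,  0.0, 6.0]])
          return alpha_true @ field

      def polarizability(h=1e-3):
          """Central finite differences: alpha[:, j] = d mu / d E_j."""
          alpha = np.zeros((3, 3))
          for j in range(3):
              e = np.zeros(3)
              e[j] = h
              alpha[:, j] = (dipole_moment(e) - dipole_moment(-e)) / (2 * h)
          return alpha

      a = polarizability()
      iso = np.trace(a) / 3.0  # average polarizability
      aniso = np.sqrt(0.5 * ((a[0, 0] - a[1, 1])**2 + (a[1, 1] - a[2, 2])**2
                             + (a[2, 2] - a[0, 0])**2
                             + 6 * (a[0, 1]**2 + a[1, 2]**2 + a[0, 2]**2)))
      print(iso, aniso)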

  13. Accurate phase-shift velocimetry in rock.

    PubMed

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R; Holmes, William M

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models. PMID:27111139
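
    For reference, phase-shift velocimetry rests on the standard PFG relation phase = gamma * g * delta * Delta * v, so a velocity map follows directly from a phase map. A minimal sketch with placeholder sequence parameters (the paper's contribution, handling asymmetric displacement distributions, is not reproduced here):

      import numpy as np

      gamma = 2.675e8   # 1H gyromagnetic ratio, rad s^-1 T^-1
      g     = 0.05      # gradient strength, T/m (placeholder)
      delta = 2e-3      # gradient pulse duration, s (placeholder)
      Delta = 20e-3     # observation time, s (placeholder)

      phase_map = np.array([[0.10, 0.12],
                            [0.08, 0.11]])                   # rad, per voxel
      velocity_map = phase_map / (gamma * g * delta * Delta)  # m/s
      print(velocity_map)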

  14. Accurate phase-shift velocimetry in rock

    NASA Astrophysics Data System (ADS)

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R.; Holmes, William M.

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models.

  16. Assessing exposure in epidemiologic studies to disinfection by-products in drinking water: report from an international workshop.

    PubMed Central

    Arbuckle, Tye E; Hrudey, Steve E; Krasner, Stuart W; Nuckols, Jay R; Richardson, Susan D; Singer, Philip; Mendola, Pauline; Dodds, Linda; Weisel, Clifford; Ashley, David L; Froese, Kenneth L; Pegram, Rex A; Schultz, Irvin R; Reif, John; Bachand, Annette M; Benoit, Frank M; Lynberg, Michele; Poole, Charles; Waller, Kirsten

    2002-01-01

    The inability to accurately assess exposure has been one of the major shortcomings of epidemiologic studies of disinfection by-products (DBPs) in drinking water. A number of contributing factors include a) limited information on the identity, occurrence, toxicity, and pharmacokinetics of the many DBPs that can be formed from chlorine, chloramine, ozone, and chlorine dioxide disinfection; b) the complex chemical interrelationships between DBPs and other parameters within a municipal water distribution system; and c) difficulties obtaining accurate and reliable information on personal activity and water consumption patterns. In May 2000, an international workshop was held to bring together various disciplines to develop better approaches for measuring DBP exposure for epidemiologic studies. The workshop reached consensus about the clear need to involve relevant disciplines (e.g., chemists, engineers, toxicologists, biostatisticians and epidemiologists) as partners in developing epidemiologic studies of DBPs in drinking water. The workshop concluded that greater collaboration of epidemiologists with water utilities and regulators should be encouraged in order to make regulatory monitoring data more useful for epidemiologic studies. Similarly, exposure classification categories in epidemiologic studies should be chosen to make results useful for regulatory or policy decision making. PMID:11834463

  17. EXPOSURE ANALYSIS

    EPA Science Inventory

    This proceedings chapter will discuss the state-of-the-science regarding the evaluation of exposure as it relates to water quality criteria (WQC), sediment quality guidelines (SQG), and wildlife criteria (WC). Throughout this discussion, attempts are made to identify the methods ...

  18. Hypotonic exposures.

    PubMed

    Flynn, W J; Hill, R M

    1984-03-01

    Even without a contact lens, the cornea can suffer adverse physiological changes from hypotonic exposure, as well as the associated subjective phenomena (e.g., halo and rainbows). The contact lens adds a dimension to this problem that should be viewed against a background of normal (non-wearing) susceptibilities. PMID:6715776

  19. Classification of LiDAR Data with Point Based Classification Methods

    NASA Astrophysics Data System (ADS)

    Yastikli, N.; Cetin, Z.

    2016-06-01

    LiDAR is one of the most effective systems for three-dimensional (3D) data collection over wide areas. Nowadays, airborne LiDAR data are used frequently in various applications, such as object extraction, 3D modelling, change detection and map revision, with increasing point density and accuracy. The classification of the LiDAR points is the first step of the LiDAR data processing chain and should be handled properly, since applications such as 3D city modelling, building extraction and DEM generation directly use the classified point clouds. Different classification methods can be seen in recent research, and most studies work with a gridded LiDAR point cloud. In grid-based processing of LiDAR data, the loss of characteristic points in the LiDAR point cloud, especially for vegetation and buildings, or the loss of height accuracy during the interpolation stage, is inevitable. In this case, the possible solution is to use the raw point cloud data for classification to avoid data and accuracy loss in the gridding process. In this study, the point-based classification possibilities of the LiDAR point cloud are investigated to obtain more accurate classes. Automatic point-based approaches based on hierarchical rules have been proposed to achieve ground, building and vegetation classes using the raw LiDAR point cloud data. In the proposed approaches, every single LiDAR point is analyzed according to its features, such as height and multi-return, and then automatically assigned to the class to which it belongs. The use of the un-gridded point cloud in the proposed point-based classification process helped in the determination of more realistic rule sets. Detailed parameter analyses have been performed to obtain the most appropriate parameters in the rule sets to achieve accurate classes. Hierarchical rule sets were created for the proposed Approach 1 (using selected spatial-based and echo-based features) and Approach 2 (using only selected spatial-based features
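
    A minimal sketch of such a point-based hierarchical rule set is shown below; the thresholds and features are illustrative assumptions, not the calibrated parameters of the study.

      def classify_point(height_above_ground, num_returns,
                         ground_tol=0.3, elevated_min=2.0):
          """Toy hierarchical rules on per-point features (metres)."""
          if height_above_ground <= ground_tol:
              return "ground"
          if height_above_ground >= elevated_min:
              # multiple returns per pulse suggest penetrable canopy,
              # single elevated returns suggest planar roof surfaces
              return "vegetation" if num_returns > 1 else "building"
          return "unclassified"

      print(classify_point(0.1, 1))   # ground
      print(classify_point(7.5, 3))   # vegetation
      print(classify_point(6.0, 1))   # building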

  20. Remote Sensing Classification Uncertainty: Validating Probabilistic Pixel Level Classification

    NASA Astrophysics Data System (ADS)

    Vrettas, Michail; Cornford, Dan; Bastin, Lucy; Pons, Xavier; Sevillano, Eva; Moré, Gerard; Serra, Pere; Ninyerola, Miquel

    2013-04-01

    There already exists an extensive literature on classification of remotely sensed imagery, and indeed classification more widely, that considers a wide range of probabilistic and non-probabilistic classification methodologies. Although many probabilistic classification methodologies produce posterior class probabilities per pixel (observation), these are often not communicated at the pixel level, and typically not validated at the pixel level. Most often the probabilistic classification is converted into a hard classification (of the most probable class) and the accuracy of the resulting classification is reported in terms of a global confusion matrix, or some score derived from it. For applications where classification accuracy is spatially variable, and where pixel level estimates of uncertainty can be meaningfully exploited in workflows that propagate uncertainty, validating and communicating the pixel level uncertainty opens opportunities for more refined and accountable modelling. In this work we describe our recent work on applying and validating a range of probabilistic classifiers. Using a multi-temporal Landsat data set of the Ebro Delta in Catalonia, which has been carefully radiometrically and geometrically corrected, we present a range of Bayesian classifiers, from simple Bayesian linear discriminant analysis to a complex variational Gaussian process based classifier. Field study derived labelled data, classified into 8 classes, which primarily consider land use and the degree of flooding in what is a rice growing region, are used to train the pixel level classifiers. Our focus is not so much on the classification accuracy, but rather the validation of the probabilistic classification made by all methods. We present a range of validation plots and scores, many of which are used for probabilistic weather forecast verification but are new to remote sensing classification, including of course the standard measures of misclassification, but also
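
    One such forecast-verification style score is the multiclass Brier score, which evaluates the posterior class probabilities themselves rather than a hardened classification. A minimal sketch (toy posteriors, not the study's data):

      import numpy as np

      def brier_score(prob, labels):
          """Mean squared difference between predicted class probabilities
          and one-hot encoded true labels; 0 is perfect, lower is better."""
          n, k = prob.shape
          onehot = np.zeros((n, k))
          onehot[np.arange(n), labels] = 1.0
          return np.mean(np.sum((prob - onehot) ** 2, axis=1))

      p = np.array([[0.7, 0.2, 0.1],
                    [0.1, 0.8, 0.1],
                    [0.4, 0.4, 0.2]])   # per-pixel posteriors, 3 classes
      y = np.array([0, 1, 2])           # field-derived labels
      print(brier_score(p, y))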

  1. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 ± 6.1%, mean ± SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operating characteristic (ROC) curve of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at an optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P < 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 ± 11.5 vs. 41.5 ± 13.6 mV, respectively, P < 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of ≥ 40 points and ≥ 445 ms, respectively. In conclusion 12-lead HF QRS ECG employing
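
    The combined decision rule reported above is a simple conjunction of two cut-offs. A minimal sketch, with placeholder patient values rather than study data:

      import numpy as np

      def predict_cardiomyopathy(raz_score, qtc_ms, raz_cut=40, qtc_cut=445):
          """Positive when both the HF QRS RAZ score and the conventional
          QTc interval meet their cut-offs (per the abstract)."""
          return (raz_score >= raz_cut) & (qtc_ms >= qtc_cut)

      raz   = np.array([55, 10, 48, 20])        # placeholder RAZ scores
      qtc   = np.array([460, 430, 450, 455])    # placeholder QTc intervals, ms
      truth = np.array([1, 0, 1, 0])            # 1 = cardiomyopathy

      pred = predict_cardiomyopathy(raz, qtc).astype(int)
      sens = np.sum((pred == 1) & (truth == 1)) / np.sum(truth == 1)
      spec = np.sum((pred == 0) & (truth == 0)) / np.sum(truth == 0)
      print(sens, spec)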

  2. Interactive Classification Technology

    NASA Technical Reports Server (NTRS)

    deBessonet, Cary

    2000-01-01

    The investigators upgraded a knowledge representation language called SL (Symbolic Language) and an automated reasoning system called SMS (Symbolic Manipulation System) to enable the more effective use of the technologies in automated reasoning and interactive classification systems. The overall goals of the project were: 1) the enhancement of the representation language SL to accommodate a wider range of meaning; 2) the development of a default inference scheme to operate over SL notation as it is encoded; and 3) the development of an interpreter for SL that would handle representations of some basic cognitive acts and perspectives.

  3. The KFM, A Homemade Yet Accurate and Dependable Fallout Meter

    SciTech Connect

    Kearny, C.H.

    2001-11-20

    The KFM is a homemade fallout meter that can be made using only materials, tools, and skills found in millions of American homes. It is an accurate and dependable electroscope-capacitor. The KFM, in conjunction with its attached table and a watch, is designed for use as a rate meter. Its attached table relates observed differences in the separations of its two leaves (before and after exposures at the listed time intervals) to the dose rates during exposures of these time intervals. In this manner dose rates from 30 mR/hr up to 43 R/hr can be determined with an accuracy of ±25%. A KFM can be charged with any one of the three expedient electrostatic charging devices described. Due to the use of anhydrite (made by heating gypsum from wallboard) inside a KFM and the expedient "dry-bucket" in which it can be charged when the air is very humid, this instrument can always be charged and used to obtain accurate measurements of gamma radiation no matter how high the relative humidity. The heart of this report is the step-by-step illustrated instructions for making and using a KFM. These instructions have been improved after each successive field test. The majority of the untrained test families, adequately motivated by cash bonuses offered for success and guided only by these written instructions, have succeeded in making and using a KFM. NOTE: "The KFM, A Homemade Yet Accurate and Dependable Fallout Meter" was published as an Oak Ridge National Laboratory report in 1979. Some of the materials originally suggested for suspending the leaves of the Kearny Fallout Meter (KFM) are no longer available. Because of changes in the manufacturing process, other materials (e.g., sewing thread, unwaxed dental floss) may not have the insulating capability to work properly. Oak Ridge National Laboratory has not tested any of the suggestions provided in the preface of the report, but they have been used by other groups. When using these instructions, the builder can verify the
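
    The rate-meter logic amounts to converting a drop in leaf separation to an accumulated dose via the instrument's calibration table, then dividing by the exposure interval. A minimal sketch; the table values below are invented placeholders, not the KFM's printed calibration.

      import numpy as np

      # Hypothetical calibration: leaf-separation drop (mm) vs dose (mR)
      sep_drop_mm = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 12.0])
      dose_mR     = np.array([5.0, 12.0, 30.0, 80.0, 220.0, 400.0])

      def dose_rate_mR_per_hr(before_mm, after_mm, exposure_minutes):
          dose = np.interp(before_mm - after_mm, sep_drop_mm, dose_mR)
          return dose / (exposure_minutes / 60.0)

      print(dose_rate_mR_per_hr(14.0, 10.0, 15.0))  # 4 mm drop over 15 min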

  4. A comparative study of PCA, SIMCA and Cole model for classification of bioimpedance spectroscopy measurements.

    PubMed

    Nejadgholi, Isar; Bolic, Miodrag

    2015-08-01

    Due to the safety and low cost of bioimpedance spectroscopy (BIS), classification of BIS measurements can potentially be a preferred way of detecting changes in living tissues. However, for longitudinal datasets, linear classifiers fail to classify conventional Cole parameters extracted from BIS measurements because of their high variability. In some applications, linear classification based on Principal Component Analysis (PCA) has shown more accurate results. Yet, these methods have not been established for BIS classification, since PCA features have neither been investigated in combination with other classifiers nor been compared to conventional Cole features in benchmark classification tasks. In this work, PCA and Cole features are compared in three synthesized benchmark classification tasks which are expected to be detectable by BIS. These three tasks are classification of before and after a geometry change, a relative composition change, and blood perfusion in a cylindrical organ. Our results show that in all tasks the features extracted by PCA are more discriminant than Cole parameters. Moreover, a pilot study was conducted on a longitudinal arm BIS dataset including eight subjects and three arm positions. The goal of the study was to compare different methods in arm position classification, which involves all three synthesized changes mentioned above. Our comparative study of various classification methods shows that the best classification accuracy is obtained when PCA features are classified by a K-Nearest Neighbors (KNN) classifier. The results of this work suggest that PCA+KNN is a promising method to be considered for classification of BIS datasets that deal with subject and time variability.
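
    A minimal sketch of the best-performing pipeline (PCA features fed to a KNN classifier), using synthetic stand-ins for BIS spectra; the component and neighbour counts are assumptions.

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.decomposition import PCA
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(240, 50))    # 240 spectra, 50 frequency points
      y = rng.integers(0, 3, size=240)  # three arm positions

      clf = make_pipeline(StandardScaler(),
                          PCA(n_components=10),
                          KNeighborsClassifier(n_neighbors=5))
      print(cross_val_score(clf, X, y, cv=5).mean())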

  5. Fully Automatic System for Accurate Localisation and Analysis of Cephalometric Landmarks in Lateral Cephalograms

    PubMed Central

    Lindner, Claudia; Wang, Ching-Wei; Huang, Cheng-Ta; Li, Chung-Hsing; Chang, Sheng-Wei; Cootes, Tim F.

    2016-01-01

    Cephalometric tracing is a standard analysis tool for orthodontic diagnosis and treatment planning. The aim of this study was to develop and validate a fully automatic landmark annotation (FALA) system for finding cephalometric landmarks in lateral cephalograms and its application to the classification of skeletal malformations. Digital cephalograms of 400 subjects (age range: 7–76 years) were available. All cephalograms had been manually traced by two experienced orthodontists with 19 cephalometric landmarks, and eight clinical parameters had been calculated for each subject. A FALA system to locate the 19 landmarks in lateral cephalograms was developed. The system was evaluated via comparison to the manual tracings, and the automatically located landmarks were used for classification of the clinical parameters. The system achieved an average point-to-point error of 1.2 mm, and 84.7% of landmarks were located within the clinically accepted precision range of 2.0 mm. The automatic landmark localisation performance was within the inter-observer variability between two clinical experts. The automatic classification achieved an average classification accuracy of 83.4% which was comparable to an experienced orthodontist. The FALA system rapidly and accurately locates and analyses cephalometric landmarks in lateral cephalograms, and has the potential to significantly improve the clinical work flow in orthodontic treatment. PMID:27645567
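
    The two evaluation figures quoted above (mean point-to-point error and the share of landmarks within 2.0 mm) are straightforward to compute; a minimal sketch with toy coordinates:

      import numpy as np

      def landmark_errors(auto_xy, manual_xy, tol_mm=2.0):
          """Euclidean point-to-point errors between automatic and manual
          annotations, shape [n_landmarks, 2], coordinates in mm."""
          err = np.linalg.norm(auto_xy - manual_xy, axis=1)
          return err.mean(), np.mean(err <= tol_mm)

      auto   = np.array([[10.2, 20.1], [33.0, 41.5], [55.9, 12.3]])
      manual = np.array([[10.0, 20.0], [34.1, 41.0], [56.0, 12.0]])
      mean_err, within_tol = landmark_errors(auto, manual)
      print(mean_err, within_tol)  # cf. 1.2 mm and 84.7% in the study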

  6. Fully Automatic System for Accurate Localisation and Analysis of Cephalometric Landmarks in Lateral Cephalograms.

    PubMed

    Lindner, Claudia; Wang, Ching-Wei; Huang, Cheng-Ta; Li, Chung-Hsing; Chang, Sheng-Wei; Cootes, Tim F

    2016-09-20

    Cephalometric tracing is a standard analysis tool for orthodontic diagnosis and treatment planning. The aim of this study was to develop and validate a fully automatic landmark annotation (FALA) system for finding cephalometric landmarks in lateral cephalograms and its application to the classification of skeletal malformations. Digital cephalograms of 400 subjects (age range: 7-76 years) were available. All cephalograms had been manually traced by two experienced orthodontists with 19 cephalometric landmarks, and eight clinical parameters had been calculated for each subject. A FALA system to locate the 19 landmarks in lateral cephalograms was developed. The system was evaluated via comparison to the manual tracings, and the automatically located landmarks were used for classification of the clinical parameters. The system achieved an average point-to-point error of 1.2 mm, and 84.7% of landmarks were located within the clinically accepted precision range of 2.0 mm. The automatic landmark localisation performance was within the inter-observer variability between two clinical experts. The automatic classification achieved an average classification accuracy of 83.4% which was comparable to an experienced orthodontist. The FALA system rapidly and accurately locates and analyses cephalometric landmarks in lateral cephalograms, and has the potential to significantly improve the clinical work flow in orthodontic treatment.

  8. Overview of Classification Systems in Peripheral Artery Disease

    PubMed Central

    Hardman, Rulon L.; Jazaeri, Omid; Yi, J.; Smith, M.; Gupta, Rajan

    2014-01-01

    Peripheral artery disease (PAD), secondary to atherosclerotic disease, is currently the leading cause of morbidity and mortality in the western world. While PAD is common, it is estimated that the majority of patients with PAD are undiagnosed and undertreated. The challenge to the treatment of PAD is to accurately diagnose the symptoms and determine treatment for each patient. The varied presentations of peripheral vascular disease have led to numerous classification schemes throughout the literature. Consistent grading of patients leads to both objective criteria for treating patients and a baseline for clinical follow-up. Reproducible classification systems are also important in clinical trials and when comparing medical, surgical, and endovascular treatment paradigms. This article reviews the various classification systems for PAD and advantages to each system. PMID:25435665

  9. Robust tissue classification for reproducible wound assessment in telemedicine environments

    NASA Astrophysics Data System (ADS)

    Wannous, Hazem; Treuillet, Sylvie; Lucas, Yves

    2010-04-01

    In telemedicine environments, a standardized and reproducible assessment of wounds, using a simple handheld digital camera, is an essential requirement. However, to ensure robust tissue classification, particular attention must be paid to the complete design of the color processing chain. We introduce the key steps, including color correction, merging of expert labelings, and segmentation-driven classification based on support vector machines. The tool thus developed ensures stability under changes in lighting conditions, viewpoint, and camera, to achieve accurate and robust classification of skin tissues. Clinical tests demonstrate that such an advanced tool, which forms part of a complete 3-D and color wound assessment system, significantly improves the monitoring of the healing process. It achieves an overlap score of 79.3% against 69.1% for a single expert, after mapping onto the medical reference developed from the image labeling by a college of experts.
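
    The overlap score used for evaluation can be illustrated with a Dice-style measure between a predicted tissue-label map and the merged expert reference (the paper's exact score definition may differ; the labels below are hypothetical):

      import numpy as np

      def overlap_score(pred, ref, label):
          """Dice-style overlap for one tissue class."""
          p, r = (pred == label), (ref == label)
          return 2.0 * np.sum(p & r) / (np.sum(p) + np.sum(r))

      pred = np.array([[0, 1, 1],
                       [2, 1, 0]])   # toy label maps; 1 = granulation (hypothetical)
      ref  = np.array([[0, 1, 1],
                       [1, 1, 0]])
      print(overlap_score(pred, ref, label=1))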

  10. Land-cover classification in SAR images using dictionary learning

    NASA Astrophysics Data System (ADS)

    Aktaş, Gizem; Bak, Çaǧdaş; Nar, Fatih; Şen, Nigar

    2015-10-01

    Land-cover classification in Synthetic Aperture Radar (SAR) images has significance in both civil and military remote sensing applications. Accurate classification is a challenging problem due to the variety of natural and man-made objects, seasonal changes at acquisition time, and the diversity of image reconstruction algorithms. In this study, Feature Preserving Despeckling (FPD), an edge-preserving total-variation-based speckle reduction method, is applied as a preprocessing step. To handle the mentioned challenges, a novel feature extraction scheme combined with super-pixel segmentation and dictionary-learning-based classification is proposed. Computational complexity is another issue to handle in the processing of high-dimensional SAR images. The computational complexity of the proposed method is linearly proportional to the size of the image, since it does not require a sliding window that accesses the pixels multiple times. The accuracy of the proposed method is validated on a dataset composed of TerraSAR-X high-resolution spot-mode SAR images.

  11. Waste classification sampling plan

    SciTech Connect

    Landsman, S.D.

    1998-05-27

    The purpose of this sampling is to explain the method used to collect and analyze data necessary to verify and/or determine the radionuclide content of the B-Cell decontamination and decommissioning waste stream so that the correct waste classification for the waste stream can be made, and to collect samples for studies of decontamination methods that could be used to remove fixed contamination present on the waste. The scope of this plan is to establish the technical basis for collecting samples and compiling quantitative data on the radioactive constituents present in waste generated during deactivation activities in B-Cell. Sampling and radioisotopic analysis will be performed on the fixed layers of contamination present on structural material and internal surfaces of process piping and tanks. In addition, dose rate measurements on existing waste material will be performed to determine the fraction of dose rate attributable to both removable and fixed contamination. Samples will also be collected to support studies of decontamination methods that are effective in removing the fixed contamination present on the waste. Sampling performed under this plan will meet criteria established in BNF-2596, Data Quality Objectives for the B-Cell Waste Stream Classification Sampling, J. M. Barnett, May 1998.

  12. Mimicking human texture classification

    NASA Astrophysics Data System (ADS)

    van Rikxoort, Eva M.; van den Broek, Egon L.; Schouten, Theo E.

    2005-03-01

    In an attempt to mimic human (colorful) texture classification by a clustering algorithm, three lines of research were pursued, using a test set of 180 texture images (both their color and gray-scale equivalents) drawn from the OuTex and VisTex databases. First, a k-means algorithm was applied with three feature vectors, based on color/gray values, four texture features, and their combination. Second, 18 participants clustered the images using a newly developed card sorting program. The mutual agreement between the participants was 57% and 56%, and between the algorithm and the participants it was 47% and 45%, for color and gray-scale texture images, respectively. Third, in a benchmark, 30 participants judged the algorithm's clusters of gray-scale textures as more homogeneous than those of colored textures. However, high interpersonal variability was present for both the color and the gray-scale clusters. So, despite the promising results, it is questionable whether average human texture classification can be mimicked (if it exists at all).
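
    A minimal sketch of the clustering side of the experiment, with synthetic stand-ins for the colour and texture feature vectors (the cluster count is an assumption):

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(1)
      color_feats   = rng.normal(size=(180, 12))  # e.g. colour histograms
      texture_feats = rng.normal(size=(180, 4))   # the four texture features
      combined      = np.hstack([color_feats, texture_feats])

      labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(combined)
      print(np.bincount(labels))  # cluster sizes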

  13. Classification of Physical Activity

    PubMed Central

    Turksoy, Kamuran; Paulino, Thiago Marques Luz; Zaharieva, Dessi P.; Yavelberg, Loren; Jamnik, Veronica; Riddell, Michael C.; Cinar, Ali

    2015-01-01

    Physical activity has a wide range of effects on glucose concentrations in type 1 diabetes (T1D) depending on the type (ie, aerobic, anaerobic, mixed) and duration of activity performed. This variability in glucose responses to physical activity makes the development of artificial pancreas (AP) systems challenging. Automatic detection of exercise type and intensity, and its classification as aerobic or anaerobic would provide valuable information to AP control algorithms. This can be achieved by using a multivariable AP approach where biometric variables are measured and reported to the AP at high frequency. We developed a classification system that identifies, in real time, the exercise intensity and its reliance on aerobic or anaerobic metabolism and tested this approach using clinical data collected from 5 persons with T1D and 3 individuals without T1D in a controlled laboratory setting using a variety of common types of physical activity. The classifier had an average sensitivity of 98.7% for physiological data collected over a range of exercise modalities and intensities in these subjects. The classifier will be added as a new module to the integrated multivariable adaptive AP system to enable the detection of aerobic and anaerobic exercise for enhancing the accuracy of insulin infusion strategies during and after exercise. PMID:26443291

  14. Holistic facial expression classification

    NASA Astrophysics Data System (ADS)

    Ghent, John; McDonald, J.

    2005-06-01

    This paper details a procedure for classifying facial expressions. This is a growing and relatively new type of problem within computer vision. One of the fundamental problems when classifying facial expressions in previous approaches is the lack of a consistent method of measuring expression. This paper solves this problem by computing the Facial Expression Shape Model (FESM). This statistical model of facial expression is based on an anatomical analysis of facial expression called the Facial Action Coding System (FACS). We use the term Action Unit (AU) to describe a movement of one or more muscles of the face, and all expressions can be described using the AUs defined by FACS. The shape model is calculated by marking the face with 122 landmark points. We use Principal Component Analysis (PCA) to analyse how the landmark points move with respect to each other and to lower the dimensionality of the problem. Using the FESM in conjunction with Support Vector Machines (SVM), we classify facial expressions. SVMs are a powerful machine learning technique based on optimisation theory. This project is largely concerned with statistical models, machine learning techniques and psychological tools used in the classification of facial expression. This holistic approach to expression classification provides a means for a level of interaction with a computer that is a significant step forward in human-computer interaction.

  15. Molecular classification of gliomas.

    PubMed

    Masui, Kenta; Mischel, Paul S; Reifenberger, Guido

    2016-01-01

    The identification of distinct genetic and epigenetic profiles in different types of gliomas has revealed novel diagnostic, prognostic, and predictive molecular biomarkers for refinement of glioma classification and improved prediction of therapy response and outcome. Therefore, the new (2016) World Health Organization (WHO) classification of tumors of the central nervous system breaks with the traditional principle of diagnosis based on histologic criteria only and incorporates molecular markers. This will involve a multilayered approach combining histologic features and molecular information in an "integrated diagnosis". We review the current state of diagnostic molecular markers for gliomas, focusing on isocitrate dehydrogenase 1 or 2 (IDH1/IDH2) gene mutation, α-thalassemia/mental retardation syndrome X-linked (ATRX) gene mutation, 1p/19q co-deletion and telomerase reverse transcriptase (TERT) promoter mutation in adult tumors, as well as v-raf murine sarcoma viral oncogene homolog B1 (BRAF) and H3 histone family 3A (H3F3A) aberrations in pediatric gliomas. We also outline prognostic and predictive molecular markers, including O6-methylguanine-DNA methyltransferase (MGMT) promoter methylation, and discuss the potential clinical relevance of biologic glioblastoma subtypes defined by integration of multiomics data. Commonly used methods for individual marker detection as well as novel large-scale DNA methylation profiling and next-generation sequencing approaches are discussed. Finally, we illustrate how advances in molecular diagnostics affect novel strategies of targeted therapy, thereby raising new challenges and identifying new leads for personalized treatment of glioma patients. PMID:26948350

  16. A Stellar Classification Tool

    NASA Astrophysics Data System (ADS)

    Kattner, S. M.; Glaspey, J.

    2005-12-01

    With the multitude of stellar objects in the sky, we have investigated the development of an automated spectral classification system within IRAF to assist in the analysis of small to moderate sized spectroscopic datasets. Using data mining, we extracted 108 standard, sharp-lined B, A, and F stars from the NOAO Digital Library, and measured equivalent widths for 65 prominent lines in the 3000-7000 Angstrom range. Spectral type versus equivalent width was plotted in order to retrieve the lines that demonstrated a clear relationship. For each of the 29 spectral features exhibiting a good correlation between spectral type and line strength, we could fit the data with a polynomial of order three to five. These polynomial fits were then used to predict the spectral types for a separate sample of objects from the NOAO Digital Library. From the comparison of the second data set with the first, we found that several lines could be used for an automated classification system, giving us good reason to believe that such a system can eventually be established. Kattner's research was supported by the NOAO/KPNO Research Experiences for Undergraduates (REU) Program, which is funded by the National Science Foundation through Scientific Program Order No. 3 (AST-0243875) of the Cooperative Agreement No. AST-0132798 between the Association of Universities for Research in Astronomy (AURA) and the NSF.
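
    The fitting-and-inversion step can be sketched as follows: encode spectral types numerically, fit a low-order polynomial of equivalent width against type for one line, then classify a new star by inverting the fit. The data below are synthetic placeholders, not the NOAO measurements.

      import numpy as np

      rng = np.random.default_rng(2)
      sp_code = np.arange(30)  # B0..F9 encoded as 0..29 (assumed encoding)
      eq_width = (0.02 * sp_code**2 + 0.5 * sp_code + 1.0
                  + rng.normal(scale=0.3, size=30))

      coeffs = np.polyfit(sp_code, eq_width, deg=3)  # "order three to five"

      def predict_type(width):
          """Return the encoded type whose fitted width is closest."""
          grid = np.linspace(0, 29, 291)
          return grid[np.argmin(np.abs(np.polyval(coeffs, grid) - width))]

      print(predict_type(15.0))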

  17. PSC: protein surface classification.

    PubMed

    Tseng, Yan Yuan; Li, Wen-Hsiung

    2012-07-01

    We recently proposed to classify proteins by their functional surfaces. Using the structural attributes of functional surfaces, we inferred the pairwise relationships of proteins and constructed an expandable database of protein surface classification (PSC). As the functional surface(s) of a protein is the local region where the protein performs its function, our classification may reflect the functional relationships among proteins. Currently, PSC contains a library of 1974 surface types that include 25,857 functional surfaces identified from 24,170 bound structures. The search tool in PSC empowers users to explore related surfaces that share similar local structures and core functions. Each functional surface is characterized by structural attributes, which are geometric, physicochemical or evolutionary features. The attributes have been normalized as descriptors and integrated to produce a profile for each functional surface in PSC. In addition, binding ligands are recorded for comparisons among homologs. PSC allows users to exploit related binding surfaces to reveal the changes in functionally important residues on homologs that have led to functional divergence during evolution. The substitutions at the key residues of a spatial pattern may determine the functional evolution of a protein. In PSC (http://pocket.uchicago.edu/psc/), a pool of changes in residues on similar functional surfaces is provided.

  18. Accurately Mapping M31's Microlensing Population

    NASA Astrophysics Data System (ADS)

    Crotts, Arlin

    2004-07-01

    We propose to augment an existing microlensing survey of M31 with source identifications provided by a modest amount of ACS (and WFPC2 parallel) observations to yield an accurate measurement of the masses responsible for microlensing in M31, and presumably much of its dark matter. The main benefit of these data is the determination of the physical (or "einstein") timescale of each microlensing event, rather than an effective ("FWHM") timescale, allowing masses to be determined more than twice as accurately as without HST data. The einstein timescale is the ratio of the lensing cross-sectional radius and relative velocities. Velocities are known from kinematics, and the cross-section is directly proportional to the (unknown) lensing mass. We cannot easily measure these quantities without knowing the amplification, hence the baseline magnitude, which requires the resolution of HST to find the source star. This makes a crucial difference because M31 lens mass determinations can be more accurate than those towards the Magellanic Clouds through our Galaxy's halo (for the same number of microlensing events) due to the better constrained geometry in the M31 microlensing situation. Furthermore, our larger survey, just completed, should yield at least 100 M31 microlensing events, more than any Magellanic survey. A small amount of ACS+WFPC2 imaging will deliver the potential of this large database (about 350 nights). For the whole survey (and a delta-function mass distribution) the mass error should approach only about 15%, or about 6% error in slope for a power-law distribution. These results will better allow us to pinpoint the lens halo fraction, and the shape of the halo lens spatial distribution, and allow generalization/comparison of the nature of halo dark matter in spiral galaxies. In addition, we will be able to establish the baseline magnitude for about 50,000 variable stars, as well as measure an unprecedentedly detailed color-magnitude diagram and luminosity

  19. Accurate upwind methods for the Euler equations

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1993-01-01

    A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one-intermediate-state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely, Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
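
    The median function mentioned above simplifies limiter coding because minmod(a, b) is just the median of a, b and zero. Below is a generic sketch of a monotonicity-constrained slope built this way (an MC-type constraint for illustration; the paper's own constraint additionally preserves second-order accuracy at smooth extrema):

      def median3(a, b, c):
          """Middle value of three numbers."""
          return max(min(a, b), min(max(a, b), c))

      def minmod(a, b):
          return median3(a, b, 0.0)  # minmod expressed via the median function

      def limited_slope(um, u0, up):
          """Central slope limited against twice the one-sided differences
          of the three cell averages um, u0, up."""
          central = 0.5 * (up - um)
          return minmod(central, minmod(2.0 * (u0 - um), 2.0 * (up - u0)))

      print(limited_slope(0.0, 1.0, 4.0))  # 2.0: bound by the left difference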

  20. Accurate measurement of unsteady state fluid temperature

    NASA Astrophysics Data System (ADS)

    Jaremkiewicz, Magdalena

    2016-07-01

    In this paper, two accurate methods for determining the transient fluid temperature are presented. Measurements were conducted for boiling water, since its temperature is known. At the beginning the thermometers are at the ambient temperature, and then they are immediately immersed into saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter, equal to 15 mm. One of them is a K-type industrial thermometer widely available commercially. The temperature indicated by the thermometer was corrected by treating the thermometer as a first- or second-order inertia device. A new design of thermometer was proposed and also used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with the sheath thermocouple located in its center. The temperature of the fluid was determined based on measurements taken in the axis of the solid cylindrical element (housing) using the inverse space marching method. Measurements of the transient temperature of the air flowing through a wind tunnel using the same thermometers were also carried out. The proposed measurement technique provides more accurate results than measurements using industrial thermometers in conjunction with a simple temperature correction based on a first- or second-order inertia model of the thermometer. By comparing the results, it was demonstrated that the new thermometer yields the fluid temperature much faster and with higher accuracy than the industrial thermometer. Accurate measurements of fast-changing fluid temperatures are possible due to the low-inertia thermometer and the fast space marching method applied to solve the inverse heat conduction problem.
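
    The first-order correction applied to the industrial thermometer amounts to adding the time constant times the derivative of the indicated temperature. A minimal sketch with a synthetic step response (the time constant is a placeholder):

      import numpy as np

      def correct_first_order(t, indicated, tau):
          """Fluid temperature = indicated + tau * d(indicated)/dt."""
          return indicated + tau * np.gradient(indicated, t)

      t = np.linspace(0.0, 10.0, 201)   # s
      tau = 3.0                         # thermometer time constant, s (placeholder)
      indicated = 100.0 + (20.0 - 100.0) * np.exp(-t / tau)  # step into boiling water
      recovered = correct_first_order(t, indicated, tau)
      print(recovered[50])              # ~100.0 degC, the true fluid temperature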

  1. Use of manual densitometry in land cover classification

    NASA Technical Reports Server (NTRS)

    Jordan, D. C.; Graves, D. H.; Hammetter, M. C.

    1978-01-01

    Through the use of manual spot densitometry values derived from multitemporal 1:24,000 color infrared aircraft photography, areas as small as one hectare in the Cumberland Plateau in Kentucky were accurately classified into one of eight ground cover groups. If distinguishing between undisturbed and disturbed forest areas is the sole criterion of interest, classification results are highly accurate if based on imagery taken during foliated ground cover conditions. Multiseasonal imagery analysis was superior to single-date analysis, and transparencies from prefoliated conditions gave better separation of conifers and hardwoods than did those from foliated conditions.

  2. The first accurate description of an aurora

    NASA Astrophysics Data System (ADS)

    Schröder, Wilfried

    2006-12-01

    As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers an interesting look into how medieval scholars viewed the subjects that we study.Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature). The book was written between 1349 and 1350.

  3. New law requires 'medically accurate' lesson plans.

    PubMed

    1999-09-17

    The California Legislature has passed a bill requiring all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material. PMID:11366835

  4. Accurate density functional thermochemistry for larger molecules.

    SciTech Connect

    Raghavachari, K.; Stefanov, B. B.; Curtiss, L. A.; Lucent Tech.

    1997-06-20

    Density functional methods are combined with isodesmic bond separation reaction energies to yield accurate thermochemistry for larger molecules. Seven different density functionals are assessed for the evaluation of heats of formation, ΔHf°(298 K), for a test set of 40 molecules composed of H, C, O and N. The use of bond separation energies results in a dramatic improvement in the accuracy of all the density functionals. The B3-LYP functional has the smallest mean absolute deviation from experiment (1.5 kcal/mol).
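
    The isodesmic scheme can be illustrated with the bond separation reaction for propane, C3H8 + CH4 -> 2 C2H6: the calculated reaction enthalpy is combined with experimental heats of formation of the reference species. The numbers below are illustrative placeholders, not the paper's values.

      dH_rxn_calc = 2.5                 # calculated reaction enthalpy, kcal/mol (placeholder)
      dHf_CH4, dHf_C2H6 = -17.8, -20.0  # experimental reference heats of formation

      # dH_rxn = 2*dHf(C2H6) - dHf(C3H8) - dHf(CH4), solved for the target:
      dHf_C3H8 = 2 * dHf_C2H6 - dHf_CH4 - dH_rxn_calc
      print(dHf_C3H8)                   # estimated heat of formation of propane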

  6. Universality: Accurate Checks in Dyson's Hierarchical Model

    NASA Astrophysics Data System (ADS)

    Godina, J. J.; Meurice, Y.; Oktay, M. B.

    2003-06-01

    In this talk we present high-accuracy calculations of the susceptibility near βc for Dyson's hierarchical model in D = 3. Using linear fitting, we estimate the leading (γ) and subleading (Δ) exponents. Independent estimates are obtained by calculating the first two eigenvalues of the linearized renormalization group transformation. We found γ = 1.29914073 ± 10⁻⁸ and Δ = 0.4259469 ± 10⁻⁷, independently of the choice of local integration measure (Ising or Landau-Ginzburg). After a suitable rescaling, the approximate fixed points for a large class of local measures coincide accurately with a fixed point constructed by Koch and Wittwer.
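
    The linear fitting referred to above exploits the fact that chi ~ C*(beta_c - beta)^(-gamma) near the critical point, so log(chi) is linear in log(beta_c - beta). A minimal sketch with synthetic data:

      import numpy as np

      beta_c, gamma_true = 1.0, 1.2991
      beta = beta_c - np.logspace(-6, -2, 20)
      chi = 2.3 * (beta_c - beta) ** (-gamma_true)

      slope, intercept = np.polyfit(np.log(beta_c - beta), np.log(chi), 1)
      print(-slope)  # recovers the leading exponent gamma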

  7. Exposure chamber

    DOEpatents

    Moss, Owen R.; Briant, James K.

    1983-01-01

    An exposure chamber includes an imperforate casing having a fluid inlet at the top and an outlet at the bottom. A single vertical series of imperforate trays is provided. Each tray is spaced on all sides from the chamber walls. Baffles adjacent to some of the trays restrict and direct the flow to give partial flow back and forth across the chamber and downward flow past the lowermost tray adjacent to a central plane of the chamber.

  8. Hydrologic Landscape Regionalisation Using Deductive Classification and Random Forests

    PubMed Central

    Brown, Stuart C.; Lester, Rebecca E.; Versace, Vincent L.; Fawcett, Jonathon; Laurenson, Laurie

    2014-01-01

    Landscape classification and hydrological regionalisation studies are being increasingly used in ecohydrology to aid in the management and research of aquatic resources. We present a methodology for classifying hydrologic landscapes based on spatial environmental variables by employing non-parametric statistics and hybrid image classification. Our approach differed from previous classifications which have required the use of an a priori spatial unit (e.g. a catchment) which necessarily results in the loss of variability that is known to exist within those units. The use of a simple statistical approach to identify an appropriate number of classes eliminated the need for large amounts of post-hoc testing with different numbers of groups, or the selection and justification of an arbitrary number. Using statistical clustering, we identified 23 distinct groups within our training dataset. The use of a hybrid classification employing random forests extended this statistical clustering to an area of approximately 228,000 km2 of south-eastern Australia without the need to rely on catchments, landscape units or stream sections. This extension resulted in a highly accurate regionalisation at both 30-m and 2.5-km resolution, and a less-accurate 10-km classification that would be more appropriate for use at a continental scale. A smaller case study, of an area covering 27,000 km2, demonstrated that the method preserved the intra- and inter-catchment variability that is known to exist in local hydrology, based on previous research. Preliminary analysis linking the regionalisation to streamflow indices is promising, suggesting that the method could be used to predict streamflow behaviour in ungauged catchments. Our work therefore simplifies current classification frameworks that are becoming more popular in ecohydrology, while better retaining small-scale variability in hydrology, thus enabling future attempts to explain and visualise broad-scale hydrologic trends at the scale of
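
    A minimal sketch of the two-stage workflow described above, with synthetic data standing in for the environmental predictor grids; the specific clustering algorithm and forest settings are assumptions, not the paper's.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.ensemble import RandomForestClassifier

        # Synthetic stand-ins for per-cell environmental variables
        # (e.g. rainfall, slope, soil permeability).
        rng = np.random.default_rng(0)
        X_train = rng.normal(size=(5000, 6))
        X_full = rng.normal(size=(200000, 6))   # stands in for the full raster

        # Step 1: unsupervised clustering of the training sample into
        # hydrologic classes (the paper identified 23 groups).
        labels = KMeans(n_clusters=23, n_init=10, random_state=0).fit_predict(X_train)

        # Step 2: a random forest learns the cluster labels and extends them
        # to every grid cell, with no a priori spatial unit required.
        rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, labels)
        regions = rf.predict(X_full)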

  9. Hydrologic landscape regionalisation using deductive classification and random forests.

    PubMed

    Brown, Stuart C; Lester, Rebecca E; Versace, Vincent L; Fawcett, Jonathon; Laurenson, Laurie

    2014-01-01

    Landscape classification and hydrological regionalisation studies are being increasingly used in ecohydrology to aid in the management and research of aquatic resources. We present a methodology for classifying hydrologic landscapes based on spatial environmental variables by employing non-parametric statistics and hybrid image classification. Our approach differed from previous classifications which have required the use of an a priori spatial unit (e.g. a catchment) which necessarily results in the loss of variability that is known to exist within those units. The use of a simple statistical approach to identify an appropriate number of classes eliminated the need for large amounts of post-hoc testing with different numbers of groups, or the selection and justification of an arbitrary number. Using statistical clustering, we identified 23 distinct groups within our training dataset. The use of a hybrid classification employing random forests extended this statistical clustering to an area of approximately 228,000 km2 of south-eastern Australia without the need to rely on catchments, landscape units or stream sections. This extension resulted in a highly accurate regionalisation at both 30-m and 2.5-km resolution, and a less-accurate 10-km classification that would be more appropriate for use at a continental scale. A smaller case study, of an area covering 27,000 km2, demonstrated that the method preserved the intra- and inter-catchment variability that is known to exist in local hydrology, based on previous research. Preliminary analysis linking the regionalisation to streamflow indices is promising, suggesting that the method could be used to predict streamflow behaviour in ungauged catchments. Our work therefore simplifies current classification frameworks that are becoming more popular in ecohydrology, while better retaining small-scale variability in hydrology, thus enabling future attempts to explain and visualise broad-scale hydrologic trends at the scale of

  10. 78 FR 54970 - Cotton Futures Classification: Optional Classification Procedure

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-09

    ... process in March 2012 (77 FR 5379). When verified by a futures classification, Smith-Doxey data serves as...; ] DEPARTMENT OF AGRICULTURE Agricultural Marketing Service 7 CFR Part 27 RIN 0581-AD33 Cotton Futures... for the addition of an optional cotton futures classification procedure--identified and known...

  11. 78 FR 68983 - Cotton Futures Classification: Optional Classification Procedure

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-18

    ...-Doxey data into the cotton futures classification process in March 2012 (77 FR 5379). When verified by a... October 9, 2013 (78 FR 54970). AMS received two comments: one from a national trade organization... Agricultural Marketing Service 7 CFR Part 27 RIN 0581-AD33 Cotton Futures Classification:...

  12. Identification of Microorganisms by High Resolution Tandem Mass Spectrometry with Accurate Statistical Significance.

    PubMed

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y; Drake, Steven K; Gucek, Marjan; Suffredini, Anthony F; Sacks, David B; Yu, Yi-Kuo

    2016-02-01

    Correct and rapid identification of microorganisms is the key to the success of many important applications in health and safety, including, but not limited to, infection treatment, food safety, and biodefense. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is challenging correct microbial identification because of the large number of choices present. To properly disentangle candidate microbes, one needs to go beyond apparent morphology or simple 'fingerprinting'; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptidome profiles of microbes to better separate them and by designing an analysis method that yields accurate statistical significance. Here, we present an analysis pipeline that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using MS/MS data of 81 samples, each composed of a single known microorganism, that the proposed pipeline can correctly identify microorganisms at least at the genus and species levels. We have also shown that the proposed pipeline computes accurate statistical significances, i.e., E-values for identified peptides and unified E-values for identified microorganisms. The proposed analysis pipeline has been implemented in MiCId, a freely available software for Microorganism Classification and Identification. MiCId is available for download at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.

  13. Identification of Microorganisms by High Resolution Tandem Mass Spectrometry with Accurate Statistical Significance

    NASA Astrophysics Data System (ADS)

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y.; Drake, Steven K.; Gucek, Marjan; Suffredini, Anthony F.; Sacks, David B.; Yu, Yi-Kuo

    2016-02-01

    Correct and rapid identification of microorganisms is the key to the success of many important applications in health and safety, including, but not limited to, infection treatment, food safety, and biodefense. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is challenging correct microbial identification because of the large number of choices present. To properly disentangle candidate microbes, one needs to go beyond apparent morphology or simple `fingerprinting'; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptidome profiles of microbes to better separate them and by designing an analysis method that yields accurate statistical significance. Here, we present an analysis pipeline that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using MS/MS data of 81 samples, each composed of a single known microorganism, that the proposed pipeline can correctly identify microorganisms at least at the genus and species levels. We have also shown that the proposed pipeline computes accurate statistical significances, i.e., E-values for identified peptides and unified E-values for identified microorganisms. The proposed analysis pipeline has been implemented in MiCId, a freely available software for Microorganism Classification and Identification. MiCId is available for download at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.

  14. Reconstruction-classification method for quantitative photoacoustic tomography.

    PubMed

    Malone, Emma; Powell, Samuel; Cox, Ben T; Arridge, Simon

    2015-01-01

    We propose a combined reconstruction-classification method for simultaneously recovering absorption and scattering in turbid media from images of absorbed optical energy. This method exploits knowledge that optical parameters are determined by a limited number of classes to iteratively improve their estimate. Numerical experiments show that the proposed approach allows for accurate recovery of absorption and scattering in two and three dimensions, and delivers superior image quality with respect to traditional reconstruction-only approaches. PMID:26662815
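
    A schematic of the alternating iteration, heavily simplified: the reconstruction step is replaced by a toy gradient update and the tissue classes by a two-class k-means, so this shows the control flow only, not the paper's photoacoustic forward model.

        import numpy as np
        from sklearn.cluster import KMeans

        def gradient_step(mu, data):
            # stand-in for one iteration of a model-based reconstruction
            return mu - 0.1 * (mu - data)

        mu = np.zeros(1024)   # absorption estimate on a flattened 2-D grid
        data = np.random.default_rng(1).choice([0.01, 0.2], size=1024)  # synthetic target

        for _ in range(20):
            mu = gradient_step(mu, data)                 # reconstruction step
            km = KMeans(n_clusters=2, n_init=5).fit(mu.reshape(-1, 1))
            class_means = km.cluster_centers_[km.labels_].ravel()
            mu = 0.5 * mu + 0.5 * class_means            # relax toward class means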

  15. SPIDERz: SuPport vector classification for IDEntifying Redshifts

    NASA Astrophysics Data System (ADS)

    Jones, Evan; Singal, J.

    2016-08-01

    SPIDERz (SuPport vector classification for IDEntifying Redshifts) applies powerful support vector machine (SVM) optimization and statistical learning techniques to custom data sets to obtain accurate photometric redshift (photo-z) estimations. It is written for the IDL environment and can be applied to traditional data sets consisting of photometric band magnitudes, or alternatively to data sets with additional galaxy parameters (such as shape information) to investigate potential correlations between the extra galaxy parameters and redshift.
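
    SPIDERz itself is written for IDL; the Python sketch below only illustrates the underlying idea of casting photo-z estimation as support vector classification over narrow redshift bins (the data, bin width, and kernel settings are invented).

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(2)
        mags = rng.uniform(18, 25, size=(3000, 5))     # e.g. u, g, r, i, z bands
        z = rng.uniform(0, 2, size=3000)               # true redshifts
        z_class = np.digitize(z, bins=np.arange(0, 2, 0.05))  # bins of width 0.05

        clf = SVC(kernel="rbf", C=10).fit(mags[:2000], z_class[:2000])
        z_pred = (clf.predict(mags[2000:]) - 0.5) * 0.05      # back to bin centres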

  16. Reconstruction-classification method for quantitative photoacoustic tomography.

    PubMed

    Malone, Emma; Powell, Samuel; Cox, Ben T; Arridge, Simon

    2015-01-01

    We propose a combined reconstruction-classification method for simultaneously recovering absorption and scattering in turbid media from images of absorbed optical energy. This method exploits knowledge that optical parameters are determined by a limited number of classes to iteratively improve their estimate. Numerical experiments show that the proposed approach allows for accurate recovery of absorption and scattering in two and three dimensions, and delivers superior image quality with respect to traditional reconstruction-only approaches.

  17. Accurate basis set truncation for wavefunction embedding

    NASA Astrophysics Data System (ADS)

    Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.

    2013-07-01

    Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012); doi:10.1021/ct300544e] to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.
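
    The projection at the heart of the cited method can be sketched in a few lines: occupied orbitals of the environment subsystem B are level-shifted out of reach of the subsystem A calculation. The matrices below are random placeholders, and the basis-set truncation step (this paper's actual contribution) is omitted.

        import numpy as np

        n = 10
        rng = np.random.default_rng(3)
        S = np.eye(n) + 0.01 * rng.normal(size=(n, n)); S = 0.5 * (S + S.T)   # AO overlap
        D_B = rng.normal(size=(n, n)); D_B = D_B @ D_B.T   # placeholder density of B
        F = rng.normal(size=(n, n)); F = 0.5 * (F + F.T)   # placeholder Fock matrix

        mu = 1.0e6                     # large level-shift parameter
        P = mu * S @ D_B @ S           # projector onto the occupied space of B
        F_embedded = F + P             # effective Fock operator for subsystem A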

  18. Accurate determination of characteristic relative permeability curves

    NASA Astrophysics Data System (ADS)

    Krause, Michael H.; Benson, Sally M.

    2015-09-01

    A recently developed technique to accurately characterize sub-core scale heterogeneity is applied to investigate the factors responsible for flowrate-dependent effective relative permeability curves measured on core samples in the laboratory. The dependency of laboratory measured relative permeability on flowrate has long been both supported and challenged by a number of investigators. Studies have shown that this apparent flowrate dependency is a result of both sub-core scale heterogeneity and outlet boundary effects. However this has only been demonstrated numerically for highly simplified models of porous media. In this paper, flowrate dependency of effective relative permeability is demonstrated using two rock cores, a Berea Sandstone and a heterogeneous sandstone from the Otway Basin Pilot Project in Australia. Numerical simulations of steady-state coreflooding experiments are conducted at a number of injection rates using a single set of input characteristic relative permeability curves. Effective relative permeability is then calculated from the simulation data using standard interpretation methods for calculating relative permeability from steady-state tests. Results show that simplified approaches may be used to determine flowrate-independent characteristic relative permeability provided flow rate is sufficiently high, and the core heterogeneity is relatively low. It is also shown that characteristic relative permeability can be determined at any typical flowrate, and even for geologically complex models, when using accurate three-dimensional models.

  19. How Accurately can we Calculate Thermal Systems?

    SciTech Connect

    Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A

    2004-04-20

    I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as k_eff, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully, eventually this will lead to improvements in both our codes and the thermal scattering models that they use in the future. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium-fueled thermal system, i.e., our typical thermal reactors.

  20. Accurate Stellar Parameters for Exoplanet Host Stars

    NASA Astrophysics Data System (ADS)

    Brewer, John Michael; Fischer, Debra; Basu, Sarbani; Valenti, Jeff A.

    2015-01-01

    A large impediment to our understanding of planet formation is obtaining a clear picture of planet radii and densities. Although determining precise ratios between a planet and its stellar host is relatively easy, determining accurate stellar parameters is still a difficult and costly undertaking. High resolution spectral analysis has traditionally yielded precise values for some stellar parameters, but stars in common between catalogs from different authors or analyzed using different techniques often show offsets far in excess of their uncertainties. Most analyses now use some external constraint, when available, to break observed degeneracies between surface gravity, effective temperature, and metallicity which can otherwise lead to correlated errors in results. However, these external constraints are impossible to obtain for all stars and can require more costly observations than the initial high resolution spectra. We demonstrate that these discrepancies can be mitigated by use of a larger line list that has carefully tuned atomic line data. We use an iterative modeling technique that does not require external constraints. We compare the surface gravity obtained with our spectral synthesis modeling to asteroseismically determined values for 42 Kepler stars. Our analysis agrees well with only a 0.048 dex offset and an rms scatter of 0.05 dex. Such accurate stellar gravities can reduce the primary source of uncertainty in radii by almost an order of magnitude over unconstrained spectral analysis.

  1. 32 CFR 2001.14 - Classification challenges.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 6 2014-07-01 2014-07-01 false Classification challenges. 2001.14 Section 2001... Classification § 2001.14 Classification challenges. (a) Challenging classification. Authorized holders, including authorized holders outside the classifying agency, who want to challenge the classification status...

  2. 43 CFR 2461.1 - Proposed classifications.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 43 Public Lands: Interior 2 2014-10-01 2014-10-01 false Proposed classifications. 2461.1 Section... MANAGEMENT, DEPARTMENT OF THE INTERIOR LAND RESOURCE MANAGEMENT (2000) BUREAU INITIATED CLASSIFICATION SYSTEM Multiple-Use Classification Procedures § 2461.1 Proposed classifications. (a) Proposed classifications...

  3. 22 CFR 9.4 - Original classification.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 22 Foreign Relations 1 2012-04-01 2012-04-01 false Original classification. 9.4 Section 9.4... classification. (a) Definition. Original classification is the initial determination that certain information... classification. (b) Classification levels. (1) Top Secret shall be applied to information the...

  4. 43 CFR 2461.4 - Changing classifications.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 43 Public Lands: Interior 2 2012-10-01 2012-10-01 false Changing classifications. 2461.4 Section... MANAGEMENT, DEPARTMENT OF THE INTERIOR LAND RESOURCE MANAGEMENT (2000) BUREAU INITIATED CLASSIFICATION SYSTEM Multiple-Use Classification Procedures § 2461.4 Changing classifications. Classifications may be...

  5. 7 CFR 27.34 - Classification procedure.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Classification procedure. 27.34 Section 27.34... REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Classification and Micronaire Determinations § 27.34 Classification procedure. Classification shall proceed as rapidly as possible, but...

  6. 43 CFR 2461.1 - Proposed classifications.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 43 Public Lands: Interior 2 2012-10-01 2012-10-01 false Proposed classifications. 2461.1 Section... MANAGEMENT, DEPARTMENT OF THE INTERIOR LAND RESOURCE MANAGEMENT (2000) BUREAU INITIATED CLASSIFICATION SYSTEM Multiple-Use Classification Procedures § 2461.1 Proposed classifications. (a) Proposed classifications...

  7. 43 CFR 2461.4 - Changing classifications.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 43 Public Lands: Interior 2 2013-10-01 2013-10-01 false Changing classifications. 2461.4 Section... MANAGEMENT, DEPARTMENT OF THE INTERIOR LAND RESOURCE MANAGEMENT (2000) BUREAU INITIATED CLASSIFICATION SYSTEM Multiple-Use Classification Procedures § 2461.4 Changing classifications. Classifications may be...

  8. 7 CFR 27.34 - Classification procedure.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Classification procedure. 27.34 Section 27.34... REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Classification and Micronaire Determinations § 27.34 Classification procedure. Classification shall proceed as rapidly as possible, but...

  9. 43 CFR 2461.1 - Proposed classifications.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 43 Public Lands: Interior 2 2013-10-01 2013-10-01 false Proposed classifications. 2461.1 Section... MANAGEMENT, DEPARTMENT OF THE INTERIOR LAND RESOURCE MANAGEMENT (2000) BUREAU INITIATED CLASSIFICATION SYSTEM Multiple-Use Classification Procedures § 2461.1 Proposed classifications. (a) Proposed classifications...

  10. 22 CFR 9.6 - Derivative classification.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... CFR 2001.22. (c) Department of State Classification Guide. The Department of State Classification... 22 Foreign Relations 1 2013-04-01 2013-04-01 false Derivative classification. 9.6 Section 9.6... classification. (a) Definition. Derivative classification is the incorporating, paraphrasing, restating...

  11. 22 CFR 9.4 - Original classification.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 22 Foreign Relations 1 2013-04-01 2013-04-01 false Original classification. 9.4 Section 9.4... classification. (a) Definition. Original classification is the initial determination that certain information... classification. (b) Classification levels. (1) Top Secret shall be applied to information the...

  12. 22 CFR 9.6 - Derivative classification.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... CFR 2001.22. (c) Department of State Classification Guide. The Department of State Classification... 22 Foreign Relations 1 2012-04-01 2012-04-01 false Derivative classification. 9.6 Section 9.6... classification. (a) Definition. Derivative classification is the incorporating, paraphrasing, restating...

  13. 32 CFR 2001.14 - Classification challenges.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 6 2011-07-01 2011-07-01 false Classification challenges. 2001.14 Section 2001... Classification § 2001.14 Classification challenges. (a) Challenging classification. Authorized holders, including authorized holders outside the classifying agency, who want to challenge the classification status...

  14. 22 CFR 9.6 - Derivative classification.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... CFR 2001.22. (c) Department of State Classification Guide. The Department of State Classification... 22 Foreign Relations 1 2014-04-01 2014-04-01 false Derivative classification. 9.6 Section 9.6... classification. (a) Definition. Derivative classification is the incorporating, paraphrasing, restating...

  15. 22 CFR 9.4 - Original classification.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 22 Foreign Relations 1 2014-04-01 2014-04-01 false Original classification. 9.4 Section 9.4... classification. (a) Definition. Original classification is the initial determination that certain information... classification. (b) Classification levels. (1) Top Secret shall be applied to information the...

  16. 7 CFR 27.34 - Classification procedure.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Classification procedure. 27.34 Section 27.34... REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Classification and Micronaire Determinations § 27.34 Classification procedure. Classification shall proceed as rapidly as possible, but...

  17. 5 CFR 9701.221 - Classification requirements.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 5 Administrative Personnel 3 2012-01-01 2012-01-01 false Classification requirements. 9701.221... HUMAN RESOURCES MANAGEMENT SYSTEM Classification Classification Process § 9701.221 Classification.... (d) Classification decisions become effective on the date designated by the authorized...

  18. 43 CFR 2461.4 - Changing classifications.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 43 Public Lands: Interior 2 2014-10-01 2014-10-01 false Changing classifications. 2461.4 Section... MANAGEMENT, DEPARTMENT OF THE INTERIOR LAND RESOURCE MANAGEMENT (2000) BUREAU INITIATED CLASSIFICATION SYSTEM Multiple-Use Classification Procedures § 2461.4 Changing classifications. Classifications may be...

  19. 7 CFR 27.34 - Classification procedure.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Classification procedure. 27.34 Section 27.34... REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Classification and Micronaire Determinations § 27.34 Classification procedure. Classification shall proceed as rapidly as possible, but...

  20. 32 CFR 2001.14 - Classification challenges.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 6 2012-07-01 2012-07-01 false Classification challenges. 2001.14 Section 2001... Classification § 2001.14 Classification challenges. (a) Challenging classification. Authorized holders, including authorized holders outside the classifying agency, who want to challenge the classification status...

  1. 5 CFR 9901.221 - Classification requirements.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 3 2011-01-01 2011-01-01 false Classification requirements. 9901.221... SECURITY PERSONNEL SYSTEM (NSPS) Classification Classification Process § 9901.221 Classification... jobs as he or she considers necessary for the purpose of this section. (d) A classification action...

  2. 32 CFR 2400.15 - Classification guides.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 6 2012-07-01 2012-07-01 false Classification guides. 2400.15 Section 2400.15... Derivative Classification § 2400.15 Classification guides. (a) OSTP shall issue and maintain classification guides to facilitate the proper and uniform derivative classification of information. These guides...

  3. 32 CFR 2400.15 - Classification guides.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 6 2014-07-01 2014-07-01 false Classification guides. 2400.15 Section 2400.15... Derivative Classification § 2400.15 Classification guides. (a) OSTP shall issue and maintain classification guides to facilitate the proper and uniform derivative classification of information. These guides...

  4. 5 CFR 9701.221 - Classification requirements.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 3 2011-01-01 2011-01-01 false Classification requirements. 9701.221... HUMAN RESOURCES MANAGEMENT SYSTEM Classification Classification Process § 9701.221 Classification.... (d) Classification decisions become effective on the date designated by the authorized...

  5. 32 CFR 2001.14 - Classification challenges.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 6 2013-07-01 2013-07-01 false Classification challenges. 2001.14 Section 2001... Classification § 2001.14 Classification challenges. (a) Challenging classification. Authorized holders, including authorized holders outside the classifying agency, who want to challenge the classification status...

  6. 22 CFR 9.6 - Derivative classification.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... CFR 2001.22. (c) Department of State Classification Guide. The Department of State Classification... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Derivative classification. 9.6 Section 9.6... classification. (a) Definition. Derivative classification is the incorporating, paraphrasing, restating...

  7. 32 CFR 2001.14 - Classification challenges.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Classification challenges. 2001.14 Section 2001... Classification § 2001.14 Classification challenges. (a) Challenging classification. Authorized holders, including authorized holders outside the classifying agency, who want to challenge the classification status...

  8. 14 CFR 1203.412 - Classification guides.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Classification guides. 1203.412 Section... PROGRAM Guides for Original Classification § 1203.412 Classification guides. (a) General. A classification guide, based upon classification determinations made by appropriate program and...

  9. 32 CFR 2400.15 - Classification guides.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Classification guides. 2400.15 Section 2400.15... Derivative Classification § 2400.15 Classification guides. (a) OSTP shall issue and maintain classification guides to facilitate the proper and uniform derivative classification of information. These guides...

  10. ALHAMBRA survey: morphological classification

    NASA Astrophysics Data System (ADS)

    Pović, M.; Huertas-Company, M.; Márquez, I.; Masegosa, J.; Aguerri, J. A. López; Husillos, C.; Molino, A.; Cristóbal-Hornillos, D.

    2015-03-01

    The Advanced Large Homogeneous Area Medium Band Redshift Astronomical (ALHAMBRA) survey is a photometric survey designed to study systematically cosmic evolution and cosmic variance (Moles et al. 2008). It employs 20 continuous medium-band filters (3500 - 9700 Å), plus JHK near-infrared (NIR) bands, which enable measurements of photometric redshifts with good accuracy. ALHAMBRA covers > 4 deg2 in eight discontinuous regions (~ 0.5 deg2 per region); of these, seven fields overlap with other extragalactic, multiwavelength surveys (DEEP2, SDSS, COSMOS, HDF-N, Groth, ELAIS-N1). We detect > 600,000 sources, reaching a depth of R(AB) ~ 25.0 and photometric accuracy of 2-4% (Husillos et al., in prep.). Photometric redshifts are measured using the Bayesian Photometric Redshift (BPZ) code (Benítez et al. 2000), reaching one of the best accuracies to date, δz/z <= 1.2% (Molino et al., in prep.). To deal with the morphological classification of galaxies in the ALHAMBRA survey (Pović et al., in prep.), we used the galaxy Support Vector Machine code (galSVM; Huertas-Company 2008, 2009), one of the new non-parametric methods for morphological classification, especially useful when dealing with low resolution and high-redshift data. To test the accuracy of our morphological classification we used a sample of 3000 local, visually classified galaxies (Nair & Abraham 2010), moving them to conditions typical of our ALHAMBRA data (taking into account the background, redshift and magnitude distributions, etc.), and measuring their morphology using galSVM. Finally, we measured the morphology of ALHAMBRA galaxies, obtaining for each source seven morphological parameters (two concentration indexes, asymmetry, Gini, M20 moment of light, smoothness, and elongation), the probability that the source belongs to the early or late type, and its error. Comparing ALHAMBRA morphology with COSMOS/ACS morphology (obtained with the same method), we expect to have qualitative separation in two main morphological

  11. Retinal Connectomics: Towards Complete, Accurate Networks

    PubMed Central

    Marc, Robert E.; Jones, Bryan W.; Watt, Carl B.; Anderson, James R.; Sigulinsky, Crystal; Lauritzen, Scott

    2013-01-01

    Connectomics is a strategy for mapping complex neural networks based on high-speed automated electron optical imaging, computational assembly of neural data volumes, web-based navigational tools to explore 10^12-10^15 byte (terabyte to petabyte) image volumes, and annotation and markup tools to convert images into rich networks with cellular metadata. These collections of network data and associated metadata, analyzed using tools from graph theory and classification theory, can be merged with classical systems theory, giving a more completely parameterized view of how biologic information processing systems are implemented in retina and brain. Networks have two separable features: topology and connection attributes. The first findings from connectomics strongly validate the idea that the topologies of complete retinal networks are far more complex than the simple schematics that emerged from classical anatomy. In particular, connectomics has permitted an aggressive refactoring of the retinal inner plexiform layer, demonstrating that network function cannot be simply inferred from stratification; exposing the complex geometric rules for inserting different cells into a shared network; revealing unexpected bidirectional signaling pathways between mammalian rod and cone systems; documenting selective feedforward systems, novel candidate signaling architectures, new coupling motifs, and the highly complex architecture of the mammalian AII amacrine cell. This is but the beginning, as the underlying principles of connectomics are readily transferrable to non-neural cell complexes and provide new contexts for assessing intercellular communication. PMID:24016532

  12. Retinal connectomics: towards complete, accurate networks.

    PubMed

    Marc, Robert E; Jones, Bryan W; Watt, Carl B; Anderson, James R; Sigulinsky, Crystal; Lauritzen, Scott

    2013-11-01

    Connectomics is a strategy for mapping complex neural networks based on high-speed automated electron optical imaging, computational assembly of neural data volumes, web-based navigational tools to explore 10^12-10^15 byte (terabyte to petabyte) image volumes, and annotation and markup tools to convert images into rich networks with cellular metadata. These collections of network data and associated metadata, analyzed using tools from graph theory and classification theory, can be merged with classical systems theory, giving a more completely parameterized view of how biologic information processing systems are implemented in retina and brain. Networks have two separable features: topology and connection attributes. The first findings from connectomics strongly validate the idea that the topologies of complete retinal networks are far more complex than the simple schematics that emerged from classical anatomy. In particular, connectomics has permitted an aggressive refactoring of the retinal inner plexiform layer, demonstrating that network function cannot be simply inferred from stratification; exposing the complex geometric rules for inserting different cells into a shared network; revealing unexpected bidirectional signaling pathways between mammalian rod and cone systems; documenting selective feedforward systems, novel candidate signaling architectures, new coupling motifs, and the highly complex architecture of the mammalian AII amacrine cell. This is but the beginning, as the underlying principles of connectomics are readily transferrable to non-neural cell complexes and provide new contexts for assessing intercellular communication. PMID:24016532

  13. Classification of personality disorder.

    PubMed

    Tyrer, P; Alexander, J

    1979-08-01

    An interview schedule was used to record the personality traits of 130 psychiatric patients, 65 with a primary clinical diagnosis of personality disorder and 65 with other diagnoses. The results were analysed by factor analysis and three types of cluster analysis. Factor analysis showed a similar structure of personality variables in both groups of patients, supporting the notion that personality disorders differ only in degree from the personalities of other psychiatric patients. Cluster analysis revealed five discrete categories; sociopathic, passive-dependent, anankastic, schizoid and a non-personality-disordered group. Of all the personality-disordered patients 63 per cent fell into the passive-dependent or sociopathic category. The results suggest that the current classification of personality disorder could be simplified. PMID:497619

  14. Bayesian classification theory

    NASA Technical Reports Server (NTRS)

    Hanson, Robin; Stutz, John; Cheeseman, Peter

    1991-01-01

    The task of inferring a set of classes and class descriptions most likely to explain a given data set can be placed on a firm theoretical foundation using Bayesian statistics. Within this framework and using various mathematical and algorithmic approximations, the AutoClass system searches for the most probable classifications, automatically choosing the number of classes and complexity of class descriptions. A simpler version of AutoClass has been applied to many large real data sets, has discovered new independently-verified phenomena, and has been released as a robust software package. Recent extensions allow attributes to be selectively correlated within particular classes, and allow classes to inherit or share model parameters through a class hierarchy. We summarize the mathematical foundations of AutoClass.
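
    A minimal sketch in the AutoClass spirit, with the caveat that AutoClass uses full Bayesian marginal likelihoods and richer attribute models; here a BIC-selected Gaussian mixture stands in for "automatically choosing the number of classes".

        import numpy as np
        from sklearn.mixture import GaussianMixture

        # Synthetic data drawn from two latent classes.
        rng = np.random.default_rng(4)
        X = np.vstack([rng.normal(0, 1, (200, 3)), rng.normal(4, 1, (150, 3))])

        # Fit mixtures with 1..6 classes and let a model-selection criterion
        # choose the class count.
        models = [GaussianMixture(n_components=k, random_state=0).fit(X)
                  for k in range(1, 7)]
        best = min(models, key=lambda m: m.bic(X))
        print("chosen number of classes:", best.n_components)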

  15. Phenomenological Classification of Depressive Disorders.

    ERIC Educational Resources Information Center

    Overall, John E.; Hollister, Leo E.

    1980-01-01

    Empirical cluster analysis methods have been the basis for development of a phenomenological classification of depressive disorders. Verbal descriptions of the distinguishing features of anxious, hostile, agitated, and retarded subtypes of depression are provided to facilitate development of clinical classification concepts. A review of validity…

  16. A New Classification of Sandstone.

    ERIC Educational Resources Information Center

    Brewer, Roger Clay; And Others

    1990-01-01

    Introduced is a sandstone classification scheme intended for use with thin-sections and hand specimens. Detailed is a step-by-step classification scheme. A graphic presentation of the scheme is presented. This method is compared with other existing schemes. (CW)

  17. Classification of Rainbows

    NASA Astrophysics Data System (ADS)

    Ricard, J. L.; Peter, A. L.; Barckicke, J.

    2015-12-01

    Rainbows are the most beautiful and most spectacular optical atmospheric phenomenon. Humphreys (1964) pointedly noted that "the 'explanations' generally given of the rainbow [in textbooks] may well be said to explain beautifully that which does not occur, and to leave unexplained that which does" . . . "The records of close observations of rainbows soon show that not even the colors are always the same". Textbooks stress that the main factor affecting the aspect of the rainbow is the radius of the water droplets. In his well-known textbook entitled "The Nature of Light & Colour in the Open Air", Minnaert (1954) gives the chief features of the rainbow depending on the diameter of the drops producing it. For this study, we have gathered hundreds of pictures of primary bows. We sort the pictures into classes, defined in such a way that rainbows belonging to the same class look similar. Our results are surprising and do not confirm Minnaert's classification. In practice, the size of the water droplets is only a minor factor controlling the overall aspect of the rainbow. The main factor appears to be the height of the sun above the horizon. At sunset, the width of the red band increases, while the width of the other bands of colours decreases. The orange, the violet, the blue and the green bands disappear completely, in this order. At the end, the primary bow is mainly red and slightly yellow. [Figure: contrast-enhanced photograph of a primary bow (prepared by Andrew Dunn).]

  18. Robotic Rock Classification

    NASA Technical Reports Server (NTRS)

    Hebert, Martial

    1999-01-01

    This report describes a three-month research program undertaken jointly by the Robotics Institute at Carnegie Mellon University and Ames Research Center as part of Ames' Joint Research Initiative (JRI). The work was conducted at the Ames Research Center by Mr. Liam Pedersen, a graduate student in the CMU Ph.D. program in Robotics, under the supervision of Dr. Ted Roush at the Space Science Division of the Ames Research Center from May 15, 1999 to August 15, 1999. Dr. Martial Hebert is Mr. Pedersen's research adviser at CMU and is Principal Investigator of this Grant. The goal of this project is to investigate and implement methods suitable for a robotic rover to autonomously identify rocks and minerals in its vicinity, and to statistically characterize the local geological environment. Although the primary sensors for these tasks are a reflection spectrometer and color camera, the goal is to create a framework under which data from multiple sensors, and multiple readings on the same object, can be combined in a principled manner. Furthermore, it is envisioned that knowledge of the local area, either a priori or gathered by the robot, will be used to improve classification accuracy. The key results obtained during this project are: the continuation of the development of a rock classifier; the development of theoretical statistical methods; the development of methods for evaluating and selecting sensors; and experimentation with data mining techniques on the Ames spectral library. The results of this work are being applied at CMU, in particular in the context of the Winter 99 Antarctica expedition, in which the classification techniques will be used on the Nomad robot. Conversely, the software developed based on those techniques will continue to be made available to NASA Ames, and the data collected from the Nomad experiments will also be made available.

  19. Vietnamese Document Representation and Classification

    NASA Astrophysics Data System (ADS)

    Nguyen, Giang-Son; Gao, Xiaoying; Andreae, Peter

    Vietnamese is very different from English and little research has been done on Vietnamese document classification, or indeed, on any kind of Vietnamese language processing, and only a few small corpora are available for research. We created a large Vietnamese text corpus with about 18000 documents, and manually classified them based on different criteria such as topics and styles, giving several classification tasks of different difficulty levels. This paper introduces a new syllable-based document representation at the morphological level of the language for efficient classification. We tested the representation on our corpus with different classification tasks using six classification algorithms and two feature selection techniques. Our experiments show that the new representation is effective for Vietnamese categorization, and suggest that best performance can be achieved using syllable-pair document representation, an SVM with a polynomial kernel as the learning algorithm, and using Information gain and an external dictionary for feature selection.
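
    A sketch of the best-performing configuration reported above; since Vietnamese syllables are whitespace-separated, syllable and syllable-pair features reduce to word unigrams and bigrams. The tiny corpus and labels below are placeholders, and the feature selection steps are omitted.

        from sklearn.pipeline import make_pipeline
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.svm import SVC

        docs = ["hoc sinh di hoc", "bao chi dua tin", "doi bong thang tran"]
        labels = ["education", "news", "sports"]

        model = make_pipeline(
            CountVectorizer(ngram_range=(1, 2)),   # syllables and syllable pairs
            SVC(kernel="poly", degree=2),          # polynomial-kernel SVM
        )
        model.fit(docs, labels)
        print(model.predict(["doi bong da thang"]))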

  20. [Classification of viruses by computer].

    PubMed

    Ageeva, O N; Andzhaparidze, O G; Kibardin, V M; Nazarova, G M; Pleteneva, E A

    1982-01-01

    The study used an information mass containing data on 83 viruses characterized by 41 markers. The suitability of one of the variants of cluster analysis for virus classification was demonstrated. It was established that certain stages of the automatic allotment of viruses into groups by the degree of similarity of their properties end in the formation of groups which consist of viruses sufficiently close to each other in their properties and sufficiently isolated. Comparison of these groups with the classification proposed by the ICVT established their correspondence to individual families. Analysis of the obtained classification system permits sufficiently grounded conclusions to be drawn with regard to the classification position of certain viruses whose classification has not yet been completed by the ICVT.

  1. Applications of feature selection. [development of classification algorithms for LANDSAT data

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.

    1976-01-01

    The use of satellite-acquired (LANDSAT) multispectral scanner (MSS) data to conduct an inventory of some crop of economic interest, such as wheat, over a large geographical area is considered in relation to the development of accurate and efficient algorithms for data classification. The dimension of the measurement space and the computational load for a classification algorithm are increased by the use of multitemporal measurements. Feature selection/combination techniques used to reduce the dimensionality of the problem are described.
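
    A sketch of the feature selection idea with synthetic stand-ins for multitemporal MSS measurements (4 bands x 4 acquisition dates); the selector and classifier below are illustrative choices, not the report's algorithms.

        import numpy as np
        from sklearn.feature_selection import SequentialFeatureSelector
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(5)
        X = rng.normal(size=(500, 16))          # 16 multitemporal band features
        y = rng.integers(0, 2, size=500)        # wheat / non-wheat labels

        # Pick a small feature subset that preserves class separability,
        # reducing the classifier's measurement space from 16 to 4 dimensions.
        lda = LinearDiscriminantAnalysis()
        selector = SequentialFeatureSelector(lda, n_features_to_select=4).fit(X, y)
        X_reduced = selector.transform(X)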

  2. Automatic classification of time-variable X-ray sources

    SciTech Connect

    Lo, Kitty K.; Farrell, Sean; Murphy, Tara; Gaensler, B. M.

    2014-05-01

    To maximize the discovery potential of future synoptic surveys, especially in the field of transient science, it will be necessary to use automatic classification to identify some of the astronomical sources. The data mining technique of supervised classification is suitable for this problem. Here, we present a supervised learning method to automatically classify variable X-ray sources in the Second XMM-Newton Serendipitous Source Catalog (2XMMi-DR2). Random Forest is our classifier of choice since it is one of the most accurate learning algorithms available. Our training set consists of 873 variable sources and their features are derived from time series, spectra, and other multi-wavelength contextual information. The 10-fold cross validation accuracy of the training data is ∼97% on a 7-class data set. We applied the trained classification model to 411 unknown variable 2XMM sources to produce a probabilistically classified catalog. Using the classification margin and the Random Forest derived outlier measure, we identified 12 anomalous sources, of which 2XMM J180658.7–500250 appears to be the most unusual source in the sample. Its X-ray spectrum is suggestive of an ultraluminous X-ray source, but its variability makes it highly unusual. Machine-learned classification and anomaly detection will facilitate scientific discoveries in the era of all-sky surveys.
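
    A sketch of the workflow with synthetic placeholders for the 873 training sources and their features; the margin-based anomaly flagging mirrors the approach described, though the paper's exact outlier measure differs.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(6)
        X = rng.normal(size=(873, 12))          # time-series/spectral features
        y = rng.integers(0, 7, size=873)        # 7 variability classes

        rf = RandomForestClassifier(n_estimators=500, random_state=0)
        print("10-fold CV accuracy:", cross_val_score(rf, X, y, cv=10).mean())

        rf.fit(X, y)
        proba = rf.predict_proba(X)
        sorted_p = np.sort(proba, axis=1)
        margin = sorted_p[:, -1] - sorted_p[:, -2]   # top-two class-probability gap
        anomalies = np.argsort(margin)[:12]          # 12 least-confident sources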

  3. Multiclass cancer classification based on gene expression comparison

    PubMed Central

    Yang, Sitan; Naiman, Daniel Q.

    2016-01-01

    As the complexity and heterogeneity of cancer is being increasingly appreciated through genomic analyses, microarray-based cancer classification comprising multiple discriminatory molecular markers is an emerging trend. Such multiclass classification problems pose new methodological and computational challenges for developing novel and effective statistical approaches. In this paper, we introduce a new approach for classifying multiple disease states associated with cancer based on gene expression profiles. Our method focuses on detecting small sets of genes in which the relative comparison of their expression values leads to class discrimination. For an m-class problem, the classification rule typically depends on a small number of m-gene sets, which provide transparent decision boundaries and allow for potential biological interpretations. We first test our approach on seven common gene expression datasets and compare it with popular classification methods including support vector machines and random forests. We then consider an extremely large leukemia cohort to further assess its effectiveness. In both experiments, our method yields results comparable or even superior to benchmark classifiers. In addition, we demonstrate that our approach can integrate pathway analysis of gene expression to provide accurate and biologically meaningful classification. PMID:24918456
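
    A minimal sketch of the relative-comparison idea for the two-class case: the decision depends only on which of two genes is more highly expressed, which is what makes the rule transparent and rank-based. The pair search below is brute force, the m-class extension is omitted, and the data are synthetic.

        import numpy as np

        rng = np.random.default_rng(7)
        expr = rng.normal(size=(100, 50))       # 100 samples x 50 genes
        y = rng.integers(0, 2, size=100)        # two disease states

        def pair_score(i, j):
            # difference in P(gene i > gene j) between the two classes
            gt = expr[:, i] > expr[:, j]
            return abs(gt[y == 0].mean() - gt[y == 1].mean())

        scores = {(i, j): pair_score(i, j)
                  for i in range(50) for j in range(i + 1, 50)}
        best_i, best_j = max(scores, key=scores.get)

        # One-pair decision rule: predict class from a single comparison.
        predict = lambda x: int(x[best_i] > x[best_j])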

  4. Automatic Classification of Time-variable X-Ray Sources

    NASA Astrophysics Data System (ADS)

    Lo, Kitty K.; Farrell, Sean; Murphy, Tara; Gaensler, B. M.

    2014-05-01

    To maximize the discovery potential of future synoptic surveys, especially in the field of transient science, it will be necessary to use automatic classification to identify some of the astronomical sources. The data mining technique of supervised classification is suitable for this problem. Here, we present a supervised learning method to automatically classify variable X-ray sources in the Second XMM-Newton Serendipitous Source Catalog (2XMMi-DR2). Random Forest is our classifier of choice since it is one of the most accurate learning algorithms available. Our training set consists of 873 variable sources and their features are derived from time series, spectra, and other multi-wavelength contextual information. The 10-fold cross validation accuracy of the training data is ~97% on a 7-class data set. We applied the trained classification model to 411 unknown variable 2XMM sources to produce a probabilistically classified catalog. Using the classification margin and the Random Forest derived outlier measure, we identified 12 anomalous sources, of which 2XMM J180658.7-500250 appears to be the most unusual source in the sample. Its X-ray spectrum is suggestive of an ultraluminous X-ray source, but its variability makes it highly unusual. Machine-learned classification and anomaly detection will facilitate scientific discoveries in the era of all-sky surveys.

  5. Criminal exposure.

    PubMed

    1999-09-01

    In August, an HIV-positive man pleaded guilty to sexually assaulting a 14-year-old boy. The sleeping boy awoke to find [name removed] sexually assaulting him while watching a pornographic video. [Name removed] pleaded guilty to assault with intent to rape a child. In addition, [name removed] received three counts of indecent assault and battery on a child, and exposure of pornographic material to a minor. [Name removed] will remain on probation for five years, although the prosecution had recommended sentencing [name removed] to four or five years in prison. The boy continues to be tested for HIV.

  6. Highly accurate articulated coordinate measuring machine

    DOEpatents

    Bieg, Lothar F.; Jokiel, Jr., Bernhard; Ensz, Mark T.; Watson, Robert D.

    2003-12-30

    Disclosed is a highly accurate articulated coordinate measuring machine, comprising a revolute joint, comprising a circular encoder wheel, having an axis of rotation; a plurality of marks disposed around at least a portion of the circumference of the encoder wheel; bearing means for supporting the encoder wheel, while permitting free rotation of the encoder wheel about the wheel's axis of rotation; and a sensor, rigidly attached to the bearing means, for detecting the motion of at least some of the marks as the encoder wheel rotates; a probe arm, having a proximal end rigidly attached to the encoder wheel, and having a distal end with a probe tip attached thereto; and coordinate processing means, operatively connected to the sensor, for converting the output of the sensor into a set of cylindrical coordinates representing the position of the probe tip relative to a reference cylindrical coordinate system.
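
    A toy version of the coordinate processing step: an encoder count is converted to a joint angle and reported, together with the fixed arm geometry, as cylindrical coordinates. The counts-per-revolution and dimensions are invented for illustration; the patent does not specify them.

        import math

        COUNTS_PER_REV = 72000      # hypothetical encoder resolution
        ARM_LENGTH_MM = 300.0       # hypothetical probe-arm radius
        ARM_HEIGHT_MM = 120.0       # hypothetical probe-tip height

        def probe_tip(encoder_count):
            # angle of the revolute joint from the mark count
            theta = 2.0 * math.pi * (encoder_count % COUNTS_PER_REV) / COUNTS_PER_REV
            return theta, ARM_LENGTH_MM, ARM_HEIGHT_MM   # (theta, r, z)

        print(probe_tip(18000))     # quarter turn -> theta = pi/2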

  7. Practical aspects of spatially high accurate methods

    NASA Technical Reports Server (NTRS)

    Godfrey, Andrew G.; Mitchell, Curtis R.; Walters, Robert W.

    1992-01-01

    The computational qualities of high order spatially accurate methods for the finite volume solution of the Euler equations are presented. Two dimensional essentially non-oscillatory (ENO), k-exact, and 'dimension by dimension' ENO reconstruction operators are discussed and compared in terms of reconstruction and solution accuracy, computational cost and oscillatory behavior in supersonic flows with shocks. Inherent steady state convergence difficulties are demonstrated for adaptive stencil algorithms. An exact solution to the heat equation is used to determine reconstruction error, and the computational intensity is reflected in operation counts. Standard MUSCL differencing is included for comparison. Numerical experiments presented include the Ringleb flow for numerical accuracy and a shock reflection problem. A vortex-shock interaction demonstrates the ability of the ENO scheme to excel in simulating unsteady high-frequency flow physics.
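
    For reference, the standard MUSCL differencing mentioned as the comparison baseline can be written in a few lines of one-dimensional reconstruction with a minmod limiter (synthetic cell averages; the ENO and k-exact operators studied in the paper are considerably more involved).

        import numpy as np

        def minmod(a, b):
            # limited slope: zero at extrema, the smaller one-sided slope elsewhere
            return np.where(a * b > 0,
                            np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

        u = np.sin(np.linspace(0, 2 * np.pi, 50))        # cell averages
        slope = minmod(np.diff(u)[:-1], np.diff(u)[1:])  # per interior cell
        u_left_face = u[1:-1] - 0.5 * slope              # value at left cell face
        u_right_face = u[1:-1] + 0.5 * slope             # value at right cell face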

  8. Toward Accurate and Quantitative Comparative Metagenomics.

    PubMed

    Nayfach, Stephen; Pollard, Katherine S

    2016-08-25

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  9. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, Douglas D.

    1985-01-01

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  10. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, D.D.

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  11. Micron Accurate Absolute Ranging System: Range Extension

    NASA Technical Reports Server (NTRS)

    Smalley, Larry L.; Smith, Kely L.

    1999-01-01

    The purpose of this research is to investigate Fresnel diffraction as a means of obtaining absolute distance measurements with micron or greater accuracy. It is believed that such a system would prove useful to the Next Generation Space Telescope (NGST) as a non-intrusive, non-contact measuring system for use with secondary concentrator station-keeping systems. The present research attempts to validate past experiments and develop ways to apply the phenomenon of Fresnel diffraction to micron-accurate measurement. This report discusses past research on the phenomenon and the basis for the use of Fresnel diffraction in distance metrology. The apparatus, experimental procedures, and preliminary results of the recent investigations are discussed in detail. Continued research on extending the effective range of the Fresnel diffraction system, and the equipment this requires, are also described.

  12. Accurate metacognition for visual sensory memory representations.

    PubMed

    Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F

    2014-04-01

    The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception.

  13. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2003-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  14. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2002-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  15. Accurate metacognition for visual sensory memory representations.

    PubMed

    Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F

    2014-04-01

    The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception. PMID:24549293

  16. Accurate Telescope Mount Positioning with MEMS Accelerometers

    NASA Astrophysics Data System (ADS)

    Mészáros, L.; Jaskó, A.; Pál, A.; Csépány, G.

    2014-08-01

    This paper describes the advantages and challenges of applying microelectromechanical accelerometer systems (MEMS accelerometers) in order to attain precise, accurate, and stateless positioning of telescope mounts. This provides a method completely independent of other forms of electronic, optical, mechanical or magnetic feedback or real-time astrometry. Our goal is to reach the subarcminute range, which is considerably smaller than the field-of-view of conventional imaging telescope systems. Here we present how this subarcminute accuracy can be achieved with very cheap MEMS sensors, and we also detail how our procedures can be extended in order to attain even finer measurements. In addition, our paper discusses how a complete system design can be implemented in order to be a part of a telescope control system.

  17. Toward Accurate and Quantitative Comparative Metagenomics

    PubMed Central

    Nayfach, Stephen; Pollard, Katherine S.

    2016-01-01

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  18. Classification of 1H MR spectra of human brain neoplasms: the influence of preprocessing and computerized consensus diagnosis on classification accuracy.

    PubMed

    Somorjai, R L; Dolenko, B; Nikulin, A K; Pizzi, N; Scarth, G; Zhilkin, P; Halliday, W; Fewer, D; Hill, N; Ross, I; West, M; Smith, I C; Donnelly, S M; Kuesel, A C; Brière, K M

    1996-01-01

    We study how classification accuracy can be improved when both different data preprocessing methods and computerized consensus diagnosis (CCD) are applied to 1H magnetic resonance (MR) spectra of astrocytomas, meningiomas, and epileptic brain tissue. The MR spectra (360 MHz, 37 degrees C) of tissue specimens (biopsies) from subjects with meningiomas (95; 26 cases), astrocytomas (74; 26 cases), and epilepsy (37; 8 cases) were preprocessed by several methods. Each data set was partitioned into training and validation sets. Robust classification was carried out via linear discriminant analysis (LDA), artificial neural nets (NN), and CCD, and the results were compared with histopathological diagnosis of the MR specimens. Normalization of the relevant spectral regions affects classification accuracy significantly. The spectra-based average three-class classification accuracies of LDA and NN increased from 81.7% (unnormalized data sets) to 89.9% (normalized). CCD increased the classification accuracy of the normalized sets to an average of 91.8%. CCD invariably decreases the fraction of unclassifiable spectra. The same trends prevail, with improved results, for case-based classification. Preprocessing the 1H MR spectra is essential for accurate and reliable classification of astrocytomas, meningiomas, and nontumorous epileptic brain tissue. CCD improves classification accuracy, with an attendant decrease in the fraction of unclassifiable spectra or cases.
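
    A minimal sketch of the two steps the study found decisive, normalization of the relevant spectral region followed by linear discriminant analysis, is given below. The synthetic arrays, the unit-area normalization choice, and the train/validation split are illustrative assumptions, not the authors' pipeline.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      X = rng.lognormal(size=(206, 300))   # 206 spectra x 300 points (synthetic)
      y = rng.integers(0, 3, size=206)     # 0=meningioma, 1=astrocytoma, 2=epilepsy

      # Normalize each spectrum to unit area over the relevant spectral region
      region = slice(50, 250)
      X_norm = X[:, region] / X[:, region].sum(axis=1, keepdims=True)

      X_tr, X_va, y_tr, y_va = train_test_split(X_norm, y, test_size=0.3,
                                                random_state=0)
      lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
      print("validation accuracy:", lda.score(X_va, y_va))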

  19. The importance of accurate atmospheric modeling

    NASA Astrophysics Data System (ADS)

    Payne, Dylan; Schroeder, John; Liang, Pang

    2014-11-01

    This paper will focus on the effect of atmospheric conditions on EO sensor performance using computer models. We have shown the importance of accurately modeling atmospheric effects for predicting the performance of an EO sensor. A simple example demonstrates how real conditions at several sites in China significantly impact image correction, hyperspectral imaging, and remote sensing. The current state-of-the-art model for computing atmospheric transmission and radiance is MODTRAN® 5, developed by the US Air Force Research Laboratory and Spectral Science, Inc. Research by the US Air Force, Navy and Army resulted in the public release of LOWTRAN 2 in the early 1970s; subsequent releases of LOWTRAN and MODTRAN® have continued until the present. The paper will demonstrate the importance of using validated models and locally measured meteorological, atmospheric and aerosol conditions to accurately simulate the atmospheric transmission and radiance. Frequently, default conditions are used, which can produce errors of as much as 75% in these values. This can have a significant impact on remote sensing applications.

  20. Accurate Weather Forecasting for Radio Astronomy

    NASA Astrophysics Data System (ADS)

    Maddalena, Ronald J.

    2010-01-01

    The NRAO Green Bank Telescope routinely observes at wavelengths from 3 mm to 1 m. As with all mm-wave telescopes, observing conditions depend upon the variable atmospheric water content. The site provides over 100 days/yr when opacities are low enough for good observing at 3 mm, but winds on the open-air structure reduce the time suitable for 3-mm observing where pointing is critical. Thus, to maximize productivity the observing wavelength needs to match weather conditions. For 6 years the telescope has used a dynamic scheduling system (recently upgraded; www.gb.nrao.edu/DSS) that requires accurate multi-day forecasts for winds and opacities. Since opacity forecasts are not provided by the National Weather Service (NWS), I have developed an automated system that takes available forecasts, derives forecasted opacities, and deploys the results on the web in user-friendly graphical overviews (www.gb.nrao.edu/rmaddale/Weather). The system relies on the "North American Mesoscale" models, which are updated by the NWS every 6 hrs, have a 12 km horizontal resolution, 1 hr temporal resolution, run to 84 hrs, and have 60 vertical layers that extend to 20 km. Each forecast consists of a time series of ground conditions, cloud coverage, etc., and, most importantly, temperature, pressure, and humidity as functions of height. I use Liebe's MWP model (Radio Science, 20, 1069, 1985) to determine the absorption in each layer for each hour for 30 observing wavelengths. Radiative transfer provides, for each hour and wavelength, the total opacity and the radio brightness of the atmosphere, which contributes substantially at some wavelengths to Tsys and the observational noise. Comparisons of measured and forecasted Tsys at 22.2 and 44 GHz imply that the forecasted opacities are good to about 0.01 Nepers, which is sufficient for forecasting and accurate calibration. Reliability is high out to 2 days and degrades slowly for longer-range forecasts.
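
    The layer-by-layer step described above can be made concrete with a short schematic: given forecast profiles of per-layer absorption and temperature, integrate downward to get the zenith opacity and the sky brightness that enters Tsys. The isothermal-layer radiative-transfer recursion is standard; the profile numbers below are placeholders, not NWS model output.

      import numpy as np

      alpha = np.array([0.012, 0.008, 0.005, 0.002, 0.001])  # absorption (Np/km)
      dz    = np.array([1.0, 1.0, 2.0, 4.0, 12.0])           # layer thickness (km)
      T     = np.array([280.0, 265.0, 240.0, 220.0, 210.0])  # layer temperature (K)

      tau_layers = alpha * dz        # opacity of each layer (nepers)
      tau_total  = tau_layers.sum()  # total zenith opacity

      # Sky brightness seen from the ground: each layer emits T*(1 - e^-tau),
      # attenuated by every layer below it (iterate from the top layer down).
      T_b = 0.0
      for t, tau in zip(T[::-1], tau_layers[::-1]):
          T_b = T_b * np.exp(-tau) + t * (1.0 - np.exp(-tau))

      print(f"zenith opacity = {tau_total:.4f} Np, sky brightness = {T_b:.1f} K")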

  1. The high cost of accurate knowledge.

    PubMed

    Sutcliffe, Kathleen M; Weber, Klaus

    2003-05-01

    Many business thinkers believe it's the role of senior managers to scan the external environment to monitor contingencies and constraints, and to use that precise knowledge to modify the company's strategy and design. As these thinkers see it, managers need accurate and abundant information to carry out that role. According to that logic, it makes sense to invest heavily in systems for collecting and organizing competitive information. Another school of pundits contends that, since today's complex information often isn't precise anyway, it's not worth going overboard with such investments. In other words, it's not the accuracy and abundance of information that should matter most to top executives--rather, it's how that information is interpreted. After all, the role of senior managers isn't just to make decisions; it's to set direction and motivate others in the face of ambiguities and conflicting demands. Top executives must interpret information and communicate those interpretations--they must manage meaning more than they must manage information. So which of these competing views is the right one? Research conducted by academics Sutcliffe and Weber found that how accurate senior executives are about their competitive environments is indeed less important for strategy and corresponding organizational changes than the way in which they interpret information about their environments. Investments in shaping those interpretations, therefore, may create a more durable competitive advantage than investments in obtaining and organizing more information. And what kinds of interpretations are most closely linked with high performance? Their research suggests that high performers respond positively to opportunities, yet they aren't overconfident in their abilities to take advantage of those opportunities.

  2. The molecular classification of hereditary endocrine diseases.

    PubMed

    Ye, Lei; Ning, Guang

    2015-12-01

    Hereditary endocrine diseases are an important group of diseases with great heterogeneity. The current classification for hereditary endocrine disease is mostly based upon anatomy, which is helpful for pathophysiological interpretation, but does not address the pathogenic variability associated with different underlying genetic causes. Identification of an endocrinopathy-associated genetic alteration provides evidence for differential diagnosis, discovery of non-classical disease, and the potential for earlier diagnosis and targeted therapy. Molecular diagnosis should be routinely applied when managing patients with suspicion of hereditary disease. To enhance the accurate diagnosis and treatment of patients with hereditary endocrine diseases, we propose categorization of endocrine diseases into three groups based upon the function of the mutant gene: cell differentiation, hormone synthesis and action, and tumorigenesis. Each category was further grouped according to the specific gene function. We believe that this format would facilitate practice of precision medicine in the field of hereditary endocrine diseases.

  3. [The study of M dwarf spectral classification].

    PubMed

    Yi, Zhen-Ping; Pan, Jing-Chang; Luo, A-Li

    2013-08-01

    As the most common stars in the Galaxy, M dwarfs can be used to trace the structure and evolution of the Milky Way. Besides, investigating M dwarfs is important for assessing the habitability of extrasolar planets orbiting them. Spectral classification of M dwarfs is fundamental work. The authors used the DR7 M dwarf sample of the Sloan Digital Sky Survey (SDSS) to extract important features from the range of 600-900 nm by the random forest method. Compared to the features used in the Hammer Code, the authors added three new indices. Our test showed that the improved Hammer with the new indices is more accurate. Our method has been applied to classify the M dwarf spectra of LAMOST. PMID:24159887

  4. Fusing Heterogeneous Data for Alzheimer's Disease Classification.

    PubMed

    Pillai, Parvathy Sudhir; Leong, Tze-Yun

    2015-01-01

    In multi-view learning, multimodal representations of a real world object or situation are integrated to learn its overall picture. Feature sets from distinct data sources carry different, yet complementary, information which, if analysed together, usually yield better insights and more accurate results. Neuro-degenerative disorders such as dementia are characterized by changes in multiple biomarkers. This work combines the features from neuroimaging and cerebrospinal fluid studies to distinguish Alzheimer's disease patients from healthy subjects. We apply statistical data fusion techniques on 101 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. We examine whether fusion of biomarkers helps to improve diagnostic accuracy and how the methods compare against each other for this problem. Our results indicate that multimodal data fusion improves classification accuracy. PMID:26262148

  5. Monitoring phthalate exposure in humans.

    PubMed

    Latini, Giuseppe

    2005-11-01

    The dialkyl- or alkyl/aryl esters of 1,2-benzenedicarboxylic acid, commonly known as phthalates, are high-production-volume synthetic chemicals and ubiquitous environmental contaminants because of their use in plastics and other common consumer products. Di-(2-ethylhexyl) phthalate (DEHP) is the most abundant phthalate in the environment. Humans are exposed to these compounds through ingestion, inhalation, and dermal exposure for their whole lifetime, beginning in intrauterine life. Public and scientific concern has increased in recent years about the potential health risks associated with exposure to phthalates. The main focus has moved away from the hepatotoxic effects to the endocrine-disrupting potency of these chemicals. To date, although the consistent toxicologic data on phthalates are suggestive, information on sources and pathways of human exposure to phthalates is limited. Recently, exposure to phthalates has been assessed by analyzing urine for their metabolites. Unlike the determination of the parent phthalates in air, water and foodstuffs, this approach is not subject to contamination; furthermore, these metabolites, rather than the parent phthalates, are considered the toxic species. However, accurate methods and models for measuring the amount of phthalates absorbed by the various pathways of exposure have yet to be developed. In fact, frequent biological monitoring of phthalates in body fluids and tissues would be highly advisable, both in helping physicians to perform health risk assessments for exposure in the general population and in guiding governments to provide regulations concerning the maximum allowed concentrations in the environment, plasticized products, medications and medical equipment.

  6. Single-trial EEG RSVP classification using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Shamwell, Jared; Lee, Hyungtae; Kwon, Heesung; Marathe, Amar R.; Lawhern, Vernon; Nothwang, William

    2016-05-01

    Traditionally, Brain-Computer Interfaces (BCI) have been explored as a means to return function to paralyzed or otherwise debilitated individuals. An emerging use for BCIs is in human-autonomy sensor fusion where physiological data from healthy subjects is combined with machine-generated information to enhance the capabilities of artificial systems. While human-autonomy fusion of physiological data and computer vision have been shown to improve classification during visual search tasks, to date these approaches have relied on separately trained classification models for each modality. We aim to improve human-autonomy classification performance by developing a single framework that builds codependent models of human electroencephalograph (EEG) and image data to generate fused target estimates. As a first step, we developed a novel convolutional neural network (CNN) architecture and applied it to EEG recordings of subjects classifying target and non-target image presentations during a rapid serial visual presentation (RSVP) image triage task. The low signal-to-noise ratio (SNR) of EEG inherently limits the accuracy of single-trial classification and when combined with the high dimensionality of EEG recordings, extremely large training sets are needed to prevent overfitting and achieve accurate classification from raw EEG data. This paper explores a new deep CNN architecture for generalized multi-class, single-trial EEG classification across subjects. We compare classification performance from the generalized CNN architecture trained across all subjects to the individualized XDAWN, HDCA, and CSP neural classifiers which are trained and tested on single subjects. Preliminary results show that our CNN meets and slightly exceeds the performance of the other classifiers despite being trained across subjects.
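
    As a rough sketch of what such a network can look like (the layer shapes, the 64-channel x 128-sample input, and the two target/non-target classes are assumptions for illustration, not the architecture reported in the paper), a compact PyTorch model with a temporal convolution, a spatial convolution across electrodes, and heavy regularization against the overfitting discussed above might read:

      import torch
      import torch.nn as nn

      class RSVPNetSketch(nn.Module):
          def __init__(self, n_channels=64, n_samples=128, n_classes=2):
              super().__init__()
              self.features = nn.Sequential(
                  # temporal convolution along the time axis
                  nn.Conv2d(1, 8, kernel_size=(1, 16), padding=(0, 8)),
                  nn.BatchNorm2d(8),
                  nn.ELU(),
                  # spatial convolution across all electrodes at once
                  nn.Conv2d(8, 16, kernel_size=(n_channels, 1)),
                  nn.BatchNorm2d(16),
                  nn.ELU(),
                  nn.AvgPool2d((1, 4)),
                  nn.Dropout(0.5),   # regularization for low-SNR, small EEG sets
              )
              with torch.no_grad():
                  n_flat = self.features(torch.zeros(1, 1, n_channels,
                                                     n_samples)).numel()
              self.classifier = nn.Linear(n_flat, n_classes)

          def forward(self, x):   # x: (batch, 1, channels, samples)
              return self.classifier(self.features(x).flatten(1))

      logits = RSVPNetSketch()(torch.randn(4, 1, 64, 128))  # 4 trials -> (4, 2)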

  7. Outcomes of traumatic exposure.

    PubMed

    Stoddard, Frederick J

    2014-04-01

    There is a great need to recognize, prevent, reduce, or treat the immediate and long-term effects of childhood trauma. Most children affected by trauma will not develop long-term posttraumatic sequelae due to their resilience, but comorbid psychopathological outcomes occur and are more common after exposure to severe traumatic events. Factors influencing posttraumatic outcomes are numerous. Young dependent children tend to be more susceptible than older children; children with pain or injury are also more susceptible. Psychopathological effects may not be evident until adulthood. Awareness of the range of adverse outcomes underscores the importance of preventive interventions, accurate assessment, diagnosis and where possible, treatment. Advocacy and public policy initiatives are essential to improving outcomes.

  8. 75 FR 70754 - Postal Classification Changes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-18

    ... Postal Classification Changes AGENCY: Postal Regulatory Commission. ACTION: Notice. SUMMARY: The Commission is noticing a recently-filed Postal Service request announcing a classification change affecting... Notice with the Commission announcing a classification change established by the Governors....

  9. 75 FR 10529 - Mail Classification Change

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-08

    ... Mail Classification Change AGENCY: Postal Regulatory Commission. ACTION: Notice. SUMMARY: The... Classification Schedule. The change reflects a change in terminology. This notice addresses procedural steps....90 et seq. concerning a change in classification which reflects a change in terminology from...

  10. 12 CFR 1777.20 - Capital classifications.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... date of section 1365 of the 1992 Act. The capital classification of an Enterprise for purposes of... classification. (c) Capital classifications before the effective date of section 1365 of the 1992...

  11. 12 CFR 1777.20 - Capital classifications.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... date of section 1365 of the 1992 Act. The capital classification of an Enterprise for purposes of... classification. (c) Capital classifications before the effective date of section 1365 of the 1992...

  12. 12 CFR 1777.20 - Capital classifications.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... date of section 1365 of the 1992 Act. The capital classification of an Enterprise for purposes of... classification. (c) Capital classifications before the effective date of section 1365 of the 1992...

  13. 12 CFR 1777.20 - Capital classifications.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... date of section 1365 of the 1992 Act. The capital classification of an Enterprise for purposes of... classification. (c) Capital classifications before the effective date of section 1365 of the 1992...

  14. Automatically high accurate and efficient photomask defects management solution for advanced lithography manufacture

    NASA Astrophysics Data System (ADS)

    Zhu, Jun; Chen, Lijun; Ma, Lantao; Li, Dejian; Jiang, Wei; Pan, Lihong; Shen, Huiting; Jia, Hongmin; Hsiang, Chingyun; Cheng, Guojie; Ling, Li; Chen, Shijie; Wang, Jun; Liao, Wenkui; Zhang, Gary

    2014-04-01

    Defect review is a time-consuming job, and human error makes the results inconsistent. Defects located in don't-care areas, such as dark areas, do not hurt yield and need not be reviewed, whereas critical-area defects, such as those in clear areas, can impact yield dramatically and deserve closer attention. With decreasing integrated circuit dimensions, thousands of mask defects, or even more, are routinely detected during an inspection. Traditional manual or simple classification approaches are unable to meet efficiency and accuracy requirements. This paper focuses on an automatic defect management and classification solution using the image output of Lasertec inspection equipment and Anchor pattern-centric image processing technology. The system can handle large numbers of defects with quick and accurate classification results. Our experiments include Die-to-Die and Single Die modes, in which the classification accuracy reaches 87.4% and 93.3%, respectively. No critical or printable defects are missed in our test cases; the rates of missed classifications are 0.25% in Die-to-Die mode and 0.24% in Single Die mode. Such miss rates are encouraging and acceptable for application on a production line. The results can be output and reloaded back into the inspection machine for further review. This step helps users validate uncertain defects with clear, magnified images when the captured images cannot provide enough information for judgment. The system effectively reduces expensive inline defect review time. As a fully inline automated defect management solution, it is compatible with the current inspection approach and can be integrated with optical simulation, including scoring functions, to guide wafer-level defect inspection.

  15. Explosives Classifications Tracking System User Manual

    SciTech Connect

    Genoni, R.P.

    1993-10-01

    The Explosives Classification Tracking System (ECTS) presents information and data on U.S. Department of Energy (DOE) explosives classifications of interest to EM-561, Transportation Management Division, other DOE facilities, and contractors. It is intended to be useful to the scientist, engineer, and transportation professional who needs to classify or transport explosives. This release of the ECTS reflects an upgrade of the software, which provides the user with an environment that makes comprehensive retrieval of explosives-related information quick and easy. Quarterly updates will be provided to the ECTS throughout its development in FY 1993 and thereafter. The ECTS is a stand-alone, single-user system that contains unclassified, publicly available information and administrative information (contractor names, product descriptions, transmittal dates, EX-Numbers, etc.) from many sources for non-decisional engineering and shipping activities. The data are the most up-to-date and accurate available to the knowledge of the system developer. The system is designed to permit easy revision and updating as new information and data become available; such additions and corrections are welcomed by the developer. This user manual is intended to help the user install, understand, and operate the system so that the desired information may be readily obtained, reviewed, and reported.

  16. FPGA Coprocessor for Accelerated Classification of Images

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Scharenbroich, Lucas J.; Werne, Thomas A.

    2008-01-01

    An effort related to that described in the preceding article focuses on developing a spaceborne processing platform for fast and accurate onboard classification of image data, a critical part of modern satellite image processing. The approach again has been to exploit the versatility of the recently developed hybrid Virtex-4FX field-programmable gate array (FPGA) to run diverse science applications on embedded processors while taking advantage of the reconfigurable hardware resources of the FPGAs. In this case, the FPGA serves as a coprocessor that implements legacy C-language support-vector-machine (SVM) image-classification algorithms to detect and identify natural phenomena such as flooding, volcanic eruptions, and sea-ice break-up. The FPGA provides hardware acceleration for greater onboard processing capability than previously demonstrated in software. The original C-language program, demonstrated on an imaging instrument aboard the Earth Observing-1 (EO-1) satellite, implements a linear-kernel SVM algorithm for classifying parts of the images as snow, water, ice, land, cloud, or unclassified. Current onboard processors, such as those on EO-1, have limited computing power and extremely limited active storage capability, and are no longer considered state-of-the-art. Using commercially available software that translates C-language programs into hardware description language (HDL) files, the legacy C-language program, and two newly formulated programs for a more capable expanded-linear-kernel and a more accurate polynomial-kernel SVM algorithm, have been implemented in the Virtex-4FX FPGA. In tests, the FPGA implementations have exhibited significant speedups over conventional software implementations running on general-purpose hardware.
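
    A software analogue of the linear-kernel SVM classification step (not the flight C code; the nine synthetic bands and the random training labels are placeholders) can be sketched with an off-the-shelf linear SVM, which amounts to the per-class w.x + b evaluation that maps well onto FPGA hardware:

      import numpy as np
      from sklearn.svm import LinearSVC

      classes = ["snow", "water", "ice", "land", "cloud"]
      rng = np.random.default_rng(1)
      X_train = rng.normal(size=(500, 9))   # 500 labeled pixels x 9 bands
      y_train = rng.integers(0, len(classes), size=500)

      svm = LinearSVC().fit(X_train, y_train)   # linear kernel: w.x + b per class

      pixels = rng.normal(size=(4, 9))          # a few pixels from a new scene
      print([classes[k] for k in svm.predict(pixels)])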

  17. [Why an accurate diet for employees].

    PubMed

    Baccolo, T P; Gagliardi, D; Marchetti, M R

    2010-01-01

    A study led in 2005 by the ILO on diet habits in different countries pointed out that a poor diet at the workplace (leading to malnutrition or to overweight and obesity) costs up to 20% in lost productivity. Obesity is a major cause of absenteeism and can modify physiologic and immune responses to neurotoxins and chemical agents. Obese subjects show a higher risk of developing cardiovascular diseases, musculoskeletal disorders due to exposure to vibrations, etc.; quite often these workers are discriminated against, are more sensitive to work-related stress, and might experience reduced self-esteem. Obesity can cause relevant working handicaps linked to reduced agility, early fatigue, and difficulties in identifying and using suitable PPE. As a consequence, obese workers show a higher rate of work accidents and may receive restrictions in the fitness assessment carried out by the occupational health physician during periodical examinations. PMID:21438227

  18. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    NASA Astrophysics Data System (ADS)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially when it is delayed. With accurate information, travelers crowd onto the route reported to be in the best condition, and delayed information reflects past rather than current traffic conditions. Travelers therefore make wrong routing decisions, decreasing capacity, increasing oscillations, and driving the system away from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR: when the difference between the two routes is less than BR, the routes are chosen with equal probability. Bounded rationality helps improve efficiency in terms of capacity, oscillations, and the gap from the system equilibrium.
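
    The route-choice rule itself is simple enough to state as code. A minimal sketch, with illustrative variable names and lower feedback values meaning the less congested route:

      import random

      def choose_route(feedback_a, feedback_b, br_threshold):
          """Return 'A' or 'B'; feedback values may be delayed and inaccurate."""
          if abs(feedback_a - feedback_b) < br_threshold:
              return random.choice(["A", "B"])   # indifferent within threshold BR
          return "A" if feedback_a < feedback_b else "B"

      print(choose_route(12.0, 12.5, br_threshold=1.0))   # random: within BR
      print(choose_route(12.0, 15.0, br_threshold=1.0))   # 'A': clear difference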

  19. Automatic Classification of Marine Mammals with Speaker Classification Methods.

    PubMed

    Kreimeyer, Roman; Ludwig, Stefan

    2016-01-01

    We present an automatic acoustic classifier for marine mammals based on human speaker classification methods as an element of a passive acoustic monitoring (PAM) tool. This work is part of the Protection of Marine Mammals (PoMM) project under the framework of the European Defense Agency (EDA) and joined by the Research Department for Underwater Acoustics and Geophysics (FWG), Bundeswehr Technical Centre (WTD 71) and Kiel University. The automatic classification should support sonar operators in the risk mitigation process before and during sonar exercises with a reliable automatic classification result. PMID:26611006

  20. Automatic Classification of Marine Mammals with Speaker Classification Methods.

    PubMed

    Kreimeyer, Roman; Ludwig, Stefan

    2016-01-01

    We present an automatic acoustic classifier for marine mammals based on human speaker classification methods as an element of a passive acoustic monitoring (PAM) tool. This work is part of the Protection of Marine Mammals (PoMM) project under the framework of the European Defense Agency (EDA) and joined by the Research Department for Underwater Acoustics and Geophysics (FWG), Bundeswehr Technical Centre (WTD 71) and Kiel University. The automatic classification should support sonar operators in the risk mitigation process before and during sonar exercises with a reliable automatic classification result.

  1. A Unified Methodology for Computing Accurate Quaternion Color Moments and Moment Invariants.

    PubMed

    Karakasis, Evangelos G; Papakostas, George A; Koulouriotis, Dimitrios E; Tourassis, Vassilios D

    2014-02-01

    In this paper, a general framework for computing accurate quaternion color moments and their corresponding invariants is proposed. The proposed unified scheme arose by studying the characteristics of different orthogonal polynomials. These polynomials are used as kernels in order to form moments, the invariants of which can easily be derived. The resulting scheme permits the use of any polynomial-like kernel in a unified and consistent way. The resulting moments and moment invariants demonstrate robustness to noisy conditions and high discriminative power. Additionally, in the case of continuous moments, accurate computations take place to avoid approximation errors. Based on this general methodology, the quaternion Tchebichef, Krawtchouk, Dual Hahn, Legendre, orthogonal Fourier-Mellin, pseudo Zernike and Zernike color moments, and their corresponding invariants are introduced. A selected paradigm presents the reconstruction capability of each moment family, whereas proper classification scenarios evaluate the performance of the color moment invariants. PMID:24216719

  2. Lead exposure in US worksites: A literature review and development of an occupational lead exposure database from the published literature

    PubMed Central

    Koh, Dong-Hee; Locke, Sarah J.; Chen, Yu-Cheng; Purdue, Mark P.; Friesen, Melissa C.

    2016-01-01

    Background Retrospective exposure assessment of occupational lead exposure in population-based studies requires historical exposure information from many occupations and industries. Methods We reviewed published US exposure monitoring studies to identify lead exposure measurement data. We developed an occupational lead exposure database from the 175 identified papers containing 1,111 sets of lead concentration summary statistics (21% area air, 47% personal air, 32% blood). We also extracted ancillary exposure-related information, including job, industry, task/location, year collected, sampling strategy, control measures in place, and sampling and analytical methods. Results Measurements were published between 1940 and 2010 and represented 27 2-digit standardized industry classification codes. The majority of the measurements were related to lead-based paint work, joining or cutting metal using heat, primary and secondary metal manufacturing, and lead acid battery manufacturing. Conclusions This database can be used in future statistical analyses to characterize differences in lead exposure across time, jobs, and industries. PMID:25968240

  3. Formal apparatus of soil classification

    NASA Astrophysics Data System (ADS)

    Rozhkov, V. A.

    2011-12-01

    Mathematical tools that may be applied for soil classification purposes are discussed. They include the evaluation of information contained in particular soil attributes, the grouping of soil objects into a given (or automatically determined) number of classes, the optimization of the classification decisions, and the development of the models and rules (algorithms) used to classify soil objects. The algorithms of multivariate statistical methods and cluster analysis used for solving these problems are described. Major attention is paid to the development of systems of informative attributes of soil objects and their classes and to the assessment of the quality of the classification decisions. Particular examples of the solution of soil classification problems with the use of formal mathematical methods are given. It is argued that the theoretical and practical problems of classification in science cannot be solved objectively without the application of modern methods of information analysis. The major problems of the numerical taxonomy of soil objects described in this paper, and the appropriate software tools for their solution, should serve as the basis for the creation not only of formal soil classification systems but also of a theory of soil classification.
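
    One concrete instance of the grouping problem discussed above, clustering soil objects by their attributes into an automatically determined number of classes, can be sketched as follows; the synthetic attribute matrix and the silhouette criterion for choosing the class count are illustrative assumptions, not the paper's specific algorithms.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.metrics import silhouette_score

      rng = np.random.default_rng(2)
      soils = rng.normal(size=(60, 4))   # 60 soil objects x 4 attributes

      best_k, best_score = 2, -1.0
      for k in range(2, 8):              # candidate numbers of classes
          labels = KMeans(n_clusters=k, n_init=10,
                          random_state=0).fit_predict(soils)
          score = silhouette_score(soils, labels)   # quality of the partition
          if score > best_score:
              best_k, best_score = k, score

      print(f"selected {best_k} soil classes (silhouette = {best_score:.2f})")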

  4. Accurate masses for dispersion-supported galaxies

    NASA Astrophysics Data System (ADS)

    Wolf, Joe; Martinez, Gregory D.; Bullock, James S.; Kaplinghat, Manoj; Geha, Marla; Muñoz, Ricardo R.; Simon, Joshua D.; Avedo, Frank F.

    2010-08-01

    We derive an accurate mass estimator for dispersion-supported stellar systems and demonstrate its validity by analysing resolved line-of-sight velocity data for globular clusters, dwarf galaxies and elliptical galaxies. Specifically, by manipulating the spherical Jeans equation we show that the mass enclosed within the 3D deprojected half-light radius r_1/2 can be determined with only mild assumptions about the spatial variation of the stellar velocity dispersion anisotropy as long as the projected velocity dispersion profile is fairly flat near the half-light radius, as is typically observed. We find M_1/2 = 3 G^-1 ⟨σ_los^2⟩ r_1/2 ≈ 4 G^-1 ⟨σ_los^2⟩ R_e, where ⟨σ_los^2⟩ is the luminosity-weighted square of the line-of-sight velocity dispersion and R_e is the 2D projected half-light radius. While deceptively familiar in form, this formula is not the virial theorem, which cannot be used to determine accurate masses unless the radial profile of the total mass is known a priori. We utilize this finding to show that all of the Milky Way dwarf spheroidal galaxies (MW dSphs) are consistent with having formed within a halo of a mass of approximately 3 × 10^9 M_sun, assuming a Λ cold dark matter cosmology. The faintest MW dSphs seem to have formed in dark matter haloes that are at least as massive as those of the brightest MW dSphs, despite the almost five orders of magnitude spread in luminosity between them. We expand our analysis to the full range of observed dispersion-supported stellar systems and examine their dynamical I-band mass-to-light ratios Υ^I_1/2. The Υ^I_1/2 versus M_1/2 relation for dispersion-supported galaxies follows a U shape, with a broad minimum near Υ^I_1/2 ≈ 3 that spans dwarf elliptical galaxies to normal ellipticals, a steep rise to Υ^I_1/2 ≈ 3200 for ultra-faint dSphs and a more shallow rise to Υ^I_1/2 ≈ 800 for galaxy cluster spheroids.
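
    The estimator quoted above translates directly into code. A minimal sketch using the projected-radius form M_1/2 = 4 G^-1 ⟨σ_los^2⟩ R_e; the example dispersion and radius are round placeholder values for a typical dwarf spheroidal:

      G = 4.301e-6   # gravitational constant in kpc (km/s)^2 / M_sun

      def m_half(sigma_los_kms, r_e_kpc):
          """Mass within r_1/2 in solar masses: M_1/2 = 4 <sigma_los^2> R_e / G."""
          return 4.0 * sigma_los_kms**2 * r_e_kpc / G

      # e.g. a dwarf spheroidal with sigma_los ~ 9 km/s and R_e ~ 0.3 kpc
      print(f"M_1/2 ~ {m_half(9.0, 0.3):.2e} M_sun")   # ~2e7 M_sun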

  5. EVALUATING EXCESS DIETARY EXPOSURE OF YOUNG CHILDREN EATING IN CONTAMINATED ENVIRONMENTS

    EPA Science Inventory

    The United States' Food Quality Protection Act of 1996 requires more accurate assessment of children's aggregate exposures to environmental contaminants. Since children have unstructured eating behaviors, their excess exposures, caused by eating activities, becomes an importan...

  6. Environmental endocrine disruptors: A proposed classification scheme

    SciTech Connect

    Fur, P.L. de; Roberts, J.

    1995-12-31

    A number of chemicals known to act on animal systems through the endocrine system have been termed environmental endocrine disruptors. This group includes some of the PCBs and TCDDs, as well as lead, mercury and a large number of pesticides. The common feature is that the chemicals interact with endogenous endocrine systems at the cellular and/or molecular level to alter normal processes that are controlled or regulated by hormones. Although the existence of artificial or environmental estrogens (e.g. chlordecone and DES) has been known for some time, recent data indicate that this phenomenon is widespread. Indeed, anti-androgens have been held responsible for reproductive dysfunction in alligator populations in Florida. But the significance of endocrine disruption was recognized by pesticide manufacturers when insect growth regulators were developed to interfere with hormonal control of growth. Controlling, regulating or managing these chemicals depends in no small part on the ability to identify, screen or otherwise know that a chemical is an endocrine disruptor. Two possible classification schemes are: using the effects caused in an animal or animals as an exposure indicator, and using a known screen for the point of contact with the animal. The former would require extensive knowledge of cause and effect relationships in dozens of animal groups; the latter would require a screening tool comparable to an estrogen binding assay. The authors present a possible classification based on chemicals known to disrupt estrogenic, androgenic and ecdysone-regulated hormonal systems.

  7. Manganese exposure, essentiality & toxicity.

    PubMed

    Santamaria, A B

    2008-10-01

    Manganese (Mn) is an essential element present in all living organisms and is naturally present in rocks, soil, water, and food. Exposure to high oral, parenteral, or ambient air concentrations of Mn can result in elevations in Mn tissue levels and neurological effects. However, current understanding of the impact of Mn exposure on the nervous system leads to the hypothesis that there should be no adverse effects at low exposures, because Mn is an essential element; therefore, there should be some threshold for exposure above which adverse effects may occur and adverse effects may increase in frequency with higher exposures beyond that threshold. Data gaps regarding Mn neurotoxicity include what the clinical significance is of the neurobehavioural, neuropsychological, or neurological endpoints measured in many of the occupational studies that have evaluated cohorts exposed to relatively low levels of Mn. Specific early biomarkers of effect, such as subclinical neurobehavioural or neurological changes or magnetic resonance imaging (MRI) changes have not been established or validated for Mn, although some studies have attempted to correlate biomarkers with neurological effects. Experimental studies with rodents and monkeys provide valuable information about the absorption, bioavailability, and tissue distribution of various Mn compounds with different solubilities and oxidation states in different age groups. Studies have shown that rodents and primates maintain stable tissue manganese levels as a result of homeostatic mechanisms that tightly regulate absorption and excretion. In addition, physiologically based pharmacokinetic (PBPK) models are being developed to provide for the ability to conduct route-to-route extrapolations, evaluate nasal uptake to the CNS, and evaluate lifestage differences in Mn pharmacokinetics. Such models will facilitate more rigorous quantitative analysis of the available pharmacokinetic data for Mn and will be used to identify situations

  8. Towards Automatic Classification of Neurons

    PubMed Central

    Armañanzas, Rubén; Ascoli, Giorgio A.

    2015-01-01

    The classification of neurons into types has been much debated since the inception of modern neuroscience. Recent experimental advances are accelerating the pace of data collection. The resulting information growth of morphological, physiological, and molecular properties encourages efforts to automate neuronal classification by powerful machine learning techniques. We review state-of-the-art analysis approaches and availability of suitable data and resources, highlighting prominent challenges and opportunities. The effective solution of the neuronal classification problem will require continuous development of computational methods, high-throughput data production, and systematic metadata organization to enable cross-lab integration. PMID:25765323

  9. Accurate lineshape spectroscopy and the Boltzmann constant

    PubMed Central

    Truong, G.-W.; Anstie, J. D.; May, E. F.; Stace, T. M.; Luiten, A. N.

    2015-01-01

    Spectroscopy has an illustrious history delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, including applications from trace materials detection, to understanding the atmospheres of stars and planets, and even constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate measurement of the excited-state (6P1/2) hyperfine splitting in Cs, and reveals a breakdown in the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m., and an uncertainty of 71 p.p.m. PMID:26465085
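
    The link between the measured lineshape and k_B is Doppler broadening. As a hedged outline (the standard textbook relation, not equations quoted from the paper), the Gaussian 1/e half-width of the Doppler profile fixes k_B once the transition frequency nu_0, the gas temperature T, and the atomic mass m are known:

      % Doppler-broadening thermometry in outline:
      \Delta\nu_D = \frac{\nu_0}{c}\sqrt{\frac{2 k_B T}{m}}
      \quad\Longrightarrow\quad
      k_B = \frac{m c^2}{2T}\left(\frac{\Delta\nu_D}{\nu_0}\right)^{2}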

  10. Accurate free energy calculation along optimized paths.

    PubMed

    Chen, Changjun; Xiao, Yi

    2010-05-01

    The path-based methods of free energy calculation, such as thermodynamic integration and free energy perturbation, are simple in theory, but difficult in practice because in most cases smooth paths do not exist, especially for large molecules. In this article, we present a novel method to build the transition path of a peptide. We use harmonic potentials to restrain its nonhydrogen atom dihedrals in the initial state and set the equilibrium angles of the potentials as those in the final state. Through a series of steps of geometrical optimization, we can construct a smooth and short path from the initial state to the final state. This path can be used to calculate free energy difference. To validate this method, we apply it to a small 10-ALA peptide and find that the calculated free energy changes in helix-helix and helix-hairpin transitions are both self-convergent and cross-convergent. We also calculate the free energy differences between different stable states of beta-hairpin trpzip2, and the results show that this method is more efficient than the conventional molecular dynamics method in accurate free energy calculation.
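
    For orientation, the path-based methods named above can be stated in one line. In thermodynamic integration (the standard definition, not this paper's specific restraint scheme), the free-energy difference is accumulated along a coupling parameter lambda that carries the system from the initial state (lambda = 0) to the final state (lambda = 1):

      % Thermodynamic integration along a coupling path:
      \Delta F \;=\; \int_{0}^{1}
        \left\langle \frac{\partial H(\lambda)}{\partial\lambda} \right\rangle_{\lambda}
        \, d\lambda

    The smoother and shorter the path in configuration space, the better the lambda-averages converge, which is the motivation for the geometrically optimized dihedral-restraint path described in the abstract.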

  11. Accurate SHAPE-directed RNA structure determination

    PubMed Central

    Deigan, Katherine E.; Li, Tian W.; Mathews, David H.; Weeks, Kevin M.

    2009-01-01

    Almost all RNAs can fold to form extensive base-paired secondary structures. Many of these structures then modulate numerous fundamental elements of gene expression. Deducing these structure–function relationships requires that it be possible to predict RNA secondary structures accurately. However, RNA secondary structure prediction for large RNAs, such that a single predicted structure for a single sequence reliably represents the correct structure, has remained an unsolved problem. Here, we demonstrate that quantitative, nucleotide-resolution information from a SHAPE experiment can be interpreted as a pseudo-free energy change term and used to determine RNA secondary structure with high accuracy. Free energy minimization, by using SHAPE pseudo-free energies, in conjunction with nearest neighbor parameters, predicts the secondary structure of deproteinized Escherichia coli 16S rRNA (>1,300 nt) and a set of smaller RNAs (75–155 nt) with accuracies of up to 96–100%, which are comparable to the best accuracies achievable by comparative sequence analysis. PMID:19109441
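
    A sketch of the pseudo-free-energy conversion described above: the published relation is log-linear in the SHAPE reactivity, Delta G_SHAPE(i) = m ln(reactivity_i + 1) + b. The slope and intercept below (in kcal/mol) are the commonly cited defaults and should be checked against the paper before use:

      import math

      def shape_pseudo_energy(reactivity, m=2.6, b=-0.8):
          """Per-nucleotide pseudo-free-energy term (kcal/mol) from SHAPE data."""
          return m * math.log(reactivity + 1.0) + b

      for r in (0.0, 0.5, 2.0):   # unreactive -> strongly reactive nucleotides
          print(f"reactivity {r:.1f} -> "
                f"dG_SHAPE {shape_pseudo_energy(r):+.2f} kcal/mol")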

  12. Accurate adiabatic correction in the hydrogen molecule

    NASA Astrophysics Data System (ADS)

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-01

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10^-12 at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H2, HD, HT, D2, DT, and T2 has been determined. For the ground state of H2 the estimated precision is 3 × 10^-7 cm^-1, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.

  13. Fast and Provably Accurate Bilateral Filtering.

    PubMed

    Chaudhury, Kunal N; Dabhade, Swapnil D

    2016-06-01

    The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for any arbitrary S. The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with the state-of-the-art methods in terms of speed and accuracy. PMID:27093722
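
    For reference, the brute-force O(S)-per-pixel filter that the paper accelerates looks as follows; this is the baseline being approximated, not the proposed fast algorithm, and the parameter values are arbitrary:

      import numpy as np

      def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=4):
          """Direct bilateral filter: Gaussian spatial and range kernels."""
          out = np.zeros_like(img)
          ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
          spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))  # spatial kernel
          padded = np.pad(img, radius, mode="edge")
          H, W = img.shape
          for i in range(H):
              for j in range(W):
                  patch = padded[i:i + 2*radius + 1, j:j + 2*radius + 1]
                  rng_w = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
                  w = spatial * rng_w          # combined edge-preserving weight
                  out[i, j] = (w * patch).sum() / w.sum()
          return out

      smoothed = bilateral_filter(np.random.rand(32, 32))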

  14. Accurate, reliable prototype earth horizon sensor head

    NASA Technical Reports Server (NTRS)

    Schwarz, F.; Cohen, H.

    1973-01-01

    The design and performance are described of an accurate and reliable prototype earth sensor head (ARPESH). The ARPESH employs a detection logic 'locator' concept and horizon sensor mechanization which should lead to high-accuracy horizon sensing that is minimally degraded by spatial or temporal variations in sensing attitude from a satellite in orbit around the earth at altitudes in the 500 km environs. An accuracy of horizon location to within 0.7 km has been predicted, independent of meteorological conditions; this corresponds to an error of 0.015 deg at 500 km altitude. Laboratory evaluation of the sensor indicates that this accuracy is achieved. First, the basic operating principles of ARPESH are described; next, detailed design and construction data are presented; and then the performance of the sensor is reported under laboratory conditions in which the sensor is installed in a simulator that permits it to scan over a blackbody source against a background representing the earth-space interface for various equivalent planet temperatures.

  15. Fast and Accurate Exhaled Breath Ammonia Measurement

    PubMed Central

    Solga, Steven F.; Mudalel, Matthew L.; Spacek, Lisa A.; Risby, Terence H.

    2014-01-01

    This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic method known as quartz enhanced photoacoustic spectroscopy (QEPAS) that uses a quantum cascade based laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep lung systemic levels. Because the system is easy to use and produces real time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled. Temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced, and clinical studies become more meaningful. This system is very reliable and individual measurements are inexpensive. The sampler is relatively inexpensive and quite portable, but the monitor is neither. This limits options for some clinical studies and provides rationale for future innovations. PMID:24962141

  16. Accurate adiabatic correction in the hydrogen molecule

    SciTech Connect

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-14

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10^-12 at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H2, HD, HT, D2, DT, and T2 has been determined. For the ground state of H2 the estimated precision is 3 × 10^-7 cm^-1, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.

  17. Accurate adiabatic correction in the hydrogen molecule.

    PubMed

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-14

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10(-12) at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H2, HD, HT, D2, DT, and T2 has been determined. For the ground state of H2 the estimated precision is 3 × 10(-7) cm(-1), which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels. PMID:25494728

  18. MEMS accelerometers in accurate mount positioning systems

    NASA Astrophysics Data System (ADS)

    Mészáros, László; Pál, András.; Jaskó, Attila

    2014-07-01

    In order to attain precise, accurate and stateless positioning of telescope mounts we apply microelectromechanical accelerometer systems (also known as MEMS accelerometers). In common practice, feedback from the mount position is provided by electronic, optical or magneto-mechanical systems or via real-time astrometric solution based on the acquired images. MEMS-based systems are completely independent from these mechanisms. Our goal is to investigate the advantages and challenges of applying such devices and to reach the sub-arcminute range, which is well below the field-of-view of conventional imaging telescope systems. We present how this sub-arcminute accuracy can be achieved with very cheap MEMS sensors. Out of the box, these sensors yield raw output with an accuracy of only a few degrees. We show how calibration procedures can exploit spherical and cylindrical constraints between accelerometer output channels in order to achieve the previously mentioned accuracy level. We also demonstrate how our implementation can be inserted in a telescope control system. Although this attainable precision is less than both the resolution of telescope mount drive mechanics and the accuracy of astrometric solutions, the independent nature of attitude determination could significantly increase the reliability of autonomous or remotely operated astronomical observations.
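
    To make the spherical-constraint idea concrete, here is a minimal, self-contained sketch (synthetic data; the offsets, scales and noise level are illustrative assumptions, not the authors' pipeline): for a static mount, the calibrated 3-axis output must lie on a sphere of radius 1 g regardless of orientation, so per-axis offsets and scale factors can be fitted from many static orientations without any external reference.

        import numpy as np
        from scipy.optimize import least_squares

        def residuals(params, raw):
            """Distance of each calibrated sample from the unit (1 g) sphere."""
            offsets, scales = params[:3], params[3:]
            cal = (raw - offsets) / scales            # per-axis correction
            return np.linalg.norm(cal, axis=1) - 1.0

        # Synthetic static readings: 500 random orientations of a 1 g vector,
        # distorted by made-up offsets/scales plus sensor noise.
        rng = np.random.default_rng(0)
        true_off = np.array([0.05, -0.02, 0.10])
        true_scale = np.array([1.02, 0.97, 1.05])
        dirs = rng.normal(size=(500, 3))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        raw = dirs * true_scale + true_off + rng.normal(0.0, 1e-3, (500, 3))

        fit = least_squares(residuals, x0=np.r_[np.zeros(3), np.ones(3)], args=(raw,))
        print("offsets:", fit.x[:3], "scales:", fit.x[3:])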

  19. The neuron classification problem

    PubMed Central

    Bota, Mihail; Swanson, Larry W.

    2007-01-01

    A systematic account of neuron cell types is a basic prerequisite for determining the vertebrate nervous system global wiring diagram. With comprehensive lineage and phylogenetic information unavailable, a general ontology based on structure-function taxonomy is proposed and implemented in a knowledge management system, and a prototype analysis of select regions (including retina, cerebellum, and hypothalamus) is presented. The supporting Brain Architecture Knowledge Management System (BAMS) Neuron ontology is online, and its user interface allows queries about terms and their definitions, classification criteria based on the original literature and "Petilla Convention" guidelines, hierarchies, and relations, with annotations documenting each ontology entry. Combined with three BAMS modules for neural regions, connections between regions and neuron types, and molecules, the Neuron ontology provides a general framework for physical descriptions and computational modeling of neural systems. The knowledge management system interacts with other web resources, is accessible in both XML and RDF/OWL, is extendible to the whole body, and awaits large-scale data population requiring community participation for timely implementation. PMID:17582506

  20. Classification of Rainbows

    NASA Astrophysics Data System (ADS)

    Adams, Peter; Ricard, Jean; Barckicke, Jean

    2016-04-01

    Rainbows are the most beautiful and most spectacular optical atmospheric phenomenon. Humphreys (1964) pointedly noted that "the 'explanations' generally given of the rainbow [in textbooks] may well be said to explain beautifully that which does not occur, and to leave unexplained that which does" . . . "The records of close observations of rainbows soon show that not even the colors are always the same". Textbooks stress that the main factor affecting the aspect of the rainbow is the radius of the water droplets. In his well-known textbook "The Nature of Light and Colour in the Open Air", Minnaert (1954) gives the chief features of the rainbow depending on the diameter of the drops producing it. For this study, we have gathered hundreds of pictures of primary bows and sorted them into classes, defined in such a way that rainbows belonging to the same class look similar. Our results are surprising and do not confirm Minnaert's classification. In practice, the size of the water droplets is only a minor factor controlling the overall aspect of the rainbow; the main factor appears to be the height of the sun above the horizon. At sunset, the width of the red band increases, while the widths of the other bands of colour decrease; the orange, violet, blue and green bands disappear completely, in this order. At the end, the primary bow is mainly red and slightly yellow. (Figure: a primary bow photographed from the CNRM in Toulouse after a summer storm; credit Jean Ricard.)

  1. CASP9 Target Classification

    PubMed Central

    Kinch, Lisa N.; Shi, Shuoyong; Cheng, Hua; Cong, Qian; Pei, Jimin; Mariani, Valerio; Schwede, Torsten; Grishin, Nick V.

    2011-01-01

    The Critical Assessment of Protein Structure Prediction round 9 (CASP9) aimed to evaluate predictions for 129 experimentally determined protein structures. To assess tertiary structure predictions, these target structures were divided into domain-based evaluation units that were then classified into two assessment categories: template based modeling (TBM) and template free modeling (FM). CASP9 targets were split into domains of structurally compact evolutionary modules. For the targets with more than one defined domain, the decision to split structures into domains for evaluation was based on server performance. Target domains were categorized based on their evolutionary relatedness to existing templates as well as their difficulty levels indicated by server performance. Those target domains with sequence-related templates and high server prediction performance were classified as TBM, while those targets without identifiable templates and low server performance were classified as FM. However, using these generalizations for classification resulted in a blurred boundary between CASP9 assessment categories. Thus, the FM category included those domains without sequence detectable templates (25 target domains) as well as some domains with difficult to detect templates whose predictions were as poor as those without templates (5 target domains). Several interesting examples are discussed, including targets with sequence related templates that exhibit unusual structural differences, targets with homologous or analogous structure templates that are not detectable by sequence, and targets with new folds. PMID:21997778

  2. Classification of Fuel Types Using Envisat Data

    NASA Astrophysics Data System (ADS)

    Wozniak, Edyta; Nasilowska, Sylwia

    2010-12-01

    Forest fires have an important impact on landscape structure and ecosystem biodiversity, and wildland fires strongly influence forest planning and management; they affect not only the woodworking industry but also arable fields and inhabitants' lives. A precise knowledge of the spatial distribution of fuels is necessary to predict, analyse and model fire behaviour. Modelling of fire spread is difficult because it depends on many factors, above all the moisture and thickness of the undergrowth and brushwood, and the tree species. Many fuel-type classifications have been developed for regional environmental conditions; their main drawback is that each is useful only for its particular region of interest, which creates a need for continued and more accurate research in specific habitats rather than only at continental scale. In this paper a new system is proposed. It organizes fuels into three major groups (coniferous wood, deciduous wood and open) and four subcategories that describe fuel structure (trees lower than 4 m; trees higher than 4 m: without bushes, with low bushes lower than 2 m, or with high bushes higher than 2 m). This classification is adapted to the environmental conditions of the Polish lowlands. The classification was carried out on the basis of 120 training plots determined during a field experiment in north-eastern Poland; the plots delimit homogeneous parts of the forest that correspond to the fuel classes. In the study we used an ENVISAT Alternating Polarization (HH/HV) image. The most popular classifiers were tried out, and the maximum likelihood method proved the most efficient. Many methods are employed to map fuel types; remote sensing systems offer low-cost and less time-consuming fuel mapping and updating, and SAR systems permit mapping independent of weather conditions. The microwave data have the potential to estimate fuel loads and map fuel types.
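
    As an illustration of the chosen classifier, here is a generic Gaussian maximum-likelihood sketch on synthetic (HH, HV) backscatter values (the class means below are made up, not the study's statistics): each fuel class is modelled by a mean vector and covariance matrix in feature space, and each pixel is assigned to the class of highest likelihood, which is exactly what QuadraticDiscriminantAnalysis implements.

        import numpy as np
        from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

        rng = np.random.default_rng(1)
        # Synthetic stand-ins for (HH, HV) backscatter [dB] of three fuel groups.
        means = {"coniferous": (-7.0, -13.0), "deciduous": (-6.0, -11.0), "open": (-12.0, -19.0)}
        X = np.vstack([rng.normal(m, 1.0, (40, 2)) for m in means.values()])
        y = np.repeat(list(means.keys()), 40)

        # Gaussian maximum-likelihood classification: per-class mean + covariance.
        clf = QuadraticDiscriminantAnalysis(store_covariance=True).fit(X, y)

        # To classify an image, flatten its pixels to (n_pixels, 2) and reshape back.
        pixels = rng.normal((-7.0, -13.0), 2.0, size=(5, 2))
        print(clf.predict(pixels))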

  3. Ocean acoustic hurricane classification.

    PubMed

    Wilson, Joshua D; Makris, Nicholas C

    2006-01-01

    Theoretical and empirical evidence are combined to show that underwater acoustic sensing techniques may be valuable for measuring the wind speed and determining the destructive power of a hurricane. This is done by first developing a model for the acoustic intensity and mutual intensity in an ocean waveguide due to a hurricane and then determining the relationship between local wind speed and underwater acoustic intensity. From this it is shown that it should be feasible to accurately measure the local wind speed and classify the destructive power of a hurricane if its eye wall passes directly over a single underwater acoustic sensor. The potential advantages and disadvantages of the proposed acoustic method are weighed against those of currently employed techniques. PMID:16454274
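
    A minimal illustration of the inversion step, under the common empirical assumption that wind-generated ambient-noise intensity follows a power law of local wind speed (the constants below are hypothetical placeholders, not the paper's fitted waveguide model):

        import numpy as np

        # If I = a * V**n (a, n from calibration against known storms), a measured
        # intensity I yields the wind-speed estimate V = (I / a)**(1 / n).
        a, n = 1.0e-9, 3.0                        # hypothetical calibration constants
        I_measured = np.array([2.7e-5, 8.1e-5])   # hypothetical intensities [W/m^2]
        V = (I_measured / a) ** (1.0 / n)
        print(V)                                  # estimated local wind speeds [m/s]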

  4. Ocean acoustic hurricane classification.

    PubMed

    Wilson, Joshua D; Makris, Nicholas C

    2006-01-01

    Theoretical and empirical evidence are combined to show that underwater acoustic sensing techniques may be valuable for measuring the wind speed and determining the destructive power of a hurricane. This is done by first developing a model for the acoustic intensity and mutual intensity in an ocean waveguide due to a hurricane and then determining the relationship between local wind speed and underwater acoustic intensity. From this it is shown that it should be feasible to accurately measure the local wind speed and classify the destructive power of a hurricane if its eye wall passes directly over a single underwater acoustic sensor. The potential advantages and disadvantages of the proposed acoustic method are weighed against those of currently employed techniques.

  5. Classification method based on KCCA

    NASA Astrophysics Data System (ADS)

    Wang, Zhanqing; Zhang, Guilin; Zhao, Guangzhou

    2007-11-01

    Nonlinear CCA extends linear CCA in that it operates in a kernel space and thus implies nonlinear combinations in the original space. This paper presents a classification method based on kernel canonical correlation analysis (KCCA). We introduce probabilistic label vectors (PLV) for a given pattern, which extend the conventional concept of a class label, and investigate the correlation between feature variables and PLV variables. A PLV predictor is presented based on KCCA, and classification is then performed on the predicted PLV. We formulate a framework for classification by integrating class information through the PLV. Experimental results on Iris data set classification and facial expression recognition show the effectiveness of the proposed method.
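
    A compact sketch of the core computation (a generic regularized KCCA in the style of standard reductions, with one-hot labels standing in for the PLV; the kernel choice and parameters are illustrative, not the paper's):

        import numpy as np

        def rbf_kernel(A, B, gamma=0.5):
            d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d)

        def kcca(Kx, Ky, reg=1e-3):
            """Regularized kernel CCA: squared canonical correlations + dual coefs."""
            n = Kx.shape[0]
            H = np.eye(n) - np.ones((n, n)) / n            # centering matrix
            Kx, Ky = H @ Kx @ H, H @ Ky @ H
            A = np.linalg.solve(Kx + reg * np.eye(n), Ky)
            B = np.linalg.solve(Ky + reg * np.eye(n), Kx)
            w, V = np.linalg.eig(A @ B)                    # eigenvalues ~ rho^2
            order = np.argsort(-w.real)
            return w.real[order], V.real[:, order]

        rng = np.random.default_rng(2)
        X = rng.normal(size=(60, 4))                       # feature variables
        y = rng.integers(0, 3, 60)
        Y = np.eye(3)[y]                                   # one-hot stand-in for PLV
        rho2, alphas = kcca(rbf_kernel(X, X), Y @ Y.T)     # linear kernel on labels
        print("leading squared canonical correlations:", rho2[:3])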

  6. CLASSIFICATION FRAMEWORK FOR COASTAL SYSTEMS

    EPA Science Inventory

    U.S. Environmental Protection Agency. Classification Framework for Coastal Systems. EPA/600/R-04/061. U.S. Environmental Protection Agency, National Health and Environmental Effects Research Laboratory, Atlantic Ecology Division, Narragansett, RI, Gulf Ecology Division, Gulf Bree...

  7. Spectroscopic classification of supernova candidates

    NASA Astrophysics Data System (ADS)

    Hodgkin, S. T.; Hall, A.; Fraser, M.; Campbell, H.; Wyrzykowski, L.; Kostrzewa-Rutkowska, Z.; Pietro, N.

    2014-09-01

    We report the spectroscopic classification of four supernovae at the 2.5m Isaac Newton Telescope on La Palma, using the Intermediate Dispersion Spectrograph and the R300V grating (3500-8000 Ang; ~6 Ang resolution).

  8. [Current diagnosis and procedure classification].

    PubMed

    Graubner, B

    2000-01-01

    Classifications of diagnoses and medical procedures support structured medical documentation and make it accessible to computerized evaluation. Important performance figures for inpatient and outpatient medical care can be derived from these data. The importance of medical classifications is currently increasing in Germany because of the decision to introduce a new accounting system for hospitals on the basis of the Australian AR-DRGs. The following review describes in detail the medical classifications that are prescribed by law in Germany (ICD-10 or ICD-10-SGB V, and OPS-301 [ICPM]). Some of their most important predecessors are also reviewed and discussed in the context of medical documentation. Some ideas are given about the further development of these medical classifications (e.g. PCS) and of medical documentation, taking into account especially the future DRG situation in hospitals.

  9. [Definition and classification of epilepsy].

    PubMed

    Jibiki, Itsuki

    2014-05-01

    The concept or definition of epilepsy is presented as a chronic disease of the brain consisting of repetitions of EEG paroxysms and clinical seizures caused by excessive discharges of the cerebral neurons, with reference to Gastaut's opinion and other statements. We also refer to conditions to be excluded from epilepsy, such as isolated, occasional and subclinical seizures. Next, the new classifications of seizures and epilepsies are explained on the basis of the revised terminology and concepts for organization of seizures and epilepsies in the Report of the ILAE Commission on Classification and Terminology, 2005-2009, in comparison with the Classification of Epileptic Seizures of 1981 and the Classification of Epilepsies and Epileptic Syndromes of 1989.

  10. Towards Accurate Application Characterization for Exascale (APEX)

    SciTech Connect

    Hammond, Simon David

    2015-09-01

    Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines, and many large capability resources including ASCI Red and RedStorm, were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia's production users/developers.

  11. Accurate paleointensities - the multi-method approach

    NASA Astrophysics Data System (ADS)

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade, methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements, and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, although the importance of additional tests and criteria to assess Multispecimen results must be emphasized. Recently, a non-heating, relative paleointensity technique was proposed - the pseudo-Thellier protocol - which shows great potential in both accuracy and efficiency but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, so the actual field strength at the time of cooling is reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  12. Important Nearby Galaxies without Accurate Distances

    NASA Astrophysics Data System (ADS)

    McQuinn, Kristen

    2014-10-01

    The Spitzer Infrared Nearby Galaxies Survey (SINGS) and its offspring programs (e.g., THINGS, HERACLES, KINGFISH) have resulted in a fundamental change in our view of star formation and the ISM in galaxies, and together they represent the most complete multi-wavelength data set yet assembled for a large sample of nearby galaxies. These great investments of observing time have been dedicated to the goal of understanding the interstellar medium, the star formation process, and, more generally, galactic evolution at the present epoch. Nearby galaxies provide the basis on which we interpret the distant universe, and the SINGS sample represents the best-studied nearby galaxies. Accurate distances are fundamental to interpreting observations of galaxies. Surprisingly, many of the SINGS spiral galaxies have numerous distance estimates, resulting in confusion. We can rectify this situation for 8 of the SINGS spiral galaxies within 10 Mpc at a very low cost through measurements of the tip of the red giant branch. The proposed observations will provide an accuracy of better than 0.1 in distance modulus. Our sample includes such well known galaxies as M51 (the Whirlpool), M63 (the Sunflower), M104 (the Sombrero), and M74 (the archetypal grand design spiral). We are also proposing coordinated parallel WFC3 UV observations of the central regions of the galaxies, rich with high-mass UV-bright stars. As a secondary science goal we will compare the resolved UV stellar populations with integrated UV emission measurements used in calibrating star formation rates. Our observations will complement the growing HST UV atlas of high resolution images of nearby galaxies.
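
    As a worked example of what the quoted precision buys (the modulus value below is hypothetical): a distance modulus mu converts to distance via d = 10**((mu + 5) / 5) parsecs, so a 0.1 mag uncertainty corresponds to roughly a 5% distance uncertainty.

        # Hypothetical galaxy with a TRGB distance modulus of 29.0 mag (~6.3 Mpc).
        mu = 29.0
        d_pc = 10 ** ((mu + 5) / 5)
        d_hi = 10 ** ((mu + 0.1 + 5) / 5)          # modulus shifted by +0.1 mag
        print(d_pc / 1e6, "Mpc")                   # ~6.31 Mpc
        print(100 * (d_hi / d_pc - 1), "% shift")  # ~4.7% per 0.1 mag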

  13. Accurate Thermal Conductivities from First Principles

    NASA Astrophysics Data System (ADS)

    Carbogno, Christian

    2015-03-01

    In spite of significant research efforts, a first-principles determination of the thermal conductivity at high temperatures has remained elusive. On the one hand, Boltzmann transport techniques that include anharmonic effects in the nuclear dynamics only perturbatively become inaccurate or inapplicable under such conditions. On the other hand, non-equilibrium molecular dynamics (MD) methods suffer from enormous finite-size artifacts in the computationally feasible supercells, which prevent an accurate extrapolation to the bulk limit of the thermal conductivity. In this work, we overcome this limitation by performing ab initio MD simulations in thermodynamic equilibrium that account for all orders of anharmonicity. The thermal conductivity is then assessed from the auto-correlation function of the heat flux using the Green-Kubo formalism. Foremost, we discuss the fundamental theory underlying a first-principles definition of the heat flux using the virial theorem. We validate our approach and in particular the techniques developed to overcome finite time and size effects, e.g., by inspecting silicon, the thermal conductivity of which is particularly challenging to converge. Furthermore, we use this framework to investigate the thermal conductivity of ZrO2, which is known for its high degree of anharmonicity. Our calculations shed light on the heat resistance mechanism active in this material, which eventually allows us to discuss how the thermal conductivity can be controlled by doping and co-doping. This work has been performed in collaboration with R. Ramprasad (University of Connecticut), C. G. Levi and C. G. Van de Walle (University of California Santa Barbara).
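
    A minimal numerical sketch of the Green-Kubo step described above (not the authors' ab initio pipeline; the white-noise flux and the constants are placeholders): the thermal conductivity is obtained from the time integral of the heat-flux autocorrelation function, kappa = V / (3 kB T^2) * integral of <J(0) . J(t)> dt, with J the heat flux per unit volume (prefactor conventions vary with the definition of J).

        import numpy as np

        def green_kubo_kappa(J, dt, volume, temperature, kB=1.380649e-23):
            """kappa from a heat-flux time series J of shape (n_steps, 3)."""
            n = len(J)
            # flux autocorrelation <J(0).J(t)>, averaged over time origins
            acf = np.array([np.mean(np.sum(J[:n - k] * J[k:], axis=1))
                            for k in range(n // 2)])
            integral = np.sum(acf) * dt            # rectangle-rule time integral
            return volume / (3.0 * kB * temperature ** 2) * integral

        rng = np.random.default_rng(3)
        J = rng.normal(size=(20000, 3)) * 1e8      # white-noise stand-in [W/m^2]
        print(green_kubo_kappa(J, dt=1e-15, volume=1e-26, temperature=300.0))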

  14. How flatbed scanners upset accurate film dosimetry

    NASA Astrophysics Data System (ADS)

    van Battum, L. J.; Huizenga, H.; Verdaasdonk, R. M.; Heukelom, S.

    2016-01-01

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. To this end, the LSE of two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focusing on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range of 0.25 to 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% effect for pixels at the extreme lateral position. Light polarization due to film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green and blue channels. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE and, therefore, determination of the LSE per color channel and of the dose delivered to the film.
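
    A hedged sketch of the correction called for in the conclusion (the calibration table below is hypothetical; in practice it would be measured per colour channel with uniformly irradiated films, and applied iteratively since the dose is itself the quantity being estimated): the LSE is characterized as a multiplicative readout factor over lateral position and dose, and scanned pixel values are divided by it.

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        x_pos = np.array([0.0, 50.0, 100.0, 150.0])   # lateral position [mm]
        dose = np.array([0.0, 3.0, 6.0, 9.0])         # dose [Gy]
        # Hypothetical red-channel factor table: readout inflation grows off-axis
        # and with dose, reaching 14% at 9 Gy at the maximum lateral position.
        factor_red = 1.0 + 0.14 * (x_pos[:, None] / 150.0) ** 2 * (dose[None, :] / 9.0)

        lse = RegularGridInterpolator((x_pos, dose), factor_red)
        pixel_value, px_x, px_dose = 0.62, 120.0, 7.5
        corrected = pixel_value / lse([[px_x, px_dose]])[0]
        print(corrected)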

  15. How flatbed scanners upset accurate film dosimetry.

    PubMed

    van Battum, L J; Huizenga, H; Verdaasdonk, R M; Heukelom, S

    2016-01-21

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. To this end, the LSE of two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focusing on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range of 0.25 to 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% effect for pixels at the extreme lateral position. Light polarization due to film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green and blue channels. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE and, therefore, determination of the LSE per color channel and of the dose delivered to the film.

  16. Metabolism of pesticides after dermal exposure to amphibians

    EPA Science Inventory

    Understanding how pesticide exposure to non-target species influences toxicity is necessary to accurately assess the ecological risks these compounds pose. Aquatic, terrestrial, and arboreal amphibians are often exposed to pesticides during their agricultural application resultin...

  17. Successional stage of biological soil crusts: an accurate indicator of ecohydrological condition

    USGS Publications Warehouse

    Belnap, Jayne; Wilcox, Bradford P.; Van Scoyoc, Matthew V.; Phillips, Susan L.

    2013-01-01

    Biological soil crusts are a key component of many dryland ecosystems. Following disturbance, biological soil crusts will recover in stages. Recently, a simple classification of these stages has been developed, largely on the basis of external features of the crusts, which reflects their level of development (LOD). The classification system has six LOD classes, from low (1) to high (6). To determine whether the LOD of a crust is related to its ecohydrological function, we used rainfall simulation to evaluate differences in infiltration, runoff, and erosion among crusts in the various LODs, across a range of soil depths and with different wetting pre-treatments. We found large differences between the lowest and highest LODs, with runoff and erosion being greatest from the lowest LOD. Under dry antecedent conditions, about 50% of the water applied ran off the lowest LOD plots, whereas less than 10% ran off the plots of the two highest LODs. Similarly, sediment loss was 400 g m-2 from the lowest LOD and almost zero from the higher LODs. We scaled up the results from these simulations using the Rangeland Hydrology and Erosion Model. Modelling results indicate that erosion increases dramatically as slope length and gradient increase, especially beyond the threshold values of 10 m for slope length and 10% for slope gradient. Our findings confirm that the LOD classification is a quick, easy, nondestructive, and accurate index of hydrological condition and should be incorporated in field and modelling assessments of ecosystem health.

  18. A new classification of necrophilia.

    PubMed

    Aggrawal, Anil

    2009-08-01

    Necrophilia is a paraphilia whereby the perpetrator gets sexual pleasure in having sex with the dead. Most jurisdictions and nations have laws against this practice. Necrophilia exists in many variations, and some authors have attempted to classify necrophilia. However many related terms such as pseudonecrophilia continue being used differently by different authors, necessitating the introduction of a new classification system. The classification system suggested by the author attempts to put all different shades of necrophilia under 10 classes.

  19. Classification of the acanthocephala.

    PubMed

    Amin, Omar M

    2013-09-01

    In 1985, Amin presented a new system for the classification of the Acanthocephala in Crompton and Nickol's (1985) book 'Biology of the Acanthocephala' and recognized the concepts of Meyer (1931, 1932, 1933) and Van Cleave (1936, 1941, 1947, 1948, 1949, 1951, 1952). This system became the standard for the taxonomy of this group and remains so to date. Many changes have taken place and many new genera and species, as well as higher taxa, have been described since. An updated version of the 1985 scheme incorporating new concepts in molecular taxonomy, gene sequencing and phylogenetic studies is presented. The hierarchy has undergone a total face lift with Amin's (1987) addition of a new class, Polyacanthocephala (and a new order and family) to remove inconsistencies in the class Palaeacanthocephala. Amin and Ha (2008) added a third order (and a new family) to the Palaeacanthocephala, Heteramorphida, which combines features from the palaeacanthocephalan families Polymorphidae and Heteracanthocephalidae. Other families and subfamilies have been added but some have been eliminated, e.g. the three subfamilies of Arythmacanthidae: Arhythmacanthinae Yamaguti, 1935; Neoacanthocephaloidinae Golvan, 1960; and Paracanthocephaloidinae Golvan, 1969. Amin (1985) listed 22 families, 122 genera and 903 species (4, 4 and 14 families; 13, 28 and 81 genera; 167, 167 and 569 species in Archiacanthocephala, Eoacanthocephala and Palaeacanthocephala, respectively). The number of taxa listed in the present treatment is 26 families (18% increase), 157 genera (29%), and 1298 species (44%) (4, 4 and 16; 18, 29 and 106; 189, 255 and 845, in the same order), which also includes 1 family, 1 genus and 4 species in the class Polyacanthocephala Amin, 1987, and 3 genera and 5 species in the fossil family Zhijinitidae.

  20. Classification of the acanthocephala.

    PubMed

    Amin, Omar M

    2013-09-01

    In 1985, Amin presented a new system for the classification of the Acanthocephala in Crompton and Nickol's (1985) book 'Biology of the Acanthocephala' and recognized the concepts of Meyer (1931, 1932, 1933) and Van Cleave (1936, 1941, 1947, 1948, 1949, 1951, 1952). This system became the standard for the taxonomy of this group and remains so to date. Many changes have taken place and many new genera and species, as well as higher taxa, have been described since. An updated version of the 1985 scheme incorporating new concepts in molecular taxonomy, gene sequencing and phylogenetic studies is presented. The hierarchy has undergone a total face lift with Amin's (1987) addition of a new class, Polyacanthocephala (and a new order and family) to remove inconsistencies in the class Palaeacanthocephala. Amin and Ha (2008) added a third order (and a new family) to the Palaeacanthocephala, Heteramorphida, which combines features from the palaeacanthocephalan families Polymorphidae and Heteracanthocephalidae. Other families and subfamilies have been added but some have been eliminated, e.g. the three subfamilies of Arythmacanthidae: Arhythmacanthinae Yamaguti, 1935; Neoacanthocephaloidinae Golvan, 1960; and Paracanthocephaloidinae Golvan, 1969. Amin (1985) listed 22 families, 122 genera and 903 species (4, 4 and 14 families; 13, 28 and 81 genera; 167, 167 and 569 species in Archiacanthocephala, Eoacanthocephala and Palaeacanthocephala, respectively). The number of taxa listed in the present treatment is 26 families (18% increase), 157 genera (29%), and 1298 species (44%) (4, 4 and 16; 18, 29 and 106; 189, 255 and 845, in the same order), which also includes 1 family, 1 genus and 4 species in the class Polyacanthocephala Amin, 1987, and 3 genera and 5 species in the fossil family Zhijinitidae. PMID:24261131

  1. [Oligodendrogliomas: historical background of classifications].

    PubMed

    Nataf, F; Tucker, M-L; Varlet, P; Koziak, M; Beuvon, F; Daumas-Duport, C; Roux, F-X

    2005-09-01

    The story of the classifications for gliomas is related to the development of the techniques used for cytological and histological examination of brain parenchyma. After a review of these techniques and the progressive discovery of the central nervous system cell types, the main classifications are presented. The first classification is due to Bailey and Cushing in 1926 and was based on the histoembryogenetic theory. Kernohan then introduced, in 1938, the concept of anaplasia. The WHO classification was published in 1979, then revised in 1993 and 2000. It took into account some data from both previous systems and gradually introduced the notion of histological criteria of malignancy. More recently, molecular genetics data and clinical evolution were taken into account. The Sainte-Anne classification for oligodendrogliomas is based on both histological and imaging data and includes the notion of the spatial histological structure of oligodendrogliomas. Contrast enhancement is closely related to endotheliocapillary hyperplasia. Glioma classifications are changing, and confusion can arise from lack of reproducibility and misinterpretation of samples. PMID:16292165

  2. Classification of Salivary Gland Neoplasms.

    PubMed

    Bradley, Patrick J

    2016-01-01

    Presently, there is no universal 'working' classification system acceptable to all clinicians involved in the diagnosis and management of patients with salivary gland neoplasms. The most recent World Health Organization Classification of Tumours: Head and Neck Tumours (Salivary Glands) (2005) for benign and malignant neoplasms represents the consensus of current knowledge and is considered the standard pathological classification based on which series should be reported. The TNM classification of salivary gland malignancies has stood the test of time, and using the stage groupings remains the current standard for reporting treated patients' outcomes. Many developments in molecular and genetic methods in the meantime have identified a number of new entities, and new findings for several of the well-established salivary malignancies need to be considered for inclusion in any new classification system. All clinicians involved in the diagnosis, assessment and treatment of patients with salivary gland neoplasms must understand and respect the need for the various classification systems, enabling them to work within a multidisciplinary clinical team environment.

  3. Potential pitfall in using cumulative exposure in exposure-response relationships: demonstration and discussion.

    PubMed

    Finkelstein, M M

    1995-07-01

    Cumulative exposure is frequently used as a measure of exposure in the quantitative analysis of epidemiologic studies. It is recognized that the imposed symmetry between duration and intensity of exposure is a potential problem with this measure, but it is less widely recognized that the finding of an exposure-response relationship, using cumulative exposure as the exposure metric, does not necessarily imply that exposures were accurately or even consistently estimated. This report describes a simulation study drawn from a nested case-control analysis of mesothelioma in a cohort of asbestos cement workers. Intensity of exposure in the range of 0.1-40 fibers/ml was randomly assigned to subjects. Logistic regression analysis demonstrated that there was no association between mesothelioma risk and the randomly assigned intensity of exposure. However, in 171 (86%) of 200 trials, mesothelioma risk was significantly associated with cumulative exposure, even though intensity of exposure remained randomly assigned. A strong exposure-response relationship might thus be misleading. One would be more confident about quantitative risk assessment when there are a large number of independent studies available for analysis.
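
    The simulation is simple enough to re-create in a few lines (a hedged sketch with synthetic, illustrative parameters rather than the paper's cohort data): risk is made to depend only on duration of exposure, intensity is assigned at random, and cumulative exposure (intensity x duration) nevertheless tends to come out "significant" because it inherits the duration signal.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(4)
        n = 400
        duration = rng.uniform(1, 40, n)       # years of employment
        intensity = rng.uniform(0.1, 40, n)    # fibers/ml, random: no true effect
        # Case status driven by duration only (illustrative logistic model).
        p_case = 1 / (1 + np.exp(-(-4.0 + 0.12 * duration)))
        case = rng.binomial(1, p_case)
        cumulative = intensity * duration

        for name, xvar in [("intensity", intensity), ("cumulative", cumulative)]:
            fit = sm.Logit(case, sm.add_constant(xvar)).fit(disp=0)
            print(name, "p =", float(fit.pvalues[1]))
        # Typically: intensity non-significant, cumulative "significant".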

  4. Exposure chamber

    DOEpatents

    Moss, Owen R.

    1980-01-01

    A chamber for exposing animals, plants, or materials to air containing gases or aerosols is so constructed that catch pans for animal excrement, for example, serve to aid the uniform distribution of air throughout the chamber instead of constituting obstacles as has been the case in prior animal exposure chambers. The chamber comprises the usual imperforate top, bottom and side walls. Within the chamber, cages and their associated pans are arranged in two columns. The pans are spaced horizontally from the walls of the chamber in all directions. Corresponding pans of the two columns are also spaced horizontally from each other. Preferably the pans of one column are also spaced vertically from corresponding pans of the other column. Air is introduced into the top of the chamber and withdrawn from the bottom. The general flow of air is therefore vertical. The effect of the horizontal pans is based on the fact that a gas flowing past the edge of a flat plate that is perpendicular to the flow forms a wave on the upstream side of the plate. Air flows downwardly between the chamber walls and the outer edges of the pan. It also flows downwardly between the inner edges of the pans of the two columns. It has been found that when the air carries aerosol particles, these particles are substantially uniformly distributed throughout the chamber.

  5. 49 CFR 8.17 - Classification challenges.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 1 2012-10-01 2012-10-01 false Classification challenges. 8.17 Section 8.17 Transportation Office of the Secretary of Transportation CLASSIFIED INFORMATION: CLASSIFICATION/DECLASSIFICATION/ACCESS Classification/Declassification of Information § 8.17 Classification challenges. (a)...

  6. 32 CFR 2400.6 - Classification levels.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 6 2011-07-01 2011-07-01 false Classification levels. 2400.6 Section 2400.6... Original Classification § 2400.6 Classification levels. (a) National security information (hereinafter... three authorized classification levels, such as “Secret Sensitive” or “Agency Confidential.” The...

  7. 7 CFR 51.2836 - Size classifications.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Size classifications. 51.2836 Section 51.2836...) Size Classifications § 51.2836 Size classifications. The size of onions may be specified in accordance with one of the following classifications. Size designation Minimum diameter Inches Millimeters...

  8. 37 CFR 2.85 - Classification schedules.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2011-07-01 2011-07-01 false Classification schedules. 2..., DEPARTMENT OF COMMERCE RULES OF PRACTICE IN TRADEMARK CASES Classification § 2.85 Classification schedules. (a) International classification system. Section 6.1 of this chapter sets forth the...

  9. 7 CFR 51.1402 - Size classification.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Size classification. 51.1402 Section 51.1402... Classification § 51.1402 Size classification. Size of pecans may be specified in connection with the grade in accordance with one of the following classifications. To meet the requirements for any one of...

  10. 32 CFR 2001.10 - Classification standards.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 6 2013-07-01 2013-07-01 false Classification standards. 2001.10 Section 2001... Classification § 2001.10 Classification standards. Identifying or describing damage to the national security. Section 1.1(a) of the Order specifies the conditions that must be met when making classification...

  11. 22 CFR 9.8 - Classification challenges.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 22 Foreign Relations 1 2011-04-01 2011-04-01 false Classification challenges. 9.8 Section 9.8 Foreign Relations DEPARTMENT OF STATE GENERAL SECURITY INFORMATION REGULATIONS § 9.8 Classification... classification status is improper are expected and encouraged to challenge the classification status of...

  12. 32 CFR 2400.9 - Classification requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 6 2014-07-01 2014-07-01 false Classification requirements. 2400.9 Section 2400... PROGRAM Original Classification § 2400.9 Classification requirements. (a) Information may be classified... below, and an official having original classification authority determines that its...

  13. 43 CFR 2461.2 - Classifications.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 43 Public Lands: Interior 2 2013-10-01 2013-10-01 false Classifications. 2461.2 Section 2461.2..., DEPARTMENT OF THE INTERIOR LAND RESOURCE MANAGEMENT (2000) BUREAU INITIATED CLASSIFICATION SYSTEM Multiple-Use Classification Procedures § 2461.2 Classifications. Not less than 60 days after publication of...

  14. 43 CFR 2410.1 - All classifications.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 43 Public Lands: Interior 2 2012-10-01 2012-10-01 false All classifications. 2410.1 Section 2410.1..., DEPARTMENT OF THE INTERIOR LAND RESOURCE MANAGEMENT (2000) CRITERIA FOR ALL LAND CLASSIFICATIONS General Criteria § 2410.1 All classifications. All classifications under the regulations of this part will give...

  15. 32 CFR 2001.15 - Classification guides.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 6 2011-07-01 2011-07-01 false Classification guides. 2001.15 Section 2001.15..., NATIONAL ARCHIVES AND RECORDS ADMINISTRATION CLASSIFIED NATIONAL SECURITY INFORMATION Classification § 2001.15 Classification guides. (a) Preparation of classification guides. Originators of...

  16. 49 CFR 8.17 - Classification challenges.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 1 2013-10-01 2013-10-01 false Classification challenges. 8.17 Section 8.17 Transportation Office of the Secretary of Transportation CLASSIFIED INFORMATION: CLASSIFICATION/DECLASSIFICATION/ACCESS Classification/Declassification of Information § 8.17 Classification challenges. (a)...

  17. 32 CFR 2400.6 - Classification levels.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 6 2014-07-01 2014-07-01 false Classification levels. 2400.6 Section 2400.6... Original Classification § 2400.6 Classification levels. (a) National security information (hereinafter... three authorized classification levels, such as “Secret Sensitive” or “Agency Confidential.” The...

  18. 43 CFR 2410.1 - All classifications.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 43 Public Lands: Interior 2 2014-10-01 2014-10-01 false All classifications. 2410.1 Section 2410.1..., DEPARTMENT OF THE INTERIOR LAND RESOURCE MANAGEMENT (2000) CRITERIA FOR ALL LAND CLASSIFICATIONS General Criteria § 2410.1 All classifications. All classifications under the regulations of this part will give...

  19. 32 CFR 2001.15 - Classification guides.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 6 2012-07-01 2012-07-01 false Classification guides. 2001.15 Section 2001.15..., NATIONAL ARCHIVES AND RECORDS ADMINISTRATION CLASSIFIED NATIONAL SECURITY INFORMATION Classification § 2001.15 Classification guides. (a) Preparation of classification guides. Originators of...

  20. 7 CFR 51.2284 - Size classification.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Size classification. 51.2284 Section 51.2284...) Size Requirements § 51.2284 Size classification. The following classifications are provided to describe... of kernels in the lot shall conform to the requirements of the specified classification as...

  1. 32 CFR 2400.6 - Classification levels.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 6 2012-07-01 2012-07-01 false Classification levels. 2400.6 Section 2400.6... Original Classification § 2400.6 Classification levels. (a) National security information (hereinafter... three authorized classification levels, such as “Secret Sensitive” or “Agency Confidential.” The...

  2. 32 CFR 2700.22 - Classification guides.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 6 2014-07-01 2014-07-01 false Classification guides. 2700.22 Section 2700.22... SECURITY INFORMATION REGULATIONS Derivative Classification § 2700.22 Classification guides. OMSN shall issue classification guides pursuant to section 2-2 of E.O. 12065. These guides, which shall be used...

  3. 32 CFR 2700.22 - Classification guides.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 6 2012-07-01 2012-07-01 false Classification guides. 2700.22 Section 2700.22... SECURITY INFORMATION REGULATIONS Derivative Classification § 2700.22 Classification guides. OMSN shall issue classification guides pursuant to section 2-2 of E.O. 12065. These guides, which shall be used...

  4. 7 CFR 51.2284 - Size classification.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Size classification. 51.2284 Section 51.2284... Size classification. The following classifications are provided to describe the size of any lot... shall conform to the requirements of the specified classification as defined below: (a) Halves....

  5. 32 CFR 2001.10 - Classification standards.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 6 2012-07-01 2012-07-01 false Classification standards. 2001.10 Section 2001... Classification § 2001.10 Classification standards. Identifying or describing damage to the national security. Section 1.1(a) of the Order specifies the conditions that must be met when making classification...

  6. 43 CFR 2461.2 - Classifications.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 43 Public Lands: Interior 2 2014-10-01 2014-10-01 false Classifications. 2461.2 Section 2461.2..., DEPARTMENT OF THE INTERIOR LAND RESOURCE MANAGEMENT (2000) BUREAU INITIATED CLASSIFICATION SYSTEM Multiple-Use Classification Procedures § 2461.2 Classifications. Not less than 60 days after publication of...

  7. 22 CFR 9.8 - Classification challenges.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 22 Foreign Relations 1 2012-04-01 2012-04-01 false Classification challenges. 9.8 Section 9.8 Foreign Relations DEPARTMENT OF STATE GENERAL SECURITY INFORMATION REGULATIONS § 9.8 Classification... classification status is improper are expected and encouraged to challenge the classification status of...

  8. 32 CFR 2001.15 - Classification guides.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 6 2014-07-01 2014-07-01 false Classification guides. 2001.15 Section 2001.15..., NATIONAL ARCHIVES AND RECORDS ADMINISTRATION CLASSIFIED NATIONAL SECURITY INFORMATION Classification § 2001.15 Classification guides. (a) Preparation of classification guides. Originators of...

  9. 43 CFR 2461.2 - Classifications.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 43 Public Lands: Interior 2 2012-10-01 2012-10-01 false Classifications. 2461.2 Section 2461.2..., DEPARTMENT OF THE INTERIOR LAND RESOURCE MANAGEMENT (2000) BUREAU INITIATED CLASSIFICATION SYSTEM Multiple-Use Classification Procedures § 2461.2 Classifications. Not less than 60 days after publication of...

  10. 32 CFR 2400.9 - Classification requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 6 2012-07-01 2012-07-01 false Classification requirements. 2400.9 Section 2400... PROGRAM Original Classification § 2400.9 Classification requirements. (a) Information may be classified... below, and an official having original classification authority determines that its...

  11. 37 CFR 2.85 - Classification schedules.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2012-07-01 2012-07-01 false Classification schedules. 2..., DEPARTMENT OF COMMERCE RULES OF PRACTICE IN TRADEMARK CASES Classification § 2.85 Classification schedules. (a) International classification system. Section 6.1 of this chapter sets forth the...

  12. 7 CFR 51.1402 - Size classification.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Size classification. 51.1402 Section 51.1402... Classification § 51.1402 Size classification. Size of pecans may be specified in connection with the grade in accordance with one of the following classifications. To meet the requirements for any one of...

  13. 22 CFR 9.8 - Classification challenges.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 22 Foreign Relations 1 2013-04-01 2013-04-01 false Classification challenges. 9.8 Section 9.8 Foreign Relations DEPARTMENT OF STATE GENERAL SECURITY INFORMATION REGULATIONS § 9.8 Classification... classification status is improper are expected and encouraged to challenge the classification status of...

  14. 7 CFR 51.2836 - Size classifications.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Size classifications. 51.2836 Section 51.2836...-Granex-Grano and Creole Types) Size Classifications § 51.2836 Size classifications. The size of onions may be specified in accordance with one of the following classifications. Size designation...

  15. 7 CFR 28.911 - Review classification.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Review classification. 28.911 Section 28.911... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Cotton Classification and Market News Service for Producers Classification § 28.911 Review classification. (a) A producer may request one...

  16. 32 CFR 2001.15 - Classification guides.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 6 2013-07-01 2013-07-01 false Classification guides. 2001.15 Section 2001.15..., NATIONAL ARCHIVES AND RECORDS ADMINISTRATION CLASSIFIED NATIONAL SECURITY INFORMATION Classification § 2001.15 Classification guides. (a) Preparation of classification guides. Originators of...

  17. 22 CFR 42.11 - Classification symbols.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 22 Foreign Relations 1 2012-04-01 2012-04-01 false Classification symbols. 42.11 Section 42.11... NATIONALITY ACT, AS AMENDED Classification and Foreign State Chargeability § 42.11 Classification symbols. A... visa symbol to show the classification of the alien. Immigrants Symbol Class Section of law...

  18. 7 CFR 51.2284 - Size classification.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Size classification. 51.2284 Section 51.2284... Size classification. The following classifications are provided to describe the size of any lot... shall conform to the requirements of the specified classification as defined below: (a) Halves....

  19. 32 CFR 2001.10 - Classification standards.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 6 2014-07-01 2014-07-01 false Classification standards. 2001.10 Section 2001... Classification § 2001.10 Classification standards. Identifying or describing damage to the national security. Section 1.1(a) of the Order specifies the conditions that must be met when making classification...

  20. 37 CFR 2.85 - Classification schedules.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2014-07-01 2014-07-01 false Classification schedules. 2..., DEPARTMENT OF COMMERCE RULES OF PRACTICE IN TRADEMARK CASES Classification § 2.85 Classification schedules. (a) International classification system. Section 6.1 of this chapter sets forth the...

  1. 49 CFR 8.17 - Classification challenges.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 1 2014-10-01 2014-10-01 false Classification challenges. 8.17 Section 8.17 Transportation Office of the Secretary of Transportation CLASSIFIED INFORMATION: CLASSIFICATION/DECLASSIFICATION/ACCESS Classification/Declassification of Information § 8.17 Classification challenges. (a)...

  2. 22 CFR 9.8 - Classification challenges.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 22 Foreign Relations 1 2014-04-01 2014-04-01 false Classification challenges. 9.8 Section 9.8 Foreign Relations DEPARTMENT OF STATE GENERAL SECURITY INFORMATION REGULATIONS § 9.8 Classification... classification status is improper are expected and encouraged to challenge the classification status of...

  3. 32 CFR 2001.10 - Classification standards.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 6 2011-07-01 2011-07-01 false Classification standards. 2001.10 Section 2001... Classification § 2001.10 Classification standards. Identifying or describing damage to the national security. Section 1.1(a) of the Order specifies the conditions that must be met when making classification...

  4. 7 CFR 28.911 - Review classification.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Review classification. 28.911 Section 28.911... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Cotton Classification and Market News Service for Producers Classification § 28.911 Review classification. (a) A producer may request one...

  5. 7 CFR 28.911 - Review classification.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Review classification. 28.911 Section 28.911... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Cotton Classification and Market News Service for Producers Classification § 28.911 Review classification. (a) A producer may request one...

  6. 45 CFR 601.2 - Classification authority.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 3 2013-10-01 2013-10-01 false Classification authority. 601.2 Section 601.2... CLASSIFICATION AND DECLASSIFICATION OF NATIONAL SECURITY INFORMATION § 601.2 Classification authority. The Foundation does not have original classification authority under Executive Order 12958. In any instance...

  7. 32 CFR 2700.22 - Classification guides.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 6 2011-07-01 2011-07-01 false Classification guides. 2700.22 Section 2700.22... SECURITY INFORMATION REGULATIONS Derivative Classification § 2700.22 Classification guides. OMSN shall issue classification guides pursuant to section 2-2 of E.O. 12065. These guides, which shall be used...

  8. 37 CFR 2.85 - Classification schedules.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2013-07-01 2013-07-01 false Classification schedules. 2..., DEPARTMENT OF COMMERCE RULES OF PRACTICE IN TRADEMARK CASES Classification § 2.85 Classification schedules. (a) International classification system. Section 6.1 of this chapter sets forth the...

  9. 7 CFR 28.911 - Review classification.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Review classification. 28.911 Section 28.911... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Cotton Classification and Market News Service for Producers Classification § 28.911 Review classification. (a) A producer may request one...

  10. 32 CFR 2700.22 - Classification guides.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 6 2013-07-01 2013-07-01 false Classification guides. 2700.22 Section 2700.22... SECURITY INFORMATION REGULATIONS Derivative Classification § 2700.22 Classification guides. OMSN shall issue classification guides pursuant to section 2-2 of E.O. 12065. These guides, which shall be used...

  11. 49 CFR 8.17 - Classification challenges.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 1 2011-10-01 2011-10-01 false Classification challenges. 8.17 Section 8.17 Transportation Office of the Secretary of Transportation CLASSIFIED INFORMATION: CLASSIFICATION/DECLASSIFICATION/ACCESS Classification/Declassification of Information § 8.17 Classification challenges. (a)...

  12. 43 CFR 2410.1 - All classifications.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 43 Public Lands: Interior 2 2013-10-01 2013-10-01 false All classifications. 2410.1 Section 2410.1..., DEPARTMENT OF THE INTERIOR LAND RESOURCE MANAGEMENT (2000) CRITERIA FOR ALL LAND CLASSIFICATIONS General Criteria § 2410.1 All classifications. All classifications under the regulations of this part will give...

  13. 7 CFR 51.2284 - Size classification.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Size classification. 51.2284 Section 51.2284...) Size Requirements § 51.2284 Size classification. The following classifications are provided to describe... of kernels in the lot shall conform to the requirements of the specified classification as...

  14. 43 CFR 2461.4 - Changing classifications.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 43 Public Lands: Interior 2 2011-10-01 2011-10-01 false Changing classifications. 2461.4 Section 2461.4 Public Lands: Interior Regulations Relating to Public Lands (Continued) BUREAU OF LAND... Multiple-Use Classification Procedures § 2461.4 Changing classifications. Classifications may be...

  15. 37 CFR 2.85 - Classification schedules.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Classification schedules. 2..., DEPARTMENT OF COMMERCE RULES OF PRACTICE IN TRADEMARK CASES Classification § 2.85 Classification schedules. (a) International classification system. Section 6.1 of this chapter sets forth the...

  16. 45 CFR 601.5 - Derivative classification.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 3 2010-10-01 2010-10-01 false Derivative classification. 601.5 Section 601.5... CLASSIFICATION AND DECLASSIFICATION OF NATIONAL SECURITY INFORMATION § 601.5 Derivative classification. Distinct from “original” classification is the determination that information is in substance the same...

  17. 28 CFR 524.73 - Classification procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Classification procedures. 524.73 Section..., CLASSIFICATION, AND TRANSFER CLASSIFICATION OF INMATES Central Inmate Monitoring (CIM) System § 524.73 Classification procedures. (a) Initial assignment. Except as provided for in paragraphs (a) (1) through (4)...

  18. 32 CFR 2700.22 - Classification guides.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Classification guides. 2700.22 Section 2700.22... SECURITY INFORMATION REGULATIONS Derivative Classification § 2700.22 Classification guides. OMSN shall issue classification guides pursuant to section 2-2 of E.O. 12065. These guides, which shall be used...

  19. 32 CFR 2001.10 - Classification standards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Classification standards. 2001.10 Section 2001... Classification § 2001.10 Classification standards. Identifying or describing damage to the national security. Section 1.1(a) of the Order specifies the conditions that must be met when making classification...

  20. 45 CFR 601.2 - Classification authority.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 3 2010-10-01 2010-10-01 false Classification authority. 601.2 Section 601.2... CLASSIFICATION AND DECLASSIFICATION OF NATIONAL SECURITY INFORMATION § 601.2 Classification authority. The Foundation does not have original classification authority under Executive Order 12958. In any instance...

  1. 32 CFR 2400.9 - Classification requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Classification requirements. 2400.9 Section 2400... PROGRAM Original Classification § 2400.9 Classification requirements. (a) Information may be classified... below, and an official having original classification authority determines that its...

  2. 22 CFR 9.8 - Classification challenges.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Classification challenges. 9.8 Section 9.8 Foreign Relations DEPARTMENT OF STATE GENERAL SECURITY INFORMATION REGULATIONS § 9.8 Classification... classification status is improper are expected and encouraged to challenge the classification status of...

  3. 28 CFR 345.20 - Position classification.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Position classification. 345.20 Section... INDUSTRIES (FPI) INMATE WORK PROGRAMS Position Classification § 345.20 Position classification. (a) Inmate... the objectives and principles of pay classification as a part of the routine orientation of new...

  4. 7 CFR 28.911 - Review classification.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Review classification. 28.911 Section 28.911... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Cotton Classification and Market News Service for Producers Classification § 28.911 Review classification. (a) A producer may request one...

  5. 49 CFR 8.17 - Classification challenges.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 1 2010-10-01 2010-10-01 false Classification challenges. 8.17 Section 8.17 Transportation Office of the Secretary of Transportation CLASSIFIED INFORMATION: CLASSIFICATION/DECLASSIFICATION/ACCESS Classification/Declassification of Information § 8.17 Classification challenges. (a)...

  6. 32 CFR 2001.15 - Classification guides.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Classification guides. 2001.15 Section 2001.15..., NATIONAL ARCHIVES AND RECORDS ADMINISTRATION CLASSIFIED NATIONAL SECURITY INFORMATION Classification § 2001.15 Classification guides. (a) Preparation of classification guides. Originators of...

  7. 7 CFR 51.2284 - Size classification.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Size classification. 51.2284 Section 51.2284... Size classification. The following classifications are provided to describe the size of any lot... shall conform to the requirements of the specified classification as defined below: (a) Halves....

  8. 32 CFR 2400.6 - Classification levels.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Classification levels. 2400.6 Section 2400.6... Original Classification § 2400.6 Classification levels. (a) National security information (hereinafter... three authorized classification levels, such as “Secret Sensitive” or “Agency Confidential.” The...

  9. Classification of US hydropower dams by their modes of operation

    DOE PAGES

    McManamay, Ryan A.; Oigbokie, Clement O., II; Kao, Shih-Chieh; Bevelhimer, Mark S.

    2016-02-19

    A key challenge to understanding ecohydrologic responses to dam regulation is the absence of a universally transferable classification framework for how dams operate. In the present paper, we develop a classification system to organize the modes of operation (MOPs) for U.S. hydropower dams and powerplants. To determine the full diversity of MOPs, we mined federal documents, open-access data repositories, and internet sources. We then used CART classification trees to predict MOPs based on physical characteristics, regulation, and project generation. Finally, we evaluated how much variation MOPs explained in sub-daily discharge patterns for stream gages downstream of hydropower dams. After reviewing information for 721 dams and 597 power plants, we developed a 2-tier hierarchical classification based on 1) the storage and control of flows to powerplants, and 2) the presence of a diversion around the natural stream bed. This resulted in nine tier-1 MOPs representing a continuum of operations from strictly peaking, to reregulating, to run-of-river, and two tier-2 MOPs, representing diversion and integral dam-powerhouse configurations. Although MOPs differed in physical characteristics and energy production, classification trees had low accuracies (<62%), which suggested accurate evaluations of MOPs may require individual attention. MOPs and dam storage explained 20% of the variation in downstream subdaily flow characteristics and showed consistent alterations in subdaily flow patterns from reference streams. Lastly, this standardized classification scheme is important for future research including estimating reservoir operations for large-scale hydrologic models and evaluating project economics, environmental impacts, and mitigation.
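
    As a rough illustration of the CART step described above, the sketch below fits a decision tree to a synthetic table of dam attributes with scikit-learn. The feature names, class labels, and data are invented placeholders, not the study's actual variables.

```python
# Sketch: CART-style prediction of a dam's mode of operation (MOP)
# from physical/operational attributes. Data below are synthetic;
# feature names are illustrative, not the study's actual variables.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.lognormal(3, 1, n),   # storage volume (placeholder)
    rng.uniform(1, 100, n),   # installed capacity in MW (placeholder)
    rng.integers(0, 2, n),    # regulated/unregulated flag (placeholder)
])
y = rng.integers(0, 3, n)     # three placeholder MOP classes

tree = DecisionTreeClassifier(max_depth=4, random_state=0)
scores = cross_val_score(tree, X, y, cv=5)
print("CV accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```

    With these random labels the cross-validated accuracy stays near chance, echoing the paper's point that low tree accuracies (<62%) can signal operations that need individual review.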

  10. CREST--classification resources for environmental sequence tags.

    PubMed

    Lanzén, Anders; Jørgensen, Steffen L; Huson, Daniel H; Gorfer, Markus; Grindhaug, Svenn Helge; Jonassen, Inge; Øvreås, Lise; Urich, Tim

    2012-01-01

    Sequencing of taxonomic or phylogenetic markers is becoming a fast and efficient method for studying environmental microbial communities. This has resulted in a steadily growing collection of marker sequences, most notably of the small-subunit (SSU) ribosomal RNA gene, and an increased understanding of microbial phylogeny, diversity and community composition patterns. However, to utilize these large datasets together with new sequencing technologies, a reliable and flexible system for taxonomic classification is critical. We developed CREST (Classification Resources for Environmental Sequence Tags), a set of resources and tools for generating and utilizing custom taxonomies and reference datasets for classification of environmental sequences. CREST uses an alignment-based classification method with the lowest common ancestor algorithm. It also uses explicit rank similarity criteria to reduce false positives and identify novel taxa. We implemented this method in a web server, a command line tool and the graphical user interfaced program MEGAN. Further, we provide the SSU rRNA reference database and taxonomy SilvaMod, derived from the publicly available SILVA SSURef, for classification of sequences from bacteria, archaea and eukaryotes. Using cross-validation and environmental datasets, we compared the performance of CREST and SilvaMod to the RDP Classifier. We also utilized Greengenes as a reference database, both with CREST and the RDP Classifier. These analyses indicate that CREST performs better than alignment-free methods with higher recall rate (sensitivity) as well as precision, and with the ability to accurately identify most sequences from novel taxa. Classification using SilvaMod performed better than with Greengenes, particularly when applied to environmental sequences. CREST is freely available under a GNU General Public License (v3) from http://apps.cbu.uib.no/crest and http://lcaclassifier.googlecode.com. PMID:23145153
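
    The lowest-common-ancestor step that CREST applies to alignment hits can be sketched in a few lines of Python. The toy taxonomy and hit list below are invented for illustration; the real tool additionally applies rank-specific similarity cutoffs before accepting an assignment.

```python
# Sketch: lowest-common-ancestor (LCA) assignment over a toy taxonomy.
# The parent map and the alignment "hits" are invented for illustration.
def path_to_root(node, parent):
    path = [node]
    while node in parent:
        node = parent[node]
        path.append(node)
    return path

def lca(nodes, parent):
    paths = [path_to_root(n, parent) for n in nodes]
    common = set(paths[0]).intersection(*map(set, paths[1:]))
    # deepest shared ancestor = first common node on any root-ward path
    return next(n for n in paths[0] if n in common)

parent = {"E.coli": "Escherichia", "Escherichia": "Enterobacteriaceae",
          "Salmonella": "Enterobacteriaceae",
          "Enterobacteriaceae": "Bacteria"}
hits = ["E.coli", "Salmonella"]          # top-scoring reference taxa
print(lca(hits, parent))                 # -> Enterobacteriaceae
```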

  11. The IC3D Classification of the Corneal Dystrophies

    PubMed Central

    Weiss, Jayne S.; Møller, H. U.; Lisch, Walter; Kinoshita, Shigeru; Aldave, Anthony J.; Belin, Michael W.; Kivelä, Tero; Busin, Massimo; Munier, Francis L.; Seitz, Berthold; Sutphin, John; Bredrup, Cecilie; Mannis, Mark J.; Rapuano, Christopher J.; Van Rij, Gabriel; Kim, Eung Kweon; Klintworth, Gordon K.

    2010-01-01

    Background The recent availability of genetic analyses has demonstrated the shortcomings of the current phenotypic method of corneal dystrophy classification. Abnormalities in different genes can cause a single phenotype, whereas different defects in a single gene can cause different phenotypes. Some disorders termed corneal dystrophies do not appear to have a genetic basis. Purpose The purpose of this study was to develop a new classification system for corneal dystrophies, integrating up-to-date information on phenotypic description, pathologic examination, and genetic analysis. Methods The International Committee for Classification of Corneal Dystrophies (IC3D) was created to devise a current and accurate nomenclature. Results This anatomic classification continues to organize dystrophies according to the level chiefly affected. Each dystrophy has a template summarizing genetic, clinical, and pathologic information. A category number from 1 through 4 is assigned, reflecting the level of evidence supporting the existence of a given dystrophy. The most defined dystrophies belong to category 1 (a well-defined corneal dystrophy in which a gene has been mapped and identified and specific mutations are known) and the least defined belong to category 4 (a suspected dystrophy where the clinical and genetic evidence is not yet convincing). The nomenclature may be updated over time as new information regarding the dystrophies becomes available. Conclusions The IC3D Classification of Corneal Dystrophies is a new classification system that incorporates many aspects of the traditional definitions of corneal dystrophies with new genetic, clinical, and pathologic information. Standardized templates provide key information that includes a level of evidence for there being a corneal dystrophy. The system is user-friendly and upgradeable and can be retrieved on the website www.corneasociety.org/ic3d. PMID:19337156

  12. CREST – Classification Resources for Environmental Sequence Tags

    PubMed Central

    Lanzén, Anders; Jørgensen, Steffen L.; Huson, Daniel H.; Gorfer, Markus; Grindhaug, Svenn Helge; Jonassen, Inge; Øvreås, Lise; Urich, Tim

    2012-01-01

    Sequencing of taxonomic or phylogenetic markers is becoming a fast and efficient method for studying environmental microbial communities. This has resulted in a steadily growing collection of marker sequences, most notably of the small-subunit (SSU) ribosomal RNA gene, and an increased understanding of microbial phylogeny, diversity and community composition patterns. However, to utilize these large datasets together with new sequencing technologies, a reliable and flexible system for taxonomic classification is critical. We developed CREST (Classification Resources for Environmental Sequence Tags), a set of resources and tools for generating and utilizing custom taxonomies and reference datasets for classification of environmental sequences. CREST uses an alignment-based classification method with the lowest common ancestor algorithm. It also uses explicit rank similarity criteria to reduce false positives and identify novel taxa. We implemented this method in a web server, a command line tool and the graphical user interfaced program MEGAN. Further, we provide the SSU rRNA reference database and taxonomy SilvaMod, derived from the publicly available SILVA SSURef, for classification of sequences from bacteria, archaea and eukaryotes. Using cross-validation and environmental datasets, we compared the performance of CREST and SilvaMod to the RDP Classifier. We also utilized Greengenes as a reference database, both with CREST and the RDP Classifier. These analyses indicate that CREST performs better than alignment-free methods with higher recall rate (sensitivity) as well as precision, and with the ability to accurately identify most sequences from novel taxa. Classification using SilvaMod performed better than with Greengenes, particularly when applied to environmental sequences. CREST is freely available under a GNU General Public License (v3) from http://apps.cbu.uib.no/crest and http://lcaclassifier.googlecode.com. PMID:23145153

  13. Expert review for GHS classification of chemicals on health effects.

    PubMed

    Morita, Takeshi; Morikawa, Kaoru

    2011-01-01

    Intoxication as a result of chemical accidents is a major issue in industrial health. The Globally Harmonized System of Classification and Labelling of Chemicals (GHS) provides a framework for hazard communication on chemicals using labelling or safety data sheets. The GHS is expected to reduce the number of chemical accidents by communicating the hazards posed and prompting safety measures to be taken. One issue that may be a barrier to effective implementation of the GHS is the discrepancy in GHS classifications of chemicals across countries/regions. The main reasons are the differences in the information sources used and in the expertise of the people making the classification (Classifiers). The GHS requests expert judgment in a weight of evidence (WOE) approach in the application of the classification criteria. A WOE approach is an assessment method that considers all available information bearing on the determination of toxicity. The quality and consistency of the data, study design, mechanism or mode of action, dose-effect relationships and biological relevance should be taken into account. Expert review is therefore necessary to classify chemicals accurately. However, the GHS does not provide any information on the required level of expertise of the Classifiers, the definition of who qualifies as an expert, evaluation methods for WOE or data quality, or the timing of expert judgment and the need for updating/re-classification as new information becomes available. In this paper, key methods and issues in expert reviews are discussed. Examples of expert reviews and recommendations for harmonized classification are also presented. PMID:21804272

  14. Automatic image classification for the urinoculture screening.

    PubMed

    Andreini, Paolo; Bonechi, Simone; Bianchini, Monica; Garzelli, Andrea; Mecocci, Alessandro

    2016-03-01

    Urinary tract infections (UTIs) are considered to be the most common bacterial infection; it is estimated that about 150 million UTIs occur worldwide each year, giving rise to roughly $6 billion in healthcare expenditures and resulting in 100,000 hospitalizations. Nevertheless, it is difficult to carefully assess the incidence of UTIs, since an accurate diagnosis depends both on the presence of symptoms and on a positive urinoculture, whereas in most outpatient settings this diagnosis is made without an ad hoc analysis protocol. On the other hand, in the traditional urinoculture test, a sample of midstream urine is put onto a Petri dish, where a growth medium favors the proliferation of germ colonies. Then, the infection severity is evaluated by a visual inspection of a human expert, an error-prone and lengthy process. In this paper, we propose a fully automated system for urinoculture screening that can provide quick and easily traceable results for UTIs. Based on advanced image processing and machine learning tools, the infection type recognition, together with the estimation of the bacterial load, can be automatically carried out, yielding accurate diagnoses. The proposed AID (Automatic Infection Detector) system provides support during the whole analysis process: first, digital color images of Petri dishes are automatically captured, then specific preprocessing and spatial clustering algorithms are applied to isolate the colonies from the culture ground and, finally, an accurate classification of the infections and their severity evaluation are performed. The AID system speeds up the analysis, contributes to the standardization of the process, allows result repeatability, and reduces the costs. Moreover, the continuous transition between sterile and external environments (typical of the standard analysis procedure) is completely avoided. PMID:26780249
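
    The colony-isolation step of such a pipeline can be sketched as thresholding plus connected-component labeling. The image below is synthetic and single-band; the actual AID system works on color photographs of Petri dishes with dedicated preprocessing and spatial clustering.

```python
# Sketch: isolate bright "colonies" from a dish image by thresholding
# and connected-component labeling. The image is synthetic; the real
# AID system uses color photos and more elaborate spatial clustering.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
img = rng.normal(0.2, 0.05, (128, 128))         # culture ground
for cy, cx in [(30, 40), (80, 90), (100, 30)]:  # three fake colonies
    yy, xx = np.ogrid[:128, :128]
    img[(yy - cy) ** 2 + (xx - cx) ** 2 < 36] = 0.9

mask = img > 0.5                                 # global threshold
labels, n_colonies = ndimage.label(mask)
sizes = ndimage.sum(mask, labels, range(1, n_colonies + 1))
print(n_colonies, "colonies, pixel areas:", sizes.astype(int))
```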

  15. Accurate theoretical chemistry with coupled pair models.

    PubMed

    Neese, Frank; Hansen, Andreas; Wennmohs, Frank; Grimme, Stefan

    2009-05-19

    Quantum chemistry has found its way into the everyday work of many experimental chemists. Calculations can predict the outcome of chemical reactions, afford insight into reaction mechanisms, and be used to interpret structure and bonding in molecules. Thus, contemporary theory offers tremendous opportunities in experimental chemical research. However, even with present-day computers and algorithms, we cannot solve the many particle Schrödinger equation exactly; inevitably some error is introduced in approximating the solutions of this equation. Thus, the accuracy of quantum chemical calculations is of critical importance. The affordable accuracy depends on molecular size and particularly on the total number of atoms: for orientation, ethanol has 9 atoms, aspirin 21 atoms, morphine 40 atoms, sildenafil 63 atoms, paclitaxel 113 atoms, insulin nearly 800 atoms, and quaternary hemoglobin almost 12,000 atoms. Currently, molecules with up to approximately 10 atoms can be very accurately studied by coupled cluster (CC) theory, approximately 100 atoms with second-order Møller-Plesset perturbation theory (MP2), approximately 1000 atoms with density functional theory (DFT), and beyond that number with semiempirical quantum chemistry and force-field methods. The overwhelming majority of present-day calculations in the 100-atom range use DFT. Although these methods have been very successful in quantum chemistry, they do not offer a well-defined hierarchy of calculations that allows one to systematically converge to the correct answer. Recently a number of rather spectacular failures of DFT methods have been found, even for seemingly simple systems such as hydrocarbons, fueling renewed interest in wave function-based methods that incorporate the relevant physics of electron correlation in a more systematic way. Thus, it would be highly desirable to fill the gap between 10 and 100 atoms with highly correlated ab initio methods. We have found that one of the earliest (and now

  16. Classification and identification of amino acids based on THz spectroscopy

    NASA Astrophysics Data System (ADS)

    Huang, Ping J.; Ma, Ye H.; Li, Xian; Hou, Di B.; Cai, Jin H.; Zhang, Guang X.

    2015-11-01

    Amino acids are important nutrient substances for life, and many of them have several isomers, while only L-type amino acids can be absorbed by the body as nutrients. It is therefore worthwhile to accurately classify and identify amino acids. In this paper, terahertz time-domain spectroscopy (THz-TDS) was used to detect isomers of various amino acids and obtain their absorption spectra, and their spectral characteristics were analyzed and compared. Results show that not all isomers of amino acids have unique spectral characteristics, which makes classification and identification difficult. To solve this problem, partial least squares discriminant analysis (PLS-DA) was first performed to extract principal components of the THz spectra and classify the amino acids. Moreover, variable selection (VS) was employed to optimize the spectral interval used for feature extraction and improve the analysis. As a result, the optimal classification model was determined and most samples could be accurately classified. Second, for each class of amino acids, PLS-DA combined with VS was also applied to identify the isomers. This work suggests an approach to material classification and identification with THz spectroscopy.
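
    PLS-DA is commonly implemented by regressing a one-hot class-membership matrix on the spectra and taking the argmax of the fitted response. A minimal sketch with scikit-learn follows, using synthetic "spectra" and omitting the paper's variable-selection step.

```python
# Sketch: PLS-DA = PLS regression onto one-hot class labels, then argmax.
# Spectra are synthetic; the paper's variable-selection step is omitted.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)
n, p, k = 90, 120, 3                      # samples, spectral points, classes
y = np.repeat(np.arange(k), n // k)
X = rng.normal(size=(n, p))
X[np.arange(n), y * 10] += 3.0            # class-specific absorption bands
Y = np.eye(k)[y]                          # one-hot encoding

pls = PLSRegression(n_components=5).fit(X, Y)
pred = pls.predict(X).argmax(axis=1)
print("training accuracy: %.2f" % (pred == y).mean())
```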

  17. A fast SCOP fold classification system using content-based E-Predict algorithm

    PubMed Central

    Chi, Pin-Hao; Shyu, Chi-Ren; Xu, Dong

    2006-01-01

    Background Domain experts manually construct the Structural Classification of Protein (SCOP) database to categorize and compare protein structures. Even though using the SCOP database is believed to be more reliable than classification results from other methods, it is labor intensive. To mimic human classification processes, we develop an automatic SCOP fold classification system to assign possible known SCOP folds and recognize novel folds for newly-discovered proteins. Results With a sufficient amount of ground truth data, our system is able to assign the known folds for newly-discovered proteins in the latest SCOP v1.69 release with 92.17% accuracy. Our system also recognizes the novel folds with 89.27% accuracy using 10 fold cross validation. The average response time for proteins with 500 and 1409 amino acids to complete the classification process is 4.1 and 17.4 seconds, respectively. By comparison with several structural alignment algorithms, our approach outperforms previous methods on both the classification accuracy and efficiency. Conclusion In this paper, we build an advanced, non-parametric classifier to accelerate the manual classification processes of SCOP. With satisfactory ground truth data from the SCOP database, our approach identifies relevant domain knowledge and yields reasonably accurate classifications. Our system is publicly accessible at . PMID:16872501

  18. HIV classification using coalescent theory

    SciTech Connect

    Zhang, Ming; Letiner, Thomas K; Korber, Bette T

    2008-01-01

    Algorithms for subtype classification and breakpoint detection of HIV-1 sequences are based on a classification system of HIV-1. Hence, their quality highly depends on this system. Due to the history of the creation of the current HIV-1 nomenclature, it contains inconsistencies, for example: the phylogenetic distance between subtypes B and D is remarkably small compared with other pairs of subtypes, and in fact is more like the distance of a pair of sub-subtypes Robertson et al. (2000); subtypes E and I no longer exist, since they were discovered to be composed of recombinants Robertson et al. (2000); it is currently discussed whether -- instead of CRF02 being a recombinant of subtype A and G -- subtype G should be designated as a circulating recombinant form (CRF) and CRF02 as a subtype Abecasis et al. (2007); and there are 8 complete and over 400 partial HIV genomes in the LANL database which belong neither to a subtype nor to a CRF (denoted by U). Moreover, the current classification system is somewhat arbitrary, like all complex classification systems that were created manually. To this end, it is desirable to deduce the classification system of HIV systematically by an algorithm. Of course, this problem is not restricted to HIV, but applies to all fast mutating and recombining viruses. Our work addresses the simpler subproblem of scoring classifications of given input sequences of some virus species (a classification denotes a partition of the input sequences into several subtypes and CRFs). To this end, we reconstruct ancestral recombination graphs (ARGs) of the input sequences under restrictions determined by the given classification. These restrictions are imposed in order to ensure that the reconstructed ARGs do not contradict the classification under consideration. Then, we find the ARG with maximal probability by means of Markov Chain Monte Carlo methods. The probability of the most probable ARG is interpreted as a score for the classification. To our
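
    The scoring idea, run a Markov Chain Monte Carlo search and report the probability of the best state found, can be caricatured on a toy target distribution. The sketch below is a generic Metropolis-Hastings loop with an invented one-dimensional state space, not the paper's ARG reconstruction.

```python
# Sketch: Metropolis-Hastings search for a high-probability state, with
# the best probability found used as a score -- a toy stand-in for
# scoring a classification by its most probable ancestral recombination
# graph (ARG). The state space and target density are invented.
import math, random

random.seed(0)

def log_prob(x):                      # toy unnormalized log-target
    return -(x - 3.0) ** 2

x, best = 0.0, -float("inf")
for _ in range(5000):
    prop = x + random.gauss(0, 0.5)   # symmetric random-walk proposal
    if math.log(random.random()) < log_prob(prop) - log_prob(x):
        x = prop                      # accept the move
    best = max(best, log_prob(x))
print("score (best log-probability found): %.4f" % best)
```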

  1. MEASURING DIETARY EXPOSURE OF YOUNG CHILDREN

    EPA Science Inventory

    Young children do not consume foods in a structured manner. Their foods contact surfaces (hands, floors, eating surfaces, etc.) that may be contaminated while they are eating them. Thus, dietary exposures of young children are difficult to accurately assess or measure. A recen...

  2. Classification of cell death

    PubMed Central

    Kroemer, G; Galluzzi, L; Vandenabeele, P; Abrams, J; Alnemri, ES; Baehrecke, EH; Blagosklonny, MV; El-Deiry, WS; Golstein, P; Green, DR; Hengartner, M; Knight, RA; Kumar, S; Lipton, SA; Malorni, W; Nuñez, G; Peter, ME; Tschopp, J; Yuan, J; Piacentini, M; Zhivotovsky, B; Melino, G

    2009-01-01

    Different types of cell death are often defined by morphological criteria, without a clear reference to precise biochemical mechanisms. The Nomenclature Committee on Cell Death (NCCD) proposes unified criteria for the definition of cell death and of its different morphologies, while formulating several caveats against the misuse of words and concepts that slow down progress in the area of cell death research. Authors, reviewers and editors of scientific periodicals are invited to abandon expressions like ‘percentage apoptosis’ and to replace them with more accurate descriptions of the biochemical and cellular parameters that are actually measured. Moreover, at the present stage, it should be accepted that caspase-independent mechanisms can cooperate with (or substitute for) caspases in the execution of lethal signaling pathways and that ‘autophagic cell death’ is a type of cell death occurring together with (but not necessarily by) autophagic vacuolization. This study details the 2009 recommendations of the NCCD on the use of cell death-related terminology including ‘entosis’, ‘mitotic catastrophe’, ‘necrosis’, ‘necroptosis’ and ‘pyroptosis’. PMID:18846107

  3. Central Texas coastal classification maps - Aransas Pass to Mansfield Channel

    USGS Publications Warehouse

    Morton, Robert A.; Peterson, Russell L.

    2006-01-01

    The primary purpose of the USGS National Assessment of Coastal Change Project is to provide accurate representations of pre-storm ground conditions for areas that are designated high priority because they have dense populations or valuable resources that are at risk from storm waves. A secondary purpose of the project is to develop a geomorphic (land feature) coastal classification that, with only minor modification, can be applied to most coastal regions in the United States. A Coastal Classification Map describing local geomorphic features is the first step toward determining the hazard vulnerability of an area. The Coastal Classification Maps of the National Assessment of Coastal Change Project present ground conditions such as beach width, dune elevations, overwash potential, and density of development. In order to complete a hazard-vulnerability assessment, that information must be integrated with other information, such as prior storm impacts and beach stability. The Coastal Classification Maps provide much of the basic information for such an assessment and represent a critical component of a storm-impact forecasting capability.

  4. Airborne lidar intensity calibration and application for land use classification

    NASA Astrophysics Data System (ADS)

    Li, Dong; Wang, Cheng; Luo, She-Zhou; Zuo, Zheng-Li

    2014-11-01

    Airborne Light Detection and Ranging (LiDAR) is an active remote sensing technology which can acquire topographic information efficiently. It records the accurate 3D coordinates of targets and also the signal intensity (the amplitude of backscattered echoes), which represents the reflectance characteristics of the targets. Intensity data have been used in land use classification, vegetation fractional cover and leaf area index (LAI) estimation. Apart from the reflectance characteristics of the targets, the intensity data can also be influenced by many other factors, such as flying height, incident angle, atmospheric attenuation, laser pulse power and laser beam width. It is therefore necessary to calibrate intensity values before further applications. In this study, we first analyze the factors affecting LiDAR intensity based on the radar range equation, and then apply an intensity calibration method, which accounts for the sensor-to-target distance and incident angle, to the laser intensity data over the study area. Finally, the raw and normalized LiDAR intensity data are each used for land use classification along with LiDAR elevation data. The results show that the classification accuracy from the normalized intensity data is higher than that from the raw LiDAR intensity data, and indicate that calibration of LiDAR intensity data is necessary in land use classification applications.
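
    A common form of the calibration implied by the radar range equation normalizes intensity by the squared range ratio and the cosine of the incidence angle. The sketch below assumes that form; the reference range and sample values are illustrative, and atmospheric attenuation is ignored.

```python
# Sketch: range/incidence-angle normalization of LiDAR intensity,
#   I_norm = I_raw * (R / R_ref)^2 / cos(theta),
# following the radar range equation. Values are illustrative, and
# atmospheric attenuation is not modeled here.
import numpy as np

def normalize_intensity(i_raw, range_m, incidence_rad, ref_range_m=1000.0):
    return i_raw * (range_m / ref_range_m) ** 2 / np.cos(incidence_rad)

i_raw = np.array([180.0, 160.0, 150.0])      # raw return intensities
range_m = np.array([950.0, 1100.0, 1300.0])  # sensor-to-target distances
theta = np.deg2rad([5.0, 15.0, 25.0])        # incidence angles
print(normalize_intensity(i_raw, range_m, theta))
```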

  5. Impact of Information based Classification on Network Epidemics

    NASA Astrophysics Data System (ADS)

    Mishra, Bimal Kumar; Haldar, Kaushik; Sinha, Durgesh Nandini

    2016-06-01

    Formulating mathematical models for accurate approximation of malicious propagation in a network is a difficult process because of our inherent lack of understanding of several underlying physical processes that intrinsically characterize the broader picture. The aim of this paper is to understand the impact of available information in the control of malicious network epidemics. A 1-n-n-1 type differential epidemic model is proposed, where the differentiality allows a symptom-based classification. This is the first attempt to add such a classification into the existing epidemic framework. The model is incorporated into a five-class system called the DifEpGoss architecture. Analysis reveals an epidemic threshold, based on which the long-term behavior of the system is analyzed. In this work three real network datasets, with 22002, 22469 and 22607 undirected edges respectively, are used. The datasets show that the classification-based prevention given in the model can play a useful role in containing network epidemics. Further simulation-based experiments use a three-category classification of attack and defense strengths, which allows us to consider 27 different possibilities. These experiments further corroborate the utility of the proposed model. The paper concludes with several interesting results.
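
    The paper's 1-n-n-1 differential model is not reproduced here; the sketch below only illustrates the generic epidemic-threshold idea on a standard SIR system, where an outbreak grows when R0 = beta/gamma exceeds 1. Parameter values are chosen purely for illustration.

```python
# Sketch: generic compartmental (SIR) dynamics and the epidemic
# threshold R0 = beta/gamma > 1. This is NOT the paper's 1-n-n-1
# differential model, only an illustration of the threshold concept.
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

t = np.linspace(0, 160, 400)
for beta, gamma in [(0.3, 0.1), (0.05, 0.1)]:   # R0 = 3.0 and 0.5
    s, i, r = odeint(sir, [0.999, 0.001, 0.0], t, args=(beta, gamma)).T
    print("R0=%.1f -> peak infected fraction %.3f" % (beta / gamma, i.max()))
```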

  6. Rapid honey characterization and botanical classification by an electronic tongue.

    PubMed

    Major, Nikola; Marković, Ksenija; Krpan, Marina; Sarić, Goran; Hruškar, Mirjana; Vahčić, Nada

    2011-07-15

    In this paper a commercial electronic tongue (αAstree, Alpha M.O.S.) was applied for botanical classification and physicochemical characterization of honey samples. The electronic tongue was comprised of seven potentiometric sensors coupled with an Ag/AgCl reference electrode. Botanical classification was performed by PCA, CCA and ANN modeling on 12 samples of acacia, chestnut and honeydew honey. The physicochemical characterization of honey was obtained by ANN modeling and the parameters included were electrical conductivity, acidity, water content, invert sugar and total sugar. The initial reference values for the physicochemical parameters observed were determined by traditional methods. Botanical classification of honey samples obtained by ANN was 100% accurate while the highest correlation between observed and predicted values was obtained for electrical conductivity (0.999), followed by acidity (0.997), water content (0.994), invert sugar content (0.988) and total sugar content (0.979). All developed ANN models for rapid honey characterization and botanical classification performed excellently showing the potential of the electronic tongue as a tool in rapid honey analysis and characterization. The advantage of using such a technique is a simple sample preparation procedure, there are no chemicals involved and there are no additional costs except the initial measurements required for ANN model development.
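
    The ANN classification step can be sketched with a small feed-forward network on synthetic sensor responses. The seven features below stand in for the potentiometric sensor readings; the data, network size, and class profiles are invented for illustration.

```python
# Sketch: ANN-based botanical classification from electronic-tongue
# sensor responses. Seven synthetic "potentiometric sensor" features
# stand in for the instrument's readings; all data are invented.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(6)
n_per, classes = 12, ("acacia", "chestnut", "honeydew")
X, y = [], []
for ci, c in enumerate(classes):
    base = rng.normal(ci, 0.1, size=7)   # class-typical sensor profile
    X.append(base + rng.normal(0, 0.05, size=(n_per, 7)))
    y += [c] * n_per
X = np.vstack(X)

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                    random_state=0).fit(X, y)
print("training accuracy:", (clf.predict(X) == np.array(y)).mean())
```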

  7. Registration and classification of adolescent and young adult cancer cases.

    PubMed

    Pollock, Brad H; Birch, Jillian M

    2008-05-01

    Cancer registries are an important research resource that facilitate the study of etiology, tumor biology, patterns of delayed diagnosis and health planning needs. When outcome data are included, registries can track secular changes in survival related to improvements in early detection or treatment. The surveillance, epidemiology, and end results (SEER) registry has been used to identify major gaps in survival for older adolescent and young adult (AYA) patients compared with younger children and older adults. In order to determine the reasons for this gap, the complete registration and accurate classification of AYA malignancies is necessary. There are inconsistencies in defining the age limits for AYAs although the Adolescent and Young Adult Oncology Progress Review Group proposed a definition of ages 15 through 39 years. The central registration and classification issues for AYAs are case-finding, defining common data elements (CDE) collected across different registries and the diagnostic classification of these malignancies. Goals to achieve by 2010 include extending and validating current diagnostic classification schemes and expanding the CDE to support AYA oncology research, including the collection of tracking information to assess long-term outcomes. These efforts will advance preventive, etiologic, therapeutic, and health services-related research for this understudied age group.

  8. Impact of Information based Classification on Network Epidemics

    PubMed Central

    Mishra, Bimal Kumar; Haldar, Kaushik; Sinha, Durgesh Nandini

    2016-01-01

    Formulating mathematical models for accurate approximation of malicious propagation in a network is a difficult process because of our inherent lack of understanding of several underlying physical processes that intrinsically characterize the broader picture. The aim of this paper is to understand the impact of available information in the control of malicious network epidemics. A 1-n-n-1 type differential epidemic model is proposed, where the differentiality allows a symptom based classification. This is the first such attempt to add such a classification into the existing epidemic framework. The model is incorporated into a five class system called the DifEpGoss architecture. Analysis reveals an epidemic threshold, based on which the long-term behavior of the system is analyzed. In this work three real network datasets with 22002, 22469 and 22607 undirected edges respectively, are used. The datasets show that classification based prevention given in the model can have a good role in containing network epidemics. Further simulation based experiments are used with a three category classification of attack and defense strengths, which allows us to consider 27 different possibilities. These experiments further corroborate the utility of the proposed model. The paper concludes with several interesting results. PMID:27329348

  9. Advances in Spectral-Spatial Classification of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Fauvel, Mathieu; Tarabalka, Yuliya; Benediktsson, Jon Atli; Chanussot, Jocelyn; Tilton, James C.

    2012-01-01

    Recent advances in spectral-spatial classification of hyperspectral images are presented in this paper. Several techniques are investigated for combining both spatial and spectral information. Spatial information is extracted at the object (set of pixels) level rather than at the conventional pixel level. Mathematical morphology is first used to derive the morphological profile of the image, which includes characteristics about the size, orientation and contrast of the spatial structures present in the image. Then the morphological neighborhood is defined and used to derive additional features for classification. Classification is performed with support vector machines using the available spectral information and the extracted spatial information. Spatial post-processing is next investigated to build more homogeneous and spatially consistent thematic maps. To that end, three presegmentation techniques are applied to define regions that are used to regularize the preliminary pixel-wise thematic map. Finally, a multiple classifier system is defined to produce relevant markers that are exploited to segment the hyperspectral image with the minimum spanning forest algorithm. Experimental results conducted on three real hyperspectral images with different spatial and spectral resolutions and corresponding to various contexts are presented. They highlight the importance of spectral-spatial strategies for the accurate classification of hyperspectral images and validate the proposed methods.
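
    One ingredient of such spectral-spatial pipelines, a per-pixel morphological profile fed to an SVM, can be sketched as follows. A single synthetic band stands in for a hyperspectral cube, and the structuring-element sizes are arbitrary choices.

```python
# Sketch: per-pixel morphological profile (openings/closings of
# increasing size) stacked with the raw band and classified by an SVM.
# One synthetic band for brevity; real inputs are hyperspectral cubes.
import numpy as np
from scipy.ndimage import grey_closing, grey_opening
from sklearn.svm import SVC

rng = np.random.default_rng(3)
img = rng.normal(0, 0.3, (64, 64))
img[16:48, 16:48] += 1.5                       # one bright "object"
labels = np.zeros((64, 64), dtype=int)
labels[16:48, 16:48] = 1

profile = [img] + [f(img, size=s) for s in (3, 7, 11)
                   for f in (grey_opening, grey_closing)]
X = np.stack(profile, axis=-1).reshape(-1, len(profile))
y = labels.ravel()

idx = rng.choice(X.shape[0], 500, replace=False)   # sparse training pixels
clf = SVC(kernel="rbf").fit(X[idx], y[idx])
print("overall accuracy: %.3f" % (clf.predict(X) == y).mean())
```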

  10. Classification of salivary gland tumours--a brief histopathological review.

    PubMed

    Simpson, R H

    1995-07-01

    Tumours of the salivary glands display a wide variety of histological appearances, and vary in behaviour from totally benign to high grade and usually fatal malignancies. Over the past 40 years several classification schemes have been proposed, of which the most comprehensive and accurate are those of the Armed Forces Institute of Pathology (AFIP) and the World Health Organization (WHO) which were both revised in 1991. They are readily applicable by practising surgical pathologists, and encompass most of the range of tumours likely to be encountered. If I have a slight preference, it is for the WHO classification which is more concise. This paper briefly discusses each tumour, and highlights the changes from previous classifications, including the proper recognition of several newly described tumours which are distinct clinico-pathological entities. Neither of the new schemes solves every problem, and brief attention is drawn to defects. These are minor, and do not significantly detract from the advantages of both new classifications, which represent a major advance in our ability to understand these often perplexing tumours.

  11. A meta-analytic approach for characterizing the within-worker and between-worker sources of variation in occupational exposure.

    PubMed

    Symanski, Elaine; Maberti, Silvia; Chan, Wenyaw

    2006-06-01

    While many studies have quantified the sources of variation in exposure to workplace contaminants for individual groups of workers, patterns of exposure variability have not been investigated since a comprehensive evaluation was carried out over 10 years ago. Therefore, a systematic review of the literature was conducted to identify studies that applied the one-way random-effects model to describe exposure profiles of groups of workers classified on the basis of the kind of work performed and where it was performed. Quantitative estimates of the sources of variation in exposure along with information related to the workplace, contaminant and sampling strategy were compiled. For subsets of the data, based upon the classification scheme used to group workers, weighted empirical cumulative distribution functions were constructed and compared using the non-parametric Kolmogorov-Smirnov two-sample test. Further stratifications evaluated differences by industry, agent and characteristics of the sampling strategy. The review identified nearly 60 studies that examined the within-worker and between-worker sources of variation in exposure to workplace contaminants. In pooling results across studies, the between-worker variability increased as workers were aggregated across jobs and locations. The within-worker variability for an occupational group of workers was generally larger than the between-worker variability, although the differences in the variation in exposures across work shifts relative to the variation among workers' mean exposure levels diminished as groups were combined across jobs and locations. On average, gaseous exposures were more homogeneous than exposures to aerosols or dermal agents, as were exposures in the chemical industry compared with the non-chemical industry. The design of sampling strategies also plays an important role, with greater variability among groups of workers who were sampled randomly rather than systematically; in addition, differences
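
    Under the balanced one-way random-effects model x_ij = mu + b_i + e_ij, the within- and between-worker variance components can be recovered with textbook ANOVA (method-of-moments) estimators. A minimal sketch on synthetic log-transformed exposures follows; the worker count, shift count, and variances are invented.

```python
# Sketch: ANOVA (method-of-moments) estimates of between-worker and
# within-worker variance components under the balanced one-way
# random-effects model x_ij = mu + b_i + e_ij. Synthetic log-exposures;
# k workers with n shifts each.
import numpy as np

rng = np.random.default_rng(4)
k, n = 15, 8
true_between, true_within = 0.4, 1.0
b = rng.normal(0, np.sqrt(true_between), size=(k, 1))
x = 1.0 + b + rng.normal(0, np.sqrt(true_within), size=(k, n))

worker_means = x.mean(axis=1)
msw = ((x - worker_means[:, None]) ** 2).sum() / (k * (n - 1))  # within MS
msb = n * ((worker_means - x.mean()) ** 2).sum() / (k - 1)      # between MS
print("within-worker variance:  %.2f" % msw)
print("between-worker variance: %.2f" % max((msb - msw) / n, 0.0))
```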

  12. 77 FR 3800 - Accurate NDE & Inspection, LLC; Confirmatory Order

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-25

    ... COMMISSION Accurate NDE & Inspection, LLC; Confirmatory Order In the Matter of Accurate NDE & Docket: 150... request ADR with the NRC in an attempt to resolve issues associated with this matter. In response, on August 9, 2011, Accurate NDE requested ADR to resolve this matter with the NRC. On September 28,...

  13. ADP computer security classification program

    SciTech Connect

    Augustson, S.J.

    1984-01-01

    CG-ADP-1, the Automatic Data Processing Security Classification Guide, provides for classification guidance (for security information) concerning the protection of Department of Energy (DOE) and DOE contractor Automatic Data Processing (ADP) systems which handle classified information. Within the DOE, ADP facilities that process classified information provide potentially lucrative targets for compromise. In conjunction with the security measures required by DOE regulations, necessary precautions must be taken to protect details of those ADP security measures which could aid in their own subversion. Accordingly, the basic principle underlying ADP security classification policy is to protect information which could be of significant assistance in gaining unauthorized access to classified information being processed at an ADP facility. Given this policy, classification topics and guidelines are approved for implementation. The basic program guide, CG-ADP-1 is broad in scope and based upon it, more detailed local guides are sometimes developed and approved for specific sites. Classification topics are provided for system features, system and security management, and passwords. Site-specific topics can be addressed in local guides if needed.

  14. On the accurate estimation of gap fraction during daytime with digital cover photography

    NASA Astrophysics Data System (ADS)

    Hwang, Y. R.; Ryu, Y.; Kimm, H.; Macfarlane, C.; Lang, M.; Sonnentag, O.

    2015-12-01

    Digital cover photography (DCP) has emerged as an indirect method to obtain gap fraction accurately. Thus far, however, the intervention of subjectivity, such as determining the camera relative exposure value (REV) and the threshold in the histogram, has hindered accurate computation of gap fraction. Here we propose a novel method that enables us to measure gap fraction accurately during daytime under various sky conditions by DCP. The method computes gap fraction using a single unsaturated raw DCP image, which is corrected for scattering effects by canopies, and a sky image reconstructed from the raw-format image. To test the sensitivity of the derived gap fraction to diverse REVs, solar zenith angles and canopy structures, we took photos at one-hour intervals between sunrise and midday under dense and sparse canopies with REV 0 to -5. The method showed little variation in gap fraction across different REVs in both dense and sparse canopies over a diverse range of solar zenith angles. A perforated-panel experiment, used to test the accuracy of the estimated gap fraction, confirmed that the method produced accurate and consistent gap fractions across different hole sizes, gap fractions and solar zenith angles. These findings highlight that the novel method opens new opportunities to estimate gap fraction accurately during daytime from sparse to dense canopies, which will be useful for monitoring LAI precisely and validating satellite remote sensing LAI products efficiently.
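
    At its core, DCP reduces gap fraction to the proportion of sky pixels in an upward-looking image. The sketch below shows only that final thresholding step on synthetic data, leaving out the paper's scattering correction and sky-image reconstruction; the threshold and pixel values are invented.

```python
# Sketch: gap fraction as the proportion of "sky" pixels above a
# threshold in an upward-looking canopy image. Synthetic data; the
# paper's scattering correction and sky reconstruction are omitted.
import numpy as np

rng = np.random.default_rng(5)
canopy = rng.uniform(0.0, 0.3, (256, 256))     # dark foliage pixels
gaps = rng.random((256, 256)) < 0.15           # ~15% true gap area
img = np.where(gaps, rng.uniform(0.8, 1.0, canopy.shape), canopy)

gap_fraction = (img > 0.5).mean()
print("estimated gap fraction: %.3f" % gap_fraction)
```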

  15. Similarity-Based Classification in Partially Labeled Networks

    NASA Astrophysics Data System (ADS)

    Zhang, Qian-Ming; Shang, Ming-Sheng; Lü, Linyuan

    Two main difficulties in the problem of classification in partially labeled networks are the sparsity of the known labeled nodes and the inconsistency of label information. To address these two difficulties, we propose a similarity-based method, where the basic assumption is that two nodes are more likely to be categorized into the same class if they are more similar. In this paper, we introduce ten similarity indices defined based on the network structure. Empirical results on the co-purchase network of political books show that the similarity-based method can, to some extent, overcome these two difficulties and give more accurate classification than the relational-neighbors method, especially when the labeled nodes are sparse. Furthermore, we find that when the information on known labeled nodes is sufficient, the indices considering only local information can perform as well as the global indices while having much lower computational complexity.
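
    The basic assumption, that an unlabeled node takes the class of the labeled nodes it is most similar to, can be sketched with a common-neighbors index on a toy graph. The paper itself evaluates ten structural indices, of which this is only one; the graph and labels below are invented.

```python
# Sketch: classify unlabeled nodes by similarity-weighted votes from
# labeled nodes, using the common-neighbors index. Toy graph; the paper
# compares ten structural similarity indices.
from collections import Counter

adj = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 5}, 4: {2, 5}, 5: {3, 4}}
labels = {1: "A", 4: "B", 5: "B"}              # sparse known labels

def common_neighbors(u, v):
    return len(adj[u] & adj[v])

for u in adj:
    if u in labels:
        continue
    votes = Counter()
    for v, lab in labels.items():
        votes[lab] += common_neighbors(u, v)   # similarity-weighted vote
    print(u, "->", votes.most_common(1)[0][0], dict(votes))
```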

  16. Learning to Shift the Polarity of Words for Sentiment Classification

    NASA Astrophysics Data System (ADS)

    Ikeda, Daisuke; Takamura, Hiroya; Okumura, Manabu

    We propose a machine learning based method for sentiment classification of sentences using word-level polarity. The polarities of words in a sentence are not always the same as that of the sentence, because there can be polarity-shifters such as negation expressions. The proposed method models these polarity-shifters. Our model can be trained in two different ways: word-wise and sentence-wise learning. In sentence-wise learning, the model can be trained so that the prediction of sentence polarities is accurate. The model can also be combined with features used in previous work, such as bag-of-words and n-grams. We empirically show that our method improves the performance of sentiment classification of sentences, especially when we have only a small amount of training data.
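
    A hard-coded caricature of polarity shifting is sketched below: word polarities are summed, with a negation word flipping the sign of the next polar word. The lexicon and shifter list are invented, and the paper learns shifter behavior rather than fixing it by rule.

```python
# Sketch: sentence polarity from word polarities with a hard-coded
# negation shifter. The paper *learns* shifting behavior; this toy
# lexicon and rule are invented for illustration only.
POLARITY = {"good": 1, "great": 1, "bad": -1, "awful": -1}
SHIFTERS = {"not", "never", "no"}

def sentence_polarity(sentence):
    score, flip = 0, 1
    for w in sentence.lower().split():
        if w in SHIFTERS:
            flip = -1                  # flip the next polar word
        elif w in POLARITY:
            score += flip * POLARITY[w]
            flip = 1
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentence_polarity("this movie is not good"))   # -> negative
print(sentence_polarity("a great and moving film"))  # -> positive
```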

  17. Imaging lexicon for acute pancreatitis: 2012 Atlanta Classification revisited.

    PubMed

    Sureka, Binit; Bansal, Kalpana; Patidar, Yashwant; Arora, Ankur

    2016-02-01

    The original 1992 Atlanta Classification System for acute pancreatitis was revised in 2012 by the Atlanta Working Group, assisted by various national and international societies, through web-based consensus. This revised classification identifies two phases of acute pancreatitis: early and late. Acute pancreatitis can be either oedematous interstitial pancreatitis or necrotizing pancreatitis. Severity of the disease is categorized into three levels: mild, moderately severe and severe, depending upon organ failure and local/systemic complications. According to the type of pancreatitis, collections are further divided into acute peripancreatic fluid collection, pseudocyst, acute necrotic collection, and walled-off necrosis. Insight into the revised terminology is essential for accurate communication of imaging findings. In this review article, we will summarize the updated nomenclature and illustrate corresponding imaging findings using examples.

  18. Non-Destructive Classification Approaches for Equilbrated Ordinary Chondrites

    NASA Technical Reports Server (NTRS)

    Righter, K.; Harrington, R.; Schroeder, C.; Morris, R. V.

    2013-01-01

    Classification of meteorites is most effectively carried out by petrographic and mineralogic studies of thin sections, but a rapid and accurate classification technique for the many samples collected in dense collection areas (hot and cold deserts) is of great interest. Oil immersion techniques have been used to classify a large proportion of the US Antarctic meteorite collections since the mid-1980s [1]. This approach has allowed rapid characterization of thousands of samples over time, but nonetheless utilizes a piece of the sample that has been ground to grains or a powder. In order to compare a few non-destructive techniques with the standard approaches, we have characterized a group of chondrites from the Larkman Nunatak region using magnetic susceptibility and Moessbauer spectroscopy.

  19. Wait, are you sad or angry? Large exposure time differences required for the categorization of facial expressions of emotion.

    PubMed

    Du, Shichuan; Martinez, Aleix M

    2013-03-18

    Facial expressions of emotion are essential components of human behavior, yet little is known about the hierarchical organization of their cognitive analysis. We study the minimum exposure time needed to successfully classify the six classical facial expressions of emotion (joy, surprise, sadness, anger, disgust, fear) plus neutral as seen at different image resolutions (240 × 160 to 15 × 10 pixels). Our results suggest a consistent hierarchical analysis of these facial expressions regardless of the resolution of the stimuli. Happiness and surprise can be recognized after very short exposure times (10-20 ms), even at low resolutions. Fear and anger are recognized the slowest (100-250 ms), even in high-resolution images, suggesting a later computation. Sadness and disgust are recognized in between (70-200 ms). The minimum exposure time required for successful classification of each facial expression correlates with the ability of a human subject to identify it correctly at low resolutions. These results suggest a fast, early computation of expressions represented mostly by low spatial frequencies or global configural cues and a later, slower process for those categories requiring a more fine-grained analysis of the image. We also demonstrate that those expressions that are mostly visible in higher-resolution images are not recognized as accurately. We summarize implications for current computational models.

  20. Wait, are you sad or angry? Large exposure time differences required for the categorization of facial expressions of emotion

    PubMed Central

    Du, Shichuan; Martinez, Aleix M.

    2013-01-01

    Abstract Facial expressions of emotion are essential components of human behavior, yet little is known about the hierarchical organization of their cognitive analysis. We study the minimum exposure time needed to successfully classify the six classical facial expressions of emotion (joy, surprise, sadness, anger, disgust, fear) plus neutral as seen at different image resolutions (240 × 160 to 15 × 10 pixels). Our results suggest a consistent hierarchical analysis of these facial expressions regardless of the resolution of the stimuli. Happiness and surprise can be recognized after very short exposure times (10–20 ms), even at low resolutions. Fear and anger are recognized the slowest (100–250 ms), even in high-resolution images, suggesting a later computation. Sadness and disgust are recognized in between (70–200 ms). The minimum exposure time required for successful classification of each facial expression correlates with the ability of a human subject to identify it correctly at low resolutions. These results suggest a fast, early computation of expressions represented mostly by low spatial frequencies or global configural cues and a later, slower process for those categories requiring a more fine-grained analysis of the image. We also demonstrate that those expressions that are mostly visible in higher-resolution images are not recognized as accurately. We summarize implications for current computational models. PMID:23509409