Sample records for subvolume classification applied

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    You, D; Aryal, M; Samuels, S

    Purpose: A previous study showed that large sub-volumes of tumor with low blood volume (BV) (poorly perfused) in head-and-neck (HN) cancers are significantly associated with local-regional failure (LRF) after chemoradiation therapy, and could be targeted with intensified radiation doses. This study aimed to develop an automated and scalable model to extract voxel-wise contrast-enhanced temporal features of dynamic contrast-enhanced (DCE) MRI in HN cancers for predicting LRF. Methods: Our model development consists of training and testing stages. The training stage includes preprocessing of individual-voxel DCE curves from tumors for intensity normalization and temporal alignment, temporal feature extraction from the curves, feature selection, and training classifiers. For feature extraction, a multiresolution Haar discrete wavelet transformation is applied to each DCE curve to capture temporal contrast-enhanced features, and the wavelet coefficients are selected as feature vectors. Support vector machine classifiers are trained to classify tumor voxels as having either low or high BV, using a previously established BV threshold of 7.6% as ground truth. The model is tested on a new dataset. The voxel-wise DCE curves for training and testing were from 14 and 8 patients, respectively. A posterior probability map of the low-BV class was created to examine the tumor sub-volume classification, and voxel-wise classification accuracy was computed to evaluate performance of the model. Results: Average classification accuracies were 87.2% for training (10-fold cross-validation) and 82.5% for testing. The lowest and highest patient-wise accuracies were 68.7% and 96.4%, respectively. Posterior probability maps of the low-BV class showed that the sub-volumes extracted by our model were similar to those defined by the BV maps, with most misclassifications occurring near the sub-volume boundaries. Conclusion: With further validation, this model could be valuable for supporting adaptive clinical trials. The framework is extendable and scalable to extract temporal contrast-enhanced features of DCE-MRI in other tumors. We would like to acknowledge NIH for funding support: UO1 CA183848.
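
    As a rough illustration of the pipeline this abstract describes (per-voxel Haar wavelet features feeding a binary SVM), a minimal Python sketch follows. The curve length, wavelet level, and random data are invented stand-ins, and PyWavelets/scikit-learn stand in for whatever toolchain the authors actually used.

```python
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def haar_features(curve, level=3):
    """Multiresolution Haar DWT coefficients of one normalized DCE curve."""
    coeffs = pywt.wavedec(curve, "haar", level=level)
    return np.concatenate(coeffs)

# Stand-in data: 500 voxel curves with 64 time points each, and binary
# labels (1 = blood volume < 7.6%, the abstract's ground-truth threshold).
rng = np.random.default_rng(0)
curves = rng.random((500, 64))
labels = rng.integers(0, 2, 500)

X = np.array([haar_features(c) for c in curves])
clf = SVC(kernel="rbf", probability=True)  # probability=True enables posterior maps
print("10-fold CV accuracy:", cross_val_score(clf, X, labels, cv=10).mean())
```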

  2. Sub-Volumetric Classification and Visualization of Emphysema Using a Multi-Threshold Method and Neural Network

    NASA Astrophysics Data System (ADS)

    Tan, Kok Liang; Tanaka, Toshiyuki; Nakamura, Hidetoshi; Shirahata, Toru; Sugiura, Hiroaki

    Chronic Obstructive Pulmonary Disease is a disease in which the airways and tiny air sacs (alveoli) inside the lung are partially obstructed or destroyed. Emphysema occurs as more and more of the walls between air sacs are destroyed. The goal of this paper is to produce a more practical emphysema-quantification algorithm that correlates more highly with the parameters of pulmonary function tests than classical methods do. The use of thresholds ranging from approximately -900 Hounsfield units to -990 Hounsfield units for extracting emphysema from CT has been reported in many papers. From our experiments, we realize that a threshold which is optimal for a particular CT data set might not be optimal for other CT data sets, due to subtle radiographic variations in the CT images. Consequently, we propose a multi-threshold method that utilizes ten thresholds between and including -900 Hounsfield units and -990 Hounsfield units for identifying the different potential emphysematous regions in the lung. Subsequently, we divide the lung into eight sub-volumes. From each sub-volume, we calculate the ratio of voxels with intensity below each threshold. The respective ratios of voxels below the ten thresholds are employed as the features for classifying the sub-volumes into four emphysema severity classes. A neural network is used as the classifier. The neural network is trained using 80 training sub-volumes. The performance of the classifier is assessed by classifying 248 test sub-volumes of the lung obtained from 31 subjects. Actual diagnoses of the sub-volumes are hand-annotated and consensus-classified by radiologists. The four-class classification accuracy of the proposed method is 89.82%. The sub-volumetric classification results produced in this study encompass not only the severity of emphysema but also its distribution from the top to the bottom of the lung. We hypothesize that besides emphysema severity, the distribution of emphysema severity in the lung also plays an important role in the assessment of the overall functionality of the lung. We confirm our hypothesis by showing that the proposed sub-volumetric classification results correlate with the parameters of pulmonary function tests better than classical methods. We also visualize emphysema using a technique called the transparent lung model.
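
    The feature extraction here is simple enough to sketch: per sub-volume, the fraction of voxels below each of ten HU thresholds becomes a ten-element feature vector for a small neural network. The sketch below uses synthetic HU data, and an MLP stands in for the paper's network, whose architecture is not given in the abstract.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Ten thresholds between and including -990 HU and -900 HU, as in the abstract.
THRESHOLDS = np.linspace(-990, -900, 10)

def subvolume_features(hu_values):
    """Fraction of voxels below each threshold in one lung sub-volume."""
    hu = np.asarray(hu_values).ravel()
    return np.array([(hu < t).mean() for t in THRESHOLDS])

# Synthetic stand-in: 80 training sub-volumes, four severity classes (0-3).
rng = np.random.default_rng(1)
train_hu = rng.normal(-850, 80, size=(80, 4096))
train_labels = rng.integers(0, 4, 80)

X = np.array([subvolume_features(sv) for sv in train_hu])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, train_labels)
```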

  3. Physiological Imaging-Defined, Response-Driven Subvolumes of a Tumor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farjam, Reza; Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan; Tsien, Christina I.

    2013-04-01

    Purpose: To develop an image analysis framework to delineate the physiological imaging-defined subvolumes of a tumor in relation to treatment response and outcome. Methods and Materials: Our proposed approach delineates the subvolumes of a tumor based on its heterogeneous distributions of physiological imaging parameters. The method assigns each voxel a probabilistic membership function belonging to the physiological parameter classes defined in a sample of tumors, and then calculates the related subvolumes in each tumor. We applied our approach to regional cerebral blood volume (rCBV) and Gd-DTPA transfer constant (Ktrans) images of patients who had brain metastases and were treated by whole-brain radiation therapy (WBRT). A total of 45 lesions were included in the analysis. Changes in the rCBV (or Ktrans)-defined subvolumes of the tumors from pre-RT to 2 weeks after the start of WBRT (2W) were evaluated for differentiation of responsive, stable, and progressive tumors using the Mann-Whitney U test. Performance of the newly developed metrics for predicting tumor response to WBRT was evaluated by receiver operating characteristic (ROC) curve analysis. Results: The percentage decrease in the high-CBV-defined subvolumes of the tumors from pre-RT to 2W was significantly greater in the group of responsive tumors than in the group of stable and progressive tumors (P<.007). The change in the high-CBV-defined subvolumes of the tumors from pre-RT to 2W was a significantly better predictor of post-RT response than the change in gross tumor volume observed during the same time interval (P=.012), suggesting that the physiological change occurs before the volumetric change. Also, Ktrans did not add significant discriminatory information for assessing response with respect to rCBV. Conclusion: The physiological imaging-defined subvolumes of the tumors delineated by our method could be candidates for a boost target, for which further development and evaluation are warranted.
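
    A minimal sketch of the statistical evaluation described above, assuming a simple perfusion threshold in place of the paper's probabilistic membership functions; the per-lesion numbers are synthetic.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.metrics import roc_auc_score

# Synthetic per-lesion % changes of the high-CBV subvolume (pre-RT -> 2W):
# responsive lesions shrink more, mimicking the reported pattern.
rng = np.random.default_rng(2)
pct_change = np.concatenate([rng.normal(-40, 15, 20),   # responsive
                             rng.normal(-5, 15, 25)])   # stable/progressive
responsive = np.array([1] * 20 + [0] * 25)

_, p = mannwhitneyu(pct_change[responsive == 1], pct_change[responsive == 0])
auc = roc_auc_score(responsive, -pct_change)  # larger decrease -> responsive
print(f"Mann-Whitney p = {p:.4f}, ROC AUC = {auc:.2f}")
```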

  4. Risk-adaptive radiotherapy

    NASA Astrophysics Data System (ADS)

    Kim, Yusung

    Currently, there is great interest in integrating biological information into intensity-modulated radiotherapy (IMRT) treatment planning with the aim of boosting high-risk tumor subvolumes. Selective boosting of tumor subvolumes can be accomplished without violating normal tissue complication constraints using information from functional imaging. In this work we have developed a risk-adaptive optimization framework that utilizes a nonlinear biological objective function. Employing risk-adaptive radiotherapy for prostate cancer, it is possible to increase the equivalent uniform dose (EUD) by up to 35.4 Gy in tumor subvolumes having the highest risk classification without increasing normal tissue complications. Subsequently, we studied the impact of functional imaging accuracy and found that, on the one hand, a loss in sensitivity had a large impact on expected local tumor control, which was maximal when a low-risk classification was chosen for the remaining low-risk PTV, while on the other hand a loss in specificity appeared to have a minimal impact on normal tissue sparing. Therefore, it appears that in order to improve the therapeutic ratio, a functional imaging technique with high sensitivity, rather than high specificity, is needed. Finally, a comparison study between selective-boosting IMRT strategies and uniform-boosting IMRT strategies yielding the same EUD to the overall PTV was carried out, and it found that selective-boosting IMRT considerably improves expected TCP compared to uniform-boosting IMRT, especially when lack of control of the high-risk tumor subvolumes is the cause of expected therapy failure. Furthermore, while selective-boosting IMRT using physical dose-volume objectives yielded rectal and bladder sparing similar to that of its equivalent uniform-boosting IMRT plan, risk-adaptive radiotherapy, utilizing biological objective functions, yielded a 5.3% reduction in NTCP for the rectum. Hence, with risk-adaptive radiotherapy the therapeutic ratio can be increased over what can be achieved with conventional selective-boosting IMRT using physical dose-volume objectives. In conclusion, a novel risk-adaptive radiotherapy strategy is proposed, promising increased expected local control for locoregionally advanced tumors with equivalent or better normal tissue sparing.
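
    The EUD mentioned above is commonly computed as the generalized EUD (gEUD) of Niemierko; the sketch below uses that standard formula to show how a subvolume boost raises tumor EUD. The volume-effect parameter and dose values are illustrative, not taken from this work.

```python
import numpy as np

def gEUD(dose_voxels, a):
    """Generalized EUD (Niemierko): a < 0 for tumors, where cold spots dominate."""
    d = np.asarray(dose_voxels, dtype=float)
    return np.mean(d ** a) ** (1.0 / a)

# Illustrative plan: 70 Gy uniformly vs. +20 Gy to a high-risk subvolume.
uniform = np.full(1000, 70.0)
boosted = uniform.copy()
boosted[:200] += 20.0
print(gEUD(uniform, a=-10), gEUD(boosted, a=-10))
```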

  5. SU-F-J-86: Method to Include Tissue Dose Response Effect in Deformable Image Registration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, J; Liang, J; Chen, S

    Purpose: Organ changes shape and size during radiation treatment due to both mechanical stress and radiation dose response. However, the dose response induced deformation has not been considered in conventional deformable image registration (DIR). A novel DIR approach is proposed to include both tissue elasticity and radiation dose induced organ deformation. Methods: Assuming that organ sub-volume shrinkage was proportional to the radiation dose induced cell killing/absorption, the dose induced organ volume change was simulated applying virtual temperature on each sub-volume. Hence, both stress and heterogeneity temperature induced organ deformation. Thermal stress finite element method with organ surface boundary condition wasmore » used to solve deformation. Initial boundary correspondence on organ surface was created from conventional DIR. Boundary condition was updated by an iterative optimization scheme to minimize elastic deformation energy. The registration was validated on a numerical phantom. Treatment dose was constructed applying both the conventional DIR and the proposed method using daily CBCT image obtained from HN treatment. Results: Phantom study showed 2.7% maximal discrepancy with respect to the actual displacement. Compared with conventional DIR, subvolume displacement difference in a right parotid had the mean±SD (Min, Max) to be 1.1±0.9(−0.4∼4.8), −0.1±0.9(−2.9∼2.4) and −0.1±0.9(−3.4∼1.9)mm in RL/PA/SI directions respectively. Mean parotid dose and V30 constructed including the dose response induced shrinkage were 6.3% and 12.0% higher than those from the conventional DIR. Conclusion: Heterogeneous dose distribution in normal organ causes non-uniform sub-volume shrinkage. Sub-volume in high dose region has a larger shrinkage than the one in low dose region, therefore causing more sub-volumes to move into the high dose area during the treatment course. This leads to an unfavorable dose-volume relationship for the normal organ. Without including this effect in DIR, treatment dose in normal organ could be underestimated affecting treatment evaluation and planning modification. Acknowledgement: Partially Supported by Elekta Research Grant.« less

  6. Kirkwood-Buff integrals of finite systems: shape effects

    NASA Astrophysics Data System (ADS)

    Dawass, Noura; Krüger, Peter; Simon, Jean-Marc; Vlugt, Thijs J. H.

    2018-06-01

    The Kirkwood-Buff (KB) theory provides an important connection between microscopic density fluctuations in liquids and macroscopic properties. Recently, Krüger et al. derived equations for KB integrals for finite subvolumes embedded in a reservoir. Using molecular simulation of finite systems, KB integrals can be computed either from density fluctuations inside such subvolumes, or from integrals of radial distribution functions (RDFs). Here, based on the second approach, we establish a framework to compute KB integrals for subvolumes with arbitrary convex shapes. This requires a geometric function w(x) which depends on the shape of the subvolume and the relative position inside the subvolume. We present a numerical method to compute w(x) based on Umbrella Sampling Monte Carlo (MC). We compute KB integrals of a liquid with a model RDF for subvolumes with different shapes. The KB integrals for all shapes approach the thermodynamic limit in the same way: for sufficiently large volumes, KB integrals are a linear function of area over volume, which is independent of the shape of the subvolume.
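
    For a spherical subvolume the geometric function is known in closed form, w(x) = 1 - 3x/2 + x^3/2 with x = r/L and L the sphere diameter (Krüger et al.); the paper's contribution is generalizing w(x) to arbitrary convex shapes via Monte Carlo. A sketch of the finite-volume KB integral for spheres, with an invented model RDF:

```python
import numpy as np

def kb_integral_sphere(r, g, L):
    """Finite-volume KB integral for a spherical subvolume of diameter L.

    w(x) = 1 - 3x/2 + x**3/2 with x = r/L is the known geometric weight
    for spheres in 3D; the paper computes w(x) for other convex shapes
    with Umbrella Sampling Monte Carlo.
    """
    x = r / L
    w = np.where(x <= 1.0, 1.0 - 1.5 * x + 0.5 * x**3, 0.0)
    return np.trapz(4.0 * np.pi * r**2 * (g - 1.0) * w, r)

# Invented model RDF: damped oscillation around 1.
r = np.linspace(1e-3, 10.0, 2000)
g = 1.0 + np.exp(-r) * np.cos(2.5 * np.pi * r)
for L in (2.0, 4.0, 8.0):
    print(L, kb_integral_sphere(r, g, L))
```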

  7. Four-Dimensional Positron Emission Tomography: Implications for Dose Painting of High-Uptake Regions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aristophanous, Michalis, E-mail: maristophanous@lroc.harvard.edu; Yap, Jeffrey T.; Killoran, Joseph H.

    Purpose: To investigate the behavior of tumor subvolumes of high [18F]-fluorodeoxyglucose (FDG) uptake as seen on clinical four-dimensional (4D) FDG-positron emission tomography (PET) scans. Methods and Materials: Four-dimensional FDG-PET/computed tomography scans from 13 patients taken before radiotherapy were available. The analysis was focused on regions of high uptake that are potential dose-painting targets. A total of 17 lesions (primary tumors and lymph nodes) were analyzed. On each of the five phases of the 4D scan, a classification algorithm was applied to obtain the region of highest uptake and segment the tumor volume. We looked at the behavior of both the high-uptake subvolume, called 'Boost,' and the segmented tumor volume, called 'Target.' We measured several quantities that characterize the Target and Boost volumes and quantified correlations between them. Results: The behavior of the Target could not always predict the behavior of the Boost. The shape deformation of the Boost regions was on average 133% higher than that of the Target. The gross to internal target volume expansion was on average 27.4% for the Target and 64% for the Boost, a statistically significant difference (p < 0.05). Finally, the inhale-to-exhale phase (20%) had the highest shape deformation for the Boost regions. Conclusions: A complex relationship between the measured quantities for the Boost and Target volumes is revealed. The results suggest that in cases in which advanced therapy techniques such as dose painting are being used, a close examination of the 4D PET scan should be performed.

  8. Thermoviscoplastic analysis of fibrous periodic composites using triangular subvolumes

    NASA Technical Reports Server (NTRS)

    Walker, Kevin P.; Freed, Alan D.; Jordan, Eric H.

    1993-01-01

    The nonlinear viscoplastic behavior of fibrous periodic composites is analyzed by discretizing the unit cell into triangular subvolumes. A set of these subvolumes can be configured by the analyst to construct a representation for the unit cell of a periodic composite. In each step of the loading history, the total strain increment at any point is governed by an integral equation which applies to the entire composite. A Fourier series approximation allows the incremental stresses and strains to be determined within a unit cell of the periodic lattice. The nonlinearity arising from the viscoplastic behavior of the constituent materials comprising the composite is treated as a fictitious body force in the governing integral equation. Specific numerical examples showing the stress distributions in the unit cell of a fibrous tungsten/copper metal matrix composite under viscoplastic loading conditions are given. The stress distribution resulting in the unit cell when the composite material is subjected to an overall transverse stress loading history perpendicular to the fibers is found to be highly heterogeneous, and typical homogenization techniques based on treating the stress and strain distributions within the constituent phases as homogeneous result in large errors under inelastic loading conditions.

  9. Thermoviscoplastic analysis of fibrous periodic composites by the use of triangular subvolumes

    NASA Technical Reports Server (NTRS)

    Walker, Kevin P.; Freed, Alan D.; Jordan, Eric H.

    1994-01-01

    The non-linear viscoplastic behavior of fibrous periodic composites is analyzed by discretizing the unit cell into triangular subvolumes. A set of these subvolumes can be configured by the analyst to construct a representation for the unit cell of a periodic composite. In each step of the loading history the total strain increment at any point is governed by an integral equation which applies to the entire composite. A Fourier series approximation allows the incremental stresses and strains to be determined within a unit cell of the periodic lattice. The non-linearity arising from the viscoplastic behavior of the constituent materials comprising the composite is treated as a fictitious body force in the governing integral equation. Specific numerical examples showing the stress distributions in the unit cell of a fibrous tungsten/copper metal-matrix composite under viscoplastic loading conditions are given. The stress distribution resulting in the unit cell when the composite material is subjected to an overall transverse stress loading history perpendicular to the fibers is found to be highly heterogeneous, and typical homogenization techniques based on treating the stress and strain distributions within the constituent phases as homogeneous result in large errors under inelastic loading conditions.

  10. Simulation of stochastic diffusion via first exit times

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lötstedt, Per, E-mail: perl@it.uu.se; Meinecke, Lina, E-mail: lina.meinecke@it.uu.se

    2015-11-01

    In molecular biology it is of interest to simulate diffusion stochastically. In the mesoscopic model we partition a biological cell into unstructured subvolumes. In each subvolume the number of molecules is recorded at each time step and molecules can jump between neighboring subvolumes to model diffusion. The jump rates can be computed by discretizing the diffusion equation on that unstructured mesh. If the mesh is of poor quality, due to a complicated cell geometry, standard discretization methods can generate negative jump coefficients, which no longer allows the interpretation as the probability to jump between the subvolumes. We propose a method based on the mean first exit time of a molecule from a subvolume, which guarantees positive jump coefficients. Two approaches to exit times, a global and a local one, are presented and tested in simulations on meshes of different quality in two and three dimensions.
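
    A simplified sketch of the idea, assuming each subvolume can be approximated by an equal-volume sphere whose mean center-to-boundary exit time is the classical R^2/(6D); the paper instead computes exit times on the actual unstructured mesh. Either way, rates stay positive by construction.

```python
import numpy as np

def jump_rates(volumes, face_areas, D):
    """Positive jump rates from approximate mean first exit times.

    volumes[i]    : volume of subvolume i
    face_areas[i] : dict {j: area of the face shared with neighbor j}
    Each subvolume is replaced by an equal-volume sphere, whose mean exit
    time from the center is the classical R**2 / (6 D); the paper solves
    for exit times on the actual unstructured mesh instead.
    """
    rates = {}
    for i, vol in enumerate(volumes):
        R = (3.0 * vol / (4.0 * np.pi)) ** (1.0 / 3.0)
        tau = R**2 / (6.0 * D)              # mean first exit time
        total_rate = 1.0 / tau              # positive by construction
        total_area = sum(face_areas[i].values())
        # Split the total rate among neighbors by shared face area.
        rates[i] = {j: total_rate * a / total_area
                    for j, a in face_areas[i].items()}
    return rates

print(jump_rates([1.0, 2.0], [{1: 0.5}, {0: 0.5}], D=0.1))
```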

  11. Simulation of stochastic diffusion via first exit times

    PubMed Central

    Lötstedt, Per; Meinecke, Lina

    2015-01-01

    In molecular biology it is of interest to simulate diffusion stochastically. In the mesoscopic model we partition a biological cell into unstructured subvolumes. In each subvolume the number of molecules is recorded at each time step and molecules can jump between neighboring subvolumes to model diffusion. The jump rates can be computed by discretizing the diffusion equation on that unstructured mesh. If the mesh is of poor quality, due to a complicated cell geometry, standard discretization methods can generate negative jump coefficients, which no longer allows the interpretation as the probability to jump between the subvolumes. We propose a method based on the mean first exit time of a molecule from a subvolume, which guarantees positive jump coefficients. Two approaches to exit times, a global and a local one, are presented and tested in simulations on meshes of different quality in two and three dimensions. PMID:26600600

  12. Instrumentation for Applied Physics and Industrial Applications

    NASA Astrophysics Data System (ADS)

    Hillemanns, H.; Le Goff, J.-M.

    This document is part of Part 2 'Principles and Methods' of Subvolume B 'Detectors for Particles and Radiation' of Volume 21 'Elementary Particles' of Landolt-Börnstein - Group I 'Elementary Particles, Nuclei and Atoms'. It contains Section '7.3 Instrumentation for Applied Physics and Industrial Applications' of Chapter '7 Applications of Detectors in Technology, Medicine and Other Fields' with the content:

  13. Sensitivity study of voxel-based PET image comparison to image registration algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yip, Stephen, E-mail: syip@lroc.harvard.edu; Chen, Aileen B.; Berbeco, Ross

    2014-11-01

    Purpose: Accurate deformable registration is essential for voxel-based comparison of sequential positron emission tomography (PET) images for proper adaptation of treatment plans and treatment response assessment. The comparison may be sensitive to the method of deformable registration, as the optimal algorithm is unknown. This study investigated the impact of registration algorithm choice on therapy response evaluation. Methods: Sixteen patients with 20 lung tumors underwent pre- and post-treatment computed tomography (CT) and 4D FDG-PET scans before and after chemoradiotherapy. All CT images were coregistered using a rigid and ten deformable registration algorithms. The resulting transformations were then applied to the respective PET images. The tumor region defined by a physician on the registered PET images was classified into progressor, stable-disease, and responder subvolumes. Specifically, voxels with standardized uptake value (SUV) decreases >30% were classified as responder, while voxels with SUV increases >30% were classified as progressor; all other voxels were considered stable-disease. The agreement of the subvolumes resulting from different registration algorithms was assessed by the Dice similarity index (DSI). The coefficient of variation (CV) was computed to assess variability of DSI between individual tumors. The root mean square difference (RMS_rigid) of the rigidly registered CT images was used to measure the degree of tumor deformation. RMS_rigid and DSI were correlated by the Spearman correlation coefficient (R) to investigate the effect of tumor deformation on DSI. Results: Median DSI_rigid was found to be 72%, 66%, and 80% for progressor, stable-disease, and responder subvolumes, respectively. Median DSI_deformable was 63%–84%, 65%–81%, and 82%–89%. Variability of DSI was substantial and similar for both rigid and deformable algorithms, with CV > 10% for all subvolumes. Tumor deformation had a moderate to significant impact on DSI for the progressor subvolume, with R_rigid = −0.60 (p = 0.01) and R_deformable = −0.46 (p = 0.01–0.20) averaging over all deformable algorithms. For stable-disease subvolumes, the correlations were significant (p < 0.001) for all registration algorithms, with R_rigid = −0.71 and R_deformable = −0.72. Progressor and stable-disease subvolumes resulting from rigid registration were in excellent agreement (DSI > 70%) for RMS_rigid < 150 HU. However, tumor deformation was observed to have a negligible effect on DSI for responder subvolumes, with insignificant |R| < 0.26, p > 0.27. Conclusions: This study demonstrated that deformable algorithms cannot be arbitrarily chosen; different deformable algorithms can result in large differences in voxel-based PET image comparison. For low tumor deformation (RMS_rigid < 150 HU), rigid and deformable algorithms yield similar results, suggesting deformable registration is not required for these cases.
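
    The voxel classification and agreement measure used above are straightforward to sketch; the ±30% SUV-change rule and the Dice index follow the abstract, while the image data and the stand-in for a second registration result are synthetic.

```python
import numpy as np

def classify_response(suv_pre, suv_post):
    """Voxel labels: 0 = responder (<-30%), 1 = stable, 2 = progressor (>+30%)."""
    change = (suv_post - suv_pre) / np.maximum(suv_pre, 1e-6)
    labels = np.ones(suv_pre.shape, dtype=int)
    labels[change < -0.30] = 0
    labels[change > 0.30] = 2
    return labels

def dice(a, b):
    """Dice similarity index between two binary subvolumes."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Synthetic data; the second "registration" is a perturbed copy of the first.
rng = np.random.default_rng(3)
pre, post = rng.random((2, 32, 32, 32)) + 0.5
resp_algo1 = classify_response(pre, post) == 0
resp_algo2 = classify_response(pre, post * 1.02) == 0
print(f"DSI = {dice(resp_algo1, resp_algo2):.2f}")
```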

  14. METHOD OF PRODUCING ENERGETIC PLASMA FOR NEUTRON PRODUCTION

    DOEpatents

    Bell, P.R.; Simon, A.; Mackin, R.J. Jr.

    1961-01-24

    A method is given for producing an energetic plasma for neutron production. An energetic plasma is produced in a small magnetically confined subvolume of the device by providing a selected current of energetic molecular ions at least greater than that required for producing a current of atomic ions sufficient to achieve "burnout" of neutral particles in the subvolume. The atomic ions are provided by dissociation of the molecular ions by an energetic arc discharge within the subvolume. After burnout, the arc discharge is terminated, the magnetic fields increased, and cold fuel feed is substituted for the molecular ions. After the subvolume is filled with an energetic plasma, the size of the magnetically confined subvolume is gradually increased until the entire device is filled with an energetic neutron producing plasma. The reactions which take place in the device to produce neutrons will generate a certain amount of heat energy which may be converted by the use of a conventional heat cycle to produce electrical energy.

  15. Arterial Perfusion Imaging–Defined Subvolume of Intrahepatic Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hesheng, E-mail: hesheng@umich.edu; Farjam, Reza; Feng, Mary

    2014-05-01

    Purpose: To assess whether an increase in a subvolume of intrahepatic tumor with elevated arterial perfusion during radiation therapy (RT) predicts tumor progression after RT. Methods and Materials: Twenty patients with unresectable intrahepatic cancers undergoing RT were enrolled in a prospective, institutional review board-approved study. Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) was performed before RT (pre-RT), after delivering ∼60% of the planned dose (mid-RT) and 1 month after completion of RT to quantify hepatic arterial perfusion. The arterial perfusions of the tumors at pre-RT were clustered into low-normal and elevated perfusion by a fuzzy clustering-based method, and the tumor subvolumes with elevated arterial perfusion were extracted from the hepatic arterial perfusion images. The percentage changes in the tumor subvolumes and means of arterial perfusion over the tumors from pre-RT to mid-RT were evaluated for predicting tumor progression post-RT. Results: Of the 24 tumors, 6 tumors in 5 patients progressed 5 to 21 months after RT completion. Neither tumor volumes nor means of tumor arterial perfusion at pre-RT were predictive of treatment outcome. The mean arterial perfusion over the tumors increased significantly at mid-RT in progressive tumors compared with the responsive tumors (P=.006). From pre-RT to mid-RT, the responsive tumors had a decrease in the tumor subvolumes with elevated arterial perfusion (median, −14%; range, −75% to 65%), whereas the progressive tumors had an increase of the subvolumes (median, 57%; range, −7% to 165%) (P=.003). Receiver operating characteristic analysis of the percentage change in the subvolume for predicting tumor progression post-RT had an area under the curve of 0.90. Conclusion: The increase in the subvolume of the intrahepatic tumor with elevated arterial perfusion during RT has the potential to be a predictor for tumor progression post-RT. The tumor subvolume could be a radiation boost candidate for response-driven adaptive RT.

  16. Clinical implementation of AXB from AAA for breast: Plan quality and subvolume analysis.

    PubMed

    Guebert, Alexandra; Conroy, Leigh; Weppler, Sarah; Alghamdi, Majed; Conway, Jessica; Harper, Lindsay; Phan, Tien; Olivotto, Ivo A; Smith, Wendy L; Quirk, Sarah

    2018-05-01

    Two dose calculation algorithms are available in Varian Eclipse software: the Anisotropic Analytical Algorithm (AAA) and Acuros External Beam (AXB). Many Varian Eclipse-based centers have access to AXB; however, a thorough understanding of how it will affect plan characteristics and, subsequently, clinical practice is necessary prior to implementation. We characterized the difference in breast plan quality between AXB and AAA for dissemination to clinicians during implementation. Locoregional irradiation plans were created with AAA for 30 breast cancer patients with a prescription dose of 50 Gy to the breast and 45 Gy to the regional nodes, in 25 fractions. The internal mammary chain (IMC CTV) nodes were covered by 80% of the breast dose. AXB, with both dose-to-water and dose-to-medium reporting, was used to recalculate plans while maintaining constant monitor units. Target coverage and organ-at-risk doses were compared between the two algorithms using dose-volume parameters. An analysis to assess location-specific changes was performed by dividing the breast into nine subvolumes in the superior-inferior and left-right directions. There were minimal differences found between the AXB- and AAA-calculated plans. The median difference between AXB and AAA for breast CTV V95% was <2.5%. For the IMC CTV, the median differences in V95% and V80% were <5% and 0%, respectively, indicating that IMC CTV coverage decreased only where it was marginal. Mean superficial dose increased by a median of 3.2 Gy. In the subvolume analysis, the medial subvolumes were "hotter" when recalculated with AXB and the lateral subvolumes "cooler"; however, all differences were within 2 Gy. We observed minimal differences in the magnitude and spatial distribution of dose when comparing the two algorithms. The largest observable differences occurred in superficial dose regions. Therefore, clinical implementation of AXB from AAA for breast radiotherapy is not expected to result in changes in clinical practice for prescribing or planning breast radiotherapy.

  17. Lung nodule detection from CT scans using 3D convolutional neural networks without candidate selection

    NASA Astrophysics Data System (ADS)

    Jenuwine, Natalia M.; Mahesh, Sunny N.; Furst, Jacob D.; Raicu, Daniela S.

    2018-02-01

    Early detection of lung nodules from CT scans is key to improving lung cancer treatment, but poses a significant challenge for radiologists due to the high throughput required of them. Computer-Aided Detection (CADe) systems aim to automatically detect these nodules with computer algorithms, thus improving diagnosis. These systems typically use a candidate selection step, which identifies all objects that resemble nodules, followed by a machine learning classifier which separates true nodules from false positives. We create a CADe system that uses a 3D convolutional neural network (CNN) to detect nodules in CT scans without a candidate selection step. Using data from the LIDC database, we train a 3D CNN to analyze subvolumes from anywhere within a CT scan and output the probability that each subvolume contains a nodule. Once trained, we apply our CNN to detect nodules from entire scans, by systematically dividing the scan into overlapping subvolumes which we input into the CNN to obtain the corresponding probabilities. By enabling our network to process an entire scan, we expect to streamline the detection process while maintaining its effectiveness. Our results imply that with continued training using an iterative training scheme, the one-step approach has the potential to be highly effective.
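
    A toy version of the approach: a small 3D CNN (the paper's architecture is not given in the abstract) applied in a stride-based sliding window over the whole scan, written here with PyTorch as an assumed framework; the network, sizes, and data are all illustrative.

```python
import torch
import torch.nn as nn

class SubvolumeNet(nn.Module):
    """Tiny 3D CNN emitting P(nodule) for one fixed-size subvolume."""
    def __init__(self, size=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(16 * (size // 4) ** 3, 1))

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x)))

def scan_probabilities(scan, model, size=32, stride=16):
    """Slide overlapping subvolumes over a whole scan, as the abstract describes."""
    out = []
    for z in range(0, scan.shape[0] - size + 1, stride):
        for y in range(0, scan.shape[1] - size + 1, stride):
            for x in range(0, scan.shape[2] - size + 1, stride):
                sub = scan[z:z + size, y:y + size, x:x + size]
                out.append(((z, y, x), model(sub[None, None]).item()))
    return out

model = SubvolumeNet().eval()
with torch.no_grad():
    probs = scan_probabilities(torch.rand(64, 64, 64), model)
```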

  18. Overlap of highly FDG-avid and FMISO hypoxic tumor subvolumes in patients with head and neck cancer.

    PubMed

    Mönnich, David; Thorwarth, Daniela; Leibfarth, Sara; Pfannenberg, Christina; Reischl, Gerald; Mauz, Paul-Stefan; Nikolaou, Konstantin; la Fougère, Christian; Zips, Daniel; Welz, Stefan

    2017-11-01

    PET imaging may be used to personalize radiotherapy (RT) by identifying radioresistant tumor subvolumes for RT dose escalation. Using the tracers [18F]-fluorodeoxyglucose (FDG) and [18F]-fluoromisonidazole (FMISO), different aspects of tumor biology can be visualized. FDG depicts various biological aspects, e.g., proliferation, glycolysis and hypoxia, while FMISO is more hypoxia specific. In this study, we analyzed the size and overlap of volumes based on the two markers for head-and-neck cancer (HNSCC) patients. Twenty-five HNSCC patients underwent a CT scan, as well as FDG and dynamic FMISO PET/CT prior to definitive radio-chemotherapy in a prospective FMISO dose escalation study. Three PET-based subvolumes of the primary tumor (GTV_prim) were segmented: a highly FDG-avid volume V_FDG, a hypoxic volume on the static FMISO image acquired four hours post tracer injection (V_H) and a retention/perfusion volume (V_M) using pharmacokinetic modeling of dynamic FMISO data. Absolute volumes, overlaps and distances to agreement (DTA) were evaluated. Sizes of the PET-based volumes and the GTV_prim are significantly different (GTV_prim > V_FDG > V_H > V_M; p < .05). V_H is covered by V_FDG or DTAs are small (mean coverage 74.4%, mean DTA 1.4 mm). Coverage of V_M is less pronounced. With respect to V_FDG and V_H, the mean coverage is 48.7% and 43.1% and the mean DTA is 5.3 mm and 6.3 mm, respectively. For two patients, DTAs were larger than 2 cm. Hypoxic subvolumes from static PET imaging are typically covered by, or in close proximity to, highly FDG-avid subvolumes. Therefore, dose escalation to FDG-positive subvolumes should cover the static hypoxic subvolumes in most patients, with the disadvantage of larger volumes, resulting in a higher risk of dose-limiting toxicity. Coverage of subvolumes from dynamic FMISO PET is less pronounced. Further studies are needed to explore the relevance of mismatches in functional imaging.
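
    Coverage and distance-to-agreement between two binary subvolumes can be computed with a Euclidean distance transform. A sketch, with DTA taken here as the mean distance of uncovered voxels to the covering volume (the paper's exact DTA definition may differ) and random masks standing in for the segmented volumes:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def coverage_and_dta(v_small, v_large, voxel_mm=(1.0, 1.0, 1.0)):
    """% of v_small inside v_large, and mean distance (mm) of the
    uncovered part of v_small to the nearest voxel of v_large."""
    pct = 100.0 * np.logical_and(v_small, v_large).sum() / max(v_small.sum(), 1)
    dist_to_large = distance_transform_edt(~v_large, sampling=voxel_mm)
    outside = np.logical_and(v_small, ~v_large)
    dta = dist_to_large[outside].mean() if outside.any() else 0.0
    return pct, dta

rng = np.random.default_rng(4)
v_fdg = rng.random((40, 40, 40)) > 0.6   # stand-in for V_FDG
v_h = rng.random((40, 40, 40)) > 0.8     # stand-in for V_H
print(coverage_and_dta(v_h, v_fdg))
```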

  19. The multinomial simulation algorithm for discrete stochastic simulation of reaction-diffusion systems.

    PubMed

    Lampoudi, Sotiria; Gillespie, Dan T; Petzold, Linda R

    2009-03-07

    The Inhomogeneous Stochastic Simulation Algorithm (ISSA) is a variant of the stochastic simulation algorithm in which the spatially inhomogeneous volume of the system is divided into homogeneous subvolumes, and the chemical reactions in those subvolumes are augmented by diffusive transfers of molecules between adjacent subvolumes. The ISSA can be prohibitively slow when the system is such that diffusive transfers occur much more frequently than chemical reactions. In this paper we present the Multinomial Simulation Algorithm (MSA), which is designed, on the one hand, to outperform the ISSA when diffusive transfer events outnumber reaction events, and on the other, to handle small reactant populations with greater accuracy than deterministic-stochastic hybrid algorithms. The MSA treats reactions in the usual ISSA fashion, but uses appropriately conditioned binomial random variables for representing the net numbers of molecules diffusing from any given subvolume to a neighbor within a prescribed distance. Simulation results illustrate the benefits of the algorithm.
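
    A simplified one-step sketch of the multinomial idea: the molecules leaving a subvolume within a time step are drawn in a single multinomial sample rather than jump by jump. The conditioning details of the actual MSA are more involved; the rates and counts below are invented.

```python
import numpy as np

def diffusion_step(counts, rates, tau, rng):
    """One multinomial diffusion step over a time step tau (simplified).

    counts[i] : molecules in subvolume i
    rates[i]  : dict {j: jump rate i -> j}; tau must be small enough that
                the total jump probability per subvolume stays below 1.
    """
    new = counts.copy()
    for i, nbrs in rates.items():
        js = list(nbrs)
        p = np.array([nbrs[j] * tau for j in js])
        moved = rng.multinomial(counts[i], np.append(p, 1.0 - p.sum()))
        for j, m in zip(js, moved[:-1]):   # last slot = molecules that stay
            new[i] -= m
            new[j] += m
    return new

rng = np.random.default_rng(5)
counts = np.array([1000, 0, 0])
rates = {0: {1: 1.0}, 1: {0: 1.0, 2: 1.0}, 2: {1: 1.0}}
for _ in range(10):
    counts = diffusion_step(counts, rates, tau=0.05, rng=rng)
print(counts, counts.sum())   # total molecule number is conserved
```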

  20. DCE-MRI defined subvolumes of a brain metastatic lesion by principal component analysis and fuzzy-c-means clustering for response assessment of radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farjam, Reza; Tsien, Christina I.; Lawrence, Theodore S.

    2014-01-15

    Purpose: To develop a pharmacokinetic model-free framework to analyze dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data for assessment of the response of brain metastases to radiation therapy. Methods: Twenty patients with 45 analyzable brain metastases had MRI scans prior to whole-brain radiation therapy (WBRT) and at the end of the 2-week therapy. The volumetric DCE images covering the whole brain were acquired on a 3T scanner with approximately 5 s temporal resolution and a total scan time of about 3 min. DCE curves from all voxels of the 45 brain metastases were normalized and then temporally aligned. A DCE matrix constructed from the aligned DCE curves of all voxels of the 45 lesions obtained prior to WBRT is processed by principal component analysis to generate the principal components (PCs). Then, the projection coefficient maps prior to and at the end of WBRT are created for each lesion. Next, a pattern recognition technique based upon fuzzy-c-means clustering is used to delineate the tumor subvolumes relating to the values of the significant projection coefficients. The relationship between changes in the different tumor subvolumes and treatment response was evaluated to differentiate responsive from stable and progressive tumors. Performance of the PC-defined tumor subvolume was also evaluated by receiver operating characteristic (ROC) analysis in predicting nonresponsive lesions and compared with physiologically defined tumor subvolumes. Results: The projection coefficient maps of the first three PCs contain almost all response-related information in the DCE curves of brain metastases. The first projection coefficient, related to the area under the DCE curves, is the major component for determining response, while the third one has a complementary role. In ROC analysis, areas under the curve of 0.88 ± 0.05 and 0.86 ± 0.06 were achieved for the PC-defined and physiologically defined tumor subvolumes in response assessment. Conclusions: The PC-defined subvolume of a brain metastasis could predict tumor response to therapy similarly to the physiologically defined one, while the former is determined more rapidly for clinical decision-making support.
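
    The two computational stages (PCA on the stacked DCE curves, then fuzzy c-means on the projection coefficients) can be sketched as follows; the fuzzy c-means loop is the textbook algorithm, not the authors' implementation, and the curves are random stand-ins.

```python
import numpy as np
from sklearn.decomposition import PCA

def fuzzy_cmeans(X, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Textbook fuzzy c-means; returns memberships U and cluster centers."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(n_clusters), size=len(X))
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

# Stand-in for normalized, temporally aligned DCE curves (voxels x time).
rng = np.random.default_rng(6)
curves = rng.random((2000, 40))

coeffs = PCA(n_components=3).fit_transform(curves)  # first three PCs
U, _ = fuzzy_cmeans(coeffs, n_clusters=2)
subvolume_voxels = U[:, 0] > 0.5   # voxels belonging to one PC-defined subvolume
```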

  21. A novel concept for tumour targeting with radiation: Inverse dose-painting or targeting the "Low Drug Uptake Volume".

    PubMed

    Yaromina, Ala; Granzier, Marlies; Biemans, Rianne; Lieuwes, Natasja; van Elmpt, Wouter; Shakirin, Georgy; Dubois, Ludwig; Lambin, Philippe

    2017-09-01

    We tested a novel treatment approach combining (1) targeting radioresistant hypoxic tumour cells with the hypoxia-activated prodrug TH-302 and (2) inverse radiation dose-painting to selectively boost non-hypoxic tumour sub-volumes having no/low drug uptake. Uptake of the hypoxia tracer 18F-HX4, measured with a clinical PET/CT scanner, was used as a surrogate of TH-302 activity in rhabdomyosarcomas growing in immunocompetent rats. The low or high drug uptake volume (LDUV/HDUV) was defined as the 40% of the GTV with the lowest or highest 18F-HX4 uptake, respectively. Two hours post TH-302/saline administration, animals received either single-dose uniform radiotherapy (RT) (15 or 18.5 Gy) or dose-painted non-uniform radiation (15 Gy) with a 50% higher dose to the LDUV or HDUV (18.5 Gy). Treatment plans were created using the Eclipse treatment planning system and radiation was delivered using VMAT. Tumour response was quantified as the time to reach 3 times the starting tumour volume. Non-uniform RT boosting the tumour sub-volume with low TH-302 uptake (LDUV) was superior to the same dose escalation to the HDUV (p<0.0001) and to uniform RT with the same mean dose of 15 Gy (p=0.0077). Notably, dose escalation to the LDUV required on average a 3.5 Gy lower dose to the GTV to achieve a tumour response similar to uniform dose escalation. The results support targeted dose escalation to the non-hypoxic tumour sub-volume with no/low activity of hypoxia-activated prodrugs. This strategy applies on average a lower radiation dose and is as effective as uniform dose escalation to the entire tumour. It could be applied to other types of drugs provided that their distribution can be imaged.

  22. An augmented parametric response map with consideration of image registration error: towards guidance of locally adaptive radiotherapy

    NASA Astrophysics Data System (ADS)

    Lausch, Anthony; Chen, Jeff; Ward, Aaron D.; Gaede, Stewart; Lee, Ting-Yim; Wong, Eugene

    2014-11-01

    Parametric response map (PRM) analysis is a voxel-wise technique for predicting overall treatment outcome, which shows promise as a tool for guiding personalized locally adaptive radiotherapy (RT). However, image registration error (IRE) introduces uncertainty into this analysis which may limit its use for guiding RT. Here we extend the PRM method to include an IRE-related PRM analysis confidence interval and also incorporate multiple graded classification thresholds to facilitate visualization. A Gaussian IRE model was used to compute an expected value and confidence interval for PRM analysis. The augmented PRM (A-PRM) was evaluated using CT-perfusion functional image data from patients treated with RT for glioma and hepatocellular carcinoma. Known rigid IREs were simulated by applying one thousand different rigid transformations to each image set. PRM and A-PRM analyses of the transformed images were then compared to analyses of the original images (ground truth) in order to investigate the two methods in the presence of controlled IRE. The A-PRM was shown to help visualize and quantify IRE-related analysis uncertainty. The use of multiple graded classification thresholds also provided additional contextual information which could be useful for visually identifying adaptive RT targets (e.g. sub-volume boosts). The A-PRM should facilitate reliable PRM guided adaptive RT by allowing the user to identify if a patient’s unique IRE-related PRM analysis uncertainty has the potential to influence target delineation.
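
    The confidence interval in the abstract comes from simulating rigid IREs; a stripped-down sketch follows, sampling Gaussian translations only (the paper also includes rotations) and assuming unit-spaced voxels so the shifts are in voxel units. All data and thresholds are illustrative.

```python
import numpy as np
from scipy.ndimage import shift

def prm_fraction(pre, post, mask, thresh):
    """Fraction of masked voxels whose increase exceeds thresh (one PRM class)."""
    return ((post - pre)[mask] > thresh).mean()

def prm_with_ire_ci(pre, post, mask, thresh, sigma_vox, n=1000, seed=0):
    """Attach a Monte Carlo confidence interval to the PRM fraction by
    resampling Gaussian translation errors (shifts are in voxel units)."""
    rng = np.random.default_rng(seed)
    samples = [prm_fraction(pre, shift(post, rng.normal(0, sigma_vox, 3)),
                            mask, thresh) for _ in range(n)]
    lo, hi = np.percentile(samples, [2.5, 97.5])
    return prm_fraction(pre, post, mask, thresh), (lo, hi)

rng = np.random.default_rng(7)
pre, post = rng.random((2, 48, 48, 48))
mask = np.ones(pre.shape, dtype=bool)
print(prm_with_ire_ci(pre, post, mask, thresh=0.2, sigma_vox=1.0, n=200))
```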

  23. WE-FG-202-04: Decomposition of FDG-PET Based Differential Uptake Volume Histograms in Rectal Cancer Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, J; Vuong, T; Tomic, N

    2016-06-15

    Purpose: The goal of this study is to test the possible use of the analytical decomposition of differential uptake volume histograms (dUVHs) obtained from FDG-PET/CT data to isolate sub-volumes within a tumor, known as biological target volumes (BTVs). Methods: A retrospective study was conducted on a cohort of 20 histopathologically confirmed rectal adenocarcinoma patients having PET/CT scans for staging. All patients (T3N0) underwent pre-operative endorectal brachytherapy. After surgery, patients were restaged: 10 patients were T0N0 and 10 were restaged as remaining T3N0. The extent of the disease was sampled in order to create dUVHs, which were subsequently decomposed into the fewest number of analytical Gaussian functions. Results: With the assumption that each fitted function corresponded to a single sub-volume within the tumor, six sub-volumes were found to consistently emerge. The first two sub-volumes were influenced by contouring and were not considered in the analysis. For the T3N0 population, abundances for volumes V3-V6 were 63.6%±11.3%, 25.7%±8.4%, 6.1%±4.9%, and 4.7%±2.6%. For the T0N0 population, they were 50.2%±6.8%, 33.4%±4.3%, 11.8%±7.6%, and 4.7%±2.4%. The two populations were compared using two-tailed t-tests: volumes 3 and 4 were statistically different, with p values of 0.021 and 0.056, respectively. V6 was located at 8.63 ± 2.2 for the T0N0 group and 6.14 ± 0.78 for the T3N0 group (p=0.016). Conclusion: We described a method for dUVH decomposition using FDG-PET images of rectal adenocarcinoma patients who subsequently went for pre-operative brachytherapy. In addition to extracting different sub-volumes corresponding to different FDG uptake levels, we observed different abundances of two sub-volumes as well as different positions of the maximum uptake between the two patient groups. Besides opening the door to further investigation into the underlying physiological phenotypes of segmented subvolumes and their use for biological radiotherapy treatment planning, this method may also provide parameters that could correlate to clinical outcomes in radiotherapy patients.
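
    Decomposing a histogram of uptake values into the fewest Gaussians can be approximated with a Gaussian mixture model selected by BIC; this is a stand-in for the paper's fitting procedure, applied to synthetic SUV samples.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic SUV samples drawn from three underlying populations.
rng = np.random.default_rng(8)
suv = np.concatenate([rng.normal(2.5, 0.5, 3000),
                      rng.normal(5.0, 0.8, 1500),
                      rng.normal(8.5, 1.0, 500)])[:, None]

# Fewest-components fit chosen by BIC; weights ~ sub-volume abundances,
# means ~ positions of the uptake populations.
best = min((GaussianMixture(k, random_state=0).fit(suv) for k in range(1, 7)),
           key=lambda gmm: gmm.bic(suv))
for w, mu in sorted(zip(best.weights_, best.means_.ravel()), key=lambda t: t[1]):
    print(f"mean SUV {mu:.2f}: abundance {100 * w:.1f}%")
```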

  24. Automatic elastic image registration by interpolation of 3D rotations and translations from discrete rigid-body transformations.

    PubMed

    Walimbe, Vivek; Shekhar, Raj

    2006-12-01

    We present an algorithm for automatic elastic registration of three-dimensional (3D) medical images. Our algorithm initially recovers the global spatial mismatch between the reference and floating images, followed by hierarchical octree-based subdivision of the reference image and independent registration of the floating image with the individual subvolumes of the reference image at each hierarchical level. Global as well as local registrations use the six-parameter full rigid-body transformation model and are based on maximization of normalized mutual information (NMI). To ensure robustness of the subvolume registration with low voxel counts, we calculate NMI using a combination of current and prior mutual histograms. To generate a smooth deformation field, we perform direct interpolation of six-parameter rigid-body subvolume transformations obtained at the last subdivision level. Our interpolation scheme involves scalar interpolation of the 3D translations and quaternion interpolation of the 3D rotational pose. We analyzed the performance of our algorithm through experiments involving registration of synthetically deformed computed tomography (CT) images. Our algorithm is general and can be applied to image pairs of any two modalities of most organs. We have demonstrated successful registration of clinical whole-body CT and positron emission tomography (PET) images using this algorithm. The registration accuracy for this application was evaluated, based on validation using expert-identified anatomical landmarks in 15 CT-PET image pairs. The algorithm's performance was comparable to the average accuracy observed for three expert-determined registrations in the same 15 image pairs.
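
    The interpolation scheme described (scalar interpolation of translations, quaternion interpolation of rotations) maps directly onto SciPy's Rotation and Slerp utilities; the two subvolume transforms below are invented values for illustration.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

# Rigid-body results for two neighboring subvolumes (invented values).
keyframes = [0.0, 1.0]
rotations = Rotation.from_euler("xyz", [[0, 0, 0], [10, 5, -3]], degrees=True)
translations = np.array([[0.0, 0.0, 0.0], [4.0, -2.0, 1.0]])  # mm

slerp = Slerp(keyframes, rotations)   # quaternion interpolation of the pose
t = 0.5
rot_t = slerp([t])[0]
trans_t = (1 - t) * translations[0] + t * translations[1]  # scalar interpolation

point = np.array([10.0, 0.0, 0.0])
print(rot_t.apply(point) + trans_t)   # smoothly interpolated mapping of a point
```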

  25. A Technology for Developing Instructional Materials. Vol. 3, Handbook. Part B, Collect and Analyze Data About Criterion Behaviors.

    ERIC Educational Resources Information Center

    Gropper, George L.

    This document is the second in a series of 11 subvolumes of a handbook providing training for educational research and development personnel in the development of instructional materials. This subvolume deals with the task of collecting and analyzing data about criterion behavior. The document content is divided into the following five steps for…

  26. Dose escalation to high-risk sub-volumes based on non-invasive imaging of hypoxia and glycolytic activity in canine solid tumors: a feasibility study

    PubMed Central

    2013-01-01

    Introduction: Glycolytic activity and hypoxia are associated with poor prognosis and radiation resistance. Including both the tumor uptake of 2-deoxy-2-[18F]-fluorodeoxyglucose (FDG) and the proposed hypoxia tracer copper(II)-diacetyl-bis(N4-methylthiosemicarbazone) (Cu-ATSM) in targeted therapy planning may therefore lead to improved tumor control. In this study we analyzed the overlap between sub-volumes of FDG and hypoxia assessed by the uptake of 64Cu-ATSM in canine solid tumors, and evaluated the possibilities for dose redistribution within the gross tumor volume (GTV). Materials and methods: Positron emission tomography/computed tomography (PET/CT) scans of five spontaneous canine solid tumors were included. FDG-PET/CT was obtained at day 1, 64Cu-ATSM at days 2 and 3 (3 and 24 h p.i.). The GTV was delineated and CT images were co-registered. Sub-volumes for 3 h and 24 h 64Cu-ATSM (Cu3 and Cu24) were defined by a threshold-based method. FDG sub-volumes were delineated at 40% (FDG40) and 50% (FDG50) of SUVmax. The sizes of the sub-volumes, their intersection and the biological target volume (BTV) were measured in a treatment planning software. By varying the average dose prescription to the tumor from 66 to 85 Gy, the possible dose boost (D_B) was calculated for the three scenarios in which the optimal target for the boost was a single sub-volume, the union, or the intersection of the FDG and 64Cu-ATSM sub-volumes. Results: The potential boost volumes represented a fairly large fraction of the total GTV: Cu3 49.8% (26.8-72.5%), Cu24 28.1% (2.4-54.3%), FDG40 45.2% (10.1-75.2%), and FDG50 32.5% (2.6-68.1%). A BTV including the union (∪) of Cu3 and FDG would involve boosting a larger fraction of the GTV, in the case of Cu3∪FDG40 63.5% (51.8-83.8) and Cu3∪FDG50 48.1% (43.7-80.8). The union allowed only a very limited D_B, whereas the intersection allowed a substantial dose escalation. Conclusions: FDG and 64Cu-ATSM sub-volumes were only partly overlapping, suggesting that the tracers offer complementary information on tumor physiology. Targeting the combined PET-positive volume (BTV) for dose escalation within the GTV results in a limited D_B. This suggests a more refined dose redistribution based on a weighted combination of the PET tracers in order to obtain improved tumor control. PMID:24199939

  27. Should regional ventilation function be considered during radiation treatment planning to prevent radiation-induced complications?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lan, Fujun; Jeudy, Jean; D’Souza, Warren

    Purpose: To investigate the incorporation of pretherapy regional ventilation function in predicting radiation fibrosis (RF) in stage III non-small cell lung cancer (NSCLC) patients treated with concurrent thoracic chemoradiotherapy. Methods: Thirty-seven patients with stage III NSCLC were retrospectively studied. Patients received one cycle of cisplatin–gemcitabine, followed by two to three cycles of cisplatin–etoposide concurrently with involved-field thoracic radiotherapy (46–66 Gy; 2 Gy/fraction). Pretherapy regional ventilation images of the lung were derived from 4D computed tomography via a density change–based algorithm with mass correction. In addition to the conventional dose–volume metrics (V20, V30, V40, and mean lung dose), dose–function metrics (fV20, fV30, fV40, and functional mean lung dose) were generated by combining regional ventilation and radiation dose. A new class of metrics was derived and referred to as dose–subvolume metrics (sV20, sV30, sV40, and subvolume mean lung dose); these were defined as the conventional dose–volume metrics computed on the functional lung. Area under the receiver operating characteristic curve (AUC) values and logistic regression analyses were used to evaluate these metrics in predicting hallmark characteristics of RF (lung consolidation, volume loss, and airway dilation). Results: AUC values for the dose–volume metrics in predicting lung consolidation, volume loss, and airway dilation were 0.65–0.69, 0.57–0.70, and 0.69–0.76, respectively. The respective ranges for dose–function metrics were 0.63–0.66, 0.61–0.71, and 0.72–0.80, and for dose–subvolume metrics were 0.50–0.65, 0.65–0.75, and 0.73–0.85. Using an AUC value of 0.70 as the cutoff suggested that at least one of each type of metric (dose–volume, dose–function, dose–subvolume) was predictive for volume loss and airway dilation, whereas lung consolidation could not be accurately predicted by any of the metrics. Logistic regression analyses showed that dose–function and dose–subvolume metrics were significant (P values ≤ 0.02) in predicting volume loss and airway dilation. A likelihood ratio test showed that combining dose–function and/or dose–subvolume metrics with dose–volume metrics achieved significant improvements in prediction accuracy for volume loss and airway dilation (P values ≤ 0.04). Conclusions: The authors' results demonstrated that the inclusion of regional ventilation function improved accuracy in predicting RF. In particular, dose–subvolume metrics provided a promising method for preventing radiation-induced pulmonary complications.
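
    The three metric families are easy to state in code. The sketch below assumes a ventilation-percentile cutoff for defining the "functional lung" (the abstract does not give the paper's exact definition), with synthetic dose and ventilation values.

```python
import numpy as np

def v_dose(dose, thresh):
    """Conventional Vx: fraction of lung volume receiving >= thresh Gy."""
    return (dose >= thresh).mean()

def fv_dose(dose, vent, thresh):
    """Dose-function fVx: ventilation-weighted fraction receiving >= thresh Gy."""
    return vent[dose >= thresh].sum() / vent.sum()

def sv_dose(dose, vent, thresh, pct=70):
    """Dose-subvolume sVx: Vx restricted to the functional lung (here the
    voxels above the pct-th ventilation percentile; the cutoff is assumed)."""
    functional = vent > np.percentile(vent, pct)
    return (dose[functional] >= thresh).mean()

rng = np.random.default_rng(9)
dose = rng.uniform(0, 66, 10000)    # Gy per lung voxel (synthetic)
vent = rng.gamma(2.0, 1.0, 10000)   # regional ventilation surrogate (synthetic)
for t in (20, 30, 40):
    print(f"V{t}={v_dose(dose, t):.2f}  fV{t}={fv_dose(dose, vent, t):.2f}  "
          f"sV{t}={sv_dose(dose, vent, t):.2f}")
```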

  28. Apparatus for blending small particles

    DOEpatents

    Bradley, R.A.; Reese, C.R.; Sease, J.D.

    1975-08-26

    An apparatus is described for blending small particles and uniformly loading the blended particles in a receptacle. Measured volumes of various particles are simultaneously fed into a funnel to accomplish radial blending and then directed onto the apex of a conical splitter which collects the blended particles in a multiplicity of equal subvolumes. Thereafter the apparatus sequentially discharges the subvolumes for loading in a receptacle. A system for blending nuclear fuel particles and loading them into fuel rod molds is described in a preferred embodiment.

  29. Integrated Structural/Acoustic Modeling of Heterogeneous Panels

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett, A.; Aboudi, Jacob; Arnold, Steven, M.; Pennline, James, A.

    2012-01-01

    A model for the dynamic response of heterogeneous media is presented. A given medium is discretized into a number of subvolumes, each of which may contain an elastic anisotropic material, void, or fluid, and time-dependent boundary conditions are applied to simulate impact or incident pressure waves. The full time-dependent displacement and stress response throughout the medium is then determined via an explicit solution procedure. The model is applied to simulate the coupled structural/acoustic response of foam core sandwich panels as well as aluminum panels with foam inserts. Emphasis is placed on the acoustic absorption performance of the panels versus weight and the effects of the arrangement of the materials and incident wave frequency.

  30. Precision of DVC approaches for strain analysis in bone imaged with μCT at different dimensional levels.

    NASA Astrophysics Data System (ADS)

    Dall'Ara, Enrico; Peña-Fernández, Marta; Palanca, Marco; Giorgi, Mario; Cristofolini, Luca; Tozzi, Gianluca

    2017-11-01

    Accurate measurement of local strain in heterogeneous and anisotropic bone tissue is fundamental to understanding the pathophysiology of musculoskeletal diseases, to evaluating the effect of interventions in preclinical studies, and to optimizing the design and delivery of biomaterials. Digital volume correlation (DVC) can be used to measure the three-dimensional displacement and strain fields from micro-computed tomography (µCT) images of loaded specimens. However, this approach is affected by the quality of the input images, by the morphology and density of the tissue under investigation, by the correlation scheme, and by the operational parameters used in the computation. Therefore, for each application the precision of the method should be evaluated. In this paper we present results collected from datasets analyzed in previous studies, as well as new data from a recent experimental campaign, characterizing the relationship between the precision of two different DVC approaches and the spatial resolution of the outputs. Different bone structures scanned with laboratory-source µCT or synchrotron-light µCT (SRµCT) were processed in zero-strain tests to evaluate the precision of the DVC methods as a function of the subvolume size, which ranged from 8 to 2500 micrometers. The results confirmed that for every microstructure the precision of DVC improves for larger subvolume sizes, following power laws. However, for the first time, large differences in the precision of both local and global DVC approaches have been highlighted when SRµCT or in vivo µCT images were used instead of conventional ex vivo µCT. These findings suggest that in situ mechanical testing protocols applied in SRµCT facilities should be optimized in order to allow DVC analyses of localized strain measurements. Moreover, for in vivo µCT applications, DVC analyses should be performed only with relatively coarse spatial resolution to achieve reasonable precision. In conclusion, we have shown extensively that the precision of both tested DVC approaches is affected by bone structure, input image resolution, and subvolume size. Before each specific application, DVC users should apply a similar approach to find the best compromise between precision and spatial resolution of the measurements.

  12. Progressive Failure Analysis of Composite Stiffened Panels

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Yarrington, Phillip W.; Collier, Craig S.; Arnold, Steven M.

    2006-01-01

    A new progressive failure analysis capability for stiffened composite panels has been developed based on the combination of the HyperSizer stiffened panel design/analysis/optimization software with the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC). MAC/GMC discretizes a composite material's microstructure into a number of subvolumes and solves for the stress and strain state in each while also providing the homogenized composite properties. As a result, local failure criteria may be employed to predict local subvolume failure and the effects of these local failures on the overall composite response. When combined with HyperSizer, MAC/GMC is employed to represent the ply-level composite material response within the laminates that constitute a stiffened panel. The effects of local subvolume failures can then be tracked as loading on the stiffened panel progresses. Sample progressive failure results are presented at both the composite laminate and the composite stiffened panel levels. Deformation and failure model predictions are compared with experimental data from the World Wide Failure Exercise for AS4/3501-6 graphite/epoxy laminates.
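
    The idea of tracking local subvolume failures and their effect on the overall response can be caricatured with a one-dimensional parallel-subcell (Voigt) model: subcells share the same strain, and a subcell whose stress exceeds an assumed strength stops carrying load, softening the homogenized stress. This is an illustrative toy, not MAC/GMC; every number in it is an assumption.

      import numpy as np

      # Toy sketch of the progressive-failure idea (not MAC/GMC itself):
      # parallel (Voigt) subcells share the same strain; when a subcell's
      # stress exceeds its assumed strength, its stiffness is knocked down
      # and the homogenized response softens. All numbers are illustrative.
      E = np.array([230e9, 4e9, 4e9, 4e9])             # fiber + matrix subcells [Pa]
      v = np.array([0.25, 0.25, 0.25, 0.25])           # volume fractions
      strength = np.array([3500e6, 80e6, 80e6, 80e6])  # assumed strengths [Pa]
      failed = np.zeros(4, dtype=bool)

      for strain in np.linspace(0, 0.02, 200):
          stress_sub = np.where(failed, 0.0, E * strain)  # failed subcells carry no load
          newly = (stress_sub > strength) & ~failed
          failed |= newly
          stress_sub[newly] = 0.0
          sigma = np.sum(v * stress_sub)                  # homogenized stress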

  13. TU-AB-BRA-11: Evaluation of Fully Automatic Volumetric GBM Segmentation in the TCGA-GBM Dataset: Prognosis and Correlation with VASARI Features

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rios Velazquez, E; Meier, R; Dunn, W

    Purpose: Reproducible definition and quantification of imaging biomarkers is essential. We evaluated a fully automatic MR-based segmentation method by comparing it to manually defined sub-volumes by experienced radiologists in the TCGA-GBM dataset, in terms of sub-volume prognosis and association with VASARI features. Methods: MRI sets of 67 GBM patients were downloaded from the Cancer Imaging Archive. GBM sub-compartments were defined manually and automatically using the Brain Tumor Image Analysis (BraTumIA) software, including necrosis, edema, contrast-enhancing and non-enhancing tumor. Spearman's correlation was used to evaluate the agreement with VASARI features. Prognostic significance was assessed using the C-index. Results: Auto-segmented sub-volumes showed high agreement with manually delineated volumes (range (r): 0.65 – 0.91). They also showed higher correlation with VASARI features (auto r = 0.35, 0.60 and 0.59; manual r = 0.29, 0.50, 0.43, for contrast-enhancing, necrosis and edema, respectively). The contrast-enhancing volume and post-contrast abnormal volume showed the highest C-index (0.73 and 0.72), comparable to manually defined volumes (p = 0.22 and p = 0.07, respectively). The non-enhancing region defined by BraTumIA showed a significantly higher prognostic value (CI = 0.71) than the edema (CI = 0.60); these two regions could not be distinguished by manual delineation. Conclusion: BraTumIA tumor sub-compartments showed higher correlation with VASARI data, and equivalent performance in terms of prognosis, compared to manual sub-volumes. This method can enable more reproducible definition and quantification of imaging-based biomarkers and has large potential in high-throughput medical imaging research.

  14. Automated Detection of Synapses in Serial Section Transmission Electron Microscopy Image Stacks

    PubMed Central

    Kreshuk, Anna; Koethe, Ullrich; Pax, Elizabeth; Bock, Davi D.; Hamprecht, Fred A.

    2014-01-01

    We describe a method for fully automated detection of chemical synapses in serial electron microscopy images with highly anisotropic axial and lateral resolution, such as images taken on transmission electron microscopes. Our pipeline starts from classification of the pixels based on 3D pixel features, which is followed by segmentation with an Ising model MRF and another classification step, based on object-level features. Classifiers are learned on sparse user labels; a fully annotated data subvolume is not required for training. The algorithm was validated on a set of 238 synapses in 20 serial 7197×7351 pixel images (4.5×4.5×45 nm resolution) of mouse visual cortex, manually labeled by three independent human annotators and additionally re-verified by an expert neuroscientist. The error rate of the algorithm (12% false negative, 7% false positive detections) is better than the state of the art, even though, unlike the state-of-the-art method, our algorithm does not require a prior segmentation of the image volume into cells. The software is based on the ilastik learning and segmentation toolkit and the vigra image processing library and is freely available on our website, along with the test data and gold standard annotations (http://www.ilastik.org/synapse-detection/sstem). PMID:24516550
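
    The sparse-label pixel classification step can be sketched with off-the-shelf tools. The actual pipeline uses ilastik with a richer 3-D feature set, so the following random-forest stand-in on multi-scale smoothed intensities, with placeholder data and hypothetical scribble labels, only illustrates the training-on-sparse-labels pattern.

      import numpy as np
      from scipy.ndimage import gaussian_filter
      from sklearn.ensemble import RandomForestClassifier

      # Minimal stand-in for the sparse-label pixel classification step
      # (the actual pipeline uses ilastik with richer 3-D features).
      volume = np.random.rand(10, 128, 128).astype(np.float32)  # placeholder EM stack

      # simple multi-scale intensity features per voxel
      features = np.stack([gaussian_filter(volume, s) for s in (1, 2, 4)], axis=-1)

      # sparse user labels: 0 = unlabeled, 1 = background, 2 = synapse
      labels = np.zeros(volume.shape, dtype=np.uint8)
      labels[5, 60:70, 60:70] = 2       # a few synapse scribbles (hypothetical)
      labels[5, 5:25, 5:25] = 1         # background scribbles (hypothetical)

      mask = labels > 0
      clf = RandomForestClassifier(n_estimators=100)
      clf.fit(features[mask], labels[mask])     # train only on labeled voxels
      prob = clf.predict_proba(features.reshape(-1, 3))[:, -1].reshape(volume.shape)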

  15. Local X-ray Computed Tomography Imaging for Mineralogical and Pore Characterization

    NASA Astrophysics Data System (ADS)

    Mills, G.; Willson, C. S.

    2015-12-01

    Sample size, material properties and image resolution are all tradeoffs that must be considered when imaging porous media samples with X-ray computed tomography. In many natural and engineered samples, pore and throat sizes span several orders of magnitude and are often correlated with the material composition. Local tomography is a nondestructive technique that images a subvolume, within a larger specimen, at high resolution and uses low-resolution tomography data from the larger specimen to reduce reconstruction error. The high-resolution subvolume data can be used to extract important fine-scale properties but, due to the additional noise associated with the truncated dataset, segmentation of different materials and mineral phases is a challenge. The low-resolution data of the larger specimen are typically of much higher quality, making material characterization much easier. In addition, imaging a larger domain allows mm-scale bulk properties and heterogeneities to be determined. In this research, a sandstone core, 7 mm in diameter and ~15 mm in length, was scanned twice. The first scan covered the entire diameter and length of the specimen at an image voxel resolution of 4.1 μm. The second scan was performed on a subvolume, ~1.3 mm in length and ~2.1 mm in diameter, at an image voxel resolution of 1.08 μm. After image processing and segmentation, the pore network structure and mineralogical features were extracted from the low-resolution dataset. Due to the noise in the truncated high-resolution dataset, several image processing approaches were applied prior to image segmentation and extraction of the pore network structure and mineralogy. Results from the different truncated-tomography segmented data sets are compared to each other to evaluate the potential of each approach in identifying the different solid phases from the original 16-bit data set. The truncated-tomography segmented data sets were also compared to the whole-core tomography segmented data set in two ways: (1) assessment of the porosity and pore size distribution at different scales; and (2) comparison of the mineralogical composition and distribution. Finally, registration of the two datasets will be used to show how the pore structure and mineralogy details at the two scales can supplement each other.

  16. Li2O (LiOLi)

    NASA Astrophysics Data System (ADS)

    Guelachvili, G.

    This document is part of Subvolume B `Linear Triatomic Molecules', Part 9, of Volume 20 `Molecular Constants mostly from Infrared Spectroscopy' of Landolt-Börnstein Group II `Molecules and Radicals'.

  17. Automatic detection of lung vessel bifurcation in thoracic CT images

    NASA Astrophysics Data System (ADS)

    Maduskar, Pragnya; Vikal, Siddharth; Devarakota, Pandu

    2011-03-01

    Computer-aided diagnosis (CAD) systems for the detection of lung nodules have been an active topic of research for the last few years. It is desirable that a CAD system generate very few false positives (FPs) while maintaining high sensitivity. This work aims to reduce the number of false positives occurring at vessel bifurcation points. FPs occur quite frequently at vessel branching points, which can appear locally spherical due to the intrinsic geometry of intersecting tubular vessel structures, combined with partial volume effects and their soft-tissue attenuation appearance surrounded by parenchyma. We propose a model-based technique for the detection of vessel branching points using skeletonization, followed by branch-point analysis. First we perform vessel structure enhancement using a multi-scale Hessian filter to accurately segment tubular structures of various sizes, followed by thresholding to obtain a binary vessel segmentation [6]. A modified Reeb graph [7] is applied next to extract the critical points of the structure, and these are joined by a nearest-neighbor criterion to obtain a complete skeletal model of the vessel structure. Finally, the skeletal model is traversed to identify branch points and extract metrics including individual branch length, number of branches and the angles between branches. Results on 80 sub-volumes consisting of 60 actual vessel branchings and 20 solitary solid nodules show that the algorithm correctly identified vessel branching points for 57 sub-volumes (95% sensitivity) and misclassified 2 nodules as vessel branches. Thus, this technique has potential for explicit identification of vessel branching points in general vessel analysis, and could be useful for false positive reduction in a lung CAD system.
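
    A simplified version of the pipeline (vesselness filtering, thresholding, skeletonization, branch-point detection) can be sketched as follows. The neighbor-count rule used here for branch points is a simpler stand-in for the paper's modified Reeb graph, the threshold is arbitrary, and the sub-volume is a random placeholder.

      import numpy as np
      from scipy import ndimage
      from skimage.filters import frangi
      from skimage.morphology import skeletonize

      # Hedged sketch of the branch-point idea: enhance tubular structures,
      # threshold, skeletonize, then flag skeleton voxels with >2 neighbors
      # as candidate bifurcations. (skeletonize handles 3-D in recent
      # scikit-image; older releases used skeletonize_3d.)
      subvol = np.random.rand(64, 64, 64)       # placeholder CT sub-volume
      vesselness = frangi(subvol)               # multi-scale Hessian filter
      binary = vesselness > vesselness.mean() + 2 * vesselness.std()
      skeleton = skeletonize(binary)

      # count 26-connected skeleton neighbors of each skeleton voxel
      kernel = np.ones((3, 3, 3)); kernel[1, 1, 1] = 0
      neighbors = ndimage.convolve(skeleton.astype(np.uint8), kernel, mode="constant")
      branch_points = skeleton & (neighbors > 2)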

  18. Sfg

    NASA Astrophysics Data System (ADS)

    Fischer, R. X.; Baur, W. H.

    This document is part of Subvolume E `Zeolite-Type Crystal Structures and their Chemistry. Framework Type Codes RON to STI' of Volume 14 `Microporous and other Framework Materials with Zeolite-Type Structures' of Landolt-Börnstein Group IV `Physical Chemistry'.

  19. Glossary

    NASA Astrophysics Data System (ADS)

    Bernhardt, J. H.; Kasch, K.-U.; Kaul, A.; Kramer, H.-M.; Noßke, D.; Valentin, J.

    This document is part of Subvolume A 'Fundamentals and Data in Radiobiology, Radiation Biophysics, Dosimetry and Medical Radiological Protection' of Volume 7 'Medical Radiological Physics' of Landolt-Börnstein - Group VIII 'Advanced Materials and Technologies'. It contains the Glossary.

  20. Vet

    NASA Astrophysics Data System (ADS)

    Fischer, R. X.; Baur, W. H.

    This document is part of Subvolume F 'Zeolite-Type Crystal Structures and their Chemistry. Framework Type Codes STO to ZON' of Volume 14 'Microporous and other Framework Materials with Zeolite-Type Structures' of Landolt-Börnstein Group IV 'Physical Chemistry'.

  1. Fully automatic GBM segmentation in the TCGA-GBM dataset: Prognosis and correlation with VASARI features.

    PubMed

    Rios Velazquez, Emmanuel; Meier, Raphael; Dunn, William D; Alexander, Brian; Wiest, Roland; Bauer, Stefan; Gutman, David A; Reyes, Mauricio; Aerts, Hugo J W L

    2015-11-18

    Reproducible definition and quantification of imaging biomarkers is essential. We evaluated a fully automatic MR-based segmentation method by comparing it to manually defined sub-volumes by experienced radiologists in the TCGA-GBM dataset, in terms of sub-volume prognosis and association with VASARI features. MRI sets of 109 GBM patients were downloaded from the Cancer Imaging Archive. GBM sub-compartments were defined manually and automatically using the Brain Tumor Image Analysis (BraTumIA) software. Spearman's correlation was used to evaluate the agreement with VASARI features. Prognostic significance was assessed using the C-index. Auto-segmented sub-volumes showed moderate to high agreement with manually delineated volumes (range (r): 0.4 - 0.86). Also, the auto and manual volumes showed similar correlation with VASARI features (auto r = 0.35, 0.43 and 0.36; manual r = 0.17, 0.67, 0.41, for contrast-enhancing, necrosis and edema, respectively). The auto-segmented contrast-enhancing volume and post-contrast abnormal volume showed the highest AUC (0.66, CI: 0.55-0.77 and 0.65, CI: 0.54-0.76), comparable to manually defined volumes (0.64, CI: 0.53-0.75 and 0.63, CI: 0.52-0.74, respectively). BraTumIA and manual tumor sub-compartments showed comparable performance in terms of prognosis and correlation with VASARI features. This method can enable more reproducible definition and quantification of imaging-based biomarkers and has potential in high-throughput medical imaging research.

  2. Towards the Irving-Kirkwood limit of the mechanical stress tensor

    NASA Astrophysics Data System (ADS)

    Smith, E. R.; Heyes, D. M.; Dini, D.

    2017-06-01

    The probability density functions (PDFs) of the local measure of pressure as a function of the sampling volume are computed for a model Lennard-Jones (LJ) fluid using the Method of Planes (MOP) and Volume Averaging (VA) techniques. This builds on the study of Heyes, Dini, and Smith [J. Chem. Phys. 145, 104504 (2016)] which only considered the VA method for larger subvolumes. The focus here is typically on much smaller subvolumes than considered previously, which tend to the Irving-Kirkwood limit where the pressure tensor is defined at a point. The PDFs from the MOP and VA routes are compared for cubic subvolumes, V = ℓ³. Using very high grid-resolution and box-counting analysis, we also show that any measurement of pressure in a molecular system will fail to exactly capture the molecular configuration. This suggests that it is impossible to obtain the pressure in the Irving-Kirkwood limit using the commonly employed grid based averaging techniques. More importantly, below ℓ ≈ 3 in LJ reduced units, the PDFs depart from Gaussian statistics, and for ℓ = 1.0, a double peaked PDF is observed in the MOP but not VA pressure distributions. This departure from a Gaussian shape means that the average pressure is not the most representative or common value to arise. In addition to contributing to our understanding of local pressure formulas, this work shows a clear lower limit on the validity of simply taking the average value when coarse graining pressure from molecular (and colloidal) systems.
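
    The VA route can be sketched for the coarse-graining step alone: given per-particle pressure contributions (randomized placeholders here, not a real LJ force loop), sum them inside cubic subvolumes of edge ℓ and watch the spread of the local pressure grow as ℓ shrinks.

      import numpy as np

      # Sketch of the volume-averaging (VA) route only: assign each particle
      # a scalar pressure contribution (kinetic + virial; randomized
      # placeholder here), sum contributions inside cubic subvolumes of edge
      # l, and build the PDF of the local pressure as l shrinks. Not the MOP
      # route, and not a real LJ force loop.
      rng = np.random.default_rng(0)
      L = 20.0                                  # box edge (LJ reduced units)
      pos = rng.uniform(0, L, size=(8000, 3))   # particle positions
      contrib = rng.normal(1.0, 0.5, size=8000) # per-particle contributions

      for l in (5.0, 2.0, 1.0):
          nbins = int(L / l)
          idx = np.clip(np.floor(pos / l).astype(int), 0, nbins - 1)
          flat = np.ravel_multi_index(idx.T, (nbins,) * 3)
          local_p = np.bincount(flat, weights=contrib, minlength=nbins**3) / l**3
          print(f"l={l}: mean={local_p.mean():.3f}, std={local_p.std():.3f}")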

  3. Towards the Irving-Kirkwood limit of the mechanical stress tensor.

    PubMed

    Smith, E R; Heyes, D M; Dini, D

    2017-06-14

    The probability density functions (PDFs) of the local measure of pressure as a function of the sampling volume are computed for a model Lennard-Jones (LJ) fluid using the Method of Planes (MOP) and Volume Averaging (VA) techniques. This builds on the study of Heyes, Dini, and Smith [J. Chem. Phys. 145, 104504 (2016)] which only considered the VA method for larger subvolumes. The focus here is typically on much smaller subvolumes than considered previously, which tend to the Irving-Kirkwood limit where the pressure tensor is defined at a point. The PDFs from the MOP and VA routes are compared for cubic subvolumes, V = ℓ³. Using very high grid-resolution and box-counting analysis, we also show that any measurement of pressure in a molecular system will fail to exactly capture the molecular configuration. This suggests that it is impossible to obtain the pressure in the Irving-Kirkwood limit using the commonly employed grid based averaging techniques. More importantly, below ℓ ≈ 3 in LJ reduced units, the PDFs depart from Gaussian statistics, and for ℓ = 1.0, a double peaked PDF is observed in the MOP but not VA pressure distributions. This departure from a Gaussian shape means that the average pressure is not the most representative or common value to arise. In addition to contributing to our understanding of local pressure formulas, this work shows a clear lower limit on the validity of simply taking the average value when coarse graining pressure from molecular (and colloidal) systems.

  4. Large area 3-D optical coherence tomography imaging of lumpectomy specimens for radiation treatment planning

    NASA Astrophysics Data System (ADS)

    Wang, Cuihuan; Kim, Leonard; Barnard, Nicola; Khan, Atif; Pierce, Mark C.

    2016-02-01

    Our long term goal is to develop a high-resolution imaging method for comprehensive assessment of tissue removed during lumpectomy procedures. By identifying regions of high-grade disease within the excised specimen, we aim to develop patient-specific post-operative radiation treatment regimens. We have assembled a benchtop spectral-domain optical coherence tomography (SD-OCT) system with 1320 nm center wavelength. Automated beam scanning enables "sub-volumes" spanning 5 mm x 5 mm x 2 mm (500 A-lines x 500 B-scans x 2 mm in depth) to be collected in under 15 seconds. A motorized sample positioning stage enables multiple sub-volumes to be acquired across an entire tissue specimen. Sub-volumes are rendered from individual B-scans in 3D Slicer software and en face (XY) images are extracted at specific depths. These images are then tiled together using MosaicJ software to produce a large-area en face view (up to 40 mm x 25 mm). After OCT imaging, specimens were sectioned and stained with H&E, allowing comparison between OCT image features and disease markers on histopathology. This manuscript describes the technical aspects of image acquisition and reconstruction, and reports an initial qualitative comparison between large-area en face OCT images and H&E-stained tissue sections. Future goals include developing image reconstruction algorithms for mapping an entire sample, and registering OCT image volumes with clinical CT and MRI images for post-operative treatment planning.

  5. Spatial resolution and measurement uncertainty of strains in bone and bone-cement interface using digital volume correlation.

    PubMed

    Zhu, Ming-Liang; Zhang, Qing-Hang; Lupton, Colin; Tong, Jie

    2016-04-01

    The measurement uncertainty of strains has been assessed in a bone analogue (sawbone), bovine trabecular bone and bone-cement interface specimens under zero load using the Digital Volume Correlation (DVC) method. The effects of sub-volume size, sample constraint and preload on the measured strain uncertainty have been examined. There is generally a trade-off between the measurement uncertainty and the spatial resolution. Suitable sub-volume sizes have been selected based on a compromise between the measurement uncertainty and the spatial resolution for the cases considered. A ratio of sub-volume size to a microstructural characteristic (Tb.Sp) was introduced to reflect a suitable spatial resolution, and the associated measurement uncertainty was assessed. Specifically, ratios between 1.6 and 4 appear to give rise to standard deviations in the measured strains between 166 and 620 με in all the cases considered, which would seem to suffice for strain analysis in pre- as well as post-yield loading regimes. A microscale finite element (μFE) model was built from the CT images of the sawbone, and the results from the μFE model and a continuum FE model were compared with those from the DVC. The strain results were found to differ significantly between the two methods at the tissue level, consistent in trend with results found in human bones, indicating mainly a limitation of the current DVC method in mapping strains at this level.

  6. Towards the Irving-Kirkwood limit of the mechanical stress tensor

    PubMed Central

    Heyes, D. M.; Dini, D.

    2017-01-01

    The probability density functions (PDFs) of the local measure of pressure as a function of the sampling volume are computed for a model Lennard-Jones (LJ) fluid using the Method of Planes (MOP) and Volume Averaging (VA) techniques. This builds on the study of Heyes, Dini, and Smith [J. Chem. Phys. 145, 104504 (2016)] which only considered the VA method for larger subvolumes. The focus here is typically on much smaller subvolumes than considered previously, which tend to the Irving-Kirkwood limit where the pressure tensor is defined at a point. The PDFs from the MOP and VA routes are compared for cubic subvolumes, V = ℓ³. Using very high grid-resolution and box-counting analysis, we also show that any measurement of pressure in a molecular system will fail to exactly capture the molecular configuration. This suggests that it is impossible to obtain the pressure in the Irving-Kirkwood limit using the commonly employed grid based averaging techniques. More importantly, below ℓ ≈ 3 in LJ reduced units, the PDFs depart from Gaussian statistics, and for ℓ = 1.0, a double peaked PDF is observed in the MOP but not VA pressure distributions. This departure from a Gaussian shape means that the average pressure is not the most representative or common value to arise. In addition to contributing to our understanding of local pressure formulas, this work shows a clear lower limit on the validity of simply taking the average value when coarse graining pressure from molecular (and colloidal) systems. PMID:29166053

  7. Identification of interfaces involved in weak interactions with application to F-actin-aldolase rafts.

    PubMed

    Hu, Guiqing; Taylor, Dianne W; Liu, Jun; Taylor, Kenneth A

    2018-03-01

    Macromolecular interactions occur with widely varying affinities. Strong interactions form well-defined interfaces, but weak interactions are more dynamic and variable. Weak interactions can collectively lead to large structures such as microvilli via cooperativity and are often the precursors of much stronger interactions, e.g. the initial actin-myosin interaction during muscle contraction. Electron tomography combined with subvolume alignment and classification is an ideal method for the study of weak interactions because a 3-D image is obtained for the individual interactions, which are subsequently characterized collectively. Here we describe a method to characterize heterogeneous F-actin-aldolase interactions in 2-D rafts using electron tomography. By forming separate averages of the two constituents and fitting an atomic structure to each average, together with the alignment information that relates the raw motif to the average, an atomic model of each crosslink is determined and a frequency map of contact residues is computed. The approach should be applicable to any large structure composed of constituents that interact weakly and heterogeneously.

  8. ClF3 Chlorine trifluoride

    NASA Astrophysics Data System (ADS)

    Vogt, J.

    This document is part of Part 3 of Subvolume D `Asymmetric Top Molecules' of Volume 29 `Molecular Constants Mostly from Microwave, Molecular Beam, and Sub-Doppler Laser Spectroscopy' of Landolt-Börnstein - Group II `Molecules and Radicals'.

  9. KMo12S14

    NASA Astrophysics Data System (ADS)

    Villars, P.; Cenzual, K.; Daams, J.; Gladyshevskii, R.; Shcherban, O.; Dubenskyy, V.; Melnichenko-Koblyuk, N.; Pavlyuk, O.; Savysyuk, I.; Stoyko, S.; Sysa, L.

    This document is part of Subvolume A6 `Structure Types. Part 6: Space Groups (166) R-3m - (160) R3m' of Volume 43 `Crystal Structures of Inorganic Compounds' of Landolt-Börnstein - Group III `Condensed Matter'.

  10. NaGa3Te5

    NASA Astrophysics Data System (ADS)

    Villars, P.; Cenzual, K.; Daams, J.; Gladyshevskii, R.; Shcherban, O.; Dubenskyy, V.; Kuprysyuk, V.; Savysyuk, I.

    This document is part of Subvolume A8 `Structure Types. Part 8: Space Groups (156) P3m1 - (148) R-3' of Volume 43 `Crystal Structures of Inorganic Compounds' of Landolt-Börnstein - Group III `Condensed Matter'.

  11. SU-F-J-218: Predicting Radiation-Induced Xerostomia by Dosimetrically Accounting for Daily Setup Uncertainty During Head and Neck IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, S; Quon, H; McNutt, T

    2016-06-15

    Purpose: To determine if accumulated parotid dosimetry, using planning-CT-to-daily-CBCT deformation and dose re-calculation, can predict radiation-induced xerostomia. Methods: To track and dosimetrically account for the effects of anatomical changes on the parotid glands, we propagated physicians' contours from the planning CT to daily CBCT using deformable registration with iterative CBCT intensity correction. A surface mesh for each OAR was created, with the deformation applied to the mesh to obtain the deformed parotid volumes. Daily dose was computed on the deformed CT and accumulated to the last fraction. For both the accumulated and the planned parotid dosimetry, we tested the prediction power of different dosimetric parameters (D90, D50, D10, mean, standard deviation, and min/max dose to the combined parotids) and patient age for severe xerostomia (NCI-CTCAE grade ≥ 2 at 6-month follow-up). We also tested the dosimetry of parotid sub-volumes. Three classification algorithms (random tree, support vector machine, and logistic regression) were tested to predict severe xerostomia using a leave-one-out validation approach. Results: We tested our prediction model on 35 HN IMRT cases. Parameters from the accumulated dosimetry model demonstrated 89% accuracy for predicting severe xerostomia. Compared to the planning dosimetry, the accumulated dose consistently demonstrated higher prediction power with all three classification algorithms, including 11%, 5% and 30% higher accuracy, sensitivity and specificity, respectively. Geometric division of the combined parotid glands into superior-inferior regions demonstrated ∼5% higher accuracy than the whole volume. The most influential ranked features included age, the mean accumulated dose of the submandibular glands and the accumulated D90 of the superior parotid glands. Conclusion: We demonstrated that accumulated parotid dosimetry using CT-CBCT registration and dose re-calculation more accurately predicts severe xerostomia, and that the superior portion of the parotid glands may be particularly important in predicting severe xerostomia. This work was supported in part by NIH/NCI under grant R42CA137886 and in part by Toshiba big data research project funds.
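
    The validation scheme (leave-one-out evaluation of three classifiers on per-patient features) can be sketched with scikit-learn. The feature matrix and labels below are random placeholders standing in for the dosimetric parameters and xerostomia outcomes, and a decision tree stands in for the paper's "random tree".

      import numpy as np
      from sklearn.model_selection import LeaveOneOut, cross_val_score
      from sklearn.linear_model import LogisticRegression
      from sklearn.svm import SVC
      from sklearn.tree import DecisionTreeClassifier

      # Sketch of the validation scheme: leave-one-out evaluation of three
      # classifiers on per-patient dosimetric features. Random placeholders
      # stand in for D90/D50/D10, mean, SD, min/max dose and age; labels
      # stand in for grade >= 2 xerostomia.
      rng = np.random.default_rng(1)
      X = rng.normal(size=(35, 8))         # 35 patients x 8 features
      y = rng.integers(0, 2, size=35)      # severe xerostomia yes/no

      for clf in (DecisionTreeClassifier(), SVC(), LogisticRegression(max_iter=1000)):
          acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
          print(type(clf).__name__, f"LOO accuracy: {acc:.2f}")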

  12. Pd3MnH0.61

    NASA Astrophysics Data System (ADS)

    Villars, P.; Cenzual, K.; Daams, J.; Gladyshevskii, R.; Shcherban, O.; Dubenskyy, V.; Kuprysyuk, V.; Savysyuk, I.; Zaremba, R.

    This document is part of Subvolume A10 'Structure Types. Part 10: Space Groups (140) I4/mcm - (136) P42/mnm' of Volume 43 'Crystal Structures of Inorganic Compounds' of Landolt-Börnstein - Group III 'Condensed Matter'.

  13. KMo4O6 form II

    NASA Astrophysics Data System (ADS)

    Villars, P.; Cenzual, K.; Gladyshevskii, R.; Shcherban, O.; Dubenskyy, V.; Kuprysyuk, V.; Savysyuk, I.; Zaremba, R.

    This document is part of Subvolume A11 'Structure Types. Part 11: Space Groups (135) P42/mbc - (123) P4/mmm' of Volume 43 'Crystal Structures of Inorganic Compounds' of Landolt-Börnstein - Group III 'Condensed Matter'.

  14. (H3O)2TiF6

    NASA Astrophysics Data System (ADS)

    Villars, P.; Cenzual, K.; Daams, J.; Gladyshevskii, R.; Shcherban, O.; Dubenskyy, V.; Kuprysyuk, V.; Savysyuk, I.

    This document is part of Subvolume A8 `Structure Types. Part 8: Space Groups (156) P3m1 - (148) R-3' of Volume 43 `Crystal Structures of Inorganic Compounds' of Landolt-Börnstein - Group III `Condensed Matter'.

  15. Second virial coefficient of starch

    NASA Astrophysics Data System (ADS)

    Wohlfarth, Ch.

    This document is part of Subvolume D2 `Polymer Solutions - Physical Properties and their Relations I (Thermodynamic Properties: PVT-Data and Miscellaneous Properties of Polymer Solutions)' of Volume 6 `Polymers' of Landolt-Börnstein - Group VIII `Advanced Materials and Technologies'.

  16. 17.9.3 Radical cations of diazo compounds

    NASA Astrophysics Data System (ADS)

    Davies, A. G.

    This document is part of Subvolume E2 `Phosphorus-Centered Radicals, Radicals Centered on Other Heteroatoms, Organic Radical Ions' of Volume 26 `Magnetic Properties of Free Radicals' of Landolt-Börnstein Group II `Molecules and Radicals'.

  17. Multiaxial Cyclic Thermoplasticity Analysis with Besseling's Subvolume Method

    NASA Technical Reports Server (NTRS)

    Mcknight, R. L.

    1983-01-01

    A modification was formulated to Besseling's Subvolume Method to allow it to use multilinear stress-strain curves which are temperature dependent to perform cyclic thermoplasticity analyses. This method automatically reproduces certain aspects of real material behavior important in the analysis of Aircraft Gas Turbine Engine (AGTE) components. These include the Bauschinger effect, cross-hardening, and memory. This constitutive equation was implemented in a finite element computer program called CYANIDE. Subsequently, classical time-dependent plasticity (creep) was added to the program. Since its inception, this program has been assessed against laboratory and component testing and engine experience. The ability of this program to simulate AGTE material response characteristics was verified by this experience, and its utility in providing data for life analyses was demonstrated. In this area of life analysis, the multiaxial thermoplasticity capabilities of the method have proved a match for actual AGTE life experience.
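
    The subvolume idea behind Besseling's method can be sketched in one dimension: several elastic-perfectly-plastic subvolumes share the total strain, and their weighted stresses sum to a multilinear curve that naturally reproduces kinematic hardening and the Bauschinger effect on load reversal. This sketch omits temperature dependence and creep; the modulus, fractions and yield stresses are assumptions.

      import numpy as np

      # 1-D sketch of the subvolume (overlay) idea behind Besseling's method:
      # elastic-perfectly-plastic subvolumes share the total strain; their
      # weighted stresses sum to a multilinear curve showing kinematic
      # hardening and the Bauschinger effect on reversal. Values illustrative.
      E = 200e9
      weights = np.array([0.5, 0.3, 0.2])            # subvolume fractions
      yield_stress = np.array([200e6, 300e6, 450e6])
      plastic = np.zeros(3)                          # plastic strain per subvolume

      def step(strain):
          """Return total stress for a given total strain (updates plastic state)."""
          trial = E * (strain - plastic)
          over = np.abs(trial) - yield_stress
          yielding = over > 0
          # return each yielding subvolume to its yield surface
          plastic[yielding] += np.sign(trial[yielding]) * over[yielding] / E
          stress_sub = np.clip(E * (strain - plastic), -yield_stress, yield_stress)
          return np.sum(weights * stress_sub)

      # load up, then reverse: yielding begins earlier on reversal (Bauschinger)
      history = np.concatenate([np.linspace(0, 0.004, 50), np.linspace(0.004, -0.004, 100)])
      stress = [step(e) for e in history]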

  18. Automatic partitioning of head CTA for enabling segmentation

    NASA Astrophysics Data System (ADS)

    Suryanarayanan, Srikanth; Mullick, Rakesh; Mallya, Yogish; Kamath, Vidya; Nagaraj, Nithin

    2004-05-01

    Radiologists perform a CT Angiography procedure to examine vascular structures and associated pathologies such as aneurysms. Volume rendering is used to exploit the volumetric capabilities of CT, providing complete interactive 3-D visualization. However, bone forms an occluding structure and must be segmented out. The anatomical complexity of the head creates a major challenge in the segmentation of bone and vessel. An analysis of the head volume reveals varying spatial relationships between vessel and bone that can be separated into three sub-volumes: "proximal", "middle", and "distal". The "proximal" and "distal" sub-volumes contain good spatial separation between bone and vessel (carotid referenced here). Bone and vessel appear contiguous in the "middle" partition, which remains the most challenging region for segmentation. The partition algorithm is used to automatically identify these partition locations so that different segmentation methods can be developed for each sub-volume. The partition locations are computed using bone, image entropy, and sinus profiles along with a rule-based method. The algorithm is validated on 21 cases (varying volume sizes, resolution, clinical sites, pathologies) using ground truth identified visually. The algorithm is also computationally efficient, processing a 500+ slice volume in 6 seconds (0.01 seconds per slice), which makes it an attractive algorithm for pre-processing large volumes. The partition algorithm is integrated into the segmentation workflow. Fast and simple algorithms are implemented for processing the "proximal" and "distal" partitions. Complex methods are restricted to only the "middle" partition. The partition-enabled segmentation has been successfully tested and results are shown from multiple cases.

  19. C2H4F2 1,2-Difluoroethane

    NASA Astrophysics Data System (ADS)

    Demaison, J.

    This document is part of Part 1 of Subvolume D 'Asymmetric Top Molecules' of Volume 29 'Molecular Constants Mostly from Microwave, Molecular Beam, and Sub-Doppler Laser Spectroscopy' of Landolt-Börnstein - Group II 'Molecules and Radicals'.

  20. NaSi4O8(OH)•4H2O

    NASA Astrophysics Data System (ADS)

    Villars, P.; Cenzual, K.; Daams, J.; Gladyshevskii, R.; Shcherban, O.; Dubenskyy, V.; Kuprysyuk, V.; Savysyuk, I.

    This document is part of Subvolume A9 `Structure Types. Part 9: Space Groups (148) R-3 - (141) I41' of Volume 43 `Crystal Structures of Inorganic Compounds' of Landolt-Börnstein - Group III `Condensed Matter'.

  1. Second virial coefficient of hydroxypropyl starch

    NASA Astrophysics Data System (ADS)

    Wohlfarth, Ch.

    This document is part of Subvolume D2 `Polymer Solutions - Physical Properties and their Relations I (Thermodynamic Properties: PVT-Data and Miscellaneous Properties of Polymer Solutions)' of Volume 6 `Polymers' of Landolt-Börnstein - Group VIII `Advanced Materials and Technologies'.

  2. WE-AB-202-02: Incorporating Regional Ventilation Function in Predicting Radiation Fibrosis After Concurrent Chemoradiotherapy for Lung Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lan, F; Jeudy, J; Tseng, H

    Purpose: To investigate the incorporation of pre-therapy regional ventilation function in predicting radiation fibrosis (RF) in stage III non-small-cell lung cancer (NSCLC) patients treated with concurrent thoracic chemoradiotherapy. Methods: 37 stage III NSCLC patients were retrospectively studied. Patients received one cycle of cisplatin-gemcitabine, followed by two to three cycles of cisplatin-etoposide concurrently with involved-field thoracic radiotherapy between 46 and 66 Gy (2 Gy per fraction). Pre-therapy regional ventilation images of the lung were derived from 4DCT via a density-change-based image registration algorithm with mass correction. RF was evaluated at 6 months post-treatment using radiographic scoring based on airway dilation and volume loss. Three types of ipsilateral lung metrics were studied: (1) conventional dose-volume metrics (V20, V30, V40, and mean lung dose (MLD)); (2) dose-function metrics (fV20, fV30, fV40, and functional mean lung dose (fMLD), generated by combining regional ventilation and dose); and (3) dose-subvolume metrics (sV20, sV30, sV40, and subvolume mean lung dose (sMLD), defined as the dose-volume metrics computed on the sub-volume of the lung with at least 60% of the quantified maximum ventilation). Receiver operating characteristic (ROC) curve analysis and logistic regression analysis were used to evaluate the predictability of these metrics for RF. Results: In predicting airway dilation, the area under the ROC curve (AUC) values for (V20, MLD), (fV20, fMLD), and (sV20, sMLD) were (0.76, 0.70), (0.80, 0.74) and (0.82, 0.80), respectively. The logistic regression p-values were (0.09, 0.18), (0.02, 0.05) and (0.004, 0.006), respectively. With regard to volume loss, the corresponding AUC values for these metrics were (0.66, 0.57), (0.67, 0.61) and (0.71, 0.69), and the p-values were (0.95, 0.90), (0.43, 0.64) and (0.08, 0.12), respectively. Conclusion: The inclusion of regional ventilation function improved the predictability of radiation fibrosis. Dose-subvolume metrics provide a promising method for incorporating functional information into conventional dose-volume parameters for outcome assessment.
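
    The dose-subvolume metrics reduce to a simple masking operation once the dose and ventilation maps are registered. A minimal sketch, with random arrays standing in for the registered 3-D maps and the lung mask:

      import numpy as np

      # Sketch of the dose-subvolume metrics: restrict the dose-volume
      # computation to the lung subvolume with ventilation >= 60% of the
      # quantified maximum, then compute sV20 and the subvolume mean lung
      # dose (sMLD). Arrays are random placeholders for registered 3-D maps.
      rng = np.random.default_rng(2)
      dose = rng.uniform(0, 66, size=(50, 128, 128))        # dose [Gy]
      ventilation = rng.uniform(0, 1, size=(50, 128, 128))  # regional ventilation
      lung_mask = np.ones(dose.shape, dtype=bool)           # placeholder lung mask

      sub = lung_mask & (ventilation >= 0.6 * ventilation[lung_mask].max())
      sV20 = 100.0 * np.mean(dose[sub] >= 20.0)   # % of subvolume receiving >= 20 Gy
      sMLD = dose[sub].mean()                     # subvolume mean lung dose [Gy]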

  3. Nuclear magnetic resonance data of C36H30Br2OSb2

    NASA Astrophysics Data System (ADS)

    Mikhova, B. M.

    This document is part of Part 6 `Organic Metalloid Compounds' of Subvolume D 'Chemical Shifts and Coupling Constants for Carbon-13' of Landolt-Börnstein III/35 'Nuclear Magnetic Resonance Data', Group III 'Condensed Matter'.

  4. Nuclear magnetic resonance data of C36H30Cl2OSb2

    NASA Astrophysics Data System (ADS)

    Mikhova, B. M.

    This document is part of Part 6 `Organic Metalloid Compounds' of Subvolume D 'Chemical Shifts and Coupling Constants for Carbon-13' of Landolt-Börnstein III/35 'Nuclear Magnetic Resonance Data', Group III 'Condensed Matter'.

  5. Rb3TlBr6•1.14H2O

    NASA Astrophysics Data System (ADS)

    Villars, P.; Cenzual, K.; Daams, J.; Gladyshevskii, R.; Shcherban, O.; Dubenskyy, V.; Kuprysyuk, V.; Savysyuk, I.; Zaremba, R.

    This document is part of Subvolume A10 'Structure Types. Part 10: Space Groups (140) I4/mcm - (136) P42/mnm' of Volume 43 'Crystal Structures of Inorganic Compounds' of Landolt-Börnstein - Group III 'Condensed Matter'.

  6. NaGa[TeO3]2[H2O]2.4

    NASA Astrophysics Data System (ADS)

    Villars, P.; Cenzual, K.; Daams, J.; Gladyshevskii, R.; Shcherban, O.; Dubenskyy, V.; Melnichenko-Koblyuk, N.; Pavlyuk, O.; Savysyuk, I.; Stoyko, S.; Sysa, L.

    This document is part of Subvolume A6 `Structure Types. Part 6: Space Groups (166) R-3m - (160) R3m' of Volume 43 `Crystal Structures of Inorganic Compounds' of Landolt-Börnstein - Group III `Condensed Matter'.

  7. Constitutive law for seismicity rate based on rate and state friction: Dieterich 1994 revisited.

    NASA Astrophysics Data System (ADS)

    Heimisson, E. R.; Segall, P.

    2017-12-01

    Dieterich [1994] derived a constitutive law for seismicity rate based on rate and state friction, which has been applied widely to aftershocks, earthquake triggering, and induced seismicity in various geological settings. Here, this influential work is revisited and re-derived in a more straightforward manner. By virtue of this new derivation the model is generalized to include changes in effective normal stress associated with background seismicity. Furthermore, the general case when seismicity rate is not constant under constant stressing rate is formulated. The new derivation provides directly practical integral expressions for the cumulative number of events and rate of seismicity for arbitrary stressing history. Arguably, the most prominent limitation of Dieterich's 1994 theory is the assumption that seismic sources do not interact. Here we derive a constitutive relationship that considers source interactions between sub-volumes of the crust, where the stress in each sub-volume is assumed constant. Interactions are considered both under constant stressing rate conditions and for arbitrary stressing history. This theory can be used to model seismicity rate due to stress changes or to estimate stress changes using observed seismicity from triggered earthquake swarms, where earthquake interactions and magnitudes are taken into account. We identify special conditions under which the influence of interactions cancels and the predictions reduce to those of Dieterich 1994. This remarkable result may explain the apparent success of the model when applied to observations of triggered seismicity. This approach has application to understanding and modeling induced and triggered seismicity, and the quantitative interpretation of geodetic and seismic data. It enables simultaneous modeling of geodetic and seismic data in a self-consistent framework. To date, physics-based modeling of seismicity with or without geodetic data has been found to give insight into various processes related to aftershocks, VT and injection-induced seismicity. However, the role of various processes such as earthquake interactions, magnitudes and effective normal stress has been unclear. The new theory presented resolves some of the pertinent issues raised in the literature with application of the Dieterich 1994 model.
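
    The non-interacting Dieterich (1994) baseline that this work generalizes can be sketched by integrating the state-variable evolution directly: dγ/dt = (1 − γṠ)/(aσ), with seismicity rate R = r/(γṠ₀), and a step stress change ΔS mapping γ to γ·exp(−ΔS/(aσ)). Parameter values below are illustrative assumptions.

      import numpy as np

      # Sketch of the non-interacting Dieterich (1994) law: gamma evolves as
      # d(gamma)/dt = (1 - gamma*Sdot)/(a*sigma); R = r/(gamma*Sdot_ref).
      # A sudden stress step Delta_S maps gamma -> gamma*exp(-Delta_S/(a*sigma)).
      a_sigma = 0.1e6            # a*sigma [Pa] (assumed)
      sdot_ref = 1e3 / 3.15e7    # background stressing rate [Pa/s] (~1 kPa/yr)
      r = 1.0                    # background seismicity rate

      gamma = 1.0 / sdot_ref                  # steady state: R = r
      gamma *= np.exp(-0.1e6 / a_sigma)       # 0.1 MPa stress step (mainshock)

      dt, days = 3600.0, 200
      rate = []
      for _ in range(int(days * 86400 / dt)):
          rate.append(r / (gamma * sdot_ref))
          gamma += dt * (1.0 - gamma * sdot_ref) / a_sigma   # Euler update
      # 'rate' jumps by exp(Delta_S/(a*sigma)) and decays back toward r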

  8. FDG-PET-based differential uptake volume histograms: a possible approach towards definition of biological target volumes.

    PubMed

    Devic, Slobodan; Mohammed, Huriyyah; Tomic, Nada; Aldelaijan, Saad; De Blois, François; Seuntjens, Jan; Lehnert, Shirley; Faria, Sergio

    2016-06-01

    Integration of fluorine-18 fludeoxyglucose ((18)F-FDG)-positron emission tomography (PET) functional data into conventional anatomically based gross tumour volume delineation may lead to optimization of dose to biological target volumes (BTV) in radiotherapy. We describe a method for defining tumour subvolumes using (18)F-FDG-PET data, based on the decomposition of differential uptake volume histograms (dUVHs). For 27 patients with histopathologically proven non-small-cell lung carcinoma (NSCLC), background uptake values were sampled within the healthy lung contralateral to a tumour in those image slices containing tumour and then scaled by the ratio of mass densities between the healthy lung and tumour. Signal-to-background (S/B) uptake values within volumes of interest encompassing the tumour were used to reconstruct the dUVHs. These were subsequently decomposed into the minimum number of analytical functions (in the form of differential uptake values as a function of S/B) that yielded acceptable net fits, as assessed by χ² values. Six subvolumes consistently emerged from the fitted dUVHs over the sampled volume of interest on PET images. Based on the assumption that each function used to decompose the dUVH corresponds to a single subvolume, the intersection between two adjacent functions can be interpreted as a threshold value that differentiates them. Assuming that the first two subvolumes spread over the tumour boundary, we concentrated on the four subvolumes with the highest uptake values; their S/B thresholds [mean ± standard deviation (SD)] were 2.88 ± 0.98, 4.05 ± 1.55, 5.48 ± 2.06 and 7.34 ± 2.89 for adenocarcinoma, 3.01 ± 0.71, 4.40 ± 0.91, 5.99 ± 1.31 and 8.17 ± 2.42 for large-cell carcinoma and 4.54 ± 2.11, 6.46 ± 2.43, 8.87 ± 5.37 and 12.11 ± 7.28 for squamous cell carcinoma, respectively. (18)F-FDG-based PET data may potentially be used to identify BTV within the tumour in patients with NSCLC. Using one-way analysis of variance, we found significant differences in all threshold levels among adenocarcinoma, large-cell carcinoma and squamous cell carcinoma. On the other hand, the observed significant variability in threshold values throughout the patient cohort (expressed as large SDs) can be explained as a consequence of differences in the physiological status of the tumour volume for each patient at the time of the PET/CT scan. This further suggests that patient-specific threshold values for the definition of BTVs could be determined by creating and curve-fitting dUVHs on a patient-by-patient basis. The method of (18)F-FDG-PET-based dUVH decomposition described in this work may lead to BTV segmentation in tumours.
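
    The decomposition step can be sketched with standard curve fitting: model the dUVH as a sum of Gaussians, fit, and take the intersection of adjacent components as the S/B threshold between two subvolumes. The histogram below is synthetic, and two components stand in for the six reported.

      import numpy as np
      from scipy.optimize import curve_fit, brentq

      # Sketch of the dUVH decomposition: fit a sum of Gaussians, then take
      # the intersection of adjacent components as the S/B threshold. The
      # histogram is synthetic, not patient data.
      def gauss(x, a, mu, s):
          return a * np.exp(-0.5 * ((x - mu) / s) ** 2)

      def two_gauss(x, a1, m1, s1, a2, m2, s2):
          return gauss(x, a1, m1, s1) + gauss(x, a2, m2, s2)

      x = np.linspace(1, 12, 120)                  # S/B uptake axis
      y = two_gauss(x, 1.0, 3.0, 0.8, 0.6, 6.5, 1.2)
      y += np.random.default_rng(3).normal(0, 0.01, x.size)

      popt, _ = curve_fit(two_gauss, x, y, p0=[1, 3, 1, 0.5, 6, 1])
      a1, m1, s1, a2, m2, s2 = popt

      # threshold = S/B where the two fitted components intersect
      thr = brentq(lambda t: gauss(t, a1, m1, s1) - gauss(t, a2, m2, s2), m1, m2)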

  9. C2H4ArF2 1,2-Difluoroethane - argon (1/1)

    NASA Astrophysics Data System (ADS)

    Demaison, J.

    This document is part of Part 1 of Subvolume D 'Asymmetric Top Molecules' of Volume 29 'Molecular Constants Mostly from Microwave, Molecular Beam, and Sub-Doppler Laser Spectroscopy' of Landolt-Börnstein - Group II 'Molecules and Radicals'.

  10. C2H4ArF2 1,1-Difluoroethane - argon (1/1)

    NASA Astrophysics Data System (ADS)

    Demaison, J.

    This document is part of Part 1 of Subvolume D 'Asymmetric Top Molecules' of Volume 29 'Molecular Constants Mostly from Microwave, Molecular Beam, and Sub-Doppler Laser Spectroscopy' of Landolt-Börnstein - Group II 'Molecules and Radicals'.

  11. Stochastic Order Redshift Technique (SORT): a simple, efficient and robust method to improve cosmological redshift measurements

    NASA Astrophysics Data System (ADS)

    Tejos, Nicolas; Rodríguez-Puebla, Aldo; Primack, Joel R.

    2018-01-01

    We present a simple, efficient and robust approach to improve cosmological redshift measurements. The method is based on the presence of a reference sample for which a precise redshift number distribution (dN/dz) can be obtained for different pencil-beam-like sub-volumes within the original survey. For each sub-volume we then impose that: (i) the redshift number distribution of the uncertain redshift measurements matches the reference dN/dz corrected by their selection functions and (ii) the rank order in redshift of the original ensemble of uncertain measurements is preserved. The latter step is motivated by the fact that random variables drawn from Gaussian probability density functions (PDFs) of different means and arbitrarily large standard deviations satisfy stochastic ordering. We then repeat this simple algorithm for multiple arbitrary pencil-beam-like overlapping sub-volumes; in this manner, each uncertain measurement has multiple (non-independent) 'recovered' redshifts which can be used to estimate a new redshift PDF. We refer to this method as the Stochastic Order Redshift Technique (SORT). We have used a state-of-the-art N-body simulation to test the performance of SORT under simple assumptions and found that it can improve the quality of cosmological redshifts in a robust and efficient manner. Particularly, SORT redshifts (z_sort) are able to recover the distinctive features of the so-called 'cosmic web' and can provide unbiased measurement of the two-point correlation function on scales ≳ 4 h⁻¹ Mpc. Given its simplicity, we envision that a method like SORT can be incorporated into more sophisticated algorithms aimed to exploit the full potential of large extragalactic photometric surveys.
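
    One SORT pass for a single sub-volume is only a few lines: draw as many redshifts from the reference dN/dz as there are uncertain measurements, sort them, and assign them back in the original rank order. The sketch below uses synthetic Gaussian inputs and omits the selection-function correction and the averaging over overlapping sub-volumes.

      import numpy as np

      # Minimal sketch of one SORT pass for a single pencil-beam sub-volume:
      # draw N redshifts from the reference dN/dz, sort them, and assign them
      # to the N uncertain measurements in their original rank order, which
      # preserves stochastic ordering. Inputs are synthetic placeholders.
      rng = np.random.default_rng(4)
      z_ref = rng.normal(0.5, 0.1, size=100000)    # reference sample redshifts
      z_uncertain = rng.normal(0.5, 0.1, size=500) + rng.normal(0, 0.05, size=500)

      draws = np.sort(rng.choice(z_ref, size=z_uncertain.size, replace=True))
      order = np.argsort(z_uncertain)              # rank order of the noisy z's
      z_sort = np.empty_like(z_uncertain)
      z_sort[order] = draws                        # keep rank order, match dN/dz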

  12. PuS: Reflectivity

    NASA Astrophysics Data System (ADS)

    Troć, R.

    This document is part of Subvolume B6bβ `Actinide Monochalcogenides' of Volume 27 `Magnetic properties of non-metallic inorganic compounds based on transition elements' of Landolt-Börnstein - Group III `Condensed Matter'. The volume presents magnetic and related properties of monochalcogenides based on actinides and their solid solutions.

  13. 11.2 Solar Neutrinos

    NASA Astrophysics Data System (ADS)

    Nakahata, Masayuki

    This document is part of Subvolume A `Theory and Experiments' of Volume 21 `Elementary Particles' of Landolt-Börnstein - Group I `Elementary Particles, Nuclei and Atoms'. It contains Section `11.2 Solar Neutrinos' of Chapter `11 Experimental Results on Neutrino Masses and Mixings'.

  14. PuS: Thermoelectric Power

    NASA Astrophysics Data System (ADS)

    Troć, R.

    This document is part of Subvolume B6bβ `Actinide Monochalcogenides' of Volume 27 `Magnetic properties of non-metallic inorganic compounds based on transition elements' of Landolt-Börnstein - Group III `Condensed Matter'. The volume presents magnetic and related properties of monochalcogenides based on actinides and their solid solutions.

  15. A method for smoothing segmented lung boundary in chest CT images

    NASA Astrophysics Data System (ADS)

    Yim, Yeny; Hong, Helen

    2007-03-01

    To segment low-density lung regions in chest CT images, most methods use differences in the gray-level values of pixels. However, radiodense pulmonary vessels and pleural nodules that contact the surrounding anatomy are often excluded from the segmentation result. To smooth the lung boundary segmented by gray-level processing in chest CT images, we propose a new method using scan line search. Our method consists of three main steps. First, the lung boundary is extracted by our automatic segmentation method. Second, the segmented lung contour is smoothed in each axial CT slice. We propose a scan line search to track the points on the lung contour and efficiently find rapidly changing curvature. Finally, to provide a consistent appearance between lung contours in adjacent axial slices, 2D closing in the coronal plane is applied within a pre-defined subvolume. Our method has been evaluated in terms of visual inspection, accuracy and processing time. The results show that the smoothness of the lung contour was considerably increased by compensating for pulmonary vessels and pleural nodules.
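
    The inter-slice consistency step can be sketched directly: within the pre-defined subvolume, apply 2D morphological closing to each coronal plane so contours vary smoothly across adjacent axial slices. The mask and subvolume bounds below are placeholders, and the curvature-based scan-line smoothing itself is not reproduced.

      import numpy as np
      from scipy import ndimage

      # Sketch of the consistency step only: 2-D morphological closing in
      # coronal planes within a predefined subvolume, so lung contours vary
      # smoothly between adjacent axial slices. The mask is a placeholder.
      lung = np.zeros((60, 256, 256), dtype=bool)   # placeholder lung mask (z, y, x)
      lung[10:50, 60:200, 40:220] = True

      z0, z1, y0, y1, x0, x1 = 10, 50, 60, 200, 40, 220   # predefined subvolume
      structure = np.ones((7, 7), dtype=bool)
      sub = lung[z0:z1, y0:y1, x0:x1]
      for y in range(sub.shape[1]):                 # each coronal (z, x) plane
          sub[:, y, :] = ndimage.binary_closing(sub[:, y, :], structure=structure)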

  16. 14 CFR 1203.501 - Applying derivative classification markings.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... INFORMATION SECURITY PROGRAM Derivative Classification § 1203.501 Applying derivative classification markings... classification decisions: (b) Verify the information's current level of classification so far as practicable...

  17. 14 CFR 1203.501 - Applying derivative classification markings.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... INFORMATION SECURITY PROGRAM Derivative Classification § 1203.501 Applying derivative classification markings... classification decisions: (b) Verify the information's current level of classification so far as practicable...

  18. O-Pu-U (Oxygen-Plutonium-Uranium)

    NASA Astrophysics Data System (ADS)

    Materials Science International Team MSIT

    This document is part of Subvolume C4 'Non-Ferrous Metal Systems. Part 4: Selected Nuclear Materials and Engineering Systems' of Volume 11 'Ternary Alloy Systems - Phase Diagrams, Crystallographic and Thermodynamic Data critically evaluated by MSIT®' of Landolt-Börnstein - Group IV 'Physical Chemistry'. It provides data of the ternary system Oxygen-Plutonium-Uranium.

  19. Volume fractions of DCE-MRI parameter as early predictor of histologic response in soft tissue sarcoma: A feasibility study.

    PubMed

    Xia, Wei; Yan, Zhuangzhi; Gao, Xin

    2017-10-01

    To find early predictors of histologic response in soft tissue sarcoma through volume transfer constant (Ktrans) analysis based on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). Eleven patients with soft tissue sarcoma of the lower extremity who underwent preoperative chemoradiotherapy followed by limb salvage surgery were included in this retrospective study. For each patient, DCE-MRI data sets were collected before and two weeks after therapy initiation, and the histologic tumor cell necrosis rate (TCNR) was reported at surgery. The DCE-MRI volumes were aligned by registration. Then, the aligned volumes were used to obtain the Ktrans variation map. Accordingly, three sub-volumes (with increased, decreased or unchanged Ktrans) were defined and identified, and the fractions of these sub-volumes, denoted F+, F− and F0, respectively, were calculated. The predictive ability of the volume fractions was determined using the area under a receiver operating characteristic curve (AUC). Linear regression analysis was performed to investigate the relationship between TCNR and the volume fractions. In addition, the Ktrans values of the sub-volumes were compared. The AUC for F− (0.896) and F0 (0.833) were larger than those for the change of tumor longest diameter ΔD (0.625) and the change of mean Ktrans (0.792). Moreover, the regression results indicated that TCNR was directly proportional to F0 (R² = 0.75, P = 0.0003), while it was inversely proportional to F− (R² = 0.77, P = 0.0002). However, TCNR had a relatively weak linear relationship with the change of mean Ktrans (R² = 0.64, P = 0.0018). Additionally, TCNR did not have a linear relationship with ΔD (R² = 0.16, P = 0.1246). The volume fractions F− and F0 have potential as early predictors of soft tissue sarcoma histologic response.
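
    Once the pre- and mid-therapy Ktrans maps are registered, the volume fractions reduce to voxel-wise comparisons inside the tumour mask. A minimal sketch with placeholder maps; the tolerance defining "unchanged" is an assumption, as the paper's exact criterion is not restated here.

      import numpy as np

      # Sketch of the volume-fraction computation: compare registered pre-
      # and mid-therapy Ktrans maps voxel-wise inside the tumour mask, and
      # report fractions with increased, decreased or unchanged Ktrans.
      rng = np.random.default_rng(5)
      shape = (30, 64, 64)
      ktrans_pre = rng.uniform(0.0, 0.5, shape)    # placeholder maps [1/min]
      ktrans_mid = ktrans_pre + rng.normal(0, 0.05, shape)
      tumour = np.ones(shape, dtype=bool)          # placeholder tumour mask

      delta = (ktrans_mid - ktrans_pre)[tumour]
      tol = 0.02                                   # assumed "unchanged" band
      F_plus = np.mean(delta > tol)
      F_minus = np.mean(delta < -tol)
      F_zero = 1.0 - F_plus - F_minus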

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qiu, J; Zheng, X; Liu, H

    Purpose: To evaluate the feasibility of a simultaneously integrated boost (SIB) to the hypoxic subvolume (HTV) in nasopharyngeal carcinomas under the guidance of 18F-Fluoromisonidazole (FMISO) PET/CT using a novel non-uniform volumetric modulated arc therapy (VMAT) technique. Methods: Eight nasopharyngeal carcinoma patients treated with conventional uniform VMAT were retrospectively analyzed. For each treatment, an actual conventional uniform VMAT plan with two or more arcs (2–2.5 arcs, total rotation angle < 1000°) was designed with a dose boost to the hypoxic subvolume (total dose, 84 Gy) in the gross tumor volume (GTV) under the guidance of 18F-FMISO PET/CT. Based on the same dataset, experimental single-arc non-uniform VMAT plans were generated with the same dose prescription using customized software tools. Dosimetric parameters, quality assurance and the efficiency of treatment delivery were compared between the uniform and non-uniform VMAT plans. Results: To develop the non-uniform VMAT technique, a specific optimization model was successfully established. Both techniques generated high-quality plans with pass rates > 98% under the 3 mm, 3% criterion. The HTV received doses of 84.1 ± 0.75 Gy and 84.1 ± 1.2 Gy from the uniform and non-uniform VMAT plans, respectively. In terms of target coverage and dose homogeneity, there was no statistically significant difference between the actual and experimental plans for each case. However, for critical organs at risk (OAR), including the parotids, oral cavity and larynx, the dosimetric difference was significant, with better dose sparing from the experimental plans. Regarding plan implementation efficiency, the average machine time was 3.5 minutes for the actual VMAT plans and 3.7 minutes for the experimental non-uniform VMAT plans (p > 0.050). Conclusion: Compared to the conventional VMAT technique, the proposed non-uniform VMAT technique has the potential to produce efficient and safe treatment plans, especially in cases with complicated anatomical structures and demanding dose boosts to subvolumes.

  1. Potential implications of the bystander effect on TCP and EUD when considering target volume dose heterogeneity.

    PubMed

    Balderson, Michael J; Kirkby, Charles

    2015-01-01

    In light of in vitro evidence suggesting that radiation-induced bystander effects may enhance non-local cell killing, there is potential for impact on radiotherapy treatment planning paradigms such as the goal of delivering a uniform dose throughout the clinical target volume (CTV). This work applies a bystander effect model to calculate equivalent uniform dose (EUD) and tumor control probability (TCP) for external beam prostate treatment and compares the results with a more common model where local response is dictated exclusively by local absorbed dose. The broad assumptions applied in the bystander effect model are intended to place an upper limit on the extent of the results in a clinical context. EUD and TCP of a prostate cancer target volume under conditions of increasing dose heterogeneity were calculated using two models: one incorporating bystander effects derived from previously published in vitro bystander data (McMahon et al. 2012, 2013a); and one using a common linear-quadratic (LQ) response that relies exclusively on local absorbed dose. Dose through the CTV was modelled as a normal distribution, where the degree of heterogeneity was dictated by changing the standard deviation (SD). Also, a representative clinical dose distribution was examined as cold (low dose) sub-volumes were systematically introduced. The bystander model suggests a moderate degree of dose heterogeneity throughout a target volume will yield as good or better outcome compared to a uniform dose in terms of EUD and TCP. For a typical intermediate-risk prostate prescription of 78 Gy over 39 fractions, maxima in EUD and TCP as a function of increasing SD occurred at SD ∼ 5 Gy. The plots only dropped below the uniform-dose values for SD ∼ 10 Gy, almost 13% of the prescribed dose. Small, but potentially significant, differences in the outcome metrics between the models were identified in the clinically derived dose distribution as cold sub-volumes were introduced. In terms of EUD and TCP, the bystander model demonstrates the potential to deviate from the common local LQ model predictions as dose heterogeneity through a prostate CTV varies. The results suggest, at least in a limiting sense, the potential for allowing some degree of dose heterogeneity within a CTV, although further investigation of the assumptions of the bystander model is warranted.
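
    The local LQ comparison arm is straightforward to reproduce in outline: sample a normally distributed dose across the CTV, compute per-voxel survival over the fractionation scheme, and derive TCP and the EUD (the uniform dose yielding the same mean survival). The radiobiological parameters and clonogen number below are assumptions, and no bystander term is included.

      import numpy as np
      from scipy.optimize import brentq

      # Sketch of the local (absorbed-dose-only) LQ comparison arm: sample a
      # normally distributed dose across the CTV, compute per-voxel survival
      # over n fractions, then TCP and the EUD (uniform dose giving the same
      # mean survival). Radiobiological parameters are assumptions.
      alpha, beta = 0.15, 0.05     # [1/Gy], [1/Gy^2] (assumed)
      n_frac, D_rx = 39, 78.0      # prescription: 78 Gy in 39 fractions
      clonogens = 1e6              # assumed clonogen number

      def mean_survival(doses):
          d = doses / n_frac                                  # dose per fraction
          return np.mean(np.exp(-n_frac * (alpha * d + beta * d**2)))

      for sd in (0.0, 5.0, 10.0):
          doses = np.random.default_rng(6).normal(D_rx, sd, 100000)
          sf = mean_survival(doses)
          tcp = np.exp(-clonogens * sf)
          eud = brentq(lambda D: mean_survival(np.array([D])) - sf, 1.0, 150.0)
          print(f"SD={sd:4.1f} Gy  EUD={eud:.1f} Gy  TCP={tcp:.3f}")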

  2. Positron Emission Tomography for Pre-Clinical Sub-Volume Dose Escalation

    NASA Astrophysics Data System (ADS)

    Bass, Christopher Paul

    Purpose: This dissertation focuses on the establishment of pre-clinical methods facilitating the use of PET imaging for selective sub-volume dose escalation. Specifically, the problems addressed are 1.) the difficulties associated with comparing multiple PET images, 2.) the need for further validation of novel PET tracers before their implementation in dose escalation schema and 3.) the lack of concrete pre-clinical data supporting the use of PET images for guidance of selective sub-volume dose escalations. Methods and materials: In order to compare multiple PET images, the confounding effects of mispositioning and anatomical change between imaging sessions needed to be alleviated. To mitigate these sources of error, deformable image registration was employed. A deformable registration algorithm was selected and the registration error was evaluated via the introduction of external fiducials to the tumor. Once a method for image registration was established, a procedure for validating novel PET tracers against FDG was developed. Nude mice were used to perform in-vivo comparisons of the spatial distributions of two PET tracers, FDG and FLT. The spatial distributions were also compared across two separate tumor lines to determine the effects of tumor morphology on spatial distribution. Finally, the research establishes a method for acquiring pre-clinical data supporting the use of PET for image guidance in selective dose escalation. Nude mice were imaged using only FDG PET/CT and the resulting images were used to plan PET-guided dose escalations to a 5 mm sub-volume within the tumor that contained the highest PET tracer uptake. These plans were then delivered using the Small Animal Radiation Research Platform (SARRP) and the efficacy of the PET-guided plans was observed. Results and Conclusions: The analysis of deformable registration algorithms revealed that the BRAINSFit B-spline deformable registration algorithm available in 3D Slicer was capable of registering small-animal PET/CT data sets in less than 5 minutes with an average registration error of 0.3 mm. The methods used in Chapter 3 allowed for the comparison of the spatial distributions of multiple PET tracers imaged at different times. A comparison of FDG and FLT showed that the two are positively correlated but that tumor morphology significantly affects the correlation between them. An overlap analysis of the high-intensity PET regions of FDG and FLT showed that FLT offers spatial information additional to that seen with FDG. In Chapter 4, the SARRP allowed for the delivery of planned PET-guided selective dose escalations to a pre-clinical tumor model. This will facilitate future research validating the use of PET for clinical selective dose escalation.

  3. A Brownian dynamics study on ferrofluid colloidal dispersions using an iterative constraint method to satisfy Maxwell’s equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubina, Sean Hyun, E-mail: sdubin2@uic.edu; Wedgewood, Lewis Edward, E-mail: wedge@uic.edu

    2016-07-15

    Ferrofluids are often favored for their ability to be remotely positioned via external magnetic fields. The behavior of particles in ferromagnetic clusters under uniformly applied magnetic fields has been computationally simulated using the Brownian dynamics, Stokesian dynamics, and Monte Carlo methods. However, few methods have been established that effectively handle the basic principles of magnetic materials, namely, Maxwell’s equations. An iterative constraint method was developed to satisfy Maxwell’s equations when a uniform magnetic field is imposed on ferrofluids in a heterogeneous Brownian dynamics simulation that examines the impact of ferromagnetic clusters in a mesoscale particle collection. This was accomplished by allowing a particulate system in a simple shear flow to advance by a time step under a uniformly applied magnetic field, then adjusting the ferroparticles via an iterative constraint method applied over sub-volume length scales until Maxwell’s equations were satisfied. The resultant ferrofluid model with constraints demonstrates that the magnetoviscosity contribution is not as substantial when compared to homogeneous simulations that assume the material’s magnetism is a direct response to the external magnetic field. This was detected across varying intensities of particle-particle interaction, Brownian motion, and shear flow. Ferroparticle aggregation was still extensively present but less so than typically observed.
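
    The self-consistency at the heart of such a constraint scheme (moments respond to a field that itself depends on the moments) can be illustrated with a deliberately crude fixed-point toy: scalar moments, a single mean-field coupling, and no shear flow or Brownian step. This is not the paper's Maxwell-consistent algorithm; every name and value below is an illustrative assumption.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy fixed-point iteration: each particle's (scalar) moment responds to the
        # applied field plus a mean-field contribution from the other moments, and we
        # iterate until the configuration is self-consistent. Illustrative stand-in
        # only, not the subvolume-wise Maxwell constraint of the paper.
        def self_consistent_moments(h_applied, coupling, n_particles, tol=1e-10):
            h_local = h_applied + 0.1 * rng.standard_normal(n_particles)  # quenched disorder
            m = np.zeros(n_particles)
            for _ in range(1000):
                m_new = np.tanh(h_local + coupling * m.mean())  # Langevin-like response
                if np.max(np.abs(m_new - m)) < tol:
                    return m_new
                m = m_new
            return m

        print(self_consistent_moments(h_applied=1.0, coupling=0.3, n_particles=8).mean())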

  4. Volumetric Properties of the Mixture Pentafluoroethane C2HF5 + C2H4F2 1,1-Difluoroethane (LB1530, VMSD1541)

    NASA Astrophysics Data System (ADS)

    Cibulka, I.; Hnědkovský, L.; Fontaine, J.-C.; Sosnkowska-Kehiaian, K.; Kehiaian, H. V.

    This document is part of Subvolume A `Binary Liquid Systems of Nonelectrolytes' of Volume 23 `Volumetric Properties of Mixtures and Solutions' of Landolt-Börnstein Group IV `Physical Chemistry'. It corresponds to the data set LB1530 of the ELBT database.

  5. Diffusion of pent-1-ene (1); air (2)

    NASA Astrophysics Data System (ADS)

    Winkelmann, J.

    This document is part of Subvolume A `Gases in Gases, Liquids and their Mixtures' of Volume 15 `Diffusion in Gases, Liquids and Electrolytes' of Landolt-Börnstein Group IV `Physical Chemistry'. It is part of the chapter `Diffusion in Pure Gases' and contains data on the diffusion of (1) pent-1-ene; (2) air.

  6. Energy levels for Ac-212 (Actinium-212)

    NASA Astrophysics Data System (ADS)

    Sukhoruchkin, S. I.; Soroko, Z. N.

    This document is part of Subvolume C `Tables of Excitations of Proton- and Neutron-rich Unstable Nuclei' of Volume 19 `Nuclear States from Charged Particle Reactions' of Landolt-Börnstein - Group I `Elementary Particles, Nuclei and Atoms'. It provides energy levels for atomic nuclei of the isotope Ac-212 (actinium, atomic number Z = 89, mass number A = 212).

  7. NQRS Data for H4INO3 (Subst. No. 2276)

    NASA Astrophysics Data System (ADS)

    Chihara, H.; Nakamura, N.

    This document is part of Subvolume B 'Substances Containing C10H16 … Zn' of Volume 48 'Nuclear Quadrupole Resonance Spectroscopy Data' of Landolt-Börnstein - Group III 'Condensed Matter'. It contains an extract of Section '3.2 Data tables' of the Chapter '3 Nuclear quadrupole resonance data' providing the NQRS data for H4INO3 (Subst. No. 2276)

  8. NQRS Data for D4INO4 (Subst. No. 2163)

    NASA Astrophysics Data System (ADS)

    Chihara, H.; Nakamura, N.

    This document is part of Subvolume B 'Substances Containing C10H16 … Zn' of Volume 48 'Nuclear Quadrupole Resonance Spectroscopy Data' of Landolt-Börnstein - Group III 'Condensed Matter'. It contains an extract of Section '3.2 Data tables' of the Chapter '3 Nuclear quadrupole resonance data' providing the NQRS data for D4INO4 (Subst. No. 2163)

  9. NQRS Data for H4INO4 (Subst. No. 2277)

    NASA Astrophysics Data System (ADS)

    Chihara, H.; Nakamura, N.

    This document is part of Subvolume B 'Substances Containing C10H16 … Zn' of Volume 48 'Nuclear Quadrupole Resonance Spectroscopy Data' of Landolt-Börnstein - Group III 'Condensed Matter'. It contains an extract of Section '3.2 Data tables' of the Chapter '3 Nuclear quadrupole resonance data' providing the NQRS data for H4INO4 (Subst. No. 2277)

  10. Assessment of an Optical Flow Field-Based Polyp Detector for CT Colonography

    DTIC Science & Technology

    2001-10-25

    sort true polyps from false positives based on features extracted from the computed OFFs. II. METHODOLOGY A. Pre-processing The 3D CT data was... subvolume and scrolling direction, as follows: v(x, y) = Smooth(Σ_Z D_Z(x, y))  (2). The smoothing filter used is a 3×3 rectangular
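
    The equation in this snippet appears to be a per-pixel sum of an optical-flow-derived quantity over the scrolling (slice) direction followed by rectangular smoothing; the sketch below shows that reading. The array names and the box-filter implementation beyond the stated 3×3 size are assumptions.

        import numpy as np
        from scipy.ndimage import uniform_filter

        # Hedged reading of equation (2): aggregate a flow-derived quantity D_Z(x, y)
        # over the scrolling direction Z, then apply a 3x3 rectangular (box) filter.
        # `flow_quantity` and `smoothed_flow` are illustrative names, not from the paper.
        def smoothed_flow(flow_quantity):
            """flow_quantity: array of shape (Z, Y, X); returns the smoothed 2D sum."""
            summed = flow_quantity.sum(axis=0)        # sum over scrolling direction Z
            return uniform_filter(summed, size=3)     # 3x3 rectangular smoothing

        flow_quantity = np.random.rand(20, 64, 64)    # toy stand-in for D_Z(x, y)
        print(smoothed_flow(flow_quantity).shape)     # -> (64, 64)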

  11. Table 137. H2 16O (H16OH): Line position and intensity for rotational transitions (060)-(060) in the range 757-934 cm-1

    NASA Astrophysics Data System (ADS)

    Guelachvili, G.; Picqué, N.

    This document is part of Subvolume C 'Non-linear Triatomic Molecules', Part 1 'H2O (HOH)', Part α 'H2 16O (H16OH)' of Volume 20 'Molecular Constants Mostly from Infrared Spectroscopy' of Landolt-Börnstein - Group II 'Molecules and Radicals'.

  12. Automated segmentation and reconstruction of patient-specific cardiac anatomy and pathology from in vivo MRI*

    NASA Astrophysics Data System (ADS)

    Ringenberg, Jordan; Deo, Makarand; Devabhaktuni, Vijay; Filgueiras-Rama, David; Pizarro, Gonzalo; Ibañez, Borja; Berenfeld, Omer; Boyers, Pamela; Gold, Jeffrey

    2012-12-01

    This paper presents an automated method to segment left ventricle (LV) tissues from functional and delayed-enhancement (DE) cardiac magnetic resonance imaging (MRI) scans using a sequential multi-step approach. First, a region of interest (ROI) is computed to create a subvolume around the LV using morphological operations and image arithmetic. From the subvolume, the myocardial contours are automatically delineated using difference of Gaussians (DoG) filters and GSV snakes. These contours are used as a mask to identify pathological tissues, such as fibrosis or scar, within the DE-MRI. The presented automated technique is able to accurately delineate the myocardium and identify the pathological tissue in patient sets. The results were validated by two expert cardiologists, and in one set the automated results are quantitatively and qualitatively compared with expert manual delineation. Furthermore, the method is patient-specific and is performed on an entire patient MRI series. Thus, in addition to providing a quick analysis of individual MRI scans, the fully automated segmentation method is used to effectively tag regions in order to reconstruct computerized patient-specific 3D cardiac models. These models can then be used in electrophysiological studies and surgical strategy planning.
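
    The DoG filtering step named above is simple to sketch; the sigma values are illustrative assumptions, and the snake refinement that follows in the paper is not shown.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        # Minimal difference-of-Gaussians (DoG) band-pass: subtracting a wide blur
        # from a narrow blur suppresses both noise and slow intensity trends,
        # leaving edge-like structures such as myocardial boundaries.
        def dog(image, sigma_narrow=1.0, sigma_wide=3.0):
            return gaussian_filter(image, sigma_narrow) - gaussian_filter(image, sigma_wide)

        slice_img = np.random.rand(256, 256)           # stand-in for one MRI slice
        edges = dog(slice_img)
        mask = edges > edges.mean() + 2 * edges.std()  # crude edge-candidate mask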

  13. Localised task-dependent motor-unit recruitment in the masseter.

    PubMed

    Schindler, H J; Hellmann, D; Giannakopoulos, N N; Eiglsperger, U; van Dijk, J P; Lapatki, B G

    2014-07-01

    Localised motor-unit (MU) recruitment in the masseter was analysed in this study. We investigated whether differential activation behaviour, which has already been reported for distant masseter regions, can also be detected in small muscle subvolumes at the level of single MUs. Two bipolar fine-wire electrodes and an intra-oral 3D bite-force transmitter were used to record intra-muscular electromyograms (EMG) resulting from controlled bite-forces of 10 healthy human subjects (mean age 24.1 ± 1.2 years). Two-hundred and seventeen decomposed MUs were organised into localised MU task groups with different (P < 0.001) force-direction-specific behaviour. Proportions of MUs involved in one, two, three or four examined tasks were 46%, 31%, 18% and 5%, respectively. This study provides evidence of the ability of the neuromuscular system to modify the mechanical output of small masseter subvolumes by differential control of adjacent MUs belonging to distinct task groups. Localised differential activation behaviour of the masseter may be the crucial factor enabling highly flexible and efficient adjustment of the muscle activity in response to complex local biomechanical needs, for example, continually varying bite-forces during the demanding masticatory process.

  14. NQRS Data for C24H20BRb (Subst. No. 1578)

    NASA Astrophysics Data System (ADS)

    Chihara, H.; Nakamura, N.

    This document is part of Subvolume B 'Substances Containing C10H16 … Zn' of Volume 48 'Nuclear Quadrupole Resonance Spectroscopy Data' of Landolt-Börnstein - Group III 'Condensed Matter'. It contains an extract of Section '3.2 Data tables' of the Chapter '3 Nuclear quadrupole resonance data' providing the NQRS data for C24H20BRb (Subst. No. 1578)

  15. Diffusion of 4-methyl-pent-3-en-2-one (1); air (2)

    NASA Astrophysics Data System (ADS)

    Winkelmann, J.

    This document is part of Subvolume A `Gases in Gases, Liquids and their Mixtures' of Volume 15 `Diffusion in Gases, Liquids and Electrolytes' of Landolt-Börnstein Group IV `Physical Chemistry'. It is part of the chapter `Diffusion in Pure Gases' and contains data on the diffusion of (1) 4-methyl-pent-3-en-2-one; (2) air.

  16. NQRS Data for CoLa4LiO8 (Subst. No. 1970)

    NASA Astrophysics Data System (ADS)

    Chihara, H.; Nakamura, N.

    This document is part of Subvolume B 'Substances Containing C10H16 … Zn' of Volume 48 'Nuclear Quadrupole Resonance Spectroscopy Data' of Landolt-Börnstein - Group III 'Condensed Matter'. It contains an extract of Section '3.2 Data tables' of the Chapter '3 Nuclear quadrupole resonance data' providing the NQRS data for CoLa4LiO8 (Subst. No. 1970)

  17. Nqrs Data for C2H5NO2 (Subst. No. 0554)

    NASA Astrophysics Data System (ADS)

    Chihara, H.; Nakamura, N.

    This document is part of Subvolume A `Substances Containing Ag … C10H15' of Volume 48 `Nuclear Quadrupole Resonance Spectroscopy Data' of Landolt-Börnstein - Group III `Condensed Matter'. It contains an extract of Section `3.2 Data tables' of the Chapter `3 Nuclear quadrupole resonance data' providing the NQRS data for C2H5NO2 (Subst. No. 0554)

  18. NQRS Data for Ga3LaPd2 (Subst. No. 2229)

    NASA Astrophysics Data System (ADS)

    Chihara, H.; Nakamura, N.

    This document is part of Subvolume B 'Substances Containing C10H16 … Zn' of Volume 48 'Nuclear Quadrupole Resonance Spectroscopy Data' of Landolt-Börnstein - Group III 'Condensed Matter'. It contains an extract of Section '3.2 Data tables' of the Chapter '3 Nuclear quadrupole resonance data' providing the NQRS data for Ga3LaPd2 (Subst. No. 2229)

  19. NQRS Data for AlDO28Si13 (Subst. No. 0033)

    NASA Astrophysics Data System (ADS)

    Chihara, H.; Nakamura, N.

    This document is part of Subvolume A `Substances Containing Ag … C10H15' of Volume 48 `Nuclear Quadrupole Resonance Spectroscopy Data' of Landolt-Börnstein - Group III `Condensed Matter'. It contains an extract of Section `3.2 Data tables' of the Chapter `3 Nuclear quadrupole resonance data' providing the NQRS data for AlDO28Si13 (Subst. No. 0033)

  20. NQRS Data for AlDO70Si34 (Subst. No. 0035)

    NASA Astrophysics Data System (ADS)

    Chihara, H.; Nakamura, N.

    This document is part of Subvolume A `Substances Containing Ag … C10H15' of Volume 48 `Nuclear Quadrupole Resonance Spectroscopy Data' of Landolt-Börnstein - Group III `Condensed Matter'. It contains an extract of Section `3.2 Data tables' of the Chapter `3 Nuclear quadrupole resonance data' providing the NQRS data for AlDO70Si34 (Subst. No. 0035)

  1. NQRS Data for AlDO28Si13 (Subst. No. 0034)

    NASA Astrophysics Data System (ADS)

    Chihara, H.; Nakamura, N.

    This document is part of Subvolume A `Substances Containing Ag … C10H15' of Volume 48 `Nuclear Quadrupole Resonance Spectroscopy Data' of Landolt-Börnstein - Group III `Condensed Matter'. It contains an extract of Section `3.2 Data tables' of the Chapter `3 Nuclear quadrupole resonance data' providing the NQRS data for AlDO28Si13 (Subst. No. 0034)

  2. NQRS Data for AlDO2 [Al(OD)O] (Subst. No. 0032)

    NASA Astrophysics Data System (ADS)

    Chihara, H.; Nakamura, N.

    This document is part of Subvolume A `Substances Containing Ag … C10H15' of Volume 48 `Nuclear Quadrupole Resonance Spectroscopy Data' of Landolt-Börnstein - Group III `Condensed Matter'. It contains an extract of Section `3.2 Data tables' of the Chapter `3 Nuclear quadrupole resonance data' providing the NQRS data for AlDO2 [Al(OD)O] (Subst. No. 0032)

  3. Hypofractionated Image-Guided IMRT in Advanced Pancreatic Cancer With Simultaneous Integrated Boost to Infiltrated Vessels Concomitant With Capecitabine: A Phase I Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Passoni, Paolo, E-mail: passoni.paolo@hsr.it; Reni, Michele; Cattaneo, Giovanni M.

    2013-12-01

    Purpose: To determine the maximum tolerated radiation dose (MTD) of an integrated boost to the tumor subvolume infiltrating vessels, delivered simultaneously with a radical dose to the whole tumor and concomitant capecitabine in patients with pretreated advanced pancreatic adenocarcinoma. Methods and Materials: Patients with stage III or IV pancreatic adenocarcinoma without progressive disease after induction chemotherapy were eligible. Patients underwent simulation contrast-enhanced four-dimensional computed tomography and fluorodeoxyglucose-labeled positron emission tomography. Gross tumor volume 1 (GTV1), the tumor, and GTV2, the tumor subvolume 1 cm around the infiltrated vessels, were contoured. The GTVs were fused to generate internal target volumes ITV1 and ITV2. Biological tumor volume (BTV) was fused with ITV1 to create BTV+ITV1. A margin of 5/5/7 mm (7 mm cranio-caudal) was added to BTV+ITV1 and to ITV2 to create planning target volumes PTV1 and PTV2, respectively. Radiation therapy was delivered with tomotherapy. PTV1 received a fixed dose of 44.25 Gy in 15 fractions, and PTV2 received a dose escalation from 48 to 58 Gy as a simultaneous integrated boost (SIB) in consecutive groups of at least 3 patients. Concomitant chemotherapy was capecitabine, 1250 mg/m² daily. Dose-limiting toxicity (DLT) was defined as any treatment-related G3 nonhematological or G4 hematological toxicity occurring during the treatment or within 90 days of its completion. Results: From June 2005 to February 2010, 25 patients were enrolled. The dose escalation on the SIB was stopped at 58 Gy without reaching the MTD. One patient at the 2nd dose level (50 Gy) had a DLT: a G3 acute gastric ulcer. Three patients had G3 late adverse effects associated with gastric and/or duodenal mucosal damage. All patients received the planned dose of radiation. Conclusions: A dose of 44.25 Gy in 15 fractions to the whole tumor with an SIB of 58 Gy to small tumor subvolumes concomitant with capecitabine is feasible in chemotherapy-pretreated patients with advanced pancreatic cancer.

  4. A novel geometry-dosimetry label fusion method in multi-atlas segmentation for radiotherapy: a proof-of-concept study

    NASA Astrophysics Data System (ADS)

    Chang, Jina; Tian, Zhen; Lu, Weiguo; Gu, Xuejun; Chen, Mingli; Jiang, Steve B.

    2017-05-01

    Multi-atlas segmentation (MAS) has been widely used to automate the delineation of organs at risk (OARs) for radiotherapy. Label fusion is a crucial step in MAS to cope with the segmentation variabilities among multiple atlases. However, most existing label fusion methods do not consider the potential dosimetric impact of the segmentation result. In this proof-of-concept study, we propose a novel geometry-dosimetry label fusion method for MAS-based OAR auto-contouring, which evaluates the segmentation performance in terms of both geometric accuracy and the dosimetric impact of the segmentation accuracy on the resulting treatment plan. Unlike the original selective and iterative method for performance level estimation (SIMPLE), we evaluated and rejected the atlases based on both the Dice similarity coefficient and the predicted error of the dosimetric endpoints. The dosimetric error was predicted using our previously developed geometry-dosimetry model. We tested our method in MAS-based rectum auto-contouring on 20 prostate cancer patients. The accuracy in the rectum sub-volume close to the planning target volume (PTV), which was found to be a dosimetrically sensitive region of the rectum, was greatly improved. The mean absolute distance between the obtained contour and the physician-drawn contour in the rectum sub-volume 2 mm away from the PTV was reduced from 3.96 mm to 3.36 mm on average for the 20 patients, with the maximum decrease found to be from 9.22 mm to 3.75 mm. We also compared the dosimetric endpoints predicted for the obtained contours with those predicted for the physician-drawn contours. Our method led to smaller dosimetric endpoint errors than the SIMPLE method in 15 patients, comparable errors in 2 patients, and slightly larger errors in 3 patients. These results indicated the efficacy of our method in terms of considering both geometric accuracy and dosimetric impact during label fusion. Our algorithm can be applied to different tumor sites and radiation treatments, given a specifically trained geometry-dosimetry model.

  5. A novel geometry-dosimetry label fusion method in multi-atlas segmentation for radiotherapy: a proof-of-concept study.

    PubMed

    Chang, Jina; Tian, Zhen; Lu, Weiguo; Gu, Xuejun; Chen, Mingli; Jiang, Steve B

    2017-05-07

    Multi-atlas segmentation (MAS) has been widely used to automate the delineation of organs at risk (OARs) for radiotherapy. Label fusion is a crucial step in MAS to cope with the segmentation variabilities among multiple atlases. However, most existing label fusion methods do not consider the potential dosimetric impact of the segmentation result. In this proof-of-concept study, we propose a novel geometry-dosimetry label fusion method for MAS-based OAR auto-contouring, which evaluates the segmentation performance in terms of both geometric accuracy and the dosimetric impact of the segmentation accuracy on the resulting treatment plan. Unlike the original selective and iterative method for performance level estimation (SIMPLE), we evaluated and rejected the atlases based on both the Dice similarity coefficient and the predicted error of the dosimetric endpoints. The dosimetric error was predicted using our previously developed geometry-dosimetry model. We tested our method in MAS-based rectum auto-contouring on 20 prostate cancer patients. The accuracy in the rectum sub-volume close to the planning target volume (PTV), which was found to be a dosimetrically sensitive region of the rectum, was greatly improved. The mean absolute distance between the obtained contour and the physician-drawn contour in the rectum sub-volume 2 mm away from the PTV was reduced from 3.96 mm to 3.36 mm on average for the 20 patients, with the maximum decrease found to be from 9.22 mm to 3.75 mm. We also compared the dosimetric endpoints predicted for the obtained contours with those predicted for the physician-drawn contours. Our method led to smaller dosimetric endpoint errors than the SIMPLE method in 15 patients, comparable errors in 2 patients, and slightly larger errors in 3 patients. These results indicated the efficacy of our method in terms of considering both geometric accuracy and dosimetric impact during label fusion. Our algorithm can be applied to different tumor sites and radiation treatments, given a specifically trained geometry-dosimetry model.
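
    The SIMPLE-style iterative atlas selection described in these two records can be sketched as below. dice() is the standard overlap measure; predict_dose_error() is a hypothetical stand-in for the authors' trained geometry-dosimetry model, and the tolerances are illustrative assumptions.

        import numpy as np

        def dice(a, b):
            """Dice similarity coefficient between two binary masks."""
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

        def predict_dose_error(mask, reference):
            # Placeholder proxy: scale voxelwise disagreement into a pseudo-Gy error.
            return 50.0 * np.logical_xor(mask, reference).mean()

        def fuse_atlases(atlas_masks, dice_tol=0.80, dose_tol=2.0, max_iter=5):
            kept = list(atlas_masks)
            for _ in range(max_iter):
                consensus = np.mean(kept, axis=0) >= 0.5          # majority vote
                survivors = [m for m in kept
                             if dice(m, consensus) >= dice_tol
                             and predict_dose_error(m, consensus) <= dose_tol]
                if len(survivors) in (0, len(kept)):              # converged or emptied
                    break
                kept = survivors
            return np.mean(kept, axis=0) >= 0.5

        rng = np.random.default_rng(0)
        truth = np.zeros((32, 32), dtype=bool); truth[8:24, 8:24] = True
        atlases = [np.roll(truth, rng.integers(-2, 3), axis=0) for _ in range(7)]
        print(dice(fuse_atlases(atlases), truth))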

  6. NQRS Data for C7H8Cl3OSb (Subst. No. 1005)

    NASA Astrophysics Data System (ADS)

    Chihara, H.; Nakamura, N.

    This document is part of Subvolume A `Substances Containing Ag … C10H15' of Volume 48 `Nuclear Quadrupole Resonance Spectroscopy Data' of Landolt-Börnstein - Group III `Condensed Matter'. It contains an extract of Section `3.2 Data tables' of the Chapter `3 Nuclear quadrupole resonance data' providing the NQRS data for C7H8Cl3OSb (Subst. No. 1005)

  7. Nqrs Data for C24H36Cu2N6 (Subst. No. 1586)

    NASA Astrophysics Data System (ADS)

    Chihara, H.; Nakamura, N.

    This document is part of Subvolume B 'Substances Containing C10H16 … Zn' of Volume 48 'Nuclear Quadrupole Resonance Spectroscopy Data' of Landolt-Börnstein - Group III 'Condensed Matter'. It contains an extract of Section '3.2 Data tables' of the Chapter '3 Nuclear quadrupole resonance data' providing the NQRS data for C24H36Cu2N6 (Subst. No. 1586)

  8. NQRS Data for CaCu2La2O6 (Subst. No. 1756)

    NASA Astrophysics Data System (ADS)

    Chihara, H.; Nakamura, N.

    This document is part of Subvolume B 'Substances Containing C10H16 … Zn' of Volume 48 'Nuclear Quadrupole Resonance Spectroscopy Data' of Landolt-Börnstein - Group III 'Condensed Matter'. It contains an extract of Section '3.2 Data tables' of the Chapter '3 Nuclear quadrupole resonance data' providing the NQRS data for CaCu2La2O6 (Subst. No. 1756)

  9. Nqrs Data for C24H42Li2N4 (Subst. No. 1587)

    NASA Astrophysics Data System (ADS)

    Chihara, H.; Nakamura, N.

    This document is part of Subvolume B 'Substances Containing C10H16 … Zn' of Volume 48 'Nuclear Quadrupole Resonance Spectroscopy Data' of Landolt-Börnstein - Group III 'Condensed Matter'. It contains an extract of Section '3.2 Data tables' of the Chapter '3 Nuclear quadrupole resonance data' providing the NQRS data for C24H42Li2N4 (Subst. No. 1587)

  10. Nqrs Data for C26H38N2O3 (Subst. No. 1607)

    NASA Astrophysics Data System (ADS)

    Chihara, H.; Nakamura, N.

    This document is part of Subvolume B 'Substances Containing C10H16 … Zn' of Volume 48 'Nuclear Quadrupole Resonance Spectroscopy Data' of Landolt-Börnstein - Group III 'Condensed Matter'. It contains an extract of Section '3.2 Data tables' of the Chapter '3 Nuclear quadrupole resonance data' providing the NQRS data for C26H38N2O3 (Subst. No. 1607)

  11. Diffusion of cis-5,8,11,14,17-eicosapentaenoic acid (1); carbon dioxide (2)

    NASA Astrophysics Data System (ADS)

    Winkelmann, J.

    This document is part of Subvolume A `Gases in Gases, Liquids and their Mixtures' of Volume 15 `Diffusion in Gases, Liquids and Electrolytes' of Landolt-Börnstein Group IV `Physical Chemistry'. It is part of the chapter `Diffusion in Pure Gases' and contains data on the diffusion of (1) cis-5,8,11,14,17-eicosapentaenoic acid; (2) carbon dioxide.

  12. Nqrs Data for Ca9Cu24O41Sr5 (Subst. No. 1776)

    NASA Astrophysics Data System (ADS)

    Chihara, H.; Nakamura, N.

    This document is part of Subvolume B 'Substances Containing C10H16 … Zn' of Volume 48 'Nuclear Quadrupole Resonance Spectroscopy Data' of Landolt-Börnstein - Group III 'Condensed Matter'. It contains an extract of Section '3.2 Data tables' of the Chapter '3 Nuclear quadrupole resonance data' providing the NQRS data for Ca9Cu24O41Sr5 (Subst. No. 1776)

  13. Nqrs Data for C24H24Cu2N6 (Subst. No. 1584)

    NASA Astrophysics Data System (ADS)

    Chihara, H.; Nakamura, N.

    This document is part of Subvolume B 'Substances Containing C10H16 … Zn' of Volume 48 'Nuclear Quadrupole Resonance Spectroscopy Data' of Landolt-Börnstein - Group III 'Condensed Matter'. It contains an extract of Section '3.2 Data tables' of the Chapter '3 Nuclear quadrupole resonance data' providing the NQRS data for C24H24Cu2N6 (Subst. No. 1584)

  14. A mathematical approach towards simulating a realistic tissue activity curve of 64Cu-ATSM for the purpose of sub-target volume delineation in radiotherapy

    NASA Astrophysics Data System (ADS)

    Dalah, E.; Bradley, D.; Nisbet, A.

    2010-07-01

    One unique feature of positron emission tomography (PET) is that it allows measurements of regional tracer concentration in hypoxic tumour-bearing tissue, supporting the need for accurate radiotherapy treatment planning. Generally the data are taken over multiple time frames, in the form of tissue activity curves (TACs), providing an indication of the presence of hypoxia, the degree of oxygen perfusion, vascular geometry and hypoxia fraction. In order to understand such a complicated phenomenon, a number of theoretical studies have attempted to describe tracer uptake in tissue cells. More recently, a novel computerized reaction-diffusion equation method developed by Kelly and Brady has allowed simulation of realistic TACs of 18F-FMISO, with representation of physiological oxygen heterogeneity and tracer kinetics. We present a refinement to the work of Kelly and Brady, with a particular interest in simulating TACs of the most promising hypoxia-selective tracer, 64Cu-ATSM, demonstrating its potential role in tumour sub-volume delineation for radiotherapy treatment planning. Simulation results have demonstrated the high contrast of imaging using ATSM, with a tumour-to-blood ratio ranging from 2.24 to 4.1. Similarly, tumour sub-volumes generated using three different thresholding methods were all well correlated.
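
    In its simplest thresholding form, the sub-volume delineation step implied above reduces to masking voxels whose tumour-to-blood (T/B) ratio exceeds a cutoff. The cutoff and the toy uptake data below are assumptions, and the reaction-diffusion TAC simulation itself is not reproduced.

        import numpy as np

        # Threshold-based sub-volume delineation from a late-time-point uptake map:
        # voxels whose T/B ratio exceeds the cutoff form the candidate boost target.
        def hypoxic_subvolume(uptake, blood_activity, tb_cutoff=2.0):
            return uptake / blood_activity >= tb_cutoff    # boolean voxel mask

        uptake = np.random.gamma(2.0, 1.0, size=(40, 40, 40))  # toy uptake volume
        mask = hypoxic_subvolume(uptake, blood_activity=1.0)
        print(mask.mean())   # fraction of the tumour flagged as hypoxic sub-volume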

  15. Analysis of dose heterogeneity using a subvolume-DVH

    NASA Astrophysics Data System (ADS)

    Said, M.; Nilsson, P.; Ceberg, C.

    2017-11-01

    The dose-volume histogram (DVH) is universally used in radiation therapy for its highly efficient way of summarizing three-dimensional dose distributions. An apparent limitation that is inherent to standard histograms is the loss of spatial information, e.g. it is no longer possible to tell where low- and high-dose regions are, and whether they are connected or disjoint. Two methods for overcoming the spatial fragmentation of low- and high-dose regions are presented, both based on the gray-level size zone matrix, which is a two-dimensional histogram describing the frequencies of connected regions of similar intensities. The first approach is a quantitative metric which can be likened to a homogeneity index. The large cold spot metric (LCS) is here defined to emphasize large contiguous regions receiving too low a dose; emphasis is put on both size, and deviation from the prescribed dose. In contrast, the subvolume-DVH (sDVH) is an extension to the standard DVH and allows for a qualitative evaluation of the degree of dose heterogeneity. The information retained from the two-dimensional histogram is overlaid on top of the DVH and the two are presented simultaneously. Both methods gauge the underlying heterogeneity in ways that the DVH alone cannot, and both have their own merits—the sDVH being more intuitive and the LCS being quantitative.
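
    The zone-extraction step underlying both the sDVH and the LCS metric can be sketched with standard connected-component labelling; the bin width is an illustrative assumption, and the paper's exact LCS weighting is not reproduced.

        import numpy as np
        from scipy.ndimage import label

        # Gray-level size zone idea: bin the dose, find connected zones per bin,
        # and record (dose level, zone size) pairs -- the rows of a size zone matrix.
        def size_zones(dose, bin_width=1.0):
            bins = np.floor(dose / bin_width).astype(int)
            zones = []
            for level in np.unique(bins):
                labeled, n = label(bins == level)            # connected regions
                for k in range(1, n + 1):
                    zones.append((level * bin_width, (labeled == k).sum()))
            return zones

        dose = np.random.normal(78.0, 3.0, size=(32, 32, 32))   # toy 3D dose grid
        print(len(size_zones(dose)))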

  16. Nqrs Data for C6H7F4N2OSb (Subst. No. 0879)

    NASA Astrophysics Data System (ADS)

    Chihara, H.; Nakamura, N.

    This document is part of Subvolume A `Substances Containing Ag … C10H15' of Volume 48 `Nuclear Quadrupole Resonance Spectroscopy Data' of Landolt-Börnstein - Group III `Condensed Matter'. It contains an extract of Section `3.2 Data tables' of the Chapter `3 Nuclear quadrupole resonance data' providing the NQRS data for C6H7F4N2OSb (Subst. No. 0879)

  17. Nqrs Data for C6H7F7N2OSb2 (Subst. No. 0880)

    NASA Astrophysics Data System (ADS)

    Chihara, H.; Nakamura, N.

    This document is part of Subvolume A `Substances Containing Ag … C10H15' of Volume 48 `Nuclear Quadrupole Resonance Spectroscopy Data' of Landolt-Börnstein - Group III `Condensed Matter'. It contains an extract of Section `3.2 Data tables' of the Chapter `3 Nuclear quadrupole resonance data' providing the NQRS data for C6H7F7N2OSb2 (Subst. No. 0880)

  18. Nqrs Data for C26H34N2O3V (Subst. No. 1601)

    NASA Astrophysics Data System (ADS)

    Chihara, H.; Nakamura, N.

    This document is part of Subvolume B 'Substances Containing C10H16 … Zn' of Volume 48 'Nuclear Quadrupole Resonance Spectroscopy Data' of Landolt-Börnstein - Group III 'Condensed Matter'. It contains an extract of Section '3.2 Data tables' of the Chapter '3 Nuclear quadrupole resonance data' providing the NQRS data for C26H34N2O3V (Subst. No. 1601)

  19. Nqrs Data for C26H35Br2CuP2 (Subst. No. 1603)

    NASA Astrophysics Data System (ADS)

    Chihara, H.; Nakamura, N.

    This document is part of Subvolume B 'Substances Containing C10H16 … Zn' of Volume 48 'Nuclear Quadrupole Resonance Spectroscopy Data' of Landolt-Börnstein - Group III 'Condensed Matter'. It contains an extract of Section '3.2 Data tables' of the Chapter '3 Nuclear quadrupole resonance data' providing the NQRS data for C26H35Br2CuP2 (Subst. No. 1603)

  20. Diffusion of cis-3-methyl-2-pent-2-enyl-cyclopent-2-enone (1); carbon dioxide (2)

    NASA Astrophysics Data System (ADS)

    Winkelmann, J.

    This document is part of Subvolume A `Gases in Gases, Liquids and their Mixtures' of Volume 15 `Diffusion in Gases, Liquids and Electrolytes' of Landolt-Börnstein Group IV `Physical Chemistry'. It is part of the chapter `Diffusion in Pure Gases' and contains data on the diffusion of (1) cis-3-methyl-2-pent-2-enyl-cyclopent-2-enone; (2) carbon dioxide.

  1. NQRS Data for CoLa0.5O3Sr0.5 (Subst. No. 1964)

    NASA Astrophysics Data System (ADS)

    Chihara, H.; Nakamura, N.

    This document is part of Subvolume B 'Substances Containing C10H16 … Zn' of Volume 48 'Nuclear Quadrupole Resonance Spectroscopy Data' of Landolt-Börnstein - Group III 'Condensed Matter'. It contains an extract of Section '3.2 Data tables' of the Chapter '3 Nuclear quadrupole resonance data' providing the NQRS data for CoLa0.5O3Sr0.5 (Subst. No. 1964)

  2. NQRS Data for CoLa0.75O3Sr0.25 (Subst. No. 1967)

    NASA Astrophysics Data System (ADS)

    Chihara, H.; Nakamura, N.

    This document is part of Subvolume B 'Substances Containing C10H16 … Zn' of Volume 48 'Nuclear Quadrupole Resonance Spectroscopy Data' of Landolt-Börnstein - Group III 'Condensed Matter'. It contains an extract of Section '3.2 Data tables' of the Chapter '3 Nuclear quadrupole resonance data' providing the NQRS data for CoLa0.75O3Sr0.25 (Subst. No. 1967)

  3. NQRS Data for CoLa0.8O3Sr0.2 (Subst. No. 1968)

    NASA Astrophysics Data System (ADS)

    Chihara, H.; Nakamura, N.

    This document is part of Subvolume B 'Substances Containing C10H16 … Zn' of Volume 48 'Nuclear Quadrupole Resonance Spectroscopy Data' of Landolt-Börnstein - Group III 'Condensed Matter'. It contains an extract of Section '3.2 Data tables' of the Chapter '3 Nuclear quadrupole resonance data' providing the NQRS data for CoLa0.8O3Sr0.2 (Subst. No. 1968)

  4. NQRS Data for CoLa0.7O3Sr0.3 (Subst. No. 1966)

    NASA Astrophysics Data System (ADS)

    Chihara, H.; Nakamura, N.

    This document is part of Subvolume B 'Substances Containing C10H16 … Zn' of Volume 48 'Nuclear Quadrupole Resonance Spectroscopy Data' of Landolt-Börnstein - Group III 'Condensed Matter'. It contains an extract of Section '3.2 Data tables' of the Chapter '3 Nuclear quadrupole resonance data' providing the NQRS data for CoLa0.7O3Sr0.3 (Subst. No. 1966)

  5. NQRS Data for CoLa0.6O3Sr0.4 (Subst. No. 1965)

    NASA Astrophysics Data System (ADS)

    Chihara, H.; Nakamura, N.

    This document is part of Subvolume B 'Substances Containing C10H16 … Zn' of Volume 48 'Nuclear Quadrupole Resonance Spectroscopy Data' of Landolt-Börnstein - Group III 'Condensed Matter'. It contains an extract of Section '3.2 Data tables' of the Chapter '3 Nuclear quadrupole resonance data' providing the NQRS data for CoLa0.6O3Sr0.4 (Subst. No. 1965)

  6. Wavelet-based multicomponent denoising on GPU to improve the classification of hyperspectral images

    NASA Astrophysics Data System (ADS)

    Quesada-Barriuso, Pablo; Heras, Dora B.; Argüello, Francisco; Mouriño, J. C.

    2017-10-01

    Supervised classification allows handling a wide range of remote sensing hyperspectral applications. Enhancing the spatial organization of the pixels over the image has proven to be beneficial for the interpretation of the image content, thus increasing the classification accuracy. Denoising in the spatial domain of the image has been shown to be a technique that enhances the structures in the image. This paper proposes a multi-component denoising approach to increase the classification accuracy when a classification method is applied. It is computed on multicore CPUs and NVIDIA GPUs. The method combines feature extraction based on a 1D discrete wavelet transform (DWT) applied in the spectral dimension, followed by an Extended Morphological Profile (EMP) and a classifier (SVM or ELM). The multi-component noise reduction is applied to the EMP just before the classification. The denoising recursively applies a separable 2D DWT, after which the number of wavelet coefficients is reduced by using a threshold. Finally, inverse 2D DWT filters are applied to reconstruct the noise-free original component. The computational cost of the classifiers, as well as of the whole classification chain, is high, but it is reduced to real-time behavior for some applications through computation on NVIDIA multi-GPU platforms.
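
    A single-component version of the denoising step can be sketched with PyWavelets; the wavelet, decomposition level and threshold rule are illustrative assumptions, and the recursive multi-component/EMP integration and GPU mapping are omitted.

        import numpy as np
        import pywt

        # 2D DWT -> hard-threshold the detail coefficients -> inverse 2D DWT.
        def dwt_denoise(component, wavelet="db4", level=2, thresh=0.1):
            coeffs = pywt.wavedec2(component, wavelet, level=level)
            approx, details = coeffs[0], coeffs[1:]
            details = [tuple(pywt.threshold(d, thresh * np.abs(d).max(), mode="hard")
                             for d in trio) for trio in details]
            return pywt.waverec2([approx] + details, wavelet)

        band = np.random.rand(128, 128)   # one EMP component (toy data)
        clean = dwt_denoise(band)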

  7. WE-AB-207B-12: Prospective Study of the Relationship Between Dose-Volume Clinical Toxicity and Patient Reported Outcomes in Lung Cancer Patients Treated with SBRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayyas, E; Vance, S; Brown, S

    Purpose: To determine, in a prospective study, the correlation between radiation dose/volume, clinical toxicities and patient-reported quality of life (QOL) resulting from lung SBRT. Methods: For 106 non-small cell lung cancer (NSCLC) patients receiving SBRT (12 Gy × 4), symptoms including cough, dyspnea, fatigue and pneumonitis were measured at baseline (before treatment), after treatment and 3, 6, and 12 months post-treatment. Toxicity was graded from zero to five. Dosimetric parameters such as the MLD, D10%, D20%, and lung subvolumes (V10 and V20) were obtained from the treatment plan. Dosimetric parameters and the number of patients demonstrating toxicity ≥ grade 2 were tabulated. Linear regression analysis was used to calculate correlations between MLD and D10, D20, V10 and V20. Results: The percentages of patients with > grade 2 pneumonitis, fatigue, cough, and dyspnea over 3 to 12 months increased from 0.0% to 3.5%, 3.2% to 10.5%, 4.3% to 8.3%, and 10.8% to 18.8%, respectively. Computed dose indices D10% and D20% were 7.9±4.8 Gy and 3.0±2.3 Gy, respectively. MLD ranged from 0.34 Gy up to 9.9 Gy, with an overall average of 3.0±1.7 Gy. The averages of the subvolumes V10 and V20 were 8.9±5.3% and 3.0±2.4%, respectively. The linear regression analysis showed that V10 and D10 demonstrated the strongest correlation to MLD (R² = 0.92 and 0.87, respectively). V20 and D20 were also strongly correlated with MLD (R² = 0.81 and 0.84, respectively). A correlation was also found between MLD > 2 Gy and ≥ grade 2 cough and dyspnea. Subvolume values for 2 Gy MLD were 5.3% for V10 and 2% for V20. Conclusion: Dosimetric indices MLD ≥ 2 Gy, D10 ≥ 5 Gy and V10 ≥ 5% of the total lung volume were predictive of > grade 2 cough and dyspnea QOL data. The QOL results are a novel component of this work. The authors acknowledge Varian grant support.
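
    The dosimetric indices above follow directly from a per-voxel lung dose array. As a quick, hedged illustration (toy dose data and uniform voxel volumes assumed, not patient plans), the metrics and the regression could be computed as follows.

        import numpy as np

        def lung_metrics(dose):
            mld = dose.mean()                              # mean lung dose
            v10 = 100 * (dose >= 10).mean()                # % of volume receiving >= 10 Gy
            v20 = 100 * (dose >= 20).mean()
            d10 = np.percentile(dose, 90)                  # dose to the hottest 10% of volume
            return mld, v10, v20, d10

        rng = np.random.default_rng(1)
        cohort = np.array([lung_metrics(rng.gamma(2.0, 1.5, 50_000)) for _ in range(30)])
        mld, v10 = cohort[:, 0], cohort[:, 1]
        slope, intercept = np.polyfit(v10, mld, 1)         # linear regression, as in the study
        r2 = np.corrcoef(v10, mld)[0, 1] ** 2
        print(f"MLD ~ {slope:.2f}*V10 + {intercept:.2f}, R^2 = {r2:.2f}")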

  8. 22 CFR 9.4 - Original classification.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 22 Foreign Relations 1 2014-04-01 2014-04-01 false Original classification. 9.4 Section 9.4... classification. (a) Definition. Original classification is the initial determination that certain information... classification. (b) Classification levels. (1) Top Secret shall be applied to information the unauthorized...

  9. 22 CFR 9.4 - Original classification.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 22 Foreign Relations 1 2013-04-01 2013-04-01 false Original classification. 9.4 Section 9.4... classification. (a) Definition. Original classification is the initial determination that certain information... classification. (b) Classification levels. (1) Top Secret shall be applied to information the unauthorized...

  10. 22 CFR 9.4 - Original classification.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 22 Foreign Relations 1 2012-04-01 2012-04-01 false Original classification. 9.4 Section 9.4... classification. (a) Definition. Original classification is the initial determination that certain information... classification. (b) Classification levels. (1) Top Secret shall be applied to information the unauthorized...

  11. 22 CFR 9.4 - Original classification.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 22 Foreign Relations 1 2011-04-01 2011-04-01 false Original classification. 9.4 Section 9.4... classification. (a) Definition. Original classification is the initial determination that certain information... classification. (b) Classification levels. (1) Top Secret shall be applied to information the unauthorized...

  12. Atomic Mass and Nuclear Binding Energy for I-131 (Iodine)

    NASA Astrophysics Data System (ADS)

    Sukhoruchkin, S. I.; Soroko, Z. N.

    This document is part of the Supplement containing the complete sets of data of Subvolume A `Nuclei with Z = 1 - 54' of Volume 22 `Nuclear Binding Energies and Atomic Masses' of Landolt-Börnstein - Group I `Elementary Particles, Nuclei and Atoms'. It provides atomic mass, mass excess, nuclear binding energy, nucleon separation energies, Q-values, and nucleon residual interaction parameters for atomic nuclei of the isotope I-131 (Iodine, atomic number Z = 53, mass number A = 131).

  13. Atomic Mass and Nuclear Binding Energy for F-22 (Fluorine)

    NASA Astrophysics Data System (ADS)

    Sukhoruchkin, S. I.; Soroko, Z. N.

    This document is part of the Supplement containing the complete sets of data of Subvolume A `Nuclei with Z = 1 - 54' of Volume 22 `Nuclear Binding Energies and Atomic Masses' of Landolt-Börnstein - Group I `Elementary Particles, Nuclei and Atoms'. It provides atomic mass, mass excess, nuclear binding energy, nucleon separation energies, Q-values, and nucleon residual interaction parameters for atomic nuclei of the isotope F-22 (Fluorine, atomic number Z = 9, mass number A = 22).

  14. Nqrs Data for C24H76BLiN12O4P4 (Subst. No. 1593)

    NASA Astrophysics Data System (ADS)

    Chihara, H.; Nakamura, N.

    This document is part of Subvolume B 'Substances Containing C10H16 … Zn' of Volume 48 'Nuclear Quadrupole Resonance Spectroscopy Data' of Landolt-Börnstein - Group III 'Condensed Matter'. It contains an extract of Section '3.2 Data tables' of the Chapter '3 Nuclear quadrupole resonance data' providing the NQRS data for C24H76BLiN12O4P4 (Subst. No. 1593)

  15. Binomial tau-leap spatial stochastic simulation algorithm for applications in chemical kinetics.

    PubMed

    Marquez-Lago, Tatiana T; Burrage, Kevin

    2007-09-14

    In cell biology, cell signaling pathway problems are often tackled with deterministic temporal models, well-mixed stochastic simulators, and/or hybrid methods. But, in fact, three-dimensional stochastic spatial modeling of reactions happening inside the cell is needed in order to fully understand these cell signaling pathways, because noise effects, low molecular concentrations, and spatial heterogeneity can all affect the cellular dynamics. However, there are ways in which important effects can be accounted for without going to the extent of using highly resolved spatial simulators (such as single-particle software), hence reducing the overall computation time significantly. We present a new coarse-grained modified version of the next subvolume method that allows the user to consider both diffusion and reaction events over relatively long simulation time spans as compared with the original method and other commonly used fully stochastic computational methods. Benchmarking of the simulation algorithm was performed through comparison with the next subvolume method and well-mixed models (MATLAB), as well as stochastic particle reaction and transport simulations (CHEMCELL, Sandia National Laboratories). Additionally, we construct a model based on a set of chemical reactions in the epidermal growth factor receptor pathway. For this particular application and a bistable chemical system example, we analyze and outline the advantages of our binomial tau-leap spatial stochastic simulation algorithm, in terms of efficiency and accuracy, in scenarios of both molecular homogeneity and heterogeneity.
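
    As a toy illustration of the binomial leaping idea (one reaction channel only, no diffusion between subvolumes; the rate constants are arbitrary assumptions), the key point is that sampling firings from a binomial bounded by the reactant count cannot drive populations negative, unlike plain Poisson tau-leaping.

        import numpy as np

        rng = np.random.default_rng(0)

        # Binomial tau-leap update for a single first-order channel A -> B:
        # each of the A molecules independently fires within tau with
        # probability 1 - exp(-k*tau), so the number fired is Binomial(A, p)
        # and can never exceed the available population.
        def binomial_tau_leap(A, k, tau, steps):
            for _ in range(steps):
                p = 1.0 - np.exp(-k * tau)     # per-molecule firing probability
                fired = rng.binomial(A, p)     # bounded by the reactant count
                A -= fired
            return A

        print(binomial_tau_leap(A=1000, k=0.5, tau=0.01, steps=200))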

  16. Biological PET-guided adaptive radiotherapy for dose escalation in head and neck cancer: a systematic review.

    PubMed

    Hamming-Vrieze, Olga; Navran, Arash; Al-Mamgani, Abrahim; Vogel, Wouter V

    2018-06-04

    In recent years, the possibility of adapting radiotherapy to changes in biological tissue parameters has emerged. It is hypothesized that early identification of radio-resistant parts of the tumor during treatment provides the possibility to adjust the radiotherapy plan to improve outcome. The aim of this systematic literature review was to evaluate the current state of the art of biological PET-guided adaptive radiotherapy, focusing on dose escalation to radio-resistant tumor. A structured literature search was performed to select clinical studies including patients with head and neck cancer of the oral cavity, oropharynx, hypopharynx or larynx, with a PET performed during treatment used to develop biological adaptive radiotherapy by (i) delineation of sub-volumes suitable for adaptive re-planning, (ii) in silico adaptive treatment planning, or (iii) treatment of patients with PET-based dose-escalated adaptive radiotherapy. Nineteen articles were selected: 12 analyzing the molecular imaging signal during treatment and 7 focusing on biological adaptive treatment planning, of which two were clinical trials. Studied biological pathways include metabolism (FDG), hypoxia (MISO, FAZA and HX4) and proliferation (FLT). In the development of biological dose adaptation in radiotherapy for head and neck tumors, many aspects of the procedure remain ambiguous. Patient selection, tracer selection for detection of the radio-resistant sub-volumes, timing of adaptive radiotherapy, workflow and treatment planning aspects are discussed in a clinical context.

  17. Lower permian reef-bank bodies’ characterization in the pre-caspian basin

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Wang, Yankun; Yin, Jiquan; Luo, Man; Liang, Shuang

    2018-02-01

    Reef-bank reservoir is one of the targets for exploration of marine carbonate rocks in the Pre-Caspian Basin. Within this basin, the reef-bank bodies were primarily developed in the subsalt Devonian-Lower Permian formations, and are dominated by carbonate platform interior and margin reef-banks. The Lower Permian reef-bank present in the eastern part of the basin is considered prospective. This article provides a sequence and sedimentary facies study utilizing drilling and other data, as well as an analysis and identification of the Lower Permian reef-bank features along the eastern margin of the Pre-Caspian Basin using sub-volume coherence and seismic inversion techniques. The results indicate that the sub-volume coherence technique gives a better reflection of the lateral distribution of reefs, and the seismic inversion impedance enables the identification of reef bodies’ development phases in the vertical direction, since acoustic impedance (AI) is petrophysically considered a tool for distinguishing the reef limestone from the clastic rocks within the formation (limestone exhibits a higher impedance than clastic rock). With this method, the existence of multiple phases of the Lower Permian reef-bank bodies along the eastern margin of the Pre-Caspian Basin has been confirmed. These reef-bank bodies are considered good subsalt exploration targets due to their lateral connectivity from south to north, large distribution range and large scale.

  18. Variability sensitivity of dynamic texture based recognition in clinical CT data

    NASA Astrophysics Data System (ADS)

    Kwitt, Roland; Razzaque, Sharif; Lowell, Jeffrey; Aylward, Stephen

    2014-03-01

    Dynamic texture recognition using a database of template models has recently shown promising results for the task of localizing anatomical structures in Ultrasound video. In order to understand its clinical value, it is imperative to study the sensitivity with respect to inter-patient variability as well as sensitivity to acquisition parameters such as Ultrasound probe angle. Fully addressing patient and acquisition variability issues, however, would require a large database of clinical Ultrasound from many patients, acquired in a multitude of controlled conditions, e.g., using a tracked transducer. Since such data is not readily attainable, we advocate an alternative evaluation strategy using abdominal CT data as a surrogate. In this paper, we describe how to replicate Ultrasound variabilities by extracting subvolumes from CT and interpreting the image material as an ordered sequence of video frames. Utilizing this technique, and based on a database of abdominal CT from 45 patients, we report results on an organ (kidney) recognition task, where we try to discriminate kidney subvolumes/videos from a collection of randomly sampled negative instances. We demonstrate that (1) dynamic texture recognition is relatively insensitive to inter-patient variation, while (2) viewing-angle variability needs to be accounted for in the template database. Since naively extending the template database to counteract variability issues can lead to impractical database sizes, we propose an alternative strategy based on automated identification of a small set of representative models.

  19. 19 CFR 152.16 - Judicial changes in classification.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... OF THE TREASURY (CONTINUED) CLASSIFICATION AND APPRAISEMENT OF MERCHANDISE Classification § 152.16 Judicial changes in classification. The following procedures apply to changes in classification made by... 19 Customs Duties 2 2010-04-01 2010-04-01 false Judicial changes in classification. 152.16 Section...

  20. A multi-layer MRI description of Parkinson's disease

    NASA Astrophysics Data System (ADS)

    La Rocca, M.; Amoroso, N.; Lella, E.; Bellotti, R.; Tangaro, S.

    2017-09-01

    Magnetic resonance imaging (MRI), along with complex network analysis, is currently one of the most widely adopted techniques for detection of structural changes in neurological diseases, such as Parkinson's disease (PD). In this paper, we present a digital image processing study, within the multi-layer network framework, combining several classifiers to evaluate the informative power of MRI features for the discrimination of normal controls (NC) and PD subjects. We define a network for each MRI scan; the nodes are the sub-volumes (patches) the images are divided into, and the links are defined using the Pearson's pairwise correlation between patches. We obtain a multi-layer network whose most important network features, obtained with different feature selection methods, are used to feed a supervised multi-level random forest classifier which exploits this base of knowledge for accurate classification. Method evaluation has been carried out using T1 MRI scans of 354 individuals, including 177 PD subjects and 177 NC from the Parkinson's Progression Markers Initiative (PPMI) database. The experimental results demonstrate that the features obtained from multiplex networks are able to accurately describe PD patterns. Moreover, even if a privileged scale for studying PD exists, exploring the informative content of multiple scales leads to a significant improvement in the discrimination between diseased and healthy subjects. In particular, this method gives a comprehensive overview of brain regions statistically affected by the disease, an added value to the presented study.
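
    The patch-correlation network construction described above can be sketched as follows; the patch size and correlation threshold are illustrative assumptions, and the multi-layer and feature-selection machinery is omitted.

        import numpy as np

        # Divide a scan into cubic patches, correlate patch intensity vectors
        # pairwise (Pearson), and threshold into an adjacency matrix.
        def patch_network(volume, patch=16, r_thresh=0.5):
            z, y, x = (s // patch * patch for s in volume.shape)   # crop to multiples
            v = volume[:z, :y, :x]
            patches = (v.reshape(z // patch, patch, y // patch, patch, x // patch, patch)
                        .transpose(0, 2, 4, 1, 3, 5)
                        .reshape(-1, patch ** 3))                  # one row per patch
            r = np.corrcoef(patches)                               # Pearson correlations
            return (np.abs(r) >= r_thresh) & ~np.eye(len(r), dtype=bool)

        adj = patch_network(np.random.rand(64, 64, 64))
        print(adj.sum() // 2, "edges")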

  1. Classification of wheat: Badhwar profile similarity technique

    NASA Technical Reports Server (NTRS)

    Austin, W. W.

    1980-01-01

    The Badhwar profile similarity classification technique, used successfully for classification of corn, was applied to spring wheat classifications. The software programs and the procedures used to generate full-scene classifications are presented, and numerical results of the acreage estimations are given.

  2. Luminescence and related properties of nanocrystalline porous silicon

    NASA Astrophysics Data System (ADS)

    Koshida, N.

    This document is part of subvolume C3 'Optical Properties' of volume 34 'Semiconductor quantum structures' of Landolt-Börnstein, Group III, Condensed Matter, on the optical properties of quantum structures based on group IV semiconductors. It discusses luminescence and related properties of nanocrystalline porous silicon. Topics include an overview of nanostructured silicon, its fabrication technology, and properties of nanocrystalline porous silicon such as confinement effects, photoluminescence, electroluminescence, carrier charging effects, ballistic transport and emission, and thermally induced acoustic emission.

  3. Whole blood analysis rotor assembly having removable cellular sedimentation bowl

    DOEpatents

    Burtis, C.A.; Johnson, W.F.

    1975-08-26

    A rotor assembly for performing photometric analyses using whole blood samples is described. Following static loading of a gross blood sample within a centrally located, removable, cell sedimentation bowl, the red blood cells in the gross sample are centrifugally separated from the plasma, the plasma displaced from the sedimentation bowl, and measured subvolumes of plasma distributed to respective sample analysis cuvettes positioned in an annular array about the rotor periphery. Means for adding reagents to the respective cuvettes are also described.

  4. A material sensitivity study on the accuracy of deformable organ registration using linear biomechanical models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chi, Y.; Liang, J.; Yan, D.

    2006-02-15

    Model-based deformable organ registration techniques using the finite element method (FEM) have recently been investigated intensively and applied to image-guided adaptive radiotherapy (IGART). These techniques assume that human organs are linearly elastic material, and their mechanical properties are predetermined. Unfortunately, the accurate measurement of the tissue material properties is challenging and the properties usually vary between patients. A common issue is therefore the achievable accuracy of the calculation due to the limited access to tissue elastic material constants. In this study, we performed a systematic investigation on this subject based on tissue biomechanics and computer simulations to establish the relationships between achievable registration accuracy and tissue mechanical and organ geometrical properties. We focused primarily on image registration for three organs: rectal wall, bladder wall, and prostate. The tissue anisotropy due to orientation preference in tissue fiber alignment is captured by using an orthotropic or a transversely isotropic elastic model. First we developed biomechanical models for the rectal wall, bladder wall, and prostate using simplified geometries and investigated the effect of varying material parameters on the resulting organ deformation. Then computer models based on patient image data were constructed, and image registrations were performed. The sensitivity of registration errors was studied by perturbing the tissue material properties from their mean values while fixing the boundary conditions. The simulation results demonstrated that registration error for a subvolume increases as its distance from the boundary increases. Also, a variable associated with material stability was found to be a dominant factor in registration accuracy in the context of material uncertainty. For hollow thin organs such as rectal walls and bladder walls, the registration errors are limited. Given 30% in material uncertainty, the registration error is limited to within 1.3 mm. For a solid organ such as the prostate, the registration errors are much larger. Given 30% in material uncertainty, the registration error can reach 4.5 mm. However, the registration error distribution for prostates shows that most of the subvolumes have a much smaller registration error. A deformable organ registration technique that uses FEM is a good candidate in IGART if the mean material parameters are available.

  5. 5 CFR 1312.8 - Standard identification and markings.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... CLASSIFICATION, DOWNGRADING, DECLASSIFICATION AND SAFEGUARDING OF NATIONAL SECURITY INFORMATION Classification.... (a) Original classification. At the time classified material is produced, the classifier shall apply...: (1) Classification authority. The name/personal identifier, and position title of the original...

  6. 5 CFR 1312.8 - Standard identification and markings.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... CLASSIFICATION, DOWNGRADING, DECLASSIFICATION AND SAFEGUARDING OF NATIONAL SECURITY INFORMATION Classification.... (a) Original classification. At the time classified material is produced, the classifier shall apply...: (1) Classification authority. The name/personal identifier, and position title of the original...

  7. 5 CFR 1312.8 - Standard identification and markings.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... CLASSIFICATION, DOWNGRADING, DECLASSIFICATION AND SAFEGUARDING OF NATIONAL SECURITY INFORMATION Classification.... (a) Original classification. At the time classified material is produced, the classifier shall apply...: (1) Classification authority. The name/personal identifier, and position title of the original...

  8. Hierarchical classification method and its application in shape representation

    NASA Astrophysics Data System (ADS)

    Ireton, M. A.; Oakley, John P.; Xydeas, Costas S.

    1992-04-01

    In this paper we describe a technique for performing shape-based content retrieval of images from a large database. In order to be able to formulate such user-generated queries about visual objects, we have developed a hierarchical classification technique. This hierarchical classification technique enables similarity matching between objects, with the position in the hierarchy signifying the level of generality to be used in the query. The classification technique is unsupervised, robust, and general; it can be applied to any suitable parameter set. To establish the potential of this classifier for aiding visual querying, we have applied it to the classification of the 2-D outlines of leaves.

  9. A classification scheme for edge-localized modes based on their probability distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shabbir, A., E-mail: aqsa.shabbir@ugent.be; Max Planck Institute for Plasma Physics, D-85748 Garching; Hornung, G.

    We present here an automated classification scheme which is particularly well suited to scenarios where the parameters have significant uncertainties or are stochastic quantities. To this end, the parameters are modeled with probability distributions in a metric space and classification is conducted using the notion of nearest neighbors. The presented framework is then applied to the classification of type I and type III edge-localized modes (ELMs) from a set of carbon-wall plasmas at JET. This provides a fast, standardized classification of ELM types which is expected to significantly reduce the effort of ELM experts in identifying ELM types. Further, the classification scheme is general and can be applied to various other plasma phenomena as well.

  10. 6 CFR 7.24 - Duration of classification.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 6 Domestic Security 1 2013-01-01 2013-01-01 false Duration of classification. 7.24 Section 7.24... INFORMATION Classified Information § 7.24 Duration of classification. (a) At the time of original classification, original classification authorities shall apply a date or event in which the information will be...

  11. 6 CFR 7.24 - Duration of classification.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 6 Domestic Security 1 2014-01-01 2014-01-01 false Duration of classification. 7.24 Section 7.24... INFORMATION Classified Information § 7.24 Duration of classification. (a) At the time of original classification, original classification authorities shall apply a date or event in which the information will be...

  12. 6 CFR 7.24 - Duration of classification.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 6 Domestic Security 1 2012-01-01 2012-01-01 false Duration of classification. 7.24 Section 7.24... INFORMATION Classified Information § 7.24 Duration of classification. (a) At the time of original classification, original classification authorities shall apply a date or event in which the information will be...

  13. 6 CFR 7.24 - Duration of classification.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 6 Domestic Security 1 2011-01-01 2011-01-01 false Duration of classification. 7.24 Section 7.24... INFORMATION Classified Information § 7.24 Duration of classification. (a) At the time of original classification, original classification authorities shall apply a date or event in which the information will be...

  14. Gender classification under extended operating conditions

    NASA Astrophysics Data System (ADS)

    Rude, Howard N.; Rizki, Mateen

    2014-06-01

    Gender classification is a critical component of a robust image security system. Many techniques exist to perform gender classification using facial features. In contrast, this paper explores gender classification using body features extracted from clothed subjects. Several of the most effective types of features for gender classification identified in the literature were implemented and applied to the newly developed Seasonal Weather And Gender (SWAG) dataset. SWAG contains video clips of approximately 2000 samples of human subjects captured over a period of several months. The subjects are wearing casual business attire and outer garments appropriate for the specific weather conditions observed in the Midwest. Results from a series of experiments are presented that compare the classification accuracy of systems incorporating various types and combinations of features, applied to multiple looks at subjects at different image resolutions, to determine a baseline performance for gender classification.

  15. The method of planes pressure tensor for a spherical subvolume

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heyes, D. M., E-mail: d.heyes@imperial.ac.uk; Smith, E. R., E-mail: edward.smith05@imperial.ac.uk; Dini, D., E-mail: d.dini@imperial.ac.uk

    2014-02-07

    Various formulas for the local pressure tensor based on a spherical subvolume of radius R are considered. An extension of the Method of Planes (MOP) formula of Todd et al. [Phys. Rev. E 52, 1627 (1995)] for a spherical geometry is derived using the recently proposed Control Volume formulation [E. R. Smith, D. M. Heyes, D. Dini, and T. A. Zaki, Phys. Rev. E 85, 056705 (2012)]. The MOP formula for the purely radial component of the pressure tensor is shown to be mathematically identical to the Radial Irving-Kirkwood formula. Novel off-diagonal elements which are important for momentum conservation emerge naturally from this treatment. The local pressure tensor formulas for a plane are shown to be the large-radius limits of those for spherical surfaces. The radial dependence of the pressure tensor computed by Molecular Dynamics simulation is reported for virtual spheres in a model bulk liquid where the sphere is positioned randomly or whose center is also that of a molecule in the liquid. The probability distributions of angles relating to pairs of atoms which cross the surface of the sphere, and the center of the sphere, are presented as a function of R. The variance in the shear stress calculated from the spherical Volume Averaging method is shown to converge slowly to the limiting values with increasing radius, and to be a strong function of the number of molecules in the simulation cell.

  16. Reduction of artifacts in computer simulation of breast Cooper's ligaments

    NASA Astrophysics Data System (ADS)

    Pokrajac, David D.; Kuperavage, Adam; Maidment, Andrew D. A.; Bakic, Predrag R.

    2016-03-01

    Anthropomorphic software breast phantoms have been introduced as a tool for quantitative validation of breast imaging systems. Efficacy of the validation results depends on the realism of phantom images. The recursive partitioning algorithm based upon octree simulation has been demonstrated as versatile and capable of efficiently generating a large number of phantoms to support virtual clinical trials of breast imaging. Previously, we have observed specific artifacts (here labeled "dents") on the boundaries of simulated Cooper's ligaments. In this work, we have demonstrated that these "dents" result from the approximate determination of the closest simulated ligament to an examined subvolume (i.e., octree node) of the phantom. We propose a modification of the algorithm that determines the closest ligament by considering a pre-specified number of neighboring ligaments, selected based upon the functions that govern the shape of ligaments simulated in the subvolume. We have qualitatively and quantitatively demonstrated that the modified algorithm can lead to elimination or reduction of dent artifacts in software phantoms. In a proof-of-concept example, we simulated a 450 ml phantom with 333 compartments at 100 micrometer resolution. After the proposed modification, we corrected 148,105 dents, with an average size of 5.27 voxels (5.27 nl). We have also qualitatively analyzed the corresponding improvement in the appearance of simulated mammographic images. The proposed algorithm leads to reduction of linear and star-like artifacts in simulated phantom projections, which can be attributed to dents. Analysis of a larger number of phantoms is ongoing.

  17. Three-Dimensional Imaging and Numerical Reconstruction of Graphite/Epoxy Composite Microstructure Based on Ultra-High Resolution X-Ray Computed Tomography

    NASA Technical Reports Server (NTRS)

    Czabaj, M. W.; Riccio, M. L.; Whitacre, W. W.

    2014-01-01

    A combined experimental and computational study aimed at high-resolution 3D imaging, visualization, and numerical reconstruction of fiber-reinforced polymer microstructures at the fiber length scale is presented. To this end, a sample of graphite/epoxy composite was imaged at sub-micron resolution using a 3D X-ray computed tomography microscope. Next, a novel segmentation algorithm was developed, based on concepts adopted from computer vision and multi-target tracking, to detect and estimate, with high accuracy, the position of individual fibers in a volume of the imaged composite. In the current implementation, the segmentation algorithm was based on Global Nearest Neighbor data-association architecture, a Kalman filter estimator, and several novel algorithms for virtual-fiber stitching, smoothing, and overlap removal. The segmentation algorithm was used on a sub-volume of the imaged composite, detecting 508 individual fibers. The segmentation data were qualitatively compared to the tomographic data, demonstrating high accuracy of the numerical reconstruction. Moreover, the data were used to quantify (a) the relative distribution of individual-fiber cross sections within the imaged sub-volume, and (b) the local fiber misorientation relative to the global fiber axis. Finally, the segmentation data were converted using commercially available finite element (FE) software to generate a detailed FE mesh of the composite volume. The methodology described herein demonstrates the feasibility of realizing an FE-based, virtual-testing framework for graphite/fiber composites at the constituent level.

  18. Validating automatic semantic annotation of anatomy in DICOM CT images

    NASA Astrophysics Data System (ADS)

    Pathak, Sayan D.; Criminisi, Antonio; Shotton, Jamie; White, Steve; Robertson, Duncan; Sparks, Bobbi; Munasinghe, Indeera; Siddiqui, Khan

    2011-03-01

    In the current health-care environment, the time available for physicians to browse patients' scans is shrinking due to the rapid increase in the sheer number of images. This is further aggravated by mounting pressure to become more productive in the face of decreasing reimbursement. Hence, there is an urgent need to deliver technology which enables faster and effortless navigation through sub-volume image visualizations. Annotating image regions with semantic labels such as those derived from the RADLEX ontology can vastly enhance image navigation and sub-volume visualization. This paper uses random regression forests for efficient, automatic detection and localization of anatomical structures within DICOM 3D CT scans. A regression forest is a collection of decision trees which are trained to achieve direct mapping from voxels to organ location and size in a single pass. This paper focuses on comparing automated labeling with expert-annotated ground-truth results on a database of 50 highly variable CT scans. Initial investigations show that regression forest derived localization errors are smaller and more robust than those achieved by state-of-the-art global registration approaches. The simplicity of the algorithm's context-rich visual features yields typical runtimes of less than 10 seconds for a 512^3-voxel DICOM CT series on a single-threaded, single-core machine running multiple trees, each tree taking less than a second. Furthermore, qualitative evaluation demonstrates that using the detected organs' locations as an index into the image volume improves the efficiency of the navigational workflow in all the CT studies.
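
    The regression-forest mapping from voxels to organ location can be sketched as follows; the features, offsets, and forest size here are illustrative stand-ins, not the paper's context-rich visual features.

      # Each voxel's features regress an offset to the organ center; averaging
      # the per-voxel votes localizes the organ.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(1)
      positions = rng.uniform(0, 100, size=(5000, 3))   # voxel coordinates (mm)
      organ_center = np.array([40.0, 55.0, 60.0])       # assumed ground truth
      features = positions + rng.normal(0, 2.0, size=positions.shape)  # noisy stand-in features
      offsets = organ_center - positions                # regression targets

      forest = RandomForestRegressor(n_estimators=20, random_state=0)
      forest.fit(features, offsets)

      # At test time every voxel casts a vote; the mean vote estimates the center.
      votes = positions + forest.predict(features)
      print(votes.mean(axis=0))                         # approximately [40, 55, 60]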

  19. 5 CFR 1312.4 - Classified designations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) Top Secret. This classification shall be applied only to information the unauthorized disclosure of... original classification authority is able to identify or describe. (2) Secret. This classification shall be...

  20. Exploring the impact of wavelet-based denoising in the classification of remote sensing hyperspectral images

    NASA Astrophysics Data System (ADS)

    Quesada-Barriuso, Pablo; Heras, Dora B.; Argüello, Francisco

    2016-10-01

    The classification of remote sensing hyperspectral images for land cover applications is a very active topic. In the case of supervised classification, Support Vector Machines (SVMs) play a dominant role. Recently, the Extreme Learning Machine (ELM) algorithm has also been extensively used. The classification scheme previously published by the authors, called WT-EMP, introduces spatial information into the classification process by means of an Extended Morphological Profile (EMP) that is created from features extracted by wavelets. In addition, the hyperspectral image is denoised in the 2-D spatial domain, also using wavelets, and is joined to the EMP via a stacked vector. In this paper, the scheme is improved to achieve two goals. The first is to reduce the classification time while preserving the accuracy of the classification by using ELM instead of SVM. The second is to improve the accuracy by performing not only a 2-D denoising for every spectral band, but also a prior 1-D spectral-signature denoising applied to each pixel vector of the image. For each denoising step the image is transformed by applying a 1-D or 2-D wavelet transform, and then NeighShrink thresholding is applied. Improvements in terms of classification accuracy are obtained, especially for images with close regions in the classification reference map, because in these cases the accuracy of the classification at the edges between classes is more relevant.
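
    A sketch of the per-pixel 1-D signature denoising step, with plain soft thresholding at the universal threshold substituted for NeighShrink to keep the example self-contained; the wavelet, level, and signal are assumptions.

      # Denoise one pixel's spectral signature via wavelet coefficient shrinkage.
      import numpy as np
      import pywt

      def denoise_signature(spectrum, wavelet="db4", level=3):
          coeffs = pywt.wavedec(spectrum, wavelet, level=level)
          # Universal threshold estimated from the finest detail coefficients.
          sigma = np.median(np.abs(coeffs[-1])) / 0.6745
          thr = sigma * np.sqrt(2 * np.log(len(spectrum)))
          coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                                  for c in coeffs[1:]]
          return pywt.waverec(coeffs, wavelet)[: len(spectrum)]

      # Toy pixel vector: a smooth spectral shape plus noise.
      x = np.linspace(0, 4 * np.pi, 256)
      noisy = np.sin(x) + 0.3 * np.random.default_rng(2).normal(size=x.size)
      clean = denoise_signature(noisy)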

  1. An Evaluation of Item Response Theory Classification Accuracy and Consistency Indices

    ERIC Educational Resources Information Center

    Wyse, Adam E.; Hao, Shiqi

    2012-01-01

    This article introduces two new classification consistency indices that can be used when item response theory (IRT) models have been applied. The new indices are shown to be related to Rudner's classification accuracy index and Guo's classification accuracy index. The Rudner- and Guo-based classification accuracy and consistency indices are…

  2. 14 CFR 1203.412 - Classification guides.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... of the classification designations (i.e., Top Secret, Secret or Confidential) apply to the identified... writing by an official with original Top Secret classification authority; the identity of the official...

  3. 48 CFR 52.247-53 - Freight Classification Description.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 2 2012-10-01 2012-10-01 false Freight Classification....247-53 Freight Classification Description. As prescribed in 47.305-9(b)(1), insert the following... modifications of previously shipped items, and different freight classifications may apply: Freight...

  4. 48 CFR 52.247-53 - Freight Classification Description.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... modifications of previously shipped items, and different freight classifications may apply: Freight... 48 Federal Acquisition Regulations System 2 2014-10-01 2014-10-01 false Freight Classification....247-53 Freight Classification Description. As prescribed in 47.305-9(b)(1), insert the following...

  5. 48 CFR 52.247-53 - Freight Classification Description.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... modifications of previously shipped items, and different freight classifications may apply: Freight... 48 Federal Acquisition Regulations System 2 2013-10-01 2013-10-01 false Freight Classification....247-53 Freight Classification Description. As prescribed in 47.305-9(b)(1), insert the following...

  6. 48 CFR 52.247-53 - Freight Classification Description.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 2 2011-10-01 2011-10-01 false Freight Classification....247-53 Freight Classification Description. As prescribed in 47.305-9(b)(1), insert the following... modifications of previously shipped items, and different freight classifications may apply: Freight...

  7. 48 CFR 52.247-53 - Freight Classification Description.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 2 2010-10-01 2010-10-01 false Freight Classification....247-53 Freight Classification Description. As prescribed in 47.305-9(b)(1), insert the following... modifications of previously shipped items, and different freight classifications may apply: Freight...

  8. 48 CFR 47.305-9 - Commodity description and freight classification.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... of previously shipped items, and different freight classifications may apply, the contracting officer... freight classification. 47.305-9 Section 47.305-9 Federal Acquisition Regulations System FEDERAL... Commodity description and freight classification. (a) Generally, the freight rate for supplies is based on...

  9. 48 CFR 47.305-9 - Commodity description and freight classification.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... of previously shipped items, and different freight classifications may apply, the contracting officer... freight classification. 47.305-9 Section 47.305-9 Federal Acquisition Regulations System FEDERAL... Commodity description and freight classification. (a) Generally, the freight rate for supplies is based on...

  10. 48 CFR 47.305-9 - Commodity description and freight classification.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... of previously shipped items, and different freight classifications may apply, the contracting officer... freight classification. 47.305-9 Section 47.305-9 Federal Acquisition Regulations System FEDERAL... Commodity description and freight classification. (a) Generally, the freight rate for supplies is based on...

  11. 48 CFR 47.305-9 - Commodity description and freight classification.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... of previously shipped items, and different freight classifications may apply, the contracting officer... freight classification. 47.305-9 Section 47.305-9 Federal Acquisition Regulations System FEDERAL... Commodity description and freight classification. (a) Generally, the freight rate for supplies is based on...

  12. 46 CFR 503.55 - Derivative classification.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 9 2011-10-01 2011-10-01 false Derivative classification. 503.55 Section 503.55... Security Program § 503.55 Derivative classification. (a) In accordance with Part 2 of Executive Order 13526... developed material consistent with the classification markings that apply to the source information, is...

  13. 46 CFR 503.55 - Derivative classification.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 9 2010-10-01 2010-10-01 false Derivative classification. 503.55 Section 503.55... Security Program § 503.55 Derivative classification. (a) In accordance with Part 2 of Executive Order 12958... developed material consistent with the classification markings that apply to the source information, is...

  14. 49 CFR 1105.6 - Classification of actions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 8 2011-10-01 2011-10-01 false Classification of actions. 1105.6 Section 1105.6... Classification of actions. (a) Environmental Impact Statements will normally be prepared for rail construction... classifications in this section apply without regard to whether the action is proposed by application, petition...

  15. 46 CFR 503.55 - Derivative classification.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 9 2014-10-01 2014-10-01 false Derivative classification. 503.55 Section 503.55... Security Program § 503.55 Derivative classification. (a) In accordance with Part 2 of Executive Order 13526... developed material consistent with the classification markings that apply to the source information, is...

  16. 46 CFR 503.55 - Derivative classification.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 9 2013-10-01 2013-10-01 false Derivative classification. 503.55 Section 503.55... Security Program § 503.55 Derivative classification. (a) In accordance with Part 2 of Executive Order 13526... developed material consistent with the classification markings that apply to the source information, is...

  17. 46 CFR 503.55 - Derivative classification.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 9 2012-10-01 2012-10-01 false Derivative classification. 503.55 Section 503.55... Security Program § 503.55 Derivative classification. (a) In accordance with Part 2 of Executive Order 13526... developed material consistent with the classification markings that apply to the source information, is...

  18. 49 CFR 1105.6 - Classification of actions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 8 2010-10-01 2010-10-01 false Classification of actions. 1105.6 Section 1105.6... Classification of actions. (a) Environmental Impact Statements will normally be prepared for rail construction... classifications in this section apply without regard to whether the action is proposed by application, petition...

  19. 5 CFR 1312.4 - Classified designations.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... describe. (3) Confidential. This classification shall be applied only to information the unauthorized... 1312.4 Administrative Personnel OFFICE OF MANAGEMENT AND BUDGET OMB DIRECTIVES CLASSIFICATION, DOWNGRADING, DECLASSIFICATION AND SAFEGUARDING OF NATIONAL SECURITY INFORMATION Classification and...

  20. 5 CFR 1312.4 - Classified designations.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... describe. (3) Confidential. This classification shall be applied only to information the unauthorized... 1312.4 Administrative Personnel OFFICE OF MANAGEMENT AND BUDGET OMB DIRECTIVES CLASSIFICATION, DOWNGRADING, DECLASSIFICATION AND SAFEGUARDING OF NATIONAL SECURITY INFORMATION Classification and...

  1. 5 CFR 1312.4 - Classified designations.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... describe. (3) Confidential. This classification shall be applied only to information the unauthorized... 1312.4 Administrative Personnel OFFICE OF MANAGEMENT AND BUDGET OMB DIRECTIVES CLASSIFICATION, DOWNGRADING, DECLASSIFICATION AND SAFEGUARDING OF NATIONAL SECURITY INFORMATION Classification and...

  2. Nqrs Data for C3H2Cl10N2PSb[C3HCl4N2P·Cl6HSb](Subst. No. 0601)

    NASA Astrophysics Data System (ADS)

    Chihara, H.; Nakamura, N.

    This document is part of Subvolume A `Substances Containing Ag … C10H15' of Volume 48 `Nuclear Quadrupole Resonance Spectroscopy Data' of Landolt-Börnstein - Group III `Condensed Matter'. It contains an extract of Section `3.2 Data tables' of the Chapter `3 Nuclear quadrupole resonance data' providing the NQRS data for C3H2Cl10N2PSb [C3HCl4N2P·Cl6HSb] (Subst. No. 0601)

  3. Atomic Mass and Nuclear Binding Energy for U-287 (Uranium)

    NASA Astrophysics Data System (ADS)

    Sukhoruchkin, S. I.; Soroko, Z. N.

    This document is part of the Supplement containing the complete sets of data of Subvolume B `Nuclei with Z = 55 - 100' of Volume 22 `Nuclear Binding Energies and Atomic Masses' of Landolt-Börnstein - Group I `Elementary Particles, Nuclei and Atoms', and additionally including data for nuclei with Z = 101 - 130. It provides atomic mass, mass excess, nuclear binding energy, nucleon separation energies, Q-values, and nucleon residual interaction parameters for atomic nuclei of the isotope U-287 (Uranium, atomic number Z = 92, mass number A = 287).

  4. Nqrs Data for H4I3Li2NO9 [H4INO3·2(ILiO3)] (Subst. No. 2278)

    NASA Astrophysics Data System (ADS)

    Chihara, H.; Nakamura, N.

    This document is part of Subvolume B 'Substances Containing C10H16 … Zn' of Volume 48 'Nuclear Quadrupole Resonance Spectroscopy Data' of Landolt-Börnstein - Group III 'Condensed Matter'. It contains an extract of Section '3.2 Data tables' of the Chapter '3 Nuclear quadrupole resonance data' providing the NQRS data for H4I3Li2NO9 [H4INO3·2(ILiO3)] (Subst. No. 2278)

  5. Nqrs Data for H6I3InO12 [I3InO9·3(H2O)] (Subst. No. 2289)

    NASA Astrophysics Data System (ADS)

    Chihara, H.; Nakamura, N.

    This document is part of Subvolume B 'Substances Containing C10H16 … Zn' of Volume 48 'Nuclear Quadrupole Resonance Spectroscopy Data' of Landolt-Börnstein - Group III 'Condensed Matter'. It contains an extract of Section '3.2 Data tables' of the Chapter '3 Nuclear quadrupole resonance data' providing the NQRS data for H6I3InO12 [I3InO9·3(H2O)] (Subst. No. 2289)

  6. Atomic Mass and Nuclear Binding Energy for Ac-212 (Actinium)

    NASA Astrophysics Data System (ADS)

    Sukhoruchkin, S. I.; Soroko, Z. N.

    This document is part of the Supplement containing the complete sets of data of Subvolume B `Nuclei with Z = 55 - 100' of Volume 22 `Nuclear Binding Energies and Atomic Masses' of Landolt-Börnstein - Group I `Elementary Particles, Nuclei and Atoms', and additionally including data for nuclei with Z = 101 - 130. It provides atomic mass, mass excess, nuclear binding energy, nucleon separation energies, Q-values, and nucleon residual interaction parameters for atomic nuclei of the isotope Ac-212 (Actinium, atomic number Z = 89, mass number A = 212).

  7. One input-class and two input-class classifications for differentiating olive oil from other edible vegetable oils by use of the normal-phase liquid chromatography fingerprint of the methyl-transesterified fraction.

    PubMed

    Jiménez-Carvelo, Ana M; Pérez-Castaño, Estefanía; González-Casado, Antonio; Cuadros-Rodríguez, Luis

    2017-04-15

    A new method for differentiation of olive oil (independently of the quality category) from other vegetable oils (canola, safflower, corn, peanut, seeds, grapeseed, palm, linseed, sesame and soybean) has been developed. The analytical procedure for chromatographic fingerprinting of the methyl-transesterified fraction of each vegetable oil, using normal-phase liquid chromatography, is described, and the chemometric strategies applied are discussed. Several chemometric methods, such as k-nearest neighbours (kNN), partial least squares-discriminant analysis (PLS-DA), support vector machine classification analysis (SVM-C), and soft independent modelling of class analogies (SIMCA), were applied to build classification models. Performance of the classification was evaluated and ranked using several classification quality metrics. Discriminant analysis based on the use of one input-class (plus a dummy class) was applied for the first time in this study.
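
    Two of the listed classifiers (kNN and SVM-C) can be benchmarked on a fingerprint matrix as sketched below; the data are synthetic placeholders, and PLS-DA and SIMCA are omitted for brevity.

      # Cross-validated comparison of two classifiers on fingerprint vectors.
      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(3)
      X = np.vstack([rng.normal(0, 1, (40, 50)), rng.normal(0.8, 1, (40, 50))])
      y = np.array([1] * 40 + [0] * 40)   # 1 = olive oil, 0 = other oil

      for name, clf in [("kNN", KNeighborsClassifier(5)),
                        ("SVM", SVC(kernel="rbf"))]:
          model = make_pipeline(StandardScaler(), clf)
          print(name, cross_val_score(model, X, y, cv=5).mean())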

  8. Automatic Segmentation of Drosophila Neural Compartments Using GAL4 Expression Data Reveals Novel Visual Pathways.

    PubMed

    Panser, Karin; Tirian, Laszlo; Schulze, Florian; Villalba, Santiago; Jefferis, Gregory S X E; Bühler, Katja; Straw, Andrew D

    2016-08-08

    Identifying distinct anatomical structures within the brain and developing genetic tools to target them are fundamental steps for understanding brain function. We hypothesize that enhancer expression patterns can be used to automatically identify functional units such as neuropils and fiber tracts. We used two recent, genome-scale Drosophila GAL4 libraries and associated confocal image datasets to segment large brain regions into smaller subvolumes. Our results (available at https://strawlab.org/braincode) support this hypothesis because regions with well-known anatomy, namely the antennal lobes and central complex, were automatically segmented into familiar compartments. The basis for the structural assignment is clustering of voxels based on patterns of enhancer expression. These initial clusters are agglomerated to make hierarchical predictions of structure. We applied the algorithm to central brain regions receiving input from the optic lobes. Based on the automated segmentation and manual validation, we can identify and provide promising driver lines for 11 previously identified and 14 novel types of visual projection neurons and their associated optic glomeruli. The same strategy can be used in other brain regions and likely other species, including vertebrates.
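
    The voxel-clustering-then-agglomeration strategy can be sketched as follows, assuming k-means for the initial voxel clusters and hierarchical agglomeration of their centroids; the array shapes and cluster counts are illustrative, not the study's.

      # Stage 1: cluster voxels by their expression pattern across driver lines.
      # Stage 2: agglomerate the cluster centroids into a coarse hierarchy.
      import numpy as np
      from sklearn.cluster import KMeans, AgglomerativeClustering

      rng = np.random.default_rng(4)
      voxels = rng.random((10000, 30))   # rows: voxels; cols: expression per line

      km = KMeans(n_clusters=50, n_init=10, random_state=0).fit(voxels)
      hier = AgglomerativeClustering(n_clusters=8).fit(km.cluster_centers_)
      print(hier.labels_)                # coarse compartment id per fine cluster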

  9. Application toward Confocal Full-Field Microscopic X-ray Absorption Near Edge Structure Spectroscopy.

    PubMed

    Tack, Pieter; Vekemans, Bart; Laforce, Brecht; Rudloff-Grund, Jennifer; Hernández, Willinton Y; Garrevoet, Jan; Falkenberg, Gerald; Brenker, Frank; Van Der Voort, Pascal; Vincze, Laszlo

    2017-02-07

    Using X-ray absorption near edge structure (XANES) spectroscopy, information on the local chemical structure and oxidation state of an element of interest can be acquired. Conventionally, this information is obtained in a spatially resolved manner by scanning a sample through a focused X-ray beam. Recently, full-field methods have been developed to obtain direct 2D chemical state information by imaging a large sample area. These methods usually operate in transmission mode, restricting their use to thin and transmitting samples. Here, a fluorescence-mode method is presented using an energy-dispersive pnCCD detector, the SLcam, characterized by measurement times far shorter than generally attainable. Additionally, this method operates in confocal mode, thus providing direct 3D spatially resolved chemical state information from a selected subvolume of a sample, without the need to rotate the sample. The method is applied to two samples: a gold-supported magnesia catalyst (Au/MgO) and a natural diamond containing Fe-rich inclusions. Both samples provide XANES spectra that can be overlapped with reference XANES spectra, allowing this method to be used for fingerprinting and linear combination analysis of known XANES reference compounds.

  10. Computational methods for diffusion-influenced biochemical reactions.

    PubMed

    Dobrzynski, Maciej; Rodríguez, Jordi Vidal; Kaandorp, Jaap A; Blom, Joke G

    2007-08-01

    We compare stochastic computational methods accounting for space and the discrete nature of reactants in biochemical systems. Implementations based on Brownian dynamics (BD) and the reaction-diffusion master equation are applied to a simplified gene expression model and to a signal transduction pathway in Escherichia coli. In the regime where the number of molecules is small and reactions are diffusion-limited, predicted fluctuations in the product number vary between the methods, while the average is the same. Computational approaches at the level of the reaction-diffusion master equation compute the same fluctuations as the reference result obtained from the particle-based method if the size of the sub-volumes is comparable to the diameter of reactants. Using numerical simulations of reversible binding of a pair of molecules we argue that the disagreement in predicted fluctuations is due to different modeling of inter-arrival times between reaction events. Simulations for a more complex biological study show that the different approaches lead to different results due to modeling issues. Finally, we present the physical assumptions behind the mesoscopic models for the reaction-diffusion systems. Input files for the simulations and the source code of GMP can be found under the following address: http://www.cwi.nl/projects/sic/bioinformatics2007/
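
    A toy direct-method stochastic simulation in the spirit of the reaction-diffusion master equation, with two subvolumes, one bimolecular reaction, and diffusion of one species; the rates are arbitrary and this is not the GMP code referenced above.

      # Gillespie-style simulation: A + B -> C in each subvolume, A hops between.
      import numpy as np

      rng = np.random.default_rng(5)
      k, d = 0.01, 0.5                     # reaction rate, hop (diffusion) rate
      A = [50, 0]; B = [30, 30]; C = [0, 0]
      t, t_end = 0.0, 10.0

      while t < t_end:
          props = [k * A[0] * B[0], k * A[1] * B[1], d * A[0], d * A[1]]
          total = sum(props)
          if total == 0:
              break
          t += rng.exponential(1.0 / total)
          event = rng.choice(4, p=np.array(props) / total)
          if event == 0:   A[0] -= 1; B[0] -= 1; C[0] += 1
          elif event == 1: A[1] -= 1; B[1] -= 1; C[1] += 1
          elif event == 2: A[0] -= 1; A[1] += 1   # A hops left -> right
          else:            A[1] -= 1; A[0] += 1   # A hops right -> left

      print(A, B, C)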

  11. Classification of Parkinson's disease utilizing multi-edit nearest-neighbor and ensemble learning algorithms with speech samples.

    PubMed

    Zhang, He-Hua; Yang, Liuyang; Liu, Yuchuan; Wang, Pin; Yin, Jun; Li, Yongming; Qiu, Mingguo; Zhu, Xueru; Yan, Fang

    2016-11-16

    The use of speech-based data in the classification of Parkinson's disease (PD) has been shown to provide an effective, non-invasive mode of classification in recent years. Thus, there has been an increased interest in speech pattern analysis methods applicable to Parkinsonism for building predictive tele-diagnosis and tele-monitoring models. One of the obstacles in optimizing classifications is to reduce noise within the collected speech samples, thus ensuring better classification accuracy and stability. While the currently used methods are effective, the ability to invoke instance selection has been seldom examined. In this study, a PD classification algorithm was proposed and examined that combines a multi-edit-nearest-neighbor (MENN) algorithm and an ensemble learning algorithm. First, the MENN algorithm is applied for selecting optimal training speech samples iteratively, thereby obtaining samples with high separability. Next, an ensemble learning algorithm, random forest (RF) or decorrelated neural network ensembles (DNNE), is trained on the selected training samples. Lastly, the trained ensemble learning algorithms are applied to the test samples for PD classification. The proposed method was examined using recently deposited public datasets and compared against other currently used algorithms for validation. Experimental results showed that the proposed algorithm obtained the largest improvement in classification accuracy (29.44%) compared with the other algorithms examined. Furthermore, the MENN algorithm alone was found to improve classification accuracy by as much as 45.72%. Moreover, the proposed algorithm was found to exhibit higher stability, particularly when combining the MENN and RF algorithms. This study showed that the proposed method could improve PD classification when using speech data and can be applied to future studies seeking to improve PD classification methods.
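
    A sketch of the two-step pipeline, with a simplified nearest-neighbor editing pass standing in for MENN, followed by a random forest fit on the retained samples; the dataset is synthetic.

      # Edit the training set with a kNN consistency filter, then fit a forest.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_predict
      from sklearn.neighbors import KNeighborsClassifier

      X, y = make_classification(n_samples=400, n_features=20, flip_y=0.2,
                                 random_state=0)

      # Editing pass: drop samples whose out-of-fold kNN prediction disagrees
      # with their label; iterate until the set stabilizes (MENN iterates similarly).
      keep = np.ones(len(y), dtype=bool)
      for _ in range(3):
          pred = cross_val_predict(KNeighborsClassifier(3), X[keep], y[keep], cv=5)
          still = pred == y[keep]
          if still.all():
              break
          keep[np.flatnonzero(keep)[~still]] = False

      rf = RandomForestClassifier(n_estimators=100, random_state=0)
      rf.fit(X[keep], y[keep])
      print(keep.sum(), "of", len(y), "samples retained")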

  12. Time-reversal imaging for classification of submerged elastic targets via Gibbs sampling and the Relevance Vector Machine.

    PubMed

    Dasgupta, Nilanjan; Carin, Lawrence

    2005-04-01

    Time-reversal imaging (TRI) is analogous to matched-field processing, although TRI is typically very wideband and is appropriate for subsequent target classification (in addition to localization). Time-reversal techniques, as applied to acoustic target classification, are highly sensitive to channel mismatch. Hence, it is crucial to estimate the channel parameters before time-reversal imaging is performed. The channel-parameter statistics are estimated here by applying a geoacoustic inversion technique based on Gibbs sampling. The maximum a posteriori (MAP) estimate of the channel parameters is then used to perform time-reversal imaging. Time-reversal implementation requires a fast forward model, implemented here by a normal-mode framework. In addition to imaging, extraction of features from the time-reversed images is explored, with these applied to subsequent target classification. The classification of time-reversed signatures is performed by the relevance vector machine (RVM). The efficacy of the technique is analyzed on simulated in-channel data generated by a free-field finite element method (FEM) code, in conjunction with a channel propagation model, wherein the final classification performance is demonstrated to be relatively insensitive to the associated channel parameters. The underlying theory of Gibbs sampling and TRI is presented along with the feature extraction and target classification via the RVM.

  13. Comparison of Pixel-Based and Object-Based Classification Using Parameters and Non-Parameters Approach for the Pattern Consistency of Multi Scale Landcover

    NASA Astrophysics Data System (ADS)

    Juniati, E.; Arrofiqoh, E. N.

    2017-09-01

    Information extraction from remote sensing data, especially land cover, can be obtained by digital classification. In practice, some people are more comfortable using visual interpretation to retrieve land cover information. However, it is highly influenced by the subjectivity and knowledge of the interpreter, and it also takes time. Digital classification can be done in several ways, depending on the defined mapping approach and the assumptions on data distribution. This study compared several classifier methods for several data types at the same location. The data used were Landsat 8 satellite imagery, SPOT 6 and orthophotos. In practice, these data are used to produce land cover maps at 1:50,000 map scale for Landsat, 1:25,000 for SPOT and 1:5,000 for orthophotos, but using visual interpretation to retrieve the information. Maximum likelihood classification (MLC), a pixel-based parametric approach, was applied to these data, as were artificial neural network classifiers, a pixel-based non-parametric approach. Moreover, this study applied object-based classifiers to the data. The classification system implemented is the land cover classification of the Indonesian topographic map. The classification was applied to each data source and is expected to recognize the pattern and to assess the consistency of the land cover maps produced from each data source. Furthermore, the study analyses the benefits and limitations of the use of these methods.

  14. 12 CFR 605.502 - Program and procedures.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... procedures. (a) The Farm Credit Administration has no authority for the original classification of... classify information. (b) Derivative classification. “Derivative classification” means the incorporating... developed material consistent with the classification markings that apply to the source information...

  15. 12 CFR 605.502 - Program and procedures.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... procedures. (a) The Farm Credit Administration has no authority for the original classification of... classify information. (b) Derivative classification. “Derivative classification” means the incorporating... developed material consistent with the classification markings that apply to the source information...

  16. 12 CFR 605.502 - Program and procedures.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... procedures. (a) The Farm Credit Administration has no authority for the original classification of... classify information. (b) Derivative classification. “Derivative classification” means the incorporating... developed material consistent with the classification markings that apply to the source information...

  17. 12 CFR 605.502 - Program and procedures.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... procedures. (a) The Farm Credit Administration has no authority for the original classification of... classify information. (b) Derivative classification. “Derivative classification” means the incorporating... developed material consistent with the classification markings that apply to the source information...

  18. Event classification and optimization methods using artificial intelligence and other relevant techniques: Sharing the experiences

    NASA Astrophysics Data System (ADS)

    Mohamed, Abdul Aziz; Hasan, Abu Bakar; Ghazali, Abu Bakar Mhd.

    2017-01-01

    Classification of large data into their respective classes or groups can be carried out with the help of artificial intelligence (AI) tools readily available in the market. To get the optimum or best results, optimization tools can be applied to those data. Classification and optimization have been used by researchers throughout their works, and the outcomes were very encouraging. Here, the authors share what they have experienced in three different areas of applied research.

  19. 15 CFR 748.7 - Applying electronically for a license or Classification request.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 15 Commerce and Foreign Trade 2 2010-01-01 2010-01-01 false Applying electronically for a license or Classification request. 748.7 Section 748.7 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade (Continued) BUREAU OF INDUSTRY AND SECURITY, DEPARTMENT OF COMMERCE EXPORT...

  20. A vegetation classification system applied to southern California

    Treesearch

    Timothy E. Paysen; Jeanine A. Derby; Hugh Black; Vernon C. Bleich; John W. Mincks

    1980-01-01

    A classification system for use in describing vegetation has been developed and is being applied to southern California. It is based upon a hierarchical stratification of vegetation, using physiognomic and taxonomic criteria. The system categories are Formation, Subformation, Series, Association, and Phase. Formations, Subformations, and Series have been specified for...

  1. Lava Morphology Classification of a Fast-Spreading Ridge Using Deep-Towed Sonar Data: East Pacific Rise

    NASA Astrophysics Data System (ADS)

    Meyer, J.; White, S.

    2005-05-01

    Classification of lava morphology on a regional scale contributes to the understanding of the distribution and extent of lava flows at a mid-ocean ridge. Seafloor classification is essential to understand the regional undersea environment at mid-ocean ridges. In this study, a classification scheme was developed to identify and extract textural patterns of different lava morphologies along the East Pacific Rise using DSL-120 side-scan and ARGO camera imagery. Application of an accurate image classification technique to side-scan sonar allows us to expand upon the locally available visual ground reference data to make the first comprehensive regional maps of small-scale lava morphology present at a mid-ocean ridge. The submarine lava morphologies focused upon in this study (sheet flows, lobate flows, and pillow flows) have unique textures. Several algorithms were applied to the sonar backscatter intensity images to produce multiple textural image layers useful in distinguishing the different lava morphologies. The intensity and spatially enhanced images were then combined and applied to a hybrid classification technique. The hybrid classification involves two integrated classifiers, a rule-based expert system classifier and a machine learning classifier. The complementary capabilities of the two integrated classifiers provided a higher accuracy of regional seafloor classification than using either classifier alone. Once trained, the hybrid classifier can be applied to classify neighboring images with relative ease. This classification technique has been used to map the lava morphology distribution and infer spatial variability of lava effusion rates along two segments of the East Pacific Rise, 17 deg S and 9 deg N. Future use of this technique may also be useful for attaining temporal information: repeated documentation of morphology classification in this dynamic environment can be compared to detect regional seafloor change.

  2. System and method for radiation dose calculation within sub-volumes of a monte carlo based particle transport grid

    DOEpatents

    Bergstrom, Paul M.; Daly, Thomas P.; Moses, Edward I.; Patterson, Jr., Ralph W.; Schach von Wittenau, Alexis E.; Garrett, Dewey N.; House, Ronald K.; Hartmann-Siantar, Christine L.; Cox, Lawrence J.; Fujino, Donald H.

    2000-01-01

    A system and method is disclosed for radiation dose calculation within sub-volumes of a particle transport grid. In a first step of the method voxel volumes enclosing a first portion of the target mass are received. A second step in the method defines dosel volumes which enclose a second portion of the target mass and overlap the first portion. A third step in the method calculates common volumes between the dosel volumes and the voxel volumes. A fourth step in the method identifies locations in the target mass of energy deposits. And, a fifth step in the method calculates radiation doses received by the target mass within the dosel volumes. A common volume calculation module inputs voxel volumes enclosing a first portion of the target mass, inputs voxel mass densities corresponding to a density of the target mass within each of the voxel volumes, defines dosel volumes which enclose a second portion of the target mass and overlap the first portion, and calculates common volumes between the dosel volumes and the voxel volumes. A dosel mass module, multiplies the common volumes by corresponding voxel mass densities to obtain incremental dosel masses, and adds the incremental dosel masses corresponding to the dosel volumes to obtain dosel masses. A radiation transport module identifies locations in the target mass of energy deposits. And, a dose calculation module, coupled to the common volume calculation module and the radiation transport module, for calculating radiation doses received by the target mass within the dosel volumes.
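
    The common-volume step can be sketched for the special case of axis-aligned box-shaped voxels and dosels, where the intersection volume is the product of per-axis interval overlaps; the patent does not restrict dosel shape, and the density value is an assumption.

      # Overlap volume of two axis-aligned boxes, then an incremental dosel mass.
      import numpy as np

      def box_overlap_volume(lo1, hi1, lo2, hi2):
          """Volume of intersection of boxes [lo1,hi1] and [lo2,hi2] (3-vectors)."""
          extents = np.minimum(hi1, hi2) - np.maximum(lo1, lo2)
          return float(np.prod(np.clip(extents, 0.0, None)))

      voxel = (np.array([0, 0, 0]), np.array([2, 2, 2]))   # cm
      dosel = (np.array([1, 1, 1]), np.array([4, 4, 4]))
      common = box_overlap_volume(*voxel, *dosel)          # -> 1.0 cm^3
      density = 1.04                                       # g/cm^3, assumed tissue density
      incremental_dosel_mass = common * density            # common volume x voxel density
      print(common, incremental_dosel_mass)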

  3. Deep convolutional neural network training enrichment using multi-view object-based analysis of Unmanned Aerial systems imagery for wetlands classification

    NASA Astrophysics Data System (ADS)

    Liu, Tao; Abd-Elrahman, Amr

    2018-05-01

    Deep convolutional neural network (DCNN) models require massive training datasets to trigger their image classification power, while collecting training samples for remote sensing applications is usually an expensive process. When DCNN is simply implemented with traditional object-based image analysis (OBIA) for classification of Unmanned Aerial Systems (UAS) orthoimagery, its power may be undermined if the number of training samples is relatively small. This research aims to develop a novel OBIA classification approach that can take advantage of DCNN by enriching the training dataset automatically using multi-view data. Specifically, this study introduces a Multi-View Object-based classification using Deep convolutional neural network (MODe) method to process UAS images for land cover classification. MODe conducts the classification on multi-view UAS images instead of directly on the orthoimage, and obtains the final results via a voting procedure. 10-fold cross validation results show the mean overall classification accuracy increasing substantially from 65.32%, when DCNN was applied to the orthoimage, to 82.08% when MODe was implemented. This study also compared the performances of the support vector machine (SVM) and random forest (RF) classifiers with DCNN under the traditional OBIA and the proposed multi-view OBIA frameworks. The results indicate that the advantage of DCNN over traditional classifiers in terms of accuracy is more obvious when these classifiers were applied within the proposed multi-view OBIA framework than within the traditional OBIA framework.
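
    The voting procedure over per-view predictions reduces to a majority vote per object, as in this sketch; the view labels are made up for illustration.

      # Majority vote over one object's labels predicted from multiple views.
      import numpy as np

      def vote(per_view_labels):
          labels, counts = np.unique(per_view_labels, return_counts=True)
          return labels[np.argmax(counts)]

      views = np.array(["grass", "grass", "road", "grass", "water"])
      print(vote(views))  # -> 'grass'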

  4. Looking at the ICF and human communication through the lens of classification theory.

    PubMed

    Walsh, Regina

    2011-08-01

    This paper explores the insights that classification theory can provide about the application of the International Classification of Functioning, Disability and Health (ICF) to communication. It first considers the relationship between conceptual models and classification systems, highlighting that classification systems in speech-language pathology (SLP) have not historically been based on conceptual models of human communication. It then overviews the key concepts and criteria of classification theory. Applying classification theory to the ICF and communication raises a number of issues, some previously highlighted through clinical application. Six focus questions from classification theory are used to explore these issues, and to propose the creation of an ICF-related conceptual model of communicating for the field of communication disability, which would address some of the issues raised. Developing a conceptual model of communication for SLP purposes closely articulated with the ICF would foster productive intra-professional discourse, while at the same time allow the profession to continue to use the ICF for purposes in inter-disciplinary discourse. The paper concludes by suggesting the insights of classification theory can assist professionals to apply the ICF to communication with the necessary rigour, and to work further in developing a conceptual model of human communication.

  5. A New Tool for Climatic Analysis Using the Koppen Climate Classification

    ERIC Educational Resources Information Center

    Larson, Paul R.; Lohrengel, C. Frederick, II

    2011-01-01

    The purpose of climate classification is to help make order of the seemingly endless spatial distribution of climates. The Koppen classification system in a modified format is the most widely applied system in use today. This system may not be the best nor most complete climate classification that can be conceived, but it has gained widespread…

  6. A system of vegetation classification applied to Hawaii

    Treesearch

    Michael G. Buck; Timothy E. Paysen

    1984-01-01

    A classification system for use in describing vegetation has been developed for Hawaii. Physiognomic and taxonomic criteria are used for a hierarchical stratification of vegetation in which the system categories are Formation, Subformation, Series, Association, and Phase. The System applies to local resource management activities and serves as a framework for resource...

  7. Using an Ecological Land Hierarchy to Predict Seasonal-Wetland Abundance in Upland Forests

    Treesearch

    Brian J. Palik; Richard Buech; Leanne Egeland

    2003-01-01

    Hierarchy theory, when applied to landscapes, predicts that broader-scale ecosystems constrain the development of finer-scale, nested ecosystems. This prediction finds application in hierarchical land classifications. Such classifications typically apply to physiognomically similar ecosystems, or ecological land units, e.g., a set of multi-scale forest ecosystems. We...

  8. Development Of Polarimetric Decomposition Techniques For Indian Forest Resource Assessment Using Radar Imaging Satellite (Risat-1) Images

    NASA Astrophysics Data System (ADS)

    Sridhar, J.

    2015-12-01

    The focus of this work is to examine polarimetric decomposition techniques, primarily Pauli decomposition and Sphere Di-Plane Helix (SDH) decomposition, for forest resource assessment. The data processing methods adopted are pre-processing (geometric correction and radiometric calibration), speckle reduction, image decomposition and image classification. Initially, to classify forest regions, unsupervised classification was applied to determine the different unknown classes. It was observed that the K-means clustering method gave better results in comparison with the ISODATA method. Using the algorithm developed for Radar Tools, the code for the decomposition and classification techniques was implemented in Interactive Data Language (IDL) and applied to a RISAT-1 image of the Mysore-Mandya region of Karnataka, India. This region was chosen for studying forest vegetation and consists of agricultural lands, water and hilly regions. Polarimetric SAR data possess a high potential for classification of the earth's surface. After applying the decomposition techniques, classification was done by selecting regions of interest, and post-classification the overall accuracy was observed to be higher in the SDH-decomposed image, as SDH decomposition operates on individual pixels on a coherent basis and utilises the complete intrinsic coherent nature of polarimetric SAR data. This makes SDH decomposition particularly suited for analysis of high-resolution SAR data. The Pauli decomposition represents all the polarimetric information in a single SAR image; however, interpretation of the resulting image is difficult. The SDH decomposition technique seems to produce better results and interpretation than Pauli decomposition; however, more quantification and further analysis are being done in this area of research. The comparison of polarimetric decomposition techniques and evolutionary classification techniques will be the scope of this work.

  9. 76 FR 76896 - International Anti-Fouling System Certificate

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-09

    ...-fouling System (IAFS) Certificate to the list of certificates a recognized classification society may..., 2001. This final rule will enable recognized classification societies to apply to the Coast Guard for... the Coast Guard to authorize recognized classification societies to issue IAFS Certificates...

  10. Integration of adaptive guided filtering, deep feature learning, and edge-detection techniques for hyperspectral image classification

    NASA Astrophysics Data System (ADS)

    Wan, Xiaoqing; Zhao, Chunhui; Gao, Bing

    2017-11-01

    The integration of an edge-preserving filtering technique in the classification of a hyperspectral image (HSI) has been proven effective in enhancing classification performance. This paper proposes an ensemble strategy for HSI classification using an edge-preserving filter along with a deep learning model and edge detection. First, an adaptive guided filter is applied to the original HSI to reduce the noise in degraded images and to extract powerful spectral-spatial features. Second, the extracted features are fed as input to a stacked sparse autoencoder to adaptively exploit more invariant and deep feature representations; then, a random forest classifier is applied to fine-tune the entire pretrained network and determine the classification output. Third, a Prewitt compass operator is further performed on the HSI to extract the edges of the first principal component after dimension reduction. Moreover, a region-growing rule is applied to the resulting edge logical image to determine the local region for each unlabeled pixel. Finally, the categories of the corresponding neighborhood samples are determined in the original classification map; then, a majority voting mechanism is implemented to generate the final output. Extensive experiments showed that the proposed method achieves competitive performance compared with several traditional approaches.
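
    The edge-detection stage can be sketched with a Prewitt operator on a stand-in first principal component; the compass formulation, region growing, and re-voting stages of the paper are not reproduced, and the threshold is an assumption.

      # Prewitt edge magnitude on the first PC, thresholded to a logical image.
      import numpy as np
      from scipy.ndimage import prewitt

      rng = np.random.default_rng(6)
      pc1 = rng.random((64, 64))                 # stand-in for the first PC image
      gx, gy = prewitt(pc1, axis=0), prewitt(pc1, axis=1)
      magnitude = np.hypot(gx, gy)
      edges = magnitude > np.percentile(magnitude, 90)   # logical edge image
      print(edges.sum(), "edge pixels")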

  11. Landcover Classification Using Deep Fully Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Wang, J.; Li, X.; Zhou, S.; Tang, J.

    2017-12-01

    Land cover classification has always been an essential application in remote sensing. Certain image features are needed for land cover classification whether pixel-based or object-based methods are used. Different from other machine learning methods, a deep learning model not only extracts useful information from multiple bands/attributes, but also learns spatial characteristics. In recent years, deep learning methods have developed rapidly and have been widely applied in image recognition, semantic understanding, and other application domains. However, there are limited studies applying deep learning methods to land cover classification. In this research, we used fully convolutional networks (FCN) as the deep learning model to classify land covers. The National Land Cover Database (NLCD) within the state of Kansas was used as the training dataset and Landsat images were classified using the trained FCN model. We also applied an image segmentation method to improve the original results from the FCN model. In addition, the pros and cons of deep learning versus several machine learning methods were compared and explored. Our research indicates: (1) FCN is an effective classification model with an overall accuracy of 75%; (2) image segmentation improves the classification results with a better match of spatial patterns; (3) FCN has an excellent learning ability which attains higher accuracy and better spatial patterns compared with several machine learning methods.
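
    A minimal fully convolutional network for per-pixel labels can be sketched in PyTorch as below; the layer sizes, six-band input, and sixteen classes are illustrative assumptions, not the architecture used in the study.

      # No dense layers: the output keeps the spatial grid, one score per class.
      import torch
      import torch.nn as nn

      class TinyFCN(nn.Module):
          def __init__(self, in_bands=6, n_classes=16):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(in_bands, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, n_classes, 1),          # 1x1 conv -> class scores
              )

          def forward(self, x):                         # x: (batch, bands, H, W)
              return self.net(x)                        # (batch, classes, H, W)

      logits = TinyFCN()(torch.randn(1, 6, 128, 128))
      print(logits.argmax(dim=1).shape)                 # per-pixel labels, (1, 128, 128)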

  12. Three-dimensional transesophageal echocardiography: Principles and clinical applications.

    PubMed

    Vegas, Annette

    2016-10-01

    A basic understanding of evolving 3D technology enables the echocardiographer to master the new skills necessary to acquire, manipulate, and interpret 3D datasets. Single-button activation of specific 3D imaging modes for both TEE and transthoracic echocardiography (TTE) matrix-array probes includes (a) live, (b) zoom, (c) full volume (FV), and (d) color Doppler FV. Evaluation of regional LV wall motion by RT 3D TEE is based on a change in LV chamber subvolume over time from altered segmental myocardial contractility. Unlike standard 2D TEE, there is no direct measurement of myocardial thickening or displacement of individual segments.

  13. Impact of atmospheric correction and image filtering on hyperspectral classification of tree species using support vector machine

    NASA Astrophysics Data System (ADS)

    Shahriari Nia, Morteza; Wang, Daisy Zhe; Bohlman, Stephanie Ann; Gader, Paul; Graves, Sarah J.; Petrovic, Milenko

    2015-01-01

    Hyperspectral images can be used to identify savannah tree species at the landscape scale, which is a key step in measuring biomass and carbon, and tracking changes in species distributions, including invasive species, in these ecosystems. Before automated species mapping can be performed, image processing and atmospheric correction are often performed, which can potentially affect the performance of classification algorithms. We determine how three processing and correction techniques (atmospheric correction, Gaussian filters, and shade/green vegetation filters) affect the prediction accuracy of classification of tree species at pixel level from airborne visible/infrared imaging spectrometer imagery of longleaf pine savanna in Central Florida, United States. Species classification using fast line-of-sight atmospheric analysis of spectral hypercubes (FLAASH) atmospheric correction outperformed ATCOR in the majority of cases. Green vegetation (normalized difference vegetation index) and shade (near-infrared) filters did not increase classification accuracy when applied to large and continuous patches of specific species. Finally, applying a Gaussian filter reduces interband noise and increases species classification accuracy. Using the optimal preprocessing steps, our classification accuracy of six species classes is about 75%.
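
    The Gaussian filtering step can be applied per band by setting a zero sigma along the spectral axis, as in this sketch; the cube shape and sigma are illustrative assumptions.

      # Spatial-only Gaussian smoothing of a hyperspectral cube (H x W x bands).
      import numpy as np
      from scipy.ndimage import gaussian_filter

      cube = np.random.default_rng(7).random((100, 100, 224))
      smoothed = gaussian_filter(cube, sigma=(1.0, 1.0, 0.0))  # no blur across bands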

  14. OBJECTIVE METEOROLOGICAL CLASSIFICATION SCHEME DESIGNED TO ELUCIDATE OZONE'S DEPENDENCE ON METEOROLOGY

    EPA Science Inventory

    This paper utilizes a two-stage clustering approach as part of an objective classification scheme designed to elucidate O3's dependence on meteorology. When applied to ten years (1981-1990) of meteorological data for Birmingham, Alabama, the classification scheme identified seven ...

  15. 46 CFR 8.240 - Application for recognition.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... ALTERNATIVES Recognition of a Classification Society § 8.240 Application for recognition. (a) A classification society must apply for recognition in writing to the Commandant (CG-521). (b) An application must indicate which specific authority the classification society seeks to have delegated. (c) Upon verification from...

  16. 46 CFR 8.240 - Application for recognition.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... ALTERNATIVES Recognition of a Classification Society § 8.240 Application for recognition. (a) A classification society must apply for recognition in writing to the Commandant (CG-521). (b) An application must indicate which specific authority the classification society seeks to have delegated. (c) Upon verification from...

  17. Preprocessing and meta-classification for brain-computer interfaces.

    PubMed

    Hammon, Paul S; de Sa, Virginia R

    2007-03-01

    A brain-computer interface (BCI) is a system which allows direct translation of brain states into actions, bypassing the usual muscular pathways. A BCI system works by extracting user brain signals, applying machine learning algorithms to classify the user's brain state, and performing a computer-controlled action. Our goal is to improve brain state classification. Perhaps the most obvious way to improve classification performance is the selection of an advanced learning algorithm. However, it is now well known in the BCI community that careful selection of preprocessing steps is crucial to the success of any classification scheme. Furthermore, recent work indicates that combining the output of multiple classifiers (meta-classification) leads to improved classification rates relative to single classifiers (Dornhege et al., 2004). In this paper, we develop an automated approach which systematically analyzes the relative contributions of different preprocessing and meta-classification approaches. We apply this procedure to three data sets drawn from BCI Competition 2003 (Blankertz et al., 2004) and BCI Competition III (Blankertz et al., 2006), each of which exhibits very different characteristics. Our final classification results compare favorably with those from past BCI competitions. Additionally, we analyze the relative contributions of individual preprocessing and meta-classification choices and discuss which types of BCI data benefit most from specific algorithms.
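
    A toy illustration of meta-classification (combining the outputs of multiple classifiers), assuming scikit-learn; the random feature matrix stands in for preprocessed EEG features:

    ```python
    # Stacking-style meta-classification over two base classifiers.
    import numpy as np
    from sklearn.ensemble import StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    X = np.random.randn(200, 30)        # placeholder preprocessed trial features
    y = np.random.randint(0, 2, 200)    # binary brain-state labels

    meta = StackingClassifier(
        estimators=[("lda", LinearDiscriminantAnalysis()),
                    ("svm", SVC(probability=True))],
        final_estimator=LogisticRegression(),  # combines base-classifier outputs
    )
    meta.fit(X, y)
    print(meta.score(X, y))
    ```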

  18. 40 CFR 51.902 - Which classification and nonattainment area planning provisions of the CAA shall apply to areas...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... area planning provisions of the CAA shall apply to areas designated nonattainment for the 8-hour NAAQS? 51.902 Section 51.902 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... Implementation of 8-hour Ozone National Ambient Air Quality Standard § 51.902 Which classification and...

  19. 40 CFR 51.902 - Which classification and nonattainment area planning provisions of the CAA shall apply to areas...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... area planning provisions of the CAA shall apply to areas designated nonattainment for the 1997 8-hour NAAQS? 51.902 Section 51.902 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... Implementation of 8-hour Ozone National Ambient Air Quality Standard § 51.902 Which classification and...

  20. 40 CFR 51.902 - Which classification and nonattainment area planning provisions of the CAA shall apply to areas...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... area planning provisions of the CAA shall apply to areas designated nonattainment for the 1997 8-hour NAAQS? 51.902 Section 51.902 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... Implementation of 8-hour Ozone National Ambient Air Quality Standard § 51.902 Which classification and...

  1. 40 CFR 51.902 - Which classification and nonattainment area planning provisions of the CAA shall apply to areas...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... area planning provisions of the CAA shall apply to areas designated nonattainment for the 1997 8-hour NAAQS? 51.902 Section 51.902 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... Implementation of 8-hour Ozone National Ambient Air Quality Standard § 51.902 Which classification and...

  2. 40 CFR 51.902 - Which classification and nonattainment area planning provisions of the CAA shall apply to areas...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... area planning provisions of the CAA shall apply to areas designated nonattainment for the 8-hour NAAQS? 51.902 Section 51.902 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... Implementation of 8-hour Ozone National Ambient Air Quality Standard § 51.902 Which classification and...

  3. Classification of circulation type sequences applied to snow avalanches over the eastern Pyrenees (Andorra and Catalonia)

    NASA Astrophysics Data System (ADS)

    Esteban, Pere; Beck, Christoph; Philipp, Andreas

    2010-05-01

    Using data associated with accidents or damage caused by snow avalanches over the eastern Pyrenees (Andorra and Catalonia), several atmospheric circulation type catalogues have been obtained. For this purpose, different circulation type classification methods based on Principal Component Analysis (T-mode and S-mode using the extreme scores) and on optimization procedures (Improved K-means and SANDRA) were applied. Considering the characteristics of the phenomena studied, not only single-day circulation patterns were taken into account but also sequences of circulation types of varying length. Thus, different classifications with different numbers of types and for different sequence lengths were obtained using the different classification methods. Simple between-type variability, within-type variability, and outlier detection procedures were applied to select the best result for the snow avalanche type classifications. Furthermore, days without occurrence of the hazards were also related to the avalanche centroids using pattern correlations, facilitating the calculation of anomalies between hazardous and non-hazardous days, as well as frequencies of occurrence of hazardous events for each circulation type. Finally, the catalogues statistically considered the best are evaluated using the avalanche forecasters' expert knowledge. A consistent explanation of snow avalanche occurrence by means of circulation sequences is obtained, but always considering results from classifications with different sequence lengths. This work has been developed in the framework of the COST Action 733 (Harmonisation and Applications of Weather Type Classifications for European regions).
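
    A minimal sketch of circulation-type classification by k-means in the spirit of the optimization-based methods named above (the actual Improved K-means and SANDRA algorithms are more elaborate); the gridded pressure data are random stand-ins:

    ```python
    # Cluster daily pressure fields into circulation types, then form sequences.
    import numpy as np
    from sklearn.cluster import KMeans

    days = np.random.randn(365, 20 * 30)    # 365 days, flattened 20x30 SLP grid
    km = KMeans(n_clusters=9, n_init=10).fit(days)
    circulation_type = km.labels_           # one circulation type per day

    # Sequences of types (e.g. 3-day windows) can then be classified instead
    # of single days, as the study does for avalanche-relevant situations.
    sequences = np.lib.stride_tricks.sliding_window_view(circulation_type, 3)
    print(sequences[:3])
    ```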

  4. The Use and Abuse of Diagnostic/Classification Criteria

    PubMed Central

    June, Rayford R.; Aggarwal, Rohit

    2015-01-01

    In rheumatic diseases, classification criteria have been developed to identify well-defined, homogeneous cohorts for clinical research. Although they are commonly used in clinical practice, their use may not be appropriate for routine diagnostic care. Classification criteria are being revised with improved methodology and a further understanding of disease pathophysiology, but they still may not encompass all the unique clinical situations encountered when diagnosing heterogeneous, rare, evolving rheumatic diseases. Developing diagnostic criteria is challenging primarily because universal application is difficult given the significant differences in the prevalence of rheumatic diseases across geographical areas and clinical settings. Despite these shortcomings, the clinician can still use classification criteria for understanding disease as well as a guide for diagnosis, with a few caveats. We present the limits of current classification criteria, describe their use and abuse in clinical practice, and discuss how they should be applied with caution in the clinic. PMID:26096094

  5. Estimation of Lithological Classification in Taipei Basin: A Bayesian Maximum Entropy Method

    NASA Astrophysics Data System (ADS)

    Wu, Meng-Ting; Lin, Yuan-Chien; Yu, Hwa-Lung

    2015-04-01

    In environmental and other scientific applications, a certain understanding of the geological lithological composition is required. Because of the restrictions of real conditions, only a limited amount of data can be acquired. To determine the lithological distribution in a study area, many spatial statistical methods are used to estimate the lithological composition at unsampled points or grids. This study applied the Bayesian Maximum Entropy (BME) method, an emerging method in geological spatiotemporal statistics. The BME method can identify the spatiotemporal correlation of the data and combine not only hard data but also soft data to improve estimation. Lithological classification data are discrete categorical data, so this research applied categorical BME to establish a complete three-dimensional lithological estimation model. We applied the limited hard data from cores and the soft data generated from geological dating data and virtual wells to estimate the three-dimensional lithological classification in the Taipei Basin. Keywords: Categorical Bayesian Maximum Entropy method, Lithological Classification, Hydrogeological Setting

  6. A mutual information-Dempster-Shafer based decision ensemble system for land cover classification of hyperspectral data

    NASA Astrophysics Data System (ADS)

    Pahlavani, Parham; Bigdeli, Behnaz

    2017-12-01

    Hyperspectral images contain extremely rich spectral information that offers great potential to discriminate between various land cover classes. However, these images are usually composed of tens or hundreds of spectrally close bands, which results in high redundancy and a great amount of computation time in hyperspectral classification. Furthermore, in the presence of mixed-coverage pixels, crisp classifiers produce omission and commission errors. This paper presents a mutual information-Dempster-Shafer system through an ensemble classification approach for the classification of hyperspectral data. First, mutual information is applied to split the data into a few independent partitions to overcome high dimensionality. Then, a fuzzy maximum likelihood classifier classifies each band subset. Finally, Dempster-Shafer fusion is applied to combine the results of the fuzzy classifiers. To assess the proposed method, a crisp ensemble system, with a support vector machine as the crisp classifier and weighted majority voting as the crisp fusion method, is applied to the hyperspectral data. Furthermore, a dimension reduction system is used to assess the effectiveness of the mutual information band splitting of the proposed method. The proposed methodology provides interesting conclusions on the effectiveness and potential of mutual information-Dempster-Shafer based classification of hyperspectral data.
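
    A hedged sketch of Dempster's rule of combination for fusing two classifiers' evidence over the same classes, restricted to singleton hypotheses for brevity (full Dempster-Shafer theory also handles compound hypotheses):

    ```python
    # Combine two mass vectors over the same class set with Dempster's rule.
    import numpy as np

    def dempster_combine(m1, m2):
        """Combine two singleton mass vectors (entries sum to <= 1)."""
        joint = np.outer(m1, m2)
        agreement = np.trace(joint)           # mass where both sources agree
        conflict = joint.sum() - agreement    # mass on conflicting class pairs
        if conflict >= 1.0:
            raise ValueError("total conflict; evidence cannot be combined")
        return np.diag(joint) / (1.0 - conflict)

    m_a = np.array([0.6, 0.3, 0.1])   # classifier A's masses for 3 classes
    m_b = np.array([0.5, 0.4, 0.1])   # classifier B's masses
    print(dempster_combine(m_a, m_b)) # fused belief, renormalized by conflict
    ```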

  7. 78 FR 66813 - Visas: Regulatory Exception to Permit Compliance With the United Nations Headquarters Agreement...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-06

    ... Visa Classifications; Final Rule. Federal Register / Vol. 78, No. 215 / Wednesday, November 6... Family'' for Certain Nonimmigrant Visa Classifications AGENCY: Department of State. ACTION: Final rule... classifications and also applies to foreign government officials who may be admitted in immediate and continuous...

  8. Multivariate classification of infrared spectra of cell and tissue samples

    DOEpatents

    Haaland, David M.; Jones, Howland D. T.; Thomas, Edward V.

    1997-01-01

    Multivariate classification techniques are applied to spectra from cell and tissue samples irradiated with infrared radiation to determine whether the samples are normal or abnormal (cancerous). Mid- and near-infrared radiation can be used for in vivo and in vitro classifications using different wavelengths.
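
    For illustration, a hedged sketch of a generic multivariate spectral classification pipeline (PCA scores feeding a linear discriminant), assuming scikit-learn; this is a stand-in, not the patented method:

    ```python
    # Dimension reduction plus discriminant classification of spectra.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.pipeline import make_pipeline

    spectra = np.random.rand(60, 400)     # 60 samples x 400 IR wavelengths
    labels = np.random.randint(0, 2, 60)  # 0 = normal, 1 = abnormal (stand-in)

    model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
    model.fit(spectra, labels)
    print(model.predict(spectra[:5]))
    ```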

  9. 8 CFR 248.1 - Eligibility.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Nationality DEPARTMENT OF HOMELAND SECURITY IMMIGRATION REGULATIONS CHANGE OF NONIMMIGRANT CLASSIFICATION... apply to have his or her nonimmigrant classification changed to any nonimmigrant classification other.... 1101(a)(15)(C). An alien defined by section 101(a)(15)(V), or 101(a)(15)(U) of the Act, 8 U.S.C. 1101(a...

  10. 8 CFR 248.1 - Eligibility.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Nationality DEPARTMENT OF HOMELAND SECURITY IMMIGRATION REGULATIONS CHANGE OF NONIMMIGRANT CLASSIFICATION... apply to have his or her nonimmigrant classification changed to any nonimmigrant classification other.... 1101(a)(15)(C). An alien defined by section 101(a)(15)(V), or 101(a)(15)(U) of the Act, 8 U.S.C. 1101(a...

  11. 46 CFR 8.410 - Applicability.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Compliance Program § 8.410 Applicability. This subpart applies to: (a) Recognized classification societies... recognized classification society that is authorized by the Coast Guard to participate in the Alternate...

  12. 46 CFR 8.410 - Applicability.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Compliance Program § 8.410 Applicability. This subpart applies to: (a) Recognized classification societies... recognized classification society that is authorized by the Coast Guard to participate in the Alternate...

  13. Autoradiographic Distribution and Applied Pharmacological Characteristics of Dextromethorphan and Related Antitussive/Anticonvulsant Drugs and Novel Analogs

    DTIC Science & Technology

    1993-10-01

    Contract No. DAMD17-90-C-0124. Title: Autoradiographic Distribution and Applied Pharmacological Characteristics of Dextromethorphan... Keywords: Anticonvulsants, Antitussives, Dextromethorphan, Autoradiography, Pharmacokinetics. ...middle cerebral artery occlusion model with dextromethorphan, carbetapentane and three of the carbetapentane analogues, 11, B and D, which were

  14. Classification of remotely sensed data using OCR-inspired neural network techniques. [Optical Character Recognition

    NASA Technical Reports Server (NTRS)

    Kiang, Richard K.

    1992-01-01

    Neural networks have been applied to the classification of remotely sensed data with some success. To improve the performance of this approach, an examination was made of how neural networks are applied to the optical character recognition (OCR) of handwritten digits and letters. A three-layer feedforward network, along with techniques adopted from OCR, was used to classify Landsat-4 Thematic Mapper data. Good results were obtained. To overcome the difficulties that are characteristic of remote sensing applications and to attain significant improvements in classification accuracy, a special network architecture may be required.

  15. Quantitative T2 mapping of recurrent glioblastoma under bevacizumab improves monitoring for non-enhancing tumor progression and predicts overall survival

    PubMed Central

    Hattingen, Elke; Jurcoane, Alina; Daneshvar, Keivan; Pilatus, Ulrich; Mittelbronn, Michel; Steinbach, Joachim P.; Bähr, Oliver

    2013-01-01

    Background Anti-angiogenic treatment in recurrent glioblastoma patients suppresses contrast enhancement and reduces vasogenic edema while non-enhancing tumor progression is common. Thus, the importance of T2-weighted imaging is increasing. We therefore quantified T2 relaxation times, which are the basis for the image contrast on T2-weighted images. Methods Conventional and quantitative MRI procedures were performed on 18 patients with recurrent glioblastoma before treatment with bevacizumab and every 8 weeks thereafter until further tumor progression. We segmented the tumor on conventional MRI into 3 subvolumes: enhancing tumor, non-enhancing tumor, and edema. Using coregistered quantitative maps, we followed changes in T2 relaxation time in each subvolume. Moreover, we generated differential T2 maps by a voxelwise subtraction using the first T2 map under bevacizumab as reference. Results Visually segmented areas of tumor and edema did not differ in T2 relaxation times. Non-enhancing tumor volume did not decrease after commencement of bevacizumab treatment but strikingly increased at progression. Differential T2 maps clearly showed non-enhancing tumor progression in previously normal brain. T2 relaxation times decreased under bevacizumab without re-increasing at tumor progression. A decrease of <26 ms in the enhancing tumor following exposure to bevacizumab was associated with longer overall survival. Conclusions Combining quantitative MRI and tumor segmentation improves monitoring of glioblastoma patients under bevacizumab. The degree of change in T2 relaxation time under bevacizumab may be an early response parameter predictive of overall survival. The sustained decrease in T2 relaxation times toward values of healthy tissue masks progressive tumor on conventional T2-weighted images. Therefore, quantitative T2 relaxation times may detect non-enhancing progression better than conventional T2-weighted imaging. PMID:23925453
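
    The voxelwise differential T2 map is a simple subtraction of coregistered maps; a minimal sketch with synthetic arrays standing in for the quantitative maps:

    ```python
    # Differential T2 map: subtract the first on-treatment T2 map from each
    # follow-up map so non-enhancing tumor growth shows up as focal T2 change.
    import numpy as np

    t2_reference = np.full((64, 64, 30), 110.0)   # first map under bevacizumab (ms)
    t2_followup = t2_reference.copy()
    t2_followup[20:30, 20:30, 10:15] += 40.0      # simulated new T2 prolongation

    diff_map = t2_followup - t2_reference         # positive = T2 increase
    print(float(diff_map.max()), float(diff_map.mean()))
    ```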

  16. Patterns-of-failure guided biological target volume definition for head and neck cancer patients: FDG-PET and dosimetric analysis of dose escalation candidate subregions.

    PubMed

    Mohamed, Abdallah S R; Cardenas, Carlos E; Garden, Adam S; Awan, Musaddiq J; Rock, Crosby D; Westergaard, Sarah A; Brandon Gunn, G; Belal, Abdelaziz M; El-Gowily, Ahmed G; Lai, Stephen Y; Rosenthal, David I; Fuller, Clifton D; Aristophanous, Michalis

    2017-08-01

    To identify radioresistant subvolumes in pretreatment FDG-PET by mapping the spatial location of the origin of tumor recurrence after IMRT for head-and-neck squamous cell cancer onto the pretreatment FDG-PET/CT. Patients with local/regional recurrence after IMRT with available FDG-PET/CT and post-failure CT were included. For each patient, both the pre-therapy PET/CT and the recurrence CT were co-registered with the planning CT (pCT). A 4-mm radius was added to the centroid of the mapped recurrence growth target volumes (rGTVs) to create recurrence nidus volumes (NVs). The overlap between boost tumor volumes (BTVs) representing different SUV threshold/margin combinations and NVs was measured. Forty-seven patients were eligible. Forty-two (89.4%) had type A central high-dose failure. Twenty-six (48%) of the type A rGTVs were at the primary site and 28 (52%) were at the nodal site. The mean dose of the type A rGTVs was 71 Gy. The BTV consisting of 50% of the maximum SUV plus a 10 mm margin was the best subvolume for dose boosting due to its high coverage of primary-site NVs (92.3%), low average relative volume to CTV1 (41%), and lowest average percentage of voxels outside CTV1 (19%). The majority of loco-regional recurrences originate in regions of central high dose. When correlated with pretreatment FDG-PET, the majority of recurrences originated in an area that would be covered by an additional 10 mm margin on the volume of 50% of the maximum FDG uptake. Copyright © 2017 Elsevier B.V. All rights reserved.
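
    A sketch of the boost-volume construction described (threshold at 50% of the maximum SUV, then add a 10 mm margin), assuming scipy; the voxel size and synthetic SUV volume are illustrative:

    ```python
    # BTV = (SUV >= 0.5 * SUVmax) expanded by a 10 mm margin via dilation.
    import numpy as np
    from scipy.ndimage import (binary_dilation, generate_binary_structure,
                               iterate_structure)

    suv = np.random.rand(50, 50, 30) * 2.0
    suv[20:30, 20:30, 10:20] += 10.0             # synthetic hot tumor region

    core = suv >= 0.5 * suv.max()                # 50% SUVmax subvolume
    voxel_mm = 2.0                               # assumed isotropic voxel size
    margin_vox = int(round(10.0 / voxel_mm))     # 10 mm margin in voxels
    struct = generate_binary_structure(3, 1)
    btv = binary_dilation(core, iterate_structure(struct, margin_vox))
    print(int(core.sum()), int(btv.sum()))       # boost volume grows with margin
    ```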

  17. fMRI brain mapping during motion capture and FES induced motor tasks: signal to noise ratio assessment.

    PubMed

    Gandolla, Marta; Ferrante, Simona; Casellato, Claudia; Ferrigno, Giancarlo; Molteni, Franco; Martegani, Alberto; Frattini, Tiziano; Pedrocchi, Alessandra

    2011-10-01

    Functional electrical stimulation (FES) is a well-known clinical rehabilitation procedure; however, the neural mechanisms that underlie this treatment at the central nervous system (CNS) level are still not completely understood. Functional magnetic resonance imaging (fMRI) is a suitable tool to investigate the effects of rehabilitative treatments on brain plasticity. Moreover, monitoring the movement actually executed is needed to correctly interpret activation maps, above all in neurological patients, where required motor tasks may be only partially accomplished. The proposed experimental set-up includes a 1.5 T fMRI scanner, a motion capture system to acquire kinematic data, and an electro-stimulation device. The introduction of metallic devices and of stimulation current into the MRI room could affect fMRI acquisitions enough to prevent reliable analysis of activation maps. Our interest is whether the blood oxygenation level dependent (BOLD) signal, a marker of neural activity, can be detected within a given experimental condition and set-up. In this paper we assess temporal signal-to-noise ratio (SNR) as an image quality index. The BOLD signal change is about 1-2% as revealed by a 1.5 T scanner. This work demonstrates that, with this innovative set-up, a 1% BOLD signal change can be detected in at least 93% of the sub-volumes in the main cortical sensorimotor regions, and almost 100% of the sub-volumes are suitable for 2% signal change detection. The integrated experimental set-up will therefore allow fMRI maps of FES-induced movements to be detected simultaneously with kinematic acquisitions, so as to investigate the contribution of FES-based rehabilitation treatments at the CNS level. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.
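
    Temporal SNR is computed voxelwise as the time-series mean divided by its standard deviation; a minimal sketch on a synthetic 4-D array (the detectability threshold shown is illustrative, not the paper's criterion):

    ```python
    # Temporal signal-to-noise ratio (tSNR) as an image quality index.
    import numpy as np

    fmri = 1000 + 5 * np.random.randn(32, 32, 20, 120)  # x, y, z, time
    tsnr = fmri.mean(axis=-1) / fmri.std(axis=-1)

    # A 1% BOLD change is detectable roughly where tSNR clears a threshold
    # tied to the design; here we just report the fraction of voxels above 100.
    print((tsnr > 100).mean())
    ```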

  18. Lung vessel segmentation in CT images using graph-cuts

    NASA Astrophysics Data System (ADS)

    Zhai, Zhiwei; Staring, Marius; Stoel, Berend C.

    2016-03-01

    Accurate lung vessel segmentation is an important operation for lung CT analysis. Filters that are based on analyzing the eigenvalues of the Hessian matrix are popular for pulmonary vessel enhancement. However, due to their low response at vessel bifurcations and vessel boundaries, extracting lung vessels by thresholding the vesselness is not sufficiently accurate. Some methods turn to graph-cuts for more accurate segmentation, as it incorporates neighbourhood information. In this work, we propose a new graph-cuts cost function combining appearance and shape, where CT intensity represents appearance and vesselness from a Hessian-based filter represents shape. Due to the number of voxels in high-resolution CT scans, the memory requirement and time consumption for building a graph structure are very high. In order to make the graph representation computationally tractable, voxels that are considered clearly background are removed from the graph nodes using a threshold on the vesselness map. The graph structure is then established based on the remaining voxel nodes, source/sink nodes, and the neighbourhood relationships of the remaining voxels. Vessels are segmented by minimizing the energy cost function with the graph-cuts optimization framework. We optimized the parameters used in the graph-cuts cost function and evaluated the proposed method with two manually labeled sub-volumes. For independent evaluation, we used 20 CT scans of the VESSEL12 challenge. The evaluation results on the sub-volume data show that the proposed method produced a more accurate vessel segmentation compared to previous methods, with F1 scores of 0.76 and 0.69. On the VESSEL12 dataset, our method obtained competitive performance with an area under the ROC curve of 0.975, especially among the binary submissions.
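
    A hedged sketch of the overall graph-cuts scheme, assuming the PyMaxflow package; the cost terms, weights, and vesselness threshold are illustrative stand-ins for the paper's cost function:

    ```python
    # Unary costs combine appearance (intensity) and shape (vesselness);
    # clearly-background voxels are pushed out of the foreground segment.
    import numpy as np
    import maxflow

    intensity = np.random.rand(40, 40, 40)    # normalized CT intensity (stand-in)
    vesselness = np.random.rand(40, 40, 40)   # Hessian-filter response (stand-in)

    prob_vessel = 0.5 * intensity + 0.5 * vesselness  # toy appearance+shape term
    keep = vesselness > 0.1                           # drop clear background

    g = maxflow.Graph[float]()
    ids = g.add_grid_nodes(intensity.shape)
    g.add_grid_edges(ids, 0.5)                # pairwise smoothness between neighbours
    eps = 1e-6
    src = np.where(keep, -np.log(1.0 - prob_vessel + eps), 0.0)
    snk = np.where(keep, -np.log(prob_vessel + eps), 1e3)  # exclude background
    g.add_grid_tedges(ids, src, snk)
    g.maxflow()
    vessel_mask = g.get_grid_segments(ids)    # boolean min-cut segmentation
    print(int(vessel_mask.sum()))
    ```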

  19. Supercluster simulations: impact of baryons on the matter power spectrum and weak lensing forecasts for Super-CLASS

    NASA Astrophysics Data System (ADS)

    Peters, Aaron; Brown, Michael L.; Kay, Scott T.; Barnes, David J.

    2018-03-01

    We use a combination of full hydrodynamic and dark matter only simulations to investigate the effect that supercluster environments and baryonic physics have on the matter power spectrum, by re-simulating a sample of supercluster sub-volumes. On large scales we find that the matter power spectrum measured from our supercluster sample has at least twice as much power as that measured from our random sample. Our investigation of the effect of baryonic physics on the matter power spectrum is found to be in agreement with previous studies and is weaker than the selection effect over the majority of scales. In addition, we investigate the effect of targeting a cosmologically non-representative, supercluster region of the sky on the weak lensing shear power spectrum. We do this by generating shear and convergence maps using a line-of-sight integration technique, which intercepts our random and supercluster sub-volumes. We find the convergence power spectrum measured from our supercluster sample has a larger amplitude than that measured from the random sample at all scales. We frame our results within the context of the Super-CLuster Assisted Shear Survey (Super-CLASS), which aims to measure the cosmic shear signal in the radio band by targeting a region of the sky that contains five Abell clusters. Assuming the Super-CLASS survey will have a source density of 1.5 galaxies arcmin-2, we forecast a detection significance of 2.7^{+1.5}_{-1.2}, which indicates that in the absence of systematics the Super-CLASS project could make a cosmic shear detection with radio data alone.
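
    A minimal sketch of measuring a matter power spectrum from a gridded overdensity field (FFT, then spherical binning of |delta_k|^2); the box size, grid, and normalization convention are illustrative:

    ```python
    # P(k) from a density grid: FFT, square, bin in spherical k-shells.
    # Convention used here: P = V |delta_k|^2 / Ncells^2 with an unnormalized FFT.
    import numpy as np

    n, box = 64, 100.0                          # cells per side, box size
    delta = np.random.randn(n, n, n)            # overdensity stand-in
    power3d = np.abs(np.fft.fftn(delta)) ** 2 * box**3 / n**6

    k1d = 2 * np.pi * np.fft.fftfreq(n, d=box / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)

    edges = np.linspace(kmag[kmag > 0].min(), kmag.max() / 2, 15)
    idx = np.digitize(kmag.ravel(), edges)
    pk = [power3d.ravel()[idx == i].mean() for i in range(1, len(edges))]
    print(pk[:3])                               # band-averaged P(k) estimates
    ```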

  20. Musculotendon and fascicle strains in anterior and posterior neck muscles during whiplash injury.

    PubMed

    Vasavada, Anita N; Brault, John R; Siegmund, Gunter P

    2007-04-01

    A biomechanical neck model combined with subject-specific kinematic and electromyographic data was used to calculate neck muscle strains during whiplash. To calculate the musculotendon and fascicle strains during whiplash and to compare these strains to published muscle injury thresholds. Previous work has shown potentially injurious musculotendon strains in the sternocleidomastoid (SCM) during whiplash, but neither the musculotendon strains in the posterior cervical muscles nor the fascicle strains in either muscle group have been examined. Experimental human subject data from rear-end automobile impacts were integrated with a biomechanical model of the neck musculoskeletal system. Subject-specific head kinematic data were imposed on the model, and neck musculotendon and fascicle strains and strain rates were computed. Electromyographic data from the sternocleidomastoid and the posterior cervical muscles were compared with strain data to determine which muscles were being eccentrically contracted. The SCM experienced lengthening during the retraction phase of head/neck kinematics, whereas the posterior muscles (splenius capitis [SPL], semispinalis capitis [SEMI], and trapezius [TRAP]) lengthened during the rebound phase. Peak SCM fascicle lengthening strains averaged (+/-SD) 4% (+/-3%) for the subvolumes attached to the mastoid process and 7% (+/-5%) for the subvolume attached to the occiput. Posteriorly, peak fascicle strains were 21% (+/-14%) for SPL, 18% (+/-16%) for SEMI, and 5% (+/-4%) for TRAP, with SPL strains significantly greater than those calculated in SCM or TRAP. Fascicle strains were, on average, 1.2 to 2.3 times greater than musculotendon strains. SCM and posterior muscle activity occurred during intervals of muscle fascicle lengthening. The cervical muscle strains induced during a rear-end impact exceed the previously reported injury threshold for a single stretch of active muscle. Further, the larger strains experienced by the extensor muscles are consistent with clinical reports of pain primarily in the posterior cervical region following rear-end impacts.

  1. WE-G-BRE-03: Dose Painting by Numbers Using Targeted Gold Nanoparticles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altundal, Y; Sajo, E; Korideck, H

    Purpose: Homogeneous dose enhancement in the tumor cells of lung cancer patients treated with a conventional dose of 60–66 Gy in five fractions is limited due to the increased risk of toxicity to normal structures. Dose painting by numbers (DPBN) is the prescription of a non-uniform radiation dose distribution in the tumor, voxel by voxel, based on the intensity level of each voxel in the tumor image. The purpose of this study is to show that DPBN using targeted gold nanoparticles (GNPs) could enhance conventional doses in the more resistant tumor areas. Methods: Cone beam computed tomography (CBCT) images of GNPs after intratumoral injection into a human tumor were taken at 0, 48, 144 and 160 hours. The dose enhancement in the tumor voxels by secondary electrons from the GNPs was calculated based on analytical microdosimetry methods. The dose enhancement factor (DEF) is the ratio of the doses to the tumor with and without the presence of GNPs. The DEF was calculated for each voxel of the images based on the GNP concentration in the tumor sub-volumes, using 6-MV photon spectra obtained from Monte Carlo simulations at 5 cm depth (10×10 cm2 field). Results: The results revealed DEF values of 1.05–2.38 for GNP concentrations of 1–30 mg/g, which corresponds to 12.60–28.56 Gy per fraction when delivering 12 Gy per fraction homogeneously to the lung tumor region. Conclusion: Our preliminary results verify that DPBN could be achieved using GNPs to enhance conventional doses to high-risk tumor sub-volumes. In practice, DPBN using GNPs could be achieved through the diffusion of targeted GNPs sustainably released in situ from radiotherapy biomaterials (e.g. fiducials) coated with a polymer film containing the GNPs.
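
    An illustrative voxelwise DEF bookkeeping step, linearly interpolating the reported range (DEF 1.05-2.38 for 1-30 mg/g); the true microdosimetric relation is more involved, so this only reproduces the per-fraction arithmetic:

    ```python
    # Map voxel GNP concentration to DEF, then to boosted dose per fraction.
    import numpy as np

    conc = np.random.uniform(1, 30, size=(20, 20, 10))    # mg/g per voxel (stand-in)
    def_map = np.interp(conc, [1.0, 30.0], [1.05, 2.38])  # DEF per voxel
    dose = 12.0 * def_map                                 # boosted dose per fraction (Gy)
    print(dose.min(), dose.max())                         # ~12.6 to ~28.6 Gy
    ```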

  2. Heart dosimetric analysis of three types of cardiac toxicity in patients treated on dose-escalation trials for Stage III non-small-cell lung cancer.

    PubMed

    Wang, Kyle; Pearlstein, Kevin A; Patchett, Nicholas D; Deal, Allison M; Mavroidis, Panayiotis; Jensen, Brian C; Lipner, Matthew B; Zagar, Timothy M; Wang, Yue; Lee, Carrie B; Eblan, Michael J; Rosenman, Julian G; Socinski, Mark A; Stinchcombe, Thomas E; Marks, Lawrence B

    2017-11-01

    To assess associations between radiation dose/volume parameters for cardiac subvolumes and different types of cardiac events in patients treated on radiation dose-escalation trials. Patients with Stage III non-small-cell lung cancer received dose-escalated radiation (median 74 Gy) using 3D-conformal radiotherapy on six prospective trials from 1996 to 2009. Volumes analyzed included the whole heart, left ventricle (LV), right atrium (RA), and left atrium (LA). Cardiac events were divided into three categories: pericardial (symptomatic effusion and pericarditis), ischemia (myocardial infarction and unstable angina), and arrhythmia. Univariable competing risks analysis was used. A total of 112 patients were analyzed, with a median follow-up of 8.8 years for surviving patients. Nine patients had pericardial, seven patients had ischemic, and 12 patients had arrhythmic events. Pericardial events were correlated with whole heart, RA, and LA dose (e.g., heart-V30 [p=0.024], RA-V30 [p=0.013], and LA-V30 [p=0.001]), but not LV dose. Ischemic events were correlated with LV and whole heart dose (e.g., LV-V30 [p=0.012], heart-V30 [p=0.048]). Arrhythmic events showed borderline significant associations with RA, LA, and whole heart dose (e.g., RA-V30 [p=0.082], LA-V30 [p=0.076], heart-V30 [p=0.051]). Cardiac events were associated with decreased survival on univariable analysis (p=0.008, HR 2.09), but only disease progression predicted decreased survival on multivariable analysis. Cardiac events were heterogeneous and associated with distinct heart subvolume doses. These data support the hypothesis of distinct etiologies for different types of radiation-associated cardiotoxicity. Copyright © 2017 Elsevier B.V. All rights reserved.
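
    The Vx metrics used here (e.g., heart-V30) are the percentage of a structure's volume receiving at least x Gy; a minimal sketch with synthetic planning data:

    ```python
    # Dose-volume metric Vx from a voxel dose array and a structure mask.
    import numpy as np

    dose = np.random.uniform(0, 74, size=(60, 60, 40))   # Gy per voxel (stand-in)
    left_atrium = np.zeros_like(dose, dtype=bool)
    left_atrium[20:30, 20:30, 10:20] = True              # placeholder LA mask

    def vx(dose, mask, x_gy):
        """Percent of the masked volume receiving >= x_gy."""
        return 100.0 * (dose[mask] >= x_gy).mean()

    print(vx(dose, left_atrium, 30.0))                   # LA-V30 in percent
    ```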

  3. Preferential flow occurs in unsaturated conditions

    USGS Publications Warehouse

    Nimmo, John R.

    2012-01-01

    Because it commonly generates high-speed, high-volume flow with minimal exposure to solid earth materials, preferential flow in the unsaturated zone is a dominant influence in many problems of infiltration, recharge, contaminant transport, and ecohydrology. By definition, preferential flow occurs in a portion of a medium – that is, a preferred part, whether a pathway, pore, or macroscopic subvolume. There are many possible classification schemes, but usual consideration of preferential flow includes macropore or fracture flow, funneled flow determined by macroscale heterogeneities, and fingered flow determined by hydraulic instability rather than intrinsic heterogeneity. That preferential flow is spatially concentrated associates it with other characteristics that are typical, although not defining: it tends to be unusually fast, to transport high fluxes, and to occur with hydraulic disequilibrium within the medium. It also has a tendency to occur in association with large conduits and high water content, although these are less universal than is commonly assumed. Predictive unsaturated-zone flow models in common use employ several different criteria for when and where preferential flow occurs, almost always requiring a nearly saturated medium. A threshold to be exceeded may be specified in terms of the following: (i) water content; (ii) matric potential, typically a value high enough to cause capillary filling in a macropore of minimum size; (iii) infiltration capacity or other indication of incipient surface ponding; or (iv) other conditions related to total filling of certain pores. Yet preferential flow does occur without meeting these criteria. My purpose in this commentary is to point out important exceptions and the implications of ignoring them. Some of these pertain mainly to macropore flow, others to fingered or funneled flow, and others to combined or undifferentiated flow modes.

  4. Remote Sensing Image Classification Applied to the First National Geographical Information Census of China

    NASA Astrophysics Data System (ADS)

    Yu, Xin; Wen, Zongyong; Zhu, Zhaorong; Xia, Qiang; Shun, Lan

    2016-06-01

    Although image classification has been studied for almost half a century, it still has a long way to go. Researchers have achieved a great deal in the image classification domain, but there is still a considerable distance between theory and practice. However, new methods from the artificial intelligence domain will be absorbed into the image classification domain, and drawing on the strengths of each to offset the weaknesses of the other will open up new prospects. Networks often play the role of a high-level language, as seen in artificial intelligence and statistics, because networks are used to build complex models from simple components. In recent years, Bayesian networks, a type of probabilistic network, have become a powerful data mining technique for handling uncertainty in complex domains. In this paper, we apply Tree Augmented Naive Bayesian networks (TAN) to texture classification of high-resolution remote sensing images and propose a new method to construct the network topology structure in terms of training accuracy based on the training samples. Since 2013, the Chinese government has carried out the first national geographical information census project, which mainly interprets geographical information based on high-resolution remote sensing images. Therefore, this paper applies Bayesian networks to remote sensing image classification, in order to improve image interpretation in the first national geographical information census project. In the experiment, we chose remote sensing images of Beijing. Experimental results demonstrate that TAN outperforms the Naive Bayesian Classifier (NBC) and the Maximum Likelihood Classification (MLC) method in overall classification accuracy. In addition, the proposed method can reduce the workload of field workers and improve work efficiency. Although it is time consuming, it will be an attractive and effective method for assisting the office-based operation of image interpretation.
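
    As a hedged baseline sketch, the Naive Bayesian Classifier (NBC) that TAN is compared against, assuming scikit-learn; TAN additionally learns a tree of feature dependencies, which is not shown. Features and labels are random stand-ins:

    ```python
    # Gaussian Naive Bayes baseline on per-pixel texture features.
    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    X = np.random.rand(500, 8)            # 8 texture features per pixel/object
    y = np.random.randint(0, 5, 500)      # 5 land cover classes (stand-in)

    nbc = GaussianNB().fit(X, y)
    print(nbc.score(X, y))
    ```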

  5. Discrimination of different sub-basins on Tajo River based on water influence factor

    NASA Astrophysics Data System (ADS)

    Bermudez, R.; Gascó, J. M.; Tarquis, A. M.; Saa-Requejo, A.

    2009-04-01

    Numerical taxonomy has been applied to classify the waters of the Tajo basin (Spain) down to the Portuguese border. A total of 52 stations measuring 15 water variables were used in this study. The different groups were obtained by applying a Euclidean distance among stations (distance classification) and a Euclidean distance between each station and the centroid estimated among them (centroid classification), varying the number of parameters and with or without variable typification. To compare the classifications, a log-log relation between the number of groups created and the distances was established to select the best one. It was observed that the centroid classification is more appropriate, following the natural constraints more logically than the minimum distance among stations. Variable typification does not improve the classification except when the centroid method is applied. Taking the ions and their sum as variables improved the classification. Stations are grouped based on electric conductivity (CE), total anions (TA), total cations (TC) and ion ratios (Na/Ca and Mg/Ca). For a given classification, comparing the different groups created shows a certain variation in ion concentrations and ion ratios. However, the variation of each ion among groups differs depending on the case. For the last group, regardless of the classification, the increase in all ions is general. Comparing the dendrograms and the groups they originated, the Tajo river basin can be subdivided into five sub-basins differentiated by the main influence on the water: 1. With a higher ombrogenic influence (rain fed). 2. With ombrogenic and pedogenic influence (rain and groundwater fed). 3. With pedogenic influence. 4. With lithogenic influence (geological bedrock). 5. With a higher ombrogenic and lithogenic influence added.
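
    A sketch of the station-grouping idea with Euclidean distances and centroid-based hierarchical clustering, assuming scipy; the 52 x 15 data matrix is a random stand-in for the water variables, standardized ("typified") beforehand:

    ```python
    # Hierarchical clustering of stations into sub-basin groups.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    stations = np.random.rand(52, 15)                   # 52 stations x 15 variables
    typified = (stations - stations.mean(0)) / stations.std(0)

    tree = linkage(typified, method="centroid")         # centroid-based merging
    groups = fcluster(tree, t=5, criterion="maxclust")  # cut into 5 sub-basins
    print(np.bincount(groups)[1:])                      # stations per group
    ```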

  6. Cryo-Electron Tomography for Structural Characterization of Macromolecular Complexes

    PubMed Central

    Cope, Julia; Heumann, John; Hoenger, Andreas

    2011-01-01

    Cryo-electron tomography (cryo-ET) is an emerging 3-D reconstruction technology that combines the principles of tomographic 3-D reconstruction with the unmatched structural preservation of biological material embedded in vitreous ice. Cryo-ET is particularly suited to investigating cell-biological samples and large macromolecular structures that are too polymorphic to be reconstructed by classical averaging-based 3-D reconstruction procedures. This unit aims to make cryo-ET accessible to newcomers and discusses the specialized equipment required, as well as the relevant advantages and hurdles associated with sample preparation by vitrification and cryo-ET. Protocols describe specimen preparation, data recording and 3-D data reconstruction for cryo-ET, with a special focus on macromolecular complexes. A step-by-step procedure for specimen vitrification by plunge freezing is provided, followed by the general practicalities of tilt-series acquisition for cryo-ET, including advice on how to select an area appropriate for acquiring a tilt series. A brief introduction to the underlying computational reconstruction principles applied in tomography is described, along with instructions for reconstructing a tomogram from cryo-tilt series data. Finally, a method is detailed for extracting small subvolumes containing identical macromolecular structures from tomograms for alignment and averaging as a means to increase the signal-to-noise ratio and eliminate missing wedge effects inherent in tomographic reconstructions. PMID:21842467
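
    A minimal illustration of subvolume extraction and averaging to raise the signal-to-noise ratio; real workflows also align the subvolumes and compensate the missing wedge, which is omitted here:

    ```python
    # Extract subvolumes at picked particle positions and average them.
    import numpy as np

    tomogram = np.random.randn(200, 200, 100)            # noisy tomogram stand-in
    positions = [(50, 60, 40), (120, 80, 50), (90, 150, 30)]
    half = 8                                             # subvolume half-size

    subvols = [tomogram[x-half:x+half, y-half:y+half, z-half:z+half]
               for x, y, z in positions]
    average = np.mean(subvols, axis=0)                   # SNR grows ~sqrt(N)
    print(average.shape)
    ```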

  7. 2pBAb5. Validation of three-dimensional strain tracking by volumetric ultrasound image correlation in a pubovisceral muscle model

    PubMed Central

    Nagle, Anna S.; Nageswaren, Ashok R.; Haridas, Balakrishna; Mast, T. D.

    2014-01-01

    Little is understood about the biomechanical changes leading to pelvic floor disorders such as stress urinary incontinence. In order to measure regional biomechanical properties of the pelvic floor muscles in vivo, a three dimensional (3D) strain tracking technique employing correlation of volumetric ultrasound images has been implemented. In this technique, local 3D displacements are determined as a function of applied stress and then converted to strain maps. To validate this approach, an in vitro model of the pubovisceral muscle, with a hemispherical indenter emulating the downward stress caused by intra-abdominal pressure, was constructed. Volumetric B-scan images were recorded as a function of indenter displacement while muscle strain was measured independently by a sonomicrometry system (Sonometrics). Local strains were computed by ultrasound image correlation and compared with sonomicrometry-measured strains to assess strain tracking accuracy. Image correlation by maximizing an exponential likelihood function was found more reliable than the Pearson correlation coefficient. Strain accuracy was dependent on sizes of the subvolumes used for image correlation, relative to characteristic speckle length scales of the images. Decorrelation of echo signals was mapped as a function of indenter displacement and local tissue orientation. Strain measurement accuracy was weakly related to local echo decorrelation. PMID:24900165

  8. A Tissue Relevance and Meshing Method for Computing Patient-Specific Anatomical Models in Endoscopic Sinus Surgery Simulation

    NASA Astrophysics Data System (ADS)

    Audette, M. A.; Hertel, I.; Burgert, O.; Strauss, G.

    This paper presents on-going work on a method for determining which subvolumes of a patient-specific tissue map, extracted from CT data of the head, are relevant to simulating endoscopic sinus surgery of that individual, and for decomposing these relevant tissues into triangles and tetrahedra whose mesh size is well controlled. The overall goal is to limit the complexity of the real-time biomechanical interaction while ensuring the clinical relevance of the simulation. Relevant tissues are determined as the union of the pathology present in the patient, of critical tissues deemed to be near the intended surgical path or pathology, and of bone and soft tissue near the intended path, pathology or critical tissues. The processing of tissues, prior to meshing, is based on the Fast Marching method applied under various guises, in a conditional manner that is related to tissue classes. The meshing is based on an adaptation of a meshing method of ours, which combines the Marching Tetrahedra method and the discrete Simplex mesh surface model to produce a topologically faithful surface mesh with well controlled edge and face size as a first stage, and Almost-regular Tetrahedralization of the same prescribed mesh size as a last stage.

  9. 3D ultrasound volume stitching using phase symmetry and harris corner detection for orthopaedic applications

    NASA Astrophysics Data System (ADS)

    Dalvi, Rupin; Hacihaliloglu, Ilker; Abugharbieh, Rafeef

    2010-03-01

    Stitching of volumes obtained from three-dimensional (3D) ultrasound (US) scanners improves visualization of anatomy in many clinical applications. Fast but accurate volume registration remains the key challenge in this area. We propose a volume stitching method based on efficient registration of 3D US volumes obtained from a tracked US probe. Since the volumes, after adjusting for probe motion, are coarsely registered, we obtain salient correspondence points in the central slices of these volumes. This is done by first removing artifacts in the US slices using intensity-invariant local phase image processing and then applying the Harris corner detection algorithm. Fast sub-volume registration on a small neighborhood around the points then gives fast, accurate 3D registration parameters. The method has been tested on 3D US scans of a phantom and real human radius and pelvis bones and a phantom human fetus. The method has also been compared to volumetric registration, as well as feature-based registration using 3D-SIFT. Quantitative results show an average post-registration error of 0.33 mm, which is comparable to volumetric registration accuracy (0.31 mm) and much better than 3D-SIFT based registration, which failed to register the volumes. The proposed method was also much faster than volumetric registration (~4.5 seconds versus 83 seconds).

  10. A wavelet-based statistical analysis of FMRI data: I. motivation and data distribution modeling.

    PubMed

    Dinov, Ivo D; Boscardin, John W; Mega, Michael S; Sowell, Elizabeth L; Toga, Arthur W

    2005-01-01

    We propose a new method for statistical analysis of functional magnetic resonance imaging (fMRI) data. The discrete wavelet transformation is employed as a tool for efficient and robust signal representation. We use structural magnetic resonance imaging (MRI) and fMRI to empirically estimate the distribution of the wavelet coefficients of the data both across individuals and spatial locations. An anatomical subvolume probabilistic atlas is used to tessellate the structural and functional signals into smaller regions each of which is processed separately. A frequency-adaptive wavelet shrinkage scheme is employed to obtain essentially optimal estimations of the signals in the wavelet space. The empirical distributions of the signals on all the regions are computed in a compressed wavelet space. These are modeled by heavy-tail distributions because their histograms exhibit slower tail decay than the Gaussian. We discovered that the Cauchy, Bessel K Forms, and Pareto distributions provide the most accurate asymptotic models for the distribution of the wavelet coefficients of the data. Finally, we propose a new model for statistical analysis of functional MRI data using this atlas-based wavelet space representation. In the second part of our investigation, we will apply this technique to analyze a large fMRI dataset involving repeated presentation of sensory-motor response stimuli in young, elderly, and demented subjects.

  11. Classification Techniques for Digital Map Compression

    DTIC Science & Technology

    1989-03-01

    classification improved the performance of the K-means classification algorithm, resulting in a compression of 8.06:1 with Lempel-Ziv coding. Run-length coding... compression performance are run-length coding [2], [8] and Lempel-Ziv coding [10], [11]. These techniques are chosen because they are most efficient when...investigated. After the classification, some standard file compression methods, such as Lempel-Ziv and run-length encoding, were applied to the
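
    Run-length encoding of a classified scan line is straightforward; a generic illustration, not the report's implementation:

    ```python
    # Run-length encode one row of a classified map.
    def run_length_encode(row):
        """Return (value, count) pairs for consecutive runs."""
        runs = []
        for v in row:
            if runs and runs[-1][0] == v:
                runs[-1][1] += 1
            else:
                runs.append([v, 1])
        return [(v, n) for v, n in runs]

    row = [3, 3, 3, 1, 1, 7, 7, 7, 7]     # classified pixels along a scan line
    print(run_length_encode(row))          # [(3, 3), (1, 2), (7, 4)]
    ```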

  12. Object-based land cover classification based on fusion of multifrequency SAR data and THAICHOTE optical imagery

    NASA Astrophysics Data System (ADS)

    Sukawattanavijit, Chanika; Srestasathiern, Panu

    2017-10-01

    Land use and land cover (LULC) information is significant for observing and evaluating environmental change. LULC classification using remotely sensed data is a technique popularly employed at global and local scales, particularly in urban areas, which have diverse land cover types that are essential components of the urban terrain and ecosystem. At present, object-based image analysis (OBIA) is becoming widely popular for land cover classification using high-resolution images. COSMO-SkyMed SAR data were fused with THAICHOTE (namely, THEOS: Thailand Earth Observation Satellite) optical data for object-based land cover classification. This paper presents a comparison between object-based and pixel-based approaches to image fusion. The per-pixel method, support vector machines (SVM), was applied to the fused image based on principal component analysis (PCA). The object-based classification was applied to the fused images to separate land cover classes using a nearest neighbor (NN) classifier. Finally, accuracy was assessed by comparing the land cover maps generated from the fused image dataset and from the THAICHOTE image. The object-based classification of the fused COSMO-SkyMed and THAICHOTE images demonstrated the best classification accuracies, well over 85%. The results show that object-based data fusion provides higher land cover classification accuracy than per-pixel data fusion.

  13. Model-based classification of CPT data and automated lithostratigraphic mapping for high-resolution characterization of a heterogeneous sedimentary aquifer

    PubMed Central

    Mallants, Dirk; Batelaan, Okke; Gedeon, Matej; Huysmans, Marijke; Dassargues, Alain

    2017-01-01

    Cone penetration testing (CPT) is one of the most efficient and versatile methods currently available for geotechnical, lithostratigraphic and hydrogeological site characterization. Currently available methods for soil behaviour type classification (SBT) of CPT data however have severe limitations, often restricting their application to a local scale. For parameterization of regional groundwater flow or geotechnical models, and delineation of regional hydro- or lithostratigraphy, regional SBT classification would be very useful. This paper investigates the use of model-based clustering for SBT classification, and the influence of different clustering approaches on the properties and spatial distribution of the obtained soil classes. We additionally propose a methodology for automated lithostratigraphic mapping of regionally occurring sedimentary units using SBT classification. The methodology is applied to a large CPT dataset, covering a groundwater basin of ~60 km2 with predominantly unconsolidated sandy sediments in northern Belgium. Results show that the model-based approach is superior in detecting the true lithological classes when compared to more frequently applied unsupervised classification approaches or literature classification diagrams. We demonstrate that automated mapping of lithostratigraphic units using advanced SBT classification techniques can provide a large gain in efficiency, compared to more time-consuming manual approaches and yields at least equally accurate results. PMID:28467468

  14. Model-based classification of CPT data and automated lithostratigraphic mapping for high-resolution characterization of a heterogeneous sedimentary aquifer.

    PubMed

    Rogiers, Bart; Mallants, Dirk; Batelaan, Okke; Gedeon, Matej; Huysmans, Marijke; Dassargues, Alain

    2017-01-01

    Cone penetration testing (CPT) is one of the most efficient and versatile methods currently available for geotechnical, lithostratigraphic and hydrogeological site characterization. Currently available methods for soil behaviour type classification (SBT) of CPT data however have severe limitations, often restricting their application to a local scale. For parameterization of regional groundwater flow or geotechnical models, and delineation of regional hydro- or lithostratigraphy, regional SBT classification would be very useful. This paper investigates the use of model-based clustering for SBT classification, and the influence of different clustering approaches on the properties and spatial distribution of the obtained soil classes. We additionally propose a methodology for automated lithostratigraphic mapping of regionally occurring sedimentary units using SBT classification. The methodology is applied to a large CPT dataset, covering a groundwater basin of ~60 km2 with predominantly unconsolidated sandy sediments in northern Belgium. Results show that the model-based approach is superior in detecting the true lithological classes when compared to more frequently applied unsupervised classification approaches or literature classification diagrams. We demonstrate that automated mapping of lithostratigraphic units using advanced SBT classification techniques can provide a large gain in efficiency, compared to more time-consuming manual approaches and yields at least equally accurate results.
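
    A hedged sketch of model-based clustering for soil behaviour type classification using a Gaussian mixture, assuming scikit-learn; the CPT features are random stand-ins:

    ```python
    # Model-based (Gaussian mixture) clustering of CPT measurements.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    cpt = np.column_stack([np.random.randn(1000),      # log cone resistance
                           np.random.randn(1000)])     # log friction ratio

    gmm = GaussianMixture(n_components=4, covariance_type="full").fit(cpt)
    sbt_class = gmm.predict(cpt)                       # soil behaviour type label
    print(np.bincount(sbt_class))
    ```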

  15. A comparison of unsupervised classification procedures on LANDSAT MSS data for an area of complex surface conditions in Basilicata, Southern Italy

    NASA Technical Reports Server (NTRS)

    Justice, C.; Townshend, J. (Principal Investigator)

    1981-01-01

    Two unsupervised classification procedures were applied to ratioed and unratioed LANDSAT multispectral scanner data of an area of spatially complex vegetation and terrain. An objective accuracy assessment was undertaken for each classification, and the classification accuracies were compared. The two unsupervised procedures use the same clustering algorithm. In one procedure the entire area is clustered; in the other, a representative sample of the area is clustered and the resulting statistics are extrapolated to the remaining area using a maximum likelihood classifier. The major steps in the classification procedures are explained, including image preprocessing, classification, interpretation of cluster classes, and accuracy assessment. Of the four classifications undertaken, the monocluster block approach on the unratioed data gave the highest accuracy, 80% for five coarse cover classes. This accuracy was increased to 84% by applying a 3 x 3 contextual filter to the classified image. A detailed description and partial explanation are provided for the major misclassifications. The classification of the unratioed data produced higher percentage accuracies than that of the ratioed data, and the monocluster block approach gave higher accuracies than clustering the entire area. The monocluster block approach was additionally the most economical in terms of computing time.
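
    A sketch of the 3 x 3 contextual (majority) filter that raised accuracy from 80% to 84%: each pixel takes the most common class in its neighborhood. Assumes scipy; the class map is a stand-in:

    ```python
    # Majority (contextual) filtering of a classified image.
    import numpy as np
    from scipy.ndimage import generic_filter

    def majority(values):
        values = values.astype(int)
        return np.bincount(values).argmax()

    class_map = np.random.randint(0, 5, size=(100, 100))
    smoothed = generic_filter(class_map, majority, size=3)
    print((smoothed != class_map).mean())    # fraction of pixels relabelled
    ```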

  16. Cascaded deep decision networks for classification of endoscopic images

    NASA Astrophysics Data System (ADS)

    Murthy, Venkatesh N.; Singh, Vivek; Sun, Shanhui; Bhattacharya, Subhabrata; Chen, Terrence; Comaniciu, Dorin

    2017-02-01

    Both traditional and wireless capsule endoscopes can generate tens of thousands of images for each patient. It is desirable to have the majority of irrelevant images filtered out by automatic algorithms during an offline review process, or to have automatic indication of highly suspicious areas during online guidance. This also applies to the newly invented endomicroscopy, where online indication of tumor classification plays a significant role. Image classification is a standard pattern recognition problem and is well studied in the literature. However, performance on challenging endoscopic images still has room for improvement. In this paper, we present a novel Cascaded Deep Decision Network (CDDN) to improve image classification performance over standard deep neural network based methods. During the learning phase, CDDN automatically builds a network which discards samples that are classified with high confidence scores by a previously trained network and concentrates only on the challenging samples, which are handled by subsequent expert shallow networks. We validate CDDN using two different types of endoscopic imaging, comprising a polyp classification dataset and a tumor classification dataset. On both datasets we show that CDDN can outperform other methods by about 10%. In addition, CDDN can also be applied to other image classification problems.
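
    A toy illustration of the cascade idea: a first classifier handles samples it classifies confidently, and low-confidence samples are deferred to a subsequent expert classifier. The models, threshold, and data below are stand-ins, not CDDN itself:

    ```python
    # Confidence-gated two-stage cascade; stage 2 trains on hard samples.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import RandomForestClassifier

    X = np.random.randn(300, 20)
    y = np.random.randint(0, 2, 300)
    X_tr, y_tr, X_te = X[:200], y[:200], X[200:]

    stage1 = LogisticRegression().fit(X_tr, y_tr)
    hard_tr = stage1.predict_proba(X_tr).max(axis=1) < 0.8
    stage2 = RandomForestClassifier().fit(X_tr[hard_tr], y_tr[hard_tr])

    conf = stage1.predict_proba(X_te).max(axis=1)
    easy = conf >= 0.8                          # confident -> accept stage 1
    preds = np.where(easy, stage1.predict(X_te), stage2.predict(X_te))
    print(easy.mean(), preds.shape)
    ```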

  17. A method for classification of multisource data using interval-valued probabilities and its application to HIRIS data

    NASA Technical Reports Server (NTRS)

    Kim, H.; Swain, P. H.

    1991-01-01

    A method of classifying multisource data in remote sensing is presented. The proposed method considers each data source as an information source providing a body of evidence, represents statistical evidence by interval-valued probabilities, and uses Dempster's rule to integrate information based on multiple data sources. The method is applied to the problem of ground-cover classification of multispectral data combined with digital terrain data such as elevation, slope, and aspect. The method is then applied to simulated 201-band High Resolution Imaging Spectrometer (HIRIS) data by dividing the dimensionally huge data source into smaller and more manageable pieces based on global statistical correlation information. It produces higher classification accuracy than the Maximum Likelihood (ML) classification method when the Hughes phenomenon is apparent.

  18. Analysis of the Carnegie Classification of Community Engagement: Patterns and Impact on Institutions

    ERIC Educational Resources Information Center

    Driscoll, Amy

    2014-01-01

    This chapter describes the impact that participation in the Carnegie Classification for Community Engagement had on the institutions of higher learning that applied for the classification. This is described in terms of changes in direct community engagement, monitoring and reporting on community engagement, and levels of student and professor…

  19. Forest ecosystems of a Lower Gulf Coastal Plain landscape: multifactor classification and analysis

    Treesearch

    P. Charles Goebel; Brian J. Palik; L. Katherine Kirkman; Mark B. Drew; Larry West; Dee C. Pederson

    2001-01-01

    The most common forestland classification techniques applied in the southeastern United States are vegetation-based. While not completely ignored, the application of multifactor, hierarchical ecosystem classifications is limited despite their widespread use in other regions of the eastern United States. We present one of the few truly integrated ecosystem...

  20. Application of a 5-tiered scheme for standardized classification of 2,360 unique mismatch repair gene variants in the InSiGHT locus-specific database.

    PubMed

    Thompson, Bryony A; Spurdle, Amanda B; Plazzer, John-Paul; Greenblatt, Marc S; Akagi, Kiwamu; Al-Mulla, Fahd; Bapat, Bharati; Bernstein, Inge; Capellá, Gabriel; den Dunnen, Johan T; du Sart, Desiree; Fabre, Aurelie; Farrell, Michael P; Farrington, Susan M; Frayling, Ian M; Frebourg, Thierry; Goldgar, David E; Heinen, Christopher D; Holinski-Feder, Elke; Kohonen-Corish, Maija; Robinson, Kristina Lagerstedt; Leung, Suet Yi; Martins, Alexandra; Moller, Pal; Morak, Monika; Nystrom, Minna; Peltomaki, Paivi; Pineda, Marta; Qi, Ming; Ramesar, Rajkumar; Rasmussen, Lene Juel; Royer-Pokora, Brigitte; Scott, Rodney J; Sijmons, Rolf; Tavtigian, Sean V; Tops, Carli M; Weber, Thomas; Wijnen, Juul; Woods, Michael O; Macrae, Finlay; Genuardi, Maurizio

    2014-02-01

    The clinical classification of hereditary sequence variants identified in disease-related genes directly affects clinical management of patients and their relatives. The International Society for Gastrointestinal Hereditary Tumours (InSiGHT) undertook a collaborative effort to develop, test and apply a standardized classification scheme to constitutional variants in the Lynch syndrome-associated genes MLH1, MSH2, MSH6 and PMS2. Unpublished data submission was encouraged to assist in variant classification and was recognized through microattribution. The scheme was refined by multidisciplinary expert committee review of the clinical and functional data available for variants, applied to 2,360 sequence alterations, and disseminated online. Assessment using validated criteria altered classifications for 66% of 12,006 database entries. Clinical recommendations based on transparent evaluation are now possible for 1,370 variants that were not obviously protein truncating from nomenclature. This large-scale endeavor will facilitate the consistent management of families suspected to have Lynch syndrome and demonstrates the value of multidisciplinary collaboration in the curation and classification of variants in public locus-specific databases.

  1. Application of a five-tiered scheme for standardized classification of 2,360 unique mismatch repair gene variants lodged on the InSiGHT locus-specific database

    PubMed Central

    Plazzer, John-Paul; Greenblatt, Marc S.; Akagi, Kiwamu; Al-Mulla, Fahd; Bapat, Bharati; Bernstein, Inge; Capellá, Gabriel; den Dunnen, Johan T.; du Sart, Desiree; Fabre, Aurelie; Farrell, Michael P.; Farrington, Susan M.; Frayling, Ian M.; Frebourg, Thierry; Goldgar, David E.; Heinen, Christopher D.; Holinski-Feder, Elke; Kohonen-Corish, Maija; Robinson, Kristina Lagerstedt; Leung, Suet Yi; Martins, Alexandra; Moller, Pal; Morak, Monika; Nystrom, Minna; Peltomaki, Paivi; Pineda, Marta; Qi, Ming; Ramesar, Rajkumar; Rasmussen, Lene Juel; Royer-Pokora, Brigitte; Scott, Rodney J.; Sijmons, Rolf; Tavtigian, Sean V.; Tops, Carli M.; Weber, Thomas; Wijnen, Juul; Woods, Michael O.; Macrae, Finlay; Genuardi, Maurizio

    2015-01-01

    Clinical classification of sequence variants identified in hereditary disease genes directly affects clinical management of patients and their relatives. The International Society for Gastrointestinal Hereditary Tumours (InSiGHT) undertook a collaborative effort to develop, test and apply a standardized classification scheme to constitutional variants in the Lynch Syndrome genes MLH1, MSH2, MSH6 and PMS2. Unpublished data submission was encouraged to assist variant classification, and recognized by microattribution. The scheme was refined by multidisciplinary expert committee review of clinical and functional data available for variants, applied to 2,360 sequence alterations, and disseminated online. Assessment using validated criteria altered classifications for 66% of 12,006 database entries. Clinical recommendations based on transparent evaluation are now possible for 1,370 variants not obviously protein-truncating from nomenclature. This large-scale endeavor will facilitate consistent management of suspected Lynch Syndrome families, and demonstrates the value of multidisciplinary collaboration for curation and classification of variants in public locus-specific databases. PMID:24362816

  2. An approach for combining airborne LiDAR and high-resolution aerial color imagery using Gaussian processes

    NASA Astrophysics Data System (ADS)

    Liu, Yansong; Monteiro, Sildomar T.; Saber, Eli

    2015-10-01

    Changes in vegetation cover, building construction, road networks and traffic conditions caused by urban expansion affect the human habitat as well as the natural environment in rapidly developing cities. It is crucial to assess these changes and respond accordingly by identifying man-made and natural structures with accurate classification algorithms. With the increasing use of multi-sensor remote sensing systems, researchers are able to obtain a more complete description of the scene of interest. By utilizing multi-sensor data, the accuracy of classification algorithms can be improved. In this paper, we propose a method for combining 3D LiDAR point clouds and high-resolution color images to classify urban areas using Gaussian processes (GP). GP classification is a powerful non-parametric classification method that yields probabilistic classification results, making predictions in a way that accounts for real-world uncertainty. In this paper, we attempt to identify man-made and natural objects in urban areas, including buildings, roads, trees, grass, water and vehicles. LiDAR features are derived from the 3D point clouds, and spatial and color features are extracted from the RGB images. For classification, we use the Laplacian approximation for GP binary classification on the new combined feature space. Multiclass classification is implemented using a one-vs-all binary classification strategy. Results from support vector machine (SVM) and logistic regression (LR) classifiers are also provided for comparison. Our experiments show a clear improvement in classification results when the two sensors are combined instead of used separately. We also found that the GP approach handles uncertainty in the classification result without compromising accuracy compared to SVM, which is considered a state-of-the-art classification method.
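
    As an illustration of the general approach (not the authors' implementation; the fused features and labels below are synthetic), scikit-learn's GaussianProcessClassifier approximates the binary GP posterior with a Laplace approximation and supports a one-vs-rest strategy for multiclass problems:

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessClassifier
      from sklearn.gaussian_process.kernels import RBF

      rng = np.random.default_rng(0)
      # Hypothetical fused feature space: LiDAR-derived features
      # (e.g. height, intensity) stacked with image color features
      X = np.hstack([rng.normal(size=(200, 3)), rng.normal(size=(200, 3))])
      y = rng.integers(0, 4, size=200)   # four hypothetical land-cover classes

      gpc = GaussianProcessClassifier(kernel=1.0 * RBF(1.0),
                                      multi_class="one_vs_rest")
      gpc.fit(X, y)
      probs = gpc.predict_proba(X[:5])   # probabilistic predictions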

  3. Vapor-Liquid Equilibrium in the Mixture 1,1-Difluoroethane C2H4F2 + C4H8 2-Methylpropene (EVLM1131, LB5730_E)

    NASA Astrophysics Data System (ADS)

    Cibulka, I.; Fontaine, J.-C.; Sosnkowska-Kehiaian, K.; Kehiaian, H. V.

    This document is part of Subvolume A 'Binary Liquid Systems of Nonelectrolytes I' of Volume 26 'Heats of Mixing, Vapor-Liquid Equilibrium, and Volumetric Properties of Mixtures and Solutions' of Landolt-Börnstein Group IV 'Physical Chemistry'. It contains the Chapter 'Vapor-Liquid Equilibrium in the Mixture 1,1-Difluoroethane C2H4F2 + C4H8 2-Methylpropene (EVLM1131, LB5730_E)' providing data from direct measurement of pressure and mole fraction in vapor phase at variable mole fraction in liquid phase and constant temperature.

  4. Natural fracture systems on planetary surfaces: Genetic classification and pattern randomness

    NASA Technical Reports Server (NTRS)

    Rossbacher, Lisa A.

    1987-01-01

    One method for classifying natural fracture systems is by fracture genesis. This approach involves the physics of the formation process, and it has been used most frequently in attempts to predict subsurface fractures and petroleum reservoir productivity. This classification system can also be applied to larger fracture systems on any planetary surface. One problem in applying this classification system to planetary surfaces is that it was developed for relatively small-scale fractures that would influence porosity, particularly as observed in a core sample. Planetary studies also require consideration of large-scale fractures. Nevertheless, this system offers some valuable perspectives on fracture systems of any size.

  5. A contour-based shape descriptor for biomedical image classification and retrieval

    NASA Astrophysics Data System (ADS)

    You, Daekeun; Antani, Sameer; Demner-Fushman, Dina; Thoma, George R.

    2013-12-01

    Contours, object blobs, and specific feature points are utilized to represent object shapes and extract shape descriptors that can then be used for object detection or image classification. In this research we develop a shape descriptor for biomedical image type (or modality) classification. We adapt a feature extraction method used in optical character recognition (OCR) for character shape representation, and apply various image preprocessing methods to adapt it to our application. The proposed shape descriptor is applied to radiology images (e.g., MRI, CT, ultrasound, X-ray) to assess its usefulness for modality classification. In our experiments we compare our method with other visual descriptors, such as CEDD, CLD, Tamura, and PHOG, that extract color, texture, or shape information from images. The proposed method achieved the highest classification accuracy, 74.1%, among the individual descriptors in the test, and when combined with the color structure descriptor (CSD) showed better performance (78.9%) than the shape descriptor alone.

  6. [The informational support of statistical observation related to children disability].

    PubMed

    Son, I M; Polikarpov, A V; Ogrizko, E V; Golubeva, T Yu

    2016-01-01

    Within the framework of the Convention on the Rights of Persons with Disabilities, the criteria for identifying disability in children are being revised and the system of medical social expertise is being reformed according to international standards of health indices and health-related indices. In this connection, it is important to consider the relationship between alterations in the forms of Federal statistical monitoring concerning the registration of disabled children in the Russian Federation and the classifications of health indices and health-related indices applied in identifying disability. The article presents an analysis of this relationship, covering the International Classification of Impairments, Disabilities and Handicaps (ICIDH), the International Classification of Functioning, Disability and Health (ICF), and the International Classification of Functioning, Disability and Health, version for children and youth (ICF-CY). Intersectorial interaction is considered within the framework of statistics on children's disability.

  7. Application of Sal classification to parotid gland fine-needle aspiration cytology: 10-year retrospective analysis of 312 patients.

    PubMed

    Kilavuz, Ahmet Erdem; Songu, Murat; İmre, Abdulkadir; Arslanoğlu, Secil; Özkul, Yilmaz; Pinar, Ercan; Ateş, Düzgün

    2018-05-01

    The accuracy of fine-needle aspiration biopsy (FNAB) is controversial in parotid tumors. We aimed to compare FNAB results with the final histopathological diagnosis, to apply the "Sal classification" to our data, and to discuss its results and its place in parotid gland cytology. The FNAB cytological findings and final histological diagnoses were assessed retrospectively in 2 different scenarios based on the distribution of nondefinitive cytology, and we applied the Sal classification and determined the malignancy rate, sensitivity, and specificity for each category. In the 2 scenarios, FNAB sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were found to be 81%, 87%, 54.7%, and 96.1%; and 65.3%, 100%, 100%, and 96.1%, respectively. The malignancy rates, sensitivity, and specificity were also calculated and discussed for each Sal category. We believe that the Sal classification has great potential to be a useful tool in the classification of parotid gland cytology. © 2018 Wiley Periodicals, Inc.

  8. A new tool for post-AGB SED classification

    NASA Astrophysics Data System (ADS)

    Bendjoya, P.; Suarez, O.; Galluccio, L.; Michel, O.

    We present the results of an unsupervised classification method applied to a set of 344 spectral energy distributions (SEDs) of post-AGB stars extracted from the Torun catalogue of Galactic post-AGB stars. This work aims to provide a new, unbiased method for post-AGB star classification based on the information contained in the IR region of the SED (fluxes, IR excess, colours). We used data from the IRAS and MSX satellites and from the 2MASS survey. We applied a classification method based on the construction of a minimal spanning tree (MST) over the dataset with Prim's algorithm. In order to build this tree, different metrics were tested on both fluxes and color indices. Our method classifies the set of 344 post-AGB stars into 9 distinct groups according to their SEDs.
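
    A minimal sketch of MST-based grouping (illustrative only; scipy's minimum_spanning_tree stands in for Prim's algorithm and the feature matrix is synthetic): build the tree over a Euclidean distance graph, then cut the k-1 longest edges to obtain k groups.

      import numpy as np
      from scipy.sparse.csgraph import connected_components, minimum_spanning_tree
      from scipy.spatial.distance import pdist, squareform

      rng = np.random.default_rng(1)
      feats = rng.normal(size=(344, 5))        # hypothetical flux/colour features

      dist = squareform(pdist(feats))          # Euclidean metric as an example
      mst = minimum_spanning_tree(dist).toarray()

      k = 9
      cut = np.sort(mst[mst > 0])[-(k - 1)]    # length of (k-1)-th longest edge
      mst[mst >= cut] = 0                      # remove the k-1 longest edges
      n_groups, labels = connected_components(mst, directed=False)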

  9. Recent development of feature extraction and classification multispectral/hyperspectral images: a systematic literature review

    NASA Astrophysics Data System (ADS)

    Setiyoko, A.; Dharma, I. G. W. S.; Haryanto, T.

    2017-01-01

    Multispectral and hyperspectral data acquired from satellite sensors can detect a wide variety of objects on the earth, supporting modeling from low to high scales. These data are increasingly being used to produce geospatial information for rapid analysis by running feature extraction or classification processes. Applying the most suitable model for this data mining is still challenging because of issues regarding accuracy and computational cost. The aim of this research is to develop a better understanding of object feature extraction and classification applied to satellite imagery by systematically reviewing related recent research projects. The method used in this research is based on the PRISMA statement. After deriving important points from trusted sources, pixel-based and texture-based feature extraction techniques emerge as promising techniques for further analysis in the recent development of feature extraction and classification.

  10. Multi-National Banknote Classification Based on Visible-light Line Sensor and Convolutional Neural Network.

    PubMed

    Pham, Tuyen Danh; Lee, Dong Eun; Park, Kang Ryoung

    2017-07-08

    Automatic recognition of banknotes is applied in payment facilities, such as automated teller machines (ATMs) and banknote counters. Besides the popular approaches that focus on studying the methods applied to various individual types of currencies, there have been studies conducted on simultaneous classification of banknotes from multiple countries. However, their methods were conducted with limited numbers of banknote images, national currencies, and denominations. To address this issue, we propose a multi-national banknote classification method based on visible-light banknote images captured by a one-dimensional line sensor and classified by a convolutional neural network (CNN) considering the size information of each denomination. Experiments conducted on the combined banknote image database of six countries with 62 denominations gave a classification accuracy of 100%, and results show that our proposed algorithm outperforms previous methods.
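
    To illustrate how size information can enter a CNN classifier (a hypothetical PyTorch sketch, not the authors' architecture; layer sizes and the two-element size vector are arbitrary choices):

      import torch
      import torch.nn as nn

      class BanknoteNet(nn.Module):
          """Toy CNN that concatenates pooled image features with a
          (width, height) size vector before the final classifier."""
          def __init__(self, n_classes=62):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten())
              self.classifier = nn.Linear(32 + 2, n_classes)

          def forward(self, image, size_vec):
              x = torch.cat([self.features(image), size_vec], dim=1)
              return self.classifier(x)

      model = BanknoteNet()
      img = torch.randn(8, 1, 64, 128)   # batch of grayscale line-sensor images
      size = torch.randn(8, 2)           # normalized (width, height) per note
      logits = model(img, size)          # shape: (8, 62)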

  11. Multi-National Banknote Classification Based on Visible-light Line Sensor and Convolutional Neural Network

    PubMed Central

    Pham, Tuyen Danh; Lee, Dong Eun; Park, Kang Ryoung

    2017-01-01

    Automatic recognition of banknotes is applied in payment facilities, such as automated teller machines (ATMs) and banknote counters. Besides the popular approaches that focus on studying the methods applied to various individual types of currencies, there have been studies conducted on simultaneous classification of banknotes from multiple countries. However, their methods were conducted with limited numbers of banknote images, national currencies, and denominations. To address this issue, we propose a multi-national banknote classification method based on visible-light banknote images captured by a one-dimensional line sensor and classified by a convolutional neural network (CNN) considering the size information of each denomination. Experiments conducted on the combined banknote image database of six countries with 62 denominations gave a classification accuracy of 100%, and results show that our proposed algorithm outperforms previous methods. PMID:28698466

  12. Reliability analysis of the AOSpine thoracolumbar spine injury classification system by a worldwide group of naïve spinal surgeons.

    PubMed

    Kepler, Christopher K; Vaccaro, Alexander R; Koerner, John D; Dvorak, Marcel F; Kandziora, Frank; Rajasekaran, Shanmuganathan; Aarabi, Bizhan; Vialle, Luiz R; Fehlings, Michael G; Schroeder, Gregory D; Reinhold, Maximilian; Schnake, Klaus John; Bellabarba, Carlo; Cumhur Öner, F

    2016-04-01

    The aims of this study were (1) to demonstrate that the AOSpine thoracolumbar spine injury classification system can be reliably applied by an international group of surgeons and (2) to delineate those injury types which are difficult for spine surgeons to classify reliably. A previously described classification system of thoracolumbar injuries, which consists of a morphologic classification of the fracture, a grading system for the neurologic status and relevant patient-specific modifiers, was applied to 25 cases by 100 spinal surgeons from across the world twice independently, in grading sessions 1 month apart. The results were analyzed for classification reliability using the Kappa coefficient (κ). The overall Kappa coefficient for all cases was 0.56, which represents moderate reliability. Kappa values describing interobserver agreement were 0.80 for type A injuries, 0.68 for type B injuries and 0.72 for type C injuries, all representing substantial reliability. The lowest level of agreement for specific subtypes was for fracture subtype A4 (Kappa = 0.19). Intraobserver analysis demonstrated an overall average Kappa statistic for subtype grading of 0.68, also representing substantial reproducibility. In a worldwide sample of spinal surgeons without previous exposure to the recently described AOSpine Thoracolumbar Spine Injury Classification System, we demonstrated moderate interobserver and substantial intraobserver reliability. These results suggest that most spine surgeons can apply this system to spine trauma patients at least as reliably as previously described systems.
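
    For reference, interobserver agreement of this kind is commonly quantified with Cohen's kappa; a minimal scikit-learn sketch (the two rating vectors below are hypothetical):

      from sklearn.metrics import cohen_kappa_score

      # Hypothetical fracture-type ratings by two surgeons for ten cases
      rater_1 = ["A1", "A4", "B2", "C", "A3", "A4", "B1", "C", "A1", "B2"]
      rater_2 = ["A1", "A3", "B2", "C", "A3", "B1", "B1", "C", "A1", "B2"]

      kappa = cohen_kappa_score(rater_1, rater_2)
      print(f"Cohen's kappa: {kappa:.2f}")   # 1 = perfect agreement, 0 = chance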

  13. 77 FR 73334 - Adding International Energy Efficiency (IEE) Certificate to List of Certificates a Recognized...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-10

    ... Classification Society May Issue AGENCY: Coast Guard, DHS. ACTION: Final rule. SUMMARY: The Coast Guard is...) Certificate to the list of certificates that a recognized classification society may issue on behalf of the... January 1, 2013. This rule will enable recognized classification societies to apply to the Coast Guard to...

  14. Multivariate classification of the infrared spectra of cell and tissue samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haaland, D.M.; Jones, H.D.; Thomas, E.V.

    1997-03-01

    Infrared microspectroscopy of biopsied canine lymph cells and tissue was performed to investigate the possibility of using IR spectra coupled with multivariate classification methods to classify the samples as normal, hyperplastic, or neoplastic (malignant). IR spectra were obtained in transmission mode through BaF2 windows and in reflection mode from samples prepared on gold-coated microscope slides. Cytology and histopathology samples were prepared by a variety of methods to identify the optimal methods of sample preparation. Cytospinning procedures that yielded a monolayer of cells on the BaF2 windows produced a limited set of IR transmission spectra. These transmission spectra were converted to absorbance and formed the basis for a classification rule that yielded 100% correct classification in a cross-validated context. Classifications of normal, hyperplastic, and neoplastic cell sample spectra were achieved by using both partial least-squares (PLS) and principal component regression (PCR) classification methods. Linear discriminant analysis applied to principal components obtained from the spectral data yielded a small number of misclassifications. PLS weight loading vectors yield valuable qualitative insight into the molecular changes that are responsible for the success of the infrared classification. These successful classification results show promise for assisting pathologists in the diagnosis of cell types and offer future potential for in vivo IR detection of some types of cancer. © 1997 Society for Applied Spectroscopy

  15. Pet fur color and texture classification

    NASA Astrophysics Data System (ADS)

    Yen, Jonathan; Mukherjee, Debarghar; Lim, SukHwan; Tretter, Daniel

    2007-01-01

    Object segmentation is important in image analysis for imaging tasks such as image rendering and image retrieval. Pet owners have been known to be quite vocal about how important it is to render their pets perfectly. We present here an algorithm for pet (mammal) fur color classification and an algorithm for pet (animal) fur texture classification. Pet fur color classification can be applied as a necessary condition for identifying the regions in an image that may contain pets, much like skin tone classification for human flesh detection. As a result of evolution, fur coloration of all mammals is produced by a natural organic pigment called melanin, which has only a very limited color range. We conducted a statistical analysis and concluded that mammal fur colors can only be in levels of gray or in two colors after proper color quantization. This pet fur color classification algorithm has been applied to pet-eye detection. We also present an algorithm for animal fur texture classification using the recently developed multi-resolution directional sub-band Contourlet transform. The experimental results are very promising, as these transforms can identify regions of an image that may contain fur of mammals, scales of reptiles, feathers of birds, etc. Combining the color and texture classification, one can build a set of strong classifiers for identifying possible animals in an image.

  16. Spectral dependence of texture features integrated with hyperspectral data for area target classification improvement

    NASA Astrophysics Data System (ADS)

    Bangs, Corey F.; Kruse, Fred A.; Olsen, Chris R.

    2013-05-01

    Hyperspectral data were assessed to determine the effect of integrating spectral data and extracted texture feature data on classification accuracy. Four separate spectral ranges (hundreds of spectral bands total) were used from the Visible and Near Infrared (VNIR) and Shortwave Infrared (SWIR) portions of the electromagnetic spectrum. Haralick texture features (contrast, entropy, and correlation) were extracted from the average gray-level image for each of the four spectral ranges studied. A maximum likelihood classifier was trained using a set of ground truth regions of interest (ROIs) and applied separately to the spectral data, texture data, and a fused dataset containing both. Classification accuracy was measured by comparison of results to a separate verification set of test ROIs. Analysis indicates that the spectral range (source of the gray-level image) used to extract the texture feature data has a significant effect on the classification accuracy. This result applies to texture-only classifications as well as the classification of integrated spectral data and texture feature data sets. Overall classification improvement for the integrated data sets was near 1%. Individual improvement for integrated spectral and texture classification of the "Urban" class showed approximately 9% accuracy increase over spectral-only classification. Texture-only classification accuracy was highest for the "Dirt Path" class at approximately 92% for the spectral range from 947 to 1343 nm. This research demonstrates the effectiveness of texture feature data for more accurate analysis of hyperspectral data and the importance of selecting the correct spectral range to be used for the gray-level image source to extract these features.
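
    A minimal sketch of extracting Haralick-style texture features from a gray-level image with scikit-image (version 0.19+ naming; the image is synthetic, and entropy is computed directly from the normalized co-occurrence matrix since graycoprops does not provide it):

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops

      rng = np.random.default_rng(2)
      gray = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

      # Gray-level co-occurrence matrix (1-pixel offset, 0 degrees),
      # normalized so entries can be read as joint probabilities
      glcm = graycomatrix(gray, distances=[1], angles=[0],
                          levels=256, symmetric=True, normed=True)

      contrast = graycoprops(glcm, "contrast")[0, 0]
      correlation = graycoprops(glcm, "correlation")[0, 0]
      p = glcm[:, :, 0, 0]
      entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))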

  17. Multispectral LiDAR Data for Land Cover Classification of Urban Areas

    PubMed Central

    Morsy, Salem; Shaker, Ahmed; El-Rabbany, Ahmed

    2017-01-01

    Airborne Light Detection And Ranging (LiDAR) systems usually operate at a monochromatic wavelength measuring the range and the strength of the reflected energy (intensity) from objects. Recently, multispectral LiDAR sensors, which acquire data at different wavelengths, have emerged. This allows for recording of a diversity of spectral reflectance from objects. In this context, we aim to investigate the use of multispectral LiDAR data in land cover classification using two different techniques. The first is image-based classification, where intensity and height images are created from LiDAR points and then a maximum likelihood classifier is applied. The second is point-based classification, where ground filtering and Normalized Difference Vegetation Indices (NDVIs) computation are conducted. A dataset of an urban area located in Oshawa, Ontario, Canada, is classified into four classes: buildings, trees, roads and grass. An overall accuracy of up to 89.9% and 92.7% is achieved from image classification and 3D point classification, respectively. A radiometric correction model is also applied to the intensity data in order to remove the attenuation due to the system distortion and terrain height variation. The classification process is then repeated, and the results demonstrate that there are no significant improvements achieved in the overall accuracy. PMID:28445432

  18. Multispectral LiDAR Data for Land Cover Classification of Urban Areas.

    PubMed

    Morsy, Salem; Shaker, Ahmed; El-Rabbany, Ahmed

    2017-04-26

    Airborne Light Detection And Ranging (LiDAR) systems usually operate at a monochromatic wavelength measuring the range and the strength of the reflected energy (intensity) from objects. Recently, multispectral LiDAR sensors, which acquire data at different wavelengths, have emerged. This allows for recording of a diversity of spectral reflectance from objects. In this context, we aim to investigate the use of multispectral LiDAR data in land cover classification using two different techniques. The first is image-based classification, where intensity and height images are created from LiDAR points and then a maximum likelihood classifier is applied. The second is point-based classification, where ground filtering and Normalized Difference Vegetation Indices (NDVIs) computation are conducted. A dataset of an urban area located in Oshawa, Ontario, Canada, is classified into four classes: buildings, trees, roads and grass. An overall accuracy of up to 89.9% and 92.7% is achieved from image classification and 3D point classification, respectively. A radiometric correction model is also applied to the intensity data in order to remove the attenuation due to the system distortion and terrain height variation. The classification process is then repeated, and the results demonstrate that there are no significant improvements achieved in the overall accuracy.
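
    For context, the NDVI used in the point-based pipeline contrasts near-infrared and red returns; a minimal NumPy sketch (the per-point channel intensities and the 0.4 threshold are hypothetical):

      import numpy as np

      nir = np.array([0.60, 0.55, 0.10, 0.05])   # near-infrared channel
      red = np.array([0.10, 0.12, 0.09, 0.04])   # red/visible channel

      # NDVI = (NIR - Red) / (NIR + Red); high values suggest vegetation
      ndvi = (nir - red) / (nir + red)
      is_vegetation = ndvi > 0.4                  # hypothetical threshold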

  19. Development of a template for the classification of traditional medical knowledge in Korea.

    PubMed

    Kim, Sungha; Kim, Boyoung; Mun, Sujeong; Park, Jeong Hwan; Kim, Min-Kyeoung; Choi, Sunmi; Lee, Sanghun

    2016-02-03

    Traditional Medical Knowledge (TMK) is a form of Traditional Knowledge associated with medicine that is handed down orally or in written material. There are efforts to document TMK and build databases to conserve Traditional Medicine and facilitate future research to validate traditional use. Despite these efforts, there is no widely accepted template, in a data file format, that is specific to TMK and, at the same time, helpful for understanding and organizing it. We aimed to develop a template to classify TMK. First, we reviewed books, articles, and health-related classification systems, and used focus group discussion to establish the definition, scope, and constituents of TMK. Second, we developed an initial version of the template to classify TMK and applied it to TMK data. Third, we revised the template based on the results of the initial template and input from experts, and applied it to the data. We developed the template for classification of TMK. The constituents of the template were summary, properties, tools/ingredients, indication/preparation/application, and international standard classification. We applied the International Patent Classification, the International Classification of Diseases (Korean version), and the Classification of Korean Traditional Knowledge Resources to provide legal protection of TMK and facilitate academic research. The template provides standard terms for ingredients, preparation, administration route, and procedure method to assess safety and efficacy. This is the first template specialized for arranging and classifying TMK. The template should play an important role in preserving TMK and protecting intellectual property. TMK data classified with the template could be used as preliminary data to screen potential candidates for new pharmaceuticals. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  20. Improved supervised classification of accelerometry data to distinguish behaviors of soaring birds.

    PubMed

    Sur, Maitreyi; Suffredini, Tony; Wessells, Stephen M; Bloom, Peter H; Lanzone, Michael; Blackshire, Sheldon; Sridhar, Srisarguru; Katzner, Todd

    2017-01-01

    Soaring birds can balance the energetic costs of movement by switching between flapping, soaring and gliding flight. Accelerometers can allow quantification of flight behavior and thus a context to interpret these energetic costs. However, models to interpret accelerometry data are still being developed, are rarely trained with supervised datasets, and are difficult to apply. We collected accelerometry data at 140 Hz from a trained golden eagle (Aquila chrysaetos) whose flight we recorded with video that we used to characterize behavior. We applied two forms of supervised classification, random forest (RF) models and K-nearest neighbor (KNN) models. The KNN model was substantially easier to implement than the RF approach, but both were highly accurate in classifying basic behaviors such as flapping (85.5% and 83.6% accurate, respectively), soaring (92.8% and 87.6%) and sitting (84.1% and 88.9%), with overall accuracies of 86.6% and 92.3% respectively. More detailed classification schemes, with specific behaviors such as banking and straight flights, were well classified only by the KNN model (91.24% accurate; RF = 61.64% accurate). The RF model maintained its accuracy in classifying basic behaviors at sampling frequencies as low as 10 Hz, the KNN model at sampling frequencies as low as 20 Hz. Classification of accelerometer data collected from free-ranging birds demonstrated a strong dependence of predicted behavior on the type of classification model used. Our analyses demonstrate the consequences of different approaches to classification of accelerometry data, the potential to optimize classification algorithms with validated flight behaviors to improve classification accuracy, ideal sampling frequencies for different classification algorithms, and a number of ways to improve commonly used analytical techniques and best practices for classification of accelerometry data.
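
    A minimal sketch of the kind of model comparison described (illustrative; the windowed summary features and behavior labels are synthetic stand-ins for annotated accelerometry):

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score
      from sklearn.neighbors import KNeighborsClassifier

      rng = np.random.default_rng(3)
      # Hypothetical per-window features (e.g. mean, std and dominant
      # frequency of each accelerometer axis) and behavior labels
      X = rng.normal(size=(500, 9))
      y = rng.integers(0, 3, size=500)   # 0 = flap, 1 = soar, 2 = sit

      for name, clf in [("RF", RandomForestClassifier(n_estimators=200)),
                        ("KNN", KNeighborsClassifier(n_neighbors=5))]:
          acc = cross_val_score(clf, X, y, cv=5).mean()
          print(f"{name}: {acc:.3f}")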

  1. Improved supervised classification of accelerometry data to distinguish behaviors of soaring birds

    PubMed Central

    Suffredini, Tony; Wessells, Stephen M.; Bloom, Peter H.; Lanzone, Michael; Blackshire, Sheldon; Sridhar, Srisarguru; Katzner, Todd

    2017-01-01

    Soaring birds can balance the energetic costs of movement by switching between flapping, soaring and gliding flight. Accelerometers can allow quantification of flight behavior and thus a context to interpret these energetic costs. However, models to interpret accelerometry data are still being developed, are rarely trained with supervised datasets, and are difficult to apply. We collected accelerometry data at 140 Hz from a trained golden eagle (Aquila chrysaetos) whose flight we recorded with video that we used to characterize behavior. We applied two forms of supervised classification, random forest (RF) models and K-nearest neighbor (KNN) models. The KNN model was substantially easier to implement than the RF approach, but both were highly accurate in classifying basic behaviors such as flapping (85.5% and 83.6% accurate, respectively), soaring (92.8% and 87.6%) and sitting (84.1% and 88.9%), with overall accuracies of 86.6% and 92.3% respectively. More detailed classification schemes, with specific behaviors such as banking and straight flights, were well classified only by the KNN model (91.24% accurate; RF = 61.64% accurate). The RF model maintained its accuracy in classifying basic behaviors at sampling frequencies as low as 10 Hz, the KNN model at sampling frequencies as low as 20 Hz. Classification of accelerometer data collected from free-ranging birds demonstrated a strong dependence of predicted behavior on the type of classification model used. Our analyses demonstrate the consequences of different approaches to classification of accelerometry data, the potential to optimize classification algorithms with validated flight behaviors to improve classification accuracy, ideal sampling frequencies for different classification algorithms, and a number of ways to improve commonly used analytical techniques and best practices for classification of accelerometry data. PMID:28403159

  2. Improved supervised classification of accelerometry data to distinguish behaviors of soaring birds

    USGS Publications Warehouse

    Sur, Maitreyi; Suffredini, Tony; Wessells, Stephen M.; Bloom, Peter H.; Lanzone, Michael J.; Blackshire, Sheldon; Sridhar, Srisarguru; Katzner, Todd

    2017-01-01

    Soaring birds can balance the energetic costs of movement by switching between flapping, soaring and gliding flight. Accelerometers can allow quantification of flight behavior and thus a context to interpret these energetic costs. However, models to interpret accelerometry data are still being developed, are rarely trained with supervised datasets, and are difficult to apply. We collected accelerometry data at 140 Hz from a trained golden eagle (Aquila chrysaetos) whose flight we recorded with video that we used to characterize behavior. We applied two forms of supervised classification, random forest (RF) models and K-nearest neighbor (KNN) models. The KNN model was substantially easier to implement than the RF approach, but both were highly accurate in classifying basic behaviors such as flapping (85.5% and 83.6% accurate, respectively), soaring (92.8% and 87.6%) and sitting (84.1% and 88.9%), with overall accuracies of 86.6% and 92.3% respectively. More detailed classification schemes, with specific behaviors such as banking and straight flights, were well classified only by the KNN model (91.24% accurate; RF = 61.64% accurate). The RF model maintained its accuracy in classifying basic behaviors at sampling frequencies as low as 10 Hz, the KNN model at sampling frequencies as low as 20 Hz. Classification of accelerometer data collected from free-ranging birds demonstrated a strong dependence of predicted behavior on the type of classification model used. Our analyses demonstrate the consequences of different approaches to classification of accelerometry data, the potential to optimize classification algorithms with validated flight behaviors to improve classification accuracy, ideal sampling frequencies for different classification algorithms, and a number of ways to improve commonly used analytical techniques and best practices for classification of accelerometry data.

  3. Best Merge Region Growing with Integrated Probabilistic Classification for Hyperspectral Imagery

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.

    2011-01-01

    A new method for spectral-spatial classification of hyperspectral images is proposed. The method is based on the integration of probabilistic classification within the hierarchical best merge region growing algorithm. For this purpose, a preliminary probabilistic support vector machines classification is performed. Then, a hierarchical step-wise optimization algorithm is applied, iteratively merging regions with the smallest Dissimilarity Criterion (DC). The main novelty of this method lies in defining a DC between regions as a function of region statistical and geometrical features along with classification probabilities. Experimental results are presented on a 200-band AVIRIS image of the Northwestern Indiana's vegetation area and compared with those obtained by recently proposed spectral-spatial classification techniques. The proposed method improves classification accuracies when compared to other classification approaches.
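
    A schematic of best-merge region growing (illustrative only; this toy version uses a plain difference of region means as the dissimilarity criterion, not the authors' probability-aware DC, and it ignores spatial adjacency for brevity):

      import numpy as np

      def best_merge(regions, n_final):
          """Iteratively merge the pair of regions with the smallest
          dissimilarity |mean_i - mean_j| until n_final regions remain."""
          regions = [list(r) for r in regions]
          while len(regions) > n_final:
              means = [np.mean(r) for r in regions]
              best, pair = np.inf, None
              for i in range(len(regions)):
                  for j in range(i + 1, len(regions)):
                      dc = abs(means[i] - means[j])
                      if dc < best:
                          best, pair = dc, (i, j)
              i, j = pair
              regions[i].extend(regions.pop(j))
          return regions

      print(best_merge([[1.0, 1.2], [1.1], [5.0, 5.2], [9.8]], n_final=2))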

  4. Improved Hierarchical Optimization-Based Classification of Hyperspectral Images Using Shape Analysis

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.

    2012-01-01

    A new spectral-spatial method for classification of hyperspectral images is proposed. The HSegClas method is based on the integration of probabilistic classification and shape analysis within the hierarchical step-wise optimization algorithm. First, probabilistic support vector machines classification is applied. Then, at each iteration the two neighboring regions with the smallest Dissimilarity Criterion (DC) are merged, and classification probabilities are recomputed. The important contribution of this work lies in estimating a DC between regions as a function of statistical, classification and geometrical (area and rectangularity) features. Experimental results are presented on a 102-band ROSIS image of the Center of Pavia, Italy. The developed approach yields more accurate classification results when compared to previously proposed methods.

  5. Neural network approaches versus statistical methods in classification of multisource remote sensing data

    NASA Technical Reports Server (NTRS)

    Benediktsson, Jon A.; Swain, Philip H.; Ersoy, Okan K.

    1990-01-01

    Neural network learning procedures and statistical classification methods are applied and compared empirically in the classification of multisource remote sensing and geographic data. Statistical multisource classification by means of a method based on Bayesian classification theory is also investigated and modified. The modifications permit control of the influence of the data sources involved in the classification process. Reliability measures are introduced to rank the quality of the data sources. The data sources are then weighted according to these rankings in the statistical multisource classification. Four data sources are used in experiments: Landsat MSS data and three forms of topographic data (elevation, slope, and aspect). Experimental results show that the two approaches have unique advantages and disadvantages in this classification application.

  6. Classification of change detection and change blindness from near-infrared spectroscopy signals

    NASA Astrophysics Data System (ADS)

    Tanaka, Hirokazu; Katura, Takusige

    2011-08-01

    Using a machine-learning classification algorithm applied to near-infrared spectroscopy (NIRS) signals, we classify a success (change detection) or a failure (change blindness) in detecting visual changes for a change-detection task. Five subjects perform a change-detection task, and their brain activities are continuously monitored. A support-vector-machine algorithm is applied to classify the change-detection and change-blindness trials, and a correct classification probability of 70-90% is obtained for four subjects. Two types of temporal shapes in classification probabilities are found: one exhibiting a maximum value after the task is completed (postdictive type), and another exhibiting a maximum value during the task (predictive type). For the postdictive type, the classification probability begins to increase immediately after task completion and reaches its maximum on about the time scale of the neuronal hemodynamic response, reflecting a subjective report of change detection. For the predictive type, the classification probability increases at task initiation and is maximal while subjects are performing the task, predicting the task performance in detecting a change. We conclude that decoding change detection and change blindness from NIRS signals is possible, and we discuss some future applications toward brain-machine interfaces.

  7. 40 CFR 51.904 - How do the classification and attainment date provisions in section 172(a) of subpart 1 of the...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... attainment date provisions in section 172(a) of subpart 1 of the CAA apply to areas subject to § 51.902(b... attainment date provisions in section 172(a) of subpart 1 of the CAA apply to areas subject to § 51.902(b)? (a) Classification. The Administrator may classify an area subject to § 51.902(b) as an overwhelming...

  8. 40 CFR 51.904 - How do the classification and attainment date provisions in section 172(a) of subpart 1 of the...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... attainment date provisions in section 172(a) of subpart 1 of the CAA apply to areas subject to § 51.902(b... attainment date provisions in section 172(a) of subpart 1 of the CAA apply to areas subject to § 51.902(b)? (a) Classification. The Administrator may classify an area subject to § 51.902(b) as an overwhelming...

  9. 40 CFR 51.904 - How do the classification and attainment date provisions in section 172(a) of subpart 1 of the...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... attainment date provisions in section 172(a) of subpart 1 of the CAA apply to areas subject to § 51.902(b... attainment date provisions in section 172(a) of subpart 1 of the CAA apply to areas subject to § 51.902(b)? (a) Classification. The Administrator may classify an area subject to § 51.902(b) as an overwhelming...

  10. 40 CFR 51.904 - How do the classification and attainment date provisions in section 172(a) of subpart 1 of the...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... attainment date provisions in section 172(a) of subpart 1 of the CAA apply to areas subject to § 51.902(b... attainment date provisions in section 172(a) of subpart 1 of the CAA apply to areas subject to § 51.902(b)? (a) Classification. The Administrator may classify an area subject to § 51.902(b) as an overwhelming...

  11. 40 CFR 51.904 - How do the classification and attainment date provisions in section 172(a) of subpart 1 of the...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... attainment date provisions in section 172(a) of subpart 1 of the CAA apply to areas subject to § 51.902(b... attainment date provisions in section 172(a) of subpart 1 of the CAA apply to areas subject to § 51.902(b)? (a) Classification. The Administrator may classify an area subject to § 51.902(b) as an overwhelming...

  12. Some sequential, distribution-free pattern classification procedures with applications

    NASA Technical Reports Server (NTRS)

    Poage, J. L.

    1971-01-01

    Some sequential, distribution-free pattern classification techniques are presented. The decision problem to which the proposed classification methods are applied is that of discriminating between two kinds of electroencephalogram responses recorded from a human subject: spontaneous EEG and EEG driven by a stroboscopic light stimulus at the alpha frequency. The classification procedures proposed make use of the theory of order statistics. Estimates of the probabilities of misclassification are given. The procedures were tested on Gaussian samples and the EEG responses.

  13. A comprehensive simulation study on classification of RNA-Seq data.

    PubMed

    Zararsız, Gökmen; Goksuluk, Dincer; Korkmaz, Selcuk; Eldem, Vahap; Zararsiz, Gozde Erturk; Duru, Izzet Parug; Ozturk, Ahmet

    2017-01-01

    RNA sequencing (RNA-Seq) is a powerful technique for the gene-expression profiling of organisms that uses the capabilities of next-generation sequencing technologies. Developing gene-expression-based classification algorithms is an emerging and powerful approach for diagnosis, disease classification and monitoring at the molecular level, as well as for providing potential markers of disease. Most of the statistical methods proposed for the classification of gene-expression data are either based on a continuous scale (e.g., microarray data) or require a normal distribution assumption. Hence, these methods cannot be directly applied to RNA-Seq data, which violate both the data-structure and distributional assumptions. However, it is possible to apply these algorithms to RNA-Seq data with appropriate modifications. One way is to develop count-based classifiers, such as Poisson linear discriminant analysis (PLDA) and negative binomial linear discriminant analysis (NBLDA). Another way is to bring the data closer to microarrays and apply microarray-based classifiers. In this study, we compared several classifiers including PLDA with and without power transformation, NBLDA, single SVM, bagging SVM (bagSVM), classification and regression trees (CART), and random forests (RF). We also examined the effect of several parameters, such as overdispersion, sample size, number of genes, number of classes, differential-expression rate, and the transformation method, on model performance. A comprehensive simulation study was conducted and the results were compared with the results of two miRNA and two mRNA experimental datasets. The results revealed that increasing the sample size and differential-expression rate and decreasing the dispersion parameter and number of groups lead to an increase in classification accuracy. As with differential-expression studies, the classification of RNA-Seq data requires careful attention to data overdispersion. We conclude that, as a count-based classifier, power-transformed PLDA and, as microarray-based classifiers, vst- or rlog-transformed RF and SVM may be good choices for classification. An R/BIOCONDUCTOR package, MLSeq, is freely available at https://www.bioconductor.org/packages/release/bioc/html/MLSeq.html.

  14. 40 CFR 257.3-7 - Air.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES CRITERIA FOR CLASSIFICATION OF SOLID WASTE DISPOSAL FACILITIES AND PRACTICES Classification of Solid Waste Disposal Facilities... residential, commercial, institutional or industrial solid waste. This requirement does not apply to...

  15. 40 CFR 257.3-7 - Air.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES CRITERIA FOR CLASSIFICATION OF SOLID WASTE DISPOSAL FACILITIES AND PRACTICES Classification of Solid Waste Disposal Facilities... residential, commercial, institutional or industrial solid waste. This requirement does not apply to...

  16. Pairwise Classifier Ensemble with Adaptive Sub-Classifiers for fMRI Pattern Analysis.

    PubMed

    Kim, Eunwoo; Park, HyunWook

    2017-02-01

    The multi-voxel pattern analysis technique is applied to fMRI data for classification of high-level brain functions using pattern information distributed over multiple voxels. In this paper, we propose a classifier ensemble for multiclass classification in fMRI analysis, exploiting the fact that specific neighboring voxels can contain spatial pattern information. The proposed method converts the multiclass classification to a pairwise classifier ensemble, and each pairwise classifier consists of multiple sub-classifiers using an adaptive feature set for each class-pair. Simulated and real fMRI data were used to verify the proposed method. Intra- and inter-subject analyses were performed to compare the proposed method with several well-known classifiers, including single and ensemble classifiers. The comparison results showed that the proposed method can be generally applied to multiclass classification in both simulations and real fMRI analyses.

  17. A Discriminative Approach to EEG Seizure Detection

    PubMed Central

    Johnson, Ashley N.; Sow, Daby; Biem, Alain

    2011-01-01

    Seizures are abnormal sudden discharges in the brain with signatures represented in electroencephalograms (EEG). The efficacy of applying speech processing techniques to discriminate between seizure and non-seizure states in EEGs is reported. The approach accounts for the challenges of unbalanced datasets (seizure and non-seizure), while also showing a system capable of real-time seizure detection. The Minimum Classification Error (MCE) algorithm, a discriminative learning algorithm in wide use in speech processing, is applied and compared with conventional classification techniques that have already been applied to the discrimination between seizure and non-seizure states in the literature. The system is evaluated on multi-channel EEG recordings from 22 pediatric patients. Experimental results show that the application of speech processing techniques and MCE compares favorably with conventional classification techniques in terms of classification performance, while requiring less computational overhead. The results strongly suggest the possibility of deploying the designed system at the bedside. PMID:22195192

  18. Predicting Flavonoid UGT Regioselectivity

    PubMed Central

    Jackson, Rhydon; Knisley, Debra; McIntosh, Cecilia; Pfeiffer, Phillip

    2011-01-01

    Machine learning was applied to a challenging and biologically significant protein classification problem: the prediction of flavonoid UGT acceptor regioselectivity from primary sequence. Novel indices characterizing graphical models of residues were proposed and found to be widely distributed among existing amino acid indices and to cluster residues appropriately. UGT subsequences biochemically linked to regioselectivity were modeled as sets of index sequences. Several learning techniques incorporating these UGT models were compared with classifications based on standard sequence alignment scores. These techniques included an application of time series distance functions to protein classification. Time series distances defined on the index sequences were used in nearest neighbor and support vector machine classifiers. Additionally, Bayesian neural network classifiers were applied to the index sequences. The experiments identified improvements over the nearest neighbor and support vector machine classifications relying on standard alignment similarity scores, as well as strong correlations between specific subsequences and regioselectivities. PMID:21747849

  19. Fuzzy support vector machine for microarray imbalanced data classification

    NASA Astrophysics Data System (ADS)

    Ladayya, Faroh; Purnami, Santi Wulan; Irhamah

    2017-11-01

    DNA microarrays produce gene-expression data with small sample sizes and large numbers of features. Furthermore, class imbalance is a common problem in microarray data; it occurs when a dataset is dominated by a class with significantly more instances than the minority classes. A classification method is therefore needed that solves the problems of high dimensionality and imbalanced data. Support Vector Machine (SVM) is one of the classification methods capable of handling large or small samples, nonlinearity, high dimensionality, overlearning and local minimum issues. SVM has been widely applied to DNA microarray data classification, and it has been shown that SVM provides the best performance among other machine learning methods. However, imbalanced data remain a problem because SVM treats all samples with the same importance, so the results are biased against the minority class. To overcome the imbalance, Fuzzy SVM (FSVM) is proposed. This method applies a fuzzy membership to each input point and reformulates the SVM such that different input points provide different contributions to the classifier. The minority classes are given large fuzzy memberships, so FSVM can pay more attention to the samples with larger fuzzy membership. Given that DNA microarray data are high-dimensional with a very large number of features, feature selection is first performed using the Fast Correlation-Based Filter (FCBF). In this study, SVM, FSVM, and both methods combined with FCBF are analyzed and their classification performance is compared. Based on the overall results, FSVM on selected features has the best classification performance compared to SVM.
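
    A minimal sketch of the weighting idea with scikit-learn (illustrative; SVC accepts per-sample weights at fit time, which play the role of fuzzy memberships here, and the imbalanced data are synthetic):

      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(4)
      # Imbalanced two-class toy data: 180 majority vs 20 minority samples
      X = np.vstack([rng.normal(0, 1, (180, 5)), rng.normal(1, 1, (20, 5))])
      y = np.array([0] * 180 + [1] * 20)

      # Membership-like weights: minority samples count more in the loss
      weights = np.where(y == 1, 9.0, 1.0)

      clf = SVC(kernel="rbf")
      clf.fit(X, y, sample_weight=weights)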

  20. Hydrologic Landscape Characterization for the Pacific Northwest, USA

    EPA Science Inventory

    Hydrologic classification can help address some of the challenges facing catchment hydrology. Wigington et al. (2013) developed a hydrologic landscape (HL) approach to classification that was applied to the state of Oregon. Several characteristics limited its applicability outs...

  1. 40 CFR 11.1 - Purpose.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Purpose. 11.1 Section 11.1 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GENERAL SECURITY CLASSIFICATION REGULATIONS PURSUANT... the classification and declassification of national security information. They apply also to...

  2. Motor Oil Classification using Color Histograms and Pattern Recognition Techniques.

    PubMed

    Ahmadi, Shiva; Mani-Varnosfaderani, Ahmad; Habibi, Biuck

    2018-04-20

    Motor oil classification is important for quality control and the identification of oil adulteration. In this work, we propose a simple, rapid, inexpensive and nondestructive approach based on image analysis and pattern recognition techniques for the classification of nine different types of motor oils according to their corresponding color histograms. For this, we applied color histograms in different color spaces, such as red green blue (RGB), grayscale, and hue saturation intensity (HSI), in order to extract features that can help with the classification procedure. These color histograms and their combinations were used as input for model development and then were statistically evaluated by using linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), and support vector machine (SVM) techniques. Here, two common solutions for solving a multiclass classification problem were applied: (1) transformation to a binary classification problem using a one-against-all (OAA) approach and (2) extension from binary classifiers to a single globally optimized multilabel classification model. In the OAA strategy, LDA, QDA, and SVM reached up to 97% in terms of accuracy, sensitivity, and specificity for both the training and test sets. In the extension from the binary case, despite good performance by the SVM classification model, QDA and LDA provided better results, up to 92% for RGB-grayscale-HSI color histograms and up to 93% for the HSI color map, respectively. In order to reduce the number of independent variables for modeling, a principal component analysis algorithm was used. Our results suggest that the proposed method is promising for the identification and classification of different types of motor oils.
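
    A minimal sketch of the histogram-plus-classifier pipeline (illustrative; the oil images are synthetic arrays, only RGB histograms are used, and the two classes are artificial):

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      def rgb_histogram(image, bins=8):
          """Concatenate per-channel histograms into one feature vector."""
          feats = [np.histogram(image[..., c], bins=bins, range=(0, 256),
                                density=True)[0] for c in range(3)]
          return np.concatenate(feats)

      rng = np.random.default_rng(5)
      # Hypothetical photos of two motor oil types (darker vs brighter)
      images = ([rng.integers(0, 128, (32, 32, 3)) for _ in range(20)]
                + [rng.integers(128, 256, (32, 32, 3)) for _ in range(20)])
      labels = [0] * 20 + [1] * 20

      X = np.array([rgb_histogram(img) for img in images])
      clf = LinearDiscriminantAnalysis().fit(X, labels)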

  3. Protein classification based on text document classification techniques.

    PubMed

    Cheng, Betty Yee Man; Carbonell, Jaime G; Klein-Seetharaman, Judith

    2005-03-01

    The need for accurate, automated protein classification methods continues to increase as advances in biotechnology uncover new proteins. G-protein coupled receptors (GPCRs) are a particularly difficult superfamily of proteins to classify due to extreme diversity among its members. Previous comparisons of BLAST, k-nearest neighbor (k-NN), hidden Markov model (HMM) and support vector machine (SVM) using alignment-based features have suggested that classifiers at the complexity of SVM are needed to attain high accuracy. Here, analogous to document classification, we applied Decision Tree and Naive Bayes classifiers with chi-square feature selection on counts of n-grams (i.e. short peptide sequences of length n) to this classification task. Using the GPCR dataset and evaluation protocol from the previous study, the Naive Bayes classifier attained an accuracy of 93.0 and 92.4% in level I and level II subfamily classification, respectively, while SVM has a reported accuracy of 88.4 and 86.3%. This is a 39.7 and 44.5% reduction in residual error for level I and level II subfamily classification, respectively. The Decision Tree, while inferior to SVM, outperforms HMM in both level I and level II subfamily classification. For those GPCR families whose profiles are stored in the Protein FAMilies database of alignments and HMMs (PFAM), our method performs comparably to a search against those profiles. Finally, our method can be generalized to other protein families by applying it to the superfamily of nuclear receptors with 94.5, 97.8 and 93.6% accuracy in family, level I and level II subfamily classification, respectively. Copyright 2005 Wiley-Liss, Inc.
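
    The n-gram pipeline translates almost directly into scikit-learn, as in the hedged sketch below; the toy sequences and the feature count k are placeholders, not the GPCR data or the paper's tuned settings:

```python
# Sketch: treating protein sequences as text, counting n-grams, selecting
# features by chi-square and classifying with Naive Bayes (toy sequences;
# the paper used the GPCR dataset and tuned n-gram lengths per task).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

sequences = ["MTEYKLVVVG", "MKTAYIAKQR", "MTEYRLVVLG", "MKTAFIAKQR"]
families = ["ras", "other", "ras", "other"]

pipe = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(2, 3)),  # peptide n-grams
    SelectKBest(chi2, k=20),
    MultinomialNB(),
)
pipe.fit(sequences, families)
print(pipe.predict(["MTEYKLVVLG"]))
```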

  4. Dynamic species classification of microorganisms across time, abiotic and biotic environments—A sliding window approach

    PubMed Central

    Griffiths, Jason I.; Fronhofer, Emanuel A.; Garnier, Aurélie; Seymour, Mathew; Altermatt, Florian; Petchey, Owen L.

    2017-01-01

    The development of video-based monitoring methods allows for rapid, dynamic and accurate monitoring of individuals or communities, compared to slower traditional methods, with far-reaching ecological and evolutionary applications. Large amounts of data are generated using video-based methods, which can be effectively processed using machine learning (ML) algorithms into meaningful ecological information. ML uses user-defined classes (e.g. species), derived from a subset (i.e. training data) of video-observed quantitative features (e.g. phenotypic variation), to infer classes in subsequent observations. However, phenotypic variation often changes due to environmental conditions, which may lead to poor classification if environmentally induced variation in phenotypes is not accounted for. Here we describe a framework for classifying species under changing environmental conditions based on random forest classification. A sliding window approach was developed that restricts the temporal and environmental conditions used for training, to improve the classification. We tested our approach by applying the classification framework to experimental data. The experiment used a set of six ciliate species to monitor changes in community structure and behavior over hundreds of generations, in dozens of species combinations and across a temperature gradient. Differences in biotic and abiotic conditions caused simplistic classification approaches to be unsuccessful. In contrast, the sliding window approach allowed classification to be highly successful, as phenotypic differences driven by environmental change could be captured by the classifier. Importantly, classification using the random forest algorithm showed comparable success when validated against traditional, slower, manual identification. Our framework allows for reliable classification in dynamic environments, and may help to improve strategies for long-term monitoring of species in changing environments. Our classification pipeline can be applied in fields assessing species community dynamics, such as eco-toxicology, ecology and evolutionary ecology. PMID:28472193
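
    The sliding-window retraining loop can be illustrated as follows, with synthetic drifting phenotypes standing in for the ciliate measurements; the window length is a tunable assumption:

```python
# Sketch of the sliding-window idea: retrain a random forest on observations
# from a recent time window before classifying the next time point, so the
# classifier tracks environmentally driven phenotype drift (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
T, n_per_t, window = 50, 40, 5
# Phenotypes of two "species" drift with time (e.g. temperature).
drift = np.linspace(0, 3, T)
X = np.array([rng.normal(loc=[d + s, s - d], size=2)
              for d in drift for s in (0, 1)
              for _ in range(n_per_t // 2)]).reshape(T, n_per_t, 2)
y = np.tile(np.repeat([0, 1], n_per_t // 2), T).reshape(T, n_per_t)

correct = 0.0
for t in range(window, T):
    train_X = X[t - window:t].reshape(-1, 2)
    train_y = y[t - window:t].ravel()
    rf = RandomForestClassifier(n_estimators=50, random_state=0)
    rf.fit(train_X, train_y)
    correct += (rf.predict(X[t]) == y[t]).mean()
print("mean sliding-window accuracy:", correct / (T - window))
```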

  5. Arrhythmia Classification Based on Multi-Domain Feature Extraction for an ECG Recognition System.

    PubMed

    Li, Hongqiang; Yuan, Danyang; Wang, Youxi; Cui, Dianyin; Cao, Lu

    2016-10-20

    Automatic recognition of arrhythmias is particularly important in the diagnosis of heart diseases. This study presents an electrocardiogram (ECG) recognition system based on multi-domain feature extraction to classify ECG beats. An improved wavelet threshold method for ECG signal pre-processing is applied to remove noise interference. A novel multi-domain feature extraction method is proposed; this method employs kernel-independent component analysis in nonlinear feature extraction and uses discrete wavelet transform to extract frequency domain features. The proposed system utilises a support vector machine classifier optimized with a genetic algorithm to recognize different types of heartbeats. An ECG acquisition experimental platform, in which ECG beats are collected as ECG data for classification, is constructed to demonstrate the effectiveness of the system in ECG beat classification. The presented system, when applied to the MIT-BIH arrhythmia database, achieves a high classification accuracy of 98.8%. Experimental results based on the ECG acquisition experimental platform show that the system obtains a satisfactory classification accuracy of 97.3% and is able to classify ECG beats efficiently for the automatic identification of cardiac arrhythmias.
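
    A reduced sketch of the frequency-domain branch of this pipeline is given below, using PyWavelets for the discrete wavelet transform and a plain SVM; the paper's kernel-independent component analysis and genetic-algorithm parameter tuning are omitted, and the toy beats are synthetic:

```python
# Sketch: discrete-wavelet-transform energy features for beat classification
# with an SVM (PyWavelets + scikit-learn). Fixed SVM parameters replace the
# paper's genetic-algorithm optimisation.
import numpy as np
import pywt
from sklearn.svm import SVC

def dwt_energy_features(beat, wavelet="db4", level=4):
    """Energy of each DWT sub-band as a frequency-domain feature vector."""
    coeffs = pywt.wavedec(beat, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

rng = np.random.default_rng(3)
# Toy "beats": two classes with different dominant frequencies.
t = np.linspace(0, 1, 256)
beats = [np.sin(2 * np.pi * f * t) + 0.1 * rng.normal(size=t.size)
         for f in ([5] * 30 + [25] * 30)]
X = np.array([dwt_energy_features(b) for b in beats])
y = np.array([0] * 30 + [1] * 30)

clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, y)
print(clf.score(X, y))
```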

  6. Arrhythmia Classification Based on Multi-Domain Feature Extraction for an ECG Recognition System

    PubMed Central

    Li, Hongqiang; Yuan, Danyang; Wang, Youxi; Cui, Dianyin; Cao, Lu

    2016-01-01

    Automatic recognition of arrhythmias is particularly important in the diagnosis of heart diseases. This study presents an electrocardiogram (ECG) recognition system based on multi-domain feature extraction to classify ECG beats. An improved wavelet threshold method for ECG signal pre-processing is applied to remove noise interference. A novel multi-domain feature extraction method is proposed; this method employs kernel-independent component analysis in nonlinear feature extraction and uses discrete wavelet transform to extract frequency domain features. The proposed system utilises a support vector machine classifier optimized with a genetic algorithm to recognize different types of heartbeats. An ECG acquisition experimental platform, in which ECG beats are collected as ECG data for classification, is constructed to demonstrate the effectiveness of the system in ECG beat classification. The presented system, when applied to the MIT-BIH arrhythmia database, achieves a high classification accuracy of 98.8%. Experimental results based on the ECG acquisition experimental platform show that the system obtains a satisfactory classification accuracy of 97.3% and is able to classify ECG beats efficiently for the automatic identification of cardiac arrhythmias. PMID:27775596

  7. A Three-Dimensional Receiver Operator Characteristic Surface Diagnostic Metric

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.

    2011-01-01

    Receiver Operator Characteristic (ROC) curves are commonly applied as metrics for quantifying the performance of binary fault detection systems. An ROC curve provides a visual representation of a detection system's True Positive Rate versus False Positive Rate sensitivity as the detection threshold is varied. The area under the curve provides a measure of fault detection performance independent of the applied detection threshold. While the standard ROC curve is well suited for quantifying binary fault detection performance, it is not suitable for quantifying the classification performance of multi-fault classification problems. Furthermore, it does not provide a measure of diagnostic latency. To address these shortcomings, a novel three-dimensional receiver operator characteristic (3D ROC) surface metric has been developed. This is done by generating and applying two separate curves: the standard ROC curve reflecting fault detection performance, and a second curve reflecting fault classification performance. A third dimension, diagnostic latency, is added, giving rise to 3D ROC surfaces. Applying numerical integration techniques, the volumes under and between the surfaces are calculated to produce metrics of the diagnostic system's detection and classification performance. This paper will describe the 3D ROC surface metric in detail, and present an example of its application for quantifying the performance of aircraft engine gas path diagnostic methods. Metric limitations and potential enhancements are also discussed.
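
    The volume-under-surface computation can be illustrated with a toy detector whose discrimination improves with latency; per-latency ROC areas are integrated over latency with the trapezoidal rule (a simplification of the paper's paired detection and classification surfaces):

```python
# Sketch of the volume-under-surface idea: trace an ROC curve at each
# diagnostic latency, then numerically integrate the per-latency areas to
# obtain a volume metric (synthetic curves, illustrative detector model).
import numpy as np

latencies = np.linspace(0, 10, 11)            # e.g. seconds after fault onset
fpr = np.linspace(0, 1, 101)

def tpr_at(latency, fpr):
    # Toy detector: discrimination improves as latency grows.
    return fpr ** (1.0 / (1.0 + 0.5 * latency))

areas = [np.trapz(tpr_at(l, fpr), fpr) for l in latencies]   # AUC per latency
volume = np.trapz(areas, latencies) / latencies[-1]          # normalised VUS
print(f"area range {min(areas):.3f}-{max(areas):.3f}, volume {volume:.3f}")
```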

  8. [Tobacco quality analysis of industrial classification of different years using near-infrared (NIR) spectrum].

    PubMed

    Wang, Yi; Xiang, Ma; Wen, Ya-Dong; Yu, Chun-Xia; Wang, Luo-Ping; Zhao, Long-Lian; Li, Jun-Hui

    2012-11-01

    In this study, tobacco quality analysis of the main industrial classifications across different years was carried out by applying spectrum projection and correlation methods. The data were near-infrared (NIR) spectra from Hongta Tobacco (Group) Co., Ltd. Spectra of 5730 industrially classified tobacco leaf samples from Yuxi in Yunnan province, collected from 2007 to 2010 using near-infrared spectroscopy, came from different stalk positions and colors and all belong to the tobacco variety HONGDA. The results showed that, when the samples of a given year were randomly divided into analysis and verification sets in a 2:1 ratio, the verification set corresponded with the analysis set under spectrum projection, with correlation coefficients above 0.98. The correlation coefficients between different years under spectrum projection were above 0.97; the highest was between 2008 and 2009, and the lowest between 2007 and 2010. The study also presented a method for obtaining quantitative similarity values of different industrial classification samples. These similarity and consistency values are instructive for the combination and replacement of tobacco leaves in blending.

  9. Comparison and critical appraisal of dengue clinical guidelines and their use in Asia and Latin America.

    PubMed

    Santamaria, R; Martinez, E; Kratochwill, S; Soria, C; Tan, L H; Nuñez, A; Dimaano, E; Villegas, E; Bendezú, H; Kroeger, A; Castelobranco, I; Siqueira, J B; Jaenisch, T; Horstick, O; Lum, L C S

    2009-12-01

    The World Health Organization (WHO) dengue classification scheme for dengue fever (DF) and dengue haemorrhagic fever (DHF)/dengue shock syndrome (DSS) has been adopted as the standard for diagnosis, clinical management and reporting. In recent years, difficulties in applying the WHO case classification have been reported in several countries. A multicenter study was carried out in Asia and Latin America to analyze the variation and utility of dengue clinical guidelines (DCGs), taking as reference the WHO/PAHO guidelines (1994) and the WHO/SEARO guidelines (1998). A document analysis of 13 dengue guidelines was followed by a questionnaire and Focus Group discussions (FGDs) with 858 health care providers in seven countries. Differences among the 13 DCGs were identified, including the concept of warning signs, case classification, use of treatment algorithms and grading into levels of severity. The questionnaires and FGDs revealed (1) inaccessibility of DCGs, (2) lack of training, (3) insufficient number of staff to correctly apply the DCGs at the frontline and (4) the unavailability of diagnostic tests. The differences among the DCGs and the inconsistency in their application suggest a need to re-evaluate and standardise DCGs. This applies especially to case classification and case management.

  10. Applying Cost-Sensitive Extreme Learning Machine and Dissimilarity Integration to Gene Expression Data Classification.

    PubMed

    Liu, Yanqiu; Lu, Huijuan; Yan, Ke; Xia, Haixia; An, Chunlin

    2016-01-01

    Embedding cost-sensitive factors into the classifiers increases the classification stability and reduces the classification costs for classifying large-scale, redundant, and imbalanced datasets, such as gene expression data. In this study, we extend our previous work, that is, Dissimilar ELM (D-ELM), by introducing misclassification costs into the classifier. We name the proposed algorithm the cost-sensitive D-ELM (CS-D-ELM). Furthermore, we embed rejection cost into the CS-D-ELM to increase the classification stability of the proposed algorithm. Experimental results show that the rejection cost embedded CS-D-ELM algorithm effectively reduces the average and overall cost of the classification process, while the classification accuracy still remains competitive. The proposed method can be extended to classification problems of other redundant and imbalanced data.
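
    The core of a cost-sensitive ELM can be sketched as a random hidden layer whose output weights are solved by weighted regularized least squares, with per-sample weights encoding misclassification costs; the dissimilarity integration and rejection cost of CS-D-ELM are omitted, and all numbers below are illustrative:

```python
# Sketch of a cost-sensitive extreme learning machine: random hidden layer,
# output weights from weighted regularised least squares, per-sample weights
# from misclassification costs (simplified relative to CS-D-ELM).
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.3 * rng.normal(size=200) > 0.8).astype(int)  # imbalanced
T = np.where(y == 1, 1.0, -1.0)[:, None]                      # +/-1 targets

L, C = 50, 1.0                                   # hidden units, regulariser
W = rng.normal(size=(10, L))
b = rng.normal(size=L)
H = np.tanh(X @ W + b)                           # random feature map

# Misclassification costs: missing the rare class is 5x as costly.
cost = np.where(y == 1, 5.0, 1.0)
Wc = np.diag(cost)
beta = np.linalg.solve(H.T @ Wc @ H + np.eye(L) / C, H.T @ Wc @ T)

pred = (H @ beta > 0).astype(int).ravel()
print("recall on rare class:", (pred[y == 1] == 1).mean())
```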

  11. Integration of Chinese medicine with Western medicine could lead to future medicine: molecular module medicine.

    PubMed

    Zhang, Chi; Zhang, Ge; Chen, Ke-ji; Lu, Ai-ping

    2016-04-01

    The development of an effective classification method for human health conditions is essential for precise diagnosis and delivery of tailored therapy to individuals. Contemporary classification of disease systems has properties that limit its information content and usability. Chinese medicine pattern classification has been incorporated with disease classification, and this integrated classification method became more precise because of the increased understanding of the molecular mechanisms. However, we are still facing the complexity of diseases and patterns in the classification of health conditions. With continuing advances in omics methodologies and instrumentation, we are proposing a new classification approach: molecular module classification, which applies molecular modules to the classification of human health status. The initiative would precisely define health status, provide accurate diagnoses, optimize therapeutics and improve new drug discovery strategies. Therefore, there would be no current disease diagnosis and no disease pattern classification; in the future, a new medicine based on this classification, molecular module medicine, could redefine health statuses and reshape clinical practice.

  12. Standardizing texture and facies codes for a process-based classification of clastic sediment and rock

    USGS Publications Warehouse

    Farrell, K.M.; Harris, W.B.; Mallinson, D.J.; Culver, S.J.; Riggs, S.R.; Pierson, J.; ,; Lautier, J.C.

    2012-01-01

    Proposed here is a universally applicable, texturally based classification of clastic sediment that is independent from composition, cementation, and geologic environment, is closely allied to process sedimentology, and applies to all compartments in the source-to-sink system. The classification is contingent on defining the term "clastic" so that it is independent from composition or origin and includes any particles or grains that are subject to erosion, transportation, and deposition. Modifications to Folk's (1980) texturally based classification that include applying new assumptions and defining a broader array of textural fields are proposed to accommodate this. The revised ternary diagrams include additional textural fields that better define poorly sorted and coarse-grained deposits, so that all end members (gravel, sand, and mud size fractions) are included in textural codes. Revised textural fields, or classes, are based on a strict adherence to volumetric estimates of percentages of gravel, sand, and mud size grain populations, which by definition must sum to 100%. The new classification ensures that descriptors are applied consistently to all end members in the ternary diagram (gravel, sand, and mud) according to several rules, and that none of the end members are ignored. These modifications provide a basis for standardizing vertical displays of texture in graphic logs, lithofacies codes, and their derivatives, hydrofacies. Hydrofacies codes are nondirectional permeability indicators that predict aquifer or reservoir potential. Folk's (1980) ternary diagram for fine-grained clastic sediments (sand, silt, and clay size fractions) is also revised to preserve consistency with the revised diagram for gravel, sand, and mud. Standardizing texture ensures that the principles of process sedimentology are consistently applied to compositionally variable rock sequences, such as mixed carbonate-siliciclastic ramp settings, and the extreme ends of depositional systems.

  13. Non-linear molecular pattern classification using molecular beacons with multiple targets.

    PubMed

    Lee, In-Hee; Lee, Seung Hwan; Park, Tai Hyun; Zhang, Byoung-Tak

    2013-12-01

    In vitro pattern classification has been highlighted as an important future application of DNA computing. Previous work has demonstrated the feasibility of linear classifiers using DNA-based molecular computing. However, complex tasks require non-linear classification capability. Here we design a molecular beacon that can interact with multiple targets and experimentally show that its fluorescent signals form a complex radial-basis function, enabling it to be used as a building block for non-linear molecular classification in vitro. The proposed method was successfully applied to solving artificial and real-world classification problems: XOR and microRNA expression patterns. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
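
    A software analogue of the non-linear capability is a radial-basis-function classifier solving XOR, the same benchmark used above; the paper computes the radial-basis response chemically, so the sketch below only mirrors the mathematics:

```python
# Sketch: a radial-basis-function classifier solving XOR, which is not
# linearly separable (software analogue only; parameters are illustrative).
import numpy as np
from sklearn.svm import SVC

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])                     # XOR labels

clf = SVC(kernel="rbf", gamma=2.0, C=10.0).fit(X, y)
print(clf.predict(X))                          # -> [0 1 1 0]
```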

  14. Evaluation of results of US corn and soybeans exploratory experiment: Classification procedures verification test. [Missouri, Iowa, Indiana, and Illinois

    NASA Technical Reports Server (NTRS)

    Carnes, J. G.; Baird, J. E. (Principal Investigator)

    1980-01-01

    The classification procedure utilized in making crop proportion estimates for corn and soybeans using remotely sensed data was evaluated. The procedure was derived during the transition year of the Large Area Crop Inventory Experiment. Analysis of variance techniques were applied to classifications performed by 3 groups of analysts who processed 25 segments selected from 4 agrophysical units (APU's). Group and APU effects were assessed to determine factors which affected the quality of the classifications. The classification results were studied to determine the effectiveness of the procedure in producing corn and soybeans proportion estimates.

  15. An information-based network approach for protein classification

    PubMed Central

    Wan, Xiaogeng; Zhao, Xin; Yau, Stephen S. T.

    2017-01-01

    Protein classification is one of the critical problems in bioinformatics. Early studies used geometric distances and phylogenetic trees to classify proteins. These methods use binary trees to present protein classification. In this paper, we propose a new protein classification method, whereby theories of information and networks are used to classify the multivariate relationships of proteins. In this study, the protein universe is modeled as an undirected network, where proteins are classified according to their connections. Our method is unsupervised, multivariate, and alignment-free. It can be applied to the classification of both protein sequences and structures. Nine examples are used to demonstrate the efficiency of our new method. PMID:28350835
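
    One way to picture the network view, under assumed simplifications: connect proteins whose descriptor distance falls below a threshold and read classes off the connected components. Euclidean distance on toy vectors replaces the authors' information-based measure:

```python
# Sketch: model a toy "protein universe" as an undirected network and
# classify by connected components (Euclidean distance and the threshold
# are illustrative stand-ins for the paper's information-based construction).
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(12)
# Toy protein descriptors: two families with distinct compositions.
X = np.vstack([rng.normal(0, 0.3, size=(10, 5)),
               rng.normal(3, 0.3, size=(10, 5))])

D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
A = csr_matrix((D < 1.5) & (D > 0))            # edges below the threshold
n_groups, labels = connected_components(A, directed=False)
print(n_groups, labels)
```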

  16. Tissue classification for laparoscopic image understanding based on multispectral texture analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Wirkert, Sebastian J.; Iszatt, Justin; Kenngott, Hannes; Wagner, Martin; Mayer, Benjamin; Stock, Christian; Clancy, Neil T.; Elson, Daniel S.; Maier-Hein, Lena

    2016-03-01

    Intra-operative tissue classification is one of the prerequisites for providing context-aware visualization in computer-assisted minimally invasive surgeries. As many anatomical structures are difficult to differentiate in conventional RGB medical images, we propose a classification method based on multispectral image patches. In a comprehensive ex vivo study we show (1) that multispectral imaging data is superior to RGB data for organ tissue classification when used in conjunction with widely applied feature descriptors and (2) that combining the tissue texture with the reflectance spectrum improves the classification performance. Multispectral tissue analysis could thus evolve as a key enabling technique in computer-assisted laparoscopy.

  17. CHANGING OUR DIAGNOSTIC PARADIGM: MOVEMENT SYSTEM DIAGNOSTIC CLASSIFICATION

    PubMed Central

    Kamonseki, Danilo H.; Staker, Justin L.; Lawrence, Rebekah L.; Braman, Jonathan P.

    2017-01-01

    Proper diagnosis is a first step in applying best available treatments, and prognosticating outcomes for clients. Currently, the majority of musculoskeletal diagnoses are classified according to pathoanatomy. However, the majority of physical therapy treatments are applied toward movement system impairments or pain. While advocated within the physical therapy profession for over thirty years, diagnostic classification within a movement system framework has not been uniformly developed or adopted. We propose a basic framework and rationale for application of a movement system diagnostic classification for atraumatic shoulder pain conditions, as a case for the broader development of movement system diagnostic labels. Shifting our diagnostic paradigm has potential to enhance communication, improve educational efficiency, facilitate research, directly link to function, improve clinical care, and accelerate preventive interventions. PMID:29158950

  18. Mapping and monitoring changes in vegetation communities of Jasper Ridge, CA, using spectral fractions derived from AVIRIS images

    NASA Technical Reports Server (NTRS)

    Sabol, Donald E., Jr.; Roberts, Dar A.; Adams, John B.; Smith, Milton O.

    1993-01-01

    An important application of remote sensing is to map and monitor changes over large areas of the land surface. This is particularly significant with the current interest in monitoring vegetation communities. Most traditional methods for mapping different types of plant communities are based upon statistical classification techniques (e.g., parallelepiped, nearest-neighbor) applied to uncalibrated multispectral data. Classes from these techniques are typically difficult to interpret (particularly to a field ecologist/botanist). Also, classes derived for one image can be very different from those derived from another image of the same area, making interpretation of observed temporal changes nearly impossible. More recently, neural networks have been applied to classification. Neural network classification, based upon spectral matching, is weak in dealing with spectral mixtures (a condition prevalent in images of natural surfaces). Another approach to mapping vegetation communities is based on spectral mixture analysis, which can provide a consistent framework for image interpretation. Roberts et al. (1990) mapped vegetation using the band residuals from a simple mixing model (the same spectral endmembers applied to all image pixels). Sabol et al. (1992b) and Roberts et al. (1992) used different methods to apply the most appropriate spectral endmembers to each image pixel, thereby allowing mapping of vegetation based upon the different endmember spectra. In this paper, we describe a new approach to classification of vegetation communities based upon the spectral fractions derived from spectral mixture analysis. This approach was applied to three 1992 AVIRIS images of Jasper Ridge, California to observe seasonal changes in surface composition.

  19. En bloc prefabrication of vascularized bioartificial bone grafts in sheep and complete workflow for custom-made transplants.

    PubMed

    Kokemüller, H; Jehn, P; Spalthoff, S; Essig, H; Tavassol, F; Schumann, P; Andreae, A; Nolte, I; Jagodzinski, M; Gellrich, N-C

    2014-02-01

    The aim of this pilot study was to determine, in a new experimental model, whether complex bioartificial monoblocs of relevant size and stability can be prefabricated in a defined three-dimensional design, in which the latissimus dorsi muscle serves as a natural bioreactor and the thoracodorsal vessel tree is prepared for axial construct perfusion. Eighteen sheep were included in the study, with six animals in each of three experimental groups. Vitalization of the β-tricalcium phosphate-based constructs was performed by direct application of unmodified osteogenic material from the iliac crest (group A), in vivo application of nucleated cell concentrate (NCC) from bone marrow aspirate (group B), and in vitro cultivation of bone marrow stromal cells (BMSC) in a perfusion bioreactor system (group C). The contours of the constructs were designed digitally and transferred onto the bioartificial bone grafts using a titanium cage, which was bent over a stereolithographic model of the defined subvolume intraoperatively. At the end of the prefabrication process, only the axial vascularized constructs of group A demonstrated vital bone formation with considerable stability. In groups B and C, the applied techniques were not able to induce ectopic bone formation. The presented computer-assisted workflow allows the prefabrication of custom-made bioartificial transplants. Copyright © 2013 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  20. A Ternary Hybrid EEG-NIRS Brain-Computer Interface for the Classification of Brain Activation Patterns during Mental Arithmetic, Motor Imagery, and Idle State.

    PubMed

    Shin, Jaeyoung; Kwon, Jinuk; Im, Chang-Hwan

    2018-01-01

    The performance of a brain-computer interface (BCI) can be enhanced by simultaneously using two or more modalities to record brain activity, which is generally referred to as a hybrid BCI. To date, many BCI researchers have tried to implement a hybrid BCI system by combining electroencephalography (EEG) and functional near-infrared spectroscopy (NIRS) to improve the overall accuracy of binary classification. However, since hybrid EEG-NIRS BCI, which will be denoted by hBCI in this paper, has not been applied to ternary classification problems, paradigms and classification strategies appropriate for ternary classification using hBCI are not well investigated. Here we propose the use of an hBCI for the classification of three brain activation patterns elicited by mental arithmetic, motor imagery, and idle state, with the aim to elevate the information transfer rate (ITR) of hBCI by increasing the number of classes while minimizing the loss of accuracy. EEG electrodes were placed over the prefrontal cortex and the central cortex, and NIRS optodes were placed only on the forehead. The ternary classification problem was decomposed into three binary classification problems using the "one-versus-one" (OVO) classification strategy to apply the filter-bank common spatial patterns filter to EEG data. A 10 × 10-fold cross validation was performed using shrinkage linear discriminant analysis (sLDA) to evaluate the average classification accuracies for EEG-BCI, NIRS-BCI, and hBCI when the meta-classification method was adopted to enhance classification accuracy. The ternary classification accuracies for EEG-BCI, NIRS-BCI, and hBCI were 76.1 ± 12.8, 64.1 ± 9.7, and 82.2 ± 10.2%, respectively. The classification accuracy of the proposed hBCI was thus significantly higher than those of the other BCIs (p < 0.005). The average ITR for the proposed hBCI was calculated to be 4.70 ± 1.92 bits/minute, which was 34.3% higher than that reported for a previous binary hBCI study.
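
    The decomposition-plus-shrinkage-LDA stage maps neatly onto scikit-learn, as sketched below with random features standing in for the FBCSP/NIRS features; the meta-classification stage is omitted:

```python
# Sketch: decomposing a ternary problem into three binary ones with a
# one-versus-one strategy and shrinkage LDA, mirroring the pipeline above
# (synthetic features; the meta-classifier stage is not reproduced).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OneVsOneClassifier

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(loc=m, size=(60, 8))
               for m in (0.0, 0.7, 1.4)])     # arithmetic / imagery / idle
y = np.repeat([0, 1, 2], 60)

slda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
clf = OneVsOneClassifier(slda)
print(cross_val_score(clf, X, y, cv=10).mean())
```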

  1. Classification of earth terrain using polarimetric synthetic aperture radar images

    NASA Technical Reports Server (NTRS)

    Lim, H. H.; Swartz, A. A.; Yueh, H. A.; Kong, J. A.; Shin, R. T.; Van Zyl, J. J.

    1989-01-01

    Supervised and unsupervised classification techniques are developed and used to classify the earth terrain components from SAR polarimetric images of San Francisco Bay and Traverse City, Michigan. The supervised techniques include the Bayes classifiers, normalized polarimetric classification, and simple feature classification using discriminants such as the absolute and normalized magnitude response of individual receiver channel returns and the phase difference between receiver channels. An algorithm is developed as an unsupervised technique which classifies terrain elements based on the relationship between the orientation angle and the handedness of the transmitting and receiving polarization states. It is found that supervised classification produces the best results when accurate classifier training data are used, while unsupervised classification may be applied when training data are not available.

  2. A support vector machine approach for classification of welding defects from ultrasonic signals

    NASA Astrophysics Data System (ADS)

    Chen, Yuan; Ma, Hong-Wei; Zhang, Guang-Ming

    2014-07-01

    Defect classification is an important issue in ultrasonic non-destructive evaluation. A layered multi-class support vector machine (LMSVM) classification system, which combines multiple SVM classifiers through a layered architecture, is proposed in this paper. The proposed LMSVM classification system is applied to the classification of welding defects from ultrasonic test signals. The measured ultrasonic defect echo signals are first decomposed into wavelet coefficients by the wavelet packet transform. The energy of the wavelet coefficients at different frequency channels are used to construct the feature vectors. The bees algorithm (BA) is then used for feature selection and SVM parameter optimisation for the LMSVM classification system. The BA-based feature selection optimises the energy feature vectors. The optimised feature vectors are input to the LMSVM classification system for training and testing. Experimental results of classifying welding defects demonstrate that the proposed technique is highly robust, precise and reliable for ultrasonic defect classification.
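
    The wavelet-packet energy features can be sketched with PyWavelets as below; the bees-algorithm feature selection and SVM tuning, and the layered multi-class architecture, are omitted, and the echoes are synthetic:

```python
# Sketch: wavelet-packet energy features for ultrasonic echoes (PyWavelets),
# fitted with a plain SVM instead of the paper's BA-optimised layered system.
import numpy as np
import pywt
from sklearn.svm import SVC

def wp_energies(signal, wavelet="db4", level=3):
    """Energy in each terminal wavelet-packet node (frequency channel)."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    return np.array([np.sum(node.data ** 2)
                     for node in wp.get_level(level, order="freq")])

rng = np.random.default_rng(6)
t = np.linspace(0, 1, 512)
echoes = [np.sin(2 * np.pi * f * t) * np.exp(-5 * t)
          + 0.05 * rng.normal(size=t.size)
          for f in ([40] * 20 + [80] * 20)]    # two synthetic defect types
X = np.array([wp_energies(e) for e in echoes])
y = np.array([0] * 20 + [1] * 20)
print(SVC(kernel="rbf").fit(X, y).score(X, y))
```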

  3. Classification and Accuracy Assessment for Coarse Resolution Mapping within the Great Lakes Basin, USA

    EPA Science Inventory

    This study applied a phenology-based land-cover classification approach across the Laurentian Great Lakes Basin (GLB) using time-series data consisting of 23 Moderate Resolution Imaging Spectroradiometer (MODIS) Normalized Difference Vegetation Index (NDVI) composite images (250 ...

  4. Comparing the MRI-based Goutallier Classification to an experimental quantitative MR spectroscopic fat measurement of the supraspinatus muscle.

    PubMed

    Gilbert, Fabian; Böhm, Dirk; Eden, Lars; Schmalzl, Jonas; Meffert, Rainer H; Köstler, Herbert; Weng, Andreas M; Ziegler, Dirk

    2016-08-22

    The Goutallier Classification is a semi-quantitative classification system to determine the amount of fatty degeneration in rotator cuff muscles. Although initially proposed for axial computed tomography scans, it is currently applied to magnetic resonance imaging (MRI) scans. The role for its clinical use is controversial, as the reliability of the classification has been shown to be inconsistent. The purpose of this study was to compare the semi-quantitative MRI-based Goutallier Classification applied by 5 different raters to experimental MR spectroscopic quantitative fat measurement in order to determine the correlation between this classification system and the true extent of fatty degeneration shown by spectroscopy. MRI scans of 42 patients with rotator cuff tears were examined by 5 shoulder surgeons and graded according to the MRI-based Goutallier Classification proposed by Fuchs et al. Additionally, the fat/water ratio was measured with MR spectroscopy using the experimental SPLASH technique. The semi-quantitative grading according to the Goutallier Classification was statistically correlated with the quantitatively measured fat/water ratio using Spearman's rank correlation. Statistical analysis of the data revealed only fair correlation of the Goutallier Classification system and the quantitative fat/water ratio with R = 0.35 (p < 0.05). By dichotomizing the scale, the correlation was 0.72. The interobserver and intraobserver reliabilities were substantial with R = 0.62 and R = 0.74 (p < 0.01). The correlation between the semi-quantitative MRI-based Goutallier Classification system and MR spectroscopic fat measurement is weak. As an adequate estimation of fatty degeneration based on standard MRI may not be possible, quantitative methods need to be considered in order to increase diagnostic safety and thus provide patients with ideal care in regard to the amount of fatty degeneration. Spectroscopic MR measurement may increase the accuracy of the Goutallier classification and thus improve the prediction of clinical results after rotator cuff repair. However, these techniques are currently only available in an experimental setting.

  5. Retinex Preprocessing for Improved Multi-Spectral Image Classification

    NASA Technical Reports Server (NTRS)

    Thompson, B.; Rahman, Z.; Park, S.

    2000-01-01

    The goal of multi-image classification is to identify and label "similar regions" within a scene. The ability to correctly classify a remotely sensed multi-image of a scene is affected by the ability of the classification process to adequately compensate for the effects of atmospheric variations and sensor anomalies. Better classification may be obtained if the multi-image is preprocessed before classification, so as to reduce the adverse effects of image formation. In this paper, we discuss the overall impact on multi-spectral image classification when the retinex image enhancement algorithm is used to preprocess multi-spectral images. The retinex is a multi-purpose image enhancement algorithm that performs dynamic range compression, reduces the dependence on lighting conditions, and generally enhances apparent spatial resolution. The retinex has been successfully applied to the enhancement of many different types of grayscale and color images. We show in this paper that retinex preprocessing improves the spatial structure of multi-spectral images and thus provides better within-class variations than would otherwise be obtained without the preprocessing. For a series of multi-spectral images obtained with diffuse and direct lighting, we show that without retinex preprocessing the class spectral signatures vary substantially with the lighting conditions. Whereas multi-dimensional clustering without preprocessing produced one-class homogeneous regions, the classification on the preprocessed images produced multi-class non-homogeneous regions. This lack of homogeneity is explained by the interaction between different agronomic treatments applied to the regions: the preprocessed images are closer to ground truth. The principal advantage that the retinex offers is that for different lighting conditions classifications derived from the retinex preprocessed images look remarkably "similar", and thus more consistent, whereas classifications derived from the original images, without preprocessing, are much less similar.
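
    A single-scale retinex, the building block of the multiscale algorithm, subtracts the log of a Gaussian-blurred surround from the log of the image; the sketch below applies it to one synthetic band with an illumination gradient (the published multiscale retinex sums several such scales and adds color restoration):

```python
# Sketch: single-scale retinex preprocessing, log(image) - log(blurred
# surround), which compresses dynamic range and reduces illumination
# dependence (sigma and the synthetic band are illustrative).
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(band, sigma=30.0):
    band = band.astype(float) + 1.0            # avoid log(0)
    return np.log(band) - np.log(gaussian_filter(band, sigma))

rng = np.random.default_rng(7)
# Synthetic band with a strong illumination gradient.
illum = np.linspace(1, 10, 128)[:, None] * np.ones((128, 128))
band = illum * (50 + 20 * rng.random((128, 128)))
out = single_scale_retinex(band)
print(band.std() / band.mean(), out.std())     # illumination largely removed
```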

  6. An inter-comparison of similarity-based methods for organisation and classification of groundwater hydrographs

    NASA Astrophysics Data System (ADS)

    Haaf, Ezra; Barthel, Roland

    2018-04-01

    Classification and similarity-based methods, which have recently received major attention in the field of surface water hydrology, namely through the PUB (prediction in ungauged basins) initiative, have not yet been applied to groundwater systems. However, it can be hypothesised that the principle of "similar systems responding similarly to similar forcing" applies in subsurface hydrology as well. One fundamental prerequisite for testing this hypothesis, and eventually applying the principle to make "predictions for ungauged groundwater systems", is a set of efficient methods to quantify the similarity of groundwater system responses, i.e. groundwater hydrographs. In this study, a large, spatially extensive, as well as geologically and geomorphologically diverse dataset from Southern Germany and Western Austria was used to test and compare a set of 32 grouping methods, which have previously only been used individually in local-scale studies. The resulting groupings are compared to a heuristic visual classification, which serves as a baseline. A performance ranking of these classification methods is carried out, and differences in the homogeneity of grouping results are shown, whereby selected groups were related to hydrogeological indices and geological descriptors. This exploratory empirical study shows that the choice of grouping method has a large impact on the object distribution within groups, as well as on the homogeneity of patterns captured in groups. The study provides a comprehensive overview of a large number of grouping methods, which can guide researchers when attempting similarity-based groundwater hydrograph classification.
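
    One member of the compared family of methods can be sketched as follows: correlation distance between standardized hydrographs followed by hierarchical clustering (the study's 32 combinations of similarity measure and grouping algorithm are, of course, not reproduced here):

```python
# Sketch: one similarity-based grouping method for hydrographs, using
# correlation distance and average-linkage hierarchical clustering
# (synthetic hydrographs; distance metric and linkage are one choice of many).
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(8)
t = np.arange(365)
seasonal = np.sin(2 * np.pi * t / 365)
# 30 synthetic groundwater hydrographs from three lagged response types.
wells = np.array([np.roll(seasonal, lag) + 0.2 * rng.normal(size=t.size)
                  for lag in ([0] * 10 + [60] * 10 + [120] * 10)])
wells = (wells - wells.mean(1, keepdims=True)) / wells.std(1, keepdims=True)

d = pdist(wells, metric="correlation")         # 1 - Pearson r
groups = fcluster(linkage(d, method="average"), t=3, criterion="maxclust")
print(groups)
```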

  7. Examining applying high performance genetic data feature selection and classification algorithms for colon cancer diagnosis.

    PubMed

    Al-Rajab, Murad; Lu, Joan; Xu, Qiang

    2017-07-01

    This paper examines the accuracy and efficiency (time complexity) of high performance genetic data feature selection and classification algorithms for colon cancer diagnosis. The need for this research derives from the urgent and increasing need for accurate and efficient algorithms. Colon cancer is a leading cause of death worldwide, hence it is vitally important for the cancer tissues to be expertly identified and classified in a rapid and timely manner, to assure both a fast detection of the disease and to expedite the drug discovery process. In this research, a three-phase approach was proposed and implemented: Phases One and Two examined the feature selection algorithms and classification algorithms employed separately, and Phase Three examined the performance of the combination of these. It was found from Phase One that the Particle Swarm Optimization (PSO) algorithm performed best with the colon dataset as a feature selection (29 genes selected) and from Phase Two that the Support Vector Machine (SVM) algorithm outperformed other classifications, with an accuracy of almost 86%. It was also found from Phase Three that the combined use of PSO and SVM surpassed other algorithms in accuracy and performance, and was faster in terms of time analysis (94%). It is concluded that applying feature selection algorithms prior to classification algorithms results in better accuracy than when the latter are applied alone. This conclusion is important and significant to industry and society. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. On the impact of different approaches to classify age-related macular degeneration: Results from the German AugUR study.

    PubMed

    Brandl, Caroline; Zimmermann, Martina E; Günther, Felix; Barth, Teresa; Olden, Matthias; Schelter, Sabine C; Kronenberg, Florian; Loss, Julika; Küchenhoff, Helmut; Helbig, Horst; Weber, Bernhard H F; Stark, Klaus J; Heid, Iris M

    2018-06-06

    While age-related macular degeneration (AMD) poses an important personal and public health burden, comparing epidemiological studies on AMD is hampered by differing approaches to classify AMD. In our AugUR study survey, recruiting residents from in/around Regensburg, Germany, aged 70+, we analyzed the AMD status derived from color fundus images applying two different classification systems. Based on 1,040 participants with gradable fundus images for at least one eye, we show that including individuals with only one gradable eye (n = 155) underestimates AMD prevalence and we provide a correction procedure. Bias-corrected and standardized to the Bavarian population, late AMD prevalence is 7.3% (95% confidence interval = [5.4; 9.4]). We find substantially different prevalence estimates for "early/intermediate AMD" depending on the classification system: 45.3% (95%-CI = [41.8; 48.7]) applying the Clinical Classification (early/intermediate AMD) or 17.1% (95%-CI = [14.6; 19.7]) applying the Three Continent AMD Consortium Severity Scale (mild/moderate/severe early AMD). We thus provide a first effort to grade AMD in a complete study with different classification systems, a first approach for bias-correction from individuals with only one gradable eye, and the first AMD prevalence estimates from a German elderly population. Our results underscore substantial differences for early/intermediate AMD prevalence estimates between classification systems and an urgent need for harmonization.

  9. Balancing research and funding using value of information and portfolio tools for nanomaterial risk classification

    NASA Astrophysics Data System (ADS)

    Bates, Matthew E.; Keisler, Jeffrey M.; Zussblatt, Niels P.; Plourde, Kenton J.; Wender, Ben A.; Linkov, Igor

    2016-02-01

    Risk research for nanomaterials is currently prioritized by means of expert workshops and other deliberative processes. However, analytical techniques that quantify and compare alternative research investments are increasingly recommended. Here, we apply value of information and portfolio decision analysis—methods commonly applied in financial and operations management—to prioritize risk research for multiwalled carbon nanotubes and nanoparticulate silver and titanium dioxide. We modify the widely accepted CB Nanotool hazard evaluation framework, which combines nano- and bulk-material properties into a hazard score, to operate probabilistically with uncertain inputs. Literature is reviewed to develop uncertain estimates for each input parameter, and a Monte Carlo simulation is applied to assess how different research strategies can improve hazard classification. The relative cost of each research experiment is elicited from experts, which enables identification of efficient research portfolios—combinations of experiments that lead to the greatest improvement in hazard classification at the lowest cost. Nanoparticle shape, diameter, solubility and surface reactivity were most frequently identified within efficient portfolios in our results.

  10. Balancing research and funding using value of information and portfolio tools for nanomaterial risk classification.

    PubMed

    Bates, Matthew E; Keisler, Jeffrey M; Zussblatt, Niels P; Plourde, Kenton J; Wender, Ben A; Linkov, Igor

    2016-02-01

    Risk research for nanomaterials is currently prioritized by means of expert workshops and other deliberative processes. However, analytical techniques that quantify and compare alternative research investments are increasingly recommended. Here, we apply value of information and portfolio decision analysis-methods commonly applied in financial and operations management-to prioritize risk research for multiwalled carbon nanotubes and nanoparticulate silver and titanium dioxide. We modify the widely accepted CB Nanotool hazard evaluation framework, which combines nano- and bulk-material properties into a hazard score, to operate probabilistically with uncertain inputs. Literature is reviewed to develop uncertain estimates for each input parameter, and a Monte Carlo simulation is applied to assess how different research strategies can improve hazard classification. The relative cost of each research experiment is elicited from experts, which enables identification of efficient research portfolios-combinations of experiments that lead to the greatest improvement in hazard classification at the lowest cost. Nanoparticle shape, diameter, solubility and surface reactivity were most frequently identified within efficient portfolios in our results.
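
    The probabilistic hazard-banding idea can be sketched by propagating uncertain input scores through an additive hazard score via Monte Carlo sampling; the factors, distributions and band cut-offs below are toy stand-ins, not the CB Nanotool's actual scheme:

```python
# Sketch: Monte Carlo propagation of uncertain property scores through a
# simple additive hazard model, reporting the probability of each hazard
# band (all scores, distributions and cut-offs are illustrative).
import numpy as np

rng = np.random.default_rng(11)
n = 10_000

# Uncertain property scores, e.g. from literature ranges (assumed triangular).
shape      = rng.triangular(2, 6, 10, n)
diameter   = rng.triangular(0, 4, 10, n)
solubility = rng.triangular(1, 3, 8, n)
reactivity = rng.triangular(0, 5, 10, n)

hazard = shape + diameter + solubility + reactivity
bands = np.digitize(hazard, bins=[10, 20, 30])   # low/medium/high/very high
for name, p in zip(["low", "medium", "high", "very high"],
                   np.bincount(bands, minlength=4) / n):
    print(f"P({name}) = {p:.2f}")
```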

  11. 40 CFR 51.900 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... higher or lower, classifications are ranked from lowest to highest as follows: classification under... National Ambient Air Quality Standard § 51.900 Definitions. The following definitions apply for purposes of... 42 U.S.C. 7401-7671q (2003). (f) Applicable requirements means for an area the following requirements...

  12. 40 CFR 51.900 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... higher or lower, classifications are ranked from lowest to highest as follows: classification under... National Ambient Air Quality Standard § 51.900 Definitions. The following definitions apply for purposes of... 42 U.S.C. 7401-7671q (2003). (f) Applicable requirements means for an area the following requirements...

  13. Semi-supervised vibration-based classification and condition monitoring of compressors

    NASA Astrophysics Data System (ADS)

    Potočnik, Primož; Govekar, Edvard

    2017-09-01

    Semi-supervised vibration-based classification and condition monitoring of the reciprocating compressors installed in refrigeration appliances is proposed in this paper. The method addresses the problem of industrial condition monitoring where prior class definitions are often not available or difficult to obtain from local experts. The proposed method combines feature extraction, principal component analysis, and statistical analysis for the extraction of initial class representatives, and compares the capability of various classification methods, including discriminant analysis (DA), neural networks (NN), support vector machines (SVM), and extreme learning machines (ELM). The use of the method is demonstrated on a case study which was based on industrially acquired vibration measurements of reciprocating compressors during the production of refrigeration appliances. The paper presents a comparative qualitative analysis of the applied classifiers, confirming the good performance of several nonlinear classifiers. If the model parameters are properly selected, then very good classification performance can be obtained from NN trained by Bayesian regularization, SVM and ELM classifiers. The method can be effectively applied for the industrial condition monitoring of compressors.

  14. Controlling protected designation of origin of wine by Raman spectroscopy.

    PubMed

    Mandrile, Luisa; Zeppa, Giuseppe; Giovannozzi, Andrea Mario; Rossi, Andrea Mario

    2016-11-15

    In this paper, a Fourier transform Raman spectroscopy method to authenticate the provenance of wine for food traceability applications was developed. In particular, due to the specific chemical fingerprint of the Raman spectrum, it was possible to discriminate different wines produced in the Piedmont area (North West Italy) according to i) grape varieties, ii) production area and iii) ageing time. In order to create a consistent training set, more than 300 samples from tens of different producers were analyzed, and a chemometric treatment of raw spectra was applied. A discriminant analysis method was employed in the classification procedures, providing a classification capability (percentage of correct answers) of 90% for validation of grape variety and geographical provenance analysis, and a classification capability of 84% for ageing time classification. The present methodology was applied successfully to raw materials without any preliminary treatment of the sample, providing a response in a very short time. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. A Pruning Neural Network Model in Credit Classification Analysis

    PubMed Central

    Tang, Yajiao; Ji, Junkai; Dai, Hongwei; Yu, Yang; Todo, Yuki

    2018-01-01

    Nowadays, credit classification models are widely applied because they can help financial decision-makers to handle credit classification issues. Among them, artificial neural networks (ANNs) have been widely accepted as convincing methods in the credit industry. In this paper, we propose a pruning neural network (PNN) and apply it to solve the credit classification problem using the well-known Australian and Japanese credit datasets. The model is inspired by the synaptic nonlinearity of a dendritic tree in a biological neural model, and it is trained by an error back-propagation algorithm. The model is capable of realizing a neuronal pruning function by removing superfluous synapses and useless dendrites, and it forms a tidy dendritic morphology at the end of learning. Furthermore, we utilize logic circuits (LCs) to simulate the dendritic structures successfully, which allows the PNN to be implemented effectively in hardware. The statistical results of our experiments have verified that PNN obtains superior performance in comparison with other classical algorithms in terms of accuracy and computational efficiency. PMID:29606961

  16. Nuclear Power Plant Thermocouple Sensor-Fault Detection and Classification Using Deep Learning and Generalized Likelihood Ratio Test

    NASA Astrophysics Data System (ADS)

    Mandal, Shyamapada; Santhi, B.; Sridhar, S.; Vinolia, K.; Swaminathan, P.

    2017-06-01

    In this paper, an online fault detection and classification method is proposed for thermocouples used in nuclear power plants. In the proposed method, fault data are detected by the classification method, which separates fault data from normal data. Deep belief network (DBN), a technique for deep learning, is applied to classify the fault data. The DBN has a multilayer feature extraction scheme, which is highly sensitive to small variations in the data. Since the classification method alone cannot identify which sensor is faulty, a technique is proposed to identify the faulty sensor from the fault data. Finally, a composite statistical hypothesis test, namely the generalized likelihood ratio test, is applied to compute the fault pattern of the faulty sensor signal based on the magnitude of the fault. The performance of the proposed method is validated by field data obtained from thermocouple sensors of the fast breeder test reactor.
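
    For a sensor bias with known noise variance, the generalized likelihood ratio statistic over a window of N samples reduces to N·x̄²/(2σ²), maximized at the estimated bias x̄. A hedged sketch with an illustrative threshold follows:

```python
# Sketch: generalized likelihood ratio test (GLRT) for a constant bias fault
# in a sensor with known noise variance. Maximizing the likelihood ratio over
# the unknown bias b gives b_hat = mean(window) and the statistic
# N * mean^2 / (2 * sigma^2); the threshold is an illustrative assumption.
import numpy as np

def glr_mean_shift(window, sigma):
    """Return the GLR statistic and the estimated bias for a window."""
    bias_hat = window.mean()
    glr = window.size * bias_hat ** 2 / (2.0 * sigma ** 2)
    return glr, bias_hat

rng = np.random.default_rng(9)
sigma = 0.5
healthy = rng.normal(0.0, sigma, size=100)     # unbiased thermocouple
faulty = rng.normal(1.2, sigma, size=100)      # thermocouple with bias fault

threshold = 10.0                               # illustrative only
for name, w in (("healthy", healthy), ("faulty", faulty)):
    glr, bias = glr_mean_shift(w, sigma)
    print(f"{name}: GLR={glr:.1f}, bias estimate={bias:.2f}, "
          f"fault={glr > threshold}")
```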

  17. Failure-probability driven dose painting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vogelius, Ivan R.; Håkansson, Katrin; Due, Anne K.

    Purpose: To demonstrate a data-driven dose-painting strategy based on the spatial distribution of recurrences in previously treated patients. The result is a quantitative way to define a dose prescription function, optimizing the predicted local control at constant treatment intensity. A dose planning study using the optimized dose prescription in 20 patients is performed. Methods: Patients treated at our center have five tumor subvolumes delineated, from the center of the tumor (PET positive volume) and out. The spatial distribution of 48 failures in patients with complete clinical response after (chemo)radiation is used to derive a model for tumor control probability (TCP). The total TCP is fixed to the clinically observed 70% actuarial TCP at five years. Additionally, the authors match the distribution of failures between the five subvolumes to the observed distribution. The steepness of the dose–response is extracted from the literature and the authors assume 30% and 20% risk of subclinical involvement in the elective volumes. The result is a five-compartment dose response model matching the observed distribution of failures. The model is used to optimize the distribution of dose in individual patients, while keeping the treatment intensity constant and the maximum prescribed dose below 85 Gy. Results: The vast majority of failures occur centrally despite the small volumes of the central regions. Thus, optimizing the dose prescription yields higher doses to the central target volumes and lower doses to the elective volumes. The dose planning study shows that the modified prescription is clinically feasible. The optimized TCP is 89% (range: 82%–91%) as compared to the observed TCP of 70%. Conclusions: The observed distribution of locoregional failures was used to derive an objective, data-driven dose prescription function. The optimized dose is predicted to result in a substantial increase in local control without increasing the predicted risk of toxicity.

  18. Differences in Normal Tissue Response in the Esophagus Between Proton and Photon Radiation Therapy for Non-Small Cell Lung Cancer Using In Vivo Imaging Biomarkers.

    PubMed

    Niedzielski, Joshua S; Yang, Jinzhong; Mohan, Radhe; Titt, Uwe; Mirkovic, Dragan; Stingo, Francesco; Liao, Zhongxing; Gomez, Daniel R; Martel, Mary K; Briere, Tina M; Court, Laurence E

    2017-11-15

    To determine whether there exists any significant difference in normal tissue toxicity between intensity modulated radiation therapy (IMRT) and proton therapy for the treatment of non-small cell lung cancer. A total of 134 study patients (n=49 treated with proton therapy, n=85 with IMRT) treated in a randomized trial had a previously validated esophageal toxicity imaging biomarker, esophageal expansion, quantified during radiation therapy, as well as esophagitis grade (Common Terminology Criteria for Adverse Events version 3.0), on a weekly basis during treatment. Differences between the 2 modalities were statistically analyzed using the imaging biomarker metric value (Kruskal-Wallis analysis of variance), as well as the incidence and severity of esophagitis grade (χ² and Fisher exact tests, respectively). The dose-response of the imaging biomarker was also compared between modalities using esophageal equivalent uniform dose, as well as delivered dose to an isotropic esophageal subvolume. No statistically significant difference in the distribution of esophagitis grade, the incidence of grade ≥3 esophagitis (15 and 11 patients treated with IMRT and proton therapy, respectively), or the esophageal expansion imaging biomarker between cohorts (P>.05) was found. Imaging biomarker metric values had similar distributions between treatment arms, despite a slightly higher dose volume in the proton arm (P>.05). Imaging biomarker dose-response was similar between modalities for dose quantified as esophageal equivalent uniform dose and delivered esophageal subvolume dose. Regardless of treatment modality, there was high variability in imaging biomarker response, as well as esophagitis grade, for similar esophageal doses between patients. There was no significant difference in esophageal toxicity from either proton- or photon-based radiation therapy as quantified by esophagitis grade or the esophageal expansion imaging biomarker. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Clinical applications of image guided-intensity modulated radiation therapy (IG-IMRT) for conformal avoidance of normal tissue

    NASA Astrophysics Data System (ADS)

    Gutierrez, Alonso Navar

    2007-12-01

    Recent improvements in imaging technology and radiation delivery have led to the development of advanced treatment techniques in radiotherapy which have opened the door for novel therapeutic approaches to improve the efficacy of radiation cancer treatments. Among these advances is image-guided, intensity modulated radiation therapy (IG-IMRT), in which imaging is incorporated to aid in inter-/intra-fractional target localization and to ensure accurate delivery of precise and highly conformal dose distributions. In principle, clinical implementation of IG-IMRT should improve normal tissue sparing and permit effective biological dose escalation, thus widening the radiation therapeutic window and leading to increases in survival through improved local control of primary neoplastic diseases. Details of the development of three clinical applications made possible solely with IG-IMRT radiation delivery techniques are presented: (1) Laparoscopically implanted tissue expander radiotherapy (LITE-RT) has been developed to enhance conformal avoidance of normal tissue during the treatment of intra-abdominopelvic cancers. LITE-RT functions by geometrically displacing surrounding normal tissue and isolating the target volume through the interfractional inflation of a custom-shaped tissue expander throughout the course of treatment. (2) The unique delivery geometry of helical tomotherapy, a novel form of IG-IMRT, enables the delivery of a composite treatment plan in which whole brain radiotherapy (WBRT) with hippocampal avoidance, hypothesized to reduce the risk of memory function decline and improve the patient's quality of life, is combined with a simultaneously integrated boost to multiple brain metastases to improve intracranial tumor control. (3) Escalation of biological dose to targets through integrated, selective subvolume boosts has been shown to efficiently increase tumor dose without significantly increasing normal tissue dose. Helical tomotherapy was used to investigate the feasibility of delivering a simultaneously integrated subvolume boost to canine nasal tumors and was found to dramatically increase estimated 1-year tumor control probability (TCP) without increasing the dose to the eyes, so as to preserve vision, or to the brain, so as to prevent neuropathy.

  20. Resolving anthropogenic aerosol pollution types - deconvolution and exploratory classification of pollution events

    NASA Astrophysics Data System (ADS)

    Äijälä, Mikko; Heikkinen, Liine; Fröhlich, Roman; Canonaco, Francesco; Prévôt, André S. H.; Junninen, Heikki; Petäjä, Tuukka; Kulmala, Markku; Worsnop, Douglas; Ehn, Mikael

    2017-03-01

    Mass spectrometric measurements commonly yield data on hundreds of variables over thousands of points in time. Refining and synthesizing this raw data into chemical information necessitates the use of advanced, statistics-based data analytical techniques. In the field of analytical aerosol chemistry, statistical, dimensionality-reductive methods have become widespread in the last decade, yet comparable advanced chemometric techniques for data classification and identification remain marginal. Here we present an example of combining data dimensionality reduction (factorization) with exploratory classification (clustering), and show that the results can not only reproduce and corroborate earlier findings, but also complement and broaden our current perspectives on aerosol chemical classification. We find that applying positive matrix factorization to extract spectral characteristics of the organic component of air pollution plumes, together with an unsupervised clustering algorithm, k-means++, for classification, reproduces classical organic aerosol speciation schemes. Applying appropriately chosen metrics for spectral dissimilarity along with optimized data weighting, the source-specific pollution characteristics can be statistically resolved even for spectrally very similar aerosol types, such as different combustion-related anthropogenic aerosol species and atmospheric aerosols with a similar degree of oxidation. In addition to the typical oxidation-level- and source-driven aerosol classification, we were also able to classify and characterize outlier groups that would likely be disregarded in a more conventional analysis. Evaluating solution quality for the classification also provides a means to assess the performance of mass spectral similarity metrics and to optimize weighting for mass spectral variables. This facilitates algorithm-based evaluation of aerosol spectra, which may prove invaluable for future development of automatic methods for spectra identification and classification. Robust, statistics-based results and data visualizations also provide important clues to a human analyst on the existence and chemical interpretation of data structures. Applying these methods to a test dataset of aerosol mass spectrometric measurements of organic aerosol from a boreal forest site yielded five to seven recurring pollution types from various sources, including traffic, cooking, biomass burning and nearby sawmills. Additionally, three distinct, minor pollution types were discovered and identified as amine-dominated aerosols.
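
    A minimal sketch of the factorization-plus-clustering workflow under stated assumptions: scikit-learn's NMF stands in for positive matrix factorization (a related but not identical non-negative factorization), and k-means++ seeding is used for the exploratory classification. The matrix sizes and component counts are invented for illustration.

      import numpy as np
      from sklearn.decomposition import NMF
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(1)
      spectra = rng.random((2000, 300))    # time points x m/z variables (placeholder)

      # Factorize into a few spectral profiles and their contribution time series
      nmf = NMF(n_components=6, init='nndsvd', max_iter=500)
      time_series = nmf.fit_transform(spectra)   # factor contributions over time
      profiles = nmf.components_                 # factor mass spectra

      # Cluster time points by factor contributions, with k-means++ seeding
      km = KMeans(n_clusters=7, init='k-means++', n_init=10, random_state=0)
      labels = km.fit_predict(time_series)
      print(np.bincount(labels))                 # size of each pollution-type cluster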

  1. Development and application of a new comprehensive image-based classification scheme for coastal and benthic environments along the southeast Florida continental shelf

    NASA Astrophysics Data System (ADS)

    Makowski, Christopher

    The coastal (terrestrial) and benthic environments along the southeast Florida continental shelf show a unique biophysical succession of marine features from a highly urbanized, developed coastal region in the north (i.e. northern Miami-Dade County) to a protective marine sanctuary in the southeast (i.e. Florida Keys National Marine Sanctuary). However, a standard bio-geomorphological classification scheme for this area of coastal and benthic environments is lacking. The purpose of this study was to test the hypothesis that new parameters integrating geomorphological components with dominant biological covers could be developed and applied across multiple remote sensing platforms as an innovative way to identify, interpret, and classify diverse coastal and benthic environments along the southeast Florida continental shelf. An ordered, manageable hierarchical classification scheme was developed to incorporate the categories of Physiographic Realm, Morphodynamic Zone, Geoform, Landform, Dominant Surface Sediment, and Dominant Biological Cover. Imagery from six different remote sensing platforms (i.e. five multi-spectral satellite image sensors and one high-resolution aerial orthoimagery source) was acquired, delineated according to the new classification scheme, and compared to determine optimal formats for classifying the study area. Cognitive digital classification at a nominal scale of 1:6000 proved to be more accurate than autoclassification programs and was therefore used to differentiate coastal marine environments based on spectral reflectance characteristics, such as color, tone, saturation, pattern, and texture of the seafloor topography. In addition, attribute tables were created in conjunction with interpretations to quantify and compare the spatial relationships between classificatory units. IKONOS-2 satellite imagery was determined to be the optimal platform for applying the hierarchical classification scheme. However, each remote sensing platform had beneficial properties depending on research goals, logistical restrictions, and financial support. This study concluded that a new comprehensive hierarchical classification scheme for identifying coastal marine environments along the southeast Florida continental shelf could be achieved by integrating geomorphological features with biological covers. This newly developed scheme, which can be applied across multiple remote sensing platforms with GIS software, establishes an innovative classification protocol for use in future research studies.

  2. Property Specification Patterns for intelligence building software

    NASA Astrophysics Data System (ADS)

    Chun, Seungsu

    2018-03-01

    In this paper, we present a single framework for intelligent building software based on research into property specification patterns for modal mu (μ) logic. Dwyer's property specification pattern classification is broken down into state (S) and action (A) patterns, which are further subdivided into strong (A) and weak (E) variants. On the basis of this hierarchical pattern classification, the mu (μ) logic analysis of the property specification patterns was applied to the pattern classification of examples used in an actual model checker. As a result, the classification is not only more accurate than existing classification systems, but the specified properties are also easier to create and understand.

  3. Using machine learning classifiers to assist healthcare-related decisions: classification of electronic patient records.

    PubMed

    Pollettini, Juliana T; Panico, Sylvia R G; Daneluzzi, Julio C; Tinós, Renato; Baranauskas, José A; Macedo, Alessandra A

    2012-12-01

    Surveillance Levels (SLs) are categories for medical patients (used in Brazil) that represent different types of medical recommendations. SLs are defined according to risk factors and the medical and developmental history of patients. Each SL is associated with specific educational and clinical measures. The objective of the present paper was to verify computer-aided, automatic assignment of SLs. To that end, a computer-aided approach for automatic recommendation of SLs is proposed. The approach is based on the classification of information from patient electronic records. For this purpose, a software architecture composed of three layers was developed. The architecture includes a classification layer comprising a linguistic module and machine learning classification modules. The classification layer allows for the use of different classification methods, including the use of preprocessed, normalized language data drawn from the linguistic module. We report the verification and validation of the software architecture in a Brazilian pediatric healthcare institution. The results indicate that selection of attributes can have a great effect on the performance of the system. Nonetheless, our automatic recommendation of surveillance level can still benefit from improvements in processing procedures when the linguistic module is applied prior to classification. Results from our efforts can be applied to different types of medical systems. The results of systems supported by the framework presented in this paper may be used by healthcare and governmental institutions to improve healthcare services in terms of establishing preventive measures and alerting authorities about the possibility of an epidemic.

  4. Real-Time Subject-Independent Pattern Classification of Overt and Covert Movements from fNIRS Signals

    PubMed Central

    Rana, Mohit; Prasad, Vinod A.; Guan, Cuntai; Birbaumer, Niels; Sitaram, Ranganatha

    2016-01-01

    Recently, studies have reported the use of Near Infrared Spectroscopy (NIRS) for developing Brain–Computer Interface (BCI) by applying online pattern classification of brain states from subject-specific fNIRS signals. The purpose of the present study was to develop and test a real-time method for subject-specific and subject-independent classification of multi-channel fNIRS signals using support-vector machines (SVM), so as to determine its feasibility as an online neurofeedback system. Towards this goal, we used left versus right hand movement execution and movement imagery as study paradigms in a series of experiments. In the first two experiments, activations in the motor cortex during movement execution and movement imagery were used to develop subject-dependent models that obtained high classification accuracies thereby indicating the robustness of our classification method. In the third experiment, a generalized classifier-model was developed from the first two experimental data, which was then applied for subject-independent neurofeedback training. Application of this method in new participants showed mean classification accuracy of 63% for movement imagery tasks and 80% for movement execution tasks. These results, and their corresponding offline analysis reported in this study demonstrate that SVM based real-time subject-independent classification of fNIRS signals is feasible. This method has important applications in the field of hemodynamic BCIs, and neuro-rehabilitation where patients can be trained to learn spatio-temporal patterns of healthy brain activity. PMID:27467528

  5. Classification of burn wounds using support vector machines

    NASA Astrophysics Data System (ADS)

    Acha, Begona; Serrano, Carmen; Palencia, Sergio; Murillo, Juan Jose

    2004-05-01

    The purpose of this work is to improve a previous method developed by the authors for the classification of burn wounds into their depths. The inputs of the system are color and texture information, as these are the characteristics observed by physicians in order to give a diagnosis. Our previous work consisted of segmenting the burn wound from the rest of the image and classifying the burn into its depth. In this paper we focus on the classification problem only. We previously proposed the use of a Fuzzy-ARTMAP neural network (NN); however, we may take advantage of newer, powerful classification tools such as Support Vector Machines (SVM). We apply the five-fold cross-validation scheme to divide the database into training and validating sets. Then, we apply a feature selection method for each classifier, which gives us the set of features that yields the smallest classification error for each classifier. Features used to classify are first-order statistical parameters extracted from the L*, u* and v* color components of the image. The feature selection algorithms used are the Sequential Forward Selection (SFS) and the Sequential Backward Selection (SBS) methods. As the data of the problem faced here are not linearly separable, the SVM was trained using several different kernels. The validating process shows that the SVM method, when using a Gaussian kernel of variance 1, outperforms the classification results obtained with the rest of the classifiers, yielding an error classification rate of 0.7%, whereas the Fuzzy-ARTMAP NN attained 1.6%.
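
    The classification stage described above can be sketched as follows, with placeholder data standing in for the first-order color statistics: sequential forward selection wrapped around an SVM with a Gaussian (RBF) kernel, evaluated by five-fold cross-validation. A Gaussian kernel of variance 1 corresponds to gamma = 1/(2σ²) = 0.5 in scikit-learn's parameterization.

      from sklearn.datasets import make_classification
      from sklearn.svm import SVC
      from sklearn.feature_selection import SequentialFeatureSelector
      from sklearn.model_selection import cross_val_score

      # Placeholder stand-in for first-order color statistics and burn-depth labels
      X, y = make_classification(n_samples=250, n_features=12, n_informative=6,
                                 n_classes=3, random_state=0)

      # Gaussian kernel of variance 1 -> gamma = 1 / (2 * sigma^2) = 0.5
      svm = SVC(kernel='rbf', gamma=0.5)

      # Sequential forward selection (SFS); SBS would use direction='backward'
      sfs = SequentialFeatureSelector(svm, n_features_to_select=5,
                                      direction='forward', cv=5)
      X_sel = sfs.fit_transform(X, y)

      print(cross_val_score(svm, X_sel, y, cv=5).mean())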

  6. A new feature extraction method for signal classification applied to cord dorsum potentials detection

    PubMed Central

    Vidaurre, D.; Rodríguez, E. E.; Bielza, C.; Larrañaga, P.; Rudomin, P.

    2012-01-01

    In the spinal cord of the anesthetized cat, spontaneous cord dorsum potentials (CDPs) appear synchronously along the lumbo-sacral segments. These CDPs have different shapes and magnitudes. Previous work has indicated that some CDPs appear to be specially associated with the activation of spinal pathways that lead to primary afferent depolarization and presynaptic inhibition. Visual detection and classification of these CDPs provides relevant information on the functional organization of the neural networks involved in the control of sensory information and allows the characterization of the changes produced by acute nerve and spinal lesions. We now present a novel feature extraction approach for signal classification, applied to CDP detection. The method is based on an intuitive procedure. We first remove by convolution the noise from the CDPs recorded in each given spinal segment. Then, we assign a coefficient for each main local maximum of the signal using its amplitude and distance to the most important maximum of the signal. These coefficients will be the input for the subsequent classification algorithm. In particular, we employ gradient boosting classification trees. This combination of approaches allows a faster and more accurate discrimination of CDPs than is obtained by other methods. PMID:22929924

  7. A new feature extraction method for signal classification applied to cord dorsum potential detection.

    PubMed

    Vidaurre, D; Rodríguez, E E; Bielza, C; Larrañaga, P; Rudomin, P

    2012-10-01

    In the spinal cord of the anesthetized cat, spontaneous cord dorsum potentials (CDPs) appear synchronously along the lumbo-sacral segments. These CDPs have different shapes and magnitudes. Previous work has indicated that some CDPs appear to be specially associated with the activation of spinal pathways that lead to primary afferent depolarization and presynaptic inhibition. Visual detection and classification of these CDPs provides relevant information on the functional organization of the neural networks involved in the control of sensory information and allows the characterization of the changes produced by acute nerve and spinal lesions. We now present a novel feature extraction approach for signal classification, applied to CDP detection. The method is based on an intuitive procedure. We first remove by convolution the noise from the CDPs recorded in each given spinal segment. Then, we assign a coefficient for each main local maximum of the signal using its amplitude and distance to the most important maximum of the signal. These coefficients will be the input for the subsequent classification algorithm. In particular, we employ gradient boosting classification trees. This combination of approaches allows a faster and more accurate discrimination of CDPs than is obtained by other methods.
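
    A hedged sketch of the pipeline described in the two records above: denoise each recording by convolution, score each local maximum by its amplitude and distance to the dominant peak, and train gradient boosting classification trees on the resulting coefficients. The window size and the exact scoring rule below are illustrative assumptions, not the authors' formulas.

      import numpy as np
      from scipy.signal import argrelextrema
      from sklearn.ensemble import GradientBoostingClassifier

      def cdp_features(signal, n_peaks=5, win=11):
          kernel = np.ones(win) / win
          smooth = np.convolve(signal, kernel, mode='same')  # denoise by convolution
          maxima = argrelextrema(smooth, np.greater)[0]
          if maxima.size == 0:
              return np.zeros(n_peaks)
          main = maxima[np.argmax(smooth[maxima])]           # most important maximum
          # Coefficient per local maximum: amplitude scaled by distance to main peak
          coefs = smooth[maxima] / (1.0 + np.abs(maxima - main))
          coefs = np.sort(coefs)[::-1][:n_peaks]
          return np.pad(coefs, (0, n_peaks - coefs.size))

      rng = np.random.default_rng(3)
      signals = rng.normal(size=(200, 500))                  # placeholder recordings
      X = np.array([cdp_features(s) for s in signals])
      y = rng.integers(0, 2, 200)                            # placeholder CDP labels

      clf = GradientBoostingClassifier().fit(X, y)           # boosted classification trees
      print(clf.score(X, y))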

  8. Classification of neocortical interneurons using affinity propagation.

    PubMed

    Santana, Roberto; McGarry, Laura M; Bielza, Concha; Larrañaga, Pedro; Yuste, Rafael

    2013-01-01

    In spite of over a century of research on cortical circuits, it is still unknown how many classes of cortical neurons exist. In fact, neuronal classification is a difficult problem because it is unclear how to designate a neuronal cell class and which characteristics best define one. Recently, unsupervised classifications using cluster analysis based on morphological, physiological, or molecular characteristics have provided quantitative and unbiased identification of distinct neuronal subtypes when applied to selected datasets. However, better and more robust classification methods are needed for increasingly complex and larger datasets. Here, we explored the use of affinity propagation, a recently developed unsupervised classification algorithm imported from machine learning, which gives a representative example, or exemplar, for each cluster. As a case study, we applied affinity propagation to a test dataset of 337 interneurons belonging to four subtypes, previously identified based on morphological and physiological characteristics. We found that affinity propagation correctly classified most of the neurons in a blind, non-supervised manner. Affinity propagation outperformed Ward's method, a current standard clustering approach, in classifying the neurons into 4 subtypes. Affinity propagation could therefore be used in future studies to validly classify neurons, as a first step to help reverse engineer neural circuits.
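
    For illustration, the sketch below applies scikit-learn's affinity propagation (which returns one exemplar per cluster) and Ward's agglomerative clustering to synthetic feature vectors; the feature dimensions and cluster structure are assumptions, not the interneuron dataset.

      import numpy as np
      from sklearn.datasets import make_blobs
      from sklearn.cluster import AffinityPropagation, AgglomerativeClustering

      # Synthetic stand-in for 337 interneurons with 4 underlying subtypes
      X, _ = make_blobs(n_samples=337, centers=4, n_features=10, random_state=0)

      ap = AffinityPropagation(random_state=0).fit(X)
      exemplars = ap.cluster_centers_indices_     # one representative cell per cluster

      ward = AgglomerativeClustering(n_clusters=4, linkage='ward').fit(X)
      print(len(exemplars), np.bincount(ward.labels_))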

  9. Fast and effective characterization of 3D region of interest in medical image data

    NASA Astrophysics Data System (ADS)

    Kontos, Despina; Megalooikonomou, Vasileios

    2004-05-01

    We propose a framework for detecting, characterizing and classifying spatial Regions of Interest (ROIs) in medical images, such as tumors and lesions in MRI or activation regions in fMRI. A necessary step prior to classification is efficient extraction of discriminative features. For this purpose, we apply a characterization technique especially designed for spatial ROIs. The main idea of this technique is to extract a k-dimensional feature vector using concentric spheres in 3D (or circles in 2D) radiating out of the ROI's center of mass. These vectors form characterization signatures that can be used to represent the initial ROIs. We focus on classifying fMRI ROIs obtained from a study that explores neuroanatomical correlates of semantic processing in Alzheimer's disease (AD). We detect a ROI highly associated with AD and apply the feature extraction technique with different experimental settings. We seek to distinguish control from patient samples. We study how classification can be performed using the extracted signatures as well as how different experimental parameters affect classification accuracy. The obtained classification accuracy ranged from 82% to 87% (based on the selected ROI) suggesting that the proposed classification framework can be potentially useful in supporting medical decision-making.
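
    A minimal sketch of the concentric-sphere signature idea, assuming the ROI is supplied as a 3D binary mask; the number of spheres and the per-sphere statistic (fraction of ROI voxels enclosed) are illustrative choices rather than the paper's exact definition.

      import numpy as np

      def roi_signature(mask, k=10):
          coords = np.argwhere(mask)
          center = coords.mean(axis=0)                    # ROI center of mass
          dists = np.linalg.norm(coords - center, axis=1)
          radii = np.linspace(0, dists.max(), k + 1)[1:]  # k concentric spheres
          # Feature j: fraction of ROI voxels inside the j-th sphere
          return np.array([(dists <= r).mean() for r in radii])

      mask = np.zeros((32, 32, 32), dtype=bool)
      mask[12:20, 12:20, 12:20] = True                    # placeholder ROI
      print(roi_signature(mask))                          # k-dimensional signature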

  10. Developing collaborative classifiers using an expert-based model

    USGS Publications Warehouse

    Mountrakis, G.; Watts, R.; Luo, L.; Wang, Jingyuan

    2009-01-01

    This paper presents a hierarchical, multi-stage adaptive strategy for image classification. We iteratively apply various classification methods (e.g., decision trees, neural networks), identify regions of parametric and geographic space where accuracy is low, and in these regions, test and apply alternate methods, repeating the process until the entire image is classified. Currently, classifiers are evaluated through human input using an expert-based system; therefore, this paper acts as the proof of concept for collaborative classifiers. Because we decompose the problem into smaller, more manageable sub-tasks, our classification exhibits increased flexibility compared to existing methods since classification methods are tailored to the idiosyncrasies of specific regions. A major benefit of our approach is its scalability and collaborative support since selected low-accuracy classifiers can be easily replaced with others without affecting classification accuracy in high accuracy areas. At each stage, we develop spatially explicit accuracy metrics that provide straightforward assessment of results by non-experts and point to areas that need algorithmic improvement or ancillary data. Our approach is demonstrated in the task of detecting impervious surface areas, an important indicator for human-induced alterations to the environment, using a 2001 Landsat scene from Las Vegas, Nevada. © 2009 American Society for Photogrammetry and Remote Sensing.

  11. HClass: Automatic classification tool for health pathologies using artificial intelligence techniques.

    PubMed

    Garcia-Chimeno, Yolanda; Garcia-Zapirain, Begonya

    2015-01-01

    The classification of subjects' pathologies enables a rigorousness to be applied to the treatment of certain pathologies, as doctors on occasion juggle so many variables that they can end up confusing some illnesses with others. Thanks to machine learning techniques applied to a health-record database, such classifications can be made automatically using our algorithm. hClass performs non-linear classification of either a supervised, non-supervised or semi-supervised type. The machine is configured using other techniques such as validation of the set to be classified (cross-validation), reduction in features (PCA) and committees for assessing the various classifiers. The tool is easy to use; the sample matrix and features that one wishes to classify, the number of iterations and the subjects who are going to be used to train the machine all need to be introduced as inputs. As a result, the success rate is shown either via a classifier or via a committee if one has been formed. A 90% success rate is obtained with the ADABoost classifier and 89.7% in the case of a committee (comprising three classifiers) when PCA is applied. This tool can be expanded to allow the user to fully characterise the classifiers by adjusting them to each classification use.

  12. Master standard data quantity food production code. Macro elements for synthesizing production labor time.

    PubMed

    Matthews, M E; Waldvogel, C F; Mahaffey, M J; Zemel, P C

    1978-06-01

    Preparation procedures of standardized quantity formulas were analyzed for similarities and differences in production activities, and three entrée classifications were developed, based on these activities. Two formulas from each classification were selected, preparation procedures were divided into elements of production, and the MSD Quantity Food Production Code was applied. Macro elements not included in the existing Code were simulated, coded, assigned associated Time Measurement Units, and added to the MSD Quantity Food Production Code. Repeated occurrence of similar elements within production methods indicated that macro elements could be synthesized for use within one or more entrée classifications. Basic elements were grouped, simulated, and macro elements were derived. Macro elements were applied in the simulated production of 100 portions of each entrée formula. Total production time for each formula and average production time for each entrée classification were calculated. Application of macro elements indicated that this method of predetermining production time was feasible and could be adapted by quantity foodservice managers as a decision technique used to evaluate menu mix, production personnel schedules, and allocation of equipment usage. These macro elements could serve as a basis for further development and refinement of other macro elements which could be applied to a variety of menu item formulas.

  13. A Novel Approach to ECG Classification Based upon Two-Layered HMMs in Body Sensor Networks

    PubMed Central

    Liang, Wei; Zhang, Yinlong; Tan, Jindong; Li, Yang

    2014-01-01

    This paper presents a novel approach to ECG signal filtering and classification. Unlike traditional techniques, which aim at collecting and processing ECG signals with the patient lying still in a hospital bed, our proposed algorithm is intentionally designed for monitoring and classifying the patient's ECG signals in the free-living environment. The patients are equipped with wearable ambulatory devices the whole day, which facilitates real-time heart attack detection. In ECG preprocessing, an integral-coefficient-band-stop (ICBS) filter is applied, which omits time-consuming floating-point computations. In addition, two-layered Hidden Markov Models (HMMs) are applied to achieve ECG feature extraction and classification. The periodic ECG waveforms are segmented into ISO intervals, P subwave, QRS complex and T subwave respectively in the first HMM layer, where an expert-annotation-assisted Baum-Welch algorithm is utilized in HMM modeling. Then the corresponding interval features are selected and applied to categorize the ECG as normal or abnormal (PVC, APC) in the second HMM layer. For verifying the effectiveness of our algorithm on abnormal signal detection, we have developed an ECG body sensor network (BSN) platform, whereby real-time ECG signals are collected, transmitted and displayed, and the corresponding classification outcomes are deduced and shown on the BSN screen. PMID:24681668

  14. Activity recognition in planetary navigation field tests using classification algorithms applied to accelerometer data.

    PubMed

    Song, Wen; Ade, Carl; Broxterman, Ryan; Barstow, Thomas; Nelson, Thomas; Warren, Steve

    2012-01-01

    Accelerometer data provide useful information about subject activity in many different application scenarios. For this study, single-accelerometer data were acquired from subjects participating in field tests that mimic tasks that astronauts might encounter in reduced gravity environments. The primary goal of this effort was to apply classification algorithms that could identify these tasks based on features present in their corresponding accelerometer data, where the end goal is to establish methods to unobtrusively gauge subject well-being based on sensors that reside in their local environment. In this initial analysis, six different activities that involve leg movement are classified. The k-Nearest Neighbors (kNN) algorithm was found to be the most effective, with an overall classification success rate of 90.8%.
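
    A sketch of one plausible realization of the approach above: simple per-window statistics over accelerometer windows feeding a k-nearest neighbors classifier, evaluated with 5-fold cross-validation. The window length and feature set are assumptions, not the study's exact features.

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.model_selection import cross_val_score

      def window_features(w):
          # Simple per-window statistics over the acceleration magnitude
          return [w.mean(), w.std(), np.abs(np.diff(w)).mean(), (w ** 2).mean()]

      rng = np.random.default_rng(5)
      windows = rng.normal(size=(300, 128))      # placeholder 128-sample windows
      X = np.array([window_features(w) for w in windows])
      y = rng.integers(0, 6, 300)                # six leg-movement activities (placeholder)

      knn = KNeighborsClassifier(n_neighbors=5)  # kNN, as in the study
      print(cross_val_score(knn, X, y, cv=5).mean())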

  15. An Ecological Framework of the Human Virome Provides Classification of Current Knowledge and Identifies Areas of Forthcoming Discovery

    PubMed Central

    Parker, Michael T.

    2016-01-01

    Recent advances in sequencing technologies have opened the door for the classification of the human virome. While taxonomic classification can be applied to the viruses identified in such studies, this gives no information as to the type of interaction the virus has with the host. As follow-up studies are performed to address these questions, the description of these virus-host interactions would be greatly enriched by applying a standard set of definitions that typify them. This paper describes a framework with which all members of the human virome can be classified based on principles of ecology. The scaffold not only enables categorization of the human virome, but can also inform research aimed at identifying novel virus-host interactions. PMID:27698618

  16. Meta-learning framework applied in bioinformatics inference system design.

    PubMed

    Arredondo, Tomás; Ormazábal, Wladimir

    2015-01-01

    This paper describes a meta-learner inference system development framework which is applied and tested in the implementation of bioinformatic inference systems. These inference systems are used for the systematic classification of the best candidates for inclusion in bacterial metabolic pathway maps. This meta-learner-based approach utilises a workflow in which the user provides feedback on final classification decisions, which are stored in conjunction with analysed genetic sequences for periodic inference system training. The inference systems were trained and tested with three different data sets related to the bacterial degradation of aromatic compounds. The analysis of the meta-learner-based framework involved contrasting several optimisation methods with various parameters. The obtained inference systems were also contrasted with other standard classification methods, and accurate prediction capabilities were observed.

  17. A Critical Review of Mode of Action (MOA) Assignment Classifications for Ecotoxicology

    EPA Science Inventory

    There are various structure-based classification schemes to categorize chemicals based on mode of action (MOA) which have been applied for both eco and human health toxicology. With increasing calls to assess thousands of chemicals, some of which have little available informatio...

  18. 76 FR 54419 - International Anti-Fouling System Certificate

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-01

    ... society may issue on behalf of the Coast Guard. This action is being taken in response to recently enacted..., 2001. This proposed rule would enable recognized classification societies to apply to the Coast Guard... classification societies to issue international certificates to vessels. The United States currently recognizes...

  19. 75 FR 707 - Classified National Security Information

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-05

    ... classified at one of the following three levels: (1) ``Top Secret'' shall be applied to information, the... exercise this authority. (2) ``Top Secret'' original classification authority may be delegated only by the... official has been delegated ``Top Secret'' original classification authority by the agency head. (4) Each...

  20. AgRISTARS: Supporting research. Classification of corn: Badhwar profile similarity technique. [us corn belt

    NASA Technical Reports Server (NTRS)

    Austin, W. W. (Principal Investigator)

    1981-01-01

    The same software programs used to classify spring wheat are applied to the classification of corn in 26 segments in the corn belt. Numerical results of the acreage estimation are given. Potential problem areas defined in an earlier application are examined.

  1. NIR technique in the classification of cotton leaf grade

    USDA-ARS?s Scientific Manuscript database

    Near infrared (NIR) spectroscopy, a useful technique due to the speed, ease of use, and adaptability to on-line or off-line implementation, has been applied to perform the qualitative classification and quantitative prediction of cotton quality characteristics, including trash index. One term to as...

  2. Classification of skin cancer images using local binary pattern and SVM classifier

    NASA Astrophysics Data System (ADS)

    Adjed, Faouzi; Faye, Ibrahima; Ababsa, Fakhreddine; Gardezi, Syed Jamal; Dass, Sarat Chandra

    2016-11-01

    In this paper, a classification method for melanoma and non-melanoma skin cancer images has been presented using the local binary patterns (LBP). The LBP computes the local texture information from the skin cancer images, which is later used to compute some statistical features that have capability to discriminate the melanoma and non-melanoma skin tissues. Support vector machine (SVM) is applied on the feature matrix for classification into two skin image classes (malignant and benign). The method achieves good classification accuracy of 76.1% with sensitivity of 75.6% and specificity of 76.7%.
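
    The texture step can be sketched with scikit-image's uniform LBP and a normalized histogram per image, followed by an RBF SVM; the LBP parameters (P=8, R=1) and the data below are illustrative placeholders, not the paper's settings.

      import numpy as np
      from skimage.feature import local_binary_pattern
      from sklearn.svm import SVC

      def lbp_histogram(gray, P=8, R=1):
          # Uniform LBP codes take values in [0, P+1]
          lbp = local_binary_pattern(gray, P, R, method='uniform')
          hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
          return hist

      rng = np.random.default_rng(6)
      images = rng.random((100, 64, 64))            # placeholder grayscale lesion patches
      X = np.array([lbp_histogram(img) for img in images])
      y = rng.integers(0, 2, 100)                   # benign vs malignant (placeholder)

      clf = SVC(kernel='rbf').fit(X, y)
      print(clf.score(X, y))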

  3. New workflow for classification of genetic variants' pathogenicity applied to hereditary recurrent fevers by the International Study Group for Systemic Autoinflammatory Diseases (INSAID).

    PubMed

    Van Gijn, Marielle E; Ceccherini, Isabella; Shinar, Yael; Carbo, Ellen C; Slofstra, Mariska; Arostegui, Juan I; Sarrabay, Guillaume; Rowczenio, Dorota; Omoyinmi, Ebun; Balci-Peynircioglu, Banu; Hoffman, Hal M; Milhavet, Florian; Swertz, Morris A; Touitou, Isabelle

    2018-03-29

    Hereditary recurrent fevers (HRFs) are rare inflammatory diseases sharing similar clinical symptoms and effectively treated with anti-inflammatory biological drugs. Accurate diagnosis of HRF relies heavily on genetic testing. This study aimed to obtain an experts' consensus on the clinical significance of gene variants in four well-known HRF genes: MEFV, TNFRSF1A, NLRP3 and MVK. We configured a MOLGENIS web platform to share and analyse pathogenicity classifications of the variants and to manage a consensus-based classification process. Four experts in HRF genetics submitted independent classifications of 858 variants. Classifications were driven to consensus by recruiting four more expert opinions and by targeting discordant classifications in five iterative rounds. Consensus classification was reached for 804/858 variants (94%). None of the unsolved variants (6%) remained with opposite classifications (eg, pathogenic vs benign). New mutational hotspots were found in all genes. We noted a lower pathogenic variant load and a higher fraction of variants with unknown or unsolved clinical significance in the MEFV gene. Applying a consensus-driven process on the pathogenicity assessment of experts yielded rapid classification of almost all variants of four HRF genes. The high-throughput database will profoundly assist clinicians and geneticists in the diagnosis of HRFs. The configured MOLGENIS platform and consensus evolution protocol are usable for assembly of other variant pathogenicity databases. The MOLGENIS software is available for reuse at http://github.com/molgenis/molgenis; the specific HRF configuration is available at http://molgenis.org/said/. The HRF pathogenicity classifications will be published on the INFEVERS database at https://fmf.igh.cnrs.fr/ISSAID/infevers/. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
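
    A plain-Python sketch of one possible consensus round in the spirit of the process above: variants whose expert votes reach an (assumed) agreement threshold are fixed, and discordant variants are passed to a resolution step that recruits further opinions. The threshold, vote labels and resolution rule are invented placeholders, not INSAID's actual protocol.

      from collections import Counter

      def consensus_round(classifications, resolve):
          consensus, unresolved = {}, {}
          for variant, votes in classifications.items():
              label, n = Counter(votes).most_common(1)[0]
              if n / len(votes) >= 0.75:        # assumed consensus threshold
                  consensus[variant] = label
              else:
                  # Recruit additional expert opinions for the next round
                  unresolved[variant] = resolve(variant, votes)
          return consensus, unresolved

      votes = {'MEFV:variantA': ['pathogenic'] * 4,
               'NLRP3:variantB': ['VUS', 'likely benign', 'VUS', 'likely pathogenic']}
      done, pending = consensus_round(votes, lambda v, vs: vs + ['VUS'])
      print(done, list(pending))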

  4. Quantitation of flavonoid constituents in citrus fruits.

    PubMed

    Kawaii, S; Tomono, Y; Katase, E; Ogawa, K; Yano, M

    1999-09-01

    Twenty-four flavonoids have been determined in 66 Citrus species and near-citrus relatives, grown in the same field and year, by means of reversed phase high-performance liquid chromatography analysis. Statistical methods have been applied to find relations among the species. The F ratios of 21 flavonoids obtained by applying ANOVA analysis are significant, indicating that a classification of the species using these variables is reasonable to pursue. Principal component analysis revealed that the distributions of Citrus species belonging to different classes were largely in accordance with Tanaka's classification system.
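
    Sketch of the statistics described, under stated assumptions: a one-way ANOVA F ratio per flavonoid across species groups, then PCA on the full flavonoid matrix to view the species distribution; the data and group labels below are placeholders, not the measured concentrations.

      import numpy as np
      from scipy.stats import f_oneway
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(7)
      n_species, n_flav = 66, 24
      X = rng.random((n_species, n_flav))       # flavonoid concentrations (placeholder)
      groups = rng.integers(0, 4, n_species)    # assumed class memberships

      # F ratio for one flavonoid: compare its values across the species groups
      f, p = f_oneway(*(X[groups == g, 0] for g in np.unique(groups)))

      # Two principal components to visualize the species distribution
      scores = PCA(n_components=2).fit_transform(X)
      print(f, p, scores.shape)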

  5. The use of the modified Cholesky decomposition in divergence and classification calculations

    NASA Technical Reports Server (NTRS)

    Van Rooy, D. L.; Lynn, M. S.; Snyder, C. H.

    1973-01-01

    The use of the modified Cholesky decomposition technique is analyzed as applied to the feature selection and classification algorithms used in the analysis of remote sensing data (e.g. as in LARSYS). This technique is approximately 30% faster in classification and a factor of 2-3 faster in divergence, as compared with LARSYS. Also, numerical stability and accuracy are slightly improved. Other methods necessary to deal with numerical stability problems are briefly discussed.

  6. The use of the modified Cholesky decomposition in divergence and classification calculations

    NASA Technical Reports Server (NTRS)

    Van Rooy, D. L.; Lynn, M. S.; Snyder, C. H.

    1973-01-01

    This report analyzes the use of the modified Cholesky decomposition technique as applied to the feature selection and classification algorithms used in the analysis of remote sensing data (e.g., as in LARSYS). This technique is approximately 30% faster in classification and a factor of 2-3 faster in divergence, as compared with LARSYS. Also numerical stability and accuracy are slightly improved. Other methods necessary to deal with numerical stability problems are briefly discussed.
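
    A worked sketch of why a Cholesky factorization speeds up these calculations: with Sigma = L L^T, the Gaussian log-likelihood's quadratic form and log|Sigma| both come from one factorization plus a triangular solve, avoiding an explicit matrix inverse. This is a generic illustration, not the LARSYS implementation or the specific "modified" variant.

      import numpy as np
      from scipy.linalg import cho_factor, cho_solve

      def gaussian_log_likelihood(x, mean, cov):
          c, low = cho_factor(cov)                   # Cholesky factor of Sigma
          diff = x - mean
          maha = diff @ cho_solve((c, low), diff)    # (x-m)^T Sigma^{-1} (x-m), no inverse
          logdet = 2.0 * np.sum(np.log(np.diag(c)))  # log|Sigma| from the factor diagonal
          return -0.5 * (maha + logdet + len(x) * np.log(2 * np.pi))

      rng = np.random.default_rng(8)
      cov = np.cov(rng.normal(size=(200, 4)), rowvar=False)  # placeholder class covariance
      print(gaussian_log_likelihood(rng.normal(size=4), np.zeros(4), cov))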

  7. Handling Imbalanced Data Sets in Multistage Classification

    NASA Astrophysics Data System (ADS)

    López, M.

    Multistage classification is a logical approach, based on a divide-and-conquer solution, for dealing with problems with a high number of classes. The classification problem is divided into several sequential steps, each one associated to a single classifier that works with subgroups of the original classes. In each level, the current set of classes is split into smaller subgroups of classes until they (the subgroups) are composed of only one class. The resulting chain of classifiers can be represented as a tree, which (1) simplifies the classification process by using fewer categories in each classifier and (2) makes it possible to combine several algorithms or use different attributes in each stage. Most of the classification algorithms can be biased in the sense of selecting the most populated class in overlapping areas of the input space. This can degrade a multistage classifier performance if the training set sample frequencies do not reflect the real prevalence in the population. Several techniques such as applying prior probabilities, assigning weights to the classes, or replicating instances have been developed to overcome this handicap. Most of them are designed for two-class (accept-reject) problems. In this article, we evaluate several of these techniques as applied to multistage classification and analyze how they can be useful for astronomy. We compare the results obtained by classifying a data set based on Hipparcos with and without these methods.
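
    One of the rebalancing techniques named above, assigning weights to the classes, can be sketched for a single stage of such a tree of classifiers; the imbalance ratio and classifier choice below are illustrative.

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(9)
      X = rng.normal(size=(500, 8))
      y = np.where(rng.random(500) < 0.9, 0, 1)   # 9:1 imbalanced two-class stage

      # 'balanced' reweights classes inversely to their training frequencies,
      # counteracting the bias toward the most populated class in overlap regions
      stage = DecisionTreeClassifier(class_weight='balanced').fit(X, y)
      print(stage.score(X, y))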

  8. An automatic taxonomy of galaxy morphology using unsupervised machine learning

    NASA Astrophysics Data System (ADS)

    Hocking, Alex; Geach, James E.; Sun, Yi; Davey, Neil

    2018-01-01

    We present an unsupervised machine learning technique that automatically segments and labels galaxies in astronomical imaging surveys using only pixel data. Distinct from previous unsupervised machine learning approaches used in astronomy we use no pre-selection or pre-filtering of target galaxy type to identify galaxies that are similar. We demonstrate the technique on the Hubble Space Telescope (HST) Frontier Fields. By training the algorithm using galaxies from one field (Abell 2744) and applying the result to another (MACS 0416.1-2403), we show how the algorithm can cleanly separate early and late type galaxies without any form of pre-directed training for what an 'early' or 'late' type galaxy is. We then apply the technique to the HST Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS) fields, creating a catalogue of approximately 60 000 classifications. We show how the automatic classification groups galaxies of similar morphological (and photometric) type and make the classifications public via a catalogue, a visual catalogue and galaxy similarity search. We compare the CANDELS machine-based classifications to human-classifications from the Galaxy Zoo: CANDELS project. Although there is not a direct mapping between Galaxy Zoo and our hierarchical labelling, we demonstrate a good level of concordance between human and machine classifications. Finally, we show how the technique can be used to identify rarer objects and present lensed galaxy candidates from the CANDELS imaging.

  9. Using RNA Sequencing to Classify Organisms into Three Primary Kingdoms.

    ERIC Educational Resources Information Center

    Evans, Robert H.

    1983-01-01

    Using the biochemical record to classify archaebacteria, eukaryotes, and eubacteria involves abstractions difficult for the concrete learner. Therefore, a method is provided in which students discover some basic tenets of biochemical classification and apply them in a "hands-on" classification problem. The method involves use of RNA…

  10. Applying Descriptive Statistics to Teaching the Regional Classification of Climate.

    ERIC Educational Resources Information Center

    Lindquist, Peter S.; Hammel, Daniel J.

    1998-01-01

    Describes an exercise for college and high school students that relates descriptive statistics to the regional climatic classification. The exercise introduces students to simple calculations of central tendency and dispersion, the construction and interpretation of scatterplots, and the definition of climatic regions. Forces students to engage…

  11. Mode of Action (MOA) Assignment Classifications for Ecotoxicology: Evaluation of Available Methods

    EPA Science Inventory

    There are various structure-based classification schemes to categorize chemicals based on mode of action (MOA) which have been applied for both eco and human toxicology. With increasing calls to assess 1000s of chemicals, some of which have little available information other tha...

  12. Classification Consistency and Accuracy for Complex Assessments Using Item Response Theory

    ERIC Educational Resources Information Center

    Lee, Won-Chan

    2010-01-01

    In this article, procedures are described for estimating single-administration classification consistency and accuracy indices for complex assessments using item response theory (IRT). This IRT approach was applied to real test data comprising dichotomous and polytomous items. Several different IRT model combinations were considered. Comparisons…

  13. 18 CFR 3a.23 - Review of classified material for declassification purposes.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... classified material no longer warrants classification, it will be declassified and made available to the... available according to the declassification determination at the time of classification. During each... of the request for review no determination has been made, the requester may apply to the FPC Review...

  14. Validity: Applying Current Concepts and Standards to Gynecologic Surgery Performance Assessments

    ERIC Educational Resources Information Center

    LeClaire, Edgar L.; Nihira, Mikio A.; Hardré, Patricia L.

    2015-01-01

    Validity is critical for meaningful assessment of surgical competency. According to the Standards for Educational and Psychological Testing, validation involves the integration of data from well-defined classifications of evidence. In the authoritative framework, data from all classifications support construct validity claims. The two aims of this…

  15. Comparison of Support Vector Machine, Neural Network, and CART Algorithms for the Land-Cover Classification Using Limited Training Data Points

    EPA Science Inventory

    Support vector machine (SVM) was applied for land-cover characterization using MODIS time-series data. Classification performance was examined with respect to training sample size, sample variability, and landscape homogeneity (purity). The results were compared to two convention...

  16. Analyzing Student Inquiry Data Using Process Discovery and Sequence Classification

    ERIC Educational Resources Information Center

    Emond, Bruno; Buffett, Scott

    2015-01-01

    This paper reports on results of applying process discovery mining and sequence classification mining techniques to a data set of semi-structured learning activities. The main research objective is to advance educational data mining to model and support self-regulated learning in heterogeneous environments of learning content, activities, and…

  17. Cosmic variance in inflation with two light scalars

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonga, Béatrice; Brahma, Suddhasattwa; Deutsch, Anne-Sylvie

    We examine the squeezed limit of the bispectrum when a light scalar with arbitrary non-derivative self-interactions is coupled to the inflaton. We find that when the hidden sector scalar is sufficiently light (m ≲ 0.1H), the coupling between long and short wavelength modes from the series of higher order correlation functions (from arbitrary order contact diagrams) causes the statistics of the fluctuations to vary in sub-volumes. This means that observations of primordial non-Gaussianity cannot be used to uniquely reconstruct the potential of the hidden field. However, the local bispectrum induced by mode-coupling from these diagrams always has the same squeezed limit, so the field's locally determined mass is not affected by this cosmic variance.

  18. Source imaging of drums in the APNEA system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hensley, D.

    1995-12-31

    The APNea System is a neutron assay device utilizing both a passive mode and a differential-dieaway active mode. The total detection efficiency is not spatially uniform, even for an empty chamber, and a drum matrix in the chamber can severely distort this response. In order to achieve a response which is independent of the way the source material is distributed in a drum, an imaging procedure has been developed which treats the drum as a number of virtual (sub)volumes. Since each virtual volume of source material is weighted with the appropriate instrument parameters (detection efficiency and thermal flux), the final assay result is essentially independent of the actual distribution of the source material throughout the drum and its matrix.

  19. An Iterative Inference Procedure Applying Conditional Random Fields for Simultaneous Classification of Land Cover and Land Use

    NASA Astrophysics Data System (ADS)

    Albert, L.; Rottensteiner, F.; Heipke, C.

    2015-08-01

    Land cover and land use exhibit strong contextual dependencies. We propose a novel approach for the simultaneous classification of land cover and land use, where semantic and spatial context is considered. The image sites for land cover and land use classification form a hierarchy consisting of two layers: a land cover layer and a land use layer. We apply Conditional Random Fields (CRF) at both layers. The layers differ with respect to the image entities corresponding to the nodes, the employed features and the classes to be distinguished. In the land cover layer, the nodes represent super-pixels; in the land use layer, the nodes correspond to objects from a geospatial database. Both CRFs model spatial dependencies between neighbouring image sites. The complex semantic relations between land cover and land use are integrated in the classification process by using contextual features. We propose a new iterative inference procedure for the simultaneous classification of land cover and land use, in which the two classification tasks mutually influence each other. This helps to improve the classification accuracy for certain classes. The main idea of this approach is that semantic context helps to refine the class predictions, which, in turn, leads to more expressive context information. Thus, potentially wrong decisions can be reversed at later stages. The approach is designed for input data based on aerial images. Experiments are carried out on a test site to evaluate the performance of the proposed method. We show the effectiveness of the iterative inference procedure and demonstrate that a smaller size of the super-pixels has a positive influence on the classification result.

  20. Deep Learning with Convolutional Neural Networks Applied to Electromyography Data: A Resource for the Classification of Movements for Prosthetic Hands

    PubMed Central

    Atzori, Manfredo; Cognolato, Matteo; Müller, Henning

    2016-01-01

    Natural control methods based on surface electromyography (sEMG) and pattern recognition are promising for hand prosthetics. However, the control robustness offered by scientific research is still not sufficient for many real-life applications, and commercial prostheses are capable of offering natural control for only a few movements. In recent years, deep learning has revolutionized several fields of machine learning, including computer vision and speech recognition. Our objective is to test its methods for natural control of robotic hands via sEMG using a large number of intact subjects and amputees. We tested convolutional networks for the classification of an average of 50 hand movements in 67 intact subjects and 11 transradial amputees. The simple architecture of the neural network allowed us to make several tests in order to evaluate the effect of pre-processing, layer architecture, data augmentation and optimization. The classification results are compared with a set of classical classification methods applied to the same datasets. The classification accuracy obtained with convolutional neural networks using the proposed architecture is higher than the average results obtained with the classical classification methods, but lower than the results obtained with the best reference methods in our tests. The results show that convolutional neural networks with a very simple architecture can produce accurate results comparable to the average classical classification methods. They show that several factors (including pre-processing, the architecture of the net and the optimization parameters) can be fundamental for the analysis of sEMG data. Larger networks can achieve higher accuracy on computer vision and object recognition tasks. This fact suggests that it may be interesting to evaluate if larger networks can increase sEMG classification accuracy too. PMID:27656140

  1. Deep Learning with Convolutional Neural Networks Applied to Electromyography Data: A Resource for the Classification of Movements for Prosthetic Hands.

    PubMed

    Atzori, Manfredo; Cognolato, Matteo; Müller, Henning

    2016-01-01

    Natural control methods based on surface electromyography (sEMG) and pattern recognition are promising for hand prosthetics. However, the control robustness offered by scientific research is still not sufficient for many real-life applications, and commercial prostheses are capable of offering natural control for only a few movements. In recent years, deep learning has revolutionized several fields of machine learning, including computer vision and speech recognition. Our objective is to test its methods for natural control of robotic hands via sEMG using a large number of intact subjects and amputees. We tested convolutional networks for the classification of an average of 50 hand movements in 67 intact subjects and 11 transradial amputees. The simple architecture of the neural network allowed us to make several tests in order to evaluate the effect of pre-processing, layer architecture, data augmentation and optimization. The classification results are compared with a set of classical classification methods applied to the same datasets. The classification accuracy obtained with convolutional neural networks using the proposed architecture is higher than the average results obtained with the classical classification methods, but lower than the results obtained with the best reference methods in our tests. The results show that convolutional neural networks with a very simple architecture can produce accurate results comparable to the average classical classification methods. They show that several factors (including pre-processing, the architecture of the net and the optimization parameters) can be fundamental for the analysis of sEMG data. Larger networks can achieve higher accuracy on computer vision and object recognition tasks. This fact suggests that it may be interesting to evaluate if larger networks can increase sEMG classification accuracy too.
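
    A minimal Keras sketch of a 1-D convolutional network of the simple kind described above for windowed sEMG classification; the window shape (150 samples x 10 channels), layer sizes and 50-class output are assumptions rather than the paper's exact architecture.

      import numpy as np
      import tensorflow as tf

      model = tf.keras.Sequential([
          tf.keras.layers.Input(shape=(150, 10)),           # samples x sEMG channels
          tf.keras.layers.Conv1D(32, 5, activation='relu'),
          tf.keras.layers.MaxPooling1D(2),
          tf.keras.layers.Conv1D(64, 5, activation='relu'),
          tf.keras.layers.GlobalAveragePooling1D(),
          tf.keras.layers.Dense(50, activation='softmax'),  # ~50 hand movements
      ])
      model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
                    metrics=['accuracy'])

      X = np.random.rand(64, 150, 10).astype('float32')     # placeholder sEMG windows
      y = np.random.randint(0, 50, 64)                      # placeholder movement labels
      model.fit(X, y, epochs=1, verbose=0)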

  2. Mapping forested wetlands in the Great Zhan River Basin through integrating optical, radar, and topographical data classification techniques.

    PubMed

    Na, X D; Zang, S Y; Wu, C S; Li, W L

    2015-11-01

    Knowledge of the spatial extent of forested wetlands is essential to many studies including wetland functioning assessment, greenhouse gas flux estimation, and wildlife suitable habitat identification. For discriminating forested wetlands from their adjacent land cover types, researchers have resorted to image analysis techniques applied to numerous remotely sensed data. While with some success, there is still no consensus on the optimal approaches for mapping forested wetlands. To address this problem, we examined two machine learning approaches, random forest (RF) and K-nearest neighbor (KNN) algorithms, and applied these two approaches to the framework of pixel-based and object-based classifications. The RF and KNN algorithms were constructed using predictors derived from Landsat 8 imagery, Radarsat-2 advanced synthetic aperture radar (SAR), and topographical indices. The results show that the object-based classifications performed better than per-pixel classifications using the same algorithm (RF) in terms of overall accuracy, and the difference between their kappa coefficients is statistically significant (p<0.01). There were noticeable omissions for forested and herbaceous wetlands in the per-pixel classifications using the RF algorithm. As for the object-based image analysis, there were also statistically significant differences (p<0.01) in kappa coefficient between results based on the RF and KNN algorithms. The object-based classification using RF provided a more visually adequate distribution of the land cover types of interest, while the object-based classifications using the KNN algorithm showed noticeable commissions for forested wetlands and omissions for agricultural land. This research demonstrates that object-based classification with RF using optical, radar, and topographical data improved the mapping accuracy of land covers and provided a feasible approach to discriminating the forested wetlands from the other land cover types in a forestry area.
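
    The RF-versus-KNN comparison with a kappa statistic can be sketched as below, assuming the optical, SAR and topographic predictors have been flattened into a feature table; the data are synthetic placeholders, not the study's imagery.

      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import accuracy_score, cohen_kappa_score

      # Placeholder stand-in: spectral + SAR + topographic predictors, 5 cover classes
      X, y = make_classification(n_samples=1000, n_features=15, n_informative=8,
                                 n_classes=5, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      for clf in (RandomForestClassifier(n_estimators=200, random_state=0),
                  KNeighborsClassifier(n_neighbors=7)):
          pred = clf.fit(X_tr, y_tr).predict(X_te)
          print(type(clf).__name__, accuracy_score(y_te, pred),
                cohen_kappa_score(y_te, pred))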

  3. Empirical Wavelet Transform Based Features for Classification of Parkinson's Disease Severity.

    PubMed

    Oung, Qi Wei; Muthusamy, Hariharan; Basah, Shafriza Nisha; Lee, Hoileong; Vijean, Vikneswaran

    2017-12-29

    Parkinson's disease (PD) is a type of progressive neurodegenerative disorder that has affected a large part of the population till now. Several symptoms of PD include tremor, rigidity, slowness of movements and vocal impairments. In order to develop an effective diagnostic system, a number of algorithms were proposed mainly to distinguish healthy individuals from those with PD. However, most of the previous works were conducted based on a binary classification, with the early PD stage and the advanced ones being treated equally. Therefore, in this work, we propose a multiclass classification with three classes of PD severity level (mild, moderate, severe) and healthy control. The focus is to detect and classify PD using signals from wearable motion and audio sensors based on the empirical wavelet transform (EWT) and empirical wavelet packet transform (EWPT) respectively. The EWT/EWPT was applied to decompose both speech and motion data signals up to five levels. Next, several features were extracted after obtaining the instantaneous amplitudes and frequencies from the coefficients of the decomposed signals by applying the Hilbert transform. The performance of the algorithm was analysed using three classifiers: K-nearest neighbour (KNN), probabilistic neural network (PNN) and extreme learning machine (ELM). Experimental results demonstrated that our proposed approach had the ability to differentiate PD from non-PD subjects, including their severity level, with classification accuracies of more than 90% using EWT/EWPT-ELM based on signals from motion and audio sensors respectively. Additionally, classification accuracy of more than 95% was achieved when EWT/EWPT-ELM was applied to the combined information from both types of signals.
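
    Sketch of the feature step described above: instantaneous amplitude and frequency of one decomposed sub-band via the Hilbert transform. The EWT/EWPT decomposition itself is omitted; the sub-band here is a synthetic amplitude-modulated tone, and the summary features are illustrative choices.

      import numpy as np
      from scipy.signal import hilbert

      fs = 1000.0
      t = np.arange(0, 1, 1 / fs)
      # Synthetic sub-band: a 50 Hz tone with slow amplitude modulation
      subband = np.sin(2 * np.pi * 50 * t) * (1 + 0.3 * np.sin(2 * np.pi * 2 * t))

      analytic = hilbert(subband)
      amp = np.abs(analytic)                       # instantaneous amplitude
      phase = np.unwrap(np.angle(analytic))
      freq = np.diff(phase) / (2 * np.pi) * fs     # instantaneous frequency (Hz)

      # Example per-sub-band features, as inputs to KNN/PNN/ELM classifiers
      features = [amp.mean(), amp.std(), freq.mean(), freq.std()]
      print(np.round(features, 2))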

  4. Gene masking - a technique to improve accuracy for cancer classification with high dimensionality in microarray data.

    PubMed

    Saini, Harsh; Lal, Sunil Pranit; Naidu, Vimal Vikash; Pickering, Vincel Wince; Singh, Gurmeet; Tsunoda, Tatsuhiko; Sharma, Alok

    2016-12-05

    High dimensional feature space generally degrades classification in several applications. In this paper, we propose a strategy called gene masking, in which non-contributing dimensions are heuristically removed from the data to improve classification accuracy. Gene masking is implemented via a binary encoded genetic algorithm that can be integrated seamlessly with classifiers during the training phase of classification to perform feature selection. It can also be used to discriminate between features that contribute most to the classification, thereby, allowing researchers to isolate features that may have special significance. This technique was applied on publicly available datasets whereby it substantially reduced the number of features used for classification while maintaining high accuracies. The proposed technique can be extremely useful in feature selection as it heuristically removes non-contributing features to improve the performance of classifiers.
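
    A hedged sketch of the gene-masking idea: a binary mask over features is evolved so that the cross-validated accuracy of a wrapped classifier is maximized. For brevity, the search below is a mutation-only hill climber rather than the paper's full binary-encoded genetic algorithm, and the classifier and data are placeholders.

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.naive_bayes import GaussianNB

      rng = np.random.default_rng(11)
      X = rng.normal(size=(80, 40))     # placeholder microarray matrix (samples x genes)
      y = rng.integers(0, 2, 80)        # placeholder cancer class labels

      def fitness(mask):
          # Fitness = cross-validated accuracy using only the unmasked genes
          if mask.sum() == 0:
              return 0.0
          return cross_val_score(GaussianNB(), X[:, mask.astype(bool)], y, cv=3).mean()

      mask = rng.integers(0, 2, X.shape[1])   # initial random gene mask
      best = fitness(mask)
      for _ in range(30):                     # flip one bit; keep non-worsening masks
          child = mask.copy()
          child[rng.integers(X.shape[1])] ^= 1
          f = fitness(child)
          if f >= best:
              mask, best = child, f
      print(int(mask.sum()), round(best, 3))  # genes kept, achieved accuracy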

  5. Identification of an Efficient Gene Expression Panel for Glioblastoma Classification

    PubMed Central

    Zelaya, Ivette; Laks, Dan R.; Zhao, Yining; Kawaguchi, Riki; Gao, Fuying; Kornblum, Harley I.; Coppola, Giovanni

    2016-01-01

    We present here a novel genetic algorithm-based random forest (GARF) modeling technique that enables a reduction in the complexity of large gene disease signatures to highly accurate, greatly simplified gene panels. When applied to 803 glioblastoma multiforme samples, this method allowed the 840-gene Verhaak et al. gene panel (the standard in the field) to be reduced to a 48-gene classifier, while retaining 90.91% classification accuracy, and outperforming the best available alternative methods. Additionally, using this approach we produced a 32-gene panel which allows for better consistency between RNA-seq and microarray-based classifications, improving cross-platform classification retention from 69.67% to 86.07%. A webpage producing these classifications is available at http://simplegbm.semel.ucla.edu. PMID:27855170

  6. Contextual classification of multispectral image data: An unbiased estimator for the context distribution

    NASA Technical Reports Server (NTRS)

    Tilton, J. C.; Swain, P. H. (Principal Investigator); Vardeman, S. B.

    1981-01-01

    A key input to a statistical classification algorithm, which exploits the tendency of certain ground cover classes to occur more frequently in some spatial context than in others, is a statistical characterization of the context: the context distribution. An unbiased estimator of the context distribution is discussed which, besides having the advantage of statistical unbiasedness, has the additional advantage over other estimation techniques of being amenable to an adaptive implementation in which the context distribution estimate varies according to local contextual information. Results from applying the unbiased estimator to the contextual classification of three real LANDSAT data sets are presented and contrasted with results from non-contextual classifications and from contextual classifications utilizing other context distribution estimation techniques.

  7. Spectral-Spatial Classification of Hyperspectral Images Using Hierarchical Optimization

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.

    2011-01-01

    A new spectral-spatial method for hyperspectral data classification is proposed. For a given hyperspectral image, probabilistic pixelwise classification is first applied. Then, a hierarchical step-wise optimization algorithm is performed, iteratively merging the neighboring regions with the smallest Dissimilarity Criterion (DC) and recomputing class labels for the new regions. The DC is computed by comparing the region mean vectors, class labels, and number of pixels of the two regions under consideration. The algorithm converges when all pixels have been involved in the region-merging procedure. Experimental results are presented for two remote sensing hyperspectral images acquired by the AVIRIS and ROSIS sensors. The proposed approach improves classification accuracies and provides maps with more homogeneous regions, when compared to previously proposed classification techniques.

  8. A Method for Application of Classification Tree Models to Map Aquatic Vegetation Using Remotely Sensed Images from Different Sensors and Dates

    PubMed Central

    Jiang, Hao; Zhao, Dehua; Cai, Ying; An, Shuqing

    2012-01-01

    In previous attempts to identify aquatic vegetation from remotely-sensed images using classification trees (CT), the images used to apply CT models to different times or locations necessarily originated from the same satellite sensor as that from which the original images used in model development came, greatly limiting the application of CT. We have developed an effective normalization method to improve the robustness of CT models when applied to images originating from different sensors and dates. A total of 965 ground-truth samples of aquatic vegetation types were obtained in 2009 and 2010 in Taihu Lake, China. Using relevant spectral indices (SI) as classifiers, we manually developed a stable CT model structure and then applied a standard CT algorithm to obtain quantitative (optimal) thresholds from 2009 ground-truth data and images from Landsat7-ETM+, HJ-1B-CCD, Landsat5-TM and ALOS-AVNIR-2 sensors. Optimal CT thresholds produced average classification accuracies of 78.1%, 84.7% and 74.0% for emergent vegetation, floating-leaf vegetation and submerged vegetation, respectively. However, the optimal CT thresholds for different sensor images differed from each other, with an average relative variation (RV) of 6.40%. We developed and evaluated three new approaches to normalizing the images. The best-performing method (Method of 0.1% index scaling) normalized the SI images using tailored percentages of extreme pixel values. Using the images normalized by Method of 0.1% index scaling, CT models for a particular sensor in which thresholds were replaced by those from the models developed for images originating from other sensors provided average classification accuracies of 76.0%, 82.8% and 68.9% for emergent vegetation, floating-leaf vegetation and submerged vegetation, respectively. Applying the CT models developed for normalized 2009 images to 2010 images resulted in high classification (78.0%–93.3%) and overall (92.0%–93.1%) accuracies. Our results suggest that Method of 0.1% index scaling provides a feasible way to apply CT models directly to images from sensors or time periods that differ from those of the images used to develop the original models.
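
    One plausible reading of the best-performing normalization, sketched below, rescales each spectral-index image between its 0.1% and 99.9% pixel-value percentiles so that sensor-specific extremes do not shift the CT thresholds. The exact percentile pair and the clipping behaviour are assumptions on our part, not details taken from the paper.

```python
import numpy as np

def index_scale(si, low_pct=0.1, high_pct=99.9):
    """Rescale a spectral-index image to [0, 1] using near-extreme percentiles,
    so that a few outlier pixels do not dominate the stretch."""
    lo, hi = np.nanpercentile(si, [low_pct, high_pct])
    return np.clip((si - lo) / (hi - lo + 1e-12), 0.0, 1.0)
```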

  9. Korean coastal water depth/sediment and land cover mapping (1:25,000) by computer analysis of LANDSAT imagery

    NASA Technical Reports Server (NTRS)

    Park, K. Y.; Miller, L. D.

    1978-01-01

    Computer analysis was applied to single-date LANDSAT MSS imagery of a sample coastal area near Seoul, Korea, equivalent to a 1:50,000 topographic map. Supervised image processing yielded a test classification map from this sample image containing 12 classes: 5 water depth/sediment classes, 2 shoreline/tidal classes, and 5 coastal land cover classes, at a scale of 1:25,000 and with a training set accuracy of 76%. Unsupervised image classification was applied to a subportion of the site and produced classification maps with spatially comparable results. The results of this test indicated that it is feasible to produce such quantitative maps for detailed study of dynamic coastal processes, given a LANDSAT image data base at sufficiently frequent time intervals.

  10. Classification and recognition of texture collagen obtaining by multiphoton microscope with neural network analysis

    NASA Astrophysics Data System (ADS)

    Wu, Shulian; Peng, Yuanyuan; Hu, Liangjun; Zhang, Xiaoman; Li, Hui

    2016-01-01

    Second harmonic generation microscopy (SHGM) was used to monitor chronological skin aging in vivo. Collagen structures in mouse models of different ages were imaged using SHGM. Texture features (contrast, correlation, and entropy) were then extracted and analysed using the grey-level co-occurrence matrix. Finally, the Matlab neural network toolbox was used to train on the collagen textures at different stages of the aging process, and simulation on the mouse collagen textures was carried out. The results indicated that the classification accuracy reached 85%. These results demonstrate that the proposed approach effectively detects the target objects in collagen texture images during chronological aging, and that the neural-network-based classification and feature extraction method is feasible for skin analysis.
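
    As a sketch of the texture step, the snippet below computes contrast, correlation, and entropy from a grey-level co-occurrence matrix with scikit-image. The distance and angle settings are illustrative, and entropy is computed by hand since it is not a standard graycoprops property in all versions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_texture(img_u8):
    """Contrast, correlation, and entropy of an 8-bit greyscale image patch."""
    glcm = graycomatrix(img_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    feats = {p: graycoprops(glcm, p).mean() for p in ("contrast", "correlation")}
    probs = glcm[:, :, 0, :]                      # one matrix per angle
    ent = []
    for k in range(probs.shape[-1]):
        p = probs[..., k][probs[..., k] > 0]
        ent.append(-np.sum(p * np.log2(p)))       # Shannon entropy of the GLCM
    feats["entropy"] = float(np.mean(ent))
    return feats
```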

  11. Managing the Big Data Avalanche in Astronomy - Data Mining the Galaxy Zoo Classification Database

    NASA Astrophysics Data System (ADS)

    Borne, Kirk D.

    2014-01-01

    We will summarize a variety of data mining experiments that have been applied to the Galaxy Zoo database of galaxy classifications, which were provided by volunteer citizen scientists. The goal of these exercises is to learn new and improved classification rules for diverse populations of galaxies, which can then be applied to much larger sky surveys of the future, such as the LSST (Large Synoptic Survey Telescope), which is proposed to obtain detailed photometric data for approximately 20 billion galaxies. The massive Big Data that astronomy projects will generate in the future demand greater application of data mining and data science algorithms, as well as greater training of astronomy students in the skills of data mining and data science. The project described here has involved several graduate and undergraduate research assistants at George Mason University.

  12. Please Don't Move-Evaluating Motion Artifact From Peripheral Quantitative Computed Tomography Scans Using Textural Features.

    PubMed

    Rantalainen, Timo; Chivers, Paola; Beck, Belinda R; Robertson, Sam; Hart, Nicolas H; Nimphius, Sophia; Weeks, Benjamin K; McIntyre, Fleur; Hands, Beth; Siafarikas, Aris

    Most imaging methods, including peripheral quantitative computed tomography (pQCT), are susceptible to motion artifacts particularly in fidgety pediatric populations. Methods currently used to address motion artifact include manual screening (visual inspection) and objective assessments of the scans. However, previously reported objective methods either cannot be applied on the reconstructed image or have not been tested for distal bone sites. Therefore, the purpose of the present study was to develop and validate motion artifact classifiers to quantify motion artifact in pQCT scans. Whether textural features could provide adequate motion artifact classification performance in 2 adolescent datasets with pQCT scans from tibial and radial diaphyses and epiphyses was tested. The first dataset was split into training (66% of sample) and validation (33% of sample) datasets. Visual classification was used as the ground truth. Moderate to substantial classification performance (J48 classifier, kappa coefficients from 0.57 to 0.80) was observed in the validation dataset with the novel texture-based classifier. In applying the same classifier to the second cross-sectional dataset, a slight-to-fair (κ = 0.01-0.39) classification performance was observed. Overall, this novel textural analysis-based classifier provided a moderate-to-substantial classification of motion artifact when the classifier was specifically trained for the measurement device and population. Classification based on textural features may be used to prescreen obviously acceptable and unacceptable scans, with a subsequent human-operated visual classification of any remaining scans. Copyright © 2017 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.

  13. Feature selection for the classification of traced neurons.

    PubMed

    López-Cabrera, José D; Lorenzo-Ginori, Juan V

    2018-06-01

    The great availability of computational tools to calculate the properties of traced neurons has led to many descriptors that allow the automated classification of neurons from these reconstructions. This situation makes it necessary to eliminate irrelevant features and to select the most appropriate among them, in order to improve the quality of the resulting classification. The dataset used contains a total of 318 traced neurons, classified by human experts into 192 GABAergic interneurons and 126 pyramidal cells. The features were extracted by means of the L-measure software, one of the most widely used computational tools in neuroinformatics to quantify traced neurons. We review some current feature selection techniques, such as filter, wrapper, embedded, and ensemble methods. The stability of the feature selection methods was measured. For the ensemble methods, several aggregation methods based on different metrics were applied to combine the subsets obtained during the feature selection process. The subsets obtained by applying feature selection were evaluated using supervised classifiers, including Random Forest, C4.5, SVM, Naïve Bayes, KNN, Decision Table, and the Logistic classifier. Feature selection methods of the filter, embedded, wrapper, and ensemble types were compared, and the returned subsets were tested in classification tasks with the different classification algorithms. The L-measure features EucDistanceSD, PathDistanceSD, Branch_pathlengthAve, Branch_pathlengthSD, and EucDistanceAve were present in more than 60% of the selected subsets, which provides evidence of their importance in the classification of these neurons. Copyright © 2018 Elsevier B.V. All rights reserved.
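
    To illustrate the kind of pipeline described here, the sketch below aggregates two filter rankings by mean rank (a simple ensemble aggregation) and evaluates the selected subset with a Random Forest under cross-validation. The synthetic data, the pair of filters, and the subset size are our choices, not the study's.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif
from sklearn.model_selection import cross_val_score

# Stand-in for the 318-neuron, two-class morphometric dataset.
X, y = make_classification(n_samples=318, n_features=40, n_informative=6,
                           random_state=0)

# Rank features with two filters and aggregate by mean rank.
ranks = []
for score_fn in (f_classif, mutual_info_classif):
    scores = SelectKBest(score_fn, k="all").fit(X, y).scores_
    ranks.append(np.argsort(np.argsort(-scores)))   # rank 0 = best
subset = np.argsort(np.mean(ranks, axis=0))[:5]     # top-5 aggregated features

acc = cross_val_score(RandomForestClassifier(random_state=0),
                      X[:, subset], y, cv=5).mean()
print("CV accuracy on selected subset:", acc)
```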

  14. Association between pathology and texture features of multi parametric MRI of the prostate

    NASA Astrophysics Data System (ADS)

    Kuess, Peter; Andrzejewski, Piotr; Nilsson, David; Georg, Petra; Knoth, Johannes; Susani, Martin; Trygg, Johan; Helbich, Thomas H.; Polanec, Stephan H.; Georg, Dietmar; Nyholm, Tufve

    2017-10-01

    The role of multi-parametric (mp)MRI in the diagnosis and treatment of prostate cancer has increased considerably. An alternative to visual inspection of mpMRI is evaluation using histogram-based (first order statistics) parameters and textural features (second order statistics). The aims of the present work were to investigate the relationship between benign and malignant sub-volumes of the prostate and the textures obtained from mpMR images. The performance of tumor prediction was investigated based on the combination of histogram-based and textural parameters. Subsequently, the relative importance of the mpMR images was assessed and the benefit of additional imaging analyzed. Finally, sub-structures based on the PI-RADS classification were investigated as potential regions for automatically detecting malignant lesions. Twenty-five patients who received mpMRI prior to radical prostatectomy were included in the study. The imaging protocol included T2, DWI, and DCE. Delineation of tumor regions was performed based on pathological information. First and second order statistics were derived from each structure and for all image modalities. The resulting data were processed with multivariate analysis, using PCA (principal component analysis) and OPLS-DA (orthogonal partial least squares discriminant analysis) for separation of malignant and healthy tissue. PCA showed a clear difference between tumor and healthy regions in the peripheral zone for all investigated images. The predictive ability of the OPLS-DA models increased for all image modalities when first and second order statistics were combined. The predictive value reached a plateau after adding ADC and T2, and did not increase further with the addition of other image information. The present study indicates a distinct difference in the signatures between malignant and benign prostate tissue. This is an absolute prerequisite for automatic tumor segmentation, but only the first step in that direction. For the specific identified signature, DCE did not add complementary information to T2 and ADC maps.

  15. LANDSAT landcover information applied to regional planning decisions. [Prince Edward County, Virginia

    NASA Technical Reports Server (NTRS)

    Dixon, C. M.

    1981-01-01

    Land cover information derived from LANDSAT is being utilized by the Piedmont Planning District Commission in the State of Virginia. Progress to date is reported on a level-one land cover classification map being produced with nine categories, and the nine classification categories are defined. The computer compatible tape selection is presented. Two unsupervised classifications were performed, with 50 and 70 classes respectively. Twenty-eight spectral classes were developed using the supervised technique, employing actual ground truth training sites. The accuracy of the unsupervised classifications is estimated through comparison with local county statistics and with an actual pixel count of LANDSAT information compared to ground truth.

  16. Theory and analysis of statistical discriminant techniques as applied to remote sensing data

    NASA Technical Reports Server (NTRS)

    Odell, P. L.

    1973-01-01

    Classification of remote earth resources sensing data according to normed exponential density statistics is reported. The use of density models appropriate for several physical situations provides an exact solution for the probabilities of classifications associated with the Bayes discriminant procedure even when the covariance matrices are unequal.

  17. Using Computational Text Classification for Qualitative Research and Evaluation in Extension

    ERIC Educational Resources Information Center

    Smith, Justin G.; Tissing, Reid

    2018-01-01

    This article introduces a process for computational text classification that can be used in a variety of qualitative research and evaluation settings. The process leverages supervised machine learning based on an implementation of a multinomial Bayesian classifier. Applied to a community of inquiry framework, the algorithm was used to identify…
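
    A minimal sketch of such a classifier, using a bag-of-words multinomial naive Bayes model; the example excerpts and the category labels are hypothetical placeholders, not the article's coding scheme.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical hand-coded training excerpts with qualitative codes.
docs = ["I felt welcomed by the group",
        "The data suggest a different interpretation",
        "Thanks everyone for the warm responses",
        "Comparing both models, the second fits better"]
codes = ["social", "cognitive", "social", "cognitive"]

model = make_pipeline(CountVectorizer(), MultinomialNB()).fit(docs, codes)
print(model.predict(["The results support the first hypothesis"]))
```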

  18. Engaged with Carnegie: Effects of Carnegie Classification Recognition on CUMU Universities

    ERIC Educational Resources Information Center

    Arfken, Deborah Elwell; Ritz, Susan

    2013-01-01

    This paper provides the results of a survey sent to all thirty-two CUMU institutions that have received the Carnegie recognition and specifically examines a) reasons for applying for the elective classification; b) level of pride instilled in campuses; and c) level of impact on institutional identity and culture, institutional commitment,…

  19. Constructing the 'Transitional Problem' for Young Disabled People Leaving School: Comparing Policy and Practice in Ontario and Scotland.

    ERIC Educational Resources Information Center

    Tisdall, E. K. M.

    1997-01-01

    Explores how the "transitional question" of young disabled people leaving school is constructed. Describes and evaluates D. L. Kirp's classification system of social problems as applied to the "transition question." Reports case studies from Ontario (Canada) and Scotland. Concludes that Kirp's classification system is not…

  20. Abnormal Uterine Bleeding: Current Classification and Clinical Management.

    PubMed

    Bacon, Janice L

    2017-06-01

    Abnormal uterine bleeding is now classified and categorized according to the International Federation of Gynecology and Obstetrics classification system: PALM-COEIN. This applies to nongravid women during their reproductive years and allows clearer designation of causes, thus aiding clinical care and future research. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Passive polarimetric imagery-based material classification robust to illumination source position and viewpoint.

    PubMed

    Thilak Krishna, Thilakam Vimal; Creusere, Charles D; Voelz, David G

    2011-01-01

    Polarization, a property of light that conveys information about the transverse electric field orientation, complements other attributes of electromagnetic radiation such as intensity and frequency. Using multiple passive polarimetric images, we develop an iterative, model-based approach to estimate the complex index of refraction and apply it to target classification.

  2. A Classification of Recent Australasian Computing Education Publications

    ERIC Educational Resources Information Center

    Computer Science Education, 2007

    2007-01-01

    A new classification system for computing education papers is presented and applied to every computing education paper published between January 2004 and January 2007 at the two premier computing education conferences in Australia and New Zealand. We find that while simple reports outnumber other types of paper, a healthy proportion of papers…

  3. Assessing the Accuracy and Consistency of Language Proficiency Classification under Competing Measurement Models

    ERIC Educational Resources Information Center

    Zhang, Bo

    2010-01-01

    This article investigates how measurement models and statistical procedures can be applied to estimate the accuracy of proficiency classification in language testing. The paper starts with a concise introduction of four measurement models: the classical test theory (CTT) model, the dichotomous item response theory (IRT) model, the testlet response…

  4. 8 CFR 204.300 - Scope of this subpart.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... apply to the immigrant visa classification of adopted children, as defined in section 101(b)(1)(E) of the Act. For the procedures that govern classification of adopted children as defined in section 101(b... date, as defined in 8 CFR 204.301. (b) Orphan cases. On or after the Convention effective date, no Form...

  5. Diagnostic Classification Models: Thoughts and Future Directions

    ERIC Educational Resources Information Center

    Henson, Robert A.

    2009-01-01

    The paper by Drs. Rupp and Templin provides a much needed step toward the general application of diagnostic classification modeling (DCMs). The authors have provided a summary of many of the concepts that one must consider to properly apply a DCM (which ranges from model selection and estimation, to assessing the appropriateness of the model using…

  6. 76 FR 6551 - Medical Devices; General and Plastic Surgery Devices; Classification of Contact Cooling System...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-07

    ... intended for non-invasive aesthetic use will need to address the issues covered in the special controls... intended for non-invasive aesthetic use. (b) Classification. Class II (special controls). The special... into class II (special controls). The special control that will apply to the device is the guidance...

  7. Machine learning algorithms for meteorological event classification in the coastal area using in-situ data

    NASA Astrophysics Data System (ADS)

    Sokolov, Anton; Gengembre, Cyril; Dmitriev, Egor; Delbarre, Hervé

    2017-04-01

    We consider the problem of classifying local atmospheric meteorological events in coastal areas, such as sea breezes, fogs, and storms. In-situ meteorological data such as wind speed and direction, temperature, humidity, and turbulence are used as predictors. Local atmospheric events of 2013-2014 were analysed manually to train classification algorithms for the coastal area of the English Channel at Dunkirk (France). Ultrasonic anemometer data and LIDAR wind profiler data were then used as predictors. Several algorithms were applied to identify meteorological events from local data: a decision tree, a nearest-neighbour classifier, and a support vector machine. The classification algorithms were compared, and the most important predictors for each event type were determined. It was shown that in more than 80 percent of cases the machine learning algorithms detected the meteorological class correctly. We expect that this methodology could also be applied to classify events in climatological in-situ data or in modelling data, which would allow the frequency of each event type to be estimated in the context of climate change.
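
    The comparison step can be sketched as cross-validated scoring of the three classifier families on a shared feature matrix; the synthetic features and label rule below are invented stand-ins for the in-situ measurements.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Hypothetical predictors: wind speed, wind direction (sin, cos), temperature,
# humidity, turbulence intensity; labels 0=breeze, 1=fog, 2=storm, 3=none.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))
y = (X[:, 0] > 0).astype(int) + 2 * (X[:, 4] > 1)   # synthetic but learnable labels

for name, clf in [("decision tree", DecisionTreeClassifier(max_depth=5)),
                  ("nearest neighbour", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf"))]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```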

  8. Classifier dependent feature preprocessing methods

    NASA Astrophysics Data System (ADS)

    Rodriguez, Benjamin M., II; Peterson, Gilbert L.

    2008-04-01

    In mobile applications, computational complexity is an issue that limits sophisticated algorithms from being implemented on these devices. This paper provides an initial solution to applying pattern recognition systems on mobile devices by combining existing preprocessing algorithms for recognition. In pattern recognition systems, it is essential to properly apply feature preprocessing tools prior to training classification models, in an attempt to reduce computational complexity and improve the overall classification accuracy. The feature preprocessing tools extended for the mobile environment are feature ranking, feature extraction, data preparation, and outlier removal. Most desktop systems today are capable of running the majority of available classification algorithms without processing-time concerns, while the same is not true of mobile platforms. As an application of pattern recognition for mobile devices, the recognition system targets the problem of steganalysis: determining whether an image contains hidden information. The performance measurements show that feature preprocessing increases the overall steganalysis classification accuracy by an average of 22%. The methods in this paper are tested on a workstation and a Nokia 6620 (Symbian operating system) camera phone with similar results.
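
    An illustrative composition of these preprocessing tools is sketched below: scaling, outlier removal, then feature ranking. The specific estimators are common stand-ins chosen by us, not the paper's implementations.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.preprocessing import StandardScaler

def preprocess(X, y, k=20):
    """Scale features, drop outlier samples, and keep the top-k ranked features."""
    X = StandardScaler().fit_transform(X)                         # data preparation
    keep = IsolationForest(random_state=0).fit_predict(X) == 1    # outlier removal
    X, y = X[keep], y[keep]
    selector = SelectKBest(mutual_info_classif, k=min(k, X.shape[1]))
    return selector.fit_transform(X, y), y                        # feature ranking
```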

  9. Modeling EEG Waveforms with Semi-Supervised Deep Belief Nets: Fast Classification and Anomaly Measurement

    PubMed Central

    Wulsin, D. F.; Gupta, J. R.; Mani, R.; Blanco, J. A.; Litt, B.

    2011-01-01

    Clinical electroencephalography (EEG) records vast amounts of complex human data yet is still reviewed primarily by human readers. Deep Belief Nets (DBNs) are a relatively new type of multi-layer neural network commonly tested on two-dimensional image data, but rarely applied to time-series data such as EEG. We apply DBNs in a semi-supervised paradigm to model EEG waveforms for classification and anomaly detection. DBN performance was comparable to standard classifiers on our EEG dataset, and classification time was found to be 1.7 to 103.7 times faster than the other high-performing classifiers. We demonstrate how the unsupervised step of DBN learning produces an autoencoder that can naturally be used in anomaly measurement. We compare the use of raw, unprocessed data—a rarity in automated physiological waveform analysis—to hand-chosen features and find that raw data produces comparable classification and better anomaly measurement performance. These results indicate that DBNs and raw data inputs may be more effective for online automated EEG waveform recognition than other common techniques. PMID:21525569

  10. Object-oriented and pixel-based classification approach for land cover using airborne long-wave infrared hyperspectral data

    NASA Astrophysics Data System (ADS)

    Marwaha, Richa; Kumar, Anil; Kumar, Arumugam Senthil

    2015-01-01

    Our primary objective was to explore a classification algorithm for thermal hyperspectral data. Minimum noise fraction is applied to the thermal hyperspectral data, and eight pixel-based classifiers are tested: constrained energy minimization, matched filter, spectral angle mapper (SAM), adaptive coherence estimator, orthogonal subspace projection, mixture-tuned matched filter, target-constrained interference-minimized filter, and mixture-tuned target-constrained interference-minimized filter. The long-wave infrared (LWIR) has not yet been exploited for classification purposes. LWIR data contain emissivity and temperature information about an object. The highest overall accuracy, 90.99%, was obtained using the SAM algorithm on the combination of thermal data with a colored digital photograph. Similarly, an object-oriented approach is applied to the thermal data: pixels are grouped into meaningful objects based on properties such as geometry and length using a watershed segmentation algorithm, and a supervised classification algorithm, i.e., a support vector machine (SVM), is then applied. The best algorithm in the pixel-based category is the SAM technique. SVM is useful for thermal data, providing a high accuracy of 80.00% at a scale value of 83 and a merge value of 90, whereas for the combination of thermal data with a colored digital photograph, SVM gives the highest accuracy of 85.71% at a scale value of 82 and a merge value of 90.
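
    The pixel-based SAM rule that performed best here is simple to state: assign each pixel to the class whose reference spectrum subtends the smallest angle with the pixel spectrum. A minimal numpy sketch, with array shapes assumed by us:

```python
import numpy as np

def spectral_angle(cube, ref):
    """Angle (radians) between each pixel spectrum in cube (H, W, B)
    and a reference spectrum ref (B,); a smaller angle is a better match."""
    dot = np.tensordot(cube, ref, axes=([2], [0]))
    denom = np.linalg.norm(cube, axis=2) * np.linalg.norm(ref) + 1e-12
    return np.arccos(np.clip(dot / denom, -1.0, 1.0))

def sam_classify(cube, class_spectra):
    """Label each pixel with the index of the closest class spectrum."""
    angles = np.stack([spectral_angle(cube, r) for r in class_spectra])
    return angles.argmin(axis=0)
```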

  11. Automated speech analysis applied to laryngeal disease categorization.

    PubMed

    Gelzinis, A; Verikas, A; Bacauskiene, M

    2008-07-01

    The long-term goal of this work is a decision support system for the diagnosis of laryngeal diseases. Colour images of the vocal folds, a voice signal, and questionnaire data are the information sources to be used in the analysis. This paper is concerned with automated analysis of a voice signal applied to screening for laryngeal diseases. The effectiveness of 11 different feature sets in classifying voice recordings of the sustained phonation of the vowel sound /a/ into a healthy class and two pathological classes, diffuse and nodular, is investigated. A k-NN classifier, an SVM, and a committee built using various aggregation options are used for the classification. The study was carried out using a mixed-gender database containing 312 voice recordings. A correct classification rate of 84.6% was achieved using an SVM committee consisting of four members. The pitch and amplitude perturbation measures, cepstral energy features, autocorrelation features, as well as linear prediction cosine transform coefficients, were amongst the feature sets providing the best performance. In the two-class case, using recordings from 79 subjects representing the pathological class and 69 the healthy class, a correct classification rate of 95.5% was obtained from a five-member committee. Again, the pitch and amplitude perturbation measures provided the best performance.

  12. Scattering property based contextual PolSAR speckle filter

    NASA Astrophysics Data System (ADS)

    Mullissa, Adugna G.; Tolpekin, Valentyn; Stein, Alfred

    2017-12-01

    Reliability of the scattering model based polarimetric SAR (PolSAR) speckle filter depends upon the accurate decomposition and classification of the scattering mechanisms. This paper presents an improved scattering property based contextual speckle filter based upon an iterative classification of the scattering mechanisms. It applies a Cloude-Pottier eigenvalue-eigenvector decomposition and a fuzzy H/α classification to determine the scattering mechanisms on a pre-estimate of the coherency matrix. The H/α classification identifies pixels with homogeneous scattering properties. A coarse pixel selection rule groups pixels that are either single bounce, double bounce or volume scatterers. A fine pixel selection rule is applied to pixels within each canonical scattering mechanism. We filter the PolSAR data and depending on the type of image scene (urban or rural) use either the coarse or fine pixel selection rule. Iterative refinement of the Wishart H/α classification reduces the speckle in the PolSAR data. Effectiveness of this new filter is demonstrated by using both simulated and real PolSAR data. It is compared with the refined Lee filter, the scattering model based filter and the non-local means filter. The study concludes that the proposed filter compares favorably with other polarimetric speckle filters in preserving polarimetric information, point scatterers and subtle features in PolSAR data.

  13. Evaluating the statistical performance of less applied algorithms in classification of worldview-3 imagery data in an urbanized landscape

    NASA Astrophysics Data System (ADS)

    Ranaie, Mehrdad; Soffianian, Alireza; Pourmanafi, Saeid; Mirghaffari, Noorollah; Tarkesh, Mostafa

    2018-03-01

    In the recent decade, analysis of remotely sensed imagery has become one of the most common and widely used procedures in environmental studies, in which supervised image classification techniques play a central role. Hence, using a high-resolution Worldview-3 image over a mixed urbanized landscape in Iran, three less commonly applied image classification methods, bagged CART, a stochastic gradient boosting model, and a neural network with feature extraction, were tested and compared with two prevalent methods: random forest and a support vector machine with a linear kernel. To do so, each method was run ten times, and three validation techniques were used to estimate the accuracy statistics: cross validation, independent validation, and validation with the full training data. Moreover, the statistical significance of the differences between the classification methods was assessed using ANOVA and the Tukey test. In general, the results showed that random forest, by a marginal difference over bagged CART and the stochastic gradient boosting model, was the best-performing method, whilst based on independent validation there was no significant difference between the performances of the classification methods. It should finally be noted that the neural network with feature extraction and the linear support vector machine had better processing speed than the others.

  14. Forest tree species discrimination in western Himalaya using EO-1 Hyperion

    NASA Astrophysics Data System (ADS)

    George, Rajee; Padalia, Hitendra; Kushwaha, S. P. S.

    2014-05-01

    The information acquired in the narrow bands of hyperspectral remote sensing data has the potential to capture plant species' spectral variability, thereby improving forest tree species mapping. This study assessed the utility of spaceborne EO-1 Hyperion data in the discrimination and classification of broadleaved evergreen and conifer forest tree species in the western Himalaya. Pre-processing of the 242 bands of Hyperion data resulted in 160 noise-free and vertical-stripe-corrected reflectance bands. Of these, 29 bands were selected through step-wise exclusion of bands (Wilk's lambda). Spectral Angle Mapper (SAM) and Support Vector Machine (SVM) algorithms were applied to the selected bands to assess their effectiveness in classification. SVM was also applied to broadband data (Landsat TM) to compare the variation in classification accuracy. All six commonly occurring gregarious tree species, viz., white oak, brown oak, chir pine, blue pine, cedar, and fir, in the western Himalaya could be effectively discriminated. SVM produced a better species classification (overall accuracy 82.27%, kappa statistic 0.79) than SAM (overall accuracy 74.68%, kappa statistic 0.70). Classification accuracy achieved with the Hyperion bands was significantly higher than with the Landsat TM bands (overall accuracy 69.62%, kappa statistic 0.65). The study demonstrated the potential utility of the narrow spectral bands of Hyperion data in discriminating tree species in hilly terrain.

  15. SAR-based change detection using hypothesis testing and Markov random field modelling

    NASA Astrophysics Data System (ADS)

    Cao, W.; Martinis, S.

    2015-04-01

    The objective of this study is to automatically detect changed areas caused by natural disasters from bi-temporal co-registered and calibrated TerraSAR-X data. The technique in this paper consists of two steps: firstly, an automatic coarse detection step is applied based on a statistical hypothesis test to initialize the classification. The original analytical formula proposed in the constant false alarm rate (CFAR) edge detector is reviewed and rewritten in a compact form of the incomplete beta function, which is a built-in routine in commercial scientific software such as MATLAB and IDL. Secondly, a post-classification step is introduced to optimize the noisy classification result from the previous step. Generally, an optimization problem can be formulated as a Markov random field (MRF) on which the quality of a classification is measured by an energy function; the optimal classification under the MRF is the one with the lowest energy value. Previous studies provide methods for the optimization problem using MRFs, such as the iterated conditional modes (ICM) algorithm. Recently, a novel algorithm was presented based on graph-cut theory: this method transforms an MRF to an equivalent graph and solves the optimization problem by a max-flow/min-cut algorithm on the graph. In this study, the graph-cut algorithm is applied iteratively to improve the coarse classification. At each iteration, the parameters of the energy function for the current classification are set from the logarithmic probability density function (PDF), whose parameters are estimated by the method of logarithmic cumulants (MoLC). Experiments are performed using two flood events in Germany and Australia in 2011 and a forest fire on La Palma in 2009, using pre- and post-event TerraSAR-X data. The results show convincing coarse classifications and considerable improvement from the graph-cut post-classification step.
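
    To give the flavour of the coarse detection step, the sketch below thresholds a log-ratio change statistic at an empirical false-alarm quantile. Note that this substitutes an empirical quantile for the paper's analytic incomplete-beta CFAR formula, which we do not reproduce here.

```python
import numpy as np

def coarse_change_map(img_pre, img_post, pfa=0.01):
    """Flag pixels whose log-ratio statistic falls in the tail of the
    empirical distribution (a stand-in for the analytic CFAR test)."""
    lr = np.log((img_post + 1e-6) / (img_pre + 1e-6))
    thresh = np.quantile(np.abs(lr), 1.0 - pfa)  # empirical CFAR-style threshold
    return np.abs(lr) > thresh                   # initial (noisy) change mask
```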

  16. Comparison of World Health Organization and Asia-Pacific body mass index classifications in COPD patients.

    PubMed

    Lim, Jeong Uk; Lee, Jae Ha; Kim, Ju Sang; Hwang, Yong Il; Kim, Tae-Hyung; Lim, Seong Yong; Yoo, Kwang Ha; Jung, Ki-Suck; Kim, Young Kyoon; Rhee, Chin Kook

    2017-01-01

    A low body mass index (BMI) is associated with increased mortality and low health-related quality of life in patients with COPD. The Asia-Pacific classification of BMI has a lower cutoff for overweight and obese categories compared to the World Health Organization (WHO) classification. The present study assessed patients with COPD among different BMI categories according to two BMI classification systems: WHO and Asia-Pacific. Patients with COPD aged 40 years or older from the Korean COPD Subtype Study cohort were selected for evaluation. We enrolled 1,462 patients. Medical history including age, sex, St George's Respiratory Questionnaire (SGRQ-C), the modified Medical Research Council (mMRC) dyspnea scale, and post-bronchodilator forced expiratory volume in 1 second (FEV1) were evaluated. Patients were categorized into different BMI groups according to the two BMI classification systems. FEV1 and the diffusing capacity of the lung for carbon monoxide (DLCO) percentage revealed an inverse "U"-shaped pattern as the BMI groups changed from underweight to obese when WHO cutoffs were applied. When Asia-Pacific cutoffs were applied, FEV1 and DLCO (%) exhibited a linearly ascending relationship as the BMI increased, and the percentage of patients in the overweight and obese groups linearly decreased with increasing severity of the Global Initiative for Chronic Obstructive Lung Disease criteria. From the underweight to the overweight groups, SGRQ-C and mMRC had a decreasing relationship in both the WHO and Asia-Pacific classifications. The prevalence of comorbidities in the different BMI groups showed similar trends in both BMI classification systems. The present study demonstrated that patients with COPD who have a high BMI have better pulmonary function and health-related quality of life and reduced dyspnea symptoms. Furthermore, the Asia-Pacific BMI classification more appropriately reflects the correlation of obesity and disease manifestation in Asian COPD patients than the WHO classification.

  17. A generalized parametric response mapping method for analysis of multi-parametric imaging: A feasibility study with application to glioblastoma.

    PubMed

    Lausch, Anthony; Yeung, Timothy Pok-Chi; Chen, Jeff; Law, Elton; Wang, Yong; Urbini, Benedetta; Donelli, Filippo; Manco, Luigi; Fainardi, Enrico; Lee, Ting-Yim; Wong, Eugene

    2017-11-01

    Parametric response map (PRM) analysis of functional imaging has been shown to be an effective tool for early prediction of cancer treatment outcomes and may also be well-suited toward guiding personalized adaptive radiotherapy (RT) strategies such as sub-volume boosting. However, the PRM method was primarily designed for analysis of longitudinally acquired pairs of single-parameter image data. The purpose of this study was to demonstrate the feasibility of a generalized parametric response map analysis framework, which enables analysis of multi-parametric data while maintaining the key advantages of the original PRM method. MRI-derived apparent diffusion coefficient (ADC) and relative cerebral blood volume (rCBV) maps acquired at 1 and 3-months post-RT for 19 patients with high-grade glioma were used to demonstrate the algorithm. Images were first co-registered and then standardized using normal tissue image intensity values. Tumor voxels were then plotted in a four-dimensional Cartesian space with coordinate values equal to a voxel's image intensity in each of the image volumes and an origin defined as the multi-parametric mean of normal tissue image intensity values. Voxel positions were orthogonally projected onto a line defined by the origin and a pre-determined response vector. The voxels are subsequently classified as positive, negative or nil, according to whether projected positions along the response vector exceeded a threshold distance from the origin. The response vector was selected by identifying the direction in which the standard deviation of tumor image intensity values was maximally different between responding and non-responding patients within a training dataset. Voxel classifications were visualized via familiar three-class response maps and then the fraction of tumor voxels associated with each of the classes was investigated for predictive utility analogous to the original PRM method. Independent PRM and MPRM analyses of the contrast-enhancing lesion (CEL) and a 1 cm shell of surrounding peri-tumoral tissue were performed. Prediction using tumor volume metrics was also investigated. Leave-one-out cross validation (LOOCV) was used in combination with permutation testing to assess preliminary predictive efficacy and estimate statistically robust P-values. The predictive endpoint was overall survival (OS) greater than or equal to the median OS of 18.2 months. Single-parameter PRM and multi-parametric response maps (MPRMs) were generated for each patient and used to predict OS via the LOOCV. Tumor volume metrics (P ≥ 0.071 ± 0.01) and single-parameter PRM analyses (P ≥ 0.170 ± 0.01) were not found to be predictive of OS within this study. MPRM analysis of the peri-tumoral region but not the CEL was found to be predictive of OS with a classification sensitivity, specificity and accuracy of 80%, 100%, and 89%, respectively (P = 0.001 ± 0.01). The feasibility of a generalized MPRM analysis framework was demonstrated with improved prediction of overall survival compared to the original single-parameter method when applied to a glioblastoma dataset. The proposed algorithm takes the spatial heterogeneity in multi-parametric response into consideration and enables visualization. MPRM analysis of peri-tumoral regions was shown to have predictive potential supporting further investigation of a larger glioblastoma dataset. © 2017 American Association of Physicists in Medicine.
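
    The voxel classification at the heart of the method reduces to a projection onto the response vector. A minimal sketch, assuming standardized per-voxel intensity vectors, a normal-tissue origin, and a pre-chosen threshold tau (all placeholders on our part):

```python
import numpy as np

def mprm_classify(X, origin, response_vec, tau):
    """X: (n_voxels, n_params) standardized intensities, e.g. ADC/rCBV maps.
    Classify each voxel as positive (+1), negative (-1), or nil (0) by its
    signed projected distance from the origin along the response direction."""
    v = np.asarray(response_vec, dtype=float)
    v = v / np.linalg.norm(v)
    proj = (X - origin) @ v        # orthogonal projection onto the response line
    labels = np.zeros(len(proj), dtype=int)
    labels[proj > tau] = 1
    labels[proj < -tau] = -1
    return labels
```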

  18. [Management of chemical products and European standards: new classification criteria according to the 1272/2008 (CLP) regulation].

    PubMed

    Fanghella, Paola Di Prospero; Aliberti, Ludovica Malaguti

    2013-01-01

    The European Union adopted Regulations (EC) 1907/2006 (REACH) and (EC) 1272/2008 (CLP) to manage chemicals. REACH requires the evaluation and management of risks connected to the use of chemical substances, while CLP provides for the classification, labelling, and packaging of dangerous substances and mixtures by implementing in the EU the UN Globally Harmonised System of Classification and Labelling. CLP applies the building-block approach, that is, it takes on board the hazard classes and categories closest to the existing EU system in order to maintain the level of protection of human health and the environment. The regulation also provides for the notification of the classification and labelling of substances to the Classification & Labelling Inventory established by the European Chemicals Agency (ECHA). Some European downstream regulations that reference the classification criteria, such as workplace health and safety laws, need to be adapted to these regulations.

  19. Hierarchical structure for audio-video based semantic classification of sports video sequences

    NASA Astrophysics Data System (ADS)

    Kolekar, M. H.; Sengupta, S.

    2005-07-01

    A hierarchical structure for sports event classification based on audio and video content analysis is proposed in this paper. Compared to the event classifications in other games, those of cricket are very challenging and yet unexplored. We have successfully solved cricket video classification problem using a six level hierarchical structure. The first level performs event detection based on audio energy and Zero Crossing Rate (ZCR) of short-time audio signal. In the subsequent levels, we classify the events based on video features using a Hidden Markov Model implemented through Dynamic Programming (HMM-DP) using color or motion as a likelihood function. For some of the game-specific decisions, a rule-based classification is also performed. Our proposed hierarchical structure can easily be applied to any other sports. Our results are very promising and we have moved a step forward towards addressing semantic classification problems in general.

  20. Significance of clustering and classification applications in digital and physical libraries

    NASA Astrophysics Data System (ADS)

    Triantafyllou, Ioannis; Koulouris, Alexandros; Zervos, Spiros; Dendrinos, Markos; Giannakopoulos, Georgios

    2015-02-01

    Applications of clustering and classification techniques can prove very significant in both digital and physical (paper-based) libraries. The most essential application, document classification and clustering, is crucial for the content that is produced and maintained in digital libraries, repositories, databases, social media, blogs, etc., based on various tags and ontology elements, transcending the traditional library-oriented classification schemes. Other applications with a very useful and beneficial role in the new digital library environment involve document routing, summarization, and query expansion. Paper-based libraries can benefit as well, since classification combined with advanced material characterization techniques such as FTIR (Fourier Transform InfraRed spectroscopy) can be vital for the study and prevention of material deterioration. An improved two-level self-organizing clustering architecture is proposed in order to enhance the discrimination capacity of the learning space prior to classification, yielding promising results when applied to the above mentioned library tasks.

  1. Behavior Based Social Dimensions Extraction for Multi-Label Classification

    PubMed Central

    Li, Le; Xu, Junyi; Xiao, Weidong; Ge, Bin

    2016-01-01

    Classification based on social dimensions is commonly used to handle the multi-label classification task in heterogeneous networks. However, traditional methods, which mostly rely on the community detection algorithms to extract the latent social dimensions, produce unsatisfactory performance when community detection algorithms fail. In this paper, we propose a novel behavior based social dimensions extraction method to improve the classification performance in multi-label heterogeneous networks. In our method, nodes’ behavior features, instead of community memberships, are used to extract social dimensions. By introducing Latent Dirichlet Allocation (LDA) to model the network generation process, nodes’ connection behaviors with different communities can be extracted accurately, which are applied as latent social dimensions for classification. Experiments on various public datasets reveal that the proposed method can obtain satisfactory classification results in comparison to other state-of-the-art methods on smaller social dimensions. PMID:27049849

  2. Desert plains classification based on Geomorphometrical parameters (Case study: Aghda, Yazd)

    NASA Astrophysics Data System (ADS)

    Tazeh, mahdi; Kalantari, Saeideh

    2013-04-01

    This research focuses on plains. Several methods and classification schemes have been presented for plain classification. One natural-resource-based classification, widely used in Iran, divides plains into three types: erosional pediment, denudational pediment, and aggradational piedmont, with qualitative and quantitative factors used to differentiate them from each other. In this study, geomorphometrical parameters effective in differentiating landforms were applied to plains. Geomorphometrical parameters are calculable and can be extracted using mathematical equations and the corresponding relations on a digital elevation model. The geomorphometrical parameters used in this study included percent of slope, plan curvature, profile curvature, minimum curvature, maximum curvature, cross-sectional curvature, longitudinal curvature, and Gaussian curvature. The results indicated that the most important geomorphometrical parameters for plain and desert classification include percent of slope, minimum curvature, profile curvature, and longitudinal curvature. Key Words: Plain, Geomorphometry, Classification, Biophysical, Yazd Khezarabad.

  3. An open data mining framework for the analysis of medical images: application on obstructive nephropathy microscopy images.

    PubMed

    Doukas, Charalampos; Goudas, Theodosis; Fischer, Simon; Mierswa, Ingo; Chatziioannou, Aristotle; Maglogiannis, Ilias

    2010-01-01

    This paper presents an open image-mining framework that provides access to tools and methods for the characterization of medical images. Several image processing and feature extraction operators have been implemented and exposed through Web Services. RapidMiner, an open source data mining system, has been utilized for applying classification operators and creating the essential processing workflows. The proposed framework has been applied to the detection of salient objects in obstructive nephropathy microscopy images. Initial classification results are quite promising, demonstrating the feasibility of automated characterization of kidney biopsy images.

  4. Derivation of an artificial gene to improve classification accuracy upon gene selection.

    PubMed

    Seo, Minseok; Oh, Sejong

    2012-02-01

    Classification analysis has been developed continuously since 1936. This research field has advanced as a result of the development of classifiers such as KNN, ANN, and SVM, as well as through advances in data preprocessing. Feature (gene) selection is required before classification for very high dimensional data such as microarray data. The goal of feature selection is to choose a subset of informative features that reduces processing time and provides higher classification accuracy. In this study, we devised a method of artificial gene making (AGM) for microarray data to improve classification accuracy. Our artificial gene was derived from a whole microarray dataset and combined with the result of gene selection for classification analysis. We experimentally confirmed a clear improvement in classification accuracy after inserting the artificial gene. Our artificial gene worked well with popular feature (gene) selection algorithms and classifiers. The proposed approach can be applied to any type of high dimensional dataset. Copyright © 2011 Elsevier Ltd. All rights reserved.

  5. Molecular approaches for classifying endometrial carcinoma.

    PubMed

    Piulats, Josep M; Guerra, Esther; Gil-Martín, Marta; Roman-Canal, Berta; Gatius, Sonia; Sanz-Pamplona, Rebeca; Velasco, Ana; Vidal, August; Matias-Guiu, Xavier

    2017-04-01

    Endometrial carcinoma is the most common cancer of the female genital tract. This review article discusses the usefulness of molecular techniques to classify endometrial carcinoma. Any proposal for molecular classification of neoplasms should integrate the morphological features of the tumors. For that reason, we start with the current histological classification of endometrial carcinoma, discussing the correlation between genotype and phenotype and the most significant recent improvements. Then, we comment on some possible flaws of this classification, including interobserver variation in the pathologic interpretation of high-grade tumors, and discuss the value of molecular pathology in improving it. Third, we discuss the importance of applying the TCGA molecular approach to clinical practice. We also comment on the impact of intratumor heterogeneity on classification and, finally, discuss briefly the usefulness of the TCGA classification in tailoring immunotherapy for endometrial cancer patients. We suggest combining the pathologic classification and the surrogate TCGA molecular classification for high-grade endometrial carcinomas, as an option to improve assessment of prognosis. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Pattern classification of kinematic and kinetic running data to distinguish gender, shod/barefoot and injury groups with feature ranking.

    PubMed

    Eskofier, Bjoern M; Kraus, Martin; Worobets, Jay T; Stefanyshyn, Darren J; Nigg, Benno M

    2012-01-01

    The identification of differences between groups is often important in biomechanics. This paper presents group classification tasks using kinetic and kinematic data from a prospective running injury study. Groups composed of gender, of shod/barefoot running, and of runners who developed patellofemoral pain syndrome (PFPS) during the study versus asymptomatic runners were classified. The features computed from the biomechanical data were deliberately chosen to be generic; they were therefore suited to different biomechanical measurements and classification tasks without adaptation to the input signals. Feature ranking was applied to reveal the relevance of each feature to the classification task. Data from 80 runners were analysed for gender and shod/barefoot classification, while 12 runners were investigated in the injury classification task. Gender groups could be differentiated with an 84.7% classification rate, shod/barefoot running with 98.3%, and PFPS with 100%. For the latter group, a single variable could be identified that alone allowed discrimination.

  7. 3D Object Classification Based on Thermal and Visible Imagery in Urban Area

    NASA Astrophysics Data System (ADS)

    Hasani, H.; Samadzadegan, F.

    2015-12-01

    The spatial distribution of land cover in urban areas, especially of 3D objects (buildings and trees), is a fundamental dataset for urban planning, ecological research, disaster management, etc. Owing to recent advances in sensor technologies, several types of remotely sensed data are available for the same area, and data fusion has been widely investigated for integrating different sources of data in the classification of urban areas. Thermal infrared imagery (TIR) contains information on emitted radiation and has unique radiometric properties; however, due to the coarse spatial resolution of thermal data, its application has been restricted in urban areas. On the other hand, visible imagery (VIS) has high spatial resolution and information in the visible spectrum. Consequently, there is a complementary relationship between thermal and visible imagery in the classification of urban areas. This paper evaluates the potential of fusing aerial thermal hyperspectral and visible imagery for classification of an urban area. In the pre-processing step, the thermal imagery is resampled to the spatial resolution of the visible image. Feature-level fusion is then applied to construct a hybrid feature space including visible bands, thermal hyperspectral bands, and spatial and texture features; moreover, a Principal Component Analysis (PCA) transformation is applied to extract principal components. Due to the high dimensionality of the feature space, dimension reduction is performed. Finally, Support Vector Machines (SVMs) classify the reduced hybrid feature space. The obtained results show that using thermal imagery along with visible imagery improved the classification accuracy by up to 8% with respect to classification of the visible image alone.

  8. Applying Classification Trees to Hospital Administrative Data to Identify Patients with Lower Gastrointestinal Bleeding

    PubMed Central

    Siddique, Juned; Ruhnke, Gregory W.; Flores, Andrea; Prochaska, Micah T.; Paesch, Elizabeth; Meltzer, David O.; Whelan, Chad T.

    2015-01-01

    Background Lower gastrointestinal bleeding (LGIB) is a common cause of acute hospitalization. Currently, there is no accepted standard for identifying patients with LGIB in hospital administrative data. The objective of this study was to develop and validate a set of classification algorithms that use hospital administrative data to identify LGIB. Methods Our sample consists of patients admitted between July 1, 2001 and June 30, 2003 (derivation cohort) and July 1, 2003 and June 30, 2005 (validation cohort) to the general medicine inpatient service of the University of Chicago Hospital, a large urban academic medical center. Confirmed cases of LGIB in both cohorts were determined by reviewing the charts of those patients who had at least 1 of 36 principal or secondary International Classification of Diseases, Ninth revision, Clinical Modification (ICD-9-CM) diagnosis codes associated with LGIB. Classification trees were used on the data of the derivation cohort to develop a set of decision rules for identifying patients with LGIB. These rules were then applied to the validation cohort to assess their performance. Results Three classification algorithms were identified and validated: a high specificity rule with 80.1% sensitivity and 95.8% specificity, a rule that balances sensitivity and specificity (87.8% sensitivity, 90.9% specificity), and a high sensitivity rule with 100% sensitivity and 91.0% specificity. Conclusion These classification algorithms can be used in future studies to evaluate resource utilization and assess outcomes associated with LGIB without the use of chart review. PMID:26406318
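
    The derivation/validation workflow can be sketched with a shallow decision tree over binary diagnosis-code indicators; the simulated codes and stand-in labels below are invented for illustration, not the study's ICD-9-CM list.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical binary indicators, one column per candidate diagnosis code.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 10))
y = (X[:, 0] | (X[:, 1] & X[:, 2])).astype(int)  # stand-in chart-review labels

X_dev, X_val, y_dev, y_val = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=3).fit(X_dev, y_dev)  # derivation cohort
print(export_text(tree))                                      # readable decision rules
print("validation accuracy:", tree.score(X_val, y_val))       # validation cohort
```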

  9. Deep Recurrent Neural Networks for Supernovae Classification

    NASA Astrophysics Data System (ADS)

    Charnock, Tom; Moss, Adam

    2017-03-01

    We apply deep recurrent neural networks, which are capable of learning complex sequential information, to classify supernovae (code available at https://github.com/adammoss/supernovae). The observational time and filter fluxes are used as inputs to the network, but since the inputs are agnostic, additional data such as host galaxy information can also be included. Using the Supernovae Photometric Classification Challenge (SPCC) data, we find that deep networks are capable of learning about light curves, however the performance of the network is highly sensitive to the amount of training data. For a training size of 50% of the representational SPCC data set (around 10^4 supernovae) we obtain a type-Ia versus non-type-Ia classification accuracy of 94.7%, an area under the Receiver Operating Characteristic curve AUC of 0.986 and an SPCC figure-of-merit F1 = 0.64. When using only the data for the early-epoch challenge defined by the SPCC, we achieve a classification accuracy of 93.1%, AUC of 0.977, and F1 = 0.58, results almost as good as with the whole light curve. By employing bidirectional neural networks, we can acquire impressive classification results between supernovae types I, II and III at an accuracy of 90.4% and AUC of 0.974. We also apply a pre-trained model to obtain classification probabilities as a function of time and show that it can give early indications of supernovae type. Our method is competitive with existing algorithms and has applications for future large-scale photometric surveys.
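
    A minimal sketch of a recurrent light-curve classifier in this spirit (the authors' released code is at the GitHub link above; this is not it): each time step carries the observation time and filter fluxes, and the final hidden state of a bidirectional LSTM feeds a classification head.

        # Toy bidirectional LSTM over light-curve sequences; sizes are assumptions.
        import torch
        import torch.nn as nn

        class LightCurveRNN(nn.Module):
            def __init__(self, n_features=5, hidden=16, n_classes=2):
                super().__init__()
                self.rnn = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
                self.head = nn.Linear(2 * hidden, n_classes)

            def forward(self, x):                 # x: (batch, time, features)
                out, _ = self.rnn(x)
                return self.head(out[:, -1, :])   # classify from the last time step

        model = LightCurveRNN()
        batch = torch.randn(8, 40, 5)             # 8 light curves, 40 epochs, time + 4 filter fluxes
        logits = model(batch)
        loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 2, (8,)))
        loss.backward()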

  10. A Comparison Between a Synthetic Over-Sampling Equilibrium and Observed Subsets of Data for Epidemic Vector Classification

    NASA Astrophysics Data System (ADS)

    Fusco, Terence; Bi, Yaxin; Nugent, Chris; Wu, Shengli

    2016-08-01

    We can see that the data imputation approach using the Regression CTA has performed more favourably when compared with the alternative methods on this dataset. We now have the evidence to show that this method is viable for further research in this area. The weighted distribution experiments have provided us with a more balanced and appropriate ratio for snail density (SD) classification purposes when using either the 3- or 5-category combination. The most desirable results are found when using 3 categories of SD with a weighted class distribution of 20-60-20. This reflects the optimum classification accuracy across the data range and can be applied to any novel environment-feature dataset pertaining to schistosomiasis vector classification. ITSVM has provided us with a method of labelling SD data which we can use for classification in epidemic disease prediction research. The confidence level selection enables consistent labelling accuracy for bespoke requirements when classifying the data from each year. The proposed SMOTE Equilibrium method has yielded a slight increase with each multiple of synthetic instances compounded onto the training dataset. The reduction of overfitting and the increase in data instances have shown a gradual rise in classification accuracy across the data for each year. We will now test which incremental increase in synthetic instances is optimal across our data and apply it in further experiments in this research.
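
    The SMOTE step can be sketched as follows: synthetic minority instances are compounded onto the training set in increasing multiples and accuracy is re-checked after each increment. The features, class ratio, and classifier are illustrative assumptions, not the study's data.

        # Sketch of incremental SMOTE oversampling with accuracy re-checked per step.
        import numpy as np
        from imblearn.over_sampling import SMOTE
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        rng = np.random.default_rng(2)
        X = rng.random((300, 6))                            # environment features (synthetic)
        y = np.r_[np.zeros(240), np.ones(60)].astype(int)   # imbalanced density classes

        for mult in (2, 3, 4):                              # grow the minority class stepwise
            n_min = 60 * mult
            Xr, yr = SMOTE(sampling_strategy={1: n_min}, random_state=0).fit_resample(X, y)
            acc = cross_val_score(SVC(), Xr, yr, cv=5).mean()
            print(f"minority size {n_min}: CV accuracy {acc:.3f}")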

  11. Fast-HPLC Fingerprinting to Discriminate Olive Oil from Other Edible Vegetable Oils by Multivariate Classification Methods.

    PubMed

    Jiménez-Carvelo, Ana M; González-Casado, Antonio; Pérez-Castaño, Estefanía; Cuadros-Rodríguez, Luis

    2017-03-01

    A new analytical method for the differentiation of olive oil from other vegetable oils using reversed-phase LC and applying chemometric techniques was developed. A 3 cm short column was used to obtain the chromatographic fingerprint of the methyl-transesterified fraction of each vegetable oil. The chromatographic analysis took only 4 min. The multivariate classification methods used were k-nearest neighbors, partial least-squares (PLS) discriminant analysis, one-class PLS, support vector machine classification, and soft independent modeling of class analogies. The discrimination of olive oil from other vegetable edible oils was evaluated by several classification quality metrics. Several strategies for the classification of the olive oil were used: one input-class, two input-class, and pseudo two input-class.

  12. Identification of terrain cover using the optimum polarimetric classifier

    NASA Technical Reports Server (NTRS)

    Kong, J. A.; Swartz, A. A.; Yueh, H. A.; Novak, L. M.; Shin, R. T.

    1988-01-01

    A systematic approach for the identification of terrain media such as vegetation canopy, forest, and snow-covered fields is developed using the optimum polarimetric classifier. The covariance matrices for various terrain cover are computed from theoretical models of random medium by evaluating the scattering matrix elements. The optimal classification scheme makes use of a quadratic distance measure and is applied to classify a vegetation canopy consisting of both trees and grass. Experimentally measured data are used to validate the classification scheme. Analytical and Monte Carlo simulated classification errors using the fully polarimetric feature vector are compared with classification based on single features which include the phase difference between the VV and HH polarization returns. It is shown that the full polarimetric results are optimal and provide better classification performance than single feature measurements.
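
    The quadratic distance measure can be sketched directly: under a zero-mean complex Gaussian model each terrain class is summarized by a polarimetric covariance matrix C_k, and a measured feature vector X = (HH, HV, VV) is assigned to the class minimizing d_k = ln|C_k| + X^H C_k^{-1} X. The covariances below are random stand-ins rather than the paper's random-medium models.

        # Minimal quadratic-distance polarimetric classifier (synthetic covariances).
        import numpy as np

        def quadratic_distance(x, C):
            return np.log(np.abs(np.linalg.det(C))) + np.real(x.conj() @ np.linalg.solve(C, x))

        rng = np.random.default_rng(3)
        def random_cov(n=3):                       # Hermitian positive-definite stand-in
            A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
            return A @ A.conj().T + n * np.eye(n)

        C_tree, C_grass = random_cov(), random_cov()       # per-class covariances (from models or training)
        x = rng.normal(size=3) + 1j * rng.normal(size=3)   # measured (HH, HV, VV) sample

        d = {"tree": quadratic_distance(x, C_tree), "grass": quadratic_distance(x, C_grass)}
        print(min(d, key=d.get))                   # assign to the minimum-distance class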

  13. Classification of subsurface objects using singular values derived from signal frames

    DOEpatents

    Chambers, David H; Paglieroni, David W

    2014-05-06

    The classification system represents a detected object with a feature vector derived from the return signals acquired by an array of N transceivers operating in multistatic mode. The classification system generates the feature vector by transforming the real-valued return signals into complex-valued spectra, using, for example, a Fast Fourier Transform. The classification system then generates a feature vector of singular values for each user-designated spectral sub-band by applying a singular value decomposition (SVD) to the N×N square complex-valued matrix formed from sub-band samples associated with all possible transmitter-receiver pairs. The resulting feature vector of singular values may be transformed into a feature vector of singular value likelihoods and then subjected to a multi-category linear or neural network classifier for object classification.
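
    A hedged sketch of the feature construction this patent abstract describes: FFT each transceiver pair's return, form the N×N complex matrix of sub-band samples over all transmitter-receiver pairs, and keep its singular values as the feature vector. The signal content and sub-band choice are invented.

        # Singular values of a spectral sub-band matrix as an object feature vector.
        import numpy as np

        N, T = 4, 256                                  # N transceivers, T time samples
        rng = np.random.default_rng(4)
        returns = rng.normal(size=(N, N, T))           # return signal for each tx-rx pair

        spectra = np.fft.rfft(returns, axis=-1)        # real-valued signals -> complex spectra
        band = slice(10, 11)                           # one user-designated spectral sub-band
        M = spectra[:, :, band].mean(axis=-1)          # N x N complex matrix for that sub-band

        feature_vector = np.linalg.svd(M, compute_uv=False)  # singular values, largest first
        print(feature_vector)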

  14. Weakly supervised classification in high energy physics

    DOE PAGES

    Dery, Lucio Mwinmaarong; Nachman, Benjamin; Rubbo, Francesco; ...

    2017-05-01

    As machine learning algorithms become increasingly sophisticated to exploit subtle features of the data, they often become more dependent on simulations. Here, this paper presents a new approach called weakly supervised classification in which class proportions are the only input into the machine learning algorithm. Using one of the most challenging binary classification tasks in high energy physics, quark versus gluon tagging, we show that weakly supervised classification can match the performance of fully supervised algorithms. Furthermore, by design, the new algorithm is insensitive to any mis-modeling of discriminating features in the data by the simulation. Weakly supervised classification is a general procedure that can be applied to a wide variety of learning problems to boost performance and robustness when detailed simulations are not reliable or not available.
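
    The idea can be sketched as learning from label proportions: the loss compares the average prediction over a batch with that batch's known class proportion, so no per-example labels are ever used. The network, features, and proportions below are toy assumptions, not the paper's setup.

        # Sketch of weak supervision from class proportions only.
        import torch
        import torch.nn as nn

        net = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
        opt = torch.optim.Adam(net.parameters(), lr=1e-2)

        batches = [(torch.randn(128, 3) + s, torch.tensor(p))   # (jet features, quark fraction)
                   for s, p in [(0.0, 0.2), (0.5, 0.8), (0.2, 0.5)]]

        for _ in range(200):
            for x, proportion in batches:
                pred_fraction = net(x).mean()                   # batch-level prediction
                loss = (pred_fraction - proportion) ** 2        # match the known proportion only
                opt.zero_grad(); loss.backward(); opt.step()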

  15. Weakly supervised classification in high energy physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dery, Lucio Mwinmaarong; Nachman, Benjamin; Rubbo, Francesco

    As machine learning algorithms become increasingly sophisticated to exploit subtle features of the data, they often become more dependent on simulations. Here, this paper presents a new approach called weakly supervised classification in which class proportions are the only input into the machine learning algorithm. Using one of the most challenging binary classification tasks in high energy physics, quark versus gluon tagging, we show that weakly supervised classification can match the performance of fully supervised algorithms. Furthermore, by design, the new algorithm is insensitive to any mis-modeling of discriminating features in the data by the simulation. Weakly supervised classification is a general procedure that can be applied to a wide variety of learning problems to boost performance and robustness when detailed simulations are not reliable or not available.

  16. Hyperspectral feature mapping classification based on mathematical morphology

    NASA Astrophysics Data System (ADS)

    Liu, Chang; Li, Junwei; Wang, Guangping; Wu, Jingli

    2016-03-01

    This paper proposes a hyperspectral feature mapping classification algorithm based on mathematical morphology. Without prior information such as a spectral library, the spectral and spatial information can be used to realize hyperspectral feature mapping classification. Mathematical morphological erosion and dilation operations are performed to extract endmembers. The spectral feature mapping algorithm is then used to carry out hyperspectral image classification. A hyperspectral image collected by AVIRIS is used to evaluate the proposed algorithm, which is compared with the minimum Euclidean distance mapping algorithm, the minimum Mahalanobis distance mapping algorithm, the SAM algorithm, and the binary encoding mapping algorithm. The experimental results show that, under the same conditions, the proposed algorithm performs better than the other algorithms and achieves higher classification accuracy.
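
    One of the comparison mappings named above, SAM, shows what a spectral feature mapping step looks like: each pixel spectrum is assigned to the extracted endmember with the smallest spectral angle. The endmembers and pixels below are random stand-ins, not AVIRIS data.

        # Spectral angle mapping of pixels to endmembers (synthetic spectra).
        import numpy as np

        def spectral_angle(a, b):
            cosang = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
            return np.arccos(np.clip(cosang, -1.0, 1.0))

        rng = np.random.default_rng(5)
        endmembers = rng.random((4, 200))        # 4 endmembers x 200 bands (e.g. from morphology)
        pixels = rng.random((1000, 200))         # flattened hyperspectral image

        angles = np.array([[spectral_angle(p, e) for e in endmembers] for p in pixels])
        classes = angles.argmin(axis=1)          # per-pixel class = nearest endmember in angle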

  17. Determining Representative Elementary Volume For Multiple Petrophysical Parameters using a Convex Hull Analysis of Digital Rock Data

    NASA Astrophysics Data System (ADS)

    Shah, S.; Gray, F.; Yang, J.; Crawshaw, J.; Boek, E.

    2016-12-01

    Advances in 3D pore-scale imaging and computational methods have allowed an exceptionally detailed quantitative and qualitative analysis of the fluid flow in complex porous media. A fundamental problem in pore-scale imaging and modelling is how to represent and model the range of scales encountered in porous media, starting from the smallest pore spaces. In this study, a novel method is presented for determining the representative elementary volume (REV) of a rock for several parameters simultaneously. We calculate the two main macroscopic petrophysical parameters, porosity and single-phase permeability, using micro CT imaging and Lattice Boltzmann (LB) simulations for 14 different porous media, including sandpacks, sandstones and carbonates. The concept of the 'Convex Hull' is then applied to calculate the REV for both parameters simultaneously using a plot of the area of the convex hull as a function of the sub-volume, capturing the different scales of heterogeneity from the pore-scale imaging. The results also show that the area of the convex hull (for well-chosen parameters such as the log of the permeability and the porosity) decays exponentially with sub-sample size, suggesting a computationally efficient way to determine the system size needed to calculate the parameters to high accuracy (small convex hull area). Finally, we propose using a characteristic length such as the pore size to choose an efficient absolute voxel size for the numerical rock.
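
    A hedged sketch of the convex-hull analysis: for each sub-volume size, scatter (porosity, log permeability) over many sub-samples and record the area of their 2D convex hull; the decay of this area with size indicates the REV. The samples below are synthetic surrogates for the LB simulation results.

        # Convex-hull area of (porosity, log permeability) scatter vs sub-volume size.
        import numpy as np
        from scipy.spatial import ConvexHull

        rng = np.random.default_rng(6)
        for size in (50, 100, 200, 400):                 # sub-volume edge length in voxels (assumed)
            spread = 1.0 / np.sqrt(size)                 # heterogeneity shrinks with sample size
            porosity = 0.2 + spread * rng.normal(size=30)
            log_perm = 2.0 + spread * rng.normal(size=30)
            hull = ConvexHull(np.column_stack([porosity, log_perm]))
            print(f"sub-volume {size}: convex hull area {hull.volume:.4f}")  # .volume is area in 2D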

  18. Centrifugal multiplexing fixed-volume dispenser on a plastic lab-on-a-disk for parallel biochemical single-end-point assays

    PubMed Central

    La, Moonwoo; Park, Sang Min; Kim, Dong Sung

    2015-01-01

    In this study, a multiple sample dispenser for precisely metered fixed volumes was successfully designed, fabricated, and fully characterized on a plastic centrifugal lab-on-a-disk (LOD) for parallel biochemical single-end-point assays. The dispenser, namely, a centrifugal multiplexing fixed-volume dispenser (C-MUFID), was designed with microfluidic structures based on theoretical modeling of a centrifugal circumferential filling flow. The designed LODs were fabricated with a polystyrene substrate through micromachining and were thermally bonded with a flat substrate. Furthermore, six parallel metering and dispensing assays were conducted at the same fixed volume (1.27 μl) with a relative variation of ±0.02 μl. Moreover, the samples were metered and dispensed at different sub-volumes. To visualize the metering and dispensing performances, the C-MUFID was integrated with a serpentine micromixer during parallel centrifugal mixing tests. Parallel biochemical single-end-point assays were successfully conducted on the developed LOD using a standard serum with albumin, glucose, and total protein reagents. The developed LOD could be widely applied to various biochemical single-end-point assays which require different volume ratios of the sample and reagent by controlling the design of the C-MUFID. The proposed LOD is feasible for point-of-care diagnostics because of its mass-producible structures, reliable metering/dispensing performance, and parallel biochemical single-end-point assays, which can identify numerous biochemicals. PMID:25610516

  19. Cloud cover determination in polar regions from satellite imagery

    NASA Technical Reports Server (NTRS)

    Barry, R. G.; Maslanik, J. A.; Key, J. R.

    1987-01-01

    The spectral and spatial characteristics of clouds and surface conditions in the polar regions are defined, and calibrated, geometrically correct data sets suitable for quantitative analysis are created. Ways are explored in which this information can be applied to cloud classifications as new methods or as extensions to existing classification schemes. A methodology is developed that uses automated techniques to merge Advanced Very High Resolution Radiometer (AVHRR) and Scanning Multichannel Microwave Radiometer (SMMR) data, and to apply first-order calibration and zenith angle corrections to the AVHRR imagery. Cloud cover and surface types are manually interpreted, and manual methods are used to define relatively pure training areas to describe the textural and multispectral characteristics of clouds over several surface conditions. The effects of viewing angle and bidirectional reflectance differences are studied for several classes, and the effectiveness of some key components of existing classification schemes is tested.

  20. Unsupervised Feature Learning for Heart Sounds Classification Using Autoencoder

    NASA Astrophysics Data System (ADS)

    Hu, Wei; Lv, Jiancheng; Liu, Dongbo; Chen, Yao

    2018-04-01

    Cardiovascular disease seriously threatens the health of many people. It is usually diagnosed during cardiac auscultation, which is a fast and efficient method of cardiovascular disease diagnosis. In recent years, deep learning approaches using unsupervised learning have made significant breakthroughs in many fields. However, to our knowledge, deep learning has not yet been used for heart sound classification. In this paper, we first use the average Shannon energy to extract the envelope of the heart sounds, then find the highest point of S1 to extract the cardiac cycle. We convert the time-domain signals of the cardiac cycle into spectrograms and apply principal component analysis whitening to reduce the dimensionality of the spectrogram. Finally, we apply a two-layer autoencoder to extract the features of the spectrogram. The experimental results demonstrate that the features from the autoencoder are suitable for heart sound classification.
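
    The first preprocessing step, the average Shannon energy envelope, is simple to sketch over short frames; the toy signal, frame length, and hop size below are assumptions rather than the authors' settings.

        # Average Shannon energy envelope of a (synthetic) heart-sound signal.
        import numpy as np

        def shannon_envelope(x, frame=512, hop=256, eps=1e-12):
            x = x / (np.max(np.abs(x)) + eps)                  # normalize amplitude
            env = []
            for start in range(0, len(x) - frame, hop):
                seg = x[start:start + frame] ** 2
                env.append(-np.mean(seg * np.log(seg + eps)))  # average Shannon energy per frame
            env = np.array(env)
            return (env - env.mean()) / (env.std() + eps)      # zero-mean, unit-variance envelope

        fs = 2000
        t = np.arange(0, 3.0, 1 / fs)
        signal = np.sin(2 * np.pi * 30 * t) * (np.sin(2 * np.pi * 1.2 * t) > 0.9)  # toy S1-like bursts
        print(shannon_envelope(signal)[:10])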

  1. Measuring CAMD technique performance. 2. How "druglike" are drugs? Implications of Random test set selection exemplified using druglikeness classification models.

    PubMed

    Good, Andrew C; Hermsmeier, Mark A

    2007-01-01

    Research into the advancement of computer-aided molecular design (CAMD) has a tendency to focus on algorithm development, often to the detriment of the data set selection and analysis used in algorithm validation. Here we highlight the potential problems this can cause in the context of druglikeness classification. More rigorous efforts are applied to the selection of decoy (nondruglike) molecules from the ACD. Comparisons are made between model performance using the standard technique of random test set creation and test sets derived from explicit ontological separation by drug class. The dangers of viewing druglike space as sufficiently coherent to permit simple classification are highlighted. In addition, the issues inherent in applying unfiltered data and random test set selection to (Q)SAR models utilizing large and supposedly heterogeneous databases are discussed.

  2. Classification of weld defect based on information fusion technology for radiographic testing system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Hongquan; Liang, Zeming, E-mail: heavenlzm@126.com; Gao, Jianmin

    Improving the efficiency and accuracy of weld defect classification is an important technical problem in developing the radiographic testing system. This paper proposes a novel weld defect classification method based on information fusion technology, Dempster–Shafer evidence theory. First, to characterize weld defects and improve the accuracy of their classification, 11 weld defect features were defined based on the sub-pixel level edges of radiographic images, four of which are presented for the first time in this paper. Second, we applied information fusion technology to combine different features for weld defect classification, including a mass function defined based on the weld defect feature information and the quartile-method-based calculation of standard weld defect class which is to solve a sample problem involving a limited number of training samples. A steam turbine weld defect classification case study is also presented herein to illustrate our technique. The results show that the proposed method can increase the correct classification rate with limited training samples and address the uncertainties associated with weld defect classification.
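
    The fusion step rests on Dempster's rule of combination, which can be sketched in a few lines: two mass functions over (hypothetical) weld defect classes are combined and the conflicting mass is normalized out. The masses below are invented for illustration.

        # Dempster's rule of combination over focal elements (frozensets of classes).
        from itertools import product

        def dempster_combine(m1, m2):
            combined, conflict = {}, 0.0
            for (a, w1), (b, w2) in product(m1.items(), m2.items()):
                inter = a & b                       # intersection of focal elements
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + w1 * w2
                else:
                    conflict += w1 * w2             # mass falling on the empty set
            return {k: v / (1.0 - conflict) for k, v in combined.items()}

        crack, pore, slag = frozenset({"crack"}), frozenset({"pore"}), frozenset({"slag"})
        m_feat1 = {crack: 0.6, pore: 0.3, crack | pore | slag: 0.1}  # evidence from feature 1
        m_feat2 = {crack: 0.5, slag: 0.3, crack | pore | slag: 0.2}  # evidence from feature 2
        print(dempster_combine(m_feat1, m_feat2))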

  3. Classification of weld defect based on information fusion technology for radiographic testing system.

    PubMed

    Jiang, Hongquan; Liang, Zeming; Gao, Jianmin; Dang, Changying

    2016-03-01

    Improving the efficiency and accuracy of weld defect classification is an important technical problem in developing the radiographic testing system. This paper proposes a novel weld defect classification method based on information fusion technology, Dempster-Shafer evidence theory. First, to characterize weld defects and improve the accuracy of their classification, 11 weld defect features were defined based on the sub-pixel level edges of radiographic images, four of which are presented for the first time in this paper. Second, we applied information fusion technology to combine different features for weld defect classification, including a mass function defined based on the weld defect feature information and the quartile-method-based calculation of standard weld defect class which is to solve a sample problem involving a limited number of training samples. A steam turbine weld defect classification case study is also presented herein to illustrate our technique. The results show that the proposed method can increase the correct classification rate with limited training samples and address the uncertainties associated with weld defect classification.

  4. Applying the Methodology of the Community College Classification Scheme to the Public Master's Colleges and Universities Sector

    ERIC Educational Resources Information Center

    Kinkead, J. Clint.; Katsinas, Stephen G.

    2011-01-01

    This work brings forward the geographically-based classification scheme for the public Master's Colleges and Universities sector. Using the same methodology developed by Katsinas and Hardy (2005) to classify community colleges, this work classifies Master's Colleges and Universities. This work has four major findings and conclusions. First, a…

  5. Combining Unsupervised and Supervised Classification to Build User Models for Exploratory Learning Environments

    ERIC Educational Resources Information Center

    Amershi, Saleema; Conati, Cristina

    2009-01-01

    In this paper, we present a data-based user modeling framework that uses both unsupervised and supervised classification to build student models for exploratory learning environments. We apply the framework to build student models for two different learning environments and using two different data sources (logged interface and eye-tracking data).…

  6. 75 FR 5101 - Agency Information Collection Activities: Form I-590, Extension of a Currently Approved...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-01

    ... Classification as Refugee; OMB Control No. 1615- 0068. The Department of Homeland Security, U.S. Citizenship and... information collection. (2) Title of the Form/Collection: Registration for Classification as Refugee. (3... Households. Form I- 590 provides a uniform method for applicants to apply for refugee status and contains the...

  7. An Introduction to Recursive Partitioning: Rationale, Application, and Characteristics of Classification and Regression Trees, Bagging, and Random Forests

    ERIC Educational Resources Information Center

    Strobl, Carolin; Malley, James; Tutz, Gerhard

    2009-01-01

    Recursive partitioning methods have become popular and widely used tools for nonparametric regression and classification in many scientific fields. Especially random forests, which can deal with large numbers of predictor variables even in the presence of complex interactions, have been applied successfully in genetics, clinical medicine, and…

  8. A new machine classification method applied to human peripheral blood leukocytes

    NASA Technical Reports Server (NTRS)

    Rorvig, Mark E.; Fitzpatrick, Steven J.; Vitthal, Sanjay; Ladoulis, Charles T.

    1994-01-01

    Human beings judge images by complex mental processes, whereas computing machines extract features. By reducing scaled human judgments and machine extracted features to a common metric space and fitting them by regression, the judgments of human experts rendered on a sample of images may be imposed on an image population to provide automatic classification.

  9. 75 FR 45052 - The Jurisdictional Scope of Commodity Classification Determinations and Advisory Opinions Issued...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-02

    ... the authority to issue determinations about the ECCN that applies to an item. Because BIS assigns all... Control Classification Number (ECCN) down to the paragraph (or subparagraph) level, if appropriate. BIS... described by an ECCN in the Commerce Control List (CCL) in Supplement No. 1 to Part 774 of the EAR or not...

  10. Classification accuracy for stratification with remotely sensed data

    Treesearch

    Raymond L. Czaplewski; Paul L. Patterson

    2003-01-01

    Tools are developed that help specify the classification accuracy required from remotely sensed data. These tools are applied during the planning stage of a sample survey that will use poststratification, prestratification with proportional allocation, or double sampling for stratification. Accuracy standards are developed in terms of an “error matrix,” which is...

  11. Exploring diversity in ensemble classification: Applications in large area land cover mapping

    NASA Astrophysics Data System (ADS)

    Mellor, Andrew; Boukir, Samia

    2017-07-01

    Ensemble classifiers, such as random forests, are now commonly applied in the field of remote sensing, and have been shown to perform better than single classifier systems, resulting in reduced generalisation error. Diversity across the members of ensemble classifiers is known to have a strong influence on classification performance - whereby classifier errors are uncorrelated and more uniformly distributed across ensemble members. The relationship between ensemble diversity and classification performance has not yet been fully explored in the fields of information science and machine learning and has never been examined in the field of remote sensing. This study is a novel exploration of ensemble diversity and its link to classification performance, applied to a multi-class canopy cover classification problem using random forests and multisource remote sensing and ancillary GIS data, across seven million hectares of diverse dry-sclerophyll dominated public forests in Victoria, Australia. A particular emphasis is placed on analysing the relationship between ensemble diversity and ensemble margin - two key concepts in ensemble learning. The main novelty of our work is on boosting diversity by emphasizing the contribution of lower margin instances used in the learning process. Exploring the influence of tree pruning on diversity is also a new empirical analysis that contributes to a better understanding of ensemble performance. Results reveal insights into the trade-off between ensemble classification accuracy and diversity, and through the ensemble margin, demonstrate how inducing diversity by targeting lower margin training samples is a means of achieving better classifier performance for more difficult or rarer classes and reducing information redundancy in classification problems. Our findings inform strategies for collecting training data and designing and parameterising ensemble classifiers, such as random forests. This is particularly important in large area remote sensing applications, for which training data is costly and resource intensive to collect.
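
    The ensemble margin at the heart of this analysis can be computed from the per-tree votes of a random forest: for each instance, the margin is the vote share of the true class minus the largest vote share among the other classes. The sketch below uses synthetic data and is not the authors' pipeline.

        # Per-instance ensemble margin from random forest tree votes.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier

        X, y = make_classification(n_samples=400, n_features=8, n_classes=3,
                                   n_informative=5, random_state=0)
        rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

        votes = np.stack([t.predict(X) for t in rf.estimators_])     # (trees, samples)
        margins = np.empty(len(y))
        for i, true in enumerate(y):
            counts = np.bincount(votes[:, i].astype(int), minlength=3) / len(rf.estimators_)
            others = np.delete(counts, true)
            margins[i] = counts[true] - others.max()                 # margin in [-1, 1]

        print("low-margin instances:", np.sum(margins < 0.2))        # candidates to emphasize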

  12. Land use mapping from CBERS-2 images with open source tools by applying different classification algorithms

    NASA Astrophysics Data System (ADS)

    Sanhouse-García, Antonio J.; Rangel-Peraza, Jesús Gabriel; Bustos-Terrones, Yaneth; García-Ferrer, Alfonso; Mesas-Carrascosa, Francisco J.

    2016-02-01

    Land cover classification is often based on large differences between classes but great homogeneity within each of them. This cover information is obtained through field work or by means of processing satellite images. Field work involves high costs; therefore, digital image processing techniques have become an important alternative for this task. However, in some developing countries, and particularly in Casacoima municipality in Venezuela, there is a lack of geographic information systems due to the lack of updated information and the high cost of software licenses. This research proposes a low-cost methodology to develop thematic mapping of local land use and types of coverage in areas with scarce resources. Thematic mapping was developed from CBERS-2 images and spatial information available on the network using open source tools. Supervised classification was applied per pixel and per region using different classification algorithms, which were compared with one another. Per-pixel classification was based on the Maxver (maximum likelihood) and Euclidean distance (minimum distance) algorithms, while per-region classification was based on the Bhattacharya algorithm. Satisfactory results were obtained from per-region classification, with an overall reliability of 83.93% and a kappa index of 0.81. The Maxver algorithm showed a reliability of 73.36% and a kappa index of 0.69, while Euclidean distance obtained 67.17% and 0.61 for reliability and kappa index, respectively. The proposed methodology proved very useful for cartographic processing and updating, which in turn supports the development of management and land-use plans. Hence, open source tools are an economically viable alternative not only for forestry organizations but for the general public, allowing projects to be developed in economically depressed and/or environmentally threatened areas.

  13. Comparison of EEG-Features and Classification Methods for Motor Imagery in Patients with Disorders of Consciousness

    PubMed Central

    Höller, Yvonne; Bergmann, Jürgen; Thomschewski, Aljoscha; Kronbichler, Martin; Höller, Peter; Crone, Julia S.; Schmid, Elisabeth V.; Butz, Kevin; Nardone, Raffaele; Trinka, Eugen

    2013-01-01

    Current research aims at identifying voluntary brain activation in patients who are behaviorally diagnosed as being unconscious, but are able to perform commands by modulating their brain activity patterns. This involves machine learning techniques and feature extraction methods such as those applied in brain-computer interfaces. In this study, we try to answer the question whether features/classification methods which show advantages in healthy participants are also accurate when applied to data of patients with disorders of consciousness. A sample of healthy participants (N = 22), patients in a minimally conscious state (MCS; N = 5), and with unresponsive wakefulness syndrome (UWS; N = 9) was examined with a motor imagery task which involved imagery of moving both hands and an instruction to hold both hands firm. We extracted a set of 20 features from the electroencephalogram and used linear discriminant analysis, k-nearest neighbor classification, and support vector machines (SVM) as classification methods. In healthy participants, the best classification accuracies were seen with coherences (mean = .79; range = .53−.94) and power spectra (mean = .69; range = .40−.85). The coherence patterns in healthy participants did not match the expectation of a centrally modulated µ-rhythm. Instead, coherence involved mainly frontal regions. In healthy participants, the best classification tool was SVM. Five patients had at least one feature-classifier outcome with p < 0.05 (none of which were coherence or power spectra), though none remained significant after false-discovery rate correction for multiple comparisons. The present work suggests the use of coherences in patients with disorders of consciousness because they show high reliability among healthy subjects and patient groups. However, feature extraction and classification is a challenging task in unresponsive patients because there is no ground truth to validate the results. PMID:24282545

  14. A hybrid three-class brain-computer interface system utilizing SSSEPs and transient ERPs

    NASA Astrophysics Data System (ADS)

    Breitwieser, Christian; Pokorny, Christoph; Müller-Putz, Gernot R.

    2016-12-01

    Objective. This paper investigates the fusion of steady-state somatosensory evoked potentials (SSSEPs) and transient event-related potentials (tERPs), evoked through tactile stimulation on the left and right-hand fingertips, in a three-class EEG-based hybrid brain-computer interface. It was hypothesized that fusing the input signals leads to higher classification rates than classifying tERP and SSSEP individually. Approach. Fourteen subjects participated in the studies, consisting of a screening paradigm to determine person-dependent resonance-like frequencies and a subsequent online paradigm. The whole setup of the BCI system was based on open interfaces, following suggestions for a common implementation platform. During the online experiment, subjects were instructed to focus their attention on the stimulated fingertips as indicated by a visual cue. The recorded data were classified during runtime using a multi-class shrinkage LDA classifier and the outputs were fused together applying a posterior-probability-based fusion. Data were further analyzed offline, involving a combined classification of SSSEP and tERP features as a second fusion principle. The final results were tested for statistical significance applying a repeated measures ANOVA. Main results. A significant classification increase was achieved when fusing the results with a combined classification compared to performing an individual classification. Furthermore, the SSSEP classifier was significantly better at detecting a non-control state, whereas the tERP classifier was significantly better at detecting control states. Subjects who had a higher relative band power increase during the screening session also achieved significantly higher classification results than subjects with a lower relative band power increase. Significance. It could be shown that utilizing SSSEP and tERP for hBCIs increases the classification accuracy, and also that tERP and SSSEP do not classify control and non-control states with the same level of accuracy.
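
    A posterior-probability-based fusion of two classifiers can be sketched by multiplying their per-class posteriors and renormalizing (equal priors assumed); plain LDA stands in here for the shrinkage LDA used in the study, and all inputs are synthetic.

        # Fusing per-class posteriors of two classifiers trained on different features.
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(7)
        X_sssep, X_terp = rng.normal(size=(200, 6)), rng.normal(size=(200, 10))
        y = rng.integers(0, 3, 200)                       # three classes incl. a non-control state

        clf_a = LinearDiscriminantAnalysis().fit(X_sssep, y)
        clf_b = LinearDiscriminantAnalysis().fit(X_terp, y)

        p = clf_a.predict_proba(X_sssep) * clf_b.predict_proba(X_terp)  # combine posteriors
        p /= p.sum(axis=1, keepdims=True)                                # renormalize
        fused_labels = p.argmax(axis=1)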

  15. Prevalence of rheumatoid arthritis in persons 60 years of age and older in the United States: effect of different methods of case classification.

    PubMed

    Rasch, Elizabeth K; Hirsch, Rosemarie; Paulose-Ram, Ryne; Hochberg, Marc C

    2003-04-01

    To determine prevalence estimates for rheumatoid arthritis (RA) in noninstitutionalized older adults in the US. Prevalence estimates were compared using 3 different classification methods based on current classification criteria for RA. Data from the Third National Health and Nutrition Examination Survey (NHANES-III) were used to generate prevalence estimates by 3 classification methods in persons 60 years of age and older (n = 5,302). Method 1 applied the "n of k" rule, such that subjects who met 3 of 6 of the American College of Rheumatology (ACR) 1987 criteria were classified as having RA (data from hand radiographs were not available). In method 2, the ACR classification tree algorithm was applied. For method 3, medication data were used to augment case identification via method 2. Population prevalence estimates and 95% confidence intervals (95% CIs) were determined using the 3 methods on data stratified by sex, race/ethnicity, age, and education. Overall prevalence estimates using the 3 classification methods were 2.03% (95% CI 1.30-2.76), 2.15% (95% CI 1.43-2.87), and 2.34% (95% CI 1.66-3.02), respectively. The prevalence of RA was generally greater in the following groups: women, Mexican Americans, respondents with less education, and respondents who were 70 years of age and older. The prevalence of RA in persons 60 years of age and older is approximately 2%, representing the proportion of the US elderly population who will most likely require medical intervention because of disease activity. Different classification methods yielded similar prevalence estimates, although detection of RA was enhanced by incorporation of data on use of prescription medications, an important consideration in large population surveys.
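
    Method 1's "n of k" rule is trivially expressed over a boolean criteria matrix, as in the sketch below; the criteria indicators are random stand-ins, not NHANES-III data.

        # "n of k" rule: classify as RA when at least 3 of 6 criteria are met.
        import numpy as np

        rng = np.random.default_rng(8)
        criteria = rng.random((5302, 6)) < 0.15        # rows: subjects; cols: ACR criterion met?
        ra_classified = criteria.sum(axis=1) >= 3      # the 3-of-6 rule

        prevalence = 100 * ra_classified.mean()
        print(f"estimated prevalence: {prevalence:.2f}%")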

  16. Photometric classification of type Ia supernovae in the SuperNova Legacy Survey with supervised learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Möller, A.; Ruhlmann-Kleider, V.; Leloup, C.

    In the era of large astronomical surveys, photometric classification of supernovae (SNe) has become an important research field due to limited spectroscopic resources for candidate follow-up and classification. In this work, we present a method to photometrically classify type Ia supernovae based on machine learning with redshifts that are derived from the SN light-curves. This method is implemented on real data from the SNLS deferred pipeline, a purely photometric pipeline that identifies SNe Ia at high redshifts (0.2 < z < 1.1). Our method consists of two stages: feature extraction (obtaining the SN redshift from photometry and estimating light-curve shape parameters) and machine learning classification. We study the performance of different algorithms such as Random Forest and Boosted Decision Trees. We evaluate the performance using SN simulations and real data from the first 3 years of the Supernova Legacy Survey (SNLS), which contains large spectroscopically and photometrically classified type Ia samples. Using the Area Under the Curve (AUC) metric, where perfect classification is given by 1, we find that our best-performing classifier (Extreme Gradient Boosting Decision Tree) has an AUC of 0.98. We show that it is possible to obtain a large photometrically selected type Ia SN sample with an estimated contamination of less than 5%. When applied to data from the first three years of SNLS, we obtain 529 events. We investigate the differences between classifying simulated SNe and real SN survey data. In particular, we find that applying a thorough set of selection cuts to the SN sample is essential for good classification. This work demonstrates for the first time the feasibility of machine learning classification in a high-z SN survey with application to real SN data.

  17. On the Implementation of a Land Cover Classification System for SAR Images Using Khoros

    NASA Technical Reports Server (NTRS)

    Medina Revera, Edwin J.; Espinosa, Ramon Vasquez

    1997-01-01

    The Synthetic Aperture Radar (SAR) sensor is widely used to record data about the ground under all atmospheric conditions. SAR images have very good resolution, which necessitates the development of a classification system that processes SAR images to extract useful information for different applications. In this work, a complete system for land cover classification was designed and programmed using Khoros, a data flow visual language environment, taking full advantage of the polymorphic data services that it provides. Image analysis was applied to SAR images to improve and automate the processes of recognition and classification of different regions such as mountains and lakes. Both unsupervised and supervised classification utilities were used. The unsupervised classification routines included several classification/clustering algorithms such as K-means, ISO2, Weighted Minimum Distance, and the Localized Receptive Field (LRF) training/classifier. Different texture analysis approaches, such as Invariant Moments, Fractal Dimension, and Second Order statistics, were implemented for supervised classification of the images. The results and conclusions for SAR image classification using the various unsupervised and supervised procedures are presented based on their accuracy and performance.

  18. Cloud classification from satellite data using a fuzzy sets algorithm: A polar example

    NASA Technical Reports Server (NTRS)

    Key, J. R.; Maslanik, J. A.; Barry, R. G.

    1988-01-01

    Where spatial boundaries between phenomena are diffuse, classification methods which construct mutually exclusive clusters seem inappropriate. The Fuzzy c-means (FCM) algorithm assigns each observation to all clusters, with membership values as a function of distance to the cluster center. The FCM algorithm is applied to AVHRR data for the purpose of classifying polar clouds and surfaces. Careful analysis of the fuzzy sets can provide information on which spectral channels are best suited to the classification of particular features, and can help determine likely areas of misclassification. General agreement in the resulting classes and cloud fraction was found between the FCM algorithm, a manual classification, and an unsupervised maximum likelihood classifier.
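
    The FCM membership update can be sketched compactly: memberships decay with distance to each cluster center as u ∝ d^(-2/(m-1)), and centers are the fuzzy-weighted means of the observations. The data below is random rather than AVHRR imagery, and the parameters are illustrative.

        # Minimal fuzzy c-means: soft memberships for every observation in every cluster.
        import numpy as np

        def fcm(X, c=3, m=2.0, iters=50, eps=1e-9):
            rng = np.random.default_rng(0)
            U = rng.random((len(X), c))
            U /= U.sum(axis=1, keepdims=True)                    # memberships sum to 1 per point
            for _ in range(iters):
                W = U ** m
                centers = (W.T @ X) / W.sum(axis=0)[:, None]     # fuzzy-weighted cluster centers
                d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
                U = 1.0 / (d ** (2 / (m - 1)))                   # membership grows as distance shrinks
                U /= U.sum(axis=1, keepdims=True)
            return U, centers

        X = np.random.default_rng(9).normal(size=(300, 4))       # e.g. multichannel pixel values
        U, centers = fcm(X)
        print(U[0])                                              # soft memberships of pixel 0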

  19. Remote sensing application to regional activities

    NASA Technical Reports Server (NTRS)

    Shahrokhi, F.; Jones, N. L.; Sharber, L. A.

    1976-01-01

    Two agencies within the State of Tennessee were identified to which aerospace technology, namely remote sensing, could be transferred and applied to their stated problem areas: wetland and land classification, and strip mining studies. In both studies, LANDSAT data were analyzed with the UTSI video-input analog/digital automatic analysis and classification facility. In the West Tennessee area three land-use classifications could be distinguished: cropland, wetland, and forest. In the East Tennessee study area, measurements were submitted to statistical tests which verified the significant differences due to natural terrain, stripped areas, various stages of reclamation, water, etc. Classifications for both studies were output in the form of maps of symbols and varying shades of gray.

  20. Hybrid Model Based on Genetic Algorithms and SVM Applied to Variable Selection within Fruit Juice Classification

    PubMed Central

    Fernandez-Lozano, C.; Canto, C.; Gestal, M.; Andrade-Garda, J. M.; Rabuñal, J. R.; Dorado, J.; Pazos, A.

    2013-01-01

    Given the background of the use of Neural Networks in problems of apple juice classification, this paper aims at implementing a newly developed method in the field of machine learning: Support Vector Machines (SVM). Therefore, a hybrid model that combines genetic algorithms and support vector machines is suggested in such a way that, when using the SVM as the fitness function of the Genetic Algorithm (GA), the most representative variables for a specific classification problem can be selected. PMID:24453933
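
    A hedged sketch of such a hybrid: a binary chromosome selects variables, the fitness of a chromosome is the cross-validated SVM accuracy on the selected variables, and one-point crossover plus bit-flip mutation evolve the population. The dataset and GA settings are illustrative, not the paper's.

        # GA variable selection with cross-validated SVM accuracy as the fitness function.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        rng = np.random.default_rng(10)
        X, y = make_classification(n_samples=200, n_features=20, n_informative=5, random_state=0)

        def fitness(mask):
            if not mask.any():
                return 0.0
            return cross_val_score(SVC(), X[:, mask], y, cv=3).mean()   # SVM as fitness

        pop = rng.random((12, 20)) < 0.5                                # random initial chromosomes
        for gen in range(10):
            scores = np.array([fitness(ind) for ind in pop])
            parents = pop[np.argsort(scores)[-6:]]                      # keep the fittest half
            kids = []
            for _ in range(6):
                a, b = parents[rng.integers(0, 6, 2)]
                cut = rng.integers(1, 19)
                child = np.r_[a[:cut], b[cut:]]                         # one-point crossover
                child ^= rng.random(20) < 0.05                          # bit-flip mutation
                kids.append(child)
            pop = np.vstack([parents, kids])

        best = pop[np.argmax([fitness(ind) for ind in pop])]
        print("selected variables:", np.flatnonzero(best))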

  1. Semi-supervised morphosyntactic classification of Old Icelandic.

    PubMed

    Urban, Kryztof; Tangherlini, Timothy R; Vijūnas, Aurelijus; Broadwell, Peter M

    2014-01-01

    We present IceMorph, a semi-supervised morphosyntactic analyzer of Old Icelandic. In addition to machine-read corpora and dictionaries, it applies a small set of declension prototypes to map corpus words to dictionary entries. A web-based GUI allows expert users to modify and augment data through an online process. A machine learning module incorporates prototype data, edit-distance metrics, and expert feedback to continuously update part-of-speech and morphosyntactic classification. An advantage of the analyzer is its ability to achieve competitive classification accuracy with minimum training data.

  2. Hierarchical brain mapping via a generalized Dirichlet solution for mapping brain manifolds

    NASA Astrophysics Data System (ADS)

    Joshi, Sarang C.; Miller, Michael I.; Christensen, Gary E.; Banerjee, Ayan; Coogan, Tom; Grenander, Ulf

    1995-08-01

    In this paper we present a coarse-to-fine approach for the transformation of digital anatomical textbooks from the ideal to the individual that unifies the work on landmark deformations and volume-based transformation. The hierarchical approach is linked to the biological problem itself, arising from the various kinds of information provided by anatomists. This information is in the form of points, lines, surfaces, and sub-volumes corresponding to 0-, 1-, 2-, and 3-dimensional sub-manifolds, respectively. The algorithm is driven by these sub-manifolds. We follow the approach that the highest-dimensional transformation results from the solution of a sequence of lower-dimensional problems driven by successive refinements or partitions of the images into various biologically meaningful sub-structures.

  3. Determining heterogeneous slip activity on multiple slip systems from single crystal orientation pole figures

    DOE PAGES

    Pagan, Darren C.; Miller, Matthew P.

    2016-09-01

    A new experimental method to determine heterogeneity of shear strains associated with crystallographic slip in the bulk of ductile, crystalline materials is outlined. The method quantifies the time resolved evolution of misorientation within plastically deforming crystals using single crystal orientation pole figures (SCPFs) measured in-situ with X-ray diffraction. A multiplicative decomposition of the crystal kinematics is used to interpret the distributions of lattice plane orientation observed on the SCPFs in terms of heterogeneous slip activity (shear strains) on multiple slip systems. Here, to show the method's utility, the evolution of heterogeneous slip is quantified in a silicon single crystal plastically deformed at high temperature at multiple load steps, with slip activity in sub-volumes of the crystal analyzed simultaneously.

  4. Evaluating data mining algorithms using molecular dynamics trajectories.

    PubMed

    Tatsis, Vasileios A; Tjortjis, Christos; Tzirakis, Panagiotis

    2013-01-01

    Molecular dynamics simulations provide a sample of a molecule's conformational space. Experiments on the μs time scale, resulting in large amounts of data, are nowadays routine. Data mining techniques such as classification provide a way to analyse such data. In this work, we evaluate and compare several classification algorithms using three data sets which resulted from computer simulations of a potential enzyme mimetic biomolecule. We evaluated 65 classifiers available in the well-known data mining toolkit Weka, using 'classification' errors to assess algorithmic performance. Results suggest that: (i) 'meta' classifiers perform better than the other groups, when applied to molecular dynamics data sets; (ii) Random Forest and Rotation Forest are the best classifiers for all three data sets; and (iii) classification via clustering yields the highest classification error. Our findings are consistent with bibliographic evidence, suggesting a 'roadmap' for dealing with such data.

  5. Marker-Based Hierarchical Segmentation and Classification Approach for Hyperspectral Imagery

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.; Benediktsson, Jon Atli; Chanussot, Jocelyn

    2011-01-01

    The Hierarchical SEGmentation (HSEG) algorithm, which is a combination of hierarchical step-wise optimization and spectral clustering, has given good performances for hyperspectral image analysis. This technique produces at its output a hierarchical set of image segmentations. The automated selection of a single segmentation level is often necessary. We propose and investigate the use of automatically selected markers for this purpose. In this paper, a novel Marker-based HSEG (M-HSEG) method for spectral-spatial classification of hyperspectral images is proposed. First, pixelwise classification is performed and the most reliably classified pixels are selected as markers, with the corresponding class labels. Then, a novel constrained marker-based HSEG algorithm is applied, resulting in a spectral-spatial classification map. The experimental results show that the proposed approach yields accurate segmentation and classification maps, and thus is attractive for hyperspectral image analysis.

  6. Neural net applied to anthropological material: a methodical study on the human nasal skeleton.

    PubMed

    Prescher, Andreas; Meyers, Anne; Gerf von Keyserlingk, Diedrich

    2005-07-01

    A new information processing method, an artificial neural net, was applied to characterise the variability of anthropological features of the human nasal skeleton. The aim was to find different types of nasal skeletons. A neural net with 15*15 nodes was trained by 17 standard anthropological parameters taken from 184 skulls of the Aachen collection. The trained neural net delivers its classification in a two-dimensional map. Different types of noses were locally separated within the map. Rare and frequent types may be distinguished after one passage of the complete collection through the net. Statistical descriptive analysis, hierarchical cluster analysis, and discriminant analysis were applied to the same data set. These parallel applications allowed comparison of the new approach with the more traditional ones. In general, the classification by the neural net corresponds with those of cluster analysis and discriminant analysis. However, it goes beyond these classifications because of the possibility of differentiating the types in multi-dimensional dependencies. Furthermore, places in the map are kept blank for intermediate forms, which may be theoretically expected but were not included in the training set. In conclusion, the application of a neural network is a suitable method for investigating large collections of biological material. The resulting classification may be helpful in anatomy and anthropology as well as in forensic medicine. It may be used to characterise the peculiarity of a whole set as well as to find particular cases within the set.

  7. Assessment of reproductive and developmental effects of DINP, DnHP and DCHP using quantitative weight of evidence.

    PubMed

    Dekant, Wolfgang; Bridges, James

    2016-11-01

    Quantitative weight of evidence (QWoE) methodology utilizes detailed scoring sheets to assess the quality/reliability of each publication on the toxicity of a chemical and gives numerical scores for quality and observed toxicity. This QWoE methodology was applied to the reproductive toxicity data on diisononylphthalate (DINP), di-n-hexylphthalate (DnHP), and dicyclohexylphthalate (DCHP) to determine if the scientific evidence for adverse effects meets the requirements for classification as reproductive toxicants. The scores for DINP were compared with those obtained when applying the methodology to DCHP and DnHP, which have harmonized classifications. Based on the quality/reliability scores, application of the QWoE shows that the three databases are of similar quality, but the effect scores differ widely. Application of the QWoE to the DINP studies resulted in an overall score well below the benchmark required to trigger classification. For DCHP, the QWoE also results in low scores. The high scores from the application of the QWoE methodology to the toxicological data for DnHP represent clear evidence for adverse effects and justify a classification of DnHP as category 1B for both development and fertility. The conclusions on classification based on the QWoE are well supported by a narrative assessment of consistency and biological plausibility. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  8. Big Data: A Parallel Particle Swarm Optimization-Back-Propagation Neural Network Algorithm Based on MapReduce.

    PubMed

    Cao, Jianfang; Cui, Hongyan; Shi, Hao; Jiao, Lijuan

    2016-01-01

    A back-propagation (BP) neural network can solve complicated random nonlinear mapping problems; therefore, it can be applied to a wide range of problems. However, as the sample size increases, the time required to train BP neural networks becomes lengthy. Moreover, the classification accuracy decreases as well. To improve the classification accuracy and runtime efficiency of the BP neural network algorithm, we proposed a parallel design and realization method for a particle swarm optimization (PSO)-optimized BP neural network based on MapReduce on the Hadoop platform using both the PSO algorithm and a parallel design. The PSO algorithm was used to optimize the BP neural network's initial weights and thresholds and improve the accuracy of the classification algorithm. The MapReduce parallel programming model was utilized to achieve parallel processing of the BP algorithm, thereby solving the problems of hardware and communication overhead when the BP neural network addresses big data. Datasets on 5 different scales were constructed using the scene image library from the SUN Database. The classification accuracy of the parallel PSO-BP neural network algorithm is approximately 92%, and the system efficiency is approximately 0.85, which presents obvious advantages when processing big data. The algorithm proposed in this study demonstrated both higher classification accuracy and improved time efficiency, which represents a significant improvement obtained from applying parallel processing to an intelligent algorithm on big data.
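
    The PSO stage can be sketched without the MapReduce machinery: particles encode the initial weights of a small one-hidden-layer network, fitness is the training loss, and the best particle would seed subsequent back-propagation. Sizes, data, and PSO constants below are assumptions, not the paper's configuration.

        # Global-best PSO over the flattened initial weights of a tiny network.
        import numpy as np

        rng = np.random.default_rng(11)
        X = rng.normal(size=(200, 4)); y = (X.sum(axis=1) > 0).astype(float)
        n_in, n_hid = 4, 6
        dim = n_in * n_hid + n_hid            # hidden weights + output weights (biases omitted)

        def loss(w):                          # mean squared error of the network with weights w
            W1 = w[:n_in * n_hid].reshape(n_in, n_hid)
            w2 = w[n_in * n_hid:]
            h = np.tanh(X @ W1)
            p = 1 / (1 + np.exp(-(h @ w2)))
            return np.mean((p - y) ** 2)

        # Standard update: v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
        n_part = 20
        pos = rng.normal(size=(n_part, dim)); vel = np.zeros_like(pos)
        pbest, pbest_val = pos.copy(), np.array([loss(p) for p in pos])
        for _ in range(100):
            g = pbest[pbest_val.argmin()]
            r1, r2 = rng.random((2, n_part, dim))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
            pos += vel
            vals = np.array([loss(p) for p in pos])
            better = vals < pbest_val
            pbest[better], pbest_val[better] = pos[better], vals[better]

        print("best initial-weight loss:", pbest_val.min())   # hand these weights to BP training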

  9. InterLymph hierarchical classification of lymphoid neoplasms for epidemiologic research based on the WHO classification (2008): update and future directions

    PubMed Central

    Morton, Lindsay M.; Linet, Martha S.; Clarke, Christina A.; Kadin, Marshall E.; Vajdic, Claire M.; Monnereau, Alain; Maynadié, Marc; Chiu, Brian C.-H.; Marcos-Gragera, Rafael; Costantini, Adele Seniori; Cerhan, James R.; Weisenburger, Dennis D.

    2010-01-01

    After publication of the updated World Health Organization (WHO) classification of tumors of hematopoietic and lymphoid tissues in 2008, the Pathology Working Group of the International Lymphoma Epidemiology Consortium (InterLymph) now presents an update of the hierarchical classification of lymphoid neoplasms for epidemiologic research based on the 2001 WHO classification, which we published in 2007. The updated hierarchical classification incorporates all of the major and provisional entities in the 2008 WHO classification, including newly defined entities based on age, site, certain infections, and molecular characteristics, as well as borderline categories, early and “in situ” lesions, disorders with limited capacity for clinical progression, lesions without current International Classification of Diseases for Oncology, 3rd Edition codes, and immunodeficiency-associated lymphoproliferative disorders. WHO subtypes are defined in hierarchical groupings, with newly defined groups for small B-cell lymphomas with plasmacytic differentiation and for primary cutaneous T-cell lymphomas. We suggest approaches for applying the hierarchical classification in various epidemiologic settings, including strategies for dealing with multiple coexisting lymphoma subtypes in one patient, and cases with incomplete pathologic information. The pathology materials useful for state-of-the-art epidemiology studies are also discussed. We encourage epidemiologists to adopt the updated InterLymph hierarchical classification, which incorporates the most recent WHO entities while demonstrating their relationship to older classifications. PMID:20699439

  10. InterLymph hierarchical classification of lymphoid neoplasms for epidemiologic research based on the WHO classification (2008): update and future directions.

    PubMed

    Turner, Jennifer J; Morton, Lindsay M; Linet, Martha S; Clarke, Christina A; Kadin, Marshall E; Vajdic, Claire M; Monnereau, Alain; Maynadié, Marc; Chiu, Brian C-H; Marcos-Gragera, Rafael; Costantini, Adele Seniori; Cerhan, James R; Weisenburger, Dennis D

    2010-11-18

    After publication of the updated World Health Organization (WHO) classification of tumors of hematopoietic and lymphoid tissues in 2008, the Pathology Working Group of the International Lymphoma Epidemiology Consortium (InterLymph) now presents an update of the hierarchical classification of lymphoid neoplasms for epidemiologic research based on the 2001 WHO classification, which we published in 2007. The updated hierarchical classification incorporates all of the major and provisional entities in the 2008 WHO classification, including newly defined entities based on age, site, certain infections, and molecular characteristics, as well as borderline categories, early and "in situ" lesions, disorders with limited capacity for clinical progression, lesions without current International Classification of Diseases for Oncology, 3rd Edition codes, and immunodeficiency-associated lymphoproliferative disorders. WHO subtypes are defined in hierarchical groupings, with newly defined groups for small B-cell lymphomas with plasmacytic differentiation and for primary cutaneous T-cell lymphomas. We suggest approaches for applying the hierarchical classification in various epidemiologic settings, including strategies for dealing with multiple coexisting lymphoma subtypes in one patient, and cases with incomplete pathologic information. The pathology materials useful for state-of-the-art epidemiology studies are also discussed. We encourage epidemiologists to adopt the updated InterLymph hierarchical classification, which incorporates the most recent WHO entities while demonstrating their relationship to older classifications.

  11. Applied Chaos Level Test for Validation of Signal Conditions Underlying Optimal Performance of Voice Classification Methods

    ERIC Educational Resources Information Center

    Liu, Boquan; Polce, Evan; Sprott, Julien C.; Jiang, Jack J.

    2018-01-01

    Purpose: The purpose of this study is to introduce a chaos level test to evaluate linear and nonlinear voice type classification method performances under varying signal chaos conditions without subjective impression. Study Design: Voice signals were constructed with differing degrees of noise to model signal chaos. Within each noise power, 100…

  12. Incorporating Multiple-Choice Questions into an AACSB Assurance of Learning Process: A Course-Embedded Assessment Application to an Introductory Finance Course

    ERIC Educational Resources Information Center

    Santos, Michael R.; Hu, Aidong; Jordan, Douglas

    2014-01-01

    The authors offer a classification technique to make a quantitative skills rubric more operational, with the groupings of multiple-choice questions to match the student learning levels in knowledge, calculation, quantitative reasoning, and analysis. The authors applied this classification technique to the mid-term exams of an introductory finance…

  13. A Guide for Setting the Cut-Scores to Minimize Weighted Classification Errors in Test Batteries

    ERIC Educational Resources Information Center

    Grabovsky, Irina; Wainer, Howard

    2017-01-01

    In this article, we extend the methodology of the Cut-Score Operating Function that we introduced previously and apply it to a testing scenario with multiple independent components and different testing policies. We derive analytically the overall classification error rate for a test battery under the policy when several retakes are allowed for…

  14. An Addendum to "A New Tool for Climatic Analysis Using Köppen Climate Classification"

    ERIC Educational Resources Information Center

    Larson, Paul R.; Lohrengel, C. Frederick, II

    2014-01-01

    The Köppen climatic classification system in a modified format is the most widely applied system in use today. Mapping and analysis of hundreds of arid and semiarid climate stations has made the use of the additional fourth letter in BW/BS climates essential. The addition of "s," "w," or "f" to the standard…

  15. Inter-comparison of weather and circulation type classifications for hydrological drought development

    NASA Astrophysics Data System (ADS)

    Fleig, Anne K.; Tallaksen, Lena M.; Hisdal, Hege; Stahl, Kerstin; Hannah, David M.

    Classifications of weather and circulation patterns are often applied in research seeking to relate the atmospheric state to surface environmental phenomena. However, numerous procedures have been applied to define the patterns, thus limiting comparability between studies. The COST733 Action “Harmonisation and Applications of Weather Type Classifications for European regions” tests 73 different weather type classifications (WTC) and their associated weather types (WTs) and compares the WTCs' utility for various applications. The objective of this study is to evaluate the potential of these WTCs for the analysis of regional hydrological drought development in north-western Europe. Hydrological drought is defined in terms of a Regional Drought Area Index (RDAI), which is based on deficits derived from daily river flow series. RDAI series (1964-2001) were calculated for four homogeneous regions in Great Britain and two in Denmark. For each region, WTs associated with hydrological drought development were identified based on antecedent and concurrent WT frequencies for major drought events. The utility of the different WTCs for the study of hydrological drought development was evaluated, and the influence of WTC attributes, i.e. input variables, number of defined WTs and general classification concept, on WTC performance was assessed. The objective Grosswetterlagen (OGWL), the objective Second-Generation Lamb Weather Type Classification (LWT2) with 18 WTs and two implementations of the objective Wetterlagenklassifikation (WLK; with 40 and 28 WTs) outperformed all other WTCs. In general, WTCs with more WTs (⩾27) were found to perform better than WTCs with fewer (⩽18) WTs. The influence of input variables was not consistent across the different classification procedures, and the performance of a WTC was determined primarily by the classification procedure itself. Overall, classification procedures following the relatively simple general classification concept of predefining WTs based on thresholds performed better than those based on more sophisticated classification concepts such as deriving WTs by cluster analysis or artificial neural networks. In particular, PCA-based WTCs with 9 WTs and automated WTCs with a high number of predefined WTs (subjectively and threshold based) performed well. It is suggested that the explicit consideration of the air flow characteristics of meridionality, zonality and cyclonicity in the definition of WTs is a useful feature for a WTC when analysing regional hydrological drought development.
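
    As an illustration of the antecedent WT-frequency analysis described above, the sketch below compares weather-type frequencies in the 90 days preceding drought onsets against the long-term climatology. It is a minimal sketch only: the WT series, the drought-onset dates, and the 18-type alphabet are synthetic stand-ins, not the COST733 data.

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(0)
        days = pd.date_range("1964-01-01", "2001-12-31", freq="D")
        wt = pd.Series(rng.integers(1, 19, size=len(days)), index=days)  # 18 hypothetical WTs

        # Hypothetical drought-onset dates for one homogeneous region.
        onsets = pd.to_datetime(["1976-06-15", "1989-07-02", "1995-08-10"])

        def wt_frequency(series, n_types=18):
            """Relative frequency of each weather type in a WT series."""
            counts = series.value_counts().reindex(range(1, n_types + 1), fill_value=0)
            return counts / len(series)

        clim = wt_frequency(wt)  # long-term climatology

        # Pool the 90-day antecedent windows of all drought events.
        windows = pd.concat([wt[onset - pd.Timedelta(days=90):onset] for onset in onsets])
        anom = wt_frequency(windows) - clim  # positive: WT over-represented before droughts
        print(anom.sort_values(ascending=False).head())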

  16. Support vector machine based classification of fast Fourier transform spectroscopy of proteins

    NASA Astrophysics Data System (ADS)

    Lazarevic, Aleksandar; Pokrajac, Dragoljub; Marcano, Aristides; Melikechi, Noureddine

    2009-02-01

    Fast Fourier transform spectroscopy has proved to be a powerful method for the study of the secondary structure of proteins, since peak positions and their relative amplitudes are affected by the number of hydrogen bridges that sustain this secondary structure. However, to the best of our knowledge, the method has not yet been used for the identification of proteins within a complex matrix like a blood sample. The principal reason is the apparent similarity of protein infrared spectra, with actual differences usually masked by the solvent contribution and other interactions. In this paper, we propose a novel machine-learning-based method that uses protein spectra for the classification and identification of such proteins within a given sample. The proposed method uses principal component analysis (PCA) to identify the most important linear combinations of the original spectral components and then employs a support vector machine (SVM) classification model applied to these combinations to categorize proteins into one of the given groups. Our experiments were performed on a set of four different proteins, namely Bovine Serum Albumin, Leptin, Insulin-like Growth Factor 2 and Osteopontin. The proposed method of applying principal component analysis along with support vector machines exhibits excellent classification accuracy when identifying proteins by their infrared spectra.
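
    The record above describes a PCA-then-SVM chain; the following minimal scikit-learn sketch shows that chain end to end. The spectra, sample counts, and labels are random stand-ins, so the numbers carry no scientific meaning.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(42)
        X = rng.normal(size=(80, 600))   # stand-in infrared spectra (80 samples)
        y = rng.integers(0, 4, size=80)  # 4 protein classes

        # PCA keeps the leading linear combinations of spectral components;
        # a linear SVM then separates the proteins in the reduced space.
        model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="linear"))
        scores = cross_val_score(model, X, y, cv=5)
        print("CV accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))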

  17. Supervised classification of continental shelf sediment off western Donegal, Ireland

    NASA Astrophysics Data System (ADS)

    Monteys, X.; Craven, K.; McCarron, S. G.

    2017-12-01

    Managing human impacts on marine ecosystems requires natural regions to be identified and mapped over a range of hierarchically nested scales. In recent years (2000-present), the Irish National Seabed Survey (INSS) and the Integrated Mapping for the Sustainable Development of Ireland's Marine Resources programme (INFOMAR) (Geological Survey Ireland and Marine Institute collaborations) have provided unprecedented quantities of high-quality data on Ireland's offshore territories. The increasing availability of large, detailed digital representations of these environments requires the application of objective and quantitative analyses. This study presents results of a new approach to sea floor sediment mapping based on an integrated analysis of INFOMAR multibeam bathymetric data (including the derivatives of slope and relative position), backscatter data (including derivatives of angular response analysis) and sediment ground-truthing over the continental shelf, west of Donegal. It applies a Geographic-Object-Based Image Analysis software package to provide a supervised classification of the surface sediment. This approach can provide a statistically robust, high-resolution classification of the seafloor. Initial results display a differentiation of sediment classes and a reduction in artefacts compared with previously applied methodologies. These results indicate a methodology that could be used during physical habitat mapping and classification of marine environments.

  18. Vehicle detection in aerial surveillance using dynamic Bayesian networks.

    PubMed

    Cheng, Hsu-Yung; Weng, Chih-Chia; Chen, Yi-Ying

    2012-04-01

    We present an automatic vehicle detection system for aerial surveillance in this paper. In this system, we escape from the stereotype and existing frameworks of vehicle detection in aerial surveillance, which are either region based or sliding window based. We design a pixelwise classification method for vehicle detection. The novelty lies in the fact that, in spite of performing pixelwise classification, relations among neighboring pixels in a region are preserved in the feature extraction process. We consider features including vehicle colors and local features. For vehicle color extraction, we utilize a color transform to separate vehicle colors and non-vehicle colors effectively. For edge detection, we apply moment-preserving thresholding to adjust the thresholds of the Canny edge detector automatically, which increases the adaptability and the accuracy of detection in various aerial images. Afterward, a dynamic Bayesian network (DBN) is constructed for the classification purpose. We convert regional local features into quantitative observations that can be referenced when applying pixelwise classification via the DBN. Experiments were conducted on a wide variety of aerial videos. The results demonstrate the flexibility and good generalization ability of the proposed method on a challenging data set with aerial surveillance images taken at different heights and under different camera angles.
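
    The sketch below illustrates an automatically thresholded Canny detector of the kind the abstract relies on. One hedge is needed: the paper uses moment-preserving thresholding, for which a common median-based heuristic stands in here, so this is an analogous illustration rather than the authors' method.

        import cv2
        import numpy as np

        def auto_canny(gray, sigma=0.33):
            """Canny with thresholds derived from the image's median intensity."""
            v = float(np.median(gray))
            lower = int(max(0, (1.0 - sigma) * v))
            upper = int(min(255, (1.0 + sigma) * v))
            return cv2.Canny(gray, lower, upper)

        # Synthetic aerial-like test image: a bright "vehicle" on a mid-gray background.
        img = np.full((120, 160), 60, dtype=np.uint8)
        cv2.rectangle(img, (40, 30), (110, 80), 200, thickness=-1)
        print("edge pixels:", int((auto_canny(img) > 0).sum()))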

  19. Ecosystem services provided by a complex coastal region: challenges of classification and mapping.

    PubMed

    Sousa, Lisa P; Sousa, Ana I; Alves, Fátima L; Lillebø, Ana I

    2016-03-11

    A variety of ecosystem services classification systems and mapping approaches are available in the scientific and technical literature, which need to be selected and adapted when applied to complex territories (e.g. at the interface between water and land, estuary and sea). This paper provides a framework for addressing ecosystem services in complex coastal regions. The roadmap comprises the definition of the exact geographic boundaries of the study area; the use of CICES (Common International Classification of Ecosystem Services) for ecosystem services identification and classification; and the definition of qualitative indicators that serve as a basis to map the ecosystem services. Due to its complexity, the Ria de Aveiro coastal region was selected as a case study, presenting an opportunity to explore the application of such approaches at a regional scale. The main challenges of implementing the proposed roadmap, together with its advantages, are discussed in this research. The results highlight the importance of considering both the connectivity of natural systems and the complexity of the governance framework; the flexibility and robustness, but also the challenges, of applying CICES at a regional scale; and the challenges regarding ecosystem services mapping.

  20. Classification of plum spirit drinks by synchronous fluorescence spectroscopy.

    PubMed

    Sádecká, J; Jakubíková, M; Májek, P; Kleinová, A

    2016-04-01

    Synchronous fluorescence spectroscopy was used in combination with principal component analysis (PCA) and linear discriminant analysis (LDA) for the differentiation of plum spirits according to their geographical origin. A total of 14 Czech, 12 Hungarian and 18 Slovak plum spirit samples were used. The samples were divided into two categories: colorless (22 samples) and colored (22 samples). Synchronous fluorescence spectra (SFS) obtained at a wavelength difference of 60 nm provided the best results. With PCA-LDA applied to the SFS of all samples, Czech, Hungarian and Slovak colorless samples were properly classified in both the calibration and prediction sets. Correct classification of 100% was also obtained for Czech and Hungarian colored samples. However, one group of Slovak colored samples was classified as belonging to the Hungarian group in the calibration set. Thus, the total correct classifications obtained were 94% and 100% for the calibration and prediction steps, respectively. The results were compared with those obtained using near-infrared (NIR) spectroscopy. Applying PCA-LDA to the NIR spectra (5500-6000 cm⁻¹), the total correct classifications were 91% and 92% for the calibration and prediction steps, respectively, which were slightly lower than those obtained using SFS. Copyright © 2015 Elsevier Ltd. All rights reserved.
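
    A minimal PCA-LDA sketch in the spirit of the analysis above follows, with random stand-ins for the synchronous fluorescence spectra; the sample count, spectral length, and origin labels are hypothetical.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        X = rng.normal(size=(44, 300))               # 44 samples x 300 spectral points
        y = rng.choice(["CZ", "HU", "SK"], size=44)  # geographical-origin labels

        # PCA compresses the spectra; LDA builds the origin classifier on the scores.
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        model = make_pipeline(PCA(n_components=8), LinearDiscriminantAnalysis()).fit(X_tr, y_tr)
        print("calibration accuracy:", model.score(X_tr, y_tr))
        print("prediction accuracy:", model.score(X_te, y_te))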

  1. Ecosystem services provided by a complex coastal region: challenges of classification and mapping

    PubMed Central

    Sousa, Lisa P.; Sousa, Ana I.; Alves, Fátima L.; Lillebø, Ana I.

    2016-01-01

    A variety of ecosystem services classification systems and mapping approaches are available in the scientific and technical literature, which needs to be selected and adapted when applied to complex territories (e.g. in the interface between water and land, estuary and sea). This paper provides a framework for addressing ecosystem services in complex coastal regions. The roadmap comprises the definition of the exact geographic boundaries of the study area; the use of CICES (Common International Classification of Ecosystem Services) for ecosystem services identification and classification; and the definition of qualitative indicators that will serve as basis to map the ecosystem services. Due to its complexity, the Ria de Aveiro coastal region was selected as case study, presenting an opportunity to explore the application of such approaches at a regional scale. The main challenges of implementing the proposed roadmap, together with its advantages are discussed in this research. The results highlight the importance of considering both the connectivity of natural systems and the complexity of the governance framework; the flexibility and robustness, but also the challenges when applying CICES at regional scale; and the challenges regarding ecosystem services mapping. PMID:26964892

  2. Movement imagery classification in EMOTIV cap based system by Naïve Bayes.

    PubMed

    Stock, Vinicius N; Balbinot, Alexandre

    2016-08-01

    Brain-computer interfaces (BCI) provide means of communication and control, in assistive technology, that do not require motor activity from the user. The goal of this study is to classify two types of imagined movements, of the left and right hands, in an EMOTIV cap based system, using the Naïve Bayes classifier. A preliminary analysis with respect to results obtained by other experiments in this field is also conducted. Processing of the electroencephalography (EEG) signals is done by applying Common Spatial Pattern filters. The EPOC electrode cap is used for EEG acquisition, in two test subjects, for two distinct trial formats. The channels selected are FC5, FC6, P7 and P8 of the 10-20 system, and a discussion of the differences of using the C3, C4, P3 and P4 positions is offered. Dataset 3 of the BCI Competition II is also analyzed using the implemented algorithms. The maximum classification results for the proposed experiment and for the BCI Competition dataset were, respectively, 79% and 85%. The conclusion of this study is that the selected electrode positions may be applied in BCI systems with satisfactory classification rates.
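
    The following sketch walks through the same chain, Common Spatial Pattern filtering followed by a Gaussian Naïve Bayes classifier, on synthetic four-channel "EEG" epochs. The CSP here is a textbook generalized-eigenvalue implementation, and every dimension and label is a stand-in, not the authors' data or code.

        import numpy as np
        from scipy.linalg import eigh
        from sklearn.naive_bayes import GaussianNB

        rng = np.random.default_rng(7)
        X = rng.normal(size=(60, 4, 256))  # epochs x channels x time samples
        y = rng.integers(0, 2, size=60)    # 0 = left hand, 1 = right hand

        def mean_cov(epochs):
            """Average trace-normalized spatial covariance over epochs."""
            return np.mean([e @ e.T / np.trace(e @ e.T) for e in epochs], axis=0)

        C0, C1 = mean_cov(X[y == 0]), mean_cov(X[y == 1])
        # CSP filters: generalized eigenvectors of (C0, C0 + C1); the extreme
        # eigenvalues give the most class-discriminative spatial projections.
        _, W = eigh(C0, C0 + C1)
        W = W[:, [0, -1]]  # keep the two most extreme filters

        def log_var_features(epochs):
            Z = np.array([W.T @ e for e in epochs])          # CSP projections
            v = Z.var(axis=2)
            return np.log(v / v.sum(axis=1, keepdims=True))  # log-variance features

        clf = GaussianNB().fit(log_var_features(X), y)
        print("training accuracy:", clf.score(log_var_features(X), y))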

  3. Intraosseous haemangioma: semantic and medical confusion.

    PubMed

    Kadlub, N; Dainese, L; Coulomb-L'Hermine, A; Galmiche, L; Soupre, V; Lepointe, H Ducou; Vazquez, M-P; Picard, A

    2015-06-01

    The literature is rich in case reports of intraosseous haemangioma, although most of these are actually cases of venous or capillary malformations. To illustrate this confusion in terminology, we present three cases of slow-flow vascular malformations misnamed as intraosseous haemangioma. A retrospective study of children diagnosed with intraosseous haemangioma was conducted. Clinical and radiological data were evaluated. Histopathological examinations and immunohistochemical studies were redone by three independent pathologists to classify the lesions according to the International Society for the Study of Vascular Anomalies (ISSVA) and World Health Organization (WHO) classifications. Three children who had presented with jaw haemangiomas were identified. Computed tomography scan patterns were not specific. All tumours were GLUT-1-negative and D2-40-negative. The lesions were classified as central haemangiomas according to the WHO, and as slow-flow malformations according to the ISSVA. The classification of vascular anomalies is based on clinical, radiological, and histological differences between vascular tumours and malformations. Based on this classification, the evolution of the lesion can be predicted and adequate treatment applied. The binary ISSVA classification is widely accepted and should be applied for all vascular lesions. Copyright © 2015 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  4. Meeting the criteria of a nursing diagnosis classification: Evaluation of ICNP, ICF, NANDA and ZEFP.

    PubMed

    Müller-Staub, Maria; Lavin, Mary Ann; Needham, Ian; van Achterberg, Theo

    2007-07-01

    Few studies have described nursing diagnosis classification criteria and how classifications meet these criteria. The purpose was to identify criteria for nursing diagnosis classifications and to assess how these criteria are met by different classifications. First, a literature review was conducted (N=50) to identify criteria for nursing diagnosis classifications and to evaluate how these criteria are met by the International Classification of Nursing Practice (ICNP), the International Classification of Functioning, Disability and Health (ICF), the International Nursing Diagnoses Classification (NANDA), and the Nursing Diagnostic System of the Centre for Nursing Development and Research (ZEFP). Using general and specific criteria based on the literature review, the principal investigator evaluated each classification by applying a matrix. Second, a convenience sample of 20 nursing experts from different Swiss care institutions answered standardized interview forms querying the current national and international state and use of classifications. The first general criterion is that a diagnosis classification should describe the knowledge base and subject matter for which the nursing profession is responsible. The ICNP and NANDA meet this goal. The second general criterion is that each class fits within a central concept. The ICF and NANDA are the only two classifications built on conceptually driven classes. The third general classification criterion is that each diagnosis possesses a description, diagnostic criteria, and related etiologies. Although the ICF and ICNP describe diagnostic terms, only NANDA fulfils this criterion. The analysis indicated that NANDA fulfilled most of the specific classification criteria in the matrix. The nursing experts considered NANDA to be the best-researched and most widely implemented classification in Switzerland and internationally. The international literature and the opinion of Swiss expert nurses indicate that, from the perspective of classifying comprehensive nursing diagnoses, NANDA should be recommended for nursing practice and electronic nursing documentation. Study limitations and future research needs are discussed.

  5. On the difficulty to delimit disease risk hot spots

    NASA Astrophysics Data System (ADS)

    Charras-Garrido, M.; Azizi, L.; Forbes, F.; Doyle, S.; Peyrard, N.; Abrial, D.

    2013-06-01

    Representing the health state of a region is a helpful tool to highlight spatial heterogeneity and localize high-risk areas. For ease of interpretation and to determine where to apply control procedures, we need to clearly identify and delineate homogeneous regions in terms of disease risk, and in particular disease risk hot spots. However, even if practical purposes require the delineation of different risk classes, such a classification does not correspond to a reality and is thus difficult to estimate. Working with grouped data, a first natural choice is to apply disease mapping models. We apply a usual disease mapping model, which produces continuous estimates of the risks and requires a post-processing classification step to obtain clearly delimited risk zones. We also apply a risk partition model that builds a classification of the risk levels in a one-step procedure. Working with point data, we focus on the scan statistic clustering method. We illustrate our article with a real example concerning bovine spongiform encephalopathy (BSE), an animal disease whose at-risk zones are well known to epidemiologists. We show that in this difficult case of a rare disease and a very heterogeneous population, the different methods provide risk zones that are globally coherent. However, reflecting the dichotomy between practical need and reality, the exact delimitations of the risk zones, as well as the corresponding estimated risks, are quite different.
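
    To make the post-processing classification step concrete, the sketch below cuts a continuous (synthetic) risk surface into a fixed number of classes with one-dimensional k-means and reports the highest-risk cluster as the candidate hot spot. The surface and the choice of three classes are arbitrary assumptions, not the paper's model.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(3)
        gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
        # Smooth synthetic risk surface with one high-risk bump plus noise.
        risk = np.exp(-((gx - 0.7) ** 2 + (gy - 0.3) ** 2) / 0.02) + 0.05 * rng.random(gx.shape)

        k = 3  # arbitrary number of risk classes
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(risk.reshape(-1, 1))
        means = [risk.ravel()[labels == i].mean() for i in range(k)]
        hot = int(np.argmax(means))  # cluster with the highest mean risk
        print("hot-spot cells:", int((labels == hot).sum()), "of", labels.size)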

  6. The novel WHO 2010 classification for gastrointestinal neuroendocrine tumours correlates well with the metastatic potential of rectal neuroendocrine tumours.

    PubMed

    Jernman, Juha; Välimäki, Matti J; Louhimo, Johanna; Haglund, Caj; Arola, Johanna

    2012-01-01

    Approximately 10-15% of gastroenteropancreatic neuroendocrine tumours (NETs, carcinoids) occur in the rectum, and some of these are potentially able to metastasize. The new WHO 2010 classification of NETs applies to all gastroenteropancreatic NETs, but no reports have studied its correlation with the prognosis of rectal NETs. We retrospectively classified 73 rectal NETs according to the novel WHO 2010 and the previous WHO 2000 classifications. The aim was to assess the validity of the classifications in distinguishing indolent rectal NETs from metastasising tumours. Using the WHO 2010 criteria, we identified 61 G1 tumours, none of which had metastasised during follow-up. Of 11 G2 tumours, 9 had shown distant metastases. The only G3 neuroendocrine carcinoma that occurred had been disseminated at initial presentation. Our results show that rectal NETs classified as G1 according to the WHO 2010 classification have an indolent clinical course, whereas G2 NETs often metastasise. The WHO 2010 classification of NETs predicts the metastatic potential of rectal NETs better than the WHO 2000 classification. Copyright © 2011 S. Karger AG, Basel.

  7. Supernova Photometric Lightcurve Classification

    NASA Astrophysics Data System (ADS)

    Zaidi, Tayeb; Narayan, Gautham

    2016-01-01

    This is a preliminary report on photometric supernova classification. We first explore the properties of supernova light curves, and attempt to restructure the unevenly sampled and sparse data from assorted datasets to allow for processing and classification. The data were primarily drawn from the Dark Energy Survey (DES) simulated data, created for the Supernova Photometric Classification Challenge. This poster shows a method for producing a non-parametric representation of the light curve data, and applying a Random Forest classifier algorithm to distinguish between supernova types. We examine the impact of Principal Component Analysis to reduce the dimensionality of the dataset, for future classification work. The classification code will be used in a stage of the ANTARES pipeline, created for use on the Large Synoptic Survey Telescope alert data and other wide-field surveys. The final figure-of-merit for the DES data in the r band was 60% for binary classification (Type I vs II). Zaidi was supported by the NOAO/KPNO Research Experiences for Undergraduates (REU) Program, which is funded by the National Science Foundation Research Experiences for Undergraduates Program (AST-1262829).
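
    A minimal sketch of that pipeline, a fixed-length light-curve representation reduced with PCA and fed to a Random Forest, is given below; the "light curves" are random stand-ins, so the accuracy is meaningless beyond illustrating the flow.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(5)
        X = rng.normal(size=(200, 40))    # fluxes resampled onto a common time grid
        y = rng.integers(0, 2, size=200)  # binary labels (Type I vs. Type II)

        model = make_pipeline(PCA(n_components=10),
                              RandomForestClassifier(n_estimators=300, random_state=0))
        print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())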

  8. Chinese Sentence Classification Based on Convolutional Neural Network

    NASA Astrophysics Data System (ADS)

    Gu, Chengwei; Wu, Ming; Zhang, Chuang

    2017-10-01

    Sentence classification is one of the significant issues in Natural Language Processing (NLP). Feature extraction is often regarded as the key point for natural language processing. Traditional approaches based on machine learning, such as the Naive Bayesian model, cannot take high-level features into consideration. A neural network for sentence classification can make use of contextual information to achieve better results in sentence classification tasks. In this paper, we focus on classifying Chinese sentences, and we propose a novel Convolutional Neural Network (CNN) architecture for Chinese sentence classification. In particular, while most previous methods use a softmax classifier for prediction, we embed a linear support vector machine in place of softmax in the deep neural network model, minimizing a margin-based loss to get a better result. We also use tanh as the activation function instead of ReLU. The CNN model improves the results of Chinese sentence classification tasks. Experimental results on a Chinese news title database validate the effectiveness of our model.
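
    The sketch below shows one way to realize the architecture the abstract describes: a convolutional sentence encoder with tanh nonlinearities whose linear output layer is trained with a multi-class hinge (SVM-style) loss rather than softmax cross-entropy. The vocabulary size, filter widths, and the data batch are hypothetical stand-ins, not the authors' configuration.

        import torch
        import torch.nn as nn

        class TextCNN(nn.Module):
            def __init__(self, vocab=5000, emb=128, n_filters=100, classes=4):
                super().__init__()
                self.embed = nn.Embedding(vocab, emb)
                self.convs = nn.ModuleList(nn.Conv1d(emb, n_filters, k) for k in (3, 4, 5))
                self.fc = nn.Linear(3 * n_filters, classes)  # linear scores, no softmax

            def forward(self, x):                  # x: (batch, seq_len) token ids
                e = self.embed(x).transpose(1, 2)  # (batch, emb, seq_len)
                h = [torch.tanh(c(e)).max(dim=2).values for c in self.convs]
                return self.fc(torch.cat(h, dim=1))

        model = TextCNN()
        criterion = nn.MultiMarginLoss()  # multi-class hinge, i.e. the SVM objective
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)

        tokens = torch.randint(0, 5000, (8, 20))  # fake batch of 8 "sentences"
        labels = torch.randint(0, 4, (8,))
        loss = criterion(model(tokens), labels)
        loss.backward()
        opt.step()
        print("hinge loss:", float(loss))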

  9. a Novel Framework for Remote Sensing Image Scene Classification

    NASA Astrophysics Data System (ADS)

    Jiang, S.; Zhao, H.; Wu, W.; Tan, Q.

    2018-04-01

    High resolution remote sensing (HRRS) image scene classification aims to label an image with a specific semantic category. HRRS images contain more details of the ground objects and their spatial distribution patterns than low spatial resolution images. Scene classification can bridge the gap between low-level features and high-level semantics, and can be applied in urban planning, target detection and other fields. This paper proposes a novel framework for HRRS image scene classification. The framework combines a convolutional neural network (CNN) and XGBoost, utilizing the CNN as a feature extractor and XGBoost as the classifier. The framework is evaluated on two different HRRS image datasets, the UC-Merced dataset and the NWPU-RESISC45 dataset, achieving satisfactory accuracies of 95.57% and 83.35%, respectively. The experimental results show that our framework is effective for remote sensing image classification. Furthermore, we believe this framework will be practical for further HRRS scene classification, since it requires less training time.
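
    A plumbing-level sketch of the CNN-plus-XGBoost framework follows. A tiny untrained network stands in for the real (pretrained) feature extractor, and the images and labels are random, so it only demonstrates how features flow from the CNN into the boosted-tree classifier.

        import numpy as np
        import torch
        import torch.nn as nn
        from xgboost import XGBClassifier

        extractor = nn.Sequential(  # stand-in CNN backbone
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())  # -> 16*4*4 = 256-d features

        imgs = torch.rand(120, 3, 64, 64)           # fake image patches
        labels = np.random.randint(0, 5, size=120)  # 5 hypothetical scene classes

        with torch.no_grad():  # extraction only, no CNN training here
            feats = extractor(imgs).numpy()

        clf = XGBClassifier(n_estimators=200, max_depth=4)
        clf.fit(feats, labels)
        print("training accuracy:", clf.score(feats, labels))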

  10. Land Covers Classification Based on Random Forest Method Using Features from Full-Waveform LIDAR Data

    NASA Astrophysics Data System (ADS)

    Ma, L.; Zhou, M.; Li, C.

    2017-09-01

    In this study, a Random Forest (RF) based land cover classification method is presented to predict the types of land covers in the Miyun area. The returned full waveforms, acquired by a LiteMapper 5600 airborne LiDAR system, were processed, including waveform filtering, waveform decomposition and feature extraction. The commonly used features of distance, intensity, Full Width at Half Maximum (FWHM), skewness and kurtosis were extracted. These waveform features were used as attributes of the training data for generating the RF prediction model. The RF prediction model was applied to predict the types of land covers in the Miyun area as trees, buildings, farmland and ground. The classification results for these four land cover types were evaluated against ground truth information acquired from CCD image data of the same region. The RF classification results were compared with those of an SVM method and were better: the RF classification accuracy reached 89.73% and the classification Kappa was 0.8631.
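
    The sketch below illustrates the feature side of such a pipeline: for each (synthetic) echo it derives a peak-time distance proxy, peak intensity, a moment-based FWHM (2·√(2 ln 2)·σ for a Gaussian pulse), skewness and kurtosis, then trains a Random Forest. All waveforms and labels are artificial stand-ins, not the LiteMapper data.

        import numpy as np
        from scipy.stats import skew, kurtosis
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(11)
        t = np.linspace(0, 1, 128)

        def waveform_features(w):
            """Peak time (distance proxy), intensity, FWHM, skewness, kurtosis."""
            peak = np.argmax(w)
            sigma = np.sqrt(np.sum(w * (t - t[peak]) ** 2) / np.sum(w))  # moment estimate
            fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma              # Gaussian FWHM
            return [t[peak], w[peak], fwhm, skew(w), kurtosis(w)]

        y = rng.integers(0, 4, size=300)  # trees / buildings / farmland / ground
        X = [waveform_features(np.exp(-(t - 0.3 - 0.1 * c) ** 2 / (0.002 * (c + 1)))
                               + 0.01 * rng.random(t.shape)) for c in y]

        rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
        print("training accuracy:", rf.score(X, y))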

  11. Abstracting of suspected illegal land use in urban areas using case-based classification of remote sensing images

    NASA Astrophysics Data System (ADS)

    Chen, Fulong; Wang, Chao; Yang, Chengyun; Zhang, Hong; Wu, Fan; Lin, Wenjuan; Zhang, Bo

    2008-11-01

    This paper proposes a method that uses case-based classification of remote sensing images and applies it to extract information on suspected illegal land use in urban areas. Because the cases used for imagery classification are discrete, the proposed method handles the oscillation of spectrum or backscatter within the same land use category; it not only overcomes the deficiency of maximum likelihood classification (that the prior probability of land use cannot be obtained) but also inherits the advantages of knowledge-based classification systems, such as artificial intelligence and automatic operation. Consequently, the proposed method classifies better. An object-oriented technique was then used for shadow removal in highly dense city zones. With multi-temporal SPOT 5 images at 2.5×2.5 m resolution, the method was able to extract suspected illegal land use information in urban areas using a post-classification comparison technique.

  12. How a national vegetation classification can help ecological research and management

    USGS Publications Warehouse

    Franklin, Scott; Comer, Patrick; Evens, Julie; Ezcurra, Exequiel; Faber-Langendoen, Don; Franklin, Janet; Jennings, Michael; Josse, Carmen; Lea, Chris; Loucks, Orie; Muldavin, Esteban; Peet, Robert K.; Ponomarenko, Serguei; Roberts, David G.; Solomeshch, Ayzik; Keeler-Wolf, Todd; Van Kley, James; Weakley, Alan; McKerrow, Alexa; Burke, Marianne; Spurrier, Carol

    2015-01-01

    The elegance of classification lies in its ability to compile and systematize various terminological conventions and masses of information that are unattainable during typical research projects. Imagine a discipline without standards for collection, analysis, and interpretation; unfortunately, that describes much of 20th-century vegetation ecology. With differing methods, how do we assess community dynamics over decades, much less centuries? How do we compare plant communities from different areas? The need for a widely applied vegetation classification has long been clear. Now imagine a multi-decade effort to assimilate hundreds of disparate vegetation classifications into one common classification for the US. In this letter, we introduce the US National Vegetation Classification (USNVC; www.usnvc.org) as a powerful tool for research and conservation, analogous to the argument made by Schimel and Chadwick (2013) for soils. The USNVC provides a national framework to classify and describe vegetation; here we describe the USNVC and offer brief examples of its efficacy.

  13. Neural Network Classification of Receiver Functions as a Step Towards Automatic Crustal Parameter Determination

    NASA Astrophysics Data System (ADS)

    Jemberie, A.; Dugda, M. T.; Reusch, D.; Nyblade, A.

    2006-12-01

    Neural networks are decision-making mathematical/engineering tools which, if trained properly, can automatically (and objectively) perform tasks that normally require particular expertise and/or tedious repetition. Here we explore two techniques from the field of artificial neural networks (ANNs) that seek to reduce the time requirements and increase the objectivity of quality control (QC) and event identification (EI) on seismic datasets. We apply multilayer feed-forward (FF) artificial neural networks and self-organizing maps (SOMs) in combination with Hk stacking of receiver functions in an attempt to test the usefulness of automatic classification of receiver functions for crustal parameter determination. Feed-forward ANNs (FFNNs) are a supervised classification tool, while self-organizing maps (SOMs) are able to provide unsupervised classification of large, complex geophysical data sets into a fixed number of distinct generalized patterns or modes. Hk stacking is a methodology used to stack receiver functions based on the relative arrival times of the P-to-S converted phase and the next two reverberations, to determine crustal thickness (H) and the Vp-to-Vs ratio (k). We use receiver functions from teleseismic events recorded by the 2000-2002 Ethiopia Broadband Seismic Experiment. Preliminary results of applying an FFNN and Hk stacking of receiver functions for automatic receiver function classification, as a step towards automatic crustal parameter determination, look encouraging. After training, the FFNN could separate the best receiver functions from bad ones with a success rate of about 75 to 95%. Applying Hk stacking to the receiver functions classified by the FFNN as the best set, we obtained a crustal thickness and Vp/Vs ratio of 31±4 km and 1.75±0.05, respectively, for the crust beneath station ARBA in the Main Ethiopian Rift. For comparison, we applied Hk stacking to the receiver functions that we ourselves classified as the best set and found a crustal thickness and Vp/Vs ratio of 31±2 km and 1.75±0.02, respectively.
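
    For readers unfamiliar with Hk stacking, the sketch below grid-searches crustal thickness H and Vp/Vs ratio k by stacking receiver-function amplitudes at the predicted Ps, PpPs and PpSs+PsPs arrival times. The receiver function is synthetic (spikes planted at the arrivals for H = 31 km, k = 1.75), and the Vp value and phase weights are assumptions, not the study's parameters.

        import numpy as np

        def hk_stack(rfs, ps, dt, H_grid, k_grid, vp=6.5, w=(0.6, 0.3, 0.1)):
            """rfs: list of receiver functions; ps: ray parameters (s/km); dt: sampling (s)."""
            S = np.zeros((len(H_grid), len(k_grid)))
            for i, H in enumerate(H_grid):
                for j, k in enumerate(k_grid):
                    for rf, p in zip(rfs, ps):
                        eta_p = np.sqrt(vp ** -2 - p ** 2)
                        eta_s = np.sqrt((vp / k) ** -2 - p ** 2)  # Vs = Vp / k
                        times = (H * (eta_s - eta_p),             # Ps
                                 H * (eta_s + eta_p),             # PpPs
                                 2 * H * eta_s)                   # PpSs + PsPs
                        a = [rf[min(int(round(tt / dt)), rf.size - 1)] for tt in times]
                        S[i, j] += w[0] * a[0] + w[1] * a[1] - w[2] * a[2]
            return S

        dt, nt, p, H0, k0, vp = 0.1, 400, 0.06, 31.0, 1.75, 6.5
        eta_p = np.sqrt(vp ** -2 - p ** 2)
        eta_s = np.sqrt((vp / k0) ** -2 - p ** 2)
        rf = np.zeros(nt)
        for tt, amp in ((H0 * (eta_s - eta_p), 1.0),  # Ps
                        (H0 * (eta_s + eta_p), 0.5),  # PpPs
                        (2 * H0 * eta_s, -0.3)):      # PpSs+PsPs (negative polarity)
            rf[int(round(tt / dt))] = amp

        H_grid, k_grid = np.arange(25, 40, 0.5), np.arange(1.6, 1.9, 0.01)
        S = hk_stack([rf], [p], dt, H_grid, k_grid)
        i, j = np.unravel_index(S.argmax(), S.shape)
        print("best H = %.1f km, k = %.2f" % (H_grid[i], k_grid[j]))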

  14. Social Sciences in Forestry, A Current Selected Bibliography, No. 57. Special Appendix: Theses and Dissertations in Progress.

    ERIC Educational Resources Information Center

    Schwab, Judith L., Ed.

    1982-01-01

    Documents which address the interface between forestry and the social sciences comprise this annotated bibliography. A subject-matter classification scheme is used to group publications by subheadings under five major headings: (1) social science applied to forestry at large; (2) applied to forestry's productive agents; (3) applied to forest…

  15. Instrumentation for Scientific Computing in Neural Networks, Information Science, Artificial Intelligence, and Applied Mathematics.

    DTIC Science & Technology

    1987-10-01

    Instrumentation grant to purchase equipment for the support of research in neural networks, information science, artificial intelligence, and applied mathematics. Contract AFOSR 86-0282. Principal Investigator: Stephen

  16. Molecular classification of pesticides including persistent organic pollutants, phenylurea and sulphonylurea herbicides.

    PubMed

    Torrens, Francisco; Castellano, Gloria

    2014-06-05

    Pesticide residues in wine were analyzed by liquid chromatography-tandem mass spectrometry. Retentions are modelled by structure-property relationships. Bioplastic evolution is an evolutionary perspective conjugating the effect of acquired characters with the principles of evolutionary indeterminacy, morphological determination and natural selection; its application to the design co-ordination index barely improves correlations. Fractal dimensions and the partition coefficient differentiate pesticides. Classification algorithms are based on information entropy and its production. Pesticides allow a structural classification by nonplanarity and the number of O, S, N and Cl atoms and cycles; different behaviours depend on the number of cycles. The novelty of the approach is that the structural parameters are related to retentions. When applying the procedures to moderate-sized sets, excessive results appear, compatible with the data suffering a combinatorial explosion. However, the equipartition conjecture selects the criterion resulting from classification between hierarchical trees. Information entropy permits classifying compounds in agreement with principal component analyses. The periodic classification shows that pesticides in the same group present similar properties; those also in the same period show maximum resemblance. The advantage of the classification is the ability to predict the retentions of molecules not included in the categorization. The classification extends to phenyl/sulphonylureas, and the application will be to predict their retentions.

  17. Link prediction boosted psychiatry disorder classification for functional connectivity network

    NASA Astrophysics Data System (ADS)

    Li, Weiwei; Mei, Xue; Wang, Hao; Zhou, Yu; Huang, Jiashuang

    2017-02-01

    Functional connectivity networks (FCNs) are an effective tool in psychiatric disorder classification, representing the cross-correlation of regional blood oxygenation level dependent signals. However, FCNs are often incomplete, suffering from missing and spurious edges. To accurately classify psychiatric disorders and healthy controls with incomplete FCNs, we first `repair' the FCN with link prediction, and then extract the clustering coefficients as features to build a weak classifier for every FCN. Finally, we apply a boosting algorithm to combine these weak classifiers to improve classification accuracy. Our method was tested on three psychiatric disorder datasets, covering Alzheimer's Disease, Schizophrenia and Attention Deficit Hyperactivity Disorder. The experimental results show that our method not only significantly improves classification accuracy, but also efficiently reconstructs the incomplete FCN.
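
    The three steps named in the abstract can be sketched compactly: "repair" each connectivity graph with a link-prediction score, use the node clustering coefficients as the feature vector, and train a boosting classifier. In the sketch below the graphs and labels are random stand-ins, a Jaccard-coefficient predictor replaces whatever link predictor the authors used, and a standard AdaBoost over the pooled features simplifies the paper's per-network weak classifiers.

        import networkx as nx
        import numpy as np
        from sklearn.ensemble import AdaBoostClassifier

        rng = np.random.default_rng(13)
        n_nodes = 30

        def repaired_features(g, n_add=5):
            # Add the top-scoring non-edges according to Jaccard link prediction.
            scored = sorted(nx.jaccard_coefficient(g), key=lambda t: -t[2])[:n_add]
            g = g.copy()
            g.add_edges_from((u, v) for u, v, _ in scored)
            cc = nx.clustering(g)  # per-node clustering coefficients
            return [cc[i] for i in range(n_nodes)]

        y = rng.integers(0, 2, size=40)  # patient vs. control labels
        X = [repaired_features(nx.gnp_random_graph(n_nodes, 0.15 + 0.1 * lab,
                                                   seed=int(rng.integers(10 ** 6))))
             for lab in y]

        clf = AdaBoostClassifier(n_estimators=100).fit(X, y)
        print("training accuracy:", clf.score(X, y))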

  18. Automatic detection of sleep macrostructure based on a sensorized T-shirt.

    PubMed

    Bianchi, Anna M; Mendez, Martin O

    2010-01-01

    In the present work we apply a fully automatic procedure to the analysis of signals coming from a sensorized T-shirt, worn during the night, for sleep evaluation. The quality and reliability of the signals recorded through the T-shirt were previously tested, while the algorithms employed for feature extraction and sleep classification were previously developed on standard ECG recordings, and the resulting classification was compared to the standard clinical practice based on polysomnography (PSG). In the present work we combined T-shirt recordings and automatic classification and obtained reliable sleep profiles, i.e. classification of sleep into WAKE, REM (rapid eye movement) and NREM stages, based on heart rate variability (HRV), respiration and movement signals.
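
    As a rough illustration of HRV-based staging, the sketch below computes generic time-domain HRV features (mean RR, SDNN, RMSSD) per epoch and trains a classifier for the WAKE/REM/NREM labels. The RR series, the labels, and the feature set are synthetic assumptions, not the authors' algorithm.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(21)

        def hrv_features(rr):
            """Mean RR, SDNN and RMSSD for one epoch of RR intervals (seconds)."""
            diff = np.diff(rr)
            return [rr.mean(), rr.std(), np.sqrt(np.mean(diff ** 2))]

        stages = rng.integers(0, 3, size=200)  # 0 = WAKE, 1 = REM, 2 = NREM
        X = [hrv_features(rng.normal(0.8 + 0.05 * s, 0.02 + 0.01 * s, size=30))
             for s in stages]

        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, stages)
        print("training accuracy:", clf.score(X, stages))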

  19. Towards a robust framework for catchment classification

    NASA Astrophysics Data System (ADS)

    Deshmukh, A.; Samal, A.; Singh, R.

    2017-12-01

    Classification of catchments based on various measures of similarity has emerged as an important technique for understanding regional-scale hydrologic behavior. Classification of catchment characteristics and/or streamflow response has been used to reveal which characteristics are more likely to explain the observed variability of hydrologic response. However, numerous algorithms for supervised or unsupervised classification are available, making it hard to identify the algorithm most suitable for the dataset at hand. Consequently, existing catchment classification studies vary significantly in the classification algorithms employed, with no previous attempt at understanding the degree of uncertainty in classification due to this algorithmic choice. This hinders the generalizability of interpretations related to hydrologic behavior. Our goal is to develop a protocol that can be followed while classifying hydrologic datasets. We focus on a framework for unsupervised classification and provide a step-by-step classification procedure. The steps include testing the clusterability of the original dataset prior to classification, feature selection, validation of the clustered data, and quantification of the similarity of two clusterings. We test several commonly available methods within this framework to understand the level of similarity of classification results across algorithms. We apply the proposed framework to recently developed datasets for India to analyze the extent to which catchment properties can explain observed catchment response. Our testing dataset includes watershed characteristics for over 200 watersheds, comprising both natural (physio-climatic) and socio-economic characteristics. This framework allows us to understand the controls on observed hydrologic variability across India.
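
    Two of the framework's steps, validating a clustering and quantifying the similarity of two clusterings, can be sketched directly with scikit-learn, as below; the catchment attribute table is a random stand-in for the Indian datasets.

        import numpy as np
        from sklearn.cluster import KMeans, AgglomerativeClustering
        from sklearn.metrics import silhouette_score, adjusted_rand_score
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(17)
        X = StandardScaler().fit_transform(rng.normal(size=(200, 8)))  # 200 catchments x 8 attributes

        labels_km = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
        labels_ag = AgglomerativeClustering(n_clusters=4).fit_predict(X)

        # Internal validation of each partition.
        print("silhouette (k-means):", silhouette_score(X, labels_km))
        print("silhouette (agglomerative):", silhouette_score(X, labels_ag))
        # Agreement between the two algorithms; 1 means identical partitions.
        print("adjusted Rand index:", adjusted_rand_score(labels_km, labels_ag))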

  20. A comparison of intensity modulated x-ray therapy to intensity modulated proton therapy for the delivery of non-uniform dose distributions

    NASA Astrophysics Data System (ADS)

    Flynn, Ryan

    2007-12-01

    The distribution of biological characteristics such as clonogen density, proliferation, and hypoxia throughout tumors is generally non-uniform; it therefore follows that the optimal dose prescriptions should also be non-uniform and tumor-specific. Advances in intensity modulated x-ray therapy (IMXT) technology have made the delivery of custom-made non-uniform dose distributions possible in practice. Intensity modulated proton therapy (IMPT) has the potential to deliver non-uniform dose distributions as well, while significantly reducing normal tissue and organ-at-risk dose relative to IMXT. In this work, a specialized treatment planning system was developed for the purpose of optimizing and comparing biologically based IMXT and IMPT plans. The IMXT systems of step-and-shoot (IMXT-SAS) and helical tomotherapy (IMXT-HT) and the IMPT systems of intensity modulated spot scanning (IMPT-SS) and distal gradient tracking (IMPT-DGT) were simulated. A thorough phantom study was conducted in which several subvolumes, contained within a base tumor region, were boosted or avoided with IMXT and IMPT. Different boosting situations were simulated by varying the size and proximity of the subvolumes, the doses prescribed to them, and the size of the phantom. IMXT and IMPT were also compared for a whole brain radiation therapy (WBRT) case, in which a brain metastasis was simultaneously boosted and the hippocampus was avoided. Finally, IMXT and IMPT dose distributions were compared for the case of a non-uniform dose prescription in a head and neck cancer patient, based on PET imaging with the Cu(II)-diacetyl-bis(N4-methylthiosemicarbazone) (Cu-ATSM) hypoxia marker. The non-uniform dose distributions within the tumor region were comparable for IMXT and IMPT. IMPT, however, was capable of delivering the same non-uniform dose distributions within a tumor using a 180° arc as for a full 360° rotation, which reduced the normal tissue integral dose by a factor of up to three relative to IMXT and completely spared organs at risk distal to the tumor region.
